Technology bottlenecks currently holding our future to ransom
W.H. Auden once wrote in his famous poem, ‘As I Walked Out One Evening’: “All the clocks in the city began to whirr and chime: O let not Time deceive you, you cannot conquer Time.” It's a truth technology seems all too willing to ignore at this juncture. Several years have passed since the introduction of 1 GHz CPUs, Blu-ray discs and even electric cars, and yet technology seems stuck, unable to progress further and reach the next stage of performance. But that doesn't mean the next stage isn't close. Let's take a look at some of the biggest technology bottlenecks currently holding the future to ransom, and the advances coming our way to set us free (without the horrifying consequences that discovering a world conquered by machines would bring).
CPU Speed

When processors broke the 1 GHz barrier, it was a significant achievement in computing history. However, over the past few years, single-core CPU speeds have hovered around 2.5 GHz. Yes, multi-core systems seemed to be the breakthrough we needed, but they carry their own bottlenecks (see next). Even then, a single core still clocks in at around 1.5-2 GHz. So why haven't processor speeds increased over the years?

A prominent factor is transistor size: transistors are getting smaller, which allows more of them to be packed onto a single die (thus obeying Moore's Law), but they're not getting faster. Modern CPU transistors follow the metal-oxide-semiconductor field-effect transistor (MOSFET) scaling process, and they function as electronic signal switches. As they become smaller, transistors are supposed to switch faster, which should lead to increased performance. However, MOSFET scaling has its fair share of bottlenecks: higher subthreshold leakage (which can account for as much as half of a chip's total power consumption), difficulty in shrinking transistors beyond a certain point (Intel has pushed this to 22 nm with its Ivy Bridge chips), and interconnect capacitance (the metal-layer capacitance between various portions of the chip; as it grows, signals take longer to travel through the interconnect, reducing performance), among others.

Another issue is the “interconnect bottleneck”. Integrated circuits are downscaled in size to allow transistors to run at higher frequencies, at the cost of tighter packing of the already dense CPU. This increases parasitic capacitance, a type of capacitance created simply by the proximity of electrical components, which diverts them from their best possible performance.
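The scaling trend at the heart of this section can be put in back-of-the-envelope terms. The sketch below assumes the textbook formulation of Moore's Law (transistor counts doubling roughly every two years); the 1.4 billion starting figure is an illustrative value for an Ivy Bridge-class die, not an exact product spec.

```python
# Back-of-the-envelope Moore's Law projection: transistor counts
# roughly double every two years, even as clock speeds stall.

def moores_law(start_count, start_year, end_year, doubling_period=2):
    """Project a transistor count forward, doubling every `doubling_period` years."""
    doublings = (end_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Illustrative starting point: ~1.4 billion transistors in 2012.
projected = moores_law(1.4e9, 2012, 2022)
print(f"Projected 2022 count: {projected / 1e9:.1f} billion")
```

Note that the projection says nothing about clock speed, which is exactly the article's point: density keeps climbing while switching speed does not.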
In more serious scenarios, it can lead to “crosstalk”, wherein signals from one circuit bleed into another, generating interference in operation. There's also the issue of signal propagation delay, which further reduces speed.

Solutions range from altering transistor materials to using optical components in integrated circuits, both currently expensive methods. Intel has been working on 3D integrated circuits, wherein components are arranged both vertically and horizontally in a single circuit. This allows more transistors within a smaller volume, keeping pace with Moore's Law while providing increased performance. It would also mean shorter interconnects, a significant reduction in power consumption, lower production costs, and a brand new range of design possibilities.

Multi-Core Processors

Multi-core CPUs were born of a single desire: how do chip manufacturers keep following an increasingly strained Moore's Law without radically altering transistor design or size? The answer was to segregate the CPU into cores, distinct sections that follow the diktat of parallel processing. If a single 1.5 GHz core can execute 500 million instructions, then a quad-core processor clocked at the same speed could, in theory, execute 2,000 million. Theoretically, this meant having a single CPU with the power of four. Yet two single-core CPUs could still easily eclipse the performance of a quad-core. Why has the potential remained untapped?

Blame it on current software architectures, which must be extensively rewritten to take advantage of this new avenue of power. After all, if a program only knows how to utilize one core while the remaining cores sit idle, what difference does it make? Smartphones with quad-core CPUs and mobile OSes like Android face the same problem. Developers tend to program for the lowest common denominator, since it's not a given that everyone will have a multi-core CPU.
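The gap between a quad-core's theoretical four-fold throughput and what software actually delivers is usually quantified with Amdahl's Law (not named in the original, but the standard way to reason about it): only the parallelisable fraction of a program benefits from extra cores. A minimal sketch:

```python
# Amdahl's Law: maximum speedup on `cores` cores for a program
# whose `parallel_fraction` of work can run in parallel.

def amdahl_speedup(parallel_fraction, cores):
    """Ideal speedup; the serial remainder caps the gain."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A program that is only 50% parallel barely benefits from four cores:
print(amdahl_speedup(0.5, 4))   # 1.6x, nowhere near 4x
# Rewritten to be 95% parallel, it fares far better:
print(amdahl_speedup(0.95, 4))
```

This is why "extensively rewritten" matters: raising the parallel fraction of the software, not adding cores, is what unlocks the hardware.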
Nonetheless, as multi-core CPUs become the norm across consumer systems, operating systems will need to be retooled and optimized to fully exploit their power.

GPUs

Compared to CPUs, Graphics Processing Units (GPUs) are in a league of their own. Despite running into the same bottlenecks of transistor size and parallel processing, game developers and manufacturers have still developed various algorithms and drivers to unleash their overwhelming power. On top of this, GPUs have only now begun realizing the full potential of multi-processing after ATI's sullen sojourn with CrossFire. Now, regardless of manufacturer, two graphics chipsets can work in parallel to boost a system's performance. It's no stretch to say that it's often the other components of the system that bottleneck the GPU, simply by failing to keep up. However, with this much power, energy consumption and heat become major factors. So much so that various, sometimes potentially problematic, cooling solutions have been devised for the heat.

Energy consumption may be solved a bit differently in the years to come. One interesting development forsakes accuracy in computing for up to 15 times better power efficiency. Called “inexact” computing, it can best be described as “smart error management”: controlling the probability of errors while confining how often they occur. This is achieved through “pruning”, or removing the rarely used parts of the processor, and tests showed it could cut energy usage by up to 15 times in exchange for performance deviations of up to 8 percent in chips. Today's CPUs could theoretically make up for this sacrifice in accuracy; GPUs, with their excess of power, would manage even better and benefit greatly from the reduced energy consumption. The technology is expected to see wider exposure by 2013.

Front-Side Bus

The Front-Side Bus or FSB is a little-known but vital component of every system.
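The accuracy-for-energy trade behind inexact computing can be illustrated in software. The toy below stands in for hardware pruning by simply ignoring the low-order bits of an adder; the bit width and the resulting error are illustrative assumptions, not figures from the actual research.

```python
# Toy "inexact" adder: dropping low-order bits (a crude software
# stand-in for pruning adder circuitry) yields small, bounded errors
# in exchange for what would be simpler, cheaper hardware.

def pruned_add(a, b, dropped_bits=4):
    """Add two non-negative integers, ignoring the lowest `dropped_bits` bits."""
    mask = ~((1 << dropped_bits) - 1)
    return (a & mask) + (b & mask)

exact = 1000 + 999
approx = pruned_add(1000, 999)
error = abs(exact - approx) / exact
print(f"exact={exact}, approx={approx}, relative error={error:.2%}")
```

For error-tolerant workloads like graphics, a sub-percent deviation of this kind is invisible, which is why GPUs are a natural fit for the technique.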
As part of the motherboard, it is the central link between the CPU and the other devices in a computer. The overall bandwidth the FSB can provide depends on its clock frequency (the number of cycles it performs each second), the width of its data path, and the number of data transfers it performs per cycle. The problem arises with the number of clock cycles an FSB can perform: if it is bound to the frequency set by the motherboard, overall bandwidth is capped. Hence, no matter how much faster CPUs get, their speed will always be constrained if the FSB can't keep up, as the CPU sits idle for one or more clock cycles until memory returns its value. Memory is also accessed via the front-side bus, which further limits overall transfer speeds since bandwidth is shared for this purpose.

Both AMD and Intel have developed their own successors to the old FSB, HyperTransport and QuickPath Interconnect respectively, which replace the FSB's role as a central connecting point with point-to-point connections. AMD's HyperTransport, for instance, allows a theoretical aggregate throughput of 51.2 GB/s. To ensure none of the bandwidth of these point-to-point links is wasted, a memory controller on the CPU itself is used to access RAM. Currently, AMD and Intel employ their respective technologies in various components, and they are steadily replacing the front-side bus in motherboards.

Bandwidth Limitations

At last count, there were 1 billion devices connected to the internet around the world. By 2016, the number of internet-connected devices will outnumber the world's people. Blame it on smartphones and netbooks, as the world lives half its life online. And yet, despite the advances made over dial-up and narrowband, internet speeds still feel inadequate.
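The three factors the FSB paragraph names multiply together directly. The sketch below uses illustrative numbers typical of a late-era quad-pumped FSB (a 333 MHz base clock and a 64-bit data path), not a specific product's spec.

```python
# Peak bus bandwidth = clock frequency x data-path width x transfers per cycle,
# the relationship described for the front-side bus.

def bus_bandwidth(clock_hz, path_bytes, transfers_per_cycle):
    """Peak bus bandwidth in bytes per second."""
    return clock_hz * path_bytes * transfers_per_cycle

# Illustrative: 333 MHz base clock, 64-bit (8-byte) path, quad-pumped.
bw = bus_bandwidth(333e6, 8, 4)
print(f"Peak FSB bandwidth: {bw / 1e9:.1f} GB/s")
```

Since the motherboard fixes the clock, the only levers left are path width and transfers per cycle, which is why point-to-point links with on-die memory controllers won out.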
We've been using the same routers for the past five years to connect to the internet, which doesn't do justice to the amount of data flowing through the future-proof fiber-optic cables running underwater. And even though Wi-Fi removes this hurdle, with data flowing wirelessly, it still must reach a transmitter or tower that is hard-wired to the vast global network of cables. Giving everyone their own personal fiber-optic cable isn't a practical or economical way to overcome this bottleneck.

However, optical memory devices are being developed to replace the routers of old. NTT, a Japan-based telecommunications company, has been working on such devices for years; they switch between light-transmitting and light-blocking states (much like the signal switching of a MOSFET) to create digital signals. The technology is very light on energy consumption, using just 30 nanowatts of power and retaining data for one microsecond. This makes it more efficient than conventional electrical routers, and helps maintain the high data rates carried by fiber-optic cables.

Wireless technology is also constantly improving. The introduction of LTE, or 4G, shows how far mobile bandwidth has come: peak download rates of 300 Mbit/s, peak uplink rates of 75 Mbit/s, support for carrier bandwidths between 1.4 MHz and 20 MHz, and transfer latency of less than 5 milliseconds. Of course, this depends on the equipment in use. Not all devices currently support LTE, but as it becomes more popular it should begin phasing out 3G in the years to come, and pave the way for even more advanced iterations.
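To put the quoted LTE peak rates in concrete terms, the sketch below computes ideal transfer times as size divided by rate; the 700 MB file size and the 7.2 Mbit/s 3G comparison rate are illustrative assumptions, and real-world throughput sits well below these peaks.

```python
# Ideal transfer time for a file at a given link rate, ignoring
# protocol overhead and latency: time = bits / (bits per second).

def transfer_seconds(size_bytes, rate_bits_per_s):
    """Best-case transfer time in seconds."""
    return size_bytes * 8 / rate_bits_per_s

movie = 700 * 1024 * 1024  # an illustrative 700 MB file
print(f"At LTE's 300 Mbit/s peak: {transfer_seconds(movie, 300e6):.0f} s")
print(f"At an assumed 7.2 Mbit/s 3G peak: {transfer_seconds(movie, 7.2e6) / 60:.0f} min")
```

The order-of-magnitude gap, seconds versus minutes for the same file, is what makes LTE's eventual displacement of 3G feel inevitable.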