Intel turns to light for fast data transfers

“Intel believes the days of using copper wires for data transfers, both between computers and inside of them, are numbered because optical communications are on the horizon,” Agam Shah reports for IDG News Service.

“The chipmaker has started shipping silicon photonics modules, which use light and lasers to speed up data transfers between computers,” Shah reports. “The silicon photonics components will initially allow for optical communications between servers and data centers, stretching over long distances, said Diane Bryant, executive vice president and general manager of Intel’s Data Center Group.”

“Over time, Intel will put optical communications at the chip level, Bryant said during a keynote at Intel Developer Forum on Wednesday,” Shah reports. “That means light will drive communications inside computers… The first silicon photonics modules will allow for data transfers at up to 100Gbps.”

Read more in the full article here.

MacDailyNews Take: The faster the better!

SEE ALSO:
Review: Corning’s 33-foot Optical Thunderbolt cable lets you put Thunderbolt devices (or Mac) far away from your desk – March 8, 2014

13 Comments

      1. *sigh*

        I suppose most people don’t really understand what LightPeak was. The original intention of LightPeak was to hit 10 Gbps over fiber optic cables, but they were able to achieve those speeds over copper, which was (and is) much cheaper. And they’ve been able to continually increase those speeds without the need for fiber optics.

        Having said that, the link above clearly shows that Thunderbolt does support fiber optic cables for the sole purpose of transmitting over distances not practical for copper wiring. This works because the optical interface is contained in the cable heads.

        1. Actually, the initial announcement for LightPeak was for 100 Gbps, not 10 Gbps. Intel’s goal has *always* been 100 Gbps or greater over fiber optics for external interconnects.

          They backed off to 10 Gbps per channel over copper because the electro-optic converters of the day were too expensive for mass marketing and 10 Gbps per channel was still a lot faster than USB was at that time. Even 20 Gbps per channel (as in TB3) was too far a stretch back then.

          By the by, the 20 Gbps “jump” for TB2 was nothing more than bringing into the TB chips what was already being done in software with dual-channel TB1. You could have a 20 Gbps TB1 link through channel bonding of the two 10 Gbps TB1 channels. (However, during the TB1 era a lot of systems, including some Macs, shipped with TB interface chips that supported only a single channel.)

          The major problem with 100 Gbps links (if you exclude the cost of the electro-optic conversion hardware) is that the internal channels on a PC or server are not that fast. The fastest single-lane PCIe link of today runs at approximately 7.88 Gbps. So today it takes six PCIe lanes to fill a TB3 link (though it’s most commonly done with eight). That’s part of why having TB3 natively baked into some variants of Kaby Lake is such a big deal: you don’t have to go through the CPU-to-PCIe-to-TB3 conversion and waste PCIe lanes.
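
          For the curious, here’s the arithmetic behind that lane count as a quick Python sketch (my own numbers, not from the article; it assumes PCIe 3.0’s 8 GT/s per lane with 128b/130b encoding and TB3’s 40 Gbps aggregate link):

          ```python
          import math

          # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
          pcie3_lane_gbps = 8 * 128 / 130          # ~7.88 Gbps usable per lane

          # Thunderbolt 3: two 20 Gbps channels, 40 Gbps aggregate
          tb3_gbps = 40

          lanes = math.ceil(tb3_gbps / pcie3_lane_gbps)
          print(f"PCIe 3.0 usable rate: {pcie3_lane_gbps:.2f} Gbps/lane")
          print(f"Lanes needed to fill a TB3 link: {lanes}")   # -> 6
          ```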

          Sure, PCIe 4.0, due out by late next year, will be better, as it will double the then-current top per-lane PCIe data rate [to about 15.8 Gbps], but it will still be, per lane, slower than TB3. The same goes for the far-future (expected no later than 2021) PCIe 5.0 [if they call it that], as it is expected to be only about 32 Gbps, less than a third of the anticipated per-channel speed of a 100 Gbps LightPeak implementation.
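
          Extending the same arithmetic (a sketch assuming each PCIe generation simply doubles the per-lane transfer rate and keeps 128b/130b encoding) shows how far even PCIe 5.0 would sit from a single 100 Gbps LightPeak channel:

          ```python
          pcie3 = 8 * 128 / 130    # ~7.88 Gbps usable per lane
          gens = {"PCIe 3.0": pcie3, "PCIe 4.0": pcie3 * 2, "PCIe 5.0": pcie3 * 4}
          for name, gbps in gens.items():
              # fraction of one 100 Gbps LightPeak channel
              print(f"{name}: {gbps:5.2f} Gbps/lane ({gbps / 100:.0%} of 100 Gbps)")
          ```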

          Maybe by then (2020/2021) Intel’s 3D XPoint will be fully implemented and available at the consumer level, and the 100 Gbps “LightPeak” connections won’t be hobbled by slow internal buses. (Yes, while 3D XPoint is marketed primarily as a memory construct, its interconnect capabilities are impressive. Using that interconnect implementation to replace, or supplement, PCIe would radically move hardware designs forward.)

    1. The main reason the highly HYPED Light Peak never actually happened and instead became Thunderbolt was the need for power delivery. You can’t do that (not much, anyway) using light, i.e. an optical cable. Thus copper became the compromise…

      Michael points out below that you can get Thunderbolt optical cables, which effectively turns it into Light Peak, mostly.

      1. Derek, that was a corollary reason, not the main reason. Intel (and Apple back then) could have had a dual cable: fiber & copper. The main reason was they could not mass produce the connections inexpensively enough to make them interesting to the common market. And, there was the issue of the bottleneck being elsewhere in the system as I described above. Who wants to give up half their PCIe lanes in order to fill a LightPeak link?

        Further, TB1 or TB2 (and even TB3) over fiber optic cable (used mainly for long-distance TB runs) does not come close to the promise of LightPeak. The 10 Gbps per channel of TB1 and TB2, and even the 20 Gbps per channel of TB3, falls far short of LightPeak’s originally announced (and demonstrated!) goal of 100 Gbps per channel.

        1. The dual-cable idea died because of the desired length: over long runs, copper dissipates too much power to make power conduction feasible. The result is relatively short Thunderbolt cables that transfer both power and data, whereas optical cables used for Thunderbolt can be far longer. Doing a quick research grab, I read the following:

          Copper Thunderbolt cables can be up to 3 m (10 ft). Optical Thunderbolt cables can be 100+ m.

          Thank you for pointing out the speeds of Thunderbolt 1, 2, and 3, as well as what was intended for LightPeak. I didn’t bother to look up the numbers for my statement above. My statement that TB optical cables ‘mostly’ qualify as LightPeak is clearly WRONG. TB remains a considerable compromise.

          Meanwhile, thank you for reminding me that the cost of LightPeak spec cables was prohibitive.

          Then again, I always wondered how much of LightPeak’s stated spec was imaginary/intended versus realistic. Will a LightPeak spec optical system eventually go public? I don’t know.

  1. Using light for data transfer inside a CPU? I say ‘shenanigans!’ This sounds like someone telling tall tales. Such technology would be pointless with our current CPU technology, if only because of the hardware required to translate between light and electronic data. Maybe this could happen in a decade or two, after our current CPU tech is tossed in favor of something along the lines of quantum computing, at which point bothering with electrons (technically ‘electron holes’) would be over.

    Place your bets on which sci-fi tech will become real!

  2. In fact, light-based CPUs and communication chips already exist in many laboratories. Money is the main reason companies produce anything. If Intel or Apple came out with a light processor, subsequent speed improvements would come very slowly, because the next improvement would be to speed up the speed of light itself. That is why no one has come out with a light processor: it would collapse the market, and all the copper technology in copper-gold processors would become a huge waste.
