Silicon photonics stumbles at the last meter

Original author: Anthony FJ Levi
  • Translation

We have already brought optical fiber to the home, but bringing it to the processor remains a problem.




If it seems to you that we are on the verge of a technological revolution today, imagine what it was like in the mid-1980s. Silicon chips used transistors with feature sizes measured in microns. Fiber-optic systems moved trillions of bits around the world at tremendous speeds. It seemed that anything was possible: all that was needed was to combine digital silicon logic, optoelectronics, and fiber-optic data transmission.

Engineers imagined how all these breakthrough technologies would continue to evolve and converge to the point at which photonics merges with electronics and gradually replaces it. Photonics would move bits not only between countries but also inside data centers, and even inside computers. Optical fiber would carry data from chip to chip, or so they thought. Even the chips themselves would become photonic: many expected that incredibly fast logic chips would someday work with photons instead of electrons.

Naturally, this did not happen. Companies and governments invested hundreds of millions of dollars in developing new photonic components and systems that link racks of computer servers in data centers using fiber. And today such photonic devices really do connect the racks in many data centers. But the photons stop there. Inside the rack, the individual servers talk to one another over inexpensive copper wires and high-speed electronics. And on the boards themselves, of course, metal conductors run all the way to the processor.

Attempts to push the technology into the servers themselves, bringing optical fiber directly to the processor, have foundered on economics. To be sure, there is a market for optical Ethernet transceivers worth almost $4 billion a year, which should grow to $4.5 billion and 50 million components by 2020, according to LightCounting, a market research firm. But photonics has not crossed the last few meters separating the top of the data-center rack from the processor.

Nevertheless, the enormous potential of this technology has kept the dream alive, even though the technical problems remain significant. Now, at last, new ideas about data-center architecture offer feasible ways to set off a photonic revolution that could help contain the flood of big data.




Inside the photonic module

Every time you go on the web, watch digital TV, or do virtually anything else in today's digital world, you use data that has passed through optical transceiver modules. Their job is to convert signals between the optical and electrical domains. These devices sit at each end of the optical fibers that carry data inside the data centers of every major cloud service and social network. They plug into the switch at the top of a server rack and turn optical signals into electrical ones so the data can reach the servers in that rack. The transceivers also convert data from those servers into optical signals for transmission to other racks, or through a network of switches out to the Internet.

Each optical module contains three main components: a transmitter with one or more optical modulators, a receiver with one or more photodiodes, and CMOS chips that encode and decode the data. Ordinary silicon emits light very poorly, so the photons are generated by a laser separate from the chips (although it can sit in the same package with them). The laser does not represent bits by switching on and off; it stays on all the time, and the bits are encoded onto its beam by an optical modulator.

This modulator, the heart of the transmitter, comes in different varieties. A particularly successful and simple design is the Mach-Zehnder modulator. In it, a narrow silicon waveguide guides the laser light. The waveguide splits into two branches, which converge again after a few millimeters. Ordinarily, this split and rejoin would have no effect on the light output, because both arms of the waveguide are the same length, so the recombined waves remain in phase with each other. But applying a voltage to one arm changes its refractive index, slowing down or speeding up the light wave traveling through it. When the two waves meet again, they interfere destructively and suppress the signal. By varying the voltage on one arm, we therefore use an electrical signal to modulate the optical one.
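To make the interference concrete, here is a minimal numerical sketch of an idealized, lossless Mach-Zehnder modulator. The half-wave voltage V_PI used here is an illustrative assumption, not a figure from the article:

```python
import math

# Idealized Mach-Zehnder transfer function. The input light splits
# equally into two arms; a drive voltage V shifts the phase in one arm
# by delta_phi = pi * V / V_PI. The half-wave voltage V_PI is a device
# property; the 2.0 V used here is an illustrative assumption.
V_PI = 2.0

def mzm_output(v_drive, power_in=1.0):
    """Output power of a lossless Mach-Zehnder modulator."""
    delta_phi = math.pi * v_drive / V_PI
    # Recombining the two arms passes cos^2(delta_phi / 2) of the input.
    return power_in * math.cos(delta_phi / 2.0) ** 2

for v in (0.0, 1.0, 2.0):
    print(f"V = {v:.1f} V -> {mzm_output(v):.2f} of input power")
# V = 0.0 V -> 1.00 (arms in phase: logical 1)
# V = 2.0 V -> 0.00 (arms pi out of phase: logical 0)
```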

The receiver is simpler: it is just a photodiode and its supporting circuits. Light arriving through the fiber strikes a germanium or silicon-germanium photodiode, which produces a current; the supporting circuitry typically converts each light pulse into a voltage.

The modulator and receiver are served by circuits that handle amplification, packet processing, error correction, buffering, and the other tasks required to meet the Gigabit Ethernet standard for optical fiber. How much of this is done on the same chip, or at least in the same package, as the photonics varies by manufacturer, but most of the electronic logic is kept separate from the photonics.


Photonics is unlikely ever to carry data between different parts of a silicon chip: the ring resonator of an optical switch performs the same function as a single transistor, yet it takes up 10,000 times the area.

Silicon integrated circuits with optical components keep multiplying, which might make you think that integrating photonics into the processor is inevitable. For some time, many people thought so.

What was underestimated, or even ignored, however, was the growing mismatch between the rapidly shrinking dimensions of electronic logic and photonics' inability to keep up. Today, transistors have feature sizes of just a few nanometers. In a 7-nm CMOS process, more than a hundred general-purpose logic transistors fit on each square micrometer, and that is before counting the labyrinth of copper wiring above them. Besides the billions of transistors on each chip, there are a dozen or so levels of metal interconnect linking those transistors into registers, multipliers, arithmetic logic units, and the more complex blocks that make up the processor cores and other essential circuits.

The problem is that a typical optical component, such as a modulator, cannot be made much smaller than the wavelength of the light it carries, which limits its minimum width to about 1 micrometer. No Moore's Law will overcome this restriction; it is not a matter of ever more advanced lithography. Electrons, whose wavelength is measured in nanometers, are simply skinny, and photons are fat.

But couldn't manufacturers simply integrate the modulator and accept fewer transistors on the chip? After all, there are billions of them already. They cannot, because each square micrometer of a silicon electronic chip performs so many system functions that replacing even a modest number of transistors with lower-performing optical components would be prohibitively expensive.

Consider some simple arithmetic. Suppose an average of 100 transistors fit on a square micrometer. Then an optical modulator occupying 10 µm by 10 µm displaces a circuit of 10,000 transistors. Recall that a conventional optical modulator acts as a single switch, turning the light on and off; but each transistor can itself act as a switch. Roughly speaking, then, the opportunity cost of adding this primitive function to the circuit is 10,000 to 1, because every optical modulator costs the circuit designer 10,000 electronic switches. No manufacturer will accept such a price, even in exchange for the notable gains in speed and efficiency that integrating modulators directly into the processor might bring.
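The arithmetic is easy to check; a sketch using only the densities quoted above:

```python
# Opportunity cost of one on-chip optical modulator, using the
# figures quoted in the text.
transistors_per_um2 = 100      # logic density at advanced CMOS nodes
modulator_side_um = 10         # modulator footprint: 10 um x 10 um

modulator_area_um2 = modulator_side_um ** 2    # 100 um^2
displaced = transistors_per_um2 * modulator_area_um2
print(displaced)               # 10,000 transistor switches displaced
```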

The idea of replacing on-chip electronics with photonics has other weaknesses. A chip performs critical tasks, such as storing data in memory, for which optics offers no equivalent. Photons are simply incompatible with some of the basic functions of a computer chip, and even where they are not, it makes no sense to pit optical and electronic components against each other on the same chip.


The layout of a data center.
Today (left): photonics carries data over a multi-tier network. The Internet connection sits at the top (core) level, whose switches send data over fiber to the top-of-rack switches.
Tomorrow (right): photonics could reshape data-center architecture. A rack-scale architecture would make data centers more flexible by physically separating compute from memory and linking those resources over an optical network.


None of this means that optics cannot get closer to processors, memory, and the other key chips. Today's data-center optical communications market revolves around top-of-rack (TOR) switches, which hold the optical modules. These switches sit atop the two-meter racks housing servers, memory, and other resources. Optical fiber links the TORs to one another through a separate layer of switches, which in turn connect to another set of switches forming the data center's gateway to the Internet.

The faceplate of a typical TOR, where the transceivers plug in, gives a sense of the data flows. Each transceiver plugs into the TOR and connects to two optical fibers (one for transmitting, one for receiving). A 45-mm-high TOR can accept 32 modules, each carrying data at 40 Gbit/s in each direction, so data can move between two racks at 2.56 Tbit/s.
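A quick check of that aggregate figure, using only the numbers above:

```python
modules = 32                # transceivers on a 45-mm TOR faceplate
gbps_each_way = 40          # per module, in each direction

per_direction = modules * gbps_each_way       # 1,280 Gbit/s
both_directions = 2 * per_direction           # 2,560 Gbit/s
print(per_direction / 1000, both_directions / 1000)  # 1.28, 2.56 Tbit/s
```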

Within the racks and inside the servers, however, data still flows through copper wires, and those wires have become an obstacle to building faster, more energy-efficient systems. Last-meter (or last-couple-of-meters) optical links, bringing the optics to the server or even directly to the processor, are probably the best opportunity to create a huge market for optical components. But first, serious obstacles in both cost and speed must be overcome.

Fiber-to-processor schemes are not new, and the past offers plenty of lessons about their cost, reliability, energy efficiency, and bandwidth. About 15 years ago, I took part in designing and building an experimental transceiver that demonstrated very high bandwidth. The demonstration connected a cable of 12 optical fibers to a processor. Each fiber carried digital signals generated by four vertical-cavity surface-emitting lasers (VCSELs), laser diodes that emit light from the surface of the chip in a more concentrated beam than conventional laser diodes produce. The four VCSELs encoded bits by switching their light on and off, and each operated at its own wavelength within the same fiber, quadrupling the fiber's capacity through coarse wavelength-division multiplexing. With each VCSEL producing a 25 Gbit/s data stream, total system throughput reached 1.2 Tbit/s. At today's industry-standard spacing of 0.25 mm between adjacent fibers in a 12-fiber cable, that works out to a throughput density of 0.4 Tbit/s per millimeter. In other words, every 100 seconds each millimeter could move as much data as the web archive of the U.S. Library of Congress stores in a month.
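Those figures multiply out as follows; a sketch of the arithmetic with the numbers quoted above:

```python
fibers = 12               # fibers in the demo cable
vcsels_per_fiber = 4      # coarse WDM: four wavelengths per fiber
gbps_per_vcsel = 25       # per-laser data rate
pitch_mm = 0.25           # industry-standard fiber spacing

total_gbps = fibers * vcsels_per_fiber * gbps_per_vcsel  # 1,200 Gbit/s
cable_width_mm = fibers * pitch_mm                       # 3.0 mm
density_tbps_per_mm = total_gbps / cable_width_mm / 1000
print(total_gbps / 1000, density_tbps_per_mm)            # 1.2, 0.4
```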

Fiber-to-processor links today demand even higher speeds, but that was a good start. So why wasn't the technology adopted? Partly because the system was neither reliable enough nor practical. At the time it was very hard to fabricate 48 VCSELs for the transmitter and guarantee that none would fail over its lifetime. An important lesson: a single laser feeding many modulators can be made far more reliable than 48 separate lasers.

Today, VCSEL reliability has improved enough that transceivers built on the technology are used for short-distance links in data centers. Ribbons of individual fibers can be replaced by multicore fiber, which carries just as much data by routing it onto separate cores within a single strand. More sophisticated digital transmission schemes have also become practical recently, such as PAM4, which raises the data rate by using four light power levels instead of two. And research continues on increasing the bandwidth density of fiber-to-processor links; the MIT SHINE program, for example, achieves 17 times the density that was available 15 years ago.
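To see why PAM4 doubles the bit rate at a given symbol rate, here is a minimal sketch of the encoding. The four power levels and the Gray-coded bit mapping are illustrative choices, not values from any particular standard:

```python
# PAM4 carries 2 bits per symbol using four optical power levels,
# versus 1 bit per symbol for simple on-off keying. Levels are given
# as illustrative fractions of full laser power.
PAM4_LEVELS = {(0, 0): 0.0, (0, 1): 1/3, (1, 1): 2/3, (1, 0): 1.0}

def encode_pam4(bits):
    """Map a bit sequence to PAM4 symbols (2 bits -> 1 symbol)."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])]
            for i in range(0, len(bits), 2)]

print(encode_pam4([0, 1, 1, 0, 1, 1, 0, 0]))
# 8 bits -> 4 symbols: twice the bit rate of on-off keying
# at the same symbol rate.
```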

These are significant advances, but even taken together they will not be enough to bring photonics the next step closer to the processor. I still believe that step is possible, though, because momentum is now building to change the system architecture of data centers.

Today, processors, memory, and storage are bundled into so-called blade servers, enclosures mounted in racks. But it doesn't have to be that way. Instead of putting the memory chips in the server, they could sit elsewhere in the same rack, or even in a different one. Such a rack-scale architecture (RSA) is expected to use computing resources more efficiently, especially for services like Facebook, where the amounts of computation and memory a given task needs grow over time. It also simplifies maintenance and hardware replacement.

Why would such a configuration help photonics penetrate deeper? Because that ease of reconfiguration and dynamic resource allocation is exactly what a new generation of efficient, low-cost optical switches carrying several terabits per second could make affordable.


The technology for connecting optics directly to the processor has existed for more than 10 years

The main obstacle to this shift in data centers is the cost of components and their manufacture. Silicon photonics already holds one cost advantage: it can draw on existing fabs, the enormous chip-manufacturing infrastructure, and its proven reliability. But silicon and light mix imperfectly. Besides silicon's poor light emission, silicon components suffer large optical losses; a typical silicon optical transceiver loses 10 dB (90 percent) of the light. That inefficiency does not matter much for short links between TOR switches, so long as silicon's potential cost advantage outweighs its drawbacks.
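The 10 dB figure and the 90 percent loss are the same statement in two notations; a one-line check:

```python
loss_db = 10
fraction_out = 10 ** (-loss_db / 10)   # 0.10 of the light survives
print(f"{(1 - fraction_out):.0%} of the light is lost")  # 90%
```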

An important part of a silicon optical module's cost lies in a modest but crucial detail: the optical connection. This is the physical joint between the optical fiber and the receiver or transmitter, and between one fiber and another. Every year, hundreds of millions of optics-to-optics connectors must be manufactured to the highest precision. To appreciate that precision, note that a human hair is usually only slightly thinner than the 125-µm silica fiber strand used in optical cables. The fiber must be aligned in the connector to within about 100 nm, one thousandth of the thickness of a human hair, or the signal attenuates too much. Innovative methods for making cable-to-cable and cable-to-transceiver connectors are needed to meet customers' growing demands for high precision at low cost, yet very few manufacturing technologies can produce them cheaply.

One way to cut costs is to make the optical module's chips cheaper. Wafer-scale integration (WSI) can help here: the photonics is built on one silicon wafer, the electronics on another, and the two wafers are then bonded together (the laser, made from a semiconductor other than silicon, stays separate). This approach saves money because fabrication and assembly proceed in parallel at wafer scale.

Another cost factor is, of course, production volume. Suppose the entire optical Gigabit Ethernet market is 50 million transceivers per year, and each optical transceiver chip occupies 25 square millimeters. If the fab uses 200-mm-diameter wafers and 100 percent of the dies are usable, this market requires about 42,000 wafers per year.
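A sketch of that wafer arithmetic. The ideal packing bound comes out a bit under 42,000 because it ignores the partial dies lost at the wafer edge, which real production cannot:

```python
import math

market_chips = 50_000_000      # transceivers per year
chip_area_mm2 = 25             # per transceiver chip
wafer_diameter_mm = 200

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2  # ~31,416 mm^2
chips_per_wafer = wafer_area_mm2 // chip_area_mm2        # ~1,256 (ideal)
print(round(market_chips / chips_per_wafer))
# ~39,800 wafers ideally; dies lost at the wafer edge push the
# real requirement toward the ~42,000 quoted above.
```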

That may sound like a lot, but it amounts to just two weeks of output from a typical fab. In practice, any transceiver manufacturer could capture 25 percent of the market with a few days of production. Volume must grow if costs are really to fall, and the only way to achieve that is to find a use for photonics below the TOR switch, all the way down to the processors in the servers.

If silicon photonics is ever to penetrate the places where electronic systems do their work, compelling technical and economic reasons will have to emerge. The components will have to solve real problems and substantially improve the system as a whole. They must be small, energy efficient, and extremely reliable, and they must move data extremely fast.

No solution today meets all these requirements, so electronics will keep evolving without integrated optics. Absent serious breakthroughs, fat photons will not fit into the parts of the system where skinny electrons dominate. But if optical components can be made reliably, in very large volumes, at very low cost, the decades-old dream of bringing optics to the processor, along with the architectures it enables, could become reality.

We have made significant progress over the past 15 years. We understand optical technologies better, including where they can and cannot be used in data centers. A sustainable, multibillion-dollar commercial market for optical components has developed, and optical interconnects have become a critical part of the global information infrastructure. Integrating large numbers of optical components into the heart of electronic systems remains impractical. But will it stay that way? I think not.
