What is 100G?

In 2011-2012, the telecommunications world stepped into the 100G era, and it was a massive step taken everywhere at once. If respected Habr readers are curious how this came about, a little background is under the cut.

First of all, a small clarification is needed: 100G means data transfer at a speed of one hundred gigabits per second. However, there are several ways to implement such data rates. For example, the IEEE 802.3 family of standards covers so-called "client" connections, which operate over distances from 100 m to 10 km.
Hereinafter, 100G transmission is assumed to take place in DWDM networks, within a 50 GHz channel. As a rule, DWDM (Dense Wavelength Division Multiplexing) implies the ability to transmit roughly 40-88 channels in the C band (96 in the extended band) over distances from 80 to several thousand kilometers; these are the so-called Optical Transport Networks (OTN). More about such networks has already been written, for example, here. The OTU4 frame structure is used to carry 100G in OTN networks, but that is a completely different story...

So how did this speed become possible?

To make transmission at such a speed possible, engineers had to resort to several tricks to reduce the underlying symbol rate:
1. Polarization multiplexing.
2. New modulation formats.
3. Coherent reception.
It is important to understand the difference between bit rate and symbol rate, since one symbol can encode several bits. Symbol rate is measured in baud; more details can be found at ru.wikipedia.org/wiki/%D0%91%D0%BE%D0%B4 or via Google.
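The bit-rate/symbol-rate distinction can be illustrated with a trivial calculation (the helper function and values below are illustrative, not from any particular standard):

```python
def symbol_rate_gbd(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    """Symbol rate (GBd) = bit rate divided by the bits each symbol carries."""
    return bit_rate_gbps / bits_per_symbol

# OOK carries 1 bit per symbol: symbol rate equals bit rate.
print(symbol_rate_gbd(10, 1))  # 10.0 GBd for a 10G OOK signal

# QPSK carries 2 bits per symbol: the required symbol rate is halved.
print(symbol_rate_gbd(10, 2))  # 5.0 GBd
```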

Polarization Multiplexing

Light propagating in a fiber has two perpendicularly oriented polarization components.

In previous-generation systems, information was carried by both components simultaneously, which caused problems with signal dispersion (in particular, polarization-mode dispersion, PMD). By using the two components as independent data channels, the required symbol rate is halved (bit-rate multiplier: 2).
Some manufacturers, who did not yet have high-performance 100G transmitters at the dawn of the technology, worked around the problem by multiplexing two subcarriers.

The bit-rate multiplier becomes 2 × 2, so a lower symbol rate is required per subcarrier, but transmission performance degrades and production complexity increases.
The disadvantage of this solution is stricter requirements for dispersion compensation compared to conventional single-carrier polarization-multiplexed systems (i.e., the effect of dispersion is even more damaging).

New Types of Modulation

For speeds of 10G and below, simple modulation formats are generally used, collectively called OOK (On-Off Keying), for example CRZ, CSRZ, ODB (one bit per symbol).

For speeds above 10G, modulation formats that pack more bits into each symbol are used, for example DQPSK (Differential Quadrature Phase Shift Keying).

Since in this case one symbol encodes two bits, the bit rate doubles for the same symbol rate (two bits per symbol). The next step is the introduction of QAM modulation, in particular 16-QAM, but that is the subject of another article.
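A minimal sketch of how a QPSK-style mapper packs two bits into one symbol; the Gray-coded phase mapping below is a common convention chosen for illustration, not taken from the article:

```python
import cmath

# Illustrative QPSK mapper: each pair of bits selects one of four phases.
QPSK_MAP = {
    (0, 0): cmath.exp(1j * cmath.pi / 4),
    (0, 1): cmath.exp(1j * 3 * cmath.pi / 4),
    (1, 1): cmath.exp(1j * 5 * cmath.pi / 4),
    (1, 0): cmath.exp(1j * 7 * cmath.pi / 4),
}

bits = [0, 0, 1, 1, 0, 1, 1, 0]
# Two bits are consumed per symbol, so 8 bits become 4 symbols.
symbols = [QPSK_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]
print(len(bits), "bits ->", len(symbols), "symbols")  # 8 bits -> 4 symbols
```

Halving the number of symbols for the same bit stream is exactly what halves the required symbol rate on the line.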

As a result, carrying 100G requires a line rate of about 112 Gbit/s (due to headers and service overhead), but thanks to these technical solutions the required symbol rate drops to 28 GBd (28 GBd × 2 polarizations × 2 bits per symbol = 112 Gbit/s) for a single carrier, or 14 GBd for two subcarriers. Of course, this is a very rough back-of-the-envelope explanation, but perhaps it will help someone understand where to look next.
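The arithmetic above can be checked in a couple of lines (the 112 Gbit/s overhead figure is the approximate value from the text):

```python
# Rough sanity check of the 112 Gbit/s -> 28 GBd reduction described above.
line_rate_gbps = 112      # ~100G payload plus OTN/FEC overhead (approximate)
polarizations = 2         # polarization multiplexing
bits_per_symbol = 2       # (D)QPSK

symbol_rate = line_rate_gbps / (polarizations * bits_per_symbol)
print(symbol_rate)        # 28.0 GBd on a single carrier

subcarriers = 2
print(symbol_rate / subcarriers)  # 14.0 GBd per subcarrier
```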

Coherent Reception

The use of coherent reception is mainly motivated by the dispersion (broadening) of pulses in the fiber.
The main difference between coherent detection and conventional (direct-detection) systems is that with direct detection it is only possible to read an amplitude value, for example the instantaneous light intensity at the photodiode.

The main idea of coherent reception is that two signals arrive at the receiver (4): one from the source and one from a local oscillator (2) (the so-called reference signal). The two signals interfere (3), so the photodetector sees an interference pattern, which means it can recover information about the phase.
In addition, after the light is converted to electricity at the photodiode (4), the distortions caused by dispersion and the errors caused by noise are compensated (including FEC), which makes it possible to transmit signals over hundreds of kilometers without restoring them (3R regeneration). In fact, this technology brought DWDM systems to a new stage of evolution thanks to its extremely effective error compensation.
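A toy numeric illustration of why mixing with a reference recovers phase while direct detection does not (this is a simplified model, not a real coherent DSP chain; the phase values are arbitrary):

```python
import math
import cmath

signal_phase = math.pi / 3                # the phase we want to recover
signal = cmath.exp(1j * signal_phase)     # unit-amplitude optical field

# Direct detection: the photodiode measures only intensity |E|^2,
# so any phase information is lost.
intensity = abs(signal) ** 2
print(round(intensity, 6))                # 1.0 -- phase is gone

# Coherent detection: beat the signal against a known local-oscillator
# reference; the interference term retains the signal's phase.
lo = cmath.exp(1j * 0.0)                  # local oscillator at phase 0
beat = signal * lo.conjugate()            # simplified interference product
recovered_phase = cmath.phase(beat)
print(round(recovered_phase, 6))          # ~1.047198, i.e. pi/3
```

Once the phase is available in the electrical domain, dispersion-induced distortions become a linear effect that digital processing can undo, which is what lets coherent systems skip inline dispersion compensation.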

As a result, this mixture of engineering and technical solutions made such a necessary and promising technology a reality. New challenges lie ahead: research is actively under way into transmission at 400 Gbit/s and 1 Tbit/s, and I think that in a couple of years these technologies will also move from the laboratory into practical use.

Honestly, I had never written articles before this, so I apologize if some points are covered a bit chaotically; I will try to answer your questions if any come up.
