# Systematic Corrective Codes: The Linear Group Code

This article considers the linear group code as a representative of systematic corrective codes and proposes an implementation of it in C++.

What is a corrective code? A corrective code is a code designed to detect and correct errors. Systematic codes are codes in which the control and information bits are arranged according to a specific system; examples include the Hamming code and linear group codes themselves.

A linear group code consists of information bits and control bits. For example, for an initial 4-symbol combination, a linear group code would look like this:

`|1100|110|`

Here the first 4 symbols are our original combination, and the last 3 symbols are control bits.

The total length of the linear group code is 7 symbols. If we know the number of bits in the original combination, then the number of check bits is calculated by the formula:

$$d = \log_2\left[(n + 1) + \log_2(n + 1)\right]$$

where n is the number of information bits, that is, the length of the original combination, and the logarithm is base 2. The total length N of the code is calculated by the formula:

$$N = n + d$$

Suppose the original combination is 10 bits.

$$d = \log_2\left[(10 + 1) + \log_2(10 + 1)\right] \approx 3.85$$

$$N = 10 + 4 = 14$$

d is always rounded __up__, so d = 4, and the full code length will be __14__ bits.

Having figured out the length of the code, we need to compose the generating and parity-check matrices.

The generating matrix has dimension n by N, where N is the length of the linear group code and n is the length of its information part. In essence, the generating matrix consists of two blocks: an identity matrix of dimension n by n and a matrix of control bits of dimension n by d. While the identity matrix is built simply by placing ones on the main diagonal, composing the “control” part follows certain rules, which are easiest to explain by example. We take the 10-bit information combination we already know, but add extra redundancy to the code with a 5th control bit. The matrix then has dimension 10 by 15.

```
1 0 0 0 0 0 0 0 0 0 1 1 1 1 1
0 1 0 0 0 0 0 0 0 0 1 1 1 1 0
0 0 1 0 0 0 0 0 0 0 1 1 1 0 1
0 0 0 1 0 0 0 0 0 0 1 1 0 1 1
0 0 0 0 1 0 0 0 0 0 1 0 1 1 1
0 0 0 0 0 1 0 0 0 0 0 1 1 1 1
0 0 0 0 0 0 1 0 0 0 1 1 1 0 0
0 0 0 0 0 0 0 1 0 0 1 1 0 0 1
0 0 0 0 0 0 0 0 1 0 1 0 0 1 1
0 0 0 0 0 0 0 0 0 1 0 1 0 1 1
```

The “control” part is filled by counting a binary number downward while maintaining the minimum code distance between rows: in our case 11111, 11110, 11101, and so on.

The minimum code distance for the combination is calculated by the formula:

`Wp = r + s + 1`

Where r is the rank of the error being detected, and s is the rank of the error being corrected.

In our case, the rank of both the detected and the corrected error is 1, so the minimum code distance is 3.

It is also necessary to compose a parity-check matrix. It is built by transposing the “control” part and appending an identity matrix of dimension d by d.

```
1 1 1 1 1 0 1 1 1 0 1 0 0 0 0
1 1 1 1 0 1 1 1 0 1 0 1 0 0 0
1 1 1 0 1 1 1 0 0 0 0 0 1 0 0
1 1 0 1 1 1 0 0 1 1 0 0 0 1 0
1 0 1 1 1 1 0 1 1 1 0 0 0 0 1
```

Having composed the matrices, we can now write out a linear group code by summing modulo 2 the rows of the generating matrix whose numbers correspond to the nonzero bits of the original message.

Consider this step as an example of the initial message 1001101010.

Linear group code: 100110101011100

Note that the control bits of the LGC are determined by the parity of the column sums over the selected rows; in our case these sums are 5, 3, 3, 4, 4, so the control part of the code is 11100.

As a result, we have composed a linear group code. As mentioned earlier, a linear group code also has corrective ability: in our case, it can detect and correct a single error.

Let's say our code was transmitted and an error occurred in the 6th position. To detect errors in the code, the previously composed parity-check matrix is used.

To determine in which particular position the error occurred, we need to compute the “error syndrome”. The syndrome is obtained from parity checks over the nonzero positions of each row of the parity-check matrix. In our case there are five such checks, and we run the received message 100111101011100 through all of them.

$$S_1 = b_1 \oplus b_2 \oplus b_3 \oplus b_4 \oplus b_5 \oplus b_7 \oplus b_8 \oplus b_9 \oplus b_{11} = 0$$

$$S_2 = b_1 \oplus b_2 \oplus b_3 \oplus b_4 \oplus b_6 \oplus b_7 \oplus b_8 \oplus b_{10} \oplus b_{12} = 1$$

$$S_3 = b_1 \oplus b_2 \oplus b_3 \oplus b_5 \oplus b_6 \oplus b_7 \oplus b_{13} = 1$$

$$S_4 = b_1 \oplus b_2 \oplus b_4 \oplus b_5 \oplus b_6 \oplus b_9 \oplus b_{10} \oplus b_{14} = 1$$

$$S_5 = b_1 \oplus b_3 \oplus b_4 \oplus b_5 \oplus b_6 \oplus b_8 \oplus b_9 \oplus b_{10} \oplus b_{15} = 1$$

Having obtained this binary number, we compare it with the columns of the parity-check matrix. As soon as we find the matching column, its index gives the position of the error, and we invert the bit at that index.

In our case the syndrome is 01111, which corresponds to the 6th column, that is, the 6th position of the LGC. We invert that bit and obtain the correct linear group code.

The corrected LGC is decoded by simply removing the control bits; after removing them, we recover the original combination that was sent for encoding.

In conclusion, correcting codes such as linear group codes and the Hamming code are by now rather dated, and in effectiveness they certainly yield to modern alternatives. However, they cope well with the task of introducing the encoding of binary data and the correction of errors caused by interference in a communication channel.

A C++ implementation of LGC encoding:

```cpp
#include <iostream>
#include <string>
#include <vector>

using namespace std;

int main()
{
    // Generating matrix (10 x 15): identity part followed by the control part
    vector<vector<int>> matr = {
        {1,0,0,0,0,0,0,0,0,0, 1,1,1,1,1},
        {0,1,0,0,0,0,0,0,0,0, 1,1,1,1,0},
        {0,0,1,0,0,0,0,0,0,0, 1,1,1,0,1},
        {0,0,0,1,0,0,0,0,0,0, 1,1,0,1,1},
        {0,0,0,0,1,0,0,0,0,0, 1,0,1,1,1},
        {0,0,0,0,0,1,0,0,0,0, 0,1,1,1,1},
        {0,0,0,0,0,0,1,0,0,0, 1,1,1,0,0},
        {0,0,0,0,0,0,0,1,0,0, 1,1,0,0,1},
        {0,0,0,0,0,0,0,0,1,0, 1,0,0,1,1},
        {0,0,0,0,0,0,0,0,0,1, 0,1,0,1,1}
    };

    cout << "Generating matrix:" << endl;
    for (const auto& row : matr)
    {
        for (int b : row) cout << b << '\t';
        cout << endl;
    }

    cout << "Enter a 10-bit message: ";
    string str;
    cin >> str;
    if (str.size() != 10)
    {
        cout << "Invalid string length!" << endl;
        return 1;
    }

    vector<int> arr;
    for (char c : str) arr.push_back(c - '0');

    // Select the generator rows at the positions of the nonzero message bits
    vector<vector<int>> R;
    for (int i = 0; i < 10; i++)
        if (arr[i] == 1) R.push_back(matr[i]);

    // Sum the selected rows modulo 2 to obtain the code word P
    vector<int> P;
    for (int j = 0; j < 15; j++)
    {
        int PT = 0;
        for (const auto& row : R) PT += row[j];
        P.push_back(PT % 2);
    }

    cout << "Linear group code: ";
    for (int b : P) cout << b;
    cout << endl;
    return 0;
}
```
