
Trucks and refrigerated trucks in the cloud
In this article we want to share the story of creating a basic gateway prototype and moving it to the Microsoft Azure cloud platform under the Software as a Service (SaaS) model.

To make it clear what will be discussed below, a brief note on the participants:
- Quarta Technologies is a provider of solutions for core IT infrastructure and a leader on the Russian IT market in full information and technical support for embedded systems based on Microsoft Windows Embedded.
- TechnoCom develops and manufactures GLONASS/GPS satellite monitoring systems for vehicles and personnel, fuel control sensors, and software for companies in transport, industry, and agriculture. The company's best-known products are the navigation terminals of the AvtoGRAF series.
- iQFreeze is a solution from Quarta Technologies that collects, processes, and transmits information about the state of cargo and vehicles in real time.
Creating a basic gateway prototype
The AvtoGRAF system is a classic client-server application with access to data and analytics, and its current architecture is expensive to scale. The key idea of the collaboration was therefore to rethink the architecture and bring it to the SaaS model, solving the scalability problems and creating an innovative offer for the market.

Before the project started, it was obvious that technological integration in the Internet of Things paradigm could be a very difficult task: besides integrating Azure services into the existing software, it also involves obtaining data from the equipment installed on vehicles, the AvtoGRAF satellite transport-monitoring terminals (hereinafter "SMT terminals"). Proper preparation greatly simplifies such a task, so preparation began about a month before the event. As a result, several key questions were discussed; they are given below together with the answers.
Is the system up and running? What is the state of the devices, and what can and can't they do?
The system has been in operation for several years now. Since it was intended for use with vehicles from various manufacturers and partners, the equipment varies. Almost all devices are alike in only one respect: changing the firmware, even on a small number of them, is impossible because they run in a production environment. For this reason, the Field Gateway model (placing a gateway close to the devices) cannot be used.
What protocol do devices use to communicate with the server side?
A proprietary binary protocol developed by TechnoCom.
Is it possible to create a new version of the server side without disrupting the running processes, and redirect devices to it from the old version?
Difficult, but feasible. Since such an approach requires serious, long-term development and testing, it was decided to exclude it from this iteration.
Do the devices need to be managed remotely?
Perhaps in the future, but not now.
During planning, it was decided to expand the list of tasks, since components for processing the collected data were being added to the system. Accordingly, the current monolithic architecture had to be split into modules. An incoming data stream from AvtoGRAF devices was already arranged, which closed the questions around device emulation.
Microsoft Azure Cloud Services was chosen to host the gateway that listens on a TCP port and passes data on to the data processing subsystem.
To implement the prototype, the team had 5 hours and 3 developers for 6 tasks:
- Evaluate the current architecture and rethink it using PaaS.
- Implement a test gateway (console application) to verify operation.
- Evaluate existing options for creating a cloud gateway (for example, Azure IoT Gateway, Cloud Services, and so on).
- Migrate the gateway to the cloud.
- Connect the gateway to the data processing subsystem (that is, write code for this).
- Add monitoring capabilities with Application Insights to the prototype.
Initially, we decided to focus on the question of how to break a monolithic application into components.
Architecture before
The old architecture was a classic two-tier client-server design: a monolithic application written in C++ and running on Windows Server. Its job was to receive data packets from the monitoring terminals installed on vehicles, store those packets in local storage, and act as the frontend through which external users access the data. The application contained several modules: networking, data storage, data decoding, and database access.
Data between the server and the devices was transmitted over the proprietary binary protocol on top of TCP. To establish a connection, the client sends a handshake packet, and the server responds with a confirmation packet.

The application relied on a set of Win32 API system calls and stored its data in binary form as files.
The problem with scaling such a system is that the options are limited: increase the server's resources, or add a new server, configure a load balancer, and take on other infrastructure tasks.
Architecture after
In the new architecture the application lost its relevance: in essence it was a gateway performing a few service tasks (for example, saving data without changing it). All of this was replaced by Azure services (the developers weighed which option to choose, the Azure IoT Gateway or a hand-written gateway, and chose the latter, hosted in Azure Cloud Services).

The new system functions on the same principles as the old one, but provides additional options for scaling (including automatic scaling) and removes infrastructure tasks such as configuring the load balancer. Several Cloud Service instances running the gateway listen on a dedicated port, accept device connections, and pass the data on to the processing subsystem.
Porting the prototype to the Microsoft Azure cloud platform using the Software as a Service (SaaS) model
It is important to start with small steps and not implement everything in the cloud all at once. The cloud has its own advantages and peculiarities that can complicate prototyping (for example, opening ports, debugging a remote project, deployment, and so on). Therefore, the first stage of development was a local version of the gateway (a minimal sketch follows the list):
- The gateway listens on a TCP port and accepts a connection when it receives a handshake packet from a device.
- The gateway responds with an ACK packet, establishing the connection. Incoming data packets are then parsed into usable content according to the protocol specification.
- The gateway forwards the data onward.
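Here is a minimal sketch of such a gateway loop in C#. The ACK byte and the port number are our illustrative assumptions: the real packet layout is TechnoCom's proprietary specification and is not reproduced here.
using System.Net;
using System.Net.Sockets;

class LocalGatewaySketch
{
    static void Main()
    {
        var server = new TcpListener(IPAddress.Any, 9999); // port is illustrative
        server.Start();
        while (true)
        {
            using (TcpClient client = server.AcceptTcpClient())
            using (NetworkStream stream = client.GetStream())
            {
                var buffer = new byte[514];
                // Read the handshake packet sent by the device.
                int read = stream.Read(buffer, 0, buffer.Length);
                if (read == 0) continue;
                // Reply with an ACK packet to confirm the connection (contents hypothetical).
                byte[] ack = { 0x01 };
                stream.Write(ack, 0, ack.Length);
                // Read data packets until the device disconnects; parsing and
                // forwarding per the specification are omitted here.
                while ((read = stream.Read(buffer, 0, buffer.Length)) != 0)
                {
                }
            }
        }
    }
}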
While creating the prototype, a problem arose: the developers worked in the Microsoft office, where the network security perimeter enforces a number of policies that could complicate prototyping the local gateway. Since the policies could not be changed, the team used ngrok, which allows creating a secure tunnel to localhost.
During development, a few more questions came up that the team decided to work through in depth: writing to local storage, and obtaining the address and port of the instance on which the gateway runs. Since many variables change dynamically in the cloud, these questions had to be resolved before moving on to the next step.
Writing to Cloud Service local storage
Of course, Azure has better ways to store information than local storage on an instance, but the old architecture assumed local storage, so it was necessary to check whether the system could work in this mode. By default, code running in a Cloud Service does not have free access to the file system: you cannot simply write information to C:\temp. Since this is PaaS, you need to configure a special space called Local Storage in the cloud service configuration and set the cleanOnRoleRecycle attribute to false, which with a reasonable degree of certainty ensures that information in local storage will not be deleted when the role is recycled.
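A declaration along these lines in ServiceDefinition.csdef reserves such storage (the role name and size here are illustrative, not taken from the project):
<WorkerRole name="GatewayRole">
  <LocalResources>
    <LocalStorage name="LocalStorage1" cleanOnRoleRecycle="false" sizeInMB="1024" />
  </LocalResources>
</WorkerRole>
Below is the code that was used to solve this problem.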
// "LocalStorage1" must match the local storage name declared in the service definition.
const string azureLocalResourceNameFromServiceDefinition = "LocalStorage1";
var azureLocalResource = RoleEnvironment.GetLocalResource(azureLocalResourceNameFromServiceDefinition);
var filepath = azureLocalResource.RootPath + "telemetry.txt";

Byte[] bytes = new Byte[514];
String data = null;
while (true)
{
    // 'server' is the TcpListener created from the instance endpoint (see the next section).
    TcpClient client = server.AcceptTcpClient();
    data = null;
    int i;
    NetworkStream stream = client.GetStream();
    while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
    {
        …
        // Append only the i bytes actually read in this iteration.
        System.IO.File.AppendAllText(filepath, BitConverter.ToString(bytes, 0, i));
        …
    }
    client.Close();
}
Testing showed that the data remains in storage, so this can be a good way to keep temporary data.
Retrieving the instance address and port
To get the address and port at runtime, you need to define an endpoint in the cloud service configuration and access it in code.
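In ServiceDefinition.csdef the endpoint declaration might look like this (the public port is illustrative; the name must match the one used in code):
<Endpoints>
  <InputEndpoint name="TCPEndpoint" protocol="tcp" port="9999" />
</Endpoints>
The endpoint is then resolved in the role code: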

// Resolve the address and port assigned to this instance's "TCPEndpoint".
IPEndPoint instEndpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TCPEndpoint"].IPEndpoint;
// Listen directly on the endpoint the platform assigned.
TcpListener server = new TcpListener(instEndpoint.Address, instEndpoint.Port);
Now the automatically configured load balancer distributes requests between instances, while the developer can scale the number of instances, and each instance obtains the data it needs at runtime.
So, the local gateway is up and tested on Cloud Services (which can be emulated locally); now it is time to deploy the project to the cloud. The Visual Studio tooling lets you do this in a few clicks.

The question of forwarding data onward from the gateway instance was successfully closed using official code samples.
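The article does not name the receiving service, so, as an illustration only, here is a minimal forwarding sketch assuming the processing subsystem is fronted by Azure Event Hubs. It uses the Microsoft.ServiceBus.Messaging API (WindowsAzure.ServiceBus NuGet package) that was current for Cloud Services at the time; the connection string is a placeholder.
using Microsoft.ServiceBus.Messaging;

class ForwardingSketch
{
    // In a real role the connection string would come from the cloud service configuration.
    static readonly EventHubClient Client =
        EventHubClient.CreateFromConnectionString("Endpoint=sb://...;EntityPath=telemetry");

    static void Forward(byte[] packet)
    {
        // Wrap the parsed packet in an EventData message and send it to the hub.
        Client.Send(new EventData(packet));
    }
}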
Conclusions
During planning, the team worried that one day would not be enough to develop a working prototype of the system, especially given the variety of environments, devices, and requirements (legacy devices, protocols, a monolithic C++ application). Some things indeed did not get done in time, but the basic prototype was created and worked.
Using PaaS for this kind of task is a great way to:
- Prototype solutions rapidly.
- Use ready-made services and configuration to make the solution scalable and flexible from the start.
In a few hours, we managed to create the basis for developing an end-to-end solution in the IoT paradigm using real data. Future plans include using machine learning to get the most out of the data, migrating to a new protocol, and testing the solution on the Azure Service Fabric platform.
Thanks to Alexander Belotserkovsky and the Quarta Technologies team for their part in creating this material.