Cisco Live EMEA 2019. Technical Sessions: External Simplicity, Internal Complexity

I’m Artyom Klavdiev, Technical Lead of the Linxdatacenter HyperCloud project. Today I continue my account of the global Cisco Live EMEA 2019 conference, moving straight from the general to the particular: the announcements the vendor made in the individual sessions.
This was my first time at Cisco Live. My mission was to attend the technical program, immerse myself in the world of the company's advanced technologies and solutions, and gain a foothold among the specialists working with the Cisco product ecosystem in Russia.
Putting this mission into practice was not easy: the technical program turned out to be oversaturated. It was simply physically impossible to attend all of the round tables, panels, workshops, and discussions, which were split into many tracks and ran in parallel. Everything was covered: data centers, networking, information security, software solutions, hardware; every aspect of the work of Cisco and its partners had its own track with a huge number of events. I had to follow the organizers' recommendations and put together a personal schedule, pre-booking seats in the halls.
Below I describe in more detail the sessions I managed to attend.
Accelerating Big Data and AI/ML on UCS and HX (accelerating AI and machine learning on the UCS and HyperFlex platforms)

This session reviewed Cisco platforms for building artificial intelligence and machine learning solutions. It was a semi-marketing event interspersed with technical points.
The bottom line: IT engineers and data specialists today spend a significant amount of time and resources designing architectures that combine legacy infrastructure, several machine learning stacks, and the software needed to manage this whole complex.
Cisco aims to simplify this task: the vendor focuses on changing the traditional patterns of data center and workflow management by raising the level of integration of all the components needed for AI/ML.
A Cisco-Google case study was presented as an example: the companies are combining the UCS and HyperFlex platforms with industry-leading AI/ML software products such as Kubeflow to create a comprehensive on-premises infrastructure.
Cisco explained how Kubeflow, deployed on UCS/HX together with the Cisco Container Platform, turns the solution into what the company calls a "Cisco/Google open hybrid cloud": an infrastructure in which the development and operation of AI workloads can run symmetrically on on-premises components and in Google Cloud at the same time.
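To make the "symmetric" idea more tangible, here is a minimal Python sketch of my own (not shown at the session) that submits one and the same Kubeflow TFJob definition to two Kubernetes contexts: an on-prem cluster running on UCS/HX under the Cisco Container Platform and a GKE cluster in Google Cloud. The context names, the container image, and the exact TFJob group/version and spec layout are assumptions for illustration.

```python
from kubernetes import client, config

# One training job definition, reused for both environments.
# The kubeflow.org/v1 TFJob schema shown here is a simplified assumption.
TFJOB = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "TFJob",
    "metadata": {"name": "demo-training"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 2,
                "template": {
                    "spec": {
                        "containers": [{
                            "name": "tensorflow",
                            "image": "gcr.io/example/train:latest",  # hypothetical image
                        }],
                        "restartPolicy": "OnFailure",
                    }
                },
            }
        }
    },
}

# Hypothetical kubeconfig context names: one for the on-prem CCP cluster, one for GKE.
for ctx in ("ccp-onprem-ucs-hx", "gke-cloud"):
    config.load_kube_config(context=ctx)          # switch to the target cluster
    api = client.CustomObjectsApi()
    api.create_namespaced_custom_object(
        group="kubeflow.org",
        version="v1",
        namespace="kubeflow",
        plural="tfjobs",
        body=TFJOB,
    )
    print(f"TFJob submitted to context {ctx}")
```

The point of the sketch is only that the same declarative job definition can be applied to both environments without rewriting it per cloud.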
Internet of Things (IoT) Session

Cisco is actively promoting the idea that IoT should be developed on the basis of its own network solutions. The company presented its Industrial Router product line: small-form-factor LTE routers and switches with increased fault tolerance, moisture resistance, and no moving parts. Such devices can be built into almost any object in the surrounding world: transport, industrial facilities, commercial buildings. The basic idea: "Deploy these devices on your sites and manage them from the cloud through a centralized console." The line runs Kinetic software to streamline remote deployment and management. The goal is to make IoT systems more manageable.
ACI Multi-Site Architecture and Deployment (ACI, or Application Centric Infrastructure, and network microsegmentation)

This session covered the concept of an infrastructure built around network microsegmentation. It was the most complex and detailed session I managed to attend. Cisco's general message was this: previously, the traditional elements of IT systems (network, servers, storage, and so on) were connected and configured separately, and the engineers' task was to bring everything together into a single working, manageable environment. UCS changed the situation: the network part was separated into its own layer, and server management became centralized in a single panel. It does not matter whether there are 10 or 10,000 servers; any number is controlled from a single point of management, and both management and data traffic run over the same wire. ACI takes this further and brings both the network and the servers into one management console.
Network microsegmentation is the most important function of ACI: it allows granular separation of applications within a system, each with its own level of communication with the others and with the outside world. For example, two virtual machines running under ACI cannot communicate with each other by default. Communication between them is opened only by creating a so-called "contract", which lets you specify detailed access lists for fine-grained (in other words, micro) network segmentation.
Microsegmentation makes it possible to fine-tune any segment of the IT system, isolating individual components and linking them together in any configuration of physical and virtual machines. Endpoint Groups (EPGs) are created, to which filtering and routing policies are applied. Cisco ACI can group the EPGs of existing applications into new microsegments (uSeg) and configure network policies or VM attributes for each specific microsegment.
For example, you can assign web servers to an EPG in order to apply the same policies to all of them. By default, all compute nodes within an EPG can communicate freely with each other. However, if a web EPG contains web servers for both the development and production environments, it may make sense to prevent them from communicating with each other so that a failure in one environment does not affect the other. Microsegmentation with Cisco ACI lets you create a new EPG and automatically apply policies to it based on VM name attributes such as "Prod-xxxx" or "Dev-xxx".
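To illustrate the logic, here is a small conceptual model in Python of my own (it is not the ACI API): VMs are classified into uSeg EPGs by name prefix, and traffic is allowed only between EPGs that have an explicit contract, mirroring the default-deny behavior described above. All names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VM:
    name: str

def classify(vm: VM) -> str:
    """Assign a VM to a microsegment (uSeg EPG) based on its name attribute."""
    if vm.name.startswith("Prod-"):
        return "uSeg-web-prod"
    if vm.name.startswith("Dev-"):
        return "uSeg-web-dev"
    return "epg-web-default"

# Contracts: communication is allowed only between EPG pairs listed here.
# Everything else is denied by default, just as between EPGs in ACI.
CONTRACTS = {
    frozenset({"uSeg-web-prod", "epg-db"}),   # prod web may talk to the database tier
}

def allowed(src: VM, dst_epg: str) -> bool:
    """Check whether traffic from a VM's EPG to a destination EPG is permitted."""
    return frozenset({classify(src), dst_epg}) in CONTRACTS

print(allowed(VM("Prod-web-01"), "epg-db"))       # True: covered by a contract
print(allowed(VM("Dev-web-07"), "epg-db"))        # False: dev web is isolated
print(allowed(VM("Prod-web-01"), "uSeg-web-dev")) # False: prod and dev cannot talk
```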
Of course, this was one of the key sessions of the technical program.
Effective Evolution of DC Networking (the evolution of data center networks in the context of virtualization technologies)

This session was logically related to the microsegmentation session and also touched on container networking. In general, it covered migration from one generation of virtual routers to the next, with architecture diagrams, interconnection schemes between different hypervisors, and so on.
The key topics were the VXLAN-based ACI architecture, microsegmentation, and the distributed firewall, which together make it possible to configure firewalling for, say, a hundred virtual machines at once.
The ACI architecture performs these operations not at the guest OS level but at the virtual network level: configuring a specific set of rules for each machine at the virtualized network layer, rather than manually in the OS, is safer, faster, and less labor-intensive, and it gives better visibility into what happens in every network segment. What's new:
- ACI Anywhere lets you distribute policies to public clouds (currently AWS, with Azure to follow) as well as to on-premises elements, simply by copying the required configuration of settings and policies.
- Virtual Pod is a virtual instance of ACI, a copy of the physical control module; using it requires a physical original (though I am not certain about this).
How this can be applied in practice: extending network connectivity into large clouds. Multicloud is coming; more and more companies use hybrid configurations and run into the need to configure networking differently in each cloud environment. ACI Anywhere now makes it possible to deploy networks with a single approach, the same protocols, and the same policies everywhere.
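As a rough illustration of the "define once, apply everywhere" idea, here is a sketch of my own (heavily simplified; in reality ACI Multi-Site relies on an orchestrator rather than a hand-rolled loop) that posts the same tenant/contract definition to several controllers. The endpoints, credentials, and payload shape are assumptions that only loosely follow APIC REST conventions.

```python
import requests

# Hypothetical controller endpoints: an on-prem APIC and a cloud-site APIC.
SITES = ["https://apic-onprem.example.com", "https://capic-aws.example.com"]

# One policy definition, reused verbatim for every site. The payload shape
# (class name -> attributes/children) is a simplified assumption for illustration.
TENANT_POLICY = {
    "fvTenant": {
        "attributes": {"name": "demo-tenant"},
        "children": [
            {"vzBrCP": {"attributes": {"name": "web-to-db"}}}  # a contract stub
        ],
    }
}

def push_policy(base_url: str, user: str, password: str) -> None:
    """Log in to one controller and post the same tenant/contract definition."""
    session = requests.Session()
    login = {"aaaUser": {"attributes": {"name": user, "pwd": password}}}
    # The session is assumed to keep the auth cookie returned by the controller.
    session.post(f"{base_url}/api/aaaLogin.json", json=login, verify=False)
    resp = session.post(f"{base_url}/api/mo/uni.json", json=TENANT_POLICY, verify=False)
    resp.raise_for_status()
    print(f"{base_url}: policy applied")

if __name__ == "__main__":
    for site in SITES:
        push_policy(site, user="admin", password="secret")
```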
Designing Storage Networks for the Next Decade in an All-Flash DC (SAN)
An interesting session about SANs, with a demonstration of a set of configuration best practices.
Top content: overcoming slow drain in SAN fabrics. It occurs when one of two or more storage arrays is upgraded or replaced with a higher-performance configuration while the rest of the infrastructure stays the same. This leads to slowdowns of all applications running on that infrastructure. The FC protocol has no window-size negotiation mechanism like the one in TCP/IP, so when the amount of data being sent is out of balance with the bandwidth and processing capacity of the link, there is a chance of running into slow drain. The recommendation for avoiding it is to keep the bandwidth and speed of the host edge and the storage edge in balance, so that the aggregate speed of the inter-switch links is higher than in the rest of the fabric. The session also covered ways to identify slow drain, such as traffic segregation using VSANs.
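As a back-of-the-envelope illustration of that balance recommendation (my own sketch, not from the session), the snippet below compares the aggregate host-edge and storage-edge bandwidth with the inter-switch link (ISL) bandwidth and flags a potential slow-drain imbalance. The numbers are made up.

```python
def slow_drain_risk(host_edge_gbps: list[float],
                    storage_edge_gbps: list[float],
                    isl_gbps: list[float]) -> bool:
    """Flag a risk if either edge can outrun the inter-switch links (ISLs)."""
    isl_total = sum(isl_gbps)
    return sum(host_edge_gbps) > isl_total or sum(storage_edge_gbps) > isl_total

# A storage array was upgraded to 32G FC ports while the ISLs stayed at 2 x 16G:
hosts = [16.0] * 4          # four hosts on 16G FC
storage = [32.0, 32.0]      # upgraded all-flash array on 2 x 32G FC
isls = [16.0, 16.0]         # inter-switch links left unchanged

print(slow_drain_risk(hosts, storage, isls))  # True: the edges can oversubscribe the ISLs
```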
Much attention was paid to zoning. The main recommendation for configuring a SAN is to follow the "1 to 1" principle (one initiator zoned to one target). With a large fabric this generates a huge amount of work, and the TCAM is not infinite, which is why smart zoning and auto zoning options have appeared in Cisco's SAN management solutions.
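A tiny sketch of what the "1 initiator to 1 target" principle means in practice (my own illustration; real fabrics use WWPNs and switch tooling or smart zoning rather than a script): every initiator-target pair gets its own zone, which is why the number of zones, and the TCAM consumption, grows quickly.

```python
from itertools import product

# Hypothetical device aliases; in a real fabric these would be WWPNs.
initiators = ["host-esx-01", "host-esx-02", "host-esx-03"]
targets = ["array-a-ctrl0", "array-a-ctrl1"]

# Single-initiator, single-target zoning: one zone per (initiator, target) pair.
zones = {
    f"z_{ini}_{tgt}": (ini, tgt)
    for ini, tgt in product(initiators, targets)
}

for name, members in zones.items():
    print(name, members)

# 3 initiators x 2 targets already yields 6 zones; large fabrics explode quickly,
# which is the motivation for smart zoning / auto zoning.
print(f"total zones: {len(zones)}")
```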
HyperFlex Deep Dive Session

Find me in the photo :-)
This session covered the HyperFlex platform as a whole: its architecture, data protection methods, and various usage scenarios, including next-generation workloads such as data analytics.
The main message: the platform's capabilities today allow it to be configured for any workload, scaling and distributing its resources across the tasks the business faces. The speakers presented the main advantages of the platform's hyperconverged architecture, the most important of which is the ability to quickly deploy advanced technology solutions with minimal infrastructure configuration costs, lower IT TCO, and higher performance. Cisco delivers these benefits through advanced networking plus management and control software.
A separate part of the session was devoted to Logical Availability Zones, a technology that increases the fault tolerance of server clusters. For example, with 16 nodes assembled into a single cluster and a replication factor of 2 or 3, the technology distributes data copies across zones of nodes, containing the consequences of possible node failures at the cost of some dedicated capacity.
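A conceptual sketch of the zone idea (my own simplification, not HyperFlex's actual placement algorithm): nodes are grouped into logical zones and the replicas of each block are placed in different zones, so losing several nodes inside one zone costs the cluster only one copy.

```python
def make_zones(nodes: list[str], zone_count: int) -> list[list[str]]:
    """Split cluster nodes into logical availability zones (round-robin)."""
    zones = [[] for _ in range(zone_count)]
    for i, node in enumerate(nodes):
        zones[i % zone_count].append(node)
    return zones

def place_replicas(block_id: int, zones: list[list[str]], rf: int) -> list[str]:
    """Place rf copies of a block, each in a different zone."""
    placement = []
    for offset in range(rf):
        zone = zones[(block_id + offset) % len(zones)]
        placement.append(zone[block_id % len(zone)])
    return placement

nodes = [f"node-{i:02d}" for i in range(1, 17)]   # the 16-node cluster from the example
zones = make_zones(nodes, zone_count=4)
print(place_replicas(block_id=7, zones=zones, rf=3))
# The three copies land in three different zones, so losing every node
# in one zone still leaves two intact copies of the block.
```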
Summary and Conclusions

Cisco is actively promoting the idea that today virtually every option for configuring and monitoring IT infrastructure is available from the cloud, and that these solutions should be adopted as quickly and as widely as possible, simply because they are more convenient, remove the need to deal with a mountain of infrastructure issues, and make your business more flexible and modern.
As device performance grows, so do all the associated risks. 100-gigabit interfaces are already a reality, and you have to learn to manage these technologies in line with the needs of the business and your own competencies. Deploying IT infrastructure has become simple; managing and evolving it has become much more complex.
At the same time, there is nothing radically new at the level of basic technologies and protocols (everything still runs on Ethernet, TCP/IP, and so on), but the multiple layers of encapsulation (VLAN, VXLAN, and so on) make the overall system extremely complex. Behind seemingly simple interfaces hide very complex architectures and problems, and the price of a single mistake goes up. The easier it is to manage, the easier it is to make a fatal blunder: you should always remember that a policy you change is applied instantly and affects every device in your IT infrastructure. Going forward, adopting the latest technological approaches and concepts such as ACI will require a radical upgrade in staff training and in the company's internal processes: you will have to pay a high price for the simplicity. With progress come risks of an entirely new level and profile.
Epilogue

While I was preparing this article about the Cisco Live technical sessions, my colleagues from the cloud team managed to visit Cisco Connect in Moscow. Here are the interesting things they heard there.
Panel discussion on digitalization challenges
Talks by IT managers from a bank and a mining company. The gist: if earlier IT specialists came to management to get purchases approved and pushed them through with difficulty, now it is the other way around, and management is chasing IT as part of enterprise digitalization. Two strategies stand out here: the first could be called "innovative" (find new products, filter them, test them, and find practical applications); the second, the "early follower" strategy, involves being able to find cases from Russian and foreign colleagues, partners, and vendors and apply them in your own company.

Booth: "Data centers with the new Cisco AI platform server (UCS C480 ML M5)"
The server packs 8 NVIDIA V100 GPUs, 2 Intel CPUs with up to 28 cores, up to 3 TB of RAM, and up to 24 HDDs/SSDs into a single 4U chassis with a powerful cooling system. It is designed to run artificial intelligence and machine learning applications; on TensorFlow it delivers 8 x 125 teraFLOPS, roughly one petaFLOPS in total. At the event, an analytics system that tracked visitors' routes around the conference by processing video streams was running on this server.
New Nexus 9316D Switch
A 1U chassis holds sixteen 400 Gbit/s ports, 6.4 Tbit/s in total.
For comparison, I looked up the peak traffic of MSK-IX, the largest traffic exchange point in Russia: 3.3 Tbit/s. In other words, a significant part of the Runet fits into a single rack unit.
It supports L2, L3, and ACI.
And finally, an eye-catching photo from our talk at Cisco Connect.

The first article: Cisco Live EMEA 2019: trading an old IT bike for a BMW in the clouds