LinuxCon + CloudOpen + Embedded LinuxCon Europe 2015: how it was

    Once a year in Europe, an event takes place that everyone who knows at least something about Linux wants to attend. An event that gathers around itself the largest community that has ever existed on this planet. A community of enthusiasts, hackers, engineers, programmers, admins, corporate bosses, all those who have work and hobbies thanks to Linux and open source. We at STC Metrotek are used to sharing knowledge and soaking it up, so we couldn't miss this. Ladies and gentlemen, welcome to Dublin, to the triple-header LinuxCon + CloudOpen + Embedded LinuxCon Europe 2015 conference!



    A first glance at the conference program made it clear these would be a very busy 3 days. Every day from 9 a.m. to 6 p.m. in the huge Convention Center Dublin, 12 (!) talks ran in parallel. Naturally, I wanted to attend almost everything, but our time-turner still hadn't shipped from AliExpress, so I had to choose carefully where to go. Fortunately, there were two of us (me and paulig), so between us we managed to see more. And so, we arrived in Dublin and it began.

    The first day


    Thousands (literally) of Linuxoids of all stripes gathered in the huge exhibition / conference center: from slick startup hipsters launching Docker containers with Node.js applications, and beautiful women standing guard over power management and documentation, to gray-haired Unix hackers who remember the times when Linux was developed via tarballs.

    To answer the question on everyone's mind right away - yes, Linus Torvalds is here. As are Greg Kroah-Hartman, Dirk Hohndel and a dozen other Linux kernel maintainers. And the folks from the Apache Foundation, Docker, Red Hat, IBM, GitHub, Google, Oracle, Intel and other great companies - all those who create the open source software industry.

    On the first day the party proceeds rather modestly: many haven't loosened up yet (including us), some are constrained by the language barrier (it is Europe, after all), and, well, it's only the first day.

    Like any conference, it all starts with keynotes, introductory talks from various top managers. At LinuxCon, by tradition, that person is Jim Zemlin - executive director of the Linux Foundation.

    From the stage of the Convention Center Dublin, he congratulated everyone on two significant dates: the 24th birthday of Linux and the 30th anniversary of the Free Software Foundation.

    The Linux Foundation develops many projects besides Linux itself. The total estimated value of the projects developed under the auspices of the Linux Foundation is $5 billion.

    Jim also announced the creation of a new project - the Real-Time Collaborative Project - under which the RT patch for Linux will be developed, as well as the hiring of its main developer, Thomas Gleixner. A bit of context: not long before, Thomas had publicly lamented that nobody was paying for his considerable efforts and that he was therefore going to abandon the patch.

    After that, Sean Gourley spoke on the theme of Man vs. Machine. Using high-frequency trading as an example, he showed that in the modern world there are areas where humans lose out to algorithms and information systems. Machines are faster than people: while a person takes 0.7 seconds to make a decision, a machine manages to carry out a whole series of trades in that time. Nevertheless, machines do make mistakes, and those mistakes are expensive - the most famous example being the story of Knight Capital. Like it or not, this is the world we live in, a world in which 61% of all Internet traffic is not generated by humans.

    Next was an IBM report in which they touted themselves. Nothing interesting, but they are the main sponsor, so I had to endure.

    And finally, the guys from the Drone Project and ETH Zurich talked about drones - not military hardware, but the familiar copters. The purpose of the project is to create open source tools for building such devices. It currently offers Pixhawk (an open hardware platform) and ROS (Robot Operating System).

    Then I went off to the talks. The first was "Application Driven Storage", where the guys from CNEX Labs presented the idea of so-called "Open-Channel SSDs" and their use by applications. An Open-Channel SSD is an SSD with the FTL stripped out and handed over to the developer: the application itself controls how data is laid out, how data is deleted and garbage-collected, how blocks are remapped, and so on. This removes unnecessary layers of abstraction and complexity that hinder more than they help. All of this matters when you are developing an application that works with the disk intensively, has particularly strict requirements for it, and knows its own access patterns well - for example, databases, which fight with operating systems all the time and implement their own caches, I/O schedulers, etc. The speaker was no exception: he talked about RocksDB, a key-value database forked from LevelDB and specially tuned for SSDs, and about the new LightNVM kernel subsystem, which provides a device-independent API for passing information and management out to userspace. LightNVM was getting a separate talk, so the main focus here was RocksDB. In truth, it is all in its infancy: in terms of performance the new solution currently loses (!) many times over (!!) to the plain old POSIX API on regular SSDs, due to the lack of a page cache and the use of only one SSD channel. But in the future they promise spectacular speedups, close to full SSD utilization. For now, it is what it is. The hardware itself (the open-channel SSD) is currently implemented on an FPGA, but ASICs are planned.
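
    To make the division of labor more concrete, here is a toy sketch of what "the application owns the FTL" means. Everything in it is invented for illustration - this shows the concept, not the actual LightNVM API:

    /* Toy sketch: on an open-channel SSD the application, not the drive
     * firmware, tracks block state, chooses placement and does GC.
     * All names here are hypothetical, NOT the real LightNVM interface. */
    #include <stdio.h>

    #define CHANNELS 4
    #define BLOCKS   8

    enum blk_state { FREE, IN_USE, DIRTY };         /* per-block bookkeeping */
    static enum blk_state ftl[CHANNELS][BLOCKS];

    /* Placement policy: pick a free block, spreading load across channels. */
    static int alloc_block(int *chan, int *blk)
    {
        for (int b = 0; b < BLOCKS; b++)
            for (int c = 0; c < CHANNELS; c++)
                if (ftl[c][b] == FREE) {
                    ftl[c][b] = IN_USE;
                    *chan = c;
                    *blk = b;
                    return 0;
                }
        return -1;                                  /* full: GC needed */
    }

    /* Application-driven garbage collection: reclaim obsolete blocks
     * (a real application would issue erase commands to the device). */
    static void gc(void)
    {
        for (int c = 0; c < CHANNELS; c++)
            for (int b = 0; b < BLOCKS; b++)
                if (ftl[c][b] == DIRTY)
                    ftl[c][b] = FREE;
    }

    int main(void)
    {
        int c, b;
        while (alloc_block(&c, &b) == 0)
            printf("wrote to channel %d, block %d\n", c, b);
        ftl[0][0] = DIRTY;   /* an overwrite invalidates the old copy */
        gc();
        if (alloc_block(&c, &b) == 0)
            printf("after GC: reused channel %d, block %d\n", c, b);
        return 0;
    }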

    Next up was the talk "RTFM? Write a Better FM!" The gist: "stop being a jerk, get to know your audience." Rather banal, because, as the speaker himself noted, the people who come to listen to such talks are usually the ones already doing things right, while those who really need it never show up.

    Then I went to see a speaker I knew in absentia through his code - Alan Tull from Altera, whose code I had studied and extended for the needs of our products. He talked about the FPGA manager framework, familiar to me. Or rather, first about FPGAs in general, then SoC FPGAs, then how people used to work with FPGAs, the framework's development history and its current state. There were few technical details about the FPGA manager itself, more about DeviceTree overlays. We looked at the latest version and its API; I need to check what has changed.

    Continuing the FPGA theme, there was also a talk under the colorful title "Using FPGAs for Driver Testing", from which I expected wondrous revelations. In reality it turned out that the U-Boot developers take a simple Lattice FPGA and build firmware for it that does fuzz testing of hardware buses and devices. In particular, the FPGA is connected to an I2C or SPI bus, random and simply malformed commands are sent over it, and then you see whether the bus driver crashed or not. They do the same for testing the bus itself, with the FPGA simulating an odd device such as an SD card. To be honest, I expected more.
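
    For flavor, here is roughly what the same idea looks like from the other side - fuzzing an I2C device from Linux userspace through the standard i2c-dev interface. The talk's setup did this from the FPGA at the bus level; the bus number and slave address below are made up for the example:

    /* Sketch: throw random bytes at an I2C device and see whether the
     * kernel driver survives. Bus 1 and address 0x50 are placeholders. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/i2c-dev.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        if (ioctl(fd, I2C_SLAVE, 0x50) < 0) { perror("ioctl"); return 1; }

        srand(42);
        for (int i = 0; i < 1000; i++) {
            unsigned char buf[4];
            for (size_t j = 0; j < sizeof(buf); j++)
                buf[j] = rand() & 0xff;
            /* I/O errors are expected and ignored; what matters is
             * whether the driver crashes, hangs or leaks. */
            if (write(fd, buf, 1 + rand() % sizeof(buf)) < 0)
                continue;
        }

        close(fd);
        return 0;
    }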

    "Maximum Performance: How to Get It and How to Avoid Pitfalls" by Christoph Lameter, an active developer of the network kernel subsystem, was a fairly general talk in which he systematized his extensive performance-tuning experience. Software is usually already optimized for the common cases and popular hardware platforms. But sometimes that is not what you need, and if you want to squeeze out the maximum, you have to sacrifice something: money, simplicity, maintainability and/or the performance of other parts. As a rule, two things get tuned - I/O and the network, as the slowest and most resource-hungry parts. And since modern storage systems essentially mimic the behavior of networks (sending commands, encapsulating messages), one can say that "storage today is network communication". If you want to optimize your application, you have to descend toward the hardware: instead of your programming language's API, use buffered I/O from glibc, or drive the socket API directly, or rewrite everything against the RDMA API, or move to an FPGA or ASIC. When optimizing memory access, remember the cache hierarchy and try to help it. To get the most out of the CPU, use vectorization (if the application allows it) or even hand processing off to the GPU. In short, the main message was: if you want maximum performance, tailor the code to your hardware and shed the extra layers, operating system included - which is exactly what the LightNVM and RocksDB people I mentioned above did. We will think about applying this in the B100.
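
    As a tiny illustration of the "help the cache hierarchy" point - not from the talk, just the textbook example - here is the same summation in cache-friendly and cache-hostile order:

    /* Classic cache-hierarchy demo: identical work, very different speed.
     * Row-major traversal walks memory sequentially (prefetcher-friendly
     * and easy to vectorize); column-major jumps N doubles per step and
     * misses the cache constantly. */
    #include <stdio.h>

    #define N 2048

    static double a[N][N];

    int main(void)
    {
        double sum = 0.0;

        /* Cache-friendly: the innermost index walks contiguous memory. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += a[i][j];

        /* Cache-hostile: same result, typically several times slower
         * on arrays that do not fit in cache. */
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += a[i][j];

        printf("%f\n", sum);
        return 0;
    }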

    The day closed with more keynotes, of which the most interesting was the Kernel Developer Panel, where Linux kernel developers are put on stage and asked questions on various topics. Besides the standard "how did you end up doing this", the discussion mostly tried to work out how to attract new people and how to help maintainers in their thankless work. The consensus: the workload is overwhelming, and people need help joining the community - which includes helping the maintainers.

    And we went looking for maintainers in the bars.



    Second day


    The second day promised to be more interesting talk-wise. I was planning to hear about LLVM/Clang, Btrfs and snapshots, Buildroot, and profiling.

    The morning keynotes of the second day were more interesting than the first day's. The first speaker was Leigh Honeywell from Slack Technologies on "Securing an Open Future". Taking the well-known Heartbleed as her starting point, she talked about what can be done to prevent such situations. In short, developers should not forget about security and should do at least something in that direction: study the tooling, the possible attack vectors, and so on - that is, try to think like an attacker. Managers should build a healthy, blame-free culture, because guilt pushes people to hide real problems. Beyond that, it is worth reading up on good secure-development practices such as the Microsoft SDL.

    Next up was the Container Panel - an open discussion on the topic of containers. Key points:

    • Are containers ready for production? Yes - the kernel underpinnings of containers have been around for about 10 years and are in good shape. Docker showed how conveniently this can be used, and there is still plenty to do there.
    • Can containers replace the traditional way of distributing applications via rpm or deb packages? On the one hand, a container is a black box into which you can stuff anything, which makes it unsettling to use. On the other hand, when we install some complex application that pulls in a heap of dependencies, we don't inspect what's inside those packages either. It all comes down to trust and authentication, and container authentication is what is really needed right now.

    The keynotes concluded with a talk about IoT - where would we be without it. In short, IoT will change the world, but first it has to solve some important problems:

    • Security
    • Lack of experts in embedded systems
    • Interaction of things

    Then came the talks. The first one I went to hear was "Boosting Developer Productivity with Clang" by Tilmann Scheller from Samsung.

    First things first: Clang is correctly pronounced "klang", not "shlang" (Russian for "hose") or whatever else you might call it. For those familiar with LLVM and Clang, the talk was boring. Tilmann explained that LLVM is a modular framework for building compilers, and the infrastructure behind many projects, for example:

    • Clang - C/C++/Objective-C compiler
    • LLDB - debugger
    • lld - a framework for building linkers
    • polly - a polyhedral optimizer

    LLVM is now used in WebKit's FTL JIT, Rust, the Android NDK, OpenCL and CUDA implementations, etc., which speaks to more than sufficient maturity.

    The main feature of LLVM is its IR, the Intermediate Representation: a RISC-like intermediate bitcode with type information, into which source code is translated and within which all optimizations are implemented. The IR is then lowered to assembly or machine code for the target architecture.
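
    To give a feel for what the IR looks like, here is a trivial C function and, approximately (the details vary between Clang versions), what clang -O1 -S -emit-llvm produces for it:

    /* square_plus_one.c - compile with:
     *     clang -O1 -S -emit-llvm square_plus_one.c -o square_plus_one.ll
     *
     * The resulting IR is a typed, RISC-like representation, roughly:
     *
     *     define i32 @square_plus_one(i32 %x) {
     *       %mul = mul nsw i32 %x, %x
     *       %add = add nsw i32 %mul, 1
     *       ret i32 %add
     *     }
     */
    int square_plus_one(int x)
    {
        return x * x + 1;
    }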

    Clang is a powerful C/C++/Objective-C compiler with rich diagnostic capabilities. Several rather interesting utilities have been built on top of Clang, namely:

    • Clang Static Analyzer - static analyzer for C code
    • clang-format - code formatter
    • clang-modernize - brings C++ code up to newer standards
    • clang-tidy - finds violations of coding conventions
    • Sanitizers (a small example follows the list):
      • AddressSanitizer
      • ThreadSanitizer
      • LeakSanitizer
      • MemorySanitizer
      • UBSanitizer (undefined behavior)

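    A quick sketch of what the sanitizers buy you: this use-after-free may run "fine" in a normal build, but compiled with -fsanitize=address it is pinpointed immediately:

    /* use_after_free.c - compile and run with:
     *     clang -g -fsanitize=address use_after_free.c && ./a.out
     * AddressSanitizer aborts at the bad read and prints a report with
     * the allocation site, the free site and the offending access. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *p = malloc(10 * sizeof(*p));
        p[3] = 42;
        free(p);
        printf("%d\n", p[3]);   /* heap-use-after-free: ASan flags this */
        return 0;
    }
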
    Performance-wise, the code generated by Clang is on average 2% slower than gcc's, but compilation is much faster and the diagnostics are richer. We then went through the diagnostic capabilities with examples, in comparison with gcc. Finally, he presented a recipe for the fastest possible build of a large C++ project (LLVM itself): Clang + gold + PGO (profile-guided optimization) + split DWARF + an optimized TableGen + Ninja. It comes out twice as fast as gcc. For the details, ask the speaker.

    Next, I went to the intriguing talk on Btrfs and rollbacks, but was disappointed by the rambling and frankly awful delivery of the material.

    Then came a talk on the Project Ara architecture. From what I heard, I learned that the UniPro bus was developed for the modular phone, each module carries a small CPU to speak UniPro, and when a module is inserted the bus driver gets a notification about it, loads the driver and, if needed, asks Android to fetch updated software from the cloud.

    I also attended a great Buildroot tutorial. Thomas Petazzoni from Free Electrons assembled a system for the BeagleBone Black with Buildroot right before our eyes, showed how to configure components and how to add your own kernel patches. We looked at how to create your own package and customize the rootfs. Two hours flew by in a demo with questions and answers, and I came away very enthusiastic about Buildroot and intend to try it.

    And the last talk was on "Linux Performance Profiling and Monitoring", which held nothing of interest for anyone who had ever tried to do this even a little. It consisted of listing utilities like vmstat, sar, top (>_<), with passing mentions of ftrace and perf. If you are genuinely interested, go here instead: brendangregg.com/linuxperf.html

    The final keynote was a "fireside chat" with Linus Torvalds and Dirk Hohndel. The two had a pleasant talk about the state of the kernel and what Linus plans to do next. The kernel is fine; Linus doesn't plan to do anything.

    And then the same Thomas Petazzoni told how his small company of 6 people manages to make a significant contribution to kernel development, ARM SoC support in particular. The secret is simple: a small team with no communication overhead, a focus on getting things upstream, constant knowledge sharing, and talking to people at conferences.

    Day three




    Alongside the conference, a small expo ran in parallel, where sponsors showed themselves off, some did a bit of headhunting, and some held mini-summits (UEFI and Yocto, for example). But mostly people ate and drank there, as the photo makes clear.

    The keynotes of the third day were opened by Martin Fink of HP, who introduced the OpenSwitch project - a modern open OS for network devices (switches). We are watching the project with great interest, hoping to use it in our switches. Martin also named one of open source's main threats: the large number of licenses (around 70), most of which are incompatible with each other (recall the story of ZFS, DTrace and Linux), taking the opportunity to needle Oracle and IBM along the way.

    Next we learned that the Internet of Things needs an open platform that we actually control (hi, Lenovo), and the J-Core CPU project was presented - an open, as in open hardware, processor.

    And then there was a dreadful promotional talk from Huawei in the best traditions of Death by PowerPoint, which nobody listened to.

    Off to the talks. I really wanted to catch the one about multifunction devices (MFD), but the speaker started 10 minutes early (why?!), raced through the introduction and started flinging messy chunks of code around. I fled to the talk about Open-Channel SSDs.

    Open-Channel SSDs are SSD devices that give userspace (and kernel) code access to internal information, namely the SSD's "geometry":

    • The NAND media
    • Channels and timings
    • The bad-block list
    • The PPA (physical page address) format
    • ECC

    All of this is so that I/O-intensive applications can utilize the SSD to the fullest. Details are on GitHub.
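
    As a concrete (though invented for this write-up) picture, that geometry might look to a program something like this:

    /* Hypothetical view of open-channel SSD geometry; the field names are
     * invented for illustration and are not taken from LightNVM. */
    #include <stdint.h>

    struct ocssd_geometry {
        uint16_t num_channels;       /* independent NAND channels          */
        uint16_t luns_per_channel;   /* parallel units behind each channel */
        uint32_t blocks_per_lun;     /* erase-block count                  */
        uint32_t pages_per_block;
        uint32_t page_size;          /* bytes per NAND page                */
        uint32_t t_read_us;          /* NAND timings                       */
        uint32_t t_prog_us;
        uint32_t t_erase_us;
        uint8_t  ppa_format;         /* how physical page addresses pack   */
        uint8_t  ecc_bits;           /* ECC strength the media requires    */
    };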

    Tim Bird gave his annual "Status of Embedded Linux" talk about what has happened in Linux for embedded systems over the past year and what is coming. There is no point listing everything, so I'll make do with a short list:



    What to watch next year:

    • kdbus
    • RT-preempt (now under the Linux Foundation's wing, things should be looking up)
    • Persistent memory
    • SoC mainlining


    Then there were 2 reports about debuggers - “How debuggers work” and “Debugging Linux kernel with GDB”.

    Pawel Moll of ARM told us how debuggers work. The short answer: through ptrace. The longer answer: the debugger forks, calls ptrace(PTRACE_TRACEME, ...) in the child and execs the debuggee there, while all control happens in the parent process (the debugger). Setting a breakpoint at an address means the debugger saves the instruction at that address somewhere and replaces it with an architecture-specific one: on x86 it is int 3, and on ARM there is a specially defined undefined instruction (yes, a defined undefined instruction - pun intended). When execution reaches such an instruction, an exception occurs, raising a signal (SIGILL in the ARM case) that is delivered to the parent process, i.e. the debugger. From there it can do whatever it likes; to do anything actually interesting - read and modify the debuggee's memory and registers - it again goes through ptrace.
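
    The scheme described above fits in a few lines of C. A minimal sketch (Linux; error handling mostly omitted) of the fork + PTRACE_TRACEME + execve dance, with the parent taking control:

    /* Minimal "debugger" skeleton: the child asks to be traced and execs
     * the target; the parent waits for the stop and is then in control. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char *argv[])
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <program> [args...]\n", argv[0]);
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: "trace me", then become the debuggee. */
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            execvp(argv[1], &argv[1]);
            perror("execvp");
            _exit(1);
        }

        /* Parent (the debugger): the child stops at exec. From here a
         * real debugger would plant breakpoints: read the original
         * instruction with PTRACE_PEEKTEXT, write the trap instruction
         * with PTRACE_POKETEXT, then resume and wait for the signal. */
        int status;
        waitpid(pid, &status, 0);
        printf("debuggee stopped on exec; we are in control\n");

        ptrace(PTRACE_CONT, pid, NULL, NULL);
        waitpid(pid, &status, 0);   /* run until the debuggee exits */
        return 0;
    }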

    Peter Griffin (no, not that one) from Linaro talked about debugging the Linux kernel itself, offering 4 approaches:

    1. gdb remote to the kgdb stub in the kernel
    2. gdb remote to qemu
    3. gdb remote to a gdb-compatible JTAG debugger, e.g. OpenOCD
    4. gdb with a kernel dump and the crash utility

    He also talked about the progress of Linux debugging support in GDB.

    Linaro also presented a talk with the grand title "Rethinking the Core OS in 2015". In practice, a dull bearded man recited truisms in the spirit of "let's replace gcc with Clang, and glibc with musl!" The result: a heap of new problems raked up for very little gain. Strange.

    The talks ended there; in closing, a dozen development boards were raffled off and everyone was carted off to the Guinness brewery museum. But that is a whole other story.

    Conclusion


    Reading back over my notes, it may seem the talks were mostly weak, but that is not so. The bad ones certainly spoiled the mood, but the good ones that remained were worth it. In 3 days we managed to talk with a great many people, find out how our industry lives and where it is heading, and come away inspired for a long time - and that is why such events are worth attending.

    And I also have a couple of photos that many of my friends now envy.





    That's all. Thanks for your attention!
