Security Week 11: RSA 2019 and a brighter future

    The twenty-eighth RSA Conference was held last week, and while in 2018 this largest event in the industry struggled somewhat to find new meaning, this time everything was fine again. The opening keynote by RSA President Rohit Ghai was devoted to the “landscape of trust” and attempted to sketch a positive scenario for the future, specifically for the year 2049.

    It is positive because by then humanity has managed to solve many of today’s problems, not even cybersecurity issues as such, but rather the difficulties of a new model of society that is almost completely tied to the Internet and digital services. Indeed, if you look at the topic from above (as they traditionally like to do at RSA), it is not only about the ability to hack someone’s computer or server. There is, for example, the problem of manipulation on social networks, and the services themselves sometimes evolve in the wrong direction. In this context, the idea of trust (of users in companies, of people in artificial intelligence) really does matter.


    If you are curious how security is talked about at business events, watch the video.

    Another interesting idea from the keynote: artificial intelligence should not be forced to perform tasks that only people can do well, where facts matter less than emotions and, for example, ethical considerations. And vice versa: decisions that require strict adherence to facts are often better left to machines, which are (presumably) less prone to error. Decisions about whether or not to trust a source of information on the network should be based on reputation. A similar idea can be applied to the problem of cyber incidents: yes, sooner or later they happen to everyone, but the advantage goes to organizations whose efforts to protect customer data outweigh the consequences of a breach.

    Theoretical attacks on machine learning algorithms

    However, RSA still shows how complex the relationship within the industry is between those who find new problems and those who offer solutions. Setting aside a couple of ceremonial talks, the most interesting presentations at the conference, where they paint the future at all, do so in rather pessimistic tones. Noteworthy is the presentation by Google researcher Nicolas Carlini (news). He summarized the experience of attacks on machine learning algorithms, starting with this already classic example from 2017:


    The original image of a cat is modified in a way that is completely invisible to a human, yet the recognition algorithm classifies the picture entirely differently (a minimal sketch of this kind of perturbation is given at the end of this section). What threat does such a modification pose? Here is another example, not the most recent but an informative one:


    The road sign looks as if it has suffered a little at the hands of vandals, but to a person it is still perfectly understandable. A car, however, may recognize the modified sign as a completely different one, say, a speed limit sign, and will not stop at the intersection. It gets more interesting:


    The same method can be applied to sound, which has been demonstrated in practice. In the first example, a speech recognition system “recognized” text inside a music fragment. In the second, an imperceptible manipulation of a voice recording caused it to be recognized as a completely different set of words (as in the picture). In the third case, text was recognized in what was essentially meaningless noise. This is an interesting situation: at some point, people and their digital assistants begin to see and hear completely different things. Finally, machine learning algorithms can, in theory, reveal the personal data they were trained on once they are put to use. The simplest and most familiar example is a predictive typing system, aka the “damned T9”.
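
    To make that last point concrete, here is a toy sketch (entirely made up, not from the presentation) of how a predictive-typing model can memorize a secret from its training text and then hand it back to anyone who can query it. A word-level Markov chain stands in for a real keyboard model; neural language models leak in subtler ways, but the effect is similar in spirit.

```python
# Toy illustration of training-data leakage from a predictive-typing model.
# All data here is invented; a word-level Markov chain stands in for "T9".
from collections import Counter, defaultdict

training_text = (
    "please call me tomorrow . "
    "my card number is 4276 5500 1234 9876 . "
    "please call me later ."
)

# Count word -> next-word transitions, the way a simple keyboard model might.
transitions = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    transitions[current][following] += 1

def predict(prefix, steps=4):
    """Greedily extend `prefix` with the most frequent next word."""
    out = prefix.split()
    for _ in range(steps):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# Anyone who can query the model recovers the memorized "secret":
print(predict("my card number is"))
# -> my card number is 4276 5500 1234 9876
```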

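    As for the image and audio examples above, one canonical way to build such a perturbation is the fast gradient sign method: nudge every input value slightly in the direction that increases the model’s error, so the change stays imperceptible to a human. Below is a minimal sketch of that idea, assuming PyTorch and a pretrained torchvision classifier; the model choice and the epsilon value are illustrative and not taken from Carlini’s talk.

```python
# Minimal sketch of a gradient-based adversarial perturbation (FGSM),
# assuming PyTorch and torchvision; the model and epsilon are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, true_label, epsilon=0.01):
    """Return an adversarial copy of a normalized 1x3xHxW image tensor.

    The per-pixel change is bounded by epsilon, so the picture still looks
    like a cat to a human, but the classifier's loss for the true label grows.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.detach()
```

    The physical stop-sign attack adds constraints so that the perturbation looks like stickers and survives different camera angles and lighting, but the underlying optimization is the same.
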
    Medical Device Security

    Security in medicine has recently been discussed mostly in terms of hopelessly outdated software and the lack of budgets for IT development. As a result, the consequences of cyberattacks are more serious than usual, up to the loss or leak of highly sensitive patient data. At the RSA conference, Check Point Software experts shared the results of a study of the computer network of a real hospital in Israel. In most medical institutions the network is not divided into zones, so it was easy enough to find specialized devices, in this case an ultrasound machine.

    The story of the search for vulnerabilities in the computer part of the device turned out to be very short. The ultrasound machine runs Windows 2000, and finding an exploit for one of the critical vulnerabilities in that OS was not difficult. The researchers gained access to the archive of images with patient names, were able to edit this information, and could have activated a ransomware trojan. The device manufacturer replied that more modern models are built on modern software and regularly receive updates (though that does not mean the updates get installed), but upgrading medical devices costs (a lot of) money, and what is the point if the older devices still work?

    The recommendations for medical organizations are clear: segment the local network and separate devices that store private data from everything else. Interestingly, the development of machine learning technologies in medicine requires the opposite, the widest possible access to patient data for training algorithms.

    Disclaimer: The opinions expressed in this digest may not always coincide with the official position of Kaspersky Lab. Dear editors generally recommend treating any opinions with healthy skepticism.
