Security Week 33: doors without locks, Microsoft invulnerability, disassembler and pain

    Welcome to the weekly security events digest, season one, episode two. In the previous episode we learned about self-opening cars, Android's chronic stage fright, and how nobody will track us online anymore (in fact, they will). This issue has two seemingly unrelated news items that nevertheless share one thing: they are not about someone being vulnerable somewhere, but about how vulnerability is sometimes caused by an unwillingness to switch on security measures that are already available. The third item is not about security at all, but rather about the peculiarities of relationships within the industry. Interestingly, if you dig a little, none of the three is quite what it seems at first.

    A reminder of the rules: every week the editorial team of the Threatpost news site selects the three most significant news items, to which I add an extended and merciless commentary. All episodes of the series can be found here.

    Hacking hotel doors
    News.

    They say the world is divided into humanities people and techies, and the two barely understand each other. And turning from a humanities person into a techie is supposedly impossible. That stereotype was once refuted by John Wiegand, who started out as a musician. In the 1930s he played the piano and conducted a children's choir, until he became interested in how audio amplifiers work. In the 1940s he worked on the novelty of the day, magnetic sound recording, and in 1974 (at the age, mind you, of 62) he made his main discovery.

    Wiegand wire, made of a cobalt-iron-vanadium alloy, behaves in a peculiar way in a magnetic field: the magnetization of the core flips relative to the outer shell, producing a voltage pulse. Moreover, the state does not change until the next magnetization, which made it possible to use the effect for, say, hotel keys. Unlike modern cards, the ones and zeros are stored not in a chip but directly in a sequence of specially laid wires. Such a key cannot be reprogrammed, and in its design it resembles not so much modern transit and bank cards as magnetic-stripe cards, only more reliable.

    So, are we breaking contactless cards? Not quite. Wiegand's name is attached not only to the effect but also to a rather ancient protocol for exchanging the data. And the protocol is in pretty bad shape. First, it was never really standardized, and there are many different implementations. Second, the card ID originally fit into at most 16 bits, which gives very, very few possible combinations. Third, a peculiarity of those wire-based contactless cards, invented before anyone learned to fit an entire computer onto a credit card, limits the key length to 37 bits; beyond that, read reliability drops.
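    To get a feel for how little entropy such a credential carries, here is a minimal sketch (my own illustration, not code from the research) that decodes the most common 26-bit Wiegand frame: one even-parity bit, an 8-bit facility code, a 16-bit card number, and one odd-parity bit.

```python
def decode_wiegand26(bits: str) -> tuple[int, int]:
    """Decode a 26-character string of '0'/'1' into (facility_code, card_number)."""
    if len(bits) != 26:
        raise ValueError("expected 26 bits")
    b = [int(c) for c in bits]

    # Bit 0: even parity over bits 1..12; bit 25: odd parity over bits 13..24.
    if sum(b[0:13]) % 2 != 0 or sum(b[13:26]) % 2 != 1:
        raise ValueError("parity check failed")

    facility = int(bits[1:9], 2)   # 8 bits: only 256 possible sites
    card_id = int(bits[9:25], 2)   # 16 bits: only 65,536 possible cards per site
    return facility, card_id

# A made-up but valid frame: facility code 13, card number 4242.
print(decode_wiegand26("00000110100010000100100100"))  # (13, 4242)
```

    With a keyspace that small, even brute force is realistic; sniffing the wire, as described below, is simply more convenient.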

    So, at the Black Hat conference last week, researchers Eric Evenchick and Mark Baseggio showed off a device for intercepting the (not in any way encrypted) key sequences during authorization. The most interesting part is that the cards themselves have nothing to do with it: the data is stolen in transit from the card reader to the door controller, where, for historical reasons, the same Wiegand protocol is used.

    They called the device BLEKey: a small board that can be planted right inside the reader housing, say, on a hotel door, and they showed that the whole installation takes a few seconds. After that everything is simple: read the key, wait until the real owner leaves, open the door. Or don't wait. Or don't open. Without going into technical details, the dialogue between the door and the reader/wireless key looks like this:

    - Who is it?
    - It's me.
    - So it is you. Come in.
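
    The sketch below (a conceptual model I put together, not the researchers' code or actual controller firmware) shows why interception alone is enough: the credential is a static bit string, and the controller simply checks it against an allow-list, with no challenge, nonce, or encryption to distinguish a replay from the original card.

```python
# The set of enrolled credentials; the value is made up for illustration.
AUTHORIZED = {"00000110100010000100100100"}

def controller_decides(bits_on_the_wire: str) -> bool:
    """The entire 'authentication': a plain membership test on a static string."""
    return bits_on_the_wire in AUTHORIZED

captured = "00000110100010000100100100"  # what a sniffer on the Wiegand lines records
print(controller_decides(captured))       # True: the door opens for whoever replays the bits
```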



    A satisfied researcher in front of a mock-up of a vulnerable door.

    Everything seems clear, but there is a nuance. As usual, not all access control systems are susceptible to such an attack. And even those that are can be protected without replacing them entirely. According to the researchers, the readers do have countermeasures against this kind of hack; they are just usually, ahem, turned off. Some even support the Open Supervised Device Protocol, which encrypts the transmitted key sequence. These "features" go unused because, and I never tire of repeating this, not thinking about security is cheap and easy.
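
    For contrast, here is a minimal standard-library sketch (an illustration of the general idea of an authenticated reader-to-controller exchange, not of OSDP itself, whose real message format is more involved) showing why a challenge-response scheme makes a sniffed credential useless:

```python
# Illustrative challenge-response on the reader-to-controller link (not actual OSDP):
# the controller issues a fresh random challenge, the reader answers with an HMAC
# over the challenge and the card ID using a key shared at installation time.
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(16)  # provisioned into both reader and controller

def reader_response(card_id: bytes, challenge: bytes) -> bytes:
    return hmac.new(SHARED_KEY, challenge + card_id, hashlib.sha256).digest()

def controller_check(card_id: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge + card_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

card = b"\x0d\x10\x92"               # facility 13, card 4242 packed as bytes
challenge = secrets.token_bytes(16)  # new for every authorization attempt
resp = reader_response(card, challenge)
print(controller_check(card, challenge, resp))                 # True
# Replaying the sniffed response against the next challenge gets you nothing:
print(controller_check(card, secrets.token_bytes(16), resp))   # False
```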



    Here is another interesting study on the subject, from 2009, with technical details. Apparently, the vulnerability of the cards (not the readers) was pointed out as far back as 1992, but back then the card had to be either taken apart or X-rayed, which meant, for example, first taking it away from its owner. Now a coin-sized board does the whole job. Progress!

    Invulnerability at Microsoft: subtleties of how Windows Server Update Services works in companies
    News. The researchers' original whitepaper.

    Windows Server Update Services lets large companies centrally install updates on a fleet of computers, distributing them from an internal server instead of an external one. And it is a reliable and fairly secure system. First, all updates must be signed by Microsoft. Second, communication between the corporate update server and the vendor's server is encrypted with SSL.

    It is also a fairly simple system. The company's server receives the list of updates as an XML file that spells out what to download and how to install it. And this initial exchange, as it turned out, happens in clear text. More precisely, not quite: it is supposed to be encrypted (the key word being "supposed"), and when deploying WSUS the administrator is strongly encouraged to enable encryption. But by default it is off.

    This is not an outright horror story: simply swapping the "instructions" will not work on its own, but if the attacker can already intercept traffic (a man-in-the-middle position has already been obtained), then it becomes feasible. Researchers Paul Stone and Alex Chapman found that spoofing the instructions lets you run arbitrary code with high privileges on the system being updated. Yes, the Microsoft digital signature is still checked, but any Microsoft-signed binary is accepted: for example, you can deliver the PsExec utility from the SysInternals suite this way and use it to launch whatever you like.

    Why is it this way? Because enabling SSL during WSUS deployment cannot be automated: you have to generate a certificate. Moreover, as the researchers note, there is nothing Microsoft can do here beyond strongly recommending that SSL be enabled. So the vulnerability sort of exists and sort of doesn't, nothing can be done about it, and nobody but the admin is to blame.
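
    For the record, whether a given client is exposed is easy to check: if the WSUS server URL configured through Group Policy starts with http:// rather than https://, the update metadata travels in the clear. A minimal sketch (my own, not from the whitepaper; Windows-only, since it reads the registry):

```python
# Check whether this Windows client is pointed at its WSUS server over plain HTTP.
# The WUServer value under the Windows Update policy key is a standard setting;
# the rest of the script is illustrative.
import winreg

WU_POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"

def wsus_url():
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WU_POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "WUServer")
            return value
    except FileNotFoundError:
        return None  # no WSUS policy configured; updates come straight from Microsoft

url = wsus_url()
if url is None:
    print("No WSUS server configured.")
elif url.lower().startswith("https://"):
    print("WSUS metadata is fetched over SSL:", url)
else:
    print("Warning: WSUS metadata is fetched in clear text:", url)
```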


    Picture on request.

    By the way, the Flame cyber-espionage campaign discovered by Kaspersky Lab also abused the Windows update mechanism to spread, although in a different way: using a fake proxy, it intercepted requests to the Microsoft server and returned its own files in response, some of them even signed with vendor certificates.

    Reverse Engineering and Pain
    News. The original post by Oracle's CSO (Google cache, another copy).

    Two Black Hat presentations are cited above, and what unites them is that the authors of those studies, security researchers, found a vulnerability in a technology or product developed by someone else. They then made it public, and in the case of BLEKey they also released all the code and hardware designs into the public domain. This, broadly speaking, is the standard way the security industry interacts with the outside world, but not everyone likes the arrangement. I will deliberately refrain from passing judgment here; I will only say that this is a very delicate topic. May you analyze someone else's code, and under what conditions? How do you disclose vulnerability information without causing harm? Should the holes that are found be paid for? Legal restrictions, criminal codes, unwritten industry rules: all of it comes into play.

    The effect of an elephant in a china shop was produced by a recent post from Oracle Chief Security Officer Mary Ann Davidson. Titled "No, You Really Can't", it is addressed almost entirely to the company's customers (rather than the industry as a whole) who send in information about vulnerabilities found in the vendor's products. The post, published on the Oracle blog on August 10, is quotable paragraph by paragraph, but the main point is this: if the customer could not have obtained the vulnerability information other than through reverse engineering, the customer is violating the license agreement, and that is not good.



    Quote:
    The customer cannot analyze the code to make sure there is a control in place that blocks the potential attack some scanner is warning about... The customer cannot produce a patch for the problem; only the vendor can. The customer is absolutely violating the license agreement by using a static analysis tool.

    The public reaction looked something like this:



    Or like this:


    Or even like this:


    In short, the post lasted no more than a day before it was deleted for "not reflecting [the company's official] views on engagement with customers" (but the Internet remembers everything). Let me remind you that Oracle develops Java, whose vulnerabilities only the lazy have not exploited. Three years ago we counted the Java vulnerabilities disclosed over 12 months and came up with 160 (!) of them. Perhaps in an ideal world only the software's own developer really should find and close holes in it. In the real world, doesn't it sometimes turn out that the people responsible for this operate on the "bees against honey" principle?

    And here is a view from the other side. Last week Black Hat founder Jeff Moss spoke out in favor of holding software developers liable for the holes in their products. As in, it is time to strike from EULAs all those lines about the company owing its customers nothing. An interesting statement, but no less grandiose than "let's ban the disassembler". For now, the only thing that is clear is that if users (corporate and ordinary), vendors, and researchers ever do reach an agreement, it will clearly not be through loud statements and jokes on Twitter.

    What else happened:
    Another Black Hat presentation, this one on hacking the Square Reader, the plastic card reader that plugs into a smartphone so you can pay the courier for your sushi delivery. A soldering iron is required.

    A vendor-installed rootkit has once again been found in Lenovo laptops (not all of them, but some). Here is the previous story.

    Antiquities:
    Family "Small"

    Resident viruses that are written, in the standard way, to the end of COM files as the files are loaded into memory (except for "Small-114, -118, -122", which write themselves to the beginning). Most viruses in the family use the POPA and PUSHA instructions of 80x86 processors. "Small-132, -149" infect some files incorrectly. They belong to various authors. Apparently, the emergence of the Small family can be seen as a contest for the shortest resident virus for MS-DOS. All that remains is to decide on the size of the prize fund.

    A quote from the book "Computer Viruses in MS-DOS" by Eugene Kaspersky, 1992, page 45.

    Disclaimer: This column reflects only the personal opinion of its author. It may coincide with the position of Kaspersky Lab, or it may not. That is a matter of luck.
