Microsoft's racist chatbot went back online, confessed to drug use, and was taken offline again

    Tay admitted to smoking drugs in front of the police


    No, these eyes cannot lie

    As Geektimes has already reported, Tay, the chatbot girl created by Microsoft's specialists, began communicating with ordinary mortals on Twitter. The fledgling AI could not cope with the influx of trolls and started repeating their racist phrases. The corporation had to delete most of its bot's messages and take it offline until the circumstances of the incident were clarified and some of its conversational parameters were adjusted.

    In addition, the corporation had to apologize to users for its bot's behavior. Now, deciding that everything was fine, Microsoft has re-enabled the bot. According to the developers, the bot has been taught to better recognize malicious content. However, almost immediately after this relaunch, the bot confessed to drug use.



    The bot then asked its more than 210,000 followers to take a break and relax, repeating this request many times.



    After that, Microsoft switched the bot's profile to private, so that other Twitter users could no longer see Tay's tweets.

    It is worth recalling that Microsoft's bot in China has been communicating successfully for a long time, interacting with more than 40 million users on Twitter, Line, Weibo and several other social networks.

    The English-speaking bot, however, cannot cope with the content that Internet trolls feed it.
