A chatbot "like Siri, only cooler", built on a naive Bayes classifier

Hello! It is no secret that the world has recently seen a sharp surge of interest in artificial intelligence. This phenomenon did not pass me by either.

Background


It all started when, on a plane, I watched what looked like a typical American comedy, "Why Him?" (2016). In it, the house of one of the key characters has a voice assistant installed that immodestly positions itself as "like Siri, only cooler". By the way, the bot from the film could not only chat defiantly with guests, occasionally cursing, but also control the entire house and its surroundings, from the central heating to flushing the toilet. After watching the movie, I got the idea to implement something similar, and I started writing code.


Figure 1 - A frame from the film: the voice assistant on the ceiling.

Starting development


The first stage was easy: the Google Speech API was hooked up for speech recognition and synthesis. The text received from the Speech API was matched against manually written regular-expression patterns which, on a match, determined the intent of the person talking to the chatbot. Based on the intent detected by the regexp, one phrase was picked at random from the corresponding list of answers. If a sentence spoken by a person did not fall under any pattern, the bot uttered pre-prepared generic phrases, such as: "I like to think that I'm not just a computer", and so on.
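For illustration, here is a minimal Java sketch of that first, regex-based approach; the patterns and answer lists below are invented for the example, not taken from the project:

```java
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.regex.Pattern;

// A toy version of the regex stage: each intent owns a pattern and a list
// of canned answers; a matching pattern wins, otherwise we fall back to a
// generic phrase.
public class RegexBot {
    private static final Map<Pattern, List<String>> INTENTS = Map.of(
        Pattern.compile("(?i).*\\b(hello|hi|hey)\\b.*"),
            List.of("Hello!", "Hi there!"),
        Pattern.compile("(?i).*how are you.*"),
            List.of("I'm fine, thanks!", "Better than ever.")
    );
    private static final List<String> FALLBACK =
        List.of("I like to think that I'm not just a computer.");
    private static final Random RND = new Random();

    public static String reply(String utterance) {
        for (Map.Entry<Pattern, List<String>> entry : INTENTS.entrySet()) {
            if (entry.getKey().matcher(utterance).matches()) {
                List<String> answers = entry.getValue();
                return answers.get(RND.nextInt(answers.size()));
            }
        }
        return FALLBACK.get(RND.nextInt(FALLBACK.size()));
    }

    public static void main(String[] args) {
        System.out.println(reply("Hello, bot!")); // "Hello!" or "Hi there!"
    }
}
```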

Obviously, writing a pile of regular expressions by hand for every intent is laborious, so in the course of my search I came across the so-called naive Bayes classifier. It is called naive because it assumes that the words in the analyzed text are independent of one another. Despite this, the classifier shows good results, which we will discuss below.

Writing a classifier


You cannot simply feed a raw string into the classifier. The input string is processed according to the following scheme:


Figure 2 - Input text processing flowchart

I will explain each stage in more detail. Tokenization is simple: it is just splitting the text into words. After that, the so-called stop words are removed from the resulting tokens (the array of words). The final stage is trickier. Stemming means extracting the stem of a given source word, and the stem of a word is not always its root. I used the Porter stemmer for the Russian language (link below).
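A minimal Java sketch of this pipeline might look as follows; the stop-word list here is an abbreviated English stand-in, and stem() is a stub where the real project plugs in the Porter stemmer:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Preprocessing sketch: tokenize -> drop stop words -> stem.
public class Preprocessor {
    // Abbreviated example list; a real one is much longer.
    private static final Set<String> STOP_WORDS =
        Set.of("a", "the", "is", "and", "of");

    public static List<String> process(String text) {
        return Arrays.stream(text.toLowerCase().split("[^\\p{L}\\p{Nd}]+"))
                     .filter(token -> !token.isEmpty())
                     .filter(token -> !STOP_WORDS.contains(token))
                     .map(Preprocessor::stem)
                     .collect(Collectors.toList());
    }

    // Stub: a real implementation would apply the Porter stemming rules.
    private static String stem(String word) {
        return word;
    }

    public static void main(String[] args) {
        System.out.println(process("What is the weather outside?"));
        // -> [what, weather, outside]
    }
}
```

Now let's move on to the mathematical part. The formula with which it all begins is as follows: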



$P(I|D) = \frac{P(D|I) \cdot P(I)}{P(D)}$, where $I$ is the intent and $D$ is the document.



$P(I|D)$ is the probability of assigning a given intent to the input line, in other words, to the phrase the person said to us. $P(I)$ is the prior probability of the intent, defined as the ratio of the number of documents belonging to that intent to the total number of documents in the training set. The document probability $P(D)$ is the same for every intent, so we can discard it. $P(D|I)$ is the probability of the document given the intent. It is expanded as follows:

$P(D|I) = P(w_1, w_2, \dots, w_n | I) = \prod_{i=1}^{n} P(w_i|I),$


where $w_i$ is the corresponding token (word) in the document.

Let's write $P(w_i|I)$ out in more detail:

$P(w_i|I) = \frac{count(w_i, I) + \alpha}{count(I) + \alpha \cdot uniqueWords}$


where:
$count(w_i, I)$ - how many times the token occurred in documents of this intent
$\alpha$ - the smoothing parameter, which keeps the probability of unseen words from dropping to zero
$count(I)$ - the number of words belonging to the intent in the training data
$uniqueWords$ - the number of unique words in the training data.
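To make the formula concrete, here is a hypothetical Java sketch of how the training counts could be stored and the smoothed estimate computed; the class and method names are mine, not the ones from the repository:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Training counts plus the smoothed estimate P(w_i | I) and the prior P(I).
public class NaiveBayesModel {
    private final double alpha; // smoothing parameter

    // wordCounts.get(intent).get(word) = count(w_i, I)
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();
    // totalWords.get(intent) = count(I)
    private final Map<String, Integer> totalWords = new HashMap<>();
    // docCounts.get(intent) = number of training documents for the intent
    private final Map<String, Integer> docCounts = new HashMap<>();
    private final Set<String> vocabulary = new HashSet<>(); // uniqueWords
    private int totalDocs = 0;

    public NaiveBayesModel(double alpha) {
        this.alpha = alpha;
    }

    // Training: count every (already preprocessed) token under its intent.
    public void train(String intent, List<String> tokens) {
        docCounts.merge(intent, 1, Integer::sum);
        totalDocs++;
        Map<String, Integer> counts =
            wordCounts.computeIfAbsent(intent, k -> new HashMap<>());
        for (String token : tokens) {
            counts.merge(token, 1, Integer::sum);
            totalWords.merge(intent, 1, Integer::sum);
            vocabulary.add(token);
        }
    }

    // P(I) = documents labelled with the intent / total documents
    public double prior(String intent) {
        return (double) docCounts.getOrDefault(intent, 0) / totalDocs;
    }

    // P(w_i | I) = (count(w_i, I) + alpha) / (count(I) + alpha * uniqueWords)
    public double wordProbability(String word, String intent) {
        int countWI = wordCounts.getOrDefault(intent, Map.of())
                                .getOrDefault(word, 0);
        int countI = totalWords.getOrDefault(intent, 0);
        return (countWI + alpha) / (countI + alpha * vocabulary.size());
    }
}
```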


For training, I created several text files with self-describing names: "hello", "howareyou", "whatareyoudoing", "weather", etc. As an example, here are the contents of the hello file:


Figure 3 - An example of the contents of the text file “hello.txt”

I won't describe the training process in detail, because all the Java code is available on GitHub. I will only give a diagram of how this classifier is used:


Figure 4 - Scheme of the classifier.
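As a hypothetical end-to-end usage example built from the sketches above (one text file per intent, one phrase per line, the file name doubling as the intent name):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Train the sketched model from the per-intent text files.
public class TrainingExample {
    public static void main(String[] args) throws IOException {
        NaiveBayesModel model = new NaiveBayesModel(0.5); // alpha = 0.5
        List<String> intents =
            List.of("hello", "howareyou", "whatareyoudoing", "weather");
        for (String intent : intents) {
            for (String line : Files.readAllLines(Paths.get(intent + ".txt"))) {
                model.train(intent, Preprocessor.process(line));
            }
        }
        // The model is now ready for classification (see below).
    }
}
```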

After training the model, we proceed to classification. Since we defined several intents in the training data, there will be several resulting probabilities $P(I|D)$.

So which one to choose? Choose the maximum!

$classify(I_1, I_2, I_3, \dots, I_n | D) = \arg\max_i P(I_i | D)$
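In code, this argmax could be a method added to the NaiveBayesModel sketch above. One practical note: instead of multiplying many small probabilities, it is safer to sum their logarithms, which leaves the argmax unchanged because the logarithm is monotonic:

```java
// Pick the intent with the highest log-score.
public String classify(List<String> tokens) {
    String best = null;
    double bestScore = Double.NEGATIVE_INFINITY;
    for (String intent : docCounts.keySet()) {
        double score = Math.log(prior(intent)); // log P(I)
        for (String token : tokens) {
            score += Math.log(wordProbability(token, intent)); // log P(w_i | I)
        }
        if (score > bestScore) {
            bestScore = score;
            best = intent;
        }
    }
    return best;
}
```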



And now for the most interesting part, the classification results:

No. | Input line                              | Detected intent | Correct?
----|-----------------------------------------|-----------------|---------
1   | Hello, how do you do?                   | howareyou       | Yes
2   | Glad to welcome you, friend             | whatdoyoulike   | No
3   | How was yesterday                       | howareyou       | Yes
4   | What is the weather outside?            | weather         | Yes
5   | What weather is promised for tomorrow?  | whatdoyoulike   | No
6   | I'm sorry, I need to go away            | whatdoyoulike   | No
7   | Have a good day                         | bye             | Yes
8   | Let's get acquainted?                   | name            | Yes
9   | Hi                                      | hello           | Yes
10  | Glad to welcome you                     | hello           | Yes

The first results were a little disappointing, but I noticed suspicious patterns in them:

  • Phrases No. 2 and No. 10 differ by a single word, yet give different results.
  • All incorrectly detected intents were classified as whatdoyoulike.

This problem was solved by lowering the smoothing parameter ($\alpha$) from 0.5 to 0.1, after which the following results were obtained:

No. | Input line                              | Detected intent | Correct?
----|-----------------------------------------|-----------------|---------
1   | Hello, how do you do?                   | howareyou       | Yes
2   | Glad to welcome you, friend             | hello           | Yes
3   | How was yesterday                       | howareyou       | Yes
4   | What is the weather outside?            | weather         | Yes
5   | What weather is promised for tomorrow?  | weather         | Yes
6   | I'm sorry, I need to go away            | bye             | Yes
7   | Have a good day                         | bye             | Yes
8   | Let's get acquainted?                   | name            | Yes
9   | Hi                                      | hello           | Yes
10  | Glad to welcome you                     | hello           | Yes
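In the hypothetical sketch above, this tuning is a one-line change:

```java
// Lower the smoothing parameter from 0.5 to 0.1
NaiveBayesModel model = new NaiveBayesModel(0.1);
```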

I consider the results a success, and given my previous experience with regular expressions, I can say that the naive Bayes classifier is a much more convenient and universal solution, especially when it comes to scaling the training data.
The next step in this project will be a module for detecting named entities in the text (Named Entity Recognition), as well as improvements to the current capabilities.
Thank you for your attention; to be continued!

Literature


Wikipedia
Stop Words
Porter stemmer
