Writing a basic Wave robot in Python

    Glory to the Robots
    In the summer I got an invite to the Google sandbox. But that sandbox was crowded, all the waves were public, and my poor netbook chewed through all this activity with a great deal of creaking, so, having played around a little, I gave up on the sandbox :)

    And recently my sandbox account turned into an account on the live preview, so, having sent invites to everyone I could reach, and while waiting for at least one of my friends to receive theirs, I sat down to figure out the robot API.


    The result of those efforts was this basic robot, bakarobo@appspot.com, which can do just a handful of things:

    on the command !br:bor! — fetch a random quote from bash
    on the command !br:rb! — fetch the photo of the day from rosbest
    on the command !br:BakaRobo! — respond :)
    and swear in response to any unfamiliar command.
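    As a sketch of how such command handling can work, the blip text can simply be matched against a table of known commands. This is plain Python of my own, not the robot's actual code, and the command strings and replies are illustrative:

```python
# A toy command dispatcher, illustrating how a robot might map blip
# text to replies. Commands and replies here are made up for the
# example; the real bot's strings may differ.
RESPONSES = {
    '!br:bor!': lambda: 'a random quote from bash would go here',
    '!br:rb!': lambda: 'the photo of the day from rosbest would go here',
    '!br:BakaRobo!': lambda: 'Hai! :)',
}

def dispatch(text):
    """Return a reply for the first known command found in text,
    or a grumble for an unfamiliar '!br:...' command."""
    for command, reply in RESPONSES.items():
        if command in text:
            return reply()
    if '!br:' in text:
        return 'Baka! I do not know that command.'
    return None  # no command in this blip, stay silent

# dispatch('hello !br:BakaRobo!')  -> 'Hai! :)'
# dispatch('!br:wat!')             -> 'Baka! I do not know that command.'
```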


    And in the process of building it, I realized a funny thing: a big, cool API has been developed for wave robots... which is almost undocumented at the moment :) At least the reference for the Python API is just a bare list of classes and functions, from which practically nothing is clear.

    So, having spent some time reading various docs and samples, I think I have identified a basic set of information needed to build some kind of useful robot. I'd like to go over all of these necessary things, even if it's not very well structured :)

    To begin with, robots have to be hosted on Google App Engine. I won't explain how to create an application there and download the tools for uploading code — it is all explained very clearly there.

    So, we downloaded the tools, and in some folder on disk we ended up with something like this:

    .
    ..
    google_appengine
    our_robot

    Here our_robot is the folder where our robot will live. Into this folder we download and unpack the archive from code.google.com — the Python client library, in fact.

    Now we are ready for the actual development.

    Just in case: uploading the code to App Engine is done like this:
    python ./google_appengine/appcfg.py update ./our_robot/ — we are then asked for our email and password, and the files are uploaded.


    In the basic case, there will be three main files in the project:

    our_robot.py — the robot code itself
    app.yaml — something like a manifest
    _wave/capabilities.xml — a file declaring the events the robot wants to listen to.

    Addition from farcaller: the Python API generates the capabilities XML automatically, based on the arguments passed to robot.Robot, while in Java it has to be written by hand.
    So, apparently, a certain amount of busywork in the development process can be skipped.

    You can see the list of events here, but the most important ones for a robot, in my opinion, are:
    WAVELET_SELF_ADDED — fires when the robot is added to the wave; this is a good moment to show a little usage info;
    BLIP_SUBMITTED — fires when a blip in the wave is created or edited — not while it is being typed, but once the Done button has been clicked.
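    The event wiring behind this is a simple register-and-dispatch pattern. Stripped of waveapi, the idea looks roughly like this (the registry here is my own stand-in for robot.Robot, not the library's code; only the event names mirror the real ones):

```python
# A minimal register/dispatch sketch of the robot event model.
WAVELET_SELF_ADDED = 'WAVELET_SELF_ADDED'
BLIP_SUBMITTED = 'BLIP_SUBMITTED'

handlers = {}

def register_handler(event, func):
    """Remember a handler for an event (like Robot.RegisterHandler)."""
    handlers.setdefault(event, []).append(func)

def dispatch_event(event, properties, context):
    """Call every handler registered for this event, collecting results."""
    return [func(properties, context) for func in handlers.get(event, [])]

# Greet users when the robot lands in a wave:
register_handler(WAVELET_SELF_ADDED,
                 lambda props, ctx: 'Hi, I am BakaRobo! Try !br:BakaRobo!')
```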


    Let's go further.
    According to the tutorial on code.google.com, the app.yaml manifest looks something like this:

    application: our_robot
    version: 1
    runtime: python
    api_version: 1
    handlers:
    - url: /_wave/.*
      script: our_robot.py
    - url: /assets
      static_dir: assets
    - url: /icon.png
      static_files: icon.png
      upload: icon.png

    Everything here seems clear: the name of the robot, the version of what we are deploying, the API version, and handlers for different URLs. The only thing worth noting is the "- url: /icon.png" entry in the handlers section. It does not seem to be in the tutorial; this entry lets you specify how the robot's icon is served. We draw an icon, save it into the robot's folder, and reference it from the Python file :)

    capabilities.xml, by the way, also looks straightforward:
    <?xml version="1.0" encoding="utf-8"?>
    <w:robot xmlns:w="http://wave.google.com/extensions/robots/1.0">
      <w:capabilities>
        <w:capability name="WAVELET_SELF_ADDED" content="true" />
        <w:capability name="BLIP_SUBMITTED" content="true" />
      </w:capabilities>
      <w:version>1</w:version>
    </w:robot>



    There is actually not much to change in this file: only the version number and the events we want to listen to.

    With all this preliminary fuss out of the way, the genuinely pleasant part begins: writing the robot's Python code.

    To begin with, I'll describe the general structure of the code as it is given in the examples and the tutorial, and then I'll list all sorts of minor useful bits that aren't in the tutorial — I never did manage to get to the bottom of the reference :) — so I had to dig them out of the examples.

    So, in general, the skeleton code for a robot looks something like this:

    from waveapi import events
    from waveapi import model
    from waveapi import robot

    def OnRobotAdded(properties, context):
        pass

    def OnBlipSubmitted(properties, context):
        pass

    if __name__ == '__main__':
        myRobot = robot.Robot('our_robot',
            image_url='http://our_robot.appspot.com/icon.png',  # contact icon for the robot
            version='2.3',  # version
            profile_url='http://our_robot.appspot.com/')  # contact profile URL
        # Register the event handlers:
        myRobot.RegisterHandler(events.WAVELET_SELF_ADDED, OnRobotAdded)
        myRobot.RegisterHandler(events.BLIP_SUBMITTED, OnBlipSubmitted)
        # Run
        myRobot.Run()



    And everything seems wonderful and clear. But once you start writing the event handlers, you realize it is completely unclear how, for example, to replace one piece of text with another, let alone color or underline something.

    As a result of some not-so-long but rather persistent digging, I unearthed a list of useful methods that was enough for me to write the robot:

    First, to get the blip the event occurred on (if, of course, the event happened to a blip at all), use this inside the event handler:
    blip = context.GetBlipById(properties['blipId'])

    Secondly, to get the blip text and work with it, do:
    doc = blip.GetDocument()
    contents = doc.GetText()


    Accordingly, to replace a piece of text with another, use the resulting doc:
    doc.SetTextInRange(model.document.Range(START, END), NEW_TEXT)

    To insert a piece of text anywhere:
    doc.InsertText(START, TEXT)

    To append a piece of text to the end:
    doc.AppendText(TEXT)

    To insert a picture:
    At the end — doc.AppendElement(model.document.Image(IMAGE_URL, WIDTH, HEIGHT, ATTACHMENT_ID, ALT))
    At a specific place — doc.InsertElement(START, model.document.Image(IMAGE_URL, WIDTH, HEIGHT, ATTACHMENT_ID, ALT))
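    To make the Range arithmetic concrete, here is what these text operations effectively do to the blip text, modelled on a plain Python string. This is my own sketch of the semantics (assuming character offsets with an exclusive END, like slicing), not waveapi code:

```python
# Plain-string models of the document text operations above.

def set_text_in_range(text, start, end, new_text):
    """What doc.SetTextInRange(Range(start, end), new_text) does."""
    return text[:start] + new_text + text[end:]

def insert_text(text, start, new_text):
    """What doc.InsertText(start, new_text) does."""
    return text[:start] + new_text + text[start:]

def append_text(text, new_text):
    """What doc.AppendText(new_text) does."""
    return text + new_text

# set_text_in_range('hello world', 0, 5, 'goodbye')  -> 'goodbye world'
# insert_text('ab', 1, 'X')                          -> 'aXb'
```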

    In general, it is worth looking through this reference to see what can be done with a document. To find out what kinds of elements can be created, look at the reference for waveapi.document.* — there are Image, Link and even Gadget.

    Moving on. All the styling and various other goodies of a blip are stored in so-called annotations. They are simple to use:

    doc.SetAnnotation(model.document.Range(START, END), TYPE, VALUE)

    Here TYPE describes what kind of annotation we are adding. The most important one, IMHO, is 'style/STYLE_PROP', where STYLE_PROP is a CSS attribute written in js form.
    In case anyone doesn't know, that is the transformed CSS property notation used in js scripts; its essence is easiest to show by example :) color stays color, but font-size becomes fontSize. That is, wherever CSS has a hyphen, this notation has none, and every word except the first starts with a capital letter: backgroundColor, backgroundImage, marginTop, and so on.
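    The conversion rule is mechanical, so here is a small helper that turns a CSS property name into its js form. This is my own utility for illustration, not part of waveapi:

```python
def css_to_js(prop):
    """Convert a hyphenated CSS property name to camelCase:
    drop the hyphens and capitalize every word but the first."""
    words = prop.split('-')
    return words[0] + ''.join(w.capitalize() for w in words[1:])

# css_to_js('color')            -> 'color'
# css_to_js('font-size')        -> 'fontSize'
# css_to_js('background-color') -> 'backgroundColor'
```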

    Annotations are removed just as easily. You can simply kill all annotations of a given type — say, all font-color or background-color ones — with this handy function:

    doc.DeleteAnnotationsByName(TYPE)

    And you can clear only a certain text range of annotations of a given type:

    doc.DeleteAnnotationsInRange(model.document.Range(START, END), TYPE)

    Annotations are also useful because they can store arbitrary information related to the blip.

    To annotate the entire blip, use:
    doc.AnnotateDocument(TYPE, VALUE)

    To find out whether the blip has an annotation of a given type, call:
    doc.HasAnnotation(TYPE)
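    The annotation calls above behave roughly like operations on a list of (start, end, type, value) records. This plain-Python model is my own sketch of the semantics, not waveapi's internals, and it simplifies DeleteAnnotationsInRange to dropping annotations that fall entirely inside the range:

```python
# Toy model of a blip's annotations as (start, end, type, value) tuples.
class Annotations:
    def __init__(self):
        self.items = []  # list of (start, end, type, value)

    def set(self, start, end, ann_type, value):
        """Like doc.SetAnnotation over a Range."""
        self.items.append((start, end, ann_type, value))

    def delete_by_name(self, ann_type):
        """Like doc.DeleteAnnotationsByName: drop every annotation of a type."""
        self.items = [a for a in self.items if a[2] != ann_type]

    def delete_in_range(self, start, end, ann_type):
        """Like doc.DeleteAnnotationsInRange, simplified: drop annotations
        of the type that lie entirely inside [start, end)."""
        self.items = [a for a in self.items
                      if not (a[2] == ann_type and a[0] >= start and a[1] <= end)]

    def has(self, ann_type):
        """Like doc.HasAnnotation."""
        return any(a[2] == ann_type for a in self.items)
```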

    That, it seems to me, is already enough to create robots that do something useful. Of course, things like using the GAE datastore and other niceties remained outside the scope of this text, but hopefully it will not be completely useless.

    PS: By the way, I noticed a gotcha that really hindered development at first: in the Logs tab on appspot (the only debugging tool available to us), the minimum level of displayed messages defaults to Error. The catch is that if a log record starts with, say, an Info-level line and only then contains the line with the error, that record will not be shown at all. So: switch the level down to Debug and enjoy being able to see every error that happened.
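    The Error/Debug switch itself lives in the appspot log viewer UI; on the code side this is just the standard logging module, where each message carries a level that the viewer then filters on. A quick illustration (plain stdlib, nothing GAE-specific):

```python
import logging

# Each message carries a level; the appspot log viewer filters on it.
# As noted above, a record whose first line is Info-level can end up
# hidden entirely when the viewer's filter is set to Error.
logger = logging.getLogger('our_robot')
logger.setLevel(logging.DEBUG)  # let everything through while debugging

logger.debug('entered OnBlipSubmitted')  # shown only at Debug level
logger.info('processing command')        # shown at Info level and below
logger.error('something went wrong')     # shown even at Error level
```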

    PPS: Thanks, moved to the appropriate blog.
