EventMachine: collecting information from multiple sources with subsequent processing

    The easiest way to step on a rake is to use asynchrony. I know programmers who had established themselves as strong professionals and yet were literally broken by multithreading. To start with, here is my favorite deadlock story (I apologize if you have heard it before, but it is too good). About ten years ago the Associated Press told the world how a pilot tried to land a passenger plane at the airport of the Swedish city of Kristianstad, but no dispatcher answered his request. It turned out that the dispatcher had not yet returned from vacation. As a result, the plane circled over the airport until a standby dispatcher was urgently called in, who landed the plane half an hour later. The debriefing showed that the cause was the delay of the very aircraft on board which that dispatcher was hurrying back from vacation.

    So, when we face asynchrony, we have to break the habitual picture in our heads: our subjective world is single-threaded. If we send a letter and receive an answer a week later, for us everything happens within a single thread; we are not responsible for the actions of the respondent or the postman. Our code, however, is.
    To simplify the programmer's life, one can use the Reactor pattern. The best (in my opinion) implementation for Ruby is EventMachine. But it has its non-obvious moments, and I plan to briefly describe one of them.


    gem install eventmachine

    The EventMachine class is more or less documented, and dealing with simple queries is not difficult. Usually everything happens something like this (EM is an alias for EventMachine):
      EM.run do
        … # this is where everything happens, e.g. EM.connect(…)
        # print some stuff endlessly
        EM.add_periodic_timer(1) { puts "on tick" }
        EM.stop # here just for the sake of example; the destructor will stop everything itself
      end

    You can hang a hook on reactor shutdown (EventMachine.add_shutdown_hook { puts "Exiting ..." }). And, of course, you can create asynchronous connections on the fly. The documentation, I repeat, exists; in places it is even intelligible.
    But enough tedium.

    Collection of results

    As long as everything is limited to the "request → process the response" model, there are no problems. But what if we need to send the next request depending on the result of the previous one? In order not to make this note too long and not to chew over trivial points, I will proceed straight to the task:

    find the component we need on the Jabber server through discovery and start communicating with it

    In words it goes roughly like this: we send a request to the discovery service, get a list of components, and query each component for its capabilities. If ours is in the feature list, we initialize it.
    Here is what it looks like using EventMachine (I removed everything that does not work with EM directly):
    @stream.write_with_handler(disco) do |result|
      # iterate thru disco results and wait until all collected
      EM::Iterator.new(result.items).map(proc{ |c, it_disco|
        @stream.write_with_handler(info) do |reply|
          # iterate thru disco#info results and wait until all collected,
          # then advance the parent's iterator
          EM::Iterator.new(reply.features).map(proc{ |f, it_info|
            it_info.return …
          }, proc{ |comps|
            # one more disco item was utilized
            it_disco.return …
          })
        end
      }, proc{ |compss|
        # yielding
        # compss.init or smth like that
      })
    end
    Iterators and their magic map function will do all the work for us. The lambda under the last braces (the "yielding" comment) will be executed only when all disco#info results for every discovered component have been collected.

    I apologize if this seems obvious to someone, but Google did not offer me a quick solution, and fussing with Fiber here turns into pure hell.
