AsyncIO for a practicing Python developer

Original author: Yeray Diaz
I remember the moment when I thought, "How slow everything is — what if I parallelize these calls?" Three days later, looking at the code, I couldn't understand a thing in the terrible tangle of threads, locks, and callback functions.

Then I met asyncio, and everything changed.

If anyone does not know, asyncio is a new module for concurrent programming that appeared in Python 3.4. It is intended to simplify the use of coroutines and futures in asynchronous code, so that the code reads like synchronous code, without callbacks.

I remember that at the time there were several similar tools, and one of them stood out: the gevent library. I advise everyone to read the excellent gevent tutorial for the practicing Python developer, which describes not only how to work with it, but also what concurrency means in the general sense. I liked that article so much that I decided to use it as a template for writing this introduction to asyncio.

A small disclaimer: this is not a gevent vs. asyncio article. Nathan Road has already done that for me in his note. You can find all the examples on GitHub.

I know you can't wait to write code, but to start with I would like to go over a few concepts that will come in handy later.

Threads, event loops, coroutines, and futures


Threads are the most common tool. I think you have heard of them before, but asyncio operates with a few other concepts: event loops, coroutines, and futures.

  • the event loop for the most part only manages the execution of various tasks: it registers their arrival and starts them at the right moment
  • coroutines are special functions, similar to Python generators, that on await give control back to the event loop; they must be launched through the event loop
  • futures are objects that store the current result of a task: this may be an indication that the task has not yet been processed, the result itself, or perhaps an exception
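
Here is a minimal sketch tying these three concepts together (the trivial coroutine is mine, for illustration; Python 3.5+ syntax assumed):

import asyncio

async def add(a, b):
    # a coroutine: each await hands control back to the event loop
    await asyncio.sleep(0.1)
    return a + b

loop = asyncio.get_event_loop()            # the event loop schedules the tasks
future = asyncio.ensure_future(add(2, 3))  # a future that will hold the result
loop.run_until_complete(future)
print(future.result())                     # prints 5: the future is now done
loop.close()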

Pretty simple? Go!

Synchronous and asynchronous execution


In the talk "Concurrency is not parallelism, it's better", Rob Pike draws your attention to a key thing: breaking tasks down into concurrent subtasks only enables parallelism; it is the scheduling of these subtasks that actually creates it.

asyncio does exactly that: you can break your code up into procedures defined as coroutines, which makes it possible to schedule them as you wish, including running them simultaneously. Coroutines contain yield points, with which we mark the places where control can be switched to other tasks waiting to run.

In asyncio it is this yielding that performs the context switch: it transfers control back to the event loop, which in turn passes it to another coroutine. Consider a basic example:

import asyncio

async def foo():
    print('Running in foo')
    await asyncio.sleep(0)
    print('Explicit context switch to foo again')

async def bar():
    print('Explicit context to bar')
    await asyncio.sleep(0)
    print('Implicit context switch back to bar')

ioloop = asyncio.get_event_loop()
tasks = [ioloop.create_task(foo()), ioloop.create_task(bar())]
wait_tasks = asyncio.wait(tasks)
ioloop.run_until_complete(wait_tasks)
ioloop.close()

$ python3 1-sync-async-execution-asyncio-await.py
Running in foo
Explicit context to bar
Explicit context switch to foo again
Implicit context switch back to bar

  • First, we declared a couple of simple coroutines that pretend to be non-blocking by using asyncio's sleep
  • Coroutines can only be launched from another coroutine, or wrapped in a task using create_task
  • Once we have the 2 tasks, we combine them using wait
  • And finally, we send them off to run in the event loop through run_until_complete

By using await in a coroutine, we declare that the coroutine may give control back to the event loop, which in turn will start the next task: bar. The same thing happens in bar: on await asyncio.sleep control is transferred back to the event loop, which returns to foo at the right time.

Now imagine 2 blocking tasks, gr1 and gr2, as if they were calling out to third-party services; while they wait for a response, a third function can work asynchronously.

import time
import asyncio

start = time.time()

def tic():
    return 'at %1.1f seconds' % (time.time() - start)

async def gr1():
    # Waits for two seconds, but we don't want to stick around...
    print('gr1 started work: {}'.format(tic()))
    await asyncio.sleep(2)
    print('gr1 ended work: {}'.format(tic()))

async def gr2():
    # Waits for two seconds, but we don't want to stick around...
    print('gr2 started work: {}'.format(tic()))
    await asyncio.sleep(2)
    print('gr2 Ended work: {}'.format(tic()))

async def gr3():
    print("Let's do some stuff while the coroutines are blocked, {}".format(tic()))
    await asyncio.sleep(1)
    print("Done!")

ioloop = asyncio.get_event_loop()
tasks = [
    ioloop.create_task(gr1()),
    ioloop.create_task(gr2()),
    ioloop.create_task(gr3())
]
ioloop.run_until_complete(asyncio.wait(tasks))
ioloop.close()

$ python3 1b-cooperatively-scheduled-asyncio-await.py
gr1 started work: at 0.0 seconds
gr2 started work: at 0.0 seconds
Let's do some stuff while the coroutines are blocked, at 0.0 seconds
Done!
gr1 ended work: at 2.0 seconds
gr2 Ended work: at 2.0 seconds

Pay attention to how I/O and scheduling work here, allowing all of this to fit into a single thread: while two tasks are blocked waiting on I/O, the third function can take all the processor time for itself.

Execution order


In the synchronous world we think sequentially: if we have a list of tasks that take different amounts of time to complete, they will finish in the same order in which they were processed. With concurrency, however, one cannot be sure of that.

import random
from time import sleep
import asyncio

def task(pid):
    """Synchronous non-deterministic task."""
    sleep(random.randint(0, 2) * 0.001)
    print('Task %s done' % pid)

async def task_coro(pid):
    """Coroutine non-deterministic task."""
    await asyncio.sleep(random.randint(0, 2) * 0.001)
    print('Task %s done' % pid)

def synchronous():
    for i in range(1, 10):
        task(i)

async def asynchronous():
    tasks = [asyncio.ensure_future(task_coro(i)) for i in range(1, 10)]
    await asyncio.wait(tasks)

print('Synchronous:')
synchronous()

ioloop = asyncio.get_event_loop()
print('Asynchronous:')
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 1c-determinism-sync-async-asyncio-await.py
Synchronous:
Task 1 done
Task 2 done
Task 3 done
Task 4 done
Task 5 done
Task 6 done
Task 7 done
Task 8 done
Task 9 done
Asynchronous:
Task 2 done
Task 5 done
Task 6 done
Task 8 done
Task 9 done
Task 1 done
Task 4 done
Task 3 done
Task 7 done

Of course, your output will differ, since each task sleeps for a random amount of time; but notice that the order of completion is completely different, even though we always queue the tasks in the same order.

Also pay attention to the coroutine version of our rather simple task. It is important to understand that there is no magic in asyncio for implementing non-blocking tasks. At the time of its implementation asyncio stood alone in the standard library; all other modules provided only blocking functionality. You can use the concurrent.futures module to wrap blocking tasks in threads or processes and obtain futures that asyncio can then use. Several such examples are available on GitHub.
This is probably the main drawback of using asyncio right now, but there are already several libraries that help solve this problem.
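
As a rough sketch of that approach, here is a blocking standard-library call wrapped in a thread pool via run_in_executor (the helper names here are mine, for illustration):

import asyncio
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = 'https://api.github.com/events'

def blocking_fetch():
    # a plain blocking call from the standard library
    return urllib.request.urlopen(URL).getheader('Date')

async def non_blocking_fetch(loop, executor):
    # run_in_executor runs the blocking function in the pool and
    # returns a future that can be awaited like any coroutine
    date = await loop.run_in_executor(executor, blocking_fetch)
    print('Date header:', date)

ioloop = asyncio.get_event_loop()
executor = ThreadPoolExecutor(max_workers=2)
ioloop.run_until_complete(non_blocking_fetch(ioloop, executor))
ioloop.close()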

The most popular blocking task is fetching data with an HTTP request. Let's look at working with the excellent aiohttp library, using the retrieval of information about public events on GitHub as an example.

import time
import urllib.request
import asyncio
import aiohttp

URL = 'https://api.github.com/events'
MAX_CLIENTS = 3

def fetch_sync(pid):
    print('Fetch sync process {} started'.format(pid))
    start = time.time()
    response = urllib.request.urlopen(URL)
    datetime = response.getheader('Date')
    print('Process {}: {}, took: {:.2f} seconds'.format(
        pid, datetime, time.time() - start))
    return datetime

async def fetch_async(pid):
    print('Fetch async process {} started'.format(pid))
    start = time.time()
    response = await aiohttp.request('GET', URL)
    datetime = response.headers.get('Date')
    print('Process {}: {}, took: {:.2f} seconds'.format(
        pid, datetime, time.time() - start))
    response.close()
    return datetime

def synchronous():
    start = time.time()
    for i in range(1, MAX_CLIENTS + 1):
        fetch_sync(i)
    print("Process took: {:.2f} seconds".format(time.time() - start))

async def asynchronous():
    start = time.time()
    tasks = [asyncio.ensure_future(
        fetch_async(i)) for i in range(1, MAX_CLIENTS + 1)]
    await asyncio.wait(tasks)
    print("Process took: {:.2f} seconds".format(time.time() - start))

print('Synchronous:')
synchronous()

print('Asynchronous:')
ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 1d-async-fetch-from-server-asyncio-await.py
Synchronous:
Fetch sync process 1 started
Process 1: Wed, 17 Feb 2016 13:10:11 GMT, took: 0.54 seconds
Fetch sync process 2 started
Process 2: Wed, 17 Feb 2016 13:10:11 GMT, took: 0.50 seconds
Fetch sync process 3 started
Process 3: Wed, 17 Feb 2016 13:10:12 GMT, took: 0.48 seconds
Process took: 1.54 seconds
Asynchronous:
Fetch async process 1 started
Fetch async process 2 started
Fetch async process 3 started
Process 3: Wed, 17 Feb 2016 13:10:12 GMT, took: 0.50 seconds
Process 2: Wed, 17 Feb 2016 13:10:12 GMT, took: 0.52 seconds
Process 1: Wed, 17 Feb 2016 13:10:12 GMT, took: 0.54 seconds
Process took: 0.54 seconds

Here it is worth paying attention to a couple of points.

First, the time difference: using asynchronous calls we launch the requests at the same time. As mentioned earlier, each of them handed control over to the next and returned its result on completion. That means the overall speed depends directly on the running time of the slowest request, which took just 0.54 seconds. Cool, right?

Second, how similar the code is to the synchronous version. It is essentially the same thing! The main differences come down to the library used to perform the requests, and to creating the tasks and waiting for them to finish.

Creating concurrency


So far we have used a single way of creating coroutines and getting results out of them: create a set of tasks and wait for all of them to finish. But coroutines can be scheduled to run and return results in several other ways. Imagine a situation where we need to process the results of GET requests as they come in; the implementation is actually very similar to the previous one:

import time
import random
import asyncio
import aiohttp

URL = 'https://api.github.com/events'
MAX_CLIENTS = 3

async def fetch_async(pid):
    start = time.time()
    sleepy_time = random.randint(2, 5)
    print('Fetch async process {} started, sleeping for {} seconds'.format(
        pid, sleepy_time))
    await asyncio.sleep(sleepy_time)
    response = await aiohttp.request('GET', URL)
    datetime = response.headers.get('Date')
    response.close()
    return 'Process {}: {}, took: {:.2f} seconds'.format(
        pid, datetime, time.time() - start)

async def asynchronous():
    start = time.time()
    futures = [fetch_async(i) for i in range(1, MAX_CLIENTS + 1)]
    for i, future in enumerate(asyncio.as_completed(futures)):
        result = await future
        print('{} {}'.format(">>" * (i + 1), result))
    print("Process took: {:.2f} seconds".format(time.time() - start))

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 2a-async-fetch-from-server-as-completed-asyncio-await.py
Fetch async process 1 started, sleeping for 4 seconds
Fetch async process 3 started, sleeping for 5 seconds
Fetch async process 2 started, sleeping for 3 seconds
>> Process 2: Wed, 17 Feb 2016 13:55:19 GMT, took: 3.53 seconds
>>>> Process 1: Wed, 17 Feb 2016 13:55:20 GMT, took: 4.49 seconds
>>>>>> Process 3: Wed, 17 Feb 2016 13:55:21 GMT, took: 5.48 seconds
Process took: 5.48 seconds

Look at the indentation and the timings: we launched all the tasks at the same time, but they were processed in order of completion. The code in this case is slightly different: we pack the coroutines, each of which is already prepared for execution, into a list. The as_completed function returns an iterator that yields the coroutines' results as they complete. Cool, right?! By the way, both as_completed and wait are also available in the concurrent.futures package.
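
To illustrate that last remark, here is a minimal thread-based sketch of the same pattern using concurrent.futures.as_completed, with no asyncio involved (the three identical requests are purely illustrative):

from concurrent.futures import ThreadPoolExecutor, as_completed
import urllib.request

URL = 'https://api.github.com/events'

with ThreadPoolExecutor(max_workers=3) as pool:
    # submit returns concurrent.futures.Future objects
    futures = [pool.submit(urllib.request.urlopen, URL) for _ in range(3)]
    for future in as_completed(futures):
        # results arrive in completion order, just as with asyncio.as_completed
        print(future.result().getheader('Date'))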

Another example: suppose you want to find out your IP address. There are plenty of services for that, but you don't know in advance which of them will be available when the program runs. Instead of polling each one from a list in turn, you can launch all the requests concurrently and pick the first successful one.

Well, for this our favourite function wait has a special parameter, return_when. Until now we ignored what wait returns, since we were only parallelizing tasks. But now we want to get a result out of a coroutine, so we will use the two sets of futures it returns: done and pending.

from collections import namedtuple
import time
import asyncio
from concurrent.futures import FIRST_COMPLETED
import aiohttp

Service = namedtuple('Service', ('name', 'url', 'ip_attr'))

SERVICES = (
    Service('ipify', 'https://api.ipify.org?format=json', 'ip'),
    Service('ip-api', 'http://ip-api.com/json', 'query')
)

async def fetch_ip(service):
    start = time.time()
    print('Fetching IP from {}'.format(service.name))
    response = await aiohttp.request('GET', service.url)
    json_response = await response.json()
    ip = json_response[service.ip_attr]
    response.close()
    return '{} finished with result: {}, took: {:.2f} seconds'.format(
        service.name, ip, time.time() - start)

async def asynchronous():
    futures = [fetch_ip(service) for service in SERVICES]
    done, pending = await asyncio.wait(
        futures, return_when=FIRST_COMPLETED)
    print(done.pop().result())

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 2c-fetch-first-ip-address-response-await.py
Fetching IP from ip-api
Fetching IP from ipify
ip-api finished with result: 82.34.76.170, took: 0.09 seconds
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x...>
Task was destroyed but it is pending!
task: <Task pending coro=<fetch_ip() running at ...> wait_for=<Future pending ...>>

What happened? The first service answered successfully, but there is a warning in the logs!

In fact, we started executing both tasks but left the loop after the first result, while the second coroutine was still running. asyncio took this for a bug and warned us. We should probably clean up after ourselves and explicitly kill tasks we no longer need. How? Glad you asked.

Future states


A future can be in one of four states:

  • pending
  • running
  • done
  • cancelled

It is all very simple: when a future is in the done state, you can get the execution result from it. In the pending and running states such an operation raises an InvalidStateError exception; in the cancelled case it raises CancelledError; and finally, if the exception occurred inside the coroutine itself, it will be re-raised (just as it would be if you called exception()). But don't take my word for it.
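
Here is a quick sketch you can run to see the pending-state behaviour for yourself (the trivial coroutine is mine, for illustration):

import asyncio

async def answer():
    return 42

ioloop = asyncio.get_event_loop()
task = ioloop.create_task(answer())
try:
    task.result()  # the task has not run yet, so the future is still pending
except asyncio.InvalidStateError as e:
    print('Too early:', e)

ioloop.run_until_complete(task)
print('Done:', task.result())  # the future is done, the result is available
ioloop.close()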

You can find out the state of a future using the done, cancelled, and running methods, but do not forget that in the done case calling result can either return the expected value or raise the exception that occurred along the way. To cancel the execution of a future there is the cancel method, and that is just what we need to fix our example.

from collections import namedtuple
import time
import asyncio
from concurrent.futures import FIRST_COMPLETED
import aiohttp

Service = namedtuple('Service', ('name', 'url', 'ip_attr'))

SERVICES = (
    Service('ipify', 'https://api.ipify.org?format=json', 'ip'),
    Service('ip-api', 'http://ip-api.com/json', 'query')
)

async def fetch_ip(service):
    start = time.time()
    print('Fetching IP from {}'.format(service.name))
    response = await aiohttp.request('GET', service.url)
    json_response = await response.json()
    ip = json_response[service.ip_attr]
    response.close()
    return '{} finished with result: {}, took: {:.2f} seconds'.format(
        service.name, ip, time.time() - start)

async def asynchronous():
    futures = [fetch_ip(service) for service in SERVICES]
    done, pending = await asyncio.wait(
        futures, return_when=FIRST_COMPLETED)
    print(done.pop().result())
    for future in pending:
        future.cancel()  # explicitly cancel the tasks we no longer need

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 2c-fetch-first-ip-address-response-no-warning-await.py
Fetching IP from ipify
Fetching IP from ip-api
ip-api finished with result: 82.34.76.170, took: 0.08 seconds

A nice and tidy output is just what I love!

If you need some additional logic for processing a future, you can attach callbacks that will be called when it transitions to the done state. This can be useful in tests, when some results need to be overridden with values of your own.
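
A minimal sketch of attaching such a callback with add_done_callback (the coroutine and messages are illustrative):

import asyncio

async def compute():
    await asyncio.sleep(0.1)
    return 42

def on_done(future):
    # invoked by the event loop once the future reaches the done state
    print('Callback saw result:', future.result())

ioloop = asyncio.get_event_loop()
task = ioloop.create_task(compute())
task.add_done_callback(on_done)
ioloop.run_until_complete(task)
ioloop.close()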

Exception Handling


asyncio is all about writing manageable and readable concurrent code, which becomes very noticeable when handling exceptions. Let's go back to our example to demonstrate.
Suppose we want to make sure that all the IP-lookup services return the same result. However, one of them may be offline and not respond to us. We can just apply try...except as usual:

from collections import namedtuple
import time
import asyncio
import aiohttp

Service = namedtuple('Service', ('name', 'url', 'ip_attr'))

SERVICES = (
    Service('ipify', 'https://api.ipify.org?format=json', 'ip'),
    Service('ip-api', 'http://ip-api.com/json', 'query'),
    Service('borken', 'http://no-way-this-is-going-to-work.com/json', 'ip')
)

async def fetch_ip(service):
    start = time.time()
    print('Fetching IP from {}'.format(service.name))
    try:
        response = await aiohttp.request('GET', service.url)
    except:
        return '{} is unresponsive'.format(service.name)
    json_response = await response.json()
    ip = json_response[service.ip_attr]
    response.close()
    return '{} finished with result: {}, took: {:.2f} seconds'.format(
        service.name, ip, time.time() - start)

async def asynchronous():
    futures = [fetch_ip(service) for service in SERVICES]
    done, _ = await asyncio.wait(futures)
    for future in done:
        print(future.result())

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 3a-fetch-ip-addresses-fail-await.py
Fetching IP from ip-api
Fetching IP from borken
Fetching IP from ipify
ip-api finished with result: 85.133.69.250, took: 0.75 seconds
ipify finished with result: 85.133.69.250, took: 1.37 seconds
borken is unresponsive

We can also handle the exception that occurred during coroutine execution:

from collections import namedtuple
import time
import asyncio
import aiohttp
import traceback

Service = namedtuple('Service', ('name', 'url', 'ip_attr'))

SERVICES = (
    Service('ipify', 'https://api.ipify.org?format=json', 'ip'),
    Service('ip-api', 'http://ip-api.com/json', 'this-is-not-an-attr'),
    Service('borken', 'http://no-way-this-is-going-to-work.com/json', 'ip')
)

async def fetch_ip(service):
    start = time.time()
    print('Fetching IP from {}'.format(service.name))
    try:
        response = await aiohttp.request('GET', service.url)
    except:
        return '{} is unresponsive'.format(service.name)
    json_response = await response.json()
    ip = json_response[service.ip_attr]
    response.close()
    return '{} finished with result: {}, took: {:.2f} seconds'.format(
        service.name, ip, time.time() - start)

async def asynchronous():
    futures = [fetch_ip(service) for service in SERVICES]
    done, _ = await asyncio.wait(futures)
    for future in done:
        try:
            print(future.result())
        except:
            print("Unexpected error: {}".format(traceback.format_exc()))

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 3b-fetch-ip-addresses-future-exceptions-await.py
Fetching IP from ipify
Fetching IP from borken
Fetching IP from ip-api
ipify finished with result: 85.133.69.250, took: 0.91 seconds
borken is unresponsive
Unexpected error: Traceback (most recent call last):
 File "3b-fetch-ip-addresses-future-exceptions.py", line 39, in asynchronous
  print(future.result())
 File "3b-fetch-ip-addresses-future-exceptions.py", line 26, in fetch_ip
  ip = json_response[service.ip_attr]
KeyError: 'this-is-not-an-attr'

Just as launching a task and not waiting for its completion is an error, so is letting an exception go unretrieved, and it leaves its traces in the output:

from collections import namedtuple
import time
import asyncio
import aiohttp

Service = namedtuple('Service', ('name', 'url', 'ip_attr'))

SERVICES = (
    Service('ipify', 'https://api.ipify.org?format=json', 'ip'),
    Service('ip-api', 'http://ip-api.com/json', 'this-is-not-an-attr'),
    Service('borken', 'http://no-way-this-is-going-to-work.com/json', 'ip')
)

async def fetch_ip(service):
    start = time.time()
    print('Fetching IP from {}'.format(service.name))
    try:
        response = await aiohttp.request('GET', service.url)
    except:
        print('{} is unresponsive'.format(service.name))
    else:
        json_response = await response.json()
        ip = json_response[service.ip_attr]
        response.close()
        print('{} finished with result: {}, took: {:.2f} seconds'.format(
            service.name, ip, time.time() - start))

async def asynchronous():
    futures = [fetch_ip(service) for service in SERVICES]
    await asyncio.wait(futures)  # intentionally ignore results

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous())
ioloop.close()

$ python3 3c-fetch-ip-addresses-ignore-exceptions-await.py
Fetching IP from ipify
Fetching IP from borken
Fetching IP from ip-api
borken is unresponsive
ipify finished with result: 85.133.69.250, took: 0.78 seconds
Task exception was never retrieved
future: <Task finished coro=<fetch_ip() ...> exception=KeyError('this-is-not-an-attr',)>
Traceback (most recent call last):
 File "3c-fetch-ip-addresses-ignore-exceptions.py", line 25, in fetch_ip
  ip = json_response[service.ip_attr]
KeyError: 'this-is-not-an-attr'

The output looks the same as in the previous example, except for the reproachful message from asyncio.

Timeouts


But what if the information about our IP is not that important? It could be a nice addition to some composite response, where that part would be optional. In that case we shouldn't make the user wait: ideally we would set a timeout on the IP lookup and, once it expires, send the user the response regardless, even without this information.

Again, wait has a suitable argument:

import time
import random
import asyncio
import aiohttp
import argparse
from collections import namedtuple
from concurrent.futures import FIRST_COMPLETED

Service = namedtuple('Service', ('name', 'url', 'ip_attr'))

SERVICES = (
    Service('ipify', 'https://api.ipify.org?format=json', 'ip'),
    Service('ip-api', 'http://ip-api.com/json', 'query'),
)

DEFAULT_TIMEOUT = 0.01

async def fetch_ip(service):
    start = time.time()
    print('Fetching IP from {}'.format(service.name))
    await asyncio.sleep(random.randint(1, 3) * 0.1)
    try:
        response = await aiohttp.request('GET', service.url)
    except:
        return '{} is unresponsive'.format(service.name)
    json_response = await response.json()
    ip = json_response[service.ip_attr]
    response.close()
    print('{} finished with result: {}, took: {:.2f} seconds'.format(
        service.name, ip, time.time() - start))
    return ip

async def asynchronous(timeout):
    response = {
        "message": "Result from asynchronous.",
        "ip": "not available"
    }
    futures = [fetch_ip(service) for service in SERVICES]
    done, pending = await asyncio.wait(
        futures, timeout=timeout, return_when=FIRST_COMPLETED)
    for future in pending:
        future.cancel()
    for future in done:
        response["ip"] = future.result()
    print(response)

parser = argparse.ArgumentParser()
parser.add_argument(
    '-t', '--timeout',
    help='Timeout to use, defaults to {}'.format(DEFAULT_TIMEOUT),
    default=DEFAULT_TIMEOUT, type=float)
args = parser.parse_args()

print("Using a {} timeout".format(args.timeout))
ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(asynchronous(args.timeout))
ioloop.close()


I added a timeout argument to the script's command line so we can check what happens when the requests do have time to complete, and also some random delays so the script doesn't finish too quickly, giving us time to see exactly how it works.

$ python 4a-timeout-with-wait-kwarg-await.py
Using a 0.01 timeout
Fetching IP from ipify
Fetching IP from ip-api
{'message': 'Result from asynchronous.', 'ip': 'not available'}

$ python 4a-timeout-with-wait-kwarg-await.py -t 5
Using a 5.0 timeout
Fetching IP from ip-api
Fetching IP from ipify
ipify finished with result: 82.34.76.170, took: 1.24 seconds
{'ip': '82.34.76.170', 'message': 'Result from asynchronous.'}
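
As an aside, a hedged sketch: for a single coroutine asyncio also provides wait_for, which raises asyncio.TimeoutError when the timeout expires (the coroutine and numbers here are illustrative):

import asyncio

async def slow_ip_lookup():
    await asyncio.sleep(2)  # stands in for a slow HTTP request
    return '82.34.76.170'

async def main():
    try:
        ip = await asyncio.wait_for(slow_ip_lookup(), timeout=0.5)
    except asyncio.TimeoutError:
        ip = 'not available'  # fall back, just like in the example above
    print(ip)

ioloop = asyncio.get_event_loop()
ioloop.run_until_complete(main())
ioloop.close()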

Conclusion


asyncio has reinforced my already great love for Python. To be honest, I fell in love with coroutines when I met them in Tornado, but asyncio has managed to take the best from it and from other concurrency libraries. So much so that special effort was made so that they could use the main I/O event loop. If you use Tornado or Twisted, you can include code intended for asyncio!

As I mentioned, the main problem is that the standard library does not yet support non-blocking behaviour. Many popular libraries also still work only in a synchronous style, and those that embrace concurrency are still young and experimental. However, their number is growing.

I hope in this tutorial I have shown how pleasant it is to work with asyncio, and that this technology will push you to switch to Python 3 if you are stuck on Python 2.7 for some reason. One thing is for sure: the future of Python has completely changed.

From the translator:
The original article was published on February 20, 2016, and a lot has happened since then. Python 3.6 was released, in which, in addition to optimizations, asyncio was improved and its API was declared stable. Non-blocking libraries for working with Postgres, Redis, Elasticsearch and so on were released. There is even a new framework, Sanic, which resembles Flask but works asynchronously. In the end, even the event loop was optimized and rewritten in Cython, which turned out about twice as fast. So I see no reason to ignore this technology!
