Fast analogues of popular libraries for Python

    It so happened that for several months I seriously tried to use hardware with an ARM processor as a server.
    I wrote about this here and here.
    Performance was often not enough for me, so I searched for various alternatives, usually ones that make heavy use of C/C++. A couple of such libraries are under the cut.

    I apologize in advance if everyone already knows about this; I only recently dug up these libraries myself.


JSON

    Where would we be without it these days? For web applications it is essentially the main format: the server must be able to quickly convert Python structures to JSON and hand the result to browsers or devices. Most people probably just do import json and are happy with it...
    But if you need it many times faster, meet ujson. The orders of speedup can be seen at the link; on ARM, switching to ujson cut my request times from several seconds to under one second. The ujson API mirrors the regular json module, although some features are missing (for example, I could not hook in my own handler to serialize datetime objects). ujson is also compatible with Python 3, and I have never run into glitches with it.
    If your JSON contains a lot of non-English text, it is extremely beneficial to pass the ensure_ascii=False parameter to the dumps function. By default, text is encoded in JSON using \u notation, which costs 6-8 bytes per character; sending plain UTF-8 instead shrinks the returned JSON by a factor of 2 or 3 (even when gzip is used).
    Note: for the regular json module, ensure_ascii=False causes a serious performance loss; for ujson, it can even improve performance, or degrade it only slightly.
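The size difference is easy to see with a minimal sketch. The stdlib json module is used here so the snippet is self-contained; ujson.dumps accepts the same ensure_ascii flag, so the call is identical with either library. The payload is an invented example.

```python
import json

# Hypothetical payload with non-ASCII (Cyrillic) text.
payload = {"title": "Привет, мир", "count": 3}

escaped = json.dumps(payload)                   # default: \uXXXX escapes
raw = json.dumps(payload, ensure_ascii=False)   # plain UTF-8

# The UTF-8 variant is far smaller for non-Latin text.
print(len(escaped.encode("utf-8")), len(raw.encode("utf-8")))
```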


FeedParser

    Many people have probably used this library to parse RSS, Atom, and the like. Its performance bothered me before, but on ARM I could not use it at all, since parsing a single RSS feed could take a minute. SpeedParser came to my rescue: it brought RSS parsing time down to a few seconds. Its API is broadly compatible with FeedParser, but its behavior differs in places. For example, it ignores any custom namespace that is not on its hard-coded internal list. To work around some of these issues we resorted to monkey patching (alas).
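The monkey-patching approach mentioned above can be sketched in a few lines. A stdlib function stands in here for the SpeedParser internals that were actually patched (those are not shown in the article, so this is purely an illustration of the technique): save a reference to the original, install a wrapper, and delegate.

```python
import email.utils

# Keep a reference to the original library function.
_original_parsedate = email.utils.parsedate_tz

def _patched_parsedate(value):
    # Pre-clean the input before delegating to the original parser;
    # a real patch would fix whatever the library gets wrong.
    return _original_parsedate(value.strip())

# Replace the function at module level: every caller now gets the wrapper.
email.utils.parsedate_tz = _patched_parsedate

result = email.utils.parsedate_tz("  Mon, 20 Nov 1995 19:12:08 -0500  ")
print(result)
```

Monkey patching is fragile (it breaks silently when the library's internals change), which is presumably why the author says "alas".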


gzip

    Don't use the maximum compression levels; it's better to stay around level 5. Below that, the payload size starts to hurt transfer speed noticeably; above it, forming the response content slows down considerably. It is difficult to find an analogue for gzip, but there is a tip: apply it only on the nginx side rather than from Python. In one test project I implemented gzip compression as middleware in Python, and moving it to nginx reduced the load.
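The level trade-off is easy to measure with the stdlib. This sketch (invented sample data) compresses the same payload at levels 1, 5, and 9; higher levels yield smaller output but cost more CPU time, which is exactly what slows response generation on a weak ARM box.

```python
import zlib

# Repetitive sample payload, roughly imitating log or markup text.
data = ("some repetitive response line\n" * 2000).encode("utf-8")

# Compare output sizes across compression levels; 5 is the middle ground.
sizes = {}
for level in (1, 5, 9):
    sizes[level] = len(zlib.compress(data, level))
    print(level, sizes[level])
```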


Web frameworks

    The less overhead, the better. Don't try to deploy large frameworks: they will work, but with even a couple of concurrent users everything grinds to a halt. On a blank page, the difference between Pylons and Tornado is threefold in favor of the latter.
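To make "overhead" concrete: the baseline any framework competes against is a bare WSGI callable, which is about as little Python as can sit between nginx and your code. This sketch (stdlib only, no real server started) exercises such an app directly; a Tornado handler serving the same blank page would be analogous.

```python
from wsgiref.util import setup_testing_defaults

# The smallest possible "blank page" application: one function, no framework.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain; charset=utf-8")])
    return [b""]

# Call the app directly with a synthetic request instead of running a server.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(app(environ, start_response))
print(captured["status"], len(body))
```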

    Something like this.
    The continuation of this story is here: .

    P.S. There is a lot of jargon in the text; sorry, but that's how I enjoy writing.
    P.P.S. There are most likely plenty of errors too; message me privately if you find any.
