Compilation @pythonetc, June 2018


    Hi! I am the author of the @pythonetc channel, with tips about Python in particular and about programming in general. Starting this month, we are launching a series of digests of the best posts of the month, translated into Russian.


    Passing data along a call chain


    When you want to pass some information along a chain of calls, you usually take the easiest way: pass the data as function arguments.


    But sometimes it is very inconvenient to change every function in the chain just to pass one new piece of data. In such cases it is better to set up some kind of context that the functions can use. How can you do that?


    The simplest solution is a global variable. In Python, modules and classes also work well as context keepers, since, strictly speaking, they are global variables too. You probably do this all the time, for example, when you create loggers.
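
    For instance, a module-level logger is effectively a global variable that serves as shared context for the whole call chain. Here is a minimal sketch (the function names top, middle and bottom are made up for illustration):


    import logging

    logger = logging.getLogger(__name__)  # module-level, i.e. a global variable

    def top():
        middle()

    def middle():
        bottom()

    def bottom():
        # the logger was never passed down the chain, yet it is available here
        logger.warning('reached the bottom of the call chain')

    top()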


    If you have a multi-threaded application, plain global variables will not help, because they are not thread-safe: several call chains can run at the same time, and each needs its own context. The threading module provides the thread-safe threading.local() object. Store any data in it by simply assigning attributes: threading.local().symbol = '@'.
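
    A minimal sketch of this approach (the chain and indicate functions are made up for illustration): each thread stores its own symbol in one shared threading.local object, and a function deeper in the chain reads it without receiving it as an argument.


    import threading

    local_context = threading.local()  # one shared object, per-thread attributes

    def indicate():
        # each thread sees only the value it stored itself
        print(threading.current_thread().name, local_context.symbol)

    def chain(symbol):
        local_context.symbol = symbol  # store this thread's context
        indicate()                     # no need to pass symbol explicitly

    threads = [
        threading.Thread(target=chain, args=(s,)) for s in ('a', 'b', 'c')
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()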


    However, neither approach is coroutine-safe: they will not work in coroutine call chains, where coroutines do not call other coroutines directly but await them. While a coroutine is waiting, the event loop may run another coroutine from a different chain. This version does not work:


    import asyncio
    import sys

    global_symbol = '.'

    async def indication(timeout):
        while True:
            print(global_symbol, end='')
            sys.stdout.flush()
            await asyncio.sleep(timeout)

    async def sleep(t, indication_t, symbol='.'):
        loop = asyncio.get_event_loop()
        global global_symbol
        global_symbol = symbol
        loop.create_task(indication(indication_t))
        await asyncio.sleep(t)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(
        sleep(1, 0.1, '0'),
        sleep(1, 0.1, 'a'),
        sleep(1, 0.1, 'b'),
        sleep(1, 0.1, 'c'),
    ))

    You can solve the problem by making the event loop save and restore the context every time it switches back to a coroutine. That is what the aiotask_context module does: it uses loop.set_task_factory to change how Task objects are created. This version works:


    import asyncio
    import sys

    import aiotask_context as context

    async def indication(timeout):
        while True:
            print(context.get('symbol'), end='')
            sys.stdout.flush()
            await asyncio.sleep(timeout)

    async def sleep(t, indication_t, symbol='.'):
        loop = asyncio.get_event_loop()
        context.set(key='symbol', value=symbol)
        loop.create_task(indication(indication_t))
        await asyncio.sleep(t)

    loop = asyncio.get_event_loop()
    loop.set_task_factory(context.task_factory)
    loop.run_until_complete(asyncio.gather(
        sleep(1, 0.1, '0'),
        sleep(1, 0.1, 'a'),
        sleep(1, 0.1, 'b'),
        sleep(1, 0.1, 'c'),
    ))

    Creating SVG


    SVG is a vector graphics format that stores an image as XML describing all the shapes and numbers needed to draw it. For example, an orange circle can be represented as follows:


    <svg xmlns="http://www.w3.org/2000/svg">
        <circle cx="125" cy="125" r="75" fill="orange"/>
    </svg>

    Since SVG is based on XML, you can generate SVG files quite easily in any language, including Python, for example with lxml. But there is also the svgwrite module, created specifically for producing SVG.
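
    For instance, the orange circle from the XML above could be produced like this (a minimal sketch; the file name and canvas size are arbitrary choices):


    import svgwrite

    # generates the same <circle> element as in the XML above
    dwg = svgwrite.Drawing('circle.svg', size=('250px', '250px'))
    dwg.add(dwg.circle(center=(125, 125), r=75, fill='orange'))
    dwg.save()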


    Here is an example of how you can render the Recamán sequence as a diagram like the one you saw at the beginning of the article.
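
    A rough sketch of how such a diagram could be drawn with svgwrite; the recaman helper, the scale, the canvas size and the arc styling below are all arbitrary choices made for illustration:


    import svgwrite

    # Recamán sequence: a(0) = 0; a(n) = a(n-1) - n if that value is positive
    # and has not appeared before, otherwise a(n-1) + n.
    def recaman(count):
        seen, current = set(), 0
        for n in range(1, count + 1):
            yield current
            seen.add(current)
            back = current - n
            current = back if back > 0 and back not in seen else current + n

    SCALE = 4     # pixels per sequence unit (arbitrary)
    AXIS_Y = 300  # vertical position of the number line (arbitrary)
    terms = list(recaman(50))

    dwg = svgwrite.Drawing('recaman.svg', size=('1200px', '600px'))

    # Connect each pair of consecutive terms with a semicircular arc,
    # alternating the side of the axis to get the familiar spiral look.
    for i, (a, b) in enumerate(zip(terms, terms[1:])):
        radius = abs(b - a) * SCALE / 2
        sweep = i % 2  # flip the arc direction on every step
        d = 'M {} {} A {} {} 0 0 {} {} {}'.format(
            a * SCALE, AXIS_Y, radius, radius, sweep, b * SCALE, AXIS_Y)
        dwg.add(dwg.path(d=d, fill='none', stroke='black'))

    dwg.save()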


    Accessing outer scopes


    When you use a variable in Python, it is first looked up in the current scope. If it is not found there, the search moves one level up, and so on until it reaches the global namespace.


    x = 1

    def scope():
        x = 2
        def inner_scope():
            print(x)  # prints 2
        inner_scope()

    scope()

    But variable assignment works differently: a new variable is always created in the current scope, unless you declare it global or nonlocal:


    x = 1

    def scope():
        x = 2
        def inner_scope():
            x = 3
            print(x)  # prints 3
        inner_scope()
        print(x)  # prints 2

    scope()
    print(x)  # prints 1

    global lets you refer to variables from the global namespace, while with nonlocal Python looks for the variable in the nearest enclosing scope. Compare:


    x = 1

    def scope():
        x = 2
        def inner_scope():
            global x
            x = 3
            print(x)  # prints 3
        inner_scope()
        print(x)  # prints 2

    scope()
    print(x)  # prints 3


    x = 1

    def scope():
        x = 2
        def inner_scope():
            nonlocal x
            x = 3
            print(x)  # prints 3
        inner_scope()
        print(x)  # prints 3

    scope()
    print(x)  # prints 1

    Running scripts


    python supports several ways to run a script. The ordinary command python foo.py simply executes foo.py.


    You can also use the form python -m foo. If foo is not a package, Python finds foo.py in sys.path and executes it. If foo is a package, Python executes foo/__init__.py and then foo/__main__.py. Note that while __init__.py is running, the __name__ variable equals foo, and while __main__.py is running, it equals __main__.


    You can also use the form python dir/ or even python dir.zip. In that case python looks for dir/__main__.py and, if it finds it, runs it.


    $ ls foo
    __init__.py  __main__.py
    $ cat foo/__init__.py
    print(__name__)
    $ cat foo/__main__.py
    print(__name__)
    $ python -m foo
    foo
    __main__
    $ python foo/
    __main__
    $ python foo/__init__.py
    __main__

    Seconds since the epoch


    Before Python 3.3, it was surprisingly hard to convert a datetime object to the number of seconds since the beginning of the Unix epoch.


    The seemingly most logical way is to use the strftime method, which can format a datetime; with %s as the format, you would expect to get the timestamp. To check, let's create the same moment in two time zones:


    from datetime import datetime

    import pytz

    naive_time = datetime(2018, 3, 31, 12, 0, 0)
    utc_time = pytz.utc.localize(naive_time)
    ny_time = utc_time.astimezone(
        pytz.timezone('US/Eastern'))

    ny_time is exactly the same moment as utc_time, just written the way it would be in New York:


    # utc_time
    datetime.datetime(2018, 3, 31, 12, 0,
        tzinfo=<UTC>)
    # ny_time
    datetime.datetime(2018, 3, 31, 8, 0,
        tzinfo=<DstTzInfo 'US/Eastern' ...>)

    If the time is the same, then the timestamps should be equivalent:


    In : int(utc_time.strftime('%s')),
         int(ny_time.strftime('%s'))
    Out: (1522486800, 1522468800)

    Wait, what? Why are they different? The thing is that strftime cannot be used to solve this problem at all. Python's strftime does not even support %s as a format code; it only works because the platform's C library strftime() is called under the hood. And, as you can see, it completely ignores the time zone of the datetime object.


    The correct result can be obtained with a simple subtraction:


    In : epoch_start = pytz.utc.localize(
          datetime(1970, 1, 1))
    In : (utc_time - epoch_start).total_seconds()
    Out: 1522497600.0
    In : (ny_time - epoch_start).total_seconds()
    Out: 1522497600.0

    And if you are using Python 3.3+, you can simply call the timestamp method of the datetime class: utc_time.timestamp().
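
    For example, continuing with the utc_time object created above (assuming Python 3.3+):


    In : utc_time.timestamp()
    Out: 1522497600.0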

