Memory and tasks

    Greetings.
    Fair warning up front: this post is full of the author's personal reflections, which may well be wrong, and the topic is broad and interesting, so discussion in the comments is welcome.
    Only now have I realized that, first, I left out a lot that would have been useful to know before writing any code, and second, there are far better ways to implement multitasking than the ones I mentioned in the previous installment.
    I will also try to follow the many tips I have received and give these articles some structure.

    Even though this installment is pure theory, let's draw up a plan:
    1) Various thoughts on multitasking.
    2) The pros and cons of each approach.
    3) Deciding how we will reinvent the wheel.

    Let's get started.
    1) When I first started thinking about how best to implement this, a simple, naive idea crept in. Consider the region where user processes run (with ring 0 everything is simpler). Let's give them the space starting at 16 megabytes; everything from there to the end of RAM is theirs. Now the question arises: how do we control the processes and provide each one with what it needs? The naive idea was this, and you have heard it before: divide RAM into N chunks (not necessarily equal), organize queues of tasks for them, size the chunks according to the 'weight' of the programs, and so on. You probably want to say: "Author, why not just let out a war cry and go raid some villages? This is barbaric." I agree. Such a model is not viable, not extensible, and terribly inefficient. The well-known comrade Tanenbaum wrote about this in his remarkable book. This implementation of multitasking runs into the following problems: small programs get less time, and starvation is possible. Because programs come in different sizes while the partitions are static, unequal competition arises. In principle this can be solved with strict bookkeeping, but static partitions remain a constant source of headaches.
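    For the curious, here is roughly what that barbaric scheme looks like in C. All names and sizes here are hypothetical, chosen only to illustrate the problems just described.

        #include <stdint.h>
        #include <stddef.h>

        #define USER_BASE 0x1000000u  /* 16 MB: start of the user area */
        #define NPARTS    8           /* number of static partitions   */

        struct partition {
            uint32_t base;   /* physical start of the partition */
            uint32_t size;   /* fixed size, never changes       */
            int      owner;  /* task id, or -1 if free          */
        };

        static struct partition parts[NPARTS];

        static void parts_init(void)
        {
            uint32_t base = USER_BASE;
            for (size_t i = 0; i < NPARTS; i++) {
                parts[i].base  = base;
                parts[i].size  = 4u * 1024u * 1024u;  /* say, 4 MB each */
                parts[i].owner = -1;
                base += parts[i].size;
            }
        }

        /* First-fit: the task takes the first free partition big enough.
         * A small task wastes the tail of a big partition (internal
         * fragmentation), and a task larger than every partition can
         * never run -- exactly the unequal competition described above. */
        static int alloc_partition(int task, uint32_t need)
        {
            for (size_t i = 0; i < NPARTS; i++) {
                if (parts[i].owner == -1 && parts[i].size >= need) {
                    parts[i].owner = task;
                    return (int)i;
                }
            }
            return -1;  /* no room: the task waits in a queue */
        }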

    2) Then, after a little more thought, another idea came to mind: virtual memory.
    Let's recall the paged organization of memory (Cthulhu bless it).
    If we do not use static areas for tasks, a number of other problems arise:
    1) How do we keep track of growing processes?
    2) How do we keep processes from trampling each other?
    3) Does a process need a contiguous chunk of physical memory?
    Let's deal with them.
    1) So as not to lose track of the processes, it is enough to implement a memory manager, which will 'distribute rations' in the form of free pages. In addition, we give each process its own descriptors, which will keep its long arms in check.
    2) This is another duty of the manager: to carefully ensure that one piece of memory is not handed out to N processes at once.
    3) Honestly, at first I myself was confused about WHERE the code executes: in physical or in virtual space (I know it sounds strange). The answer is: in virtual space. Just picture the worlds in The Matrix: the two are connected, and both are real. So do not think of virtual space as meaningless, as merely a way of expanding the maximum amount of addressable RAM.
    To understand this better, consider an example. Say there is a process whose pieces are scattered all over physical memory. But we can map pages at specific virtual addresses onto physical addresses, right? The virtual address goes on its little journey through the directory and the page tables, and at the end of it lies a perfectly valid address in physical memory. As a result, the process sees its virtual addresses as sequential, while its code can in fact be scattered anywhere in physical memory.
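    To make that journey concrete, here is a minimal sketch in C of how a 32-bit x86 virtual address is decomposed and walked through the directory and page tables. The MMU does this in hardware; a function like this is only useful for illustration or debugging, and it assumes the page tables are identity-mapped so we can read them directly.

        #include <stdint.h>

        #define PAGE_PRESENT 0x1u

        /* pd is the process's page directory (1024 entries).
         * Returns the physical address, or 0 if the page is not mapped. */
        uint32_t virt_to_phys(const uint32_t *pd, uint32_t vaddr)
        {
            uint32_t pd_index = (vaddr >> 22) & 0x3FFu;  /* top 10 bits    */
            uint32_t pt_index = (vaddr >> 12) & 0x3FFu;  /* middle 10 bits */
            uint32_t offset   =  vaddr        & 0xFFFu;  /* low 12 bits    */

            uint32_t pde = pd[pd_index];
            if (!(pde & PAGE_PRESENT))
                return 0;  /* in real life this would raise a #PF */

            /* The PDE's top 20 bits hold the physical address of the page
             * table; we assume identity mapping to dereference it here. */
            const uint32_t *pt = (const uint32_t *)(uintptr_t)(pde & ~0xFFFu);

            uint32_t pte = pt[pt_index];
            if (!(pte & PAGE_PRESENT))
                return 0;  /* likewise a #PF */

            return (pte & ~0xFFFu) | offset;  /* frame base + offset */
        }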

    The paging approach just described is used in many operating systems today, since it is stable and flexible. The only real difficulty lies in writing the code competently.

    Now we have quietly crept up on the topic of the memory manager. It is generally considered the hardest part of such a project. I even heard a joke about it:
    "Why do osdev projects stall?
    1) The author got swamped at work;
    2) he got married;
    3) he tried to write a memory manager."

    But let's not be scared; we will try to cope. The manager will have to "rule justly" so that idyll and a "golden age" reign in RAM:
    1) Keep track of pages: free them on time, remap them when needed.
    2) Find new free pages on demand (most often in response to a #PF). Several algorithms exist for this, also described by comrade Tanenbaum; we will look at them later (a sketch of a simple frame allocator follows this list).
    3) If physical memory runs out, fall back on swap. In that scenario the manager must find the least-used pages, push them out to swap, and later bring them back. In short, the manager will have to sweat quite a bit here.
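    As promised, here is a minimal sketch of the 'ration distributor': a bitmap physical frame allocator, one bit per 4 KB frame. The sizes and names are assumptions for illustration, not a finished manager.

        #include <stdint.h>
        #include <stddef.h>

        #define PAGE_SIZE  4096u
        /* Assume 128 MB of RAM for this sketch: 32768 frames. */
        #define MAX_FRAMES (128u * 1024u * 1024u / PAGE_SIZE)

        static uint32_t frame_bitmap[MAX_FRAMES / 32];

        /* Find a free frame, mark it used, return its physical address.
         * Frame 0 should be pre-marked as used at boot, so a return of 0
         * can safely mean "out of memory" (time to think about swap). */
        uint32_t frame_alloc(void)
        {
            for (size_t i = 0; i < MAX_FRAMES / 32; i++) {
                if (frame_bitmap[i] == 0xFFFFFFFFu)
                    continue;  /* all 32 frames in this word are busy */
                for (uint32_t bit = 0; bit < 32; bit++) {
                    if (!(frame_bitmap[i] & (1u << bit))) {
                        frame_bitmap[i] |= (1u << bit);
                        return (uint32_t)(i * 32 + bit) * PAGE_SIZE;
                    }
                }
            }
            return 0;  /* no free frames left */
        }

        void frame_free(uint32_t paddr)
        {
            uint32_t n = paddr / PAGE_SIZE;
            frame_bitmap[n / 32] &= ~(1u << (n % 32));
        }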

    Now let's describe in words how the creation of a new task will happen (keeping in mind that this is not yet a battle-ready version); a sketch in C follows the list:
    1) Check whether there is room for another task.
    2) Load the program image into memory at a specific address.
    3) Set up the code, stack, and data descriptors for the task.
    4) Ideally, create a separate page directory for the task, with everything that entails.
    5) Transfer control to the program code.
    6) If a #PF occurs, we look at which task was running and which address it stumbled on, cry out to the memory manager with prayers, and, if we are lucky, receive another page of memory and return to the code.
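    And here are those six steps sketched in C. Every helper called below (task_slot_alloc, load_image, and so on) is hypothetical: a placeholder for code a real kernel would have to supply.

        #include <stdint.h>

        struct task {
            int       id;
            uint32_t *page_dir;  /* step 4: the task's own page directory */
            uint32_t  entry;     /* where step 5 will jump                */
        };

        /* Hypothetical helpers a real kernel would have to provide: */
        struct task *task_slot_alloc(void);
        uint32_t    *page_dir_clone_kernel(void);
        uint32_t     load_image(uint32_t *pd, const void *image, uint32_t size);
        void         setup_descriptors(struct task *t);
        void         switch_to(struct task *t);
        void         task_kill(struct task *t);
        uint32_t     frame_alloc(void);
        void         map_page(uint32_t *pd, uint32_t vaddr, uint32_t frame);

        int task_create(const void *image, uint32_t size)
        {
            struct task *t = task_slot_alloc();     /* 1: any room left?  */
            if (!t)
                return -1;
            t->page_dir = page_dir_clone_kernel();  /* 4: own directory   */
            t->entry = load_image(t->page_dir,      /* 2: map the image   */
                                  image, size);
            setup_descriptors(t);                   /* 3: code/data/stack */
            switch_to(t);                           /* 5: run it          */
            return t->id;
        }

        /* Step 6 in miniature: on #PF the CPU puts the faulting address
         * in CR2; we beg the memory manager for a frame, map it into the
         * current task's directory, and return to the interrupted code. */
        void page_fault_handler(uint32_t cr2, struct task *cur)
        {
            uint32_t frame = frame_alloc();
            if (!frame)
                task_kill(cur);  /* or push something out to swap */
            else
                map_page(cur->page_dir, cr2 & ~0xFFFu, frame);
        }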

    In the next installment we will look at how to write that notorious memory manager, because without it you won't get anywhere.

    What to read:
    1) Andrew Tanenbaum, "Modern Operating Systems" (a very useful book).
    2) Intel System Programming Manuals (you can't do without them).
