Software inflation in processor resources: why are newer versions of an application sometimes much slower than older ones?

Prelude


It was an ordinary Thursday evening. I came home from work, sat down at the laptop, turned it on, started Skype and began my habitual wait for it to load fully and completely. And then it suddenly struck me: why does it take so long to load, and why does the whole system so visibly strain under the process?

I decided to look into the Task Manager to estimate how many resources Skype consumes in the background. But first, a little back-of-the-envelope calculation: how much should it consume? I am talking about the background now, i.e. when there is no video connection and I am not even talking to anyone over the microphone. All there is is a list of contacts, displayed as icons and names, and a menu from which you can choose something.

In other words, this is essentially one form from which additional menus can be launched. The form holds one list, a text box for typing messages, and a few buttons. About 15 years ago, when I wrote in Delphi, such an application (with one form) would weigh a couple of hundred kilobytes. Of course, development environments have since grown much hungrier and visual components richer. Even so, Skype in the background should weigh around 10 megabytes at most. After all, I am not talking to or calling anyone; what else is there to spend so much on?

One can also look at this from another angle, through what mathematicians would call an equivalence. Without video calls and microphone conversations, Skype lets you see which of your contacts is currently online, send them a text message, and receive a text message in reply. In other words, it is essentially ICQ, so in the background Skype should presumably occupy about as much memory as ICQ does. Now let's check these estimates in practice. We look at the memory consumption in the Task Manager:

[Screenshot: Task Manager showing Skype at 158 MB of memory and QIP at 35 MB]

Skype takes up 158 MB of memory? Seriously, when QIP takes up 35 MB? Fine, 35 MB is probably too much as well and deserves sorting out, but this note is not about that. We are talking about Skype. Why does it take up so many resources, almost five times more than QIP? A bit much for one form with a contact list, isn't it?

Interestingly, I am not the only one bothered by this problem: if you type "Why does skype consume so much memory" into Google, a long scroll of forum discussions falls out about why new versions of Skype weigh so much. The answers are especially delightful. For example, a real answer from the Skype community forum (I quote my free translation):

And why do you think this is a lot? Modern computers have 4-8 GB of RAM. 140 MB is such a trifle by modern standards that you will not even notice it at startup.


Uh-huh. Of course. If every software developer reasons this way, then, as the classic line goes, no amount of resources will ever be enough. The question is not that I begrudge Skype the memory (though I do, a little). The question is: what new functionality has been added in the new versions of Skype (compared with the old ones) that requires so much memory?

But never mind that. I was more interested in another question: even with Skype in the background, the processor did not get much rest and periodically even showed full load. The question arises: why, and how do I get rid of this? In fact, the question should really sound like this: how do developers even manage to create such bulky applications? And what can be done about it?

A bit of history


Of course, any visitor to this site has probably heard at least once about Moore's law and the growth of system performance. The article "The Free Lunch Is Over" provides this curious chart:

[Chart from "The Free Lunch Is Over": transistor counts keep climbing while CPU clock speed flattens out around 2004]

As you can see, somewhere around 2004 processors hit a ceiling in clock frequency, and over the last 10 years that frequency has barely grown. Does it follow that Moore's law has ceased to hold? Actually, no, and the article explains distinctly why: the performance of our computers now grows through other factors (cache and multiple cores). The catch is that ordinary single-threaded applications cannot be sped up by these factors, and here a problem arises. Many software vendors today behave as if it were still the 80s or 90s, when optimizing software to shave off clock cycles was not a pressing concern: you could just wait a bit, and processors would get much faster.

This is true not only of Microsoft, but I will focus on examples involving it. Joel Spolsky mentions in his article that Microsoft managed to prevail over Lotus in the 80s, in the battle of Excel against 1-2-3, because Lotus management made one critical mistake: they tried to optimize the application. Specifically, they tried to shrink it so that it was always guaranteed to fit into 640 KB of RAM. They spent a year and a half on this, and during that time Redmond captured the market with Excel, because by then computers with much larger amounts of RAM were already in use. That decision cost Lotus dearly.

However, here is what is interesting: these days such a strategy might well turn out to be a winning one, since the resources of standard desktop systems no longer grow at the astonishing pace of 20-30 years ago. The trouble is that the fierce competition of that period rewarded the companies that piled on application functionality while ignoring performance and optimization. It is these companies that shaped the development ideology whose fruits we are still reaping.

Inflation


What did this lead to? To a very curious phenomenon. I will never turn down a hardware upgrade, but I recently caught myself avoiding software upgrades unless strictly necessary, because of possible inflation. By this term I mean that the same feature set I had in the old version now comes, in the new one, at a far greater price in processor time and RAM.

There is a well-known English idiom: "if it ain't broke, don't fix it." The software situation over the past 10 years is strongly reminiscent of it. Skype is one of the prime examples. Judging by the forums, this memory problem did not exist in older versions of Skype, for example in the 3.x line. What has been improved in the product since then that has made it so expensive in terms of RAM?

The same, by the way, applies to various chat applications. About 15 years ago, a chat client occupying 30 MB of RAM would have looked like nonsense. Now it is the norm, yet what do today's chat clients give us that the old ones did not?

And do not forget Microsoft Office. In my opinion, the XP version satisfied everyone. Of course, like any product, it had its drawbacks. But were they so critical that versions 2007, 2010, and so on had to be released? I produce the same documents in them, but now I have to wait much longer for these systems to load.

In justification we hear that the new versions contain new features. I do not deny it, but does it not seem strange that the old features now require more resources?

Modularity and optimization


Still, why more resources? The explanation is that most applications are rather monolithic. Not in terms of code organization: there they are probably divided into modules with a proper separation of responsibilities. I call them monolithic in the sense that when the application starts, all of its modules are usually loaded at once, even though this is often not required.

Let's get back to Skype. It is evidently built so that on a simple login a whole pile of modules responsible for sound, video and so on is immediately loaded into RAM, even though an ordinary session requires only the list of users and the ability to exchange text. The system could be built differently, loading only the bare essentials, and loading everything else only when the user actually wants to start a video call.
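A minimal sketch of this lazy-loading idea in Python (the names here are stand-ins invented for illustration, not Skype's actual components): a proxy object defers the import of a heavy subsystem until the first time it is actually touched.

```python
import importlib


class LazyModule:
    """Import proxy: the real module is loaded on first use, not at startup."""

    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        if self._module is None:
            # The expensive import (and its memory footprint) happens here,
            # only once some code actually touches the module.
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)


# Needed at login: contact list and text messaging -- imported eagerly.
# Needed only for calls: the audio/video stack -- wrapped lazily.
av_stack = LazyModule("decimal")  # "decimal" stands in for a heavy codec library


def on_video_call_started():
    # The first call pays the import cost; a session that never places
    # a call never loads the module at all.
    return av_stack.Decimal(0)
```

A session that never starts a call never pays for the audio/video stack, which is exactly the background-memory saving argued for above.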

Optimization also matters because developers physically cannot write all of their code from scratch; they are forced to use existing libraries, which were often written without much pressure toward optimization.

Imagine that the developer of each library made it 5% slower than it could have been had he spent extra effort on optimization. Let library 1 use library 2 in its work, library 2 use library 3, library 3 use library 4, and so on. Then, if the delays compound, a chain of 15 libraries yields 1.05^15 ≈ 2.08, i.e. in the worst case the application runs twice as slow as it would have if every component had been optimized.
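The arithmetic is easy to verify; a short sketch (the 5% figure is, of course, just the assumption from the thought experiment above):

```python
def compounded_slowdown(per_layer, layers):
    # Worst-case slowdown when each of `layers` stacked libraries is a
    # fraction `per_layer` (0.05 = 5%) slower than it could have been.
    return (1 + per_layer) ** layers


print(compounded_slowdown(0.05, 15))  # ~2.08: the whole stack runs twice as slow
```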

I am well acquainted with the saying that premature optimization is the root of all evil. However, it speaks of premature optimization, not of optimization as such. This slogan was wonderful at a time when processors were getting faster before our eyes. Now that the ceiling has been reached, the slogan starts to backfire: an old version of an application, written some 15 years ago, suddenly begins to look preferable to the version released last week. It is also hard not to notice that software vendors often try to force users to update, precisely because in such cases the update offers the consumer no particular benefit that would motivate him on his own.

Alternative examples


In principle, the same thing awaits the software industry as happened to the automotive industry after the oil crisis of the 70s, when it became clear that gasoline had become a critically expensive resource. Since then, car makers have managed to cut engine fuel consumption by about a third, if I am not mistaken.

There are such examples in the software world too. At one time I really liked Erlang, which is built on the concept of many independent lightweight processes connected only by message passing (this makes it possible to get the most out of multiple cores). In addition, this environment has a principle explicitly borrowed from LISP interpreters: every module and function can be loaded and reloaded on the fly when necessary (and the same applies to any process).

For comparison, on GlassFish, if you change one of several hundred or thousand classes, you have to redeploy the entire module (WAR/EAR/JAR). Hot swapping of functions or classes on the fly does exist there, but it is implemented far more poorly than in Erlang.
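To make the idea of reloading on the fly concrete, here is a self-contained sketch in Python rather than Erlang (the greeting module is invented for the example; Erlang's hot code loading is a far more sophisticated, language-level mechanism, but the principle of replacing one module inside a running process is the same):

```python
import importlib
import pathlib
import sys

# Create a tiny module on disk; it stands in for one unit of code
# out of the hundreds or thousands in a real deployment.
pathlib.Path("greeting.py").write_text('def hello():\n    return "v1"\n')
sys.path.insert(0, ".")

import greeting
print(greeting.hello())  # -> "v1"

# "Deploy" a new version of just this one module while the process runs.
pathlib.Path("greeting.py").write_text('def hello():\n    return "v2"\n')
importlib.invalidate_caches()  # make sure the new source is picked up
importlib.reload(greeting)     # re-executes the module in place
print(greeting.hello())  # -> "v2": no restart, no full redeploy
```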

The future of the industry belongs to applications that can fully utilize multiple cores and do not immediately load all the possible modules for every feature the product has. That is, the program will load in a basic configuration, consuming about as many resources as its predecessor consumed 15 years ago, and, when necessary, load everything else the user needs on the fly.
