Why are mobile web applications so slow?
Good day. Our team, which creates Java courses on Hexlet, decided to start translating articles we found particularly interesting. In doing so, we discovered that Habr had not yet published a translation of the excellent article "Why are mobile web applications so slow?". Today we bring you a translation of its first part. The article is not new, which made the absence of a translation on Habr all the more surprising, but its ideas remain relevant to this day, even though many of its benchmarks are, of course, outdated. If the Habr community likes this first installment, we will publish the second part, and will also begin translating other articles that struck us as interesting and not yet available on Habr.
My previous article, in which I argued that mobile web applications are slow, sparked an unusual number of interesting conversations. A discussion ensued, both online and offline, but unfortunately it was not as fact-based as I would have liked.
So in this post I am going to present real evidence for discussing this problem, rather than just unfounded shouting. You will see hard numbers, hear from experts, and even read actual journal papers on the topic. There are over 100 citations in this post, and that is not a joke. I cannot guarantee that this article will convince you, nor even that absolutely everything in it is error-free (that is probably impossible for an article of this length), but I can guarantee that it is the most comprehensive and detailed account of the view, held by many iOS developers, that mobile web applications are slow and will remain slow for the foreseeable future.
Fair warning: this is a very long and boring article of almost 10,000 words. That is deliberate. Lately I have preferred writing good articles over popular ones. This is my attempt at a good one, and at practicing what I have previously preached: encouraging interesting, fact-based discussion and refraining from witty one-liners.
I write this partly because this topic is discussed endlessly, mostly as an exchange of barbs. This is not another take on a hackneyed theme, so if you are looking for thirty seconds of "No, real web apps suck!" "No, they don't!", this article is not for you. The web is full of such discussions, of the "Oh, stop it, I can't breathe, please stop, so many opinions and so few facts" kind. On the other hand, as far as I can tell, there is nowhere to find a detailed, informative, and level-headed discussion of this question. It may be a foolish idea, but this article is my attempt to discuss calmly a topic that has so far produced only meaningless, flame-filled arguments. My position is that the problem lies not with the subject itself but with the fact that the people capable of discussing it adequately are not taking part in the discussion. I suppose we are about to find out whether that is true.
So, if you want to learn what madness has possessed your fellow developers, who keep writing ill-fated native applications on the eve of the obvious web revolution, or to learn something else entirely, then bookmark this page, pour yourself a cup of coffee, set aside an evening, settle into a comfortable chair, and we will be ready to talk.
In my previous post, I used SunSpider to argue that mobile web applications are slow today.
“Of course, if by ‘web application’ you mean ‘a website with one or two buttons’, then you can tell all those dire performance benchmarks like SunSpider to take a hike. But if you mean ‘fast word processing, photo editing, local storage, and animated transitions between screens’, then you do not want to do that in a mobile web application on ARM unless you have a death wish.”
That article is really worth reading in full, but I will still show you the benchmark:
Broadly, this benchmark is criticized from three angles:
- It's no secret that JS code is slower than native code: everyone learned this in CS1 when comparing compiled, JIT, and interpreted languages. The question is whether that slowness matters for the programs you actually write, and benchmarks like this one cannot settle that question either way.
- Yes, JS is slower, and yes, it matters, but its performance keeps improving, and one day we will notice that the difference no longer matters, so start investing in JS now.
- I write server software in Python/PHP/Ruby and none of this matters to me. I know my servers are faster than your mobile devices, but surely, if I can serve X thousand users with a genuinely interpreted language, you guys can figure out how to serve a single user with a language that has a high-performance JIT compiler? How hard can it be?
I have set myself a rather ambitious goal: to refute all three claims in this article. Yes, JS is slow, and yes, it matters. No, it will not get noticeably faster in the near future. And no, your experience with server-side programming does not prepare you to reason correctly about mobile performance.
All well and good, but how exactly do you compare JS performance with native application performance?
Good question. To answer it, I picked an arbitrary benchmark from The Computer Language Benchmarks Game. Then I found an older C program implementing the same benchmark (older, because the newer ones are full of x86-specific details). I then compared Nitro against LLVM on my trusty iPhone 4S. All the code is available on GitHub.
This is all very arbitrary, but then the code you run in real life is arbitrary too. If you want a better experiment, run one. This is simply the experiment I ran, since no other experiments comparing LLVM and Nitro exist yet.
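To make the shape of the experiment concrete, here is a minimal sketch of a CPU-bound kernel of the kind the Benchmarks Game uses (this one is modeled on its spectral-norm test). It is an illustration of the methodology, not the actual test code from the repository:

```javascript
// A CPU-bound kernel in the spirit of the Benchmarks Game's spectral-norm
// test: repeated matrix-vector products where the matrix entries are
// computed on the fly. This exercises raw arithmetic, not the DOM.
function A(i, j) {
  return 1 / (((i + j) * (i + j + 1)) / 2 + i + 1);
}

function multiplyAv(n, v) {
  const out = new Float64Array(n);
  for (let i = 0; i < n; i++) {
    let sum = 0;
    for (let j = 0; j < n; j++) sum += A(i, j) * v[j];
    out[i] = sum;
  }
  return out;
}

function benchmark(n, iterations) {
  let v = new Float64Array(n).fill(1);
  const start = Date.now();
  for (let k = 0; k < iterations; k++) v = multiplyAv(n, v);
  return Date.now() - start; // elapsed wall-clock time in ms
}

console.log(`elapsed: ${benchmark(500, 20)} ms`);
```

Running the same kernel as a C program compiled with LLVM and dividing the two elapsed times gives the kind of ratio reported below.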
Either way, in this synthetic benchmark LLVM is consistently about 4.5 times faster than Nitro:
So if you are wondering how much faster a CPU-bound function runs in native code compared to Nitro JS, the answer is "about five times". This result is broadly consistent with the Benchmarks Game's x86/GCC/V8 results, according to which GCC/x86 is generally 2 to 9 times faster than V8/x86. So the result seems about right, regardless of whether you are on ARM or x86.
But isn't one-fifth of native performance enough?
It is enough on x86. After all, how processor-intensive is rendering a spreadsheet, really? Not very. The trouble is that ARM is not x86.
According to GeekBench, the latest MBP beats the latest iPhone by roughly a factor of 10. That's fine: spreadsheets are not that demanding, you can get by with 10% of the performance. But now you want to divide that by five as well? Bravo! Now we are at 2% of desktop performance. (I am playing fast and loose with the numbers, but we are talking orders of magnitude, so it is close enough.)
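For the record, the arithmetic behind that figure is just the two ratios multiplied together; a trivial sketch, using the article's rough order-of-magnitude numbers:

```javascript
// Compounding the two ratios cited above: the desktop-to-phone hardware
// gap (~10x, per GeekBench) and the native-to-JS gap (~5x, per the
// Nitro/LLVM experiment). These are rough order-of-magnitude figures,
// not precise measurements.
const hardwareGap = 10; // MacBook Pro vs. iPhone 4S (GeekBench)
const jsPenalty = 5;    // LLVM native code vs. Nitro JS
const combined = hardwareGap * jsPenalty; // ~50x slower overall
const fractionOfDesktop = 1 / combined;   // ~0.02, i.e. ~2%
console.log(`~${(fractionOfDesktop * 100).toFixed(0)}% of desktop native performance`);
```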
All right, but how hard is word processing, really? Couldn't we do it on an m68k-class processor, perhaps with a coprocessor alongside it? Well, this is a question that can actually be answered. You may not remember, but real-time collaboration was not originally a launch feature of Google Docs; they rewrote a great deal to ship it in April 2010. So let's look at what browser performance was like in April 2010.
It is clear from this chart that the iPhone 4S is nowhere near competitive with the web browsers of the era when Google Docs got real-time collaboration. It is, however, competitive with IE8, for which, congratulations.
(To use Google Wave in Internet Explorer, you must install the Google Chrome Frame plugin. Alternatively, you can use one of these browsers: Google Chrome, Safari 4, Firefox 3.5. If you understand the risks and want to continue anyway, proceed to the next step.)
See how much faster these browsers are than the iPhone 4S?
Notice how all the supported browsers score below 1000, and the one that scored 3800 is excluded for being too slow? The iPhone scores 2400. Like IE8, it is simply not fast enough to run Wave.
But I thought V8 / modern JS performance was nearly as good as C!
Again, a factor of 5 is acceptable on x86, mainly because x86 starts out 10 times faster than ARM; there is plenty of headroom. So the obvious solution is simply to speed up ARM by a factor of 10, making it comparable to x86, and then we get desktop-class JS performance with no extra effort!
Whether that works out depends on your faith in Moore's Law when applied to a chip powered by a 3-ounce battery. I am not a hardware engineer, but I once worked for a large semiconductor company, and the people there tell me that these days performance gains mostly come from the process (i.e., the feature size measured in nanometers). The iPhone 5's impressive performance is largely due to a process shrink from 45 nm to 32 nm, a reduction of about a third. But to do it again, Apple would have to shrink to a 22 nm process.
For reference: Intel's 22 nm x86 Bay Trail Atom does not exist yet. Intel had to invent an entirely new kind of transistor, because the standard kind does not work at the 22 nm scale. Think they will license that to ARM? Think again. Only a handful of 22 nm fabs are even planned worldwide, and most of them are controlled by Intel.
In fact, ARM vendors seem to be working toward a shrink to 28 nm or so (see the A7), while Intel is moving to 22 nm and perhaps even 20 nm. Looking purely at the hardware, it seems far more likely to me that an x86 chip with x86-class performance will appear in a smartphone long before an ARM chip achieves x86-class performance.
A note from a former Intel engineer who emailed me:
“I am a former Intel engineer who worked on the mobile microprocessor line and then on Atom. For what it is worth, in my incredibly biased opinion, it will be easier to bring x86 down into a phone, carrying over the feature set of the bigger cores, than for ARM to catch up to x86 performance by developing those features from scratch.”
A note from a robotics engineer who emailed me:
“You are absolutely right that there will be some modest gains, and that Intel may well have a faster mobile processor within a few years. In fact, mobile processors are now hitting the same limit that desktop processors hit when they reached 3 GHz: raising the clock any further is impractical without a drastic increase in power. This will hold at the next process nodes too, although they should be able to improve IPC somewhat (perhaps by 10-20%). When desktops hit that limit, the response was dual-core and then quad-core chips, but mobile systems-on-a-chip are already dual- and quad-core, so further speedups will not come easily.”
So in the end, Moore's Law may hold, but only in a sense that would require the entire mobile ecosystem to move to x86. That is not impossible; it has been done once before. But it was done in an era when annual sales were around one million units, whereas today 62 million are sold per quarter. And it was done with an off-the-shelf virtualization environment that could emulate the old architecture at roughly 60% of native speed, whereas today's hypothetical research virtualization systems achieve closer to 27% on optimized (O3) code.
I'm afraid this is where my hardware knowledge ends. All I can tell you is this: if you want to believe that ARM will close the gap with x86 in the next five years, step one is to find somebody who actually works on ARM or x86 (that is, somebody in a position to know) who agrees with you. I consulted many qualified engineers while writing this article, and every one of them declined to endorse that position on the record, which suggests to me that it is not a good one.
To tie the JS story back to hardware: hardware-driven speedups in JS are not something you can count on going forward. And yet, if you want to believe that JS will get faster, faster hardware is the most likely source of that speedup today, because that is what the historical trend shows.
What about the JITs: V8, Nitro/SFX, TraceMonkey/IonMonkey, Chakra, and the rest? Well, when they arrived they were a big deal, though not as big a deal as you might think. V8 was released in September 2008. I dug up a copy of Firefox 3.0.3 from around the same time:
Don't get me wrong: a 9x improvement is nothing to sneeze at; after all, it is nearly the size of the gap between ARM and x86. Even so, performance from Chrome 8 to Chrome 26 is essentially flat, because nothing of fundamental importance has happened since 2008. The other browser vendors have caught up (some slightly faster, some slightly slower), but nobody has really improved the speed of actual CPU-bound code since then.
Here is Chrome 8 on my Mac (the earliest version that still runs, from December 2010), and here is Chrome 26.
If the web feels faster to you than it did in 2010, it is probably because you are running a faster computer; improvements in Chrome have nothing to do with it.
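A note on how these ratios are read: SunSpider reports total elapsed time in milliseconds, so lower is better, and a "speedup" is simply the old time divided by the new one. A trivial sketch with placeholder numbers, not the article's actual measurements:

```javascript
// SunSpider scores are elapsed times in ms, so lower is better.
// The speedup between two runs is oldTime / newTime.
// The inputs below are placeholders, not measured values.
function speedup(oldMs, newMs) {
  if (newMs <= 0) throw new RangeError("time must be positive");
  return oldMs / newMs;
}

// e.g. a pre-JIT engine vs. an early V8-class engine: roughly a 9x gap
console.log(speedup(9000, 1000)); // -> 9
// e.g. an essentially flat result between two browser versions
console.log(speedup(1020, 1000)); // -> 1.02
```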
Note: some smart people have pointed out that SunSpider is not a good benchmark these days (while declining to provide any numbers of their own). In the interest of meaningful discussion, I ran Octane (Google's benchmark) on old versions of Chrome, and it did show some improvement:
But doesn't Safari seem faster than it used to be?
So Safari 7 is 3.8 times faster than other browsers?
Perhaps conveniently for Apple, details of this version of Safari have not been made public, so no one can publish independent numbers on its performance. Nonetheless, let me make a few points about this claim based purely on publicly available information.
Another important fact: by Apple's own admission, improving a SunSpider score does not necessarily mean improving anything else. In the very paper that introduced Apple's preferred benchmark, Eich et al. write the following:
“The graph clearly shows that, on SunSpider, Firefox improved 13x between version 1.5 and version 3.6. Yet if you look at the improvement on Amazon, it is much more modest: 3x. More interestingly, the improvement on Amazon has leveled off over the last two years. Evidently, some optimizations that work well on SunSpider do nothing for Amazon.”
They go on to argue, in essence, that benchmarking Amazon predicts Amazon's performance better than benchmarking SunSpider does [er... obviously...], and is therefore well suited to browsers that people use to visit Amazon. None of which will help you write a photo-editing application.
In any case, based on public information alone, I can say that Apple's claim of a 3.8x speedup does not necessarily mean anything useful to you. I can also say that if I had benchmarks refuting Apple's claimed superiority over Chrome, I would not be allowed to publish them.
So let's end this section with the following conclusion: a chart showing that someone's browser got faster does not mean that JS as a whole is getting faster.
We very much hope you enjoy the article as much as we did when we first read it. If so, the second part of the translation will appear soon.