How to Improve Front-End Web Application Performance: Five Tips
In many of my front-end projects, at some point I have run into a drop in performance. This usually happens as an application grows in complexity, and that is normal. Nevertheless, developers are still responsible for performance, so in this article I will give five optimization tips that I apply myself. Some may seem obvious, some touch on basic principles of programming, but I think a refresher never hurts. Each tip is backed by benchmarks: you can run them yourself and check the results.
Translated by Alconost
Remember: if the code does not need optimization, do not touch it. Of course, the code you write should run fast, and you can almost always come up with a faster algorithm - but the code should remain clear to other developers. In his lecture "Computer Programming as an Art," Donald Knuth made a very important point about code optimization:
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.
1. For lookups: objects and Maps instead of plain arrays
When working with data, situations often arise where, for example, you need to find an object, do something with it, then find another object, and so on. The most common data structure in JS is the array, so storing data in arrays is normal practice. However, whenever you need to find something in an array, you have to use methods such as "find", "indexOf", "filter", or iterate with loops - that is, walk the elements from start to finish. This is a linear search with O(n) complexity: in the worst case, we need as many comparisons as there are elements in the array. If you do this a couple of times on small arrays, the impact on performance is negligible. But if the array is large and the operation is performed many times, the cost adds up quickly.
In that case, a good solution is to convert the plain array into an object or a Map and look elements up by key: these structures give O(1) access - a single memory lookup regardless of size. The speed-up comes from the data structure underneath, a hash table.
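The conversion described above could look like this - a minimal sketch, with the "users" data made up for illustration:

```javascript
// Made-up sample data: an array of records we need to look up by id.
const users = [
  { id: 'a1', name: 'Ann' },
  { id: 'b2', name: 'Bob' },
  { id: 'c3', name: 'Cid' },
];

// Linear search: O(n) - scans the array until it finds a match.
function findByIdLinear(list, id) {
  return list.find(u => u.id === id);
}

// One-time conversion into a Map, then every lookup by key is O(1).
const usersById = new Map(users.map(u => [u.id, u]));

findByIdLinear(users, 'c3'); // walks the whole array
usersById.get('c3');         // single hashed lookup
```

The conversion itself costs one pass over the array, so it pays off when you do many lookups afterwards.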
You can test the performance here: https://jsperf.com/finding-element-object-vs-map-vs-array/1. Below are my results:
The difference is very significant: for the Map and the object I got millions of operations per second, while for the array the best result was a little over a hundred operations. The conversion step is not counted here, of course, but even with it included the overall operation will be much faster.
2. An "if" check instead of exceptions
Sometimes it seems easier to skip a null check and just catch the corresponding exception. This is, of course, a bad habit - don't do it, and if you have such code, simply rewrite those sections. But to convince you completely, I will back this recommendation with tests. I tested three ways of doing the check: a "try-catch" block, an "if" condition, and short-circuit evaluation.
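The three styles compared here could be sketched like this, on a made-up "order" object with a possibly-null field:

```javascript
// Made-up example: `customer` may be null, and we want its name or a fallback.
const order = { customer: null };

// 1) try-catch: works, but throwing and catching is expensive and hides intent.
function nameViaTryCatch(o) {
  try {
    return o.customer.name;
  } catch (e) {
    return 'unknown';
  }
}

// 2) An explicit "if" null check.
function nameViaIf(o) {
  if (o.customer !== null && o.customer !== undefined) {
    return o.customer.name;
  }
  return 'unknown';
}

// 3) Short-circuit evaluation with && and ||.
function nameViaShortCircuit(o) {
  return (o.customer && o.customer.name) || 'unknown';
}
```

All three return the same result; the difference is in how much work the engine does when the value is missing.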
Test: https://jsperf.com/try-catch-vs-conditions/1. Below are my results:
I think it is obvious from this that a "null" check is the way to go. Also, as you can see, there is almost no difference between the "if" condition and short-circuit evaluation, so use whichever you prefer.
3. The fewer loops, the better
Another obvious but perhaps controversial point. Arrays come with many convenient methods - "map", "filter", "reduce" - so using them is attractive, and code with them looks neater and reads more easily. But when performance becomes a concern, you can try to reduce the number of function calls. I analyzed two cases: 1) "filter" followed by "map", and 2) "filter" followed by "reduce" - and compared them against the functional chain, "forEach", and a traditional "for" loop. Why these two cases? As the tests will show, the benefit is not always significant. In the second case, I also tried doing the filtering inside the "reduce" call.
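The first case - a "filter" + "map" chain versus a single pass - could be sketched like this, with made-up numeric data:

```javascript
// Made-up data: keep values over 10 and double them.
const prices = [5, 12, 8, 30, 1];

// Chain: two passes over the data and one intermediate array.
const doubledOverTen = prices.filter(p => p > 10).map(p => p * 2);

// Single "for" loop: one pass, no intermediate array, but needs push().
function doubleOverTenLoop(list) {
  const result = [];
  for (let i = 0; i < list.length; i++) {
    if (list[i] > 10) {
      result.push(list[i] * 2);
    }
  }
  return result;
}
```

Both produce the same array; the single loop simply avoids the intermediate allocation.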
Performance test for "filter" and "map": https://jsperf.com/array-function-chains-vs-single-loop-filter-map/1. My results:
You can see that a single loop is faster, but the difference is small. The reason for this small gap is the "push" operation, which is not needed when using "map". So in this case it is worth asking whether switching to a single loop is really necessary.
Now let's check "filter" + "reduce": https://jsperf.com/array-function-chains-vs-single-loop-filter-reduce/1. My results:
Here the difference is more significant: merging the two functions into one sped up execution by almost half. Still, switching to a traditional "for" loop gives a much bigger gain.
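The second case - moving the filtering inside "reduce", and then going all the way to a plain loop - could be sketched like this, again with made-up data:

```javascript
// Made-up data: sum all values over 10.
const amounts = [5, 12, 8, 30, 1];

// Two passes: filter first, then sum.
const chained = amounts.filter(a => a > 10).reduce((sum, a) => sum + a, 0);

// One pass: the predicate moves inside the reducer.
const merged = amounts.reduce((sum, a) => (a > 10 ? sum + a : sum), 0);

// Traditional "for" loop: one pass with no callback overhead.
function sumOverTen(list) {
  let sum = 0;
  for (let i = 0; i < list.length; i++) {
    if (list[i] > 10) sum += list[i];
  }
  return sum;
}
```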
4. Use regular for loops
This advice may also seem controversial, because developers love functional iteration: it reads well and can simplify the work. However, it is less efficient than traditional loops. I think you may have already noticed the difference in the for-loop results above, but let's look at them in a separate test: https://jsperf.com/for-loops-in-few-different-ways/. As you can see, besides the built-in mechanisms, I also checked "forEach" from the Lodash library and "each" from jQuery. Results:
And again we see that the plain "for" loop is much faster than the rest. True, these loops are only good for arrays: for other iterables you should use "forEach", "for...of", or the iterator itself. "for...in" should only be used when there is no other option at all. Also remember that "for...in" visits all enumerable properties of the object (for an array, the properties are its indexes), which can lead to unpredictable results. Surprisingly, the Lodash and jQuery methods performed quite well, so in some cases you can safely use them instead of the built-in "forEach" (interestingly, in the test the Lodash loop was faster than the built-in one).
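The "for...in" pitfall mentioned above can be demonstrated in a few lines:

```javascript
// "for...in" walks all enumerable properties, not just the array elements.
const items = [10, 20, 30];
items.extra = 'oops'; // arrays are objects, so attaching a property is legal

const viaForIn = [];
for (const key in items) {
  viaForIn.push(key); // keys are strings: '0', '1', '2', and also 'extra'
}

const viaForOf = [];
for (const value of items) {
  viaForOf.push(value); // only the actual elements: 10, 20, 30
}
```

"for...of" follows the array's iterator and ignores the stray property, which is why it is the safer choice for iterables.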
5. Use the built-in functions to work with the DOM
Here is a comparison of the built-in DOM functions with the equivalent jQuery operations in three different cases: https://jsperf.com/native-dom-functions-vs-jquery/1. My results:
And again, the most basic functions - "getElementById" and "getElementsByClassName" - turned out to be the fastest ways to query the DOM. With ids and advanced selectors, "querySelector" is also faster than jQuery. Only in one case is "querySelectorAll" slower than jQuery (getting elements by class name). For more on when and how to replace jQuery, see http://youmightnotneedjquery.com.
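As a rough sketch, the native equivalents of common jQuery lookups might be wrapped like this. The helper names are mine, and the "doc" parameter is passed in explicitly only so the functions work with any Document-like object; in a browser you would simply pass the global "document":

```javascript
// Hypothetical wrappers around the built-in DOM query functions.
function byId(doc, id) {
  return doc.getElementById(id); // $('#id') - the fastest lookup
}

function byClass(doc, cls) {
  // $('.cls') - getElementsByClassName returns a live collection,
  // so we copy it into a plain array.
  return Array.from(doc.getElementsByClassName(cls));
}

function bySelector(doc, sel) {
  // $(sel) - querySelectorAll accepts any CSS selector, a bit slower.
  return Array.from(doc.querySelectorAll(sel));
}

// In a browser: byId(document, 'app'), byClass(document, 'row'), ...
```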
Of course, if you are already using a library to manage the DOM, it makes sense to stick with it; for simple cases, however, the built-in tools are enough.
2. Data structures, basic algorithms, and their complexity: many consider this "just theory", but in the first tip we saw how that theory works in practice.