
React, inline functions and performance
Whenever I talk about React, or give the first lecture of a training course and show all sorts of interesting things, someone inevitably asks: "Inline functions? I heard they're slow."
This question didn't always come up, but over the last few months, as a library author and a teacher, I have had to answer it almost every day, sometimes in lectures, sometimes on Twitter. Honestly, I'm tired of it. Unfortunately, I didn't realize right away that it would be better to lay everything out in an article which, I hope, will be useful to those asking questions about performance. This article is the result.

What is an inline function?
In the context of React, an inline function is a function that is defined while the component is rendering. There are two meanings of "rendering" in React that often get confused. The first refers to getting React elements from components (calling a component's render method) during an update. The second is actually updating fragments of the page by modifying the DOM. When I say "rendering" in this article, I mean the first one. Here are some examples of inline functions:
class App extends Component {
  // ...
  render() {
    return (
      <div>
        {/* 1. An inline event handler of a "DOM component" */}
        <button onClick={() => {
          this.setState({ clicked: true })
        }}>
          Click!
        </button>

        {/* 2. A "custom event" or "action" */}
        <Sidebar onToggle={isOpen => {
          this.setState({ sidebarIsOpen: isOpen })
        }}/>

        {/* 3. A render prop callback */}
        <Route
          path="/topic/:id"
          render={({ match }) => (
            <h1>{match.params.id}</h1>
          )}
        />
      </div>
    )
  }
}
Premature optimization is the root of all evil
Before we go any further, we need to talk about how programs should be optimized. Ask any performance expert and they will tell you that premature optimization is evil. That goes for all software, and anyone who has done serious optimization work will confirm it.
I remember a talk by my friend Ralph Holzmann about gzip that really drove this idea home for me. He described an experiment he ran on LABjs, an old script-loading library. You can watch that talk; the part I am referring to takes about two and a half minutes, starting around the 30-minute mark.
At the time, LABjs did something strange to optimize the size of the finished code. Instead of the usual object notation (obj.foo), it stored keys in strings and accessed object members with square brackets (obj[stringForFoo]). The assumption was that, after minification and gzip compression, the unusually written code would end up smaller than code written the ordinary way. Ralph forked the code and removed the "optimization", rewriting it in the normal style without thinking at all about how to optimize the code for minification and gzip.
It turned out that removing the "optimization" made the resulting file 5.3% smaller! Evidently the author of the library had written it in the "optimized" form from the start, without checking whether that actually gave any advantage. Without measurements you cannot tell whether an optimization improves anything, and if it makes things worse, you will not know that either.
Premature optimization doesn't just increase development time and degrade the clarity of the code; it can also backfire, as it did with LABjs. If the author had measured instead of imagining performance problems, he would have saved development time and shipped cleaner code that actually performed better.
I'll quote this tweet here: "It annoys me when people argue from an armchair that some code will be too slow for their problem without taking any performance measurements." I share that view.
So, once again: do not optimize prematurely. And now, back to React.
Why are inline functions said to hurt performance?
Inline functions are considered slow for two reasons: first, concerns about memory consumption and garbage collection; second, shouldComponentUpdate. Let's examine both.
▍Memory consumption and garbage collection
First, programmers (and eslint configurations) worry about the memory and garbage-collection overhead of creating inline functions. This is a legacy of the days when arrow functions were not yet widespread in JavaScript and React code often used bind inline, which historically performed poorly. For example:
{stuff.map(function(thing) {
  return <div>{thing.whatever}</div>
}.bind(this))}
The performance problems with Function.prototype.bind were fixed, and arrow functions are either native to the engine or transpiled by babel into ordinary functions; either way we can assume they are not slow. Remember, don't assume that some code is slow. Write code the way you normally would and measure it. If you find a problem, fix it. You don't have to prove that arrow functions are fast; someone else has to prove they are slow. Otherwise it is premature optimization.
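For comparison, here is roughly the same fragment written with an inline arrow function; the shape of stuff and thing is carried over from the snippet above, and the key prop is my own addition for completeness:

{stuff.map(thing => (
  <div key={thing.id}>{thing.whatever}</div>
))}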
As far as I know, nobody has yet profiled their app and shown that inline functions cause performance problems. Until someone does, the conversation shouldn't even come up; but in any case, here is one more thought.
If creating an inline function is expensive enough to justify a dedicated eslint rule against it, why would we want to move all of that supposedly heavy work into the component's initialization, one of the most performance-critical moments of all?
class Dashboard extends Component {
  state = { handlingThings: false }

  constructor(props) {
    super(props)
    this.handleThings = () =>
      this.setState({ handlingThings: true })
    this.handleStuff = () => { /* ... */ }
    // even more overhead from bind
    this.handleMoreStuff = this.handleMoreStuff.bind(this)
  }

  handleMoreStuff() { /* ... */ }

  render() {
    return (
      <div>
        {this.state.handlingThings ? (
          <div>
            <button onClick={this.handleStuff}>Stuff</button>
            <button onClick={this.handleMoreStuff}>More stuff</button>
          </div>
        ) : (
          <button onClick={this.handleThings}>Handle things</button>
        )}
      </div>
    )
  }
}
With this "pre-optimization" we have tripled the work done when the component is initialized. If all of the event handlers were inline functions, the initial render would only need to create one of them; instead we create all three up front. And since no measurements were taken, we have no reason to believe either version is a problem. But again, don't get carried away and move everything into inline functions either. If, inspired by this idea, someone created an eslint rule demanding inline functions everywhere to speed up the initial render, we would be facing the same harmful premature optimization.
▍PureComponent and shouldComponentUpdate
The real substance of the question is PureComponent and shouldComponentUpdate. To optimize meaningfully, you need to understand two things: how shouldComponentUpdate works, and how strict-equality comparison works in JavaScript. Without understanding them, your attempts to make the code faster can easily make it slower.
When setState is called, React compares the old element tree with the new one (this is called reconciliation) and then uses that information to update the real DOM. Sometimes that comparison can be slow if there are a lot of elements to check (something like a large SVG). For such cases React provides an escape hatch called shouldComponentUpdate.

class Avatar extends Component {
  shouldComponentUpdate(nextProps, nextState) {
    return stuffChanged(this, nextProps, nextState)
  }

  render() {
    return //...
  }
}
If a component defines shouldComponentUpdate, then before React compares the old and new elements it will ask shouldComponentUpdate whether anything has changed. If it returns false, React skips the element comparison entirely, which saves some time. If the component is large enough, the effect on performance can be noticeable.
The most common way to optimize a component is to extend React.PureComponent instead of React.Component. PureComponent compares props and state in shouldComponentUpdate for you, so you don't have to do it yourself.

class Avatar extends React.PureComponent { ... }
The Avatar class now compares props and state with strict equality before asking for an update. We can expect this to speed things up.
▍Strict equality comparison
JavaScript has six primitive types: string, number, boolean, null, undefined, and symbol. When two variables holding primitive values of the same type and the same value are compared with strict equality, the result is true. For example:

const one = 1
const uno = 1

one === uno // true
When PureComponent compares props, it uses strict comparison. That works great for inline primitive values.
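For example, a component that receives only primitive props; the component and prop names here are hypothetical, just for illustration:

<Toggler isOpen={true} label="Sidebar" />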
The problem with comparing props arises for the other types; or rather, sorry, the only other type. Everything else in JS is an Object. What about functions and arrays? They are all objects too. To quote the MDN documentation: "Functions are regular objects with the additional capability of being callable." Well, JS is JS. In any case, a strict comparison of two different objects returns false even if they hold the same values.

const one = { n: 1 }
const uno = { n: 1 }

one === uno // false
one === one // true
So if you write an object inline in JSX, the prop comparison in PureComponent can never succeed, and you fall through to the more expensive comparison of React elements, which will only discover that nothing has changed; you end up paying for two comparisons.

// first render
<Avatar user={{ id: 1 }}/>

// next render
<Avatar user={{ id: 1 }}/>

// the props comparison decides something changed, because {} !== {}
// the element comparison (reconciliation) then finds out nothing changed
Since functions are objects, and PureComponent compares props with strict equality, inline functions will always fail the props comparison, after which React falls through to comparing elements during reconciliation. You will notice this applies not just to inline functions but to plain objects and arrays as well.
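This is easy to see with two functions that have identical bodies but are distinct objects:

const add = (a, b) => a + b
const alsoAdd = (a, b) => a + b

add === alsoAdd // false: two distinct function objects
add === add     // true: same reference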
For shouldComponentUpdate to do what we expect when comparing functions, we have to maintain the referential identity of those functions. For experienced JS developers that is not a big deal, but after Michael and I trained around 3,500 people of varying experience levels, we can say it was not so simple for our students. Note that ES classes don't help here either, so you have to reach for other JavaScript features:

class Dashboard extends Component {
  constructor(props) {
    super(props)
    // Using bind? It slows down initialization and, repeated 20 times or so,
    // looks terrible.
    // It also increases the bundle size.
    this.handleStuff = this.handleStuff.bind(this)

    // _this is bad form.
    var _this = this
    this.handleStuff = function() {
      _this.setState({})
    }

    // If ES classes are available to you, you can probably use arrow
    // functions too (i.e. you are using babel or a modern browser).
    // This isn't so bad, but moving every handler into the constructor
    // isn't so great either.
    this.handleStuff = () => {
      this.setState({})
    }
  }

  // This is much nicer, but for now it sits outside the JavaScript standard,
  // so you may find yourself wondering how the TC39 committee works and
  // how it evaluates language proposals.
  handleStuff = () => {}
}
It should be said that teaching the techniques for preserving the referential identity of functions leads to surprisingly long conversations. I see no reason to ask programmers to do any of this, unless they simply want to satisfy their eslint configuration. The main point I wanted to make is that inline functions do not prevent you from optimizing. And now let me share a performance-optimization story of my own.
How I worked with PureComponent
When I first learned about PureRenderMixin (a construct from earlier versions of React that later became PureComponent), I took a lot of measurements and benchmarked my application. Then I added PureRenderMixin to every component. When I measured the optimized version, I was hoping the results would be so wonderful that I could proudly show them off to everyone. To my great surprise, the application got slower.
Why? Think about it. If you have a Component, how many comparisons happen when it updates? And what about a PureComponent? The answers are, respectively, "just one" and "at least one, sometimes two". If a component usually does change on update, PureComponent performs two comparisons instead of one (props and state in shouldComponentUpdate, and then the normal element comparison). That means PureComponent is usually slower, but sometimes faster. Most of my components changed on nearly every update, so overall the application got slower. Sad.
There is no universal answer to the question "how do I make it faster?". The answer can only come from measuring the performance of your particular application.
Three inline function usage scenarios
At the beginning of this article I showed three kinds of inline functions. Now that we have laid some groundwork, let's talk about each of them. But please remember: hold off on PureComponent until you have measurements showing that it actually helps.
▍DOM component event handlers
Event handlers for buttons, input fields, and other DOM components usually do nothing but call setState, which generally makes inline functions the cleanest approach. Instead of hunting around the file for event handlers, you find them right in the element's markup, and the React community generally welcomes this. A button (or any other DOM component) cannot even be a PureComponent, so there is no shouldComponentUpdate and no referential identity to worry about.
So this can only be considered slow if you believe that merely defining a function is a significant enough load to worry about, and there is no evidence that it is. Needlessly getting rid of inline event handlers is the familiar premature optimization.
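A minimal sketch of the kind of handler being described; the state field name is an assumption for illustration:

<input
  type="text"
  value={this.state.name}
  onChange={event => {
    // nothing but a setState call, defined right where it is used
    this.setState({ name: event.target.value })
  }}
/>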
▍ “Custom event” or “action”
<Sidebar onToggle={isOpen => {
  this.setState({ sidebarIsOpen: isOpen })
}}/>
If Sidebar is a PureComponent, this prop will fail the comparison. But again, since the handler is simple, inlining it may well still be the cleanest option.
Now, why would Sidebar compare events like onToggle in the first place? There are only two reasons to check a prop for changes in shouldComponentUpdate:
- The prop is used for rendering.
- The prop is used to produce a side effect in componentWillReceiveProps, componentDidUpdate, or componentWillUpdate.
Most props of the on… kind do not meet these requirements. So most use cases end up with unnecessary comparisons, which in turn forces developers to maintain the referential identity of their handlers for no good reason. PureComponent only needs to compare the props that can actually change. That way the handlers can live right in the element markup and everything stays fast; if we are counting comparisons for performance's sake, this approach even means fewer of them. For most components I would recommend creating a PureComponentMinusHandlers class and inheriting from it instead of inheriting from PureComponent, simply skipping all checks on function props (see the sketch below). That gets you just what you need. Well, almost what you need.
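Here is a minimal sketch of what such a base class might look like. Nothing below is part of React's API; the class name, the helper, and the decision to treat any pair of function props as equal are all assumptions made for illustration:

import React from "react"

// Behaves like PureComponent, except that props holding functions are
// skipped during the shallow comparison, so inline handlers never force
// a re-render on their own.
class PureComponentMinusHandlers extends React.Component {
  shouldComponentUpdate(nextProps, nextState) {
    return (
      !shallowEqual(this.state, nextState, false) ||
      !shallowEqual(this.props, nextProps, true)
    )
  }
}

// Shallow strict-equality check; when ignoreFunctions is true, a pair of
// function values is treated as equal.
function shallowEqual(a, b, ignoreFunctions) {
  const objA = a || {}
  const objB = b || {}
  const keys = new Set([...Object.keys(objA), ...Object.keys(objB)])
  for (const key of keys) {
    if (
      ignoreFunctions &&
      typeof objA[key] === "function" &&
      typeof objB[key] === "function"
    ) {
      continue
    }
    if (objA[key] !== objB[key]) return false
  }
  return true
}

Components that extend such a class can take inline handlers without paying for a failed props comparison.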
If you receive a function as a prop and pass it straight through to another component, it can become stale. Take a look at this:
// 1. The App will pass a prop to the Form.
// 2. The Form is going to pass a function to the Button,
//    one that closes over the prop received from the App.
// 3. The App is going to call setState after mounting and pass
//    a *new* prop to the Form.
// 4. The Form passes a new function to the Button, closing over
//    the new prop.
// 5. The Button ignores the new function and never updates its
//    click handler, so the handler it keeps is closed over
//    stale data.
class App extends React.Component {
  state = { val: "one" }

  componentDidMount() {
    this.setState({ val: "two" })
  }

  render() {
    return <Form value={this.state.val} />
  }
}
→ Here you can experiment with this code.
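The Form and Button the comments above refer to are not reproduced in the text; a minimal sketch of what they might look like follows. The component names, the console.log in place of a real submit, and the shortcut shouldComponentUpdate that skips every update are all assumptions made to reproduce the stale-handler situation, assuming the same React setup as the App above:

// Hypothetical Form: closes over props.value and hands the closure to Button.
const Form = props => (
  <Button
    onClick={() => {
      console.log("submitting", props.value)
    }}
  />
)

class Button extends React.Component {
  // Pretend we compared everything except the function props and found
  // no changes, so we skip the update.
  shouldComponentUpdate() {
    return false
  }

  // Wrapping the handler reads this.props.onClick at click time,
  // so it stays fresh even though render was skipped.
  handleClick = () => this.props.onClick()

  render() {
    return (
      <div>
        {/* stale: keeps the onClick captured on the first render */}
        <button onClick={this.props.onClick}>Stale</button>
        {/* works: goes through the wrapper above */}
        <button onClick={this.handleClick}>Fresh</button>
      </div>
    )
  }
}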
So, if you like the idea of inheriting from something like PureRenderWithoutHandlers, do not pass the handlers you skipped in the comparison straight through to other components; wrap them somehow. Now we either have to maintain referential identity or deliberately avoid relying on it. Welcome to performance optimization! At least with this approach the burden falls on the optimized component rather than on the code that uses it.
To be honest, that example app is an addition I made to this article after Andrew Clark pointed the issue out, so don't go thinking I always know exactly when to maintain referential identity and when not to.
▍The render prop
<Route
  path="/topic/:id"
  render={({ match }) => (
    <h1>{match.params.id}</h1>
  )}
/>
A render prop is a pattern for creating a component whose job is to create and manage shared state (you can read more about it here). The contents of the render prop are unknown to the component that receives it. For example:

const App = (props) => (
  <div>
    <h1>Welcome, {props.name}</h1>
    <Route path="/" render={() => (
      <div>
        {/*
          props.name lives outside the Route and is not passed to it
          as a prop, so the Route, in the spirit of PureComponent,
          has no way of knowing what will show up here after rendering.
        */}
        <h1>Hey, {props.name}, let’s get started!</h1>
      </div>
    )}/>
  </div>
)
This means that an inline function in a render prop will not cause problems with shouldComponentUpdate: the component simply does not know enough to be a PureComponent. So, once again, we have no evidence that render props are slow. Everything else is thought experiments detached from reality.
Summary
- Write your code the way you naturally would, expressing your ideas in it.
- Measure performance to find the bottlenecks (here you can learn how to do it).
- Use PureComponent and shouldComponentUpdate only when needed, skipping props that are functions (unless they are used in lifecycle hooks to produce side effects).
In conclusion: if you are against premature optimization, then you do not need proof that inline functions are harmless. To justify optimizing them away, you need evidence that they actually degrade performance.
Dear readers! How do you optimize React applications?