ECMAScript 6

Original author: Guillermo Rauch
The limits of my language mean the limits of my world.
- Ludwig Wittgenstein

Over the past few months, I have been writing exclusively ECMAScript 6 code, using transformation [1] into the currently supported JavaScript versions.

ECMAScript 6, hereafter ES6 and formerly known as Harmony, is the upcoming version of the specification. As of August 2014, new features are no longer being discussed, but details and edge cases are still being ironed out. The standard is expected to be finalized and published in mid-2015.

Adopting ES6 has simultaneously improved my productivity (my code is more concise) and eliminated a whole class of bugs by removing common JavaScript pitfalls.

Moreover, it reinforced my belief in an evolutionary approach to language and software design, as opposed to clean-slate reinvention.

This should be fairly obvious if you have used CoffeeScript, which embraces the good parts of JS and hides the bad ones. ES6 absorbed so many of CoffeeScript's innovations that some even question whether the latter should continue to be developed.

Instead of a thorough analysis of the new features, I'll talk about the most interesting ones. To encourage developers to switch, new languages and frameworks should (1) have a convincing compatibility story and (2) offer a fairly large carrot.

# Module Syntax

ES6 introduces syntax for defining modules and declaring dependencies. I emphasize the word syntax because ES6 does not concern itself with how modules are actually resolved or loaded.

This improves interoperability between the various contexts in which JavaScript can run.

As an example, consider the simple task of writing a reusable CRC32 function in JavaScript.

So far, there have been no recommendations on how to actually solve this problem. A general approach is to declare a function:

function crc32(){
  // …
}
The caveat, of course, is that this introduces a single fixed global name that other parts of the code must reference. And from the perspective of the code that uses crc32, there is no way to declare the dependency. Once declared, the function exists for as long as the program runs.

Node.JS pioneered a solution here by introducing the require function and the module.exports and exports objects. Despite the success of the resulting thriving module ecosystem, interoperability remained somewhat limited.

A typical scenario that illustrates these shortcomings is generating module bundles for the browser with tools such as browserify or webpack. These tools have to treat require() as syntax, effectively stripping it of its inherent dynamism.

The following example is not amenable to static analysis, so if you try to bundle this code for the browser, it will break:

require(woot() + '_module.js');

In other words, the bundler's algorithm cannot know in advance what woot() evaluates to.

ES6 introduces the right set of constraints, informed by most existing use cases, while drawing inspiration even from the most informal of ad-hoc module systems, like jQuery's $.

The syntax requires some getting used to. The most common dependency pattern is surprisingly impractical.

The following code:

import crc32 from 'crc32';

works for

export default function crc32(){}

but not for

export function crc32(){}

The latter is considered a named export and requires the {} syntax in the import statement:

import { crc32 } from 'crc32';

In other words, the simplest (and arguably most desirable) way to define a module requires the extra keyword default; without it, you must use {} when importing.

# Destructuring

One of the most common patterns to emerge in modern JavaScript code is the options object.

This practice is widely used in new browser APIs, for example in WHATWG fetch (a modern replacement for XMLHttpRequest):

fetch('/users', {
  method: 'POST',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    first: 'Guillermo',
    last: 'Rauch'
  })
});

The widespread adoption of this pattern has effectively saved the JavaScript ecosystem from falling into the boolean trap.

If the API accepted plain positional arguments instead of an options object, calling fetch would become an exercise in remembering the order of the arguments and inserting null in the right places.

// a nightmare example from an alternate universe
fetch('/users', 'POST', null, null, {
  Accept: 'application/json',
  'Content-Type': 'application/json'
}, null, JSON.stringify({
  first: 'Guillermo',
  last: 'Rauch'
}));

On the implementing side, however, things don't look as pretty. Looking at the function declaration, its signature no longer describes what input it accepts:

function fetch(url, opts){
  // …
}

This is usually accompanied by manually assigning default values to local variables:

opts = opts || {};
var body = opts.body || '';
var headers = opts.headers || {};
var method = opts.method || 'GET';

Unfortunately, despite its prevalence, the || idiom introduces hard-to-detect bugs. In this case, for example, it doesn't account for opts.body legitimately being 0, so robust code would have to look like this:

var body = opts.body === undefined ? '' : opts.body;
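A quick runnable sketch of the difference (the function and option names are illustrative):

```javascript
function readBody(opts){
  opts = opts || {};
  var loose = opts.body || '';                          // wrong: discards any falsy value
  var safe = opts.body === undefined ? '' : opts.body;  // right: only defaults on undefined
  return { loose: loose, safe: safe };
}

console.log(readBody({ body: 0 }).loose); // '' — the 0 was silently dropped
console.log(readBody({ body: 0 }).safe);  // 0
```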

With destructuring, we can concisely declare the parameters, assign correct default values, and bind them in the local scope all at once:

function fetch(url, { body = '', method = 'GET', headers = {} }){
  console.log(method); // no opts. prefix needed
}

A default value can even be applied to the options object as a whole:

function fetch(url, { method = 'GET' } = {}){
  // the second parameter defaults to {}
  // fetch('/') logs "GET":
  console.log(method);
}

Destructuring also works with the assignment operator:

var { method, body } = opts;
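A small runnable sketch (the shape of opts is illustrative); defaults and renaming compose with the same pattern:

```javascript
var opts = { method: 'POST', body: 'hello' };

// pull several properties out in one statement
var { method, body } = opts;

// `=` supplies a default for a missing property, `:` renames the binding
var { mode = 'cors', method: verb } = opts;

console.log(method, body); // 'POST' 'hello'
console.log(mode, verb);   // 'cors' 'POST'
```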

This reminds me of the expressiveness that with provided, but without the magic or the downsides.

# New conventions

Some parts of the language have simply been superseded by better alternatives, which will quickly become the new standard way to write JavaScript.

I will talk about some of them.

# let / const instead of var

Instead of writing var x = y, you will most likely now write let x = y. let declares variables with block scope:

if (foo) {
  let x = 5;
  setTimeout(function(){
    // here x is `5`
  }, 500);
}
// here x is not defined

This is especially useful in for and while loops:

for (let i = 0; i < 10; i++) {}
// `i` does not exist here.
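The classic closure-in-a-loop bug shows why this matters; a runnable sketch:

```javascript
// with `var`, all three callbacks close over the same variable
var withVar = [];
for (var i = 0; i < 3; i++) {
  withVar.push(function(){ return i; });
}
console.log(withVar.map(function(f){ return f(); })); // [ 3, 3, 3 ]

// with `let`, each iteration gets a fresh binding
var withLet = [];
for (let j = 0; j < 3; j++) {
  withLet.push(function(){ return j; });
}
console.log(withLet.map(function(f){ return f(); })); // [ 0, 1, 2 ]
```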

Use const when you want immutability, with otherwise the same semantics as let.
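Note that const freezes the binding, not the value it points to; a minimal sketch:

```javascript
const limit = 10;
// limit = 20; // would throw: TypeError: Assignment to constant variable.

const config = { retries: 3 };
config.retries = 5; // allowed: the object itself remains mutable
console.log(limit, config.retries); // 10 5
```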

# template strings instead of concatenation

Given the lack of sprintf or similar utilities in JavaScript's standard library, building strings has always been more painful than it should be.

Template strings make embedding expressions in strings trivial, and they also support multiple lines. Just replace ' with `:

let str = `
  Hello ${first}.
  It is the year ${new Date().getFullYear()}
`;

# classes instead of prototypes

Defining a class used to be a cumbersome operation that required deep knowledge of the language's internals. Even though the benefits of understanding those internals are clear, the entry barrier for beginners was unreasonably high.

class offers syntactic sugar for defining constructor functions, prototype methods, and getters/setters. It also provides prototypal inheritance with built-in syntax (no extra libraries or modules needed).

class A extends B {
  get prop(){ /* … */ }
  set prop(value){ /* … */ }
}

I was initially surprised to learn that classes are not hoisted. You should therefore think of them as var A = function(){} rather than function A(){}.
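A runnable sketch of the difference (the names are made up for illustration):

```javascript
// function declarations are hoisted: callable before they appear
console.log(hoisted()); // 'ok'
function hoisted(){ return 'ok'; }

// class declarations are not: using one early throws a ReferenceError
var threw = false;
try {
  new Late();
} catch (err) {
  threw = err instanceof ReferenceError;
}
console.log(threw); // true
class Late {}
```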

# () => instead of function

Not only is (x, y) => {} shorter to write than function (x, y) {}, but this inside the function body will most likely refer to what you actually want.

So-called "fat arrow" functions are lexically bound: they inherit this from the enclosing scope. Consider a method inside a class that starts two timers:

class Person {
  constructor(name){
    this.name = name;
    setTimeout(function(){
      console.log(this.name);
    }, 100);
    setTimeout(() => {
      console.log(this.name);
    }, 100);
  }
}

To a novice's horror, the first timer (which uses function) logs undefined. But the second one correctly logs the name.

# First-class support for async I/O

Asynchronous code execution has been with us for almost the entire history of the language; setTimeout, after all, was introduced around the time JavaScript 1.0 came out.

Until now, though, the language had no real first-class notion of asynchrony. The return value of a function call scheduled to run in the future is typically undefined or, in the case of setTimeout, a Number.

The introduction of Promise fills a very large gap in interoperability and composability.

On the one hand, APIs become more predictable. As a test, consider the new fetch API with the signature we just described: what does it return? You guessed it: a Promise.

If you have used Node.JS in the past, you know about the informal convention that callbacks follow this signature:

function (err, result){}

It is also only informally understood that callbacks should be called exactly once, and that null (not undefined or false) is the value passed when there is no error. Except, of course, that this is not always the case.
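Bridging the node-style convention into a Promise is mechanical; a sketch (promisify and addLater are illustrative names, not a standard API):

```javascript
// wrap any function taking a trailing (err, result) callback in a Promise
function promisify(fn){
  return function(...args){
    return new Promise(function(resolve, reject){
      fn(...args, function(err, result){
        if (err) reject(err);   // a non-null first argument signals failure
        else resolve(result);   // the second argument carries the value
      });
    });
  };
}

// a stand-in for a node-style async API
function addLater(a, b, cb){
  setTimeout(function(){ cb(null, a + b); }, 10);
}

promisify(addLater)(2, 3).then(function(sum){
  console.log(sum); // 5
});
```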

# Forward to the future

ES6 is gaining considerable momentum in the ecosystem. Chrome and io.js have already shipped some ES6 functionality, and much has been written about it.

It is worth noting, though, that this popularity owes much to the availability of transformation tools rather than to native support. Great tools emerged to transform and emulate ES6, and over time browsers added support for debugging the generated code and catching errors (using source maps).

The evolution of the language and its proposed functionality run ahead of the implementations. As mentioned above, Promise is interesting in its own right as a building block that solves the callback hell problem once and for all.

The proposed ES7 standard takes this further by introducing async functions built on top of Promise:

async function uploadAvatar(){
  let user = await getUser();
  user.avatar = await getAvatar();
  return await user.save();
}

Although this part of the specification is still being discussed, the same tooling that compiles ES6 down to ES5 already implements it.
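A self-contained sketch of the same shape, with the asynchronous steps stubbed out (getUser, getAvatar, and save are placeholders, not real APIs):

```javascript
// stubs standing in for real asynchronous I/O
const getUser = () => Promise.resolve({ name: 'Guillermo' });
const getAvatar = () => Promise.resolve('avatar.png');
const save = user => Promise.resolve(user);

async function uploadAvatar(){
  let user = await getUser();      // execution pauses until the promise settles
  user.avatar = await getAvatar();
  return await save(user);         // an async function always returns a Promise
}

uploadAvatar().then(user => console.log(user.avatar)); // 'avatar.png'
```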

There is still a lot of work to be done to make adopting the new syntax and APIs even smoother for those who are just getting started.

But one thing is certain: we must embrace this future.

1.^ I use the word "transformation" in this article to describe source-to-source compilation into JavaScript, though the meaning of this term is technically contested.
