JavaScript: The Big Integer, Well, Why?


    Not so long ago, JavaScript gained a new primitive data type, BigInt, for working with arbitrary-precision integers. The essential minimum about its motivation and use cases has already been written and translated elsewhere. Here I would like to pay a little more attention to the much-discussed local "explicitness" of type coercion and the unexpected TypeError. Shall we scold, or understand and forgive (again)?

    Implicit becomes explicit?


    In a language where implicit type coercion has long been the norm and has become a meme of almost any conference, few people are surprised by intricacies such as:

    1 + {}; // '1[object Object]'
    1 + [[0]]; // '10'
    1 + new Date; // '1Fri Feb 08 2019 00:32:57 GMT+0300 (Moscow Standard Time)'
    1 - new Date; // -1549616425060
    ...
    

    We suddenly get a TypeError when trying to add two seemingly ordinary numbers:

    1 + 1n; // TypeError: Cannot mix BigInt and other types, use explicit conversions
    

    And if previous experience with this implicitness did not make you give up on learning the language, here is a second chance to break down, throw out the ECMA textbook and leave for some Java.

    Further, the language continues to "troll" JS developers:

    1n + '1'; // '11'
    

    Oh yes, and do not forget about the unary + operator:

    +1n; // TypeError: Cannot convert a BigInt value to a number
    Number(1n); // 1
    

    In short, we cannot mix BigInt and Number in operations. Consequently, it is recommended to avoid "big integers" whenever 2^53 - 1 (Number.MAX_SAFE_INTEGER) is enough for our purposes.
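    That rule of thumb can be sketched as follows. The helper name parseIntExact is made up for illustration; the boundary behavior itself is standard:

```javascript
// Number arithmetic is exact only up to 2**53 - 1 (Number.MAX_SAFE_INTEGER).
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991

// Beyond that boundary Number silently loses precision...
console.log(9007199254740992 + 1); // 9007199254740992

// ...while BigInt keeps every digit.
console.log(9007199254740992n + 1n); // 9007199254740993n

// A hypothetical helper: keep Number while the value survives a lossless
// round trip, otherwise stay in BigInt.
function parseIntExact(str) {
  const big = BigInt(str);
  const num = Number(big);
  return Number.isSafeInteger(num) ? num : big;
}

console.log(typeof parseIntExact('42'));               // 'number'
console.log(typeof parseIntExact('9007199254740993')); // 'bigint'
```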

    Key decision


    Yes, this was the key decision behind this addition. If you forget for a moment that this is JavaScript, everything is quite logical: such implicit conversions would lead to loss of information.

    When we add two values of different numeric types (big integers and floating-point numbers), the mathematical value of the result may lie outside the range of either type. For example, the value of the expression (2n ** 53n + 1n) + 0.5 cannot be represented exactly by either of them. It is no longer an integer but a real number, and its precision is no longer guaranteed by the float64 format:

    2n ** 53n + 1n; // 9007199254740993n
    Number(2n ** 53n + 1n) + 0.5; // 9007199254740992
    

    In most dynamic languages that have both integer and float types, the former are written as 1 and the latter as 1.0. So, from the presence of a decimal separator in an operand of an arithmetic operation, one can conclude that float precision is acceptable for the calculation. But JavaScript is not one of them, and 1 is a float! That means evaluating 2n ** 53n + 1 would return the float 2^53, which in turn would break the key feature of BigInt:

    2 ** 53 === 2 ** 53 + 1; // true
    
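    A quick check of how the same expression behaves in the two worlds:

```javascript
// Number collapses 2**53 and 2**53 + 1 into the same value...
console.log(2 ** 53 === 2 ** 53 + 1); // true

// ...while BigInt keeps them distinct, which is exactly its purpose.
console.log(2n ** 53n === 2n ** 53n + 1n); // false
console.log(2n ** 53n + 1n); // 9007199254740993n
```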

    Well, there is no reason to talk about implementing a "numerical tower" either: making the existing Number the general numeric supertype will not work, for the same reason.

    And to avoid this problem, implicit casts between Number and BigInt in operations were prohibited. As a result, a "big integer" cannot be safely passed to any JavaScript or Web API function where an ordinary number is expected:

    Math.max(1n, 10n); // TypeError
    

    You must explicitly choose one of the two types using Number() or BigInt().
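    A small sketch of such an explicit choice. The maxBigInt helper is a made-up name for illustration, since Math.* only works with Number:

```javascript
// Option 1: convert down to Number (safe only while values fit in 2**53 - 1).
console.log(Math.max(Number(1n), Number(10n))); // 10

// Option 2: stay in BigInt and roll your own helper.
const maxBigInt = (...xs) => xs.reduce((a, b) => (b > a ? b : a));
console.log(maxBigInt(1n, 10n, 3n)); // 10n
```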

    In addition, for mixed-type operations there is the usual argument about implementation complexity or performance loss, which is quite typical of compromise language features.

    Of course, this applies to implicit numerical conversions with other primitives:

    1 + true; // 2
    1n + true; // TypeError
    1 + null; // 1
    1n + null; // TypeError
    
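    If such a mixed expression is really needed, the conversion has to be spelled out. Note that BigInt() is stricter than Number(): it rejects null outright:

```javascript
// Spelling the conversion out makes the intent visible:
console.log(1n + BigInt(true)); // 2n
console.log(Number(1n) + null); // 1

// BigInt() does not accept null at all, unlike Number(null) === 0.
try {
  BigInt(null); // throws
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```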

    But the following expressions will work, since they are (now) concatenations and the expected result is a string:

    1n + [0]; // '10'
    1n + {}; // '1[object Object]'
    1n + (_ => 1); // '1_ => 1'
    
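    String conversion of a BigInt is always lossless, which is why these are allowed. The explicit forms work as well:

```javascript
console.log(String(1n)); // '1'
console.log(`${1n}`);    // '1'
console.log((255n).toString(16)); // 'ff', radix conversions work too
console.log(1n + ''); // '1', implicit but lossless, hence permitted
```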

    Another exception is the comparison operators (such as <, > and ==) between Number and BigInt. There is no loss of precision here either, since the result is a Boolean.
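    A few comparisons to illustrate: loose equality compares mathematical values, while strict equality still distinguishes the types:

```javascript
console.log(1n == 1);  // true: same mathematical value
console.log(1n === 1); // false: different types
console.log(2n > 1.5); // true: relational operators mix freely

// Mixed arrays can even be sorted via <, > comparisons:
const mixed = [1n, 4, 2n, 3].sort((a, b) => (a < b ? -1 : a > b ? 1 : 0));
console.log(mixed); // [ 1n, 2n, 3, 4 ]
```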

    Well, if you recall the previous new primitive data type, Symbol, doesn't the TypeError stop looking like such a radical addition?

    Symbol() + 1; // TypeError: Cannot convert a Symbol value to a number
    

    And yes, but no. Indeed, conceptually a Symbol is not a number at all, while a "big integer" very much is:

    1. It is highly unlikely that a Symbol will end up in such a situation; if it does, something is very suspicious, and a TypeError is quite appropriate.
    2. It is quite likely and perfectly normal for a "big integer" to be one of the operands, with nothing actually wrong.

    The unary + operator throws an exception because of a compatibility problem with asm.js, where a Number is expected. Unary plus cannot work with BigInt the way it does with Number, since that would make existing asm.js code ambiguous.
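    In asm.js, +x serves as a type annotation meaning "this value is a double"; if +x could also produce a BigInt, that annotation would stop being unambiguous. A rough illustration of the idiom (toDouble is an illustrative name, not real asm.js):

```javascript
// In asm.js-style code, unary plus coerces/annotates a value as a double:
function toDouble(x) {
  return +x; // in pre-BigInt JavaScript this always yields a Number
}

console.log(toDouble('2.5')); // 2.5
console.log(toDouble(true));  // 1

// With BigInt the coercion must be written explicitly instead:
console.log(Number(1n) + 0.5); // 1.5
try {
  toDouble(1n); // +1n throws: ToNumber is forbidden for BigInt
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```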

    An alternative proposal


    Despite the relative simplicity and "cleanliness" of the BigInt implementation, Axel Rauschmayer points out a shortcoming of this innovation: namely, its only partial backward compatibility with the existing Number, and the resulting rule:
    Use Numbers for up to 53-bit ints. Use Integers if you need more bits
    As an alternative, he proposed the following.

    Let Number become the supertype for the new Int and Double:

    • typeof 123.0 === 'number', and Number.isDouble(123.0) === true
    • typeof 123 === 'number', and Number.isInt(123) === true

    With new functions Number.asInt() and Number.asDouble() for conversions. And, of course, with operator overloading and the necessary casts:

    • Int × Double = Double (with a cast)
    • Double × Int = Double (with a cast)
    • Double × Double = Double
    • Int × Int = Int (all operators except division)

    Interestingly, in its simplified version this proposal manages (at first) without adding new types to the language. Instead, the definition of The Number Type is extended: in addition to all possible 64-bit double-precision numbers (IEEE 754-2008), Number now includes all integers. As a result, the "inexact number" 123.0 and the "exact number" 123 are distinct values of the single Number type.

    It looks very familiar and reasonable. However, it is a serious upgrade of the existing Number, which is far more likely to "break the web" and its tooling:

    • There is a difference between 1 and 1.0 that did not exist before. Existing code uses them interchangeably, which after the upgrade can lead to confusion (unlike languages where this distinction was present from the start).
    • There is the effect where 1 === 1.0 (backward compatibility requires it), and yet Number.isDouble(1) !== Number.isDouble(1.0): again, confusing.
    • The "peculiarity" of the equality of 2^53 and 2^53 + 1 disappears, which would break code that relies on it.
    • The same compatibility issue with asm.js, and more.
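    Today 1 and 1.0 are literally the same value, which is what makes retrofitting this distinction so breaking. A quick demonstration:

```javascript
console.log(1 === 1.0);         // true: literally the same value
console.log(Object.is(1, 1.0)); // true: even at the strictest level
console.log((1).toString() === (1.0).toString()); // true: they print alike
// So a future Number.isDouble(1) !== Number.isDouble(1.0) would have to
// distinguish values that are currently identical in every observable way.
```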

    Therefore, in the end we have a compromise solution in the form of a new, separate data type. It is just worth emphasizing that another option was considered and discussed as well.

    Sitting on two chairs


    Actually, the committee’s comment begins with the words:
    Find a balance between maintaining user intuition and preserving precision

    On the one hand, the committee finally wanted to add something "exact" to the language. On the other, it wanted to keep behavior that many developers already find familiar.

    It is just that this "exact" thing cannot simply be added, because too much cannot be broken: mathematics, the ergonomics of the language, asm.js, the possibility of extending the type system further, performance and, ultimately, the web itself. And keeping all of that intact at once inevitably leads to the same compromise.

    Nor can you break the intuition of the language's users, which, of course, was also hotly debated. But did it really work out?
