Mail.Ru for Business, Part 2: How it Works

    We recently launched Mail.Ru for Business. In the previous post we talked about how to set up corporate mail on Mail.Ru; today we will focus on the technologies we used to implement it. While working on the project we applied a number of technical solutions on both the server and the client side that ultimately made the service more convenient. In this post we will describe in more detail how our client side is built.

    Below the cut: the single-page architecture, error collection, templating and switching from one URL to another without reloading the page.





    Single-Page

    The interface of the Mail.Ru for Business main page is quite simple and convenient. This is where the process of adding domains begins. After you register your first domain, the admin panel becomes available to you. Taken together, the main page and the admin panel (plus other hidden pages) form one full-fledged single-page application.



    To implement the single-page application, we reused techniques that had already proven themselves in the development of Mail, Calendar and Address Book. For example, pages are rendered on the client side with the Fest template engine (more on it a bit later), and all communication with the server comes down to calling API methods.
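    A simplified sketch of that flow (the API path, template name and container id are hypothetical, purely for illustration): a request to an API method returns JSON, and a compiled Fest template turns it into markup on the client.

    // hypothetical endpoint and template name - for illustration only
    $.getJSON('/api/domains/list', function (data) {
        var html = fest['domains/list.xml'](data); // client-side templating
        $('#content').html(html);
    });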

    Some techniques, on the other hand, were applied here for the first time, and I am very glad that I managed to bring something new into the development process :)

    Multi-authorization

    Besides beauty and convenience, the single-page architecture gives us access to the perks of multi-authorization. We already have multi-authorization in the main Mail; for the new mail with domains and its nice extras it is, as they say, just what the doctor ordered. You can be signed in to your corporate and personal mailboxes at the same time and quickly switch between them, and the page does not reload when you do.



    CSS animation

    We decided to drop support for old browsers, because we believe that domain administrators are fairly advanced users. For example, we support IE only from version 8 upward. This freed our hands to use modern features, and CSS animation deserves special mention among them. As you know, users love everything on the page to be beautiful and smooth. You can, of course, implement this "beautiful and smooth" with scripts, but if the browser itself lets us organize the dynamics with standard tools, why not use it?

    Most animation work is done using CSS.

    .panel {
        …
        top: -110px;
        transition: top 0.5s;
    }
    /* showing the panel */
    .panel__show {
        top: 0;
    }
    


    A script is used only to trigger it at the right moments.

    var $panel = $('#id1')
        , $wrap = $('#id2')
        , offsetTop = $wrap.offset().top + $wrap.outerHeight()
    ;
    $(window).scroll(function() {
        // show the panel once the page is scrolled past the wrapper
        var show = $(this).scrollTop() > offsetTop;
        $panel.toggleClass('panel__show', show);
    }.throttle(200, $(window))); // .throttle is a Function.prototype helper used in the project


    But we decided to go further and provide animation even for users whose browsers do not support these wonderful CSS features. For them we do the animation with jQuery. To determine whether the browser supports the CSS goodness or not, we use the standard CSS.supports browser function:

    var cssTransitions =
        CSS.supports("transition", "all 1s 1s")
        || CSS.supports("-webkit-transition", "all 1s 1s")
        //… etc.
    ;
    ...
    if( cssTransitions ) {
        // animate with CSS
    }
    else {
        // animate with jQuery
    }
    


    CSS.supports is something like a standardized Modernizr, only native and performing one specific task: checking support for a particular CSS property/value pair. It is natively supported in Chrome 28+, Firefox 22+ and Opera 12.1+. For other browsers we use a polyfill written by one of our mail developers. This polyfill is available by default on almost every Mail.Ru project and is successfully used in a number of specific situations.
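    For illustration, here is a rough sketch of one way such a check can be implemented when CSS.supports is missing. This is not the actual Mail.Ru polyfill, just the general idea: write the declaration into a detached element's style and see whether the browser kept it.

    // A minimal sketch, NOT the production polyfill: feature-detect a
    // property/value pair via a detached element's style.
    if (!window.CSS) {
        window.CSS = {};
    }
    if (typeof window.CSS.supports !== "function") {
        window.CSS.supports = function (property, value) {
            var style = document.createElement("div").style;
            try {
                style.cssText = property + ": " + value;
            } catch (e) {
                return false; // old IE may throw on values it does not understand
            }
            // if the browser understood the declaration, it survives serialization
            return style.cssText.indexOf(property) !== -1;
        };
    }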

    Fest

    Fest is our own development. It is a general-purpose template engine with which we build pages. Fest is suitable for templating both on the client side and on the server. Its main advantage is speed: templates are "assembled" on the client really quickly. Another feature is support for modularity at the template level: we create one template and use it for many different pages or forms.

    All the forms present on the page - adding a domain, adding users, pop-ups, adding/removing an administrator - are built from one template. Thanks to Fest's versatility we can afford to draw completely different controls within the same template: popups with dropdowns, popups without dropdowns, pages with checkboxes. All of these forms are produced in one place.
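    As a rough, hypothetical illustration of that reuse (the template name, the data fields and the way compiled templates are looked up are assumptions, not the project's real code): one compiled template function renders very different forms depending on the data passed in.

    // Assumes the compiled Fest template is exposed as a function that
    // takes a data object and returns an HTML string; all names are made up.
    var formTmpl = window.fest && window.fest['form.xml'];
    var container = document.getElementById('content');

    // a popup with a dropdown
    container.innerHTML = formTmpl({
        title: 'Add administrator',
        popup: true,
        controls: [{ type: 'dropdown', items: ['admin1@example.com'] }]
    });

    // a page with checkboxes
    container.innerHTML = formTmpl({
        title: 'Add users',
        popup: false,
        controls: [{ type: 'checkbox', items: ['user1@example.com', 'user2@example.com'] }]
    });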


    History API

    As you may have noticed, we consider it good practice to use third-party work when it is good and fits our purposes: why reinvent the wheel? So in our project we actively use a History API polyfill - a fork of the library by devote. The History API greatly simplifies developers' lives. Mail.Ru for Business "magically" moves from one URL to another without reloading the page, depending on the actions the user performs. The History API lets us pass along the necessary parameters that are not shown in the UI, and voilà: a new slide is drawn. And if the user reloads the page, he ends up exactly where he was.

    How does it all work? The API gives us access to a History object, which every tab has and which lives at window.history in the object model. Using JavaScript we can manipulate the address bar and "navigate" between pages using the pushState(state, title, url) method, which adds a new URL to the browser history, and replaceState(state, title, url), which changes the current URL without adding a new entry to the history.

    The state property of the history object always holds the "state" associated with the current URL. For example, suppose we called pushState({data: "test1"}, "", "url1") and then pushState({data: "test2"}, "", "url2"), i.e. made two "transitions" between pages, each adding an entry to the browser history, and ended up on "url2". If the user now clicks "Back" in the browser and lands on "url1", the value of history.state.data will be "test1" - exactly what we need.
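    A small generic example of the standard API (not project code): on Back/Forward navigation the popstate event fires, and the state object that was passed to pushState for the URL we arrive at becomes accessible again.

    // react to Back/Forward navigation
    // (the polyfill is expected to dispatch the same event in older browsers)
    $(window).on('popstate', function () {
        var state = history.state || {};
        // e.g. after "Back" from "url2" to "url1" this logs "test1"
        console.log(state.data);
    });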

    This is especially convenient on the main page: if the user wanted to go to a very specific address but is not authorized at the moment, we pass a state object containing the desired address to pushState, and after authorization we redirect him to where he wanted to go - this way he won't have to type the address in by hand again. All the transition logic runs in the browser, without reloading the page, and the page address looks "human", without hashes ("#").

    // determine the current URL
    var path = history.location.pathname; // history.location – from the polyfill
    if( /* the user is not authorized */ ) {
        var state = {
            notAuth: true
            , urlBack: (path != "/") ? path : ""
        };
        // remember the user's current state and
        // go to the main page, rendering the corresponding slide
        history.pushState( state, null, "/");
    }
    


    Then, after authorization, we figure out where the user wanted to go and send him to his destination.
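    A sketch of that last step, assuming the state object saved in the snippet above (an illustration, not the exact project code):

    // after successful authorization
    var state = history.state || {};
    if (state.notAuth && state.urlBack) {
        // replaceState so that "Back" does not return the user to the login slide
        history.replaceState(null, null, state.urlBack);
        // ...render the slide for the requested address here
    }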

    RequireJS vs SingleFile

    A large single-page application always means a lot of js modules. Some modules are common functionality that is needed almost everywhere, others are needed only on particular pages. When developing fairly complex functionality, the number of js files grows very quickly, and the developer constantly has to think about dependencies and the order in which js files are included. Make a mistake in the order, or forget to include a needed file, and you can get an error in the most unpredictable place. The AMD API solves exactly this problem: the developer only needs to list the dependencies and declare the modules in a special way, and RequireJS takes care of loading those modules in the correct order.
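    A minimal AMD module declaration (the module names and the api helper are made up for illustration): the dependencies are listed up front, and the factory function runs only after all of them have loaded.

    // hypothetical module names, not the project's real files
    define('pages/admin', ['jquery', 'core/api'], function ($, api) {
        return {
            render: function (container) {
                api.getDomains(function (domains) { // api.getDomains is assumed
                    $(container).text('Domains: ' + domains.length);
                });
            }
        };
    });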

    But it would be wrong to make the user wait for many files to download every time he opens our application. Therefore we use RequireJS only during development, and in production we serve a "build" - one js file containing all the modules the application needs. The build is produced by the r.js library, which is part of the RequireJS project.
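    For reference, an r.js optimizer config might look roughly like this (the paths and module name are placeholders, not our real layout):

    // build.js - run with: node r.js -o build.js
    ({
        baseUrl: "js",                 // where the source modules live
        mainConfigFile: "js/main.js",  // reuse the app's require.config paths
        name: "main",                  // entry module that pulls in the rest
        out: "build/all.js",           // the single file served in production
        optimize: "uglify"             // minify the result
    })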

    It is clear that in most cases the full set of modules is not needed, and it may seem that some of them are loaded needlessly, increasing the amount of data sent to the browser. But that is only at first glance. In practice, gzip compression of the served js files on the server practically eliminates this problem, and on top of that we save on http requests. As a result, one large js file loads faster than several small ones, even allowing for the fact that the small files would be downloaded "lazily", i.e. only when needed. For a detailed review of the issue I suggest reading the article Download and initialize JavaScript, or do your own research on the topic.

    Project assembly

    Things like the js build described in the previous section should not be done by hand. Before release, in addition to bundling the js files, you need to compile CSS from the scss files, "compile" the fest templates and feed the result to the js build, update the version number in the configs, and so on. All of this had to be automated somehow.

    We decided to use Grunt, a general-purpose task runner written in js for nodejs. It lets a js developer write tasks that build the project on the server. There is a whole catalogue of ready-made tasks for Grunt, from which we took, for example, grunt-contrib-sass and grunt-contrib-requirejs. The necessary Grunt tasks are started automatically on git push.
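    A simplified Gruntfile along these lines might look as follows (the paths and options are placeholders; the real build does more):

    // Gruntfile.js - a sketch using the two plugins mentioned above
    module.exports = function (grunt) {
        grunt.initConfig({
            sass: {
                dist: {
                    files: { 'css/main.css': 'scss/main.scss' }
                }
            },
            requirejs: {
                compile: {
                    options: {
                        mainConfigFile: 'js/main.js',
                        name: 'main',
                        out: 'build/all.js'
                    }
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-sass');
        grunt.loadNpmTasks('grunt-contrib-requirejs');

        grunt.registerTask('default', ['sass', 'requirejs']);
    };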

    Error collection

    It is useful for a web developer to know about errors on the client side. It would be even cooler to collect them somewhere, store them and then fix them. We use the third-party Sentry platform for this.

    Sentry is an open source project that works with a host of languages and platforms. It allows you to track both server-side and client-side errors and compile statistics on them. The platform is also used, for example, at Mozilla and Instagram.

    Sentry can send emails describing the errors. You don't have to worry that repeated reports about minor issues will flood your inbox: which messages are worth bothering the developer about is easily configured.

    Using the dashboard, we can not only track errors in real time, but also filter them according to a variety of parameters - from the source of the bug to the status that we ourselves assigned to it.

    Sentry is especially indispensable for catching exotic bugs that we cannot reproduce ourselves. You will agree that with the offending line of code and a description of the error (including how often it occurs and in which browsers) in front of you, it is much easier to understand what went wrong.

    However, a plain error description and line number may not be enough. For some exotic or hard-to-reproduce errors you need more information to analyze them. For example, for a "$ is not defined" error you need to know whether jQuery was loaded at all. To collect browser errors we use the Raven.js library, which has one undocumented feature: it can call a dataCallback handler before each error is sent to the server:

    // remember the setTimeout reference at config time (it may be wrapped later)
    var ravenSetTimeout = typeof setTimeout === 'function' && setTimeout;
    var options = {
        // called before each error is sent to the server
        dataCallback: function(obj){
            var userData;
            if( typeof obj === "object" ) {
                userData = obj["sentry.interfaces.User"];
                if( !userData ) {
                    userData = obj["sentry.interfaces.User"] = {};
                }
                // are we running inside an iframe?
                userData.inFrame = window.parent !== window;
                // jQuery version, if jQuery is loaded at all
                userData.jQueryVersion = typeof jQuery === 'function' && jQuery.fn.jquery;
                // has setTimeout been replaced (by Raven or something else)?
                userData.setTimeoutBody =
                  typeof setTimeout === 'function'
                  && (setTimeout + "").indexOf("[native code]") !== -1
                    ? "[native]"
                    : ravenSetTimeout
                        ? setTimeout === ravenSetTimeout ? "[raven]" : "[other]"
                        : "[none]"
                ;
            }
            return obj;
        }
    };
    Raven.config("…", options).install();
    


    In this example we record whether we are running inside an iframe, the jQuery version (if it is loaded at all) and the body of setTimeout (because it may have been replaced). In reality we collect a bit more statistical information to diagnose errors correctly. The value of the "sentry.interfaces.User" property is displayed in a special field in the Sentry interface.

    Discuss

    This is how our Mail for Business works. I hope it will delight users with its beauty, convenience and functionality. You can read more about Mail.Ru for Business here. Let me remind you that the service is currently in beta, and we will be glad to receive your feedback and suggestions at feedback@biz.mail.ru or in the comments to this post.

    We will be pleased to provide any support to Habr users who want to try our new service.

    Olga Alekseeva,
    Egor Halimonenko ( termi )
    Mail frontend development group
