Testing mysql-proxy on a small scale

    mysql-proxy is a near-ideal solution for those who need cheap sharding without rewriting the application, as well as for hosting providers who find it hard to rein in badly written clients :) but would still like to keep MySQL alive or reduce the load on the DBMS without unnecessary back-and-forth with the clients.

    So, the task is to make life easier for the MySQL server. An affordable solution is master-slave replication. Everything is great when there are programmers who can rewrite the application to use several DBMS servers, but what if there are none? This is where mysql-proxy comes to the rescue: working transparently for the client, it can proxy requests to several slave and master servers.
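
    For illustration, here is a minimal sketch of how such a setup can be started from the command line; the host names here are placeholders, and the actual options used in this test are shown in the config at the end of the post:

    mysql-proxy \
      --proxy-address=localhost:3309 \
      --proxy-backend-addresses=master1:3306 \
      --proxy-read-only-backend-addresses=slave1:3306 \
      --proxy-read-only-backend-addresses=slave2:3306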

    Below is a description of the tests.

    I will skip the deployment of master-slave replication itself; there is plenty of documentation on the topic. I would only recommend ignoring the guides that advise copying table files by hand: mysqldump has had the --master-data option for a long time.
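
    For reference, a consistent dump that already records the master's binlog coordinates can be taken roughly like this (a sketch, not the exact command used for this test; the file name is arbitrary):

    # the resulting dump begins with a CHANGE MASTER TO statement
    # pointing at the right binlog file and position
    mysqldump --master-data=1 --single-transaction --all-databases > master_dump.sql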

    So, master-slave replication was deployed with one master and three slaves. The task is to check the mysql-proxy overhead and failover support.

    Two scripts were used for the check: the first inserted 1000 rows into a table without indexes, the second read the table back. The query cache was excluded, each script was run several times, and average values were taken.
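
    The post does not say exactly how the cache was excluded; one common way (an assumption on my part) is to switch the query cache off on the server for the duration of the test, or to add SQL_NO_CACHE to the test query:

    # disable the MySQL query cache at runtime (applies to pre-8.0 servers)
    mysql -e "SET GLOBAL query_cache_type = OFF; SET GLOBAL query_cache_size = 0;"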

    Several tests were performed.

    Test 1 - insert into the database
    a) directly into the database (this result is taken as 100%) - 100%
    b) through mysql-proxy (one master) - 69%

    Test 2 - select from the database
    a) directly from the database (taken as 100%) - 100%
    b) through mysql-proxy (a single slave) - 75%

    Test 3 - select from the database
    a) directly from the database (taken as 100%) - 100%
    b) through mysql-proxy (three slaves) - 70%

    Test 4 - select from the database
    a) directly from the database (taken as 100%) - 100%
    b) through mysql-proxy (two of the three slaves were taken down to test failover) - 8%

    Graph: results of tests 1-4 relative to the direct connection (100%)
    The test scripts were invoked with
    ab -n 1000 -c 1 script
    and
    ab -n 1000 -c 10 script
    with a one-minute pause between runs; extreme values were discarded, and the average for the connection without mysql-proxy was taken as 100%.

    Test 1 was performed with the following script:
    <?php
    // 1000 inserts, each over its own connection
    for ($i = 1; $i <= 1000; $i++) {
        $link = new mysqli("localhost", "login", "pass", "base");
        $link->query("INSERT INTO `test` SET ...");
        mysqli_close($link);
    }
    ?>


    A TRUNCATE of the test table was done before each run of this script.

    Tests 2-4 were performed with the following script:
    <?php
    for ($i = 1; $i <= 1000; $i++) {
        $link = new mysqli("localhost", "login", "pass", "base");
        $q = $link->query("SELECT * FROM `test`");
        // fetch and print every row
        while ($r = $q->fetch_assoc()) {
            print_r($r);
        }
        mysqli_close($link);
    }
    ?>


    Conclusions:
    Test 1 raises no questions, and tests 2 and 3 would look completely different under a real load with complex queries and indexes, which is also understandable; the result of test 4, however, leaves me somewhat bewildered.

    Mysql-proxy config in test 4:

    cat /etc/default/mysql-proxy
    ENABLED="true"
    OPTIONS="
    --proxy-address=localhost:3309
    --proxy-read-only-backend-addresses=workserver1:3306
    --proxy-read-only-backend-addresses=downserver1:3306
    --proxy-read-only-backend-addresses=downserver2:3306
    --proxy-backend-addresses=workserver1:3306
    --proxy-skip-profiling
    "

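    The post does not describe how the two "down" slaves were disabled, so this is only a guess at how such a failover scenario is usually simulated (host names are the ones from the config above):

    # on downserver1 and downserver2: stop mysqld so that mysql-proxy
    # has to detect the dead backends and route reads to the remaining one
    /etc/init.d/mysql stop
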
    What are your thoughts?
