
Purely unscientific: Tarantool 1.6 vs Golang (speed)
I recently read about Tarantool and got curious. The idea is appealing: your code lives right next to the data in a fast, Redis-like environment.
It got me thinking. At work we now actively use Go, and a lot has been written in Go, including embedded databases. So what happens if we put Go + LevelDB (it could really be any other store) up against Tarantool? I also tried Go + RocksDB, but things turned out to be a bit more involved there, and on small data the result is about the same.
The test task is simple: an HTTP server that, on each request, writes a key to the database, reads it back by key (without any race checks), and returns a simple JSON built from that value.
Compared: Go + LevelDB, Tarantool, Go + go-tarantool, and nginx upstream (tnt_pass).
Looking ahead: in my unscientific test Go + LevelDB won, simply because it uses all CPU cores. Launching several Tarantool instances behind a balancer would most likely claw some of that back, though probably not a dramatic amount, and at that point you would already need replication or something along those lines.
But overall, Tarantool is a very impressive thing.
Please note: I am comparing one very specific case; it does not follow that Go/LevelDB wins (or loses) in all other cases.
And one more thing: instead of LevelDB it is probably better to use RocksDB.
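For the curious, here is roughly what the same handler could look like on top of RocksDB. This is only a sketch, assuming the tecbot/gorocksdb cgo binding; the binding choice, port 8082 and the rocks.db path are illustrative and not the code from my RocksDB run:
package main

import (
	"encoding/json"
	"io"
	"net/http"

	"github.com/tecbot/gorocksdb"
)

var (
	db *gorocksdb.DB
	wo = gorocksdb.NewDefaultWriteOptions()
	ro = gorocksdb.NewDefaultReadOptions()
)

func hello(w http.ResponseWriter, r *http.Request) {
	if err := db.Put(wo, []byte("foo"), []byte("bar")); err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	res, err := db.Get(ro, []byte("foo"))
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	defer res.Free() // Get returns a C-allocated slice that has to be freed
	result, _ := json.Marshal(string(res.Data()))
	w.Write(result)
}

func main() {
	opts := gorocksdb.NewDefaultOptions()
	opts.SetCreateIfMissing(true)
	var err error
	db, err = gorocksdb.OpenDb(opts, "rocks.db")
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/", hello)
	http.ListenAndServe("127.0.0.1:8082", nil)
}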
So, the results (briefly).
4-10 = 4 threads, 10 simultaneous connections
10-100 = 10 threads, 100 connections
Please note that Tarantool occupies only one CPU thread (in practice roughly two), and the test ran on a CPU with 4 hardware threads. Go uses all cores and threads by default.
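By the way, for a more apples-to-apples run you could pin Go to a single core; just a sketch of that knob (the tests below were run with the default):
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Since Go 1.5, GOMAXPROCS defaults to the number of CPUs;
	// calling GOMAXPROCS(0) reports the current value without changing it.
	fmt.Println("default GOMAXPROCS:", runtime.GOMAXPROCS(0))

	// To roughly match Tarantool's single TX thread, limit Go to one core
	// (the GOMAXPROCS environment variable does the same thing).
	runtime.GOMAXPROCS(1)
}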
The nginx + Lua tnt_pass variant is taken from dedokOne's comment (result).
wrk -t 4 -c 10 (4 threads, 10 connections):
Golang:
Latency Distribution
50% 269.00us
99% 1.64ms
Requests/sec: 25637.26
Tarantool:
Latency Distribution
50% 694.00us
99% 1.43ms
Requests/sec: 10377.78
But Tarantool occupied only about half of the cores, so per core their speed is probably about the same.
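A rough back-of-the-envelope, assuming Go saturates all 4 hardware threads and Tarantool about 2: 25637 / 4 ≈ 6400 req/s per thread for Go versus 10378 / 2 ≈ 5200 for Tarantool, i.e. the same order of magnitude.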
Under heavier load (wrk -t 10 -c 100), Tarantool stayed put in terms of RPS (though its latency degraded much more noticeably than Go's, especially at the tail), while Go even picked up a bit (its latency grew too, of course).
Go:
Latency Distribution
50% 2.85ms
99% 8.12ms
Requests/sec: 33226.52
Tarantool:
Latency Distribution
50% 8.69ms
99% 73.09ms
Requests/sec: 10763.55
Tarantool has its advantages: secondary indexes, replication, and so on.
Then again, Go has a huge library ecosystem (around 100 thousand packages by my count, including a sea of embedded and non-embedded databases), and, for example, bleve provides full-text search (which, as I understand it, Tarantool does not have).
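To give a taste, indexing and searching with bleve looks roughly like this (a minimal sketch from memory of the bleve API; the index path, the document struct and the query text are made up for illustration):
package main

import (
	"fmt"

	"github.com/blevesearch/bleve"
)

func main() {
	// Create an on-disk index with the default mapping.
	index, err := bleve.New("example.bleve", bleve.NewIndexMapping())
	if err != nil {
		panic(err)
	}

	// Index a document by id; bleve walks the struct fields itself.
	_ = index.Index("doc1", struct{ Body string }{Body: "full-text search in Go"})

	// Run a simple match query and print the number of hits.
	result, err := index.Search(bleve.NewSearchRequest(bleve.NewMatchQuery("search")))
	if err != nil {
		panic(err)
	}
	fmt.Println(result.Total, "hit(s)")
}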
The Tarantool ecosystem feels poorer. At least everything it does offer (msgpack, an HTTP server, clients, JSON, an LRU cache, ...) exists in Go in countless implementations.
So, all in all, there is no crazy speed gain.
For now my personal choice stays with Go: I do not get the feeling that the Tarantool ecosystem is about to take off in the near future, while Go has been actively developing for a long time.
The Tarantool code is, of course, shorter, but mostly because errors are handled by the language. In Go you could also strip out all the err handling and end up with roughly the same amount of code.
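To illustrate: this is what the hello handler from the Go + LevelDB source at the end of the article shrinks to if you drop the error checks (not something I would ship, just to show where the lines go):
func hello(w http.ResponseWriter, r *http.Request) {
	db.Put([]byte("foo"), []byte("bar"), nil)
	res, _ := db.Get([]byte("foo"), nil)
	result, _ := json.Marshal(string(res))
	w.Write(result)
}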
Maybe someone has a different take?
The comments also pointed out atomic code updates in Tarantool. Since we are talking about HTTP requests: at my current job we use endless for Go, and according to our tests (and we serve thousands of requests per second) we update Go code without losing HTTP requests. There is an example at the end of the article.
And now the test in more detail:
➜ ~ go version
go version go1.6 darwin/amd64
➜ ~ tarantool --version
Tarantool 1.6.8-525-ga571ac0
Target: Darwin-x86_64-Release
Golang:
➜ ~ wrk -t 4 -c 10 -d 5 --latency http://127.0.0.1:8081/
Running 5s test @ http://127.0.0.1:8081/
4 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 346.71us 600.80us 26.94ms 97.89%
Req/Sec 6.54k 0.88k 13.87k 73.13%
Latency Distribution
50% 269.00us
75% 368.00us
90% 493.00us
99% 1.64ms
130717 requests in 5.10s, 15.08MB read
Requests/sec: 25637.26
Transfer/sec: 2.96MB
Tarantool:
➜ ~ wrk -t 4 -c 10 -d 5 --latency http://127.0.0.1:8080/
Running 5s test @ http://127.0.0.1:8080/
4 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 767.53us 209.64us 4.04ms 87.26%
Req/Sec 2.61k 437.12 3.15k 45.59%
Latency Distribution
50% 694.00us
75% 0.90ms
90% 1.02ms
99% 1.43ms
52927 requests in 5.10s, 8.58MB read
Requests/sec: 10377.78
Transfer/sec: 1.68MB
Under greater load:
Go:
➜ ~ wrk -t 10 -c 100 -d 5 --latency http://127.0.0.1:8081/
Running 5s test @ http://127.0.0.1:8081/
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 3.04ms 1.48ms 25.53ms 80.21%
Req/Sec 3.34k 621.43 12.52k 86.20%
Latency Distribution
50% 2.85ms
75% 3.58ms
90% 4.57ms
99% 8.12ms
166514 requests in 5.01s, 19.21MB read
Requests/sec: 33226.52
Transfer/sec: 3.83MB
Tarantool:
➜ ~ wrk -t 10 -c 100 -d 5 --latency http://127.0.0.1:8080/
Running 5s test @ http://127.0.0.1:8080/
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 10.65ms 14.24ms 269.85ms 98.43%
Req/Sec 1.09k 128.17 1.73k 94.56%
Latency Distribution
50% 8.69ms
75% 10.50ms
90% 11.36ms
99% 73.09ms
53943 requests in 5.01s, 8.75MB read
Requests/sec: 10763.55
Transfer/sec: 1.75MB
Test sources:
Go:
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"

	"github.com/syndtr/goleveldb/leveldb"
)

var db *leveldb.DB

func hello(w http.ResponseWriter, r *http.Request) {
	err := db.Put([]byte("foo"), []byte("bar"), nil)
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	res, err := db.Get([]byte("foo"), nil)
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	result, err := json.Marshal(string(res))
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	w.Write(result)
}

func main() {
	var err error
	db, err = leveldb.OpenFile("level.db", nil)
	if err != nil {
		panic(err)
	}
	http.HandleFunc("/", hello)
	fmt.Println("http://127.0.0.1:8081/")
	http.ListenAndServe("127.0.0.1:8081", nil)
}
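To reproduce: go get github.com/syndtr/goleveldb/leveldb, build and run this, and a request to http://127.0.0.1:8081/ should come back with "bar" as a JSON string.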
Tarantool:
#!/usr/bin/env tarantool
box.cfg{logger = 'tarantool.log'}

space = box.space.data
if not space then
    space = box.schema.create_space('data')
    space:create_index('primary', { parts = {1, 'STR'} })
end

local function handler(req)
    space:put({'foo', 'bar'})
    local val = space:get('foo')
    return req:render({ json = val[2] })
end

print "http://127.0.0.1:8080/"

require('http.server').new('127.0.0.1', 8080)
    :route({ path = '/' }, handler)
    :start()
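Save it as, say, app.lua (the name is arbitrary) and run it with tarantool app.lua; thanks to the shebang you can also chmod +x it and launch it directly.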
Golang (atomic code substitution, no connection loss):
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"syscall"
	"time"

	"github.com/fvbock/endless"
	"github.com/gorilla/mux"
	"github.com/syndtr/goleveldb/leveldb"
)

var db *leveldb.DB

func hello(w http.ResponseWriter, r *http.Request) {
	if db == nil {
		// an (optional) sanity check that the test really worked
		panic("DB is not yet initialized")
	}
	err := db.Put([]byte("foo"), []byte("bar"), nil)
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	res, err := db.Get([]byte("foo"), nil)
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	result, err := json.Marshal(string(res))
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	w.Write(result)
}

func main() {
	var err error
	mux1 := mux.NewRouter()
	mux1.HandleFunc("/", hello).Methods("GET")
	fmt.Println("http://127.0.0.1:8081/")
	server := endless.NewServer("127.0.0.1:8081", mux1)
	server.BeforeBegin = func(add string) {
		ioutil.WriteFile("server.pid", []byte(fmt.Sprintf("%d", syscall.Getpid())), 0755)
		// retry: the previous process may still hold level.db during the handover
		db, err = leveldb.OpenFile("level.db", nil)
		for err != nil {
			time.Sleep(10 * time.Millisecond)
			db, err = leveldb.OpenFile("level.db", nil)
		}
	}
	server.ListenAndServe()
	if db != nil {
		db.Close()
	}
}
After that you can go build it, run the binary, and then, while it is under load, do go build; kill -1 $(cat server.pid) - in my tests nothing was lost.
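As far as I understand endless, SIGHUP makes the running process start a fresh copy of the binary that inherits the listening socket, while the old process stops accepting and finishes its in-flight requests. A typical check (the duration and terminal split are arbitrary):
# terminal 1: keep the server under load
wrk -t 4 -c 10 -d 30 --latency http://127.0.0.1:8081/

# terminal 2: rebuild and ask the running server to swap itself out
go build && kill -1 $(cat server.pid)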
In the comments it was suggested to try Go + go-tarantool, so I did:
Lighter load:
➜ ~ wrk -t 4 -c 10 -d 5 --latency http://127.0.0.1:8081/
Running 5s test @ http://127.0.0.1:8081/
4 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 799.14us 502.56us 25.22ms 95.74%
Req/Sec 2.55k 248.65 2.95k 85.22%
Latency Distribution
50% 727.00us
75% 843.00us
90% 1.02ms
99% 2.03ms
51591 requests in 5.10s, 5.95MB read
Requests/sec: 10115.52
Transfer/sec: 1.17MB
Heavier load:
➜ ~ wrk -t 10 -c 100 -d 5 --latency http://127.0.0.1:8081/
Running 5s test @ http://127.0.0.1:8081/
10 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.49ms 4.00ms 65.06ms 81.21%
Req/Sec 1.38k 357.31 8.40k 94.61%
Latency Distribution
50% 6.78ms
75% 8.86ms
90% 11.77ms
99% 22.74ms
69091 requests in 5.10s, 7.97MB read
Requests/sec: 13545.12
Transfer/sec: 1.56MB
Source:
tarantool.lua:
#!/usr/bin/env tarantool
box.cfg{ listen = '127.0.0.1:3013', logger = 'tarantool.log' }

space = box.space.data
if not space then
    box.schema.user.grant('guest', 'read,write,execute', 'universe')
    space = box.schema.create_space('data')
    space:create_index('primary', { parts = {1, 'STR'} })
end

print(space.id)
print('Starting on 3013')
main.go:
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"

	"github.com/tarantool/go-tarantool"
)

var client *tarantool.Connection

func hello(w http.ResponseWriter, r *http.Request) {
	spaceNo := uint32(512)
	_, err := client.Replace(spaceNo, []interface{}{"foo", "bar"})
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	indexNo := uint32(0)
	resp, err := client.Select(spaceNo, indexNo, 0, 1, tarantool.IterEq, []interface{}{"foo"})
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	first := resp.Data[0].([]interface{})
	result, err := json.Marshal(first[1])
	if err != nil {
		w.WriteHeader(500)
		io.WriteString(w, err.Error())
		return
	}
	w.Write(result)
}

func main() {
	var err error
	server := "127.0.0.1:3013"
	opts := tarantool.Opts{
		Timeout: 500 * time.Millisecond,
	}
	client, err = tarantool.Connect(server, opts)
	if err != nil {
		log.Fatalf("Failed to connect: %s", err.Error())
	}
	http.HandleFunc("/", hello)
	fmt.Println("http://127.0.0.1:8081/")
	http.ListenAndServe("127.0.0.1:8081", nil)
}
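A note on the hardcoded spaceNo = 512: that is what print(space.id) in tarantool.lua should output on a fresh instance, since user-created spaces in Tarantool start at id 512. If your space ends up with a different id, adjust the constant.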