Benchmarks in Go

Original author: Alexander Morozov
  • Translation

Benchmarks


Benchmarks are tests for the performance of your code. It is quite useful to have them in a project and to compare their results from commit to commit. Go has a very good toolkit for writing and running benchmarks. In this article I will show how to use the testing package for writing benchmarks.

How to write a benchmark


It is very easy in Go. Here is an example of a simple benchmark:
package bench

import (
    "fmt"
    "testing"
)

func BenchmarkSample(b *testing.B) {
    for i := 0; i < b.N; i++ {
        if x := fmt.Sprintf("%d", 42); x != "42" {
            b.Fatalf("Unexpected string: %s", x)
        }
    }
}

Save this code to bench_test.go and run the command go test -bench=. bench_test.go.
You will see something like:
testing: warning: no tests to run
PASS
BenchmarkSample    10000000    206 ns/op
ok      command-line-arguments    2.274s

We see here that the framework ran 10,000,000 iterations and that one iteration of the benchmark took 206 nanoseconds on average. That was really easy. But there are a couple more interesting things about benchmarks in Go.

What can you test with benchmarks?


By default, go test -bench=. measures only the speed of your code; however, you can add the -benchmem flag, which also measures memory consumption and the number of memory allocations. It will look like this:
PASS
BenchmarkSample    10000000    208 ns/op    32 B/op    2 allocs/op

Here we see the number of bytes allocated and the number of allocations per iteration. Useful information, in my opinion. You can also enable these reports for an individual benchmark by calling the b.ReportAllocs() method.
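For example, here is a sketch (my addition, not from the original article) of a benchmark that always reports allocations, even when run without the -benchmem flag:

func BenchmarkSampleAllocs(b *testing.B) {
    b.ReportAllocs() // report B/op and allocs/op for this benchmark regardless of -benchmem
    for i := 0; i < b.N; i++ {
        if x := fmt.Sprintf("%d", 42); x != "42" {
            b.Fatalf("Unexpected string: %s", x)
        }
    }
}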
But that's not all: you can also set the number of bytes processed per iteration using the b.SetBytes(n int64) method, and the benchmark will report throughput. For instance:
func BenchmarkSample(b *testing.B) {
    b.SetBytes(2) // each iteration "processes" 2 bytes: the string "42"
    for i := 0; i < b.N; i++ {
        if x := fmt.Sprintf("%d", 42); x != "42" {
            b.Fatalf("Unexpected string: %s", x)
        }
    }
}

Now the output will be:
PASS
BenchmarkSample    5000000    324 ns/op    6.17 MB/s    32 B/op    2 allocs/op
ok      command-line-arguments    1.999s

You can see a new column with throughput, which is 6.17 MB/s in my case (2 bytes per iteration divided by 324 ns per iteration).

Initial conditions for benchmarks


What if you need to do something before each iteration of the benchmark? Of course, you don't want the time of that setup included in the benchmark results. I wrote a very simple Set data structure for testing:
type Set struct {
    set map[interface{}]struct{}
    mu  sync.Mutex
}

func (s *Set) Add(x interface{}) {
    s.mu.Lock()
    s.set[x] = struct{}{}
    s.mu.Unlock()
}

func (s *Set) Delete(x interface{}) {
    s.mu.Lock()
    delete(s.set, x)
    s.mu.Unlock()
}

and a benchmark for the Delete method:
func BenchmarkSetDelete(b *testing.B) {
    var testSet []string
    for i := 0; i < 1024; i++ {
        testSet = append(testSet, strconv.Itoa(i))
    }
    for i := 0; i < b.N; i++ {
        set := Set{set: make(map[interface{}]struct{})}
        for _, elem := range testSet {
            set.Add(elem)
        }
        for _, elem := range testSet {
            set.Delete(elem)
        }
    }
}

There are two problems in this code:
  • The time and memory spent creating the testSet slice are included in the first iteration (not a big problem, since there will be many iterations)
  • The time and memory of the Add calls are included in every iteration

For such cases we have the methods b.ResetTimer(), b.StopTimer() and b.StartTimer(). Here is the previous benchmark using them:
func BenchmarkSetDelete(b *testing.B) {
    var testSet []string
    for i := 0; i < 1024; i++ {
        testSet = append(testSet, strconv.Itoa(i))
    }
    b.ResetTimer() // exclude the testSet creation above from the results
    for i := 0; i < b.N; i++ {
        b.StopTimer() // don't measure the per-iteration setup
        set := Set{set: make(map[interface{}]struct{})}
        for _, elem := range testSet {
            set.Add(elem)
        }
        b.StartTimer() // measure only the Delete calls below
        for _, elem := range testSet {
            set.Delete(elem)
        }
    }
}

Now the setup will not be taken into account in the results, and we will see only the performance of the Delete calls.
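
As a side note, the -bench flag accepts a regular expression, so while tuning a single benchmark you can run it alone:
go test -bench=SetDelete -benchmem bench_test.go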

Benchmark comparison


Of course, benchmarks are of little use if you cannot compare them after changing the code. Here is an example of code that serializes a structure to JSON, and a benchmark for it:
type testStruct struct {
    X int
    Y string
}

func (t *testStruct) ToJSON() ([]byte, error) {
    return json.Marshal(t)
}

func BenchmarkToJSON(b *testing.B) {
    tmp := &testStruct{X: 1, Y: "string"}
    js, err := tmp.ToJSON()
    if err != nil {
        b.Fatal(err)
    }
    b.SetBytes(int64(len(js)))
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        if _, err := tmp.ToJSON(); err != nil {
            b.Fatal(err)
        }
    }
}

Let's say this code has already been committed to git, and now I want to try a cool trick and measure the gain (or loss) in performance. I change the ToJSON method slightly:
func (t *testStruct) ToJSON() ([]byte, error) {
    return []byte(`{"X": ` + strconv.Itoa(t.X) + `, "Y": "` + t.Y + `"}`), nil
}
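
Since the handwritten version does not escape special characters the way encoding/json does, I would also add a quick sanity test (my addition, not part of the original article) to check that the output still decodes to the same values:

func TestToJSONRoundTrip(t *testing.T) {
    tmp := &testStruct{X: 1, Y: "string"}
    js, err := tmp.ToJSON() // hand-built JSON; safe only while Y contains no quotes or backslashes
    if err != nil {
        t.Fatal(err)
    }
    var got testStruct
    if err := json.Unmarshal(js, &got); err != nil {
        t.Fatal(err)
    }
    if got != *tmp {
        t.Fatalf("Round trip mismatch: %+v != %+v", got, *tmp)
    }
}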

It's time to run the benchmarks, this time saving their output to files:
go test -bench=. -benchmem bench_test.go > new.txt
git stash
go test -bench=. -benchmem bench_test.go > old.txt

We can compare these results using the benchcmp utility. You can install it by running go get golang.org/x/tools/cmd/benchcmp. Here are the comparison results:
# benchcmp old.txt new.txt
benchmark          old ns/op     new ns/op     delta
BenchmarkToJSON    1579          495           -68.65%

benchmark          old MB/s     new MB/s     speedup
BenchmarkToJSON    12.66        46.41        3.67x

benchmark          old allocs     new allocs     delta
BenchmarkToJSON    2              2              +0.00%

benchmark          old bytes     new bytes     delta
BenchmarkToJSON    184           48            -73.91%

It is very useful to have such tables of changes; besides, they can add weight to your pull requests in open-source projects.

Recording Profiles


You can also record CPU and memory profiles while running benchmarks:
go test -bench=. -benchmem -cpuprofile=cpu.out -memprofile=mem.out bench_test.go

You can read about profile analysis in an excellent post on the official Go blog.
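
For example, the CPU profile can then be inspected with the pprof tool (a quick sketch; with older Go versions you also need to pass the test binary as the first argument):
go tool pprof cpu.out
(pprof) top10
(pprof) list BenchmarkToJSON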

Conclusion


Benchmarks are a great tool for a programmer, and Go makes it very easy to write them and analyze their results. New benchmarks help you find performance bottlenecks, suspicious code (efficient code is usually simpler and easier to read), or the use of the wrong tool for the task.

Existing benchmarks give you more confidence in your changes, and their results can be an argument in your favor during code review. Writing benchmarks brings great benefits to the programmer and the program, so I advise you to write more of them. It's fun!
