Translation: One Year With Go

Original author: Andrew Thompson
Under the cut is a translation of an article by an experienced developer about his experience with Go. Important: the translator's opinion may not coincide with that of the article's author.




So, it's been a year since I started using Go. A week ago, I removed it from production.

I am writing this post because over the past year many people have asked me about my impressions of working with Go, and I would like to say more about them than fits on Twitter or IRC, before my memories fade.

So, let's talk about why I don't consider Go a useful tool:

Tools


The tools that ship with Go are odd. At first glance many of them seem fine, but with prolonged use most of them start to show their limitations. Compared to C or Erlang tooling, they look like a bad joke.

coverage analysis


Strictly speaking, Go's code coverage utility is a hack. It works on only one file at a time, and it does so by inserting statements like this into the file:

GoCover.Count[n] = 1

where n identifies the position of the branch within the file. It also inserts a gigantic structure like this at the end of the file:

really giant
var GoCover = struct {
        Count     [7]uint32
        Pos       [3 * 7]uint32
        NumStmt   [7]uint16
} {
        Pos: [3 * 7]uint32{
                3, 4, 0xc0019, // [0]
                16, 16, 0x160005, // [1]
                5, 6, 0x1a0005, // [2]
                7, 8, 0x160005, // [3]
                9, 10, 0x170005, // [4]
                11, 12, 0x150005, // [5]
                13, 14, 0x160005, // [6]
        },
        NumStmt: [7]uint16{
                1, // 0
                1, // 1
                1, // 2
                1, // 3
                1, // 4
                1, // 5
                1, // 6
        },
}


This approach works reasonably well for simple unit tests within a single file, but if you want coverage analysis for a large integration test, I can only wish you luck: the identifiers in the global scope conflict between files, and if you make the names unique there is no easy way to produce a combined coverage report. Coverage tools for other languages work on the program as a whole, not on individual files.
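For context, here is a rough sketch of what the instrumented source ends up looking like for one small function. The indices and positions are invented for the example (the real tool generates them), and the counters refer to the GoCover variable shown above:

func abs(x int) int {
    GoCover.Count[0] = 1 // function entry block
    if x < 0 {
        GoCover.Count[1] = 1 // the if branch
        return -x
    }
    GoCover.Count[2] = 1 // the fall-through path
    return x
}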

performance analysis


The benchmarking utility is the same story: it looks fine until you see how it works. It wraps your code in a loop with a variable number of iterations, runs that loop until the code has executed for "long enough" (1 second by default), and then divides the total execution time by the number of iterations. This approach not only includes the loop itself in the measurement, it also hides fluctuations. The implementation from benchmark.go:

// nsPerOp reports the average time per iteration: the total benchmark
// duration divided by the iteration count b.N.
func (b *B) nsPerOp() int64 {
    if b.N <= 0 {
        return 0
    }
    return b.duration.Nanoseconds() / int64(b.N)
}

This implementation masks garbage collector pauses, slowdowns caused by contention for shared resources, and other interesting effects, as long as they do not happen too often.
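For reference, this is what a benchmark looks like from the user's side: a minimal sketch in a _test.go file, where parse and sampleInput are hypothetical stand-ins for whatever is being measured. The testing package keeps growing b.N until the loop has run long enough, and nsPerOp above then divides the total by b.N, averaging away any per-iteration spikes:

package foo

import "testing"

var sampleInput = []byte("example input") // hypothetical test data

func parse(p []byte) int { return len(p) } // hypothetical code under test

func BenchmarkParse(b *testing.B) {
    for i := 0; i < b.N; i++ {
        parse(sampleInput)
    }
}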

compiler and go vet


One of Go's much-discussed strengths is fast compilation. As far as I can tell, this is achieved in part by simply skipping many of the checks a compiler usually performs and moving them into go vet. The compiler does not warn about the same variable name being reused in nested scopes or about incorrect printf formats; all of those checks live in go vet. Moreover, the quality of the checks is deteriorating with new versions: 1.3 misses problems that 1.2 reported.
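To illustrate, here is a small program (doWork and retry are hypothetical helpers) that the compiler accepts without a word. The printf mismatch is caught by go vet's printf check, and the shadowed variable by the shadow check, which has lived at various times as an experimental vet flag and as a separate analyzer:

package main

import (
    "errors"
    "fmt"
)

func doWork() error { return errors.New("boom") }
func retry() error  { return nil }

func main() {
    err := doWork()
    if err != nil {
        err := retry() // shadows the outer err; the compiler stays silent
        _ = err
    }
    fmt.Printf("attempts: %d\n", "three") // %d with a string argument; still compiles
}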

go get


Go users say in unison not to use go get, but nobody does anything to mark it as a failed experiment and provide an official replacement.

$GOPATH


Another idea I am not happy about. I would much rather clone a project into my home directory and have the build system install all the necessary dependencies. Not that $GOPATH causes a great deal of trouble, but it is an unpleasant little thing you constantly have to keep in mind.

Go race detector


This is a good thing. It is sad that it is needed at all, that it does not work on all supported platforms (FreeBSD, anyone?), and that the maximum number of goroutines it supports is only 8192. On top of that, you still have to actually hit the race condition, which is quite hard considering how much the race detector slows everything down.
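For the curious, here is a minimal data race of the kind the detector is meant to catch; running it with go run -race (on a platform where the detector works) should produce a warning about the two unsynchronized writes:

package main

func main() {
    counter := 0
    done := make(chan bool)
    go func() {
        counter++ // write from the goroutine
        done <- true
    }()
    counter++ // concurrent write from main, no synchronization
    <-done
    println(counter)
}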

Runtime


Channels / mutexes


Channels and mutexes are SLOW. Adding mutex-based synchronization in production slowed things down so much that the better solution turned out to be running the process under daemontools and simply restarting it whenever it crashed.
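For anyone who wants to gauge the overhead themselves, a rough sketch: two micro-benchmarks in a _test.go file, one guarding a counter with a mutex and one pushing it through a channel. The numbers will depend on your hardware and Go version; the point is only how to measure.

package foo

import (
    "sync"
    "testing"
)

var sink int // keeps the compiler from discarding the result

func BenchmarkMutexIncrement(b *testing.B) {
    var mu sync.Mutex
    n := 0
    for i := 0; i < b.N; i++ {
        mu.Lock()
        n++
        mu.Unlock()
    }
    sink = n
}

func BenchmarkChannelIncrement(b *testing.B) {
    ch := make(chan int, 1)
    ch <- 0
    for i := 0; i < b.N; i++ {
        v := <-ch
        ch <- v + 1
    }
    sink = <-ch
}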

crash logs


When Go crashes, every goroutine, without exception, dumps its call stack to standard output. The volume of this output grows with the size of your program. Moreover, many of the error messages are strangely worded, for example 'evacuation not done in time' or 'freelist empty'. It seems the authors of these messages set out to maximize traffic to Google, because in most cases searching for them is the only way to figure out what is going on.

runtime introspection



There is essentially none; in practice, Go embraces the concept of "print debugging". You can use gdb, but I doubt you want to.
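For completeness, the runtime package does expose a few counters and stack dumps; this sketch is roughly the extent of what you can get without reaching for gdb:

package main

import (
    "fmt"
    "runtime"
)

func dumpRuntimeState() {
    fmt.Printf("goroutines: %d\n", runtime.NumGoroutine())

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("heap in use: %d bytes\n", m.HeapInuse)

    buf := make([]byte, 1<<20)
    n := runtime.Stack(buf, true) // true = include all goroutines
    fmt.Printf("%s", buf[:n])
}

func main() {
    dumpRuntimeState()
}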

Language


I do not enjoy writing code in Go. I either fight the limited type system, casting everything to interface{}, or copy-paste code that does almost the same thing for different types. Every time I add new functionality, it turns into defining yet more types and writing yet more code to handle them. How is that better than C with proper pointers, or functional code with rich types?
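An illustrative sketch of what I mean (the sum function is invented for the example): either everything goes through interface{} and a type switch, or you write a separate copy of the function per concrete type.

package main

import "fmt"

// sum takes the interface{} route: every element has to be unpacked
// with a type switch; the alternative is a separate sumInts, sumFloats,
// and so on for each concrete type.
func sum(values []interface{}) float64 {
    var total float64
    for _, v := range values {
        switch n := v.(type) {
        case int:
            total += float64(n)
        case float64:
            total += n
        }
    }
    return total
}

func main() {
    fmt.Println(sum([]interface{}{1, 2.5, 3})) // 6.5
}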

Apparently I also have trouble understanding pointers in Go (with C there is no such problem). In many cases, adding an asterisk magically made the code work, even though the compiler accepted both versions without complaint. Why should I have to deal with pointers in a garbage-collected language at all?

Converting between []byte and string, and working with arrays versus slices, causes problems. Yes, I understand why it was all designed this way, but it feels too low-level compared to the rest of the language.
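A small sketch of the conversions in question; both directions allocate and copy, because strings are immutable and []byte is not:

package main

import "fmt"

func main() {
    s := "hello"
    b := []byte(s)  // allocates and copies the bytes
    b[0] = 'H'      // legal only because b is a copy; strings are immutable
    s2 := string(b) // allocates and copies again
    fmt.Println(s, s2) // hello Hello
}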

And then there are [:] and ... with append. Look at this:

iv = append(iv, truncatedIv[:]...)

This code requires careful attention, because append, depending on the capacity of the slice, will either add the values in place or reallocate memory and return a pointer to a new array. Hello, good old realloc.
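A short self-contained sketch of that behavior; whether append reuses the backing array or reallocates depends entirely on the remaining capacity, so aliasing is easy to get wrong:

package main

import "fmt"

func main() {
    a := make([]byte, 4, 8) // len 4, cap 8

    b := append(a, 5) // fits within cap: b shares a's backing array
    c := append(a, 6) // also fits: overwrites the 5 that b can see
    fmt.Println(b[4], c[4]) // 6 6 — b changed behind your back

    d := append(a, 1, 2, 3, 4, 5) // exceeds cap: d gets a fresh array
    d[0] = 99
    fmt.Println(a[0], d[0]) // 0 99 — no aliasing this time
}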

standard library


Parts of the standard library are not bad, especially the cryptography packages, which compare favorably with the thin wrapper over OpenSSL that most languages offer. But the documentation, and everything to do with interfaces... I often have to read the implementation instead of the documentation, because the latter is frequently limited to a useless "implements method X".

The big problem is the net library. Unlike ordinary networking libraries, it does not let you change the options of the sockets it creates. Want to set the IP_RECVPKTINFO flag? Use the syscall package, which is the worst POSIX wrapper I have seen. You cannot even get at the file descriptor of the connection it creates; you have to do everything through syscall:

Hidden text
fd, err := syscall.Socket(syscall.AF_INET6, syscall.SOCK_DGRAM, 0)
if err != nil {
    rlog.Fatal("failed to create socket", err.Error())
}
rlog.Debug("socket fd is %d\n", fd)
err = syscall.SetsockoptInt(fd, syscall.IPPROTO_IPV6, syscall.IPV6_RECVPKTINFO, 1)
if err != nil {
    rlog.Fatal("unable to set IPV6_RECVPKTINFO", err.Error())
}
err = syscall.SetsockoptInt(fd, syscall.IPPROTO_IPV6, syscall.IPV6_V6ONLY, 1)
if err != nil {
    rlog.Fatal("unable to set IPV6_V6ONLY", err.Error())
}
addr := new(syscall.SockaddrInet6)
addr.Port = UDPPort
rlog.Notice("UDP listen port is %d", addr.Port)
err = syscall.Bind(fd, addr)
if err != nil {
    rlog.Fatal("bind error ", err.Error())
}


You will also get plenty of enjoyment out of receiving and passing []byte around when calling syscall functions. Building and tearing apart C structures from Go is a sheer delight.
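Continuing the sketch above (reusing fd and the rlog logger from it), actually reading a datagram together with its control messages means shuffling more bare []byte buffers through syscall and parsing the C-level cmsg structures yourself:

buf := make([]byte, 65535)
oob := make([]byte, 512)
n, oobn, _, from, err := syscall.Recvmsg(fd, buf, oob, 0)
if err != nil {
    rlog.Fatal("recvmsg failed", err.Error())
}
cmsgs, err := syscall.ParseSocketControlMessage(oob[:oobn])
if err != nil {
    rlog.Fatal("cannot parse control messages", err.Error())
}
rlog.Debug("read %d bytes and %d control messages from %v\n", n, len(cmsgs), from)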

Perhaps sockets are only expected to be used from simple polling scripts? I don't know, but any attempt at more involved network programming forces you to write awful, non-portable code.

conclusions


I cannot see the point of Go. If I need a systems language, I use C, D, or Rust. If I need a language with good concurrency support, I use Erlang or Haskell. The only application of Go I can see is command-line utilities that need to be portable and carry no dependencies. I do not think the language is well suited to long-lived server tasks. Perhaps it looks attractive to Ruby/Python/Java developers, which is, as I understand it, where most Go developers came from. I also do not rule out Go becoming the "new Java", given how easy it is to deploy and the reputation of the language. If you are looking for a better Ruby/Python/Java, perhaps Go will suit you, but I would not recommend ending your search with this language.

Good programming languages let you grow as a programmer. LISP demonstrates the idea of "code as data", C teaches you how to work with the computer at a low level, Ruby shows how to work with messages and anonymous functions, Erlang teaches concurrency and fault tolerance, Haskell shows a real type system and programming without side effects, Rust lets you understand how to share memory in parallel code. But I cannot say that I learned anything from using Go.
