Writing a web service in Go (part two)

  • Tutorial
A continuation of the article on how to write a small, full-featured web application in Go.

In the first part, we implemented the REST API and learned how to collect incoming HTTP requests. In this part, we will cover our application with tests, add a nice web interface based on AngularJS and Bootstrap, and implement access restrictions for different users.


In this part, the following stages await us:
  1. Step Four. But what about the tests?
  2. Step Five. Embellishments and a web interface.
  3. Step Six. Add some privacy.
  4. Step Seven. We clean up the unnecessary.
  5. Step Eight. We use Redis for storage.


Step Four. But what about the tests?


Any application should be covered with tests, no matter how large it is. Go ships with a rich set of built-in testing tools: you can write ordinary unit tests as well as benchmarks, and the toolchain can also show test coverage of your code.

The basic package for working with tests is testing. The two main types here are T for ordinary unit tests and B for benchmarks. Tests in Go live in the same package as the code under test, in files with the _test suffix added. Therefore, any private data structures inside the package are also available to the tests (and the tests share a common package scope with each other). When the main program is compiled, test files are ignored.
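
For illustration, this is roughly what a unit test and a benchmark look like side by side (the Reverse helper here is invented for the example; it is not part of our project):

```go
package main

import (
	"fmt"
	"strings"
	"testing" // needed only for the Test/Benchmark signatures
)

// Reverse is a hypothetical helper used only to illustrate the test shapes.
func Reverse(s string) string {
	runes := []rune(s)
	for i, j := 0, len(runes)-1; i < j; i, j = i+1, j-1 {
		runes[i], runes[j] = runes[j], runes[i]
	}
	return string(runes)
}

// TestReverse is an ordinary unit test: it takes *testing.T.
func TestReverse(t *testing.T) {
	if got := Reverse("abc"); got != "cba" {
		t.Errorf("Reverse(%q) = %q, want %q", "abc", got, "cba")
	}
}

// BenchmarkReverse is a benchmark: it takes *testing.B and runs the body b.N times.
func BenchmarkReverse(b *testing.B) {
	for i := 0; i < b.N; i++ {
		Reverse(strings.Repeat("go", 10))
	}
}

func main() {
	fmt.Println(Reverse("abc")) // cba
}
```

Running `go test -bench=.` in the package would execute both the test and the benchmark.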

In addition to the basic testing package, there are many third-party libraries that simplify writing tests or allow writing them in a particular style (even BDD). Here, for example, is a good introductory article on writing Go in TDD style.

GitHub has a comparison table of test libraries, among which there are heavyweights like goconvey, which even provides a web interface and system integration, for example, notifications about passing tests. But, in order not to overcomplicate things, for our project we will take the small testify library, which adds only a few primitives for checking conditions and creating mock objects.

Download the code for the fourth step:

> git checkout step-4

Let's start by writing tests for the models. Create the file models_test.go. In order to be detected by the go test utility, test functions must match the following pattern:

func TestXxx(*testing.T)

We will write our first test, which will check the correct creation of the Bin object:

func TestNewBin(t *testing.T) {
     now := time.Now().Unix()
     bin := NewBin()
     if assert.NotNil(t, bin) {
          assert.Equal(t, len(bin.Name), 6)
          assert.Equal(t, bin.RequestCount, 0)
          assert.Equal(t, bin.Created, bin.Updated)
          assert.True(t, bin.Created < (now+1))
          assert.True(t, bin.Created > (now-1))
     }
}

All test methods in testify accept the *testing.T object as the first parameter.
Next, we test all the scenarios, not forgetting error paths and boundary values. I will not quote all the tests in the article, since there are a lot of them; you can study them in the repository. I will mention only the most interesting points.

Pay attention to the api_test.go file: in it we test our REST API. In order not to depend on the storage implementation of our data, we add a mock object that implements the behavior of the Storage interface. We do this using testify's mock package. It provides a mechanism for easily writing mock objects, which can then be used in tests instead of the real ones.

Here is its code:

type MockedStorage struct{
     mock.Mock
}
func (s *MockedStorage) CreateBin(_ *Bin) error {
     args := s.Mock.Called()
     return args.Error(0)
}
func (s *MockedStorage) UpdateBin(bin *Bin) error {
     args := s.Mock.Called(bin)
     return args.Error(0)
}
func (s *MockedStorage) LookupBin(name string) (*Bin, error) {
     args := s.Mock.Called(name)
     return args.Get(0).(*Bin), args.Error(1)
}
func (s *MockedStorage) LookupBins(names []string) ([]*Bin, error) {
     args := s.Mock.Called(names)
     return args.Get(0).([]*Bin), args.Error(1)
}
func (s *MockedStorage) LookupRequest(binName, id string) (*Request, error) {
     args := s.Mock.Called(binName, id)
     return args.Get(0).(*Request), args.Error(1)
}
func (s *MockedStorage) CreateRequest(bin *Bin, req *Request) error {
     args := s.Mock.Called(bin)
     return args.Error(0)
}
func (s *MockedStorage) LookupRequests(binName string, from, to int) ([]*Request, error) {
     args := s.Mock.Called(binName, from, to)
     return args.Get(0).([]*Request), args.Error(1)
}

Further in the tests themselves, when creating the API, we inject our mock object:

		req, _ := http.NewRequest("GET", "/api/v1/bins/", nil)
		api = GetApi()
		mockedStorage := &MockedStorage{}
		api.MapTo(mockedStorage, (*Storage)(nil))
		res = httptest.NewRecorder()
		mockedStorage.On("LookupBins", []string{}).Return([]*Bin(nil), errors.New("Storage error"))
		api.ServeHTTP(res, req)
		mockedStorage.AssertExpectations(t)
		if assert.Equal(t, res.Code, 500) {
			assert.Contains(t, res.Body.String(), "Storage error")
		}

In the test, we describe the expected calls to the mock object and the responses we need from them. So, at the moment when we call s.Mock.Called(names) inside a mock method, it tries to match the given parameters and the method name against the expectations, and args.Get(0) or args.Error(0) returns the corresponding argument passed to Return — here []*Bin(nil) and the "Storage error" error. In addition to the Get method, which returns an object of type interface{}, there are helper methods Int, String, Bool and Error, which convert the interface to the type we need. The mockedStorage.AssertExpectations(t) call checks that all expected methods were actually called during the test.

The ResponseRecorder object, created by httptest.NewRecorder, is also interesting here: it implements the ResponseWriter behavior and allows us to see what would eventually be returned (response code, headers and response body) without sending the response anywhere.

To run the tests, you need to run the command:

> go test ./src/skimmer
ok  	_/.../src/skimmer	0.032s

The go test command has a large number of flags; you can familiarize yourself with them like this:

> go help testflag

You can play with them, but for now we are interested in the following command (relevant for Go version 1.2):

> go test ./src/skimmer/ -coverprofile=c.out && go tool cover -html=c.out

If it didn't work for you, you may need to install the coverage tool first:

> go get code.google.com/p/go.tools/cmd/cover

This command runs the tests and saves the coverage profile to the c.out file, after which the go tool cover utility creates an HTML version that opens in the browser.
Test coverage in Go is implemented quite interestingly: before compilation, the source files are rewritten and counters are inserted into the code. For example, code like this:

func Size(a int) string {
    switch {
    case a < 0:
        return "negative"
    case a == 0:
        return "zero"
    }
    return "enormous"
}

turns into this one:

func Size(a int) string {
    GoCover.Count[0] = 1
    switch {
    case a < 0:
        GoCover.Count[2] = 1
        return "negative"
    case a == 0:
        GoCover.Count[3] = 1
        return "zero"
    }
    GoCover.Count[1] = 1
    return "enormous"
}

It is also possible to show not just coverage, but how many times each section of code was executed. As always, you can read more in the documentation.

Now that we have a full-fledged REST API covered with tests, we can start on the embellishments and the web interface.

Step Five. Embellishments and a web interface.


The Go standard library has a full-featured package for working with HTML templates, but we will create a so-called single-page application that works with the API directly through JavaScript. AngularJS will help us with this.

Updating the code for the new step:

> git checkout step-5

As mentioned in the first chapter, Martini has a handler for serving static files; by default it serves them from the public directory. We put the js and css libraries we need there. I will not describe how the front-end works, since that is not the goal of our article; you can look at the source files yourself, and for people familiar with Angular everything there is quite simple.

To display the main page, we will add a separate handler:

	api.Get("**", func(r render.Render){
			r.HTML(200, "index", nil)
		})


The glob pattern ** says that the index template will be rendered for any address. For templates to work correctly, we added options when creating the Renderer that indicate where to get the templates from. Plus, to avoid conflicts with Angular templates, we redefine the delimiters from {{ }} to {[{ }]}:

	api.Use(render.Renderer(render.Options{
		Directory: "public/static/views",
		Extensions: []string{".html"},
		Delims: render.Delims{"{[{", "}]}"},
	}))


In addition, the Color field (three bytes storing an RGB value) and the Favicon field (an image as a data URI in that color) were added to the Bin model. They are randomly generated when the object is created so that different Bin objects can be distinguished by color.

type Bin struct {
...
	Color		 [3]byte `json:"color"`
	Favicon      string  `json:"favicon"`
}
func NewBin() *Bin {
	color := RandomColor()
	bin := Bin{
...
		Color:		  color,
		Favicon:      Solid16x16gifDatauri(color),
	}
...
}

Now we have an almost full-featured web application; you can run it:

> go run ./src/main.go

And open 127.0.0.1:3000 in the browser to play with it.

Unfortunately, the application still has two problems: after the program terminates, all data is lost, and we have no separation between users; everyone sees the same thing. Let's fix that.

Step Six. Add some privacy.

Download the code for the sixth step:

> git checkout step-6

We will separate users from each other using sessions. First, let's choose where to store them. The sessions package in martini-contrib is based on the sessions package of the Gorilla web toolkit.
Gorilla is a set of tools for building web frameworks. These tools are loosely coupled, which lets you take any part and embed it into your own project.

This allows us to use the session stores already implemented in Gorilla. Ours will be cookie-based.

Create a session store:

func GetApi(config *Config) *martini.ClassicMartini {
...
	store := sessions.NewCookieStore([]byte(config.SessionSecret))
...

The NewCookieStore function accepts key pairs as parameters: the first key in a pair is used for authentication and the second for encryption. The second key can be omitted. To be able to rotate keys without losing sessions, you can pass several key pairs: when creating a session, the keys of the first pair are used, but when verifying data, all keys are tried in order, starting from the first pair.

Since different applications need different keys, we place this parameter in the Config object, which will later help us configure the application from environment settings or launch flags.

Add an intermediate handler to our API that adds work with sessions:

// Sessions is a Middleware that maps a session.Session service into the Martini handler chain.
// Sessions can use a number of storage solutions with the given store.
func Sessions(name string, store Store) martini.Handler {
	return func(res http.ResponseWriter, r *http.Request, c martini.Context, l *log.Logger) {
		// Map to the Session interface
		s := &session{name, r, l, store, nil, false}
		c.MapTo(s, (*Session)(nil))
		// Use before hook to save out the session
		rw := res.(martini.ResponseWriter)
		rw.Before(func(martini.ResponseWriter) {
			if s.Written() {
				check(s.Session().Save(r, res), l)
			}
		})
...
		c.Next()
	}
}

As you can see from the code, a session is created for each request and added to the request context. At the end of the request, right before the buffered data is written out, the session is saved if it has been changed.

Now we rewrite our history (which used to be just a slice) in the history.go file:

type History interface {
	All() []string
	Add(string)
}
type SessionHistory struct {
	size    int
	name    string
	session sessions.Session
	data    []string
}
func (history *SessionHistory) All() []string {
	if history.data == nil {
		history.load()
	}
	return history.data
}
func (history *SessionHistory) Add(name string) {
	if history.data == nil {
		history.load()
	}
	history.data = append(history.data, "")
	copy(history.data[1:], history.data)
	history.data[0] = name
	history.save()
}
func (history *SessionHistory) save() {
	size := history.size
	if size > len(history.data){
		size = len(history.data)
	}
	history.session.Set(history.name, history.data[:size])
}
func (history *SessionHistory) load() {
	sessionValue := history.session.Get(history.name)
	history.data = []string{}
	if sessionValue != nil {
		if values, ok := sessionValue.([]string); ok {
			history.data = append(history.data, values...)
		}
	}
}
func NewSessionHistoryHandler(size int, name string) martini.Handler {
	return func(c martini.Context, session sessions.Session) {
		history := &SessionHistory{size: size, name: name, session: session}
		c.MapTo(history, (*History)(nil))
	}
}

In the NewSessionHistoryHandler handler, we create a SessionHistory object that implements the History interface (describing adding and querying history entries) and add it to the context of each request. SessionHistory has helper methods load and save, which load data from and save it to the session; loading from the session happens only on demand. Now, everywhere the history slice was used in the API methods, the new History object will be used instead.
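
The Add method above uses a common Go idiom for prepending to a slice: grow by one, shift everything right with copy, then write the new head. In isolation:

```go
package main

import "fmt"

// prepend puts name at the head of the slice, shifting the rest right,
// the same trick SessionHistory.Add uses.
func prepend(data []string, name string) []string {
	data = append(data, "") // grow the slice by one element
	copy(data[1:], data)    // shift everything one position right (copy handles overlap)
	data[0] = name          // write the new head
	return data
}

func main() {
	history := []string{"b", "c"}
	history = prepend(history, "a")
	fmt.Println(history) // [a b c]
}
```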

From this moment, each user will have their own history of Bin objects, but through a direct link we can still see any Bin. We will fix this by adding the ability to create private Bin objects.

Let's create two new fields in Bin:

type Bin struct {
...
	Private      bool    `json:"private"`
	SecretKey    string  `json:"-"`
}

A key will be stored in the SecretKey field, granting access to private Bins (those where the Private flag is set to true). Add a method that makes our object private:

func (bin *Bin) SetPrivate() {
	bin.Private = true
	bin.SecretKey = rs.Generate(32)
}

In order to create a private Bin, our frontend will send a JSON object with the private flag when creating an object. To parse incoming JSON, we wrote a small DecodeJsonPayload helper that reads the request body and unmarshals it into the structure we need:

func DecodeJsonPayload(r *http.Request, v interface{}) error {
	content, err := ioutil.ReadAll(r.Body)
	r.Body.Close()
	if err != nil {
		return err
	}
	err = json.Unmarshal(content, v)
	if err != nil {
		return err
	}
	return nil
}

We will now modify the API to implement the new behavior:

	api.Post("/api/v1/bins/", func(r render.Render, storage Storage, history History, session sessions.Session, req *http.Request){
			payload := Bin{}
			if err := DecodeJsonPayload(req, &payload); err != nil {
				r.JSON(400, ErrorMsg{fmt.Sprintf("Decoding payload error: %s", err)})
				return
			}
			bin := NewBin()
			if payload.Private {
				bin.SetPrivate()
			}
			if err := storage.CreateBin(bin); err == nil {
				history.Add(bin.Name)
				if bin.Private {
					session.Set(fmt.Sprintf("pr_%s", bin.Name), bin.SecretKey)
				}
				r.JSON(http.StatusCreated, bin)
			} else {
				r.JSON(http.StatusInternalServerError, ErrorMsg{err.Error()})
			}
		})

First, we create a payload object of type Bin, whose fields are filled from the request body by the DecodeJsonPayload function. After that, if the private option is set in the input, we make our bin private. Then, for private objects, we store the key in the session: session.Set(fmt.Sprintf("pr_%s", bin.Name), bin.SecretKey). Now we need to change the other API methods so that, for private Bin objects, they check for the key in the session.

This is done like this:

	api.Get("/api/v1/bins/:bin", func(r render.Render, params martini.Params, session sessions.Session, storage Storage){
			if bin, err := storage.LookupBin(params["bin"]); err == nil{
				if bin.Private && bin.SecretKey != session.Get(fmt.Sprintf("pr_%s", bin.Name)){
					r.JSON(http.StatusForbidden, ErrorMsg{"The bin is private"})
				} else {
					r.JSON(http.StatusOK, bin)
				}
			} else {
				r.JSON(http.StatusNotFound, ErrorMsg{err.Error()})
			}
		})

The other methods are changed by analogy. Some tests were also adjusted for the new behavior; the specific changes can be seen in the code.

If you now run the application in different browsers or in incognito mode, you can see that each has its own history, and only the browser in which a private Bin was created has access to it.

Everything is fine, but now all objects in our storage live practically forever, which is hardly right, since memory is not infinite. Let's limit their lifetime.

Step Seven. We clean up the unnecessary.



Download the seventh step code:

> git checkout step-7

Add another field to the base storage structure:

type BaseStorage struct {
...
	binLifetime		  int64
}

It will store the maximum lifetime of a Bin object and its related requests. Now we rewrite our in-memory storage, memory.go. The main method clears all binRecords that have not been updated for more than binLifetime seconds:

func (storage *MemoryStorage) clean() {
	storage.Lock()
	defer storage.Unlock()
	now := time.Now().Unix()
	for name, binRecord := range storage.binRecords {
		if binRecord.bin.Updated < (now - storage.binLifetime) {
			delete(storage.binRecords, name)
		}
	}
}
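
Note that deleting entries from a map while ranging over it is explicitly allowed in Go, so the loop above is safe. The same expiry pattern in a standalone form:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const lifetime = int64(60) // seconds, analogous to binLifetime
	now := time.Now().Unix()
	records := map[string]int64{ // name -> last updated (unix seconds)
		"fresh": now,
		"stale": now - 120,
	}
	// The Go spec allows delete() on entries during a range loop.
	for name, updated := range records {
		if updated < now-lifetime {
			delete(records, name)
		}
	}
	fmt.Println(len(records)) // 1
}
```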

We also add a timer and methods for working with it to the MemoryStorage type:

type MemoryStorage struct {
...
	cleanTimer *time.Timer
}
func (storage *MemoryStorage) StartCleaning(timeout int) {
	defer func(){
		storage.cleanTimer = time.AfterFunc(time.Duration(timeout) * time.Second, func(){storage.StartCleaning(timeout)})
	}()
	storage.clean()
}
func (storage *MemoryStorage) StopCleaning() {
	if storage.cleanTimer != nil {
		storage.cleanTimer.Stop()
	}
}


The time.AfterFunc function starts the given function in a separate goroutine after a timeout of type time.Duration passed in the first argument. The function must take no parameters, so we use a closure here to capture the timeout.

To scale our application horizontally, we will need to run it on different servers, so we need separate storage for our data. Let's take Redis as an example.

Step Eight. We use Redis for storage.


The official Redis documentation offers an extensive list of clients for Go. At the time of writing, the recommended ones are radix and redigo. We will choose redigo, as it is actively developed and has a larger community.

Let's move on to the desired code:

> git checkout step-8

Take a look at the redis.go file; it contains our Storage implementation for Redis. The basic structure is quite simple:

type RedisStorage struct {
	BaseStorage
	pool       *redis.Pool
	prefix     string
	cleanTimer *time.Timer
}

The pool field will store the pool of connections to Redis, and prefix the common prefix for all keys. To create the pool, we take the code from the redigo examples:

func getPool(server string, password string) (pool *redis.Pool) {
	pool = &redis.Pool{
		MaxIdle:     3,
		IdleTimeout: 240 * time.Second,
		Dial: func() (redis.Conn, error) {
			c, err := redis.Dial("tcp", server)
			if err != nil {
				return nil, err
			}
			if password != "" {
				if _, err := c.Do("AUTH", password); err != nil {
					c.Close()
					return nil, err
				}
			}
			return c, err
		},
		TestOnBorrow: func(c redis.Conn, _ time.Time) error {
			_, err := c.Do("PING")
			return err
		},
	}
	return pool
}

In Dial, we pass a function that, after connecting to the Redis server, tries to authenticate if a password is given, and then returns the established connection. The TestOnBorrow function is called when a connection is requested from the pool; in it you can check the connection for liveness. The second parameter is the time since the connection was returned to the pool. We simply send a PING every time.

Also in the package we have declared several constants:

const (
	KEY_SEPARATOR    = "|"    // key separator
	BIN_KEY          = "bins" // key for storing Bin objects
	REQUESTS_KEY     = "rq"   // key for storing the list of requests
	REQUEST_HASH_KEY = "rhsh" // key for storing requests in a hash table
	CLEANING_SET     = "cln"  // set holding Bin objects scheduled for cleaning
	CLEANING_FACTOR  = 3      // multiplier over the maximum number of requests
)

We get the keys according to this pattern:

func (storage *RedisStorage) getKey(keys ...string) string {
	return fmt.Sprintf("%s%s%s", storage.prefix, KEY_SEPARATOR, strings.Join(keys, KEY_SEPARATOR))
}
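
A self-contained sketch of this key scheme (the "skimmer" prefix is taken from the default config):

```go
package main

import (
	"fmt"
	"strings"
)

const KEY_SEPARATOR = "|"

// getKey joins the storage prefix and the given key parts with the separator,
// mirroring RedisStorage.getKey.
func getKey(prefix string, keys ...string) string {
	return fmt.Sprintf("%s%s%s", prefix, KEY_SEPARATOR, strings.Join(keys, KEY_SEPARATOR))
}

func main() {
	// e.g. the key under which the Bin named "abc123" is stored:
	fmt.Println(getKey("skimmer", "bins", "abc123")) // skimmer|bins|abc123
}
```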


To store our data in Redis, it needs to be serialized somehow. We will choose the popular msgpack format and use the widespread codec library.

We describe methods that serialize everything that is possible into binary data and vice versa:

func (storage *RedisStorage) Dump(v interface{}) (data []byte, err error) {
	var (
		mh codec.MsgpackHandle
		h  = &mh
	)
	err = codec.NewEncoderBytes(&data, h).Encode(v)
	return
}
func (storage *RedisStorage) Load(data []byte, v interface{}) error {
	var (
		mh codec.MsgpackHandle
		h  = &mh
	)
	return codec.NewDecoderBytes(data, h).Decode(v)
}

We now describe other methods.

Creating a Bin Object

func (storage *RedisStorage) UpdateBin(bin *Bin) (err error) {
	dumpedBin, err := storage.Dump(bin)
	if err != nil {
		return
	}
	conn := storage.pool.Get()
	defer conn.Close()
	key := storage.getKey(BIN_KEY, bin.Name) 
	conn.Send("SET", key, dumpedBin)
	conn.Send("EXPIRE", key, storage.binLifetime)
	conn.Flush()
	return err
}
func (storage *RedisStorage) CreateBin(bin *Bin) error {
	if err := storage.UpdateBin(bin); err != nil {
		return err
	}
	return nil
}


First, we serialize bin using the Dump method. Then we take a Redis connection from the pool (not forgetting that it must be returned, which defer takes care of).
Redigo supports pipelining: we can add a command to the buffer via the Send method, send everything from the buffer with Flush and read the results with Receive. The Do method combines all three commands in one call. You can also implement transactions; see the redigo documentation for more.

We send two commands: SET to save the Bin data under its name and EXPIRE to set the lifetime of this record.

Getting a Bin Object

func (storage *RedisStorage) LookupBin(name string) (bin *Bin, err error) {
	conn := storage.pool.Get()
	defer conn.Close()
	reply, err := redis.Bytes(conn.Do("GET", storage.getKey(BIN_KEY, name)))
	if err != nil {
		if err == redis.ErrNil {
			err = errors.New("Bin was not found")
		}
		return
	}
	err = storage.Load(reply, &bin)
	return
}

The redis.Bytes helper tries to read the response of conn.Do into a byte slice. If the object was not found, Redis returns the special error redis.ErrNil. If everything went well, the data is loaded into the bin object, which is passed to the Load method by reference.

Retrieving a List of Bin Objects

func (storage *RedisStorage) LookupBins(names []string) ([]*Bin, error) {
	bins := []*Bin{}
	if len(names) == 0 {
		return bins, nil
	}
	args := redis.Args{}
	for _, name := range names {
		args = args.Add(storage.getKey(BIN_KEY, name))
	}
	conn := storage.pool.Get()
	defer conn.Close()
	if values, err := redis.Values(conn.Do("MGET", args...)); err == nil {
		bytes := [][]byte{}
		if err = redis.ScanSlice(values, &bytes); err != nil {
			return nil, err
		}
		for _, rawbin := range bytes {
			if len(rawbin) > 0 {
				bin := &Bin{}
				if err := storage.Load(rawbin, bin); err == nil {
					bins = append(bins, bin)
				}
			}
		}
		return bins, nil
	} else {
		return nil, err
	}
}

Here almost everything is the same as in the previous method, except that the MGET command is used to fetch a slice of data, and the redis.ScanSlice helper loads the response into a slice of the desired type.

Creating a Request Object

func (storage *RedisStorage) CreateRequest(bin *Bin, req *Request) (err error) {
	data, err := storage.Dump(req)
	if err != nil {
		return
	}
	conn := storage.pool.Get()
	defer conn.Close()
	key := storage.getKey(REQUESTS_KEY, bin.Name)
	conn.Send("LPUSH", key, req.Id)
	conn.Send("EXPIRE", key, storage.binLifetime)
	key = storage.getKey(REQUEST_HASH_KEY, bin.Name)
	conn.Send("HSET", key, req.Id, data)
	conn.Send("EXPIRE", key, storage.binLifetime)
	conn.Flush()
	requestCount, err := redis.Int(conn.Receive())
	if err != nil {
		return
	}
	if requestCount < storage.maxRequests {
		bin.RequestCount = requestCount
	} else {
		bin.RequestCount = storage.maxRequests
	}
	bin.Updated = time.Now().Unix()
	if requestCount > storage.maxRequests * CLEANING_FACTOR {
		conn.Do("SADD", storage.getKey(CLEANING_SET), bin.Name)
	}
	if err = storage.UpdateBin(bin); err != nil {
		return
	}
	return
}

First, we push the request identifier onto the list of requests for bin.Name, then we save the serialized request into the hash table, in both cases not forgetting to set the lifetime. The LPUSH command returns the number of entries in the list (requestCount); if this number exceeds the maximum multiplied by CLEANING_FACTOR, we add this Bin to the candidates for the next cleanup.

Retrieving a request and the list of requests is done by analogy with Bin objects.

Cleaning

func (storage *RedisStorage) clean() {
	// Take the connection once, outside the loop: a defer inside the loop
	// would not run until clean() returns, leaking connections.
	conn := storage.pool.Get()
	defer conn.Close()
	for {
		binName, err := redis.String(conn.Do("SPOP", storage.getKey(CLEANING_SET)))
		if err != nil {
			break
		}
		conn.Send("LRANGE", storage.getKey(REQUESTS_KEY, binName), storage.maxRequests, -1)
		conn.Send("LTRIM", storage.getKey(REQUESTS_KEY, binName), 0, storage.maxRequests-1)
		conn.Flush()
		if values, err := redis.Values(conn.Receive()); err == nil {
			ids := []string{}
			if err := redis.ScanSlice(values, &ids); err != nil {
				continue
			}
			if len(ids) > 0 {
				args := redis.Args{}.Add(storage.getKey(REQUEST_HASH_KEY, binName)).AddFlat(ids)
				conn.Do("HDEL", args...)
			}
		}
	}
}

Unlike MemoryStorage, here we only clear excess requests, since object lifetime is already limited by the Redis EXPIRE command. First, we pop an item from the cleaning set, request the identifiers of the requests that fall outside the limit, and use the LTRIM command to trim the list down to the size we need. Then we remove the previously obtained identifiers from the hash table with the HDEL command, which accepts several keys at once.
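
In slice terms, what this LRANGE/LTRIM/HDEL combination achieves can be sketched like this (a pure-Go analogy, not redigo code):

```go
package main

import "fmt"

// trim keeps the first max ids and returns the overflow that should be
// deleted from the hash table — an analogy of LRANGE(max, -1) + LTRIM(0, max-1).
func trim(ids []string, max int) (kept, removed []string) {
	if len(ids) <= max {
		return ids, nil
	}
	return ids[:max], ids[max:]
}

func main() {
	kept, removed := trim([]string{"r1", "r2", "r3", "r4"}, 2)
	fmt.Println(kept, removed) // [r1 r2] [r3 r4]
}
```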

We have finished describing RedisStorage; next to it, in the redis_test.go file, you will find the corresponding tests.

Now let's add the ability to choose the storage when starting our application, in the api.go file:

type RedisConfig struct {
	RedisAddr			string
	RedisPassword		string
	RedisPrefix			string
}
type Config struct {
...
	Storage				string
	RedisConfig
}
func GetApi(config *Config) *martini.ClassicMartini {
	var storage Storage
	switch config.Storage{
	case "redis":
		redisStorage := NewRedisStorage(config.RedisAddr, config.RedisPassword, config.RedisPrefix, MAX_REQUEST_COUNT, BIN_LIFETIME)
		redisStorage.StartCleaning(60)
		storage = redisStorage
	default:
		memoryStorage := NewMemoryStorage(MAX_REQUEST_COUNT, BIN_LIFETIME)
		memoryStorage.StartCleaning(60)
		storage = memoryStorage
	}
...

We added a new Storage field to our configuration structure and, depending on it, initialize either RedisStorage or MemoryStorage. We also added a RedisConfig structure for Redis-specific options.

We also make changes to the main.go file being launched:

import (
	"skimmer"
	"flag"
)
var (
	config = skimmer.Config{
		SessionSecret: "secret123",
		RedisConfig: skimmer.RedisConfig{
			RedisAddr: "127.0.0.1:6379",
			RedisPassword: "",
			RedisPrefix: "skimmer",
		},
	}
)
func init() {
	flag.StringVar(&config.Storage, "storage", "memory", "available storages: redis, memory")
	flag.StringVar(&config.SessionSecret, "sessionSecret", config.SessionSecret, "")
	flag.StringVar(&config.RedisAddr, "redisAddr", config.RedisAddr, "redis storage only")
	flag.StringVar(&config.RedisPassword, "redisPassword", config.RedisPassword, "redis storage only")
	flag.StringVar(&config.RedisPrefix, "redisPrefix", config.RedisPrefix, "redis storage only")
}
func main() {
	flag.Parse()
	api := skimmer.GetApi(&config)
	api.Run()
}


We use the flag package, which makes it easy to add launch options to a program. In the init function we add the "storage" flag, which saves its value directly into the Storage field of our config, and also add the Redis options.
The init function is special in Go: it is always executed when the package is initialized. You can learn more about program initialization in the Go documentation.

Now, by launching our program with the --help option, we will see a list of available options:

> go run ./src/main.go --help
Usage of .../main:
  -redisAddr="127.0.0.1:6379": redis storage only
  -redisPassword="": redis storage only
  -redisPrefix="skimmer": redis storage only
  -sessionSecret="secret123":
  -storage="memory": available storages: redis, memory


Now we have an application that is still quite raw and unoptimized, but ready to work and run on servers.

In the third part, we will talk about deploying and running the application on GAE, Cocaine and Heroku, and about distributing it as a single executable file containing all the resources. We will write performance tests and do some optimization. We will learn how to proxy requests and respond with the necessary data. Finally, we will embed the distributed groupcache cache right inside the application.

I will be glad to any corrections and suggestions for the article.
