Golang

Empty Struct

An empty struct in Go has no data elements.

type S struct{}

The most important property of the empty struct is that its width is zero. This makes it possible to create a slice or channel of thousands of empty structs with a tiny memory footprint.

Here is a size comparison of the empty struct vs empty interface vs bool:

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	var s struct{}
	var i interface{}
	var b bool
	fmt.Println(unsafe.Sizeof(s), unsafe.Sizeof(i), unsafe.Sizeof(b))
}

On a 32-bit system: 0 8 1

On a 64-bit system: 0 16 1

Uses of empty struct

As a method receiver 

An empty struct{} can be used as a method receiver when you don’t need any data on the struct, just methods with predefined input and output. For example, you may want a mock for testing interfaces.
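For instance, a mock that satisfies an interface needs no fields at all. In the sketch below, the Greeter interface and the mockGreeter type are made up for illustration:

```go
package main

import "fmt"

// Greeter is a hypothetical interface we want to mock in tests.
type Greeter interface {
	Greet(name string) string
}

// mockGreeter holds no data, so an empty struct is the natural receiver.
type mockGreeter struct{}

func (mockGreeter) Greet(name string) string {
	return "Hello, " + name
}

func main() {
	var g Greeter = mockGreeter{}
	fmt.Println(g.Greet("world")) // Hello, world
}
```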

An empty struct channel

An empty struct is very useful in channels when you have to notify that some event occurred but don’t need to pass any information about it. Sending on a channel of empty structs only increments a counter inside the channel, without allocating memory or copying elements. Using boolean values for this purpose has a memory footprint that the empty struct avoids.
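A minimal sketch of this signalling pattern, using a done channel to announce that a goroutine has finished:

```go
package main

import "fmt"

// worker signals completion on done without passing any data.
func worker(done chan<- struct{}) {
	fmt.Println("working...")
	done <- struct{}{} // zero-byte notification
}

func main() {
	done := make(chan struct{})
	go worker(done)
	<-done // block until the worker signals
	fmt.Println("done")
}
```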

As a Set data type

Go has no Set data type. A set can be easily emulated using map[keyType]struct{}. This way the map keeps only keys and no values.
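A short sketch of the set idiom:

```go
package main

import "fmt"

func main() {
	// keys are the set members; the zero-byte values cost nothing
	set := make(map[string]struct{})
	set["apple"] = struct{}{}
	set["pear"] = struct{}{}

	// membership test
	if _, ok := set["apple"]; ok {
		fmt.Println("apple is in the set")
	}

	delete(set, "apple")
	fmt.Println(len(set)) // 1
}
```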


Golang Net HTTP Package

Golang’s net/http package can be used to build a web server in minutes. It packs in a pretty wide use of Go concepts like functions, interfaces, and types to achieve this.

Here is a basic web server using Go:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", handlerHelloWorld)
	http.ListenAndServe(":8082", nil)
}

func handlerHelloWorld(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello world")
}

If we run the above server, we can make a GET request and the server will respond with “Hello world”.

What we need to understand is that, in the background, the package runs a ServeMux to map the URL to the handler.

What is ServeMux?

A ServeMux is an HTTP request multiplexer, or router, that matches incoming requests against a set of registered patterns and calls the associated handler for each pattern.

http.ListenAndServe has the following signature

func ListenAndServe(addr string, handler Handler) error

If we pass nil as the handler, as we did in our basic server example, the DefaultServeMux will be used.

The ServeMux struct has the following four methods that are key to the working of the http package:

func (mux *ServeMux) Handle(pattern string, handler Handler)
func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request))
func (mux *ServeMux) Handler(r *Request) (h Handler, pattern string)
func (mux *ServeMux) ServeHTTP(w ResponseWriter, r *Request)

What is a Handler?

Notice that ServeMux has a method named Handler that takes in a reference to an http.Request and returns an object of type Handler. Made my head spin a bit when I first saw that!

But looking under the hood, it turns out http.Handler is simply an interface. Any object can be made a handler as long as it implements the ServeHTTP method with the following signature.

 ServeHTTP(ResponseWriter, *Request)

So essentially the default ServeMux is itself a Handler, since it implements ServeHTTP.

HandleFunc and Handle

In our simple server code above, we did not define a Handler that implements ServeHTTP, nor did we define a ServeMux. Instead we called HandleFunc with the pattern and the function that would handle the response.

This is the source code for HandleFunc in the net/http package

func HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
	DefaultServeMux.HandleFunc(pattern, handler)
}

Internally this calls the DefaultServeMux’s HandleFunc. If you take a look at the implementation of HandleFunc within ServeMux, here is what you’ll find:

func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
	if handler == nil {
		panic("http: nil handler")
	}
	mux.Handle(pattern, HandlerFunc(handler))
}

From the net/http source, we find that the HandlerFunc type is an adapter that allows the use of ordinary functions as HTTP handlers.

type HandlerFunc func(ResponseWriter, *Request)

// ServeHTTP calls f(w, r).
func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
	f(w, r)
}

HandlerFunc makes it possible to turn any function with the right signature into a Handler. So in our simple server example above, we could change the HandleFunc call to a call to the Handle function. All we would have to do is wrap our function in HandlerFunc.

http.Handle("/", http.HandlerFunc(indexHandlerHelloWorld))

The Handle function is used when we want to use a custom Handler in our code. 

To demonstrate some of these concepts, here is a simple example of a chat server that receives messages and logs them. It uses a custom Handler that is passed to a ServeMux.

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

type MessageDigest struct {
	Text   string `json:"message"`
	ToUser string `json:"to"`
}

type ChatHandler struct{}

func (c *ChatHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	if r.Body == nil {
		return
	}
	var msg MessageDigest
	body, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
		return
	}
	err = json.Unmarshal(body, &msg)
	if err != nil {
		http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK)
	fmt.Println("Message for ", msg.ToUser, ": ", msg.Text)
}

func main() {
	mux := http.NewServeMux()
	chatHandler := new(ChatHandler)
	mux.Handle("/ws", chatHandler)
	log.Fatal(http.ListenAndServe(":8080", mux))
}

API Performance Testing

The goal of API performance tests is to conduct load tests that run broadly across all endpoints of an API and help us understand the distribution of throughput in requests per second: average, peak, etc.

It is important to record response times and resource utilization at average and peak loads. This will allow us to determine system response times, network latency, etc. We should also be able to determine the concurrency and processing overhead of the API. We should measure performance when concurrent instances are instantiated with instructions to run load-testing scripts.

Tooling

Vegeta

Vegeta is an easy to use command line tool for API load testing.

https://github.com/tsenart/vegeta

Testing can be done in 3 simple steps:

  • Install
$ brew update && brew install vegeta
  • Run: the API endpoints to test can be listed in a file called targets.txt
vegeta -cpus 4 attack -targets targets.txt -rate 50 -duration 30s | tee results.bin | vegeta report
  • Plot
cat results.bin | vegeta plot > plot.html
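For reference, a targets.txt file is a list of HTTP method and URL pairs, one target per line. The endpoints below are placeholders:

```
GET https://api.example.com/v1/users
GET https://api.example.com/v1/products
POST https://api.example.com/v1/orders
```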

One limitation of vegeta is that cookie sessions are not supported, which shouldn’t be an issue if we follow the JWT stateless model that is scalable and avoids sessions.

K6

k6 is another modern load testing tool that allows us to easily create load test scenarios based on virtual users and simulated traffic configurations.

https://docs.k6.io/docs

  • Install
$ brew tap loadimpact/k6 && brew install k6
  • Run an ES6 JavaScript file that defines which endpoints to test and what custom metrics and thresholds need to be gathered.
k6 run --vus 100 --duration 5m --out json=outputs/result.json  k6/script.js
vus defines the number of concurrent virtual users that send API requests in parallel.
  • Plot
We can output to an influxDB instance and plot this using a UI tool like Grafana
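A minimal k6 script might look like the sketch below. The endpoint URL and the threshold value are assumptions, and the script only runs under the k6 runtime (k6 run), not plain Node:

```javascript
// k6/script.js — a minimal sketch; URL and threshold are placeholders
import http from "k6/http";
import { check, sleep } from "k6";

export let options = {
  thresholds: {
    // fail the run if the 95th percentile response time exceeds 500ms
    http_req_duration: ["p(95)<500"],
  },
};

export default function () {
  let res = http.get("https://api.example.com/v1/health");
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1);
}
```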

Types of Performance Test

  • Stress test: Determine what is the maximum number of concurrent users that the system supports with an acceptable user experience.
  • Soak test: Used to find problems that arise when a system is under pressure for extended periods of time. The test is run for a longer duration and is used to find long-term problems such as memory leaks, resource leaks or corruption, and degradation that occurs over time.
  • Spike test: Spike tests are vital to testing how well your API performs at peak times. They ensure your API can handle a large number of users arriving in a very short amount of time, e.g. if you are running a holiday ad campaign and see a significant rise in traffic.

Auto-generate code using Go templates

Go has two template packages: an HTML template package that can be used to templatize HTML files, and a more general text template package.

The Go text template package can easily be used to generate custom code in JavaScript, Swift, React, etc.

https://golang.org/pkg/text/template/

This package can be very useful for building products for varied customers that are similar but require their own company logo, names, images and text branding.

In fact, this can be implemented in any codebase that is very similar but needs a few tweaks to work for different platforms or products.

I have used Go text templates for code generation in the following scenarios:

  1. Generating multiple Chromium, Firefox, Edge extensions for varied products.
  2. Interfacing with different APIs to get data into our main systems.

I’m also working on using it to auto generate custom mobile apps.

Here, I will explain in detail how I build new Chrome and Firefox extensions in seconds using Go templates. This can be expanded to include Microsoft Edge extensions as well.

Start with a basic chrome extension

There is a detailed description on how to build a chrome extension. 

https://developer.chrome.com/extensions/getstarted

Once you have a skeletal Chrome extension, it is easy to duplicate it to create multiple custom extensions if needed, or even build a Firefox extension using the same set of extension files.

All you need is a set of template JavaScript files for your extension and your config JSON values.

Templatize your extension

I start out by taking each extension file and saving it with a .template extension. This way, I can recognize that it needs to be parsed to replace values.

Files that do not need any customizations can be left as is in the folder.

E.g. The chrome extension manifest.json.template file will look like this with template fields:

{
    "name": "{{.Name}}",
    "version": "{{.Version}}",
    "manifest_version": 2,
    "default_locale": "en",
    "description": "{{.Description}}",
    "background": {
        "page": "background.html"
    },
    "browser_action": {
        "default_title": "{{.Tooltip}}",
        "default_icon": "icon.png"
    },
    "content_scripts": [
        {
            "js": ["contentscript.js"]
        }
    ],
    "icons": {
        "16": "icon16.png",
        "48": "icon48.png",
        "128": "icon128.png"
    },
    "homepage_url": "{{.Protocol}}://{{.Domain}}",
    "permissions": [
        "tabs",
        "{{.Protocol}}://{{.Domain}}/"
    ]
}

Similarly, we write template files for all the extension files, like popup.js etc.

To build a basic chrome extension, I also define the following in a global.json file. This is external to your extension files and will be used by the Go build system.

{
    "production": "true",
    "author": "Maria De Souza",
    "author_home": "https://mariadesouza.com/",
    "protocol": "https",
    "version": "0.0.0.1",
    "domain": "www.mysite.com",
    "name": "testExtension",
    "description": "This is a test extension",
    "tooltip": "Click here to view settings",
    "title": "Test extension"
}

Gather Customized Settings

I create a folder per customer product where I keep the images related to the customer. This source folder is used later when creating the extension.

The global settings can be overridden by a product specific json file:

{
    "title": "My cool new extension",
    "version": "0.0.0.2",
    "domain": "www.myotherwebsite.com"
}

The customized product.json can be a subset of the original global.json.

Write the build Script

Now we can get to the fun part, building a script to generate the extensions.

Templates are executed by applying them to a data structure. We first define a struct to unmarshal our config JSON into and use it in our build script. Notice that the JSON tags correspond to the JSON field names in the global.json and product.json files.

type Config struct {
	Production  string `json:"production,omitempty"`
	Author      string `json:"author,omitempty"`
	AuthorHome  string `json:"author_home,omitempty"`
	Version     string `json:"version,omitempty"`
	Domain      string `json:"domain,omitempty"`
	Protocol    string `json:"protocol,omitempty"`
	Name        string `json:"name,omitempty"`
	Description string `json:"description,omitempty"`
	Tooltip     string `json:"tooltip,omitempty"`
	Title       string `json:"title,omitempty"`
	Browser     string `json:"browser,omitempty"`
	ProductDir  string `json:"product_dir,omitempty"`
	HomePage    string `json:"home_page,omitempty"`
}

The script starts by unmarshalling the global file in a struct value as below. Note that I have left out error handling to reduce noise. The second part will Unmarshal the custom product values.

var globalConfig Config 
configFile, _ := ioutil.ReadFile("global.json")
json.Unmarshal(configFile, &globalConfig)

var productConfig Config
productconfigFile, _ := ioutil.ReadFile("product.json")
json.Unmarshal(productconfigFile, &productConfig)

Using reflect, I fill in any product values that were not customized with the global defaults:

func mergeWithGlobal(globalConfig, productConfig *Config) {
	st := reflect.TypeOf(*globalConfig)
	for i := 0; i < st.NumField(); i++ {
		field := st.Field(i)
		tag := strings.Split(field.Tag.Get("json"), ",")
		v := reflect.ValueOf(productConfig).Elem().FieldByName(field.Name)
		// only fill in fields the product file left empty
		if tag[0] != "" && v.String() == "" {
			v2 := reflect.ValueOf(globalConfig).Elem().FieldByName(field.Name)
			v.SetString(v2.String())
		}
	}
}

Using this Config struct, I can now populate the template files. To do this, I read all files with the extension .template in the source directory, execute the template using the populated Config struct, and save the result in the destination directory.

Here is what the code would look like. Again, I removed some error handling so the main flow is understandable, but error handling should definitely be part of your code.


func populateTemplateFiles(source, dest string, config *Config) error {
    // make the destination directory
    os.MkdirAll(dest, 0755)

    re := regexp.MustCompile(`(.*)\.template$`)
    files, _ := ioutil.ReadDir(source)

    for _, file := range files {

        filename := file.Name()
        // if it is a template file, read it and populate the tags
        if re.MatchString(filename) {

            buf, _ := ioutil.ReadFile(filepath.Join(source, filename))
            tmpl, _ := template.New("extensions").Parse(string(buf))

            // the final file drops the .template extension
            targetfilename := strings.Split(filename, ".template")
            targetfile := filepath.Join(dest, targetfilename[0])

            f, err := os.OpenFile(targetfile, os.O_WRONLY|os.O_CREATE, 0755)
            if err != nil {
                fmt.Println("Failed to create file", targetfile, err)
                continue
            }
            w := bufio.NewWriter(f)
            tmpl.Execute(w, config)
            w.Flush()

        } else {

            // not a template file - copy as is
            copyFile(filepath.Join(dest, filename), filepath.Join(source, filename))
        }
    }
    return nil
}

The customized images in the product directory are copied into the destination directory.

The destination directory will then contain all the customized javascript, other javascript files and custom images.

We can then upload a zipped version of the destination directory to the Chrome Webstore. I have a make file that also generates a pem file and zips the directory.

Extending build system for Firefox

I build Firefox extensions from the same set of Chrome extension files.

This is the link to developing a Firefox browser extension:

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions

Firefox and Chrome extensions have very subtle differences in the way they are implemented. They are both based largely on the WebExtensions API for cross-browser compatibility.

Notice that my earlier declaration of Config struct had the following field

 Browser     string `json:"browser,omitempty"`

I dynamically change this in my Go script to instruct the system to generate a new set of extension files for Firefox.

I set the value for Browser in my code and then build for Firefox.

productConfig.Browser = "Firefox"

In fact, you can parameterize the Go script to accept a command line argument that will control which browser you want to build for. The default could be all.
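A sketch of that command-line switch using the standard flag package; the function and flag names below are my own, not taken from the build script above:

```go
package main

import (
	"flag"
	"fmt"
)

// targetsFor maps the command-line choice to the browsers to build for.
func targetsFor(browser string) []string {
	switch browser {
	case "chrome":
		return []string{"Chrome"}
	case "firefox":
		return []string{"Firefox"}
	default: // "all"
		return []string{"Chrome", "Firefox"}
	}
}

func main() {
	// -browser selects which extension set to generate; the default builds all
	browser := flag.String("browser", "all", "browser to build for: chrome, firefox, or all")
	flag.Parse()
	for _, t := range targetsFor(*browser) {
		fmt.Println("building extension for", t)
	}
}
```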

The differences between Firefox and Chrome will be conditionals in your template files. It depends on what functionality you are using within your extensions.

E.g. to send a message back to the background script from the content script, I use the template system to conditionally generate code for Firefox vs Chrome using the config value.

{{if eq .Browser "Firefox"}}
browser.runtime.sendMessage({ name: "options", value: event.target.value });
{{else}}
chrome.extension.sendRequest({ name: "options", value: event.target.value });
{{end}}

Another example of this would be to add a FirefoxID field to my Config struct; then in my manifest.json.template, I add

{{if eq .Browser "Firefox"}}
"applications": {
    "gecko": {
        "id": "{{.FirefoxID}}",
        "strict_min_version": "42.0"
    }
},
{{end}}

Microsoft Edge extensions are also similar and the same set of template files can be used to auto-generate Edge with a few tweaks.


Launch a Golang web server using Docker

If you want to create a web server using Go, the simplest way to deploy it is using Docker. Go code compiles to a single binary and does not need a special environment to run.

Here is the simplest web server code in Go to get started. Save this as webserver.go

package main

import (
	"fmt"
	"log"
	"net/http"
	"runtime"
)

func main() {
	http.HandleFunc("/", indexHandlerHelloWorld)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

func indexHandlerHelloWorld(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello world, I'm running search on %s with an %s CPU ", runtime.GOOS, runtime.GOARCH)
}

We can use the simplest Docker image, scratch, and add a directive to copy the binary to the server. Save this as Dockerfile.

FROM scratch 
MAINTAINER Maria De Souza <maria.g.desouza@gmail.com>

ADD go-webserver go-webserver
ENTRYPOINT ["/go-webserver"]
EXPOSE 8080

We set up a start shell script to build the binary and spin up the Docker container:

#!/bin/bash 
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -a -o go-webserver webserver.go || exit 1

if [ "$(docker ps -q -f name=go-web-server)" ]; then
docker stop $(docker ps -a -q --filter name=go-web-server --format="{{.ID}}")
fi

docker build -t go-web-server . || exit 1

docker run -p 8000:8080 go-web-server || exit 1

You can use the -h option of docker run to set the hostname for the webserver.

Voila! Now your webserver is running. Navigate to http://localhost:8000/ to test.

Known Issue

If you make any SSL requests from your webserver, you will see the following error when running the webserver using Docker:

x509: failed to load system roots and no roots provided

This is because /etc/ssl/certs/ca-certificates.crt, which is read by the Go tls package, is missing from the scratch image.

To avoid this, copy the cert into your Docker image from your local system. I normally add this to my bash script, which copies it based on the OS.

if [ ! -e ca-certificates.crt ]; then
    if [[ $(uname) = "Darwin" ]]; then
        cp /usr/local/etc/openssl/cert.pem ca-certificates.crt || exit 1
    else
        cp /etc/ssl/certs/ca-certificates.crt ca-certificates.crt || exit 1
    fi
fi

This is what your Dockerfile should then look like:

FROM scratch  
MAINTAINER Maria De Souza <maria.g.desouza@gmail.com>

COPY ca-certificates.crt /etc/ssl/certs/ca-certificates.crt

ADD go-webserver go-webserver
ENTRYPOINT ["/go-webserver"]
EXPOSE 8080

Packages on the Go

Package management with Go is a much talked about issue. Unfortunately, go get does not support fetching specific tags or versions; it gets the package from the HEAD in git.

Recently we had a situation where a developer on our team used a package that then became obsolete. The package developer had tagged the release before making breaking changes, and the sources were still available, but we had to do a git checkout of that tag to keep using it.

That is when I looked into package management. Go 1.5 introduced the “vendor” directory as an experiment and made it official in Go 1.6.

If you use third-party packages in your product, copy them to the vendor directory; go then searches for dependencies there.

Ex.

package main

import (
	"fmt"
	"io"

	"github.com/mariadesouza/sftphelper"
)

 

main.go
vendor
 |--github.com
 |    |--mariadesouza
 |    |    |--sftphelper
 |    |    |    |--LICENSE
 |    |    |    |--README.md
 |    |    |    |--sftphelper.go

Pass by value or reference

The official Go site FAQ states,  “As in all languages in the C family, everything in Go is passed by value”. This is because the function gets a copy of everything that is passed in.

Is there such thing as pass by reference in Go?

There are different views as to what exactly pass by reference means in Go. Some strongly maintain there is no such thing as pass by reference. In C++ terms, pass by reference means you pass a reference or a pointer to the actual data structure rather than the data itself. The function can then modify the value of the argument using that reference.

In Go, when I pass a pointer to a struct, for example, whether or not it’s a copy of the pointer, I am not passing the struct itself but a pointer, or a reference, to it. I can modify the actual struct using the pointer. In my view, that fits the definition of pass by reference.

When to pass a pointer?

We don’t need to pass pointers to maps and slices, as they are already descriptors that contain pointers to the actual map or slice data.

Compelling arguments to use pointer receiver and pass by reference:

  • You want to modify the receiver. With value receivers you can’t modify the struct itself.
  • It is a big struct. Deep copying it would be costly.
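A small example of the first point, contrasting value and pointer receivers (the Counter type is made up for illustration):

```go
package main

import "fmt"

type Counter struct{ n int }

// IncValue has a value receiver: it increments a copy,
// so the caller's Counter is unchanged.
func (c Counter) IncValue() { c.n++ }

// IncPointer has a pointer receiver: it increments the caller's Counter.
func (c *Counter) IncPointer() { c.n++ }

func main() {
	c := Counter{}
	c.IncValue()
	fmt.Println(c.n) // 0

	c.IncPointer()
	fmt.Println(c.n) // 1
}
```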

When you pass a slice to a function, you get a copy of the slice header, which contains a pointer to the underlying array. The copy still points to the same underlying array segment, so any modifications made to the slice elements within the function will be seen outside.

https://play.golang.org/p/LrrHtK86WmC

However, if you append an element, remember that a new slice may be created with the elements copied over, so you will lose the appended elements if this happens within a function unless you return the slice. E.g. append from the stdlib returns a new slice.
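A short sketch contrasting in-place modification, which is visible to the caller, with append, whose result must be returned (the function names are made up):

```go
package main

import "fmt"

// modify changes an existing element: visible to the caller.
func modify(s []int) { s[0] = 99 }

// grow appends: it may allocate a new backing array,
// so the resulting slice must be returned.
func grow(s []int) []int { return append(s, 4) }

func main() {
	s := []int{1, 2, 3}
	modify(s)
	fmt.Println(s) // [99 2 3]

	grow(s)             // result discarded: s is unchanged
	fmt.Println(len(s)) // 3

	s = grow(s)
	fmt.Println(s) // [99 2 3 4]
}
```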

Whatever you choose to use, stay consistent. Coming from a C++ background, I always thought pass by reference was cheaper than passing by value, but apparently that is not always the case in Go.

Interesting Reads:

http://goinbigdata.com/golang-pass-by-pointer-vs-pass-by-value/