
Golang Net HTTP Package

Golang’s net/http package can be used to build a web server in minutes. It packs in a pretty wide use of Golang concepts like functions, interfaces, and types to achieve this.

Here is a basic web server using Go:

package main

import (
    "fmt"
    "net/http"
)

func main() {
    http.HandleFunc("/", handlerHelloWorld)
    http.ListenAndServe(":8082", nil)
}

func handlerHelloWorld(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello world")
}

If we run the above server, we can make a GET request to it and the server will respond with “Hello world”.

What we need to understand is that, in the background, the package uses a ServeMux to map the URL to the handler.

What is ServeMux?

A ServeMux is an HTTP request multiplexer or router that matches incoming requests against a set of registered patterns and calls the associated handler for that pattern.

http.ListenAndServe has the following signature:

func ListenAndServe(addr string, handler Handler) error

If we pass nil as the handler, as we did in our basic server example, the DefaultServeMux will be used.

The ServeMux struct has the following four methods that are key to the working of the http package:

func (mux *ServeMux) Handle(pattern string, handler Handler)
func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request))
func (mux *ServeMux) Handler(r *Request) (h Handler, pattern string)
func (mux *ServeMux) ServeHTTP(w ResponseWriter, r *Request)

What is a Handler?

Notice that ServeMux has a method named Handler that takes in a pointer to an http.Request and returns an object of type Handler. That made my head spin a bit when I first saw it!

But looking under the hood, it turns out that http.Handler is simply an interface. Any object can be made a handler as long as it implements the ServeHTTP method with the following signature.

 ServeHTTP(ResponseWriter, *Request)

So essentially the default ServeMux is a type of Handler since it implements ServeHTTP.
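
To make that concrete, here is a minimal sketch of a custom type satisfying the Handler interface; the timeHandler type and the port are made up for illustration and are not part of the examples in this post:

package main

import (
    "fmt"
    "net/http"
    "time"
)

// timeHandler satisfies http.Handler because it implements ServeHTTP.
type timeHandler struct{}

func (t timeHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "The time is", time.Now().Format(time.RFC1123))
}

func main() {
    // A custom Handler can be registered directly with Handle.
    http.Handle("/time", timeHandler{})
    http.ListenAndServe(":8083", nil)
}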

HandleFunc and Handle

In our simple server code above, we did not define a Handler that implements ServeHTTP, nor did we define a ServeMux. Instead we called HandleFunc with the pattern and the function that would handle the response.

This is the source code for HandleFunc in the net/http package:

func HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
    DefaultServeMux.HandleFunc(pattern, handler)
}

Internally this calls the DefaultServeMux’s HandleFunc. If you take a look at the implementation of HandleFunc within ServeMux, here is what you’ll find:

func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
    if handler == nil {
        panic("http: nil handler")
    }
    mux.Handle(pattern, HandlerFunc(handler))
}

From the net/http source, we find that the HandlerFunc type is an adapter that allows the use of ordinary functions as HTTP handlers.

type HandlerFunc func(ResponseWriter, *Request)

// ServeHTTP calls f(w, r).
func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
    f(w, r)
}

The HandlerFunc type makes it possible to turn any function with the right signature into a Handler. So in our simple server example above, we could change the HandleFunc call to a call to the Handle function. All we would have to do is wrap our function in HandlerFunc.

http.Handle("/", http.HandlerFunc(indexHandlerHelloWorld))

The Handle function is used when we want to use a custom Handler in our code. 

To demonstrate the use of some of these concepts, here is a simple example of a chat server handler that receives messages and prints them. It uses a custom Handler that is passed to a ServeMux.

package main

import (
    "encoding/json"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
)

type MessageDigest struct {
    Text   string `json:"message"`
    ToUser string `json:"to"`
}

type ChatHandler struct{}

func (c *ChatHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    if r.Body == nil {
        return
    }
    var msg MessageDigest
    body, err := ioutil.ReadAll(r.Body)
    if err != nil {
        http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
        return
    }
    err = json.Unmarshal(body, &msg)
    if err != nil {
        http.Error(w, http.StatusText(http.StatusInternalServerError), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusOK)
    fmt.Println("Message for ", msg.ToUser, ": ", msg.Text)
}

func main() {
    mux := http.NewServeMux()
    chatHandler := new(ChatHandler)
    mux.Handle("/ws", chatHandler)
    log.Fatal(http.ListenAndServe(":8080", mux))
}

API Performance Testing

The goal of API performance tests is to conduct load tests that run broadly across all endpoints of an API and help us understand the distribution of throughput in requests per second – average, peak, etc.

It is important to record response times and resource utilization at average and peak loads. This allows us to determine system response times, network latency, and the concurrency and processing overhead of the API. We should also measure performance when concurrent instances are spawned with instructions to run load-testing scripts.

Tooling

Vegeta

Vegeta is an easy-to-use command line tool for API load testing.

https://github.com/tsenart/vegeta

Testing can be done in 3 simple steps:

  • Install
$ brew update && brew install vegeta
  • Run – the APIs to test are listed in a file called targets.txt (see the sample targets file after this list)
vegeta -cpus 4 attack -targets targets.txt -rate 50 -duration 30s | tee results.bin | vegeta report
  • Plot
cat results.bin | vegeta plot > plot.html
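
A targets file is simply a plain-text list with one request per line, an HTTP method followed by a URL. A minimal sketch (the host and paths are placeholders):

GET https://api.example.com/v1/users
GET https://api.example.com/v1/orders
POST https://api.example.com/v1/login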

One limitation of Vegeta is that cookie sessions are not supported, which shouldn’t be an issue if we follow the stateless JWT model, which is scalable and avoids sessions.

K6

k6 is another modern load testing tool that allows us to easily create load test scenarios based on virtual users and simulated traffic configurations.

https://docs.k6.io/docs

  • Install
$ brew tap loadimpact/k6 && brew install k6
  • Run an ES6 JavaScript file that defines which endpoints to test and what custom metrics and thresholds need to be gathered.
k6 run --vus 100 --duration 5m --out json=outputs/result.json k6/script.js
vus defines the number of virtual users that send API requests in parallel.
  • Plot
We can output to an InfluxDB instance and plot the results using a UI tool like Grafana (see the example run below).
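
For example, a run that streams results to a local InfluxDB instance might look like this (the database name k6db and the address are assumptions):

k6 run --vus 100 --duration 5m --out influxdb=http://localhost:8086/k6db k6/script.js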

Types of Performance Test

  • Stress test: Determines the maximum number of concurrent users that the system supports with an acceptable user experience.
  • Soak test: Used to find problems that arise when a system is under pressure for extended periods of time. The test is run for a longer duration to find long-term problems such as memory leaks, resource leaks, corruption, and degradation that occur over time.
  • Spike test: Spike tests are vital to testing how well your API performs at peak times. They ensure your API can handle a large number of users arriving in a very short amount of time, e.g. if you are running a holiday ad campaign and see a significant rise in traffic.

Channels and Workerpools

Goroutines are part of the Golang core. They are similar to lightweight threads. We run a goroutine using the go keyword.

go matchRecordsWithDb(record)

Channels

Channels are a way to synchronize and communicate with goroutines.

ch := make(chan string) 
ch <- "test" //Send data to unbuffered channel
v := <-ch //receive data from channel and assign to v

The receive here will block till data is available on the channel.

Most programs use multiple goroutines and buffered channels.

doneCh := make(chan bool, 4)

Here we will be able to run 4 goroutines without the sends blocking.
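
As a minimal sketch of that idea (the worker function and the work it does are made up for illustration), four goroutines can signal completion over the buffered channel without blocking:

package main

import "fmt"

func worker(id int, doneCh chan bool) {
    fmt.Println("worker", id, "finished")
    doneCh <- true // the channel has capacity 4, so this send does not block
}

func main() {
    doneCh := make(chan bool, 4)
    for i := 0; i < 4; i++ {
        go worker(i, doneCh)
    }
    // drain the channel to wait for all 4 workers
    for i := 0; i < 4; i++ {
        <-doneCh
    }
}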

select

select is like a switch with cases that allows you to wait on multiple communication operations. It blocks until one of its cases is ready to proceed. A select with a default clause is a way to implement non-blocking sends and receives.

select {
case ac <- a:
    // sent a on ac
case b := <-bc:
    // received b from bc
}

WorkerPools

I encountered the classic scenario where I had to make thousands of database calls to match records from a file to database entries. Finding viable matches in the database for each line in the file was slow and proving to be too much of a hassle. I wanted to add concurrency to my calls to get this done faster. However, I was restricted by the database connection: I could only send a set number of queries at a time or it would error out.

I started with the naive approach: I created a buffered channel of 100 and fired off the matching routine for each record. The requirement was to match against a key in a table and return results. This was some improvement: it did about 100 queries, waited for those to finish, and then started the next batch of 100.

const workers = 100
jobsCh := make(chan int, workers)
for rowIdx := 0; rowIdx < len(fileRecords); rowIdx += workers {
    var j int
    for j = 0; j < workers; j++ {
        if (rowIdx + j) >= len(fileRecords) {
            break
        }
        go matchRecordsWithDb(jobsCh, &fileRecords[rowIdx+j])
    }
    // wait for the batch of workers to return
    for i := 0; i < j; i++ {
        fmt.Printf("%d", <-jobsCh)
    }
}

There was a major snag in this approach. We had a condition that if the line in the file didn’t have the main ID field, we had to query on another field. This field was not indexed and took a while to query. It is a very old legacy system, so I couldn’t change the indexing at this point.

In my entire file I had one such record. The batch of 100 workers that included this one query waited almost a minute and a half on it, while the 99 others had already finished. That is when I started looking at design patterns with channels and came across worker pools.

A worker pool is an approach to concurrency in which a fixed number of m workers work through n tasks in a work queue. Rather than waiting on a whole batch of workers at once, idle workers are assigned new jobs as they become free.

The three main players are :

Collector:  Gathers all the jobs

AvailableWorkers Pool: Buffered channel of channels that is used to process the requests

Dispatcher: Pulls work requests off the Collector and sends them to available channels

All the jobs are added to the collector pool. The dispatcher picks jobs off the collector. If there are available workers, it gives them the job; otherwise it tries to create one. If all m workers are busy doing jobs, the dispatcher waits for one to complete before assigning the job.

After reading about and experimenting with worker pools, I have written a workerpool package that can be used directly or as a guideline to implement your own.

https://github.com/mariadesouza/workerpool
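
The package above follows the collector/dispatcher design described earlier. As a much simpler sketch of the underlying fixed-worker idea (the job strings and the work done are made up for illustration), a pool can also be built with a shared jobs channel that idle workers pull from:

package main

import (
    "fmt"
    "sync"
)

// worker pulls jobs off the shared channel until it is closed.
func worker(id int, jobs <-chan string, results chan<- string, wg *sync.WaitGroup) {
    defer wg.Done()
    for job := range jobs {
        // stand-in for the real work, e.g. a database lookup
        results <- fmt.Sprintf("worker %d matched %s", id, job)
    }
}

func main() {
    const numWorkers = 4
    jobs := make(chan string, numWorkers)
    results := make(chan string, numWorkers)

    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go worker(i, jobs, results, &wg)
    }

    // feed the queue; idle workers pick up jobs as they finish previous ones
    go func() {
        for _, rec := range []string{"rec1", "rec2", "rec3", "rec4", "rec5"} {
            jobs <- rec
        }
        close(jobs)
    }()

    // close results once all workers are done
    go func() {
        wg.Wait()
        close(results)
    }()

    for r := range results {
        fmt.Println(r)
    }
}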


Microservices with gRPC

A microservice is an independently runnable service that does one task effectively. The concept is that rather than having one monolithic application, we break it up into independent services that can be easily maintained.

To effectively use microservices there has to be a way for the various independent services to communicate with each other.

There are two common ways for microservices to communicate:

1. REST, such as JSON or XML over HTTP
2. gRPC – a lightweight RPC protocol brought out by Google

What is gRPC?

To understand gRPC we first take a look at RPC.

RPC (Remote Procedure Call) is a form of inter-process communication (IPC) between processes that have different address spaces. It is a kind of request–response protocol that enables data exchange and invocation of functionality residing in a different address space or process.

gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. A client application can call methods on a server application as if it were a local object.

  • gRPC uses the newer HTTP/2 spec
  • It allows for bidirectional streaming
  • It uses binary rather than text and that helps keep the payload compact and efficient.

This is Google’s announcement for gRPC.

So what’s the “g” in gRPC? Google? Per the official FAQ page, gRPC stands for gRPC Remote Procedure Calls, i.e. it is a recursive acronym.

Protocol Buffers

gRPC uses protocol buffers as its Interface Definition Language (IDL) for describing both the service interface and the structure of the payload messages.

Protocol buffers are a mechanism for serializing structured data. Define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams and using a variety of languages.

https://developers.google.com/protocol-buffers/docs/overview

Specify how you want the information you’re serializing to be structured by defining protocol buffer message types in .proto files. This message is encoded to the protocol buffer binary format.

message Person {
  required string name = 1;
  required int32 id = 2;
  optional string email = 3;
}

gRPC in Go

go get -u google.golang.org/grpc
go get -u github.com/golang/protobuf/protoc-gen-go

Protobuf allows you to define an interface to your service using a developer-friendly format.
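
As a rough sketch of how the pieces fit together, here is a minimal gRPC server in Go. The Greeter service, the HelloRequest/HelloReply messages, and the pb import path are assumptions – they would come from a hypothetical .proto file compiled with protoc-gen-go:

package main

import (
    "context"
    "log"
    "net"

    "google.golang.org/grpc"

    pb "example.com/greeter/proto" // hypothetical package generated by protoc-gen-go
)

// greeterServer implements the service interface generated from the .proto definition.
type greeterServer struct{}

func (s *greeterServer) SayHello(ctx context.Context, req *pb.HelloRequest) (*pb.HelloReply, error) {
    return &pb.HelloReply{Message: "Hello " + req.Name}, nil
}

func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatal(err)
    }
    s := grpc.NewServer()
    pb.RegisterGreeterServer(s, &greeterServer{}) // registration helper is generated code
    log.Fatal(s.Serve(lis))
}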


Auto-generate code using Go templates

Go has two template packages: an HTML template package that can be used to templatize HTML files and a more general text template package.

The Go text template package can easily be used to generate custom code like JavaScript, Swift, React, etc.

https://golang.org/pkg/text/template/
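
As a quick reference, here is a minimal sketch of the text/template package in action; the template string and struct are made up for illustration and are not part of the extension build described below:

package main

import (
    "os"
    "text/template"
)

func main() {
    // {{.Name}} is replaced by the Name field of the data passed to Execute.
    t := template.Must(template.New("greeting").Parse("Hello, {{.Name}}!\n"))
    t.Execute(os.Stdout, struct{ Name string }{"Gopher"})
}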

This package can be very useful for building products for varied customers that are similar but require their own company logo, names, images and text branding.

In fact, this can be implemented in any codebase that is very similar but needs a few tweaks to work for different platforms or products.

I have used Go text templates for code generation in the following scenarios:

  1. Generating multiple Chromium, Firefox, Edge extensions for varied products.
  2. Interfacing with different APIs to get data into our main systems

I’m also working on using it to auto generate custom mobile apps.

Here, I will explain in detail how I build new Chrome and Firefox extensions in seconds using Go templates. This can be expanded to include Microsoft Edge extensions as well.

Start with a basic chrome extension

There is a detailed description of how to build a Chrome extension here:

https://developer.chrome.com/extensions/getstarted

Once you have a skeletal Chrome extension, it is easy to duplicate it and create multiple custom extensions, or even build a Firefox extension using the same set of extension files.

All you need is a set of template JavaScript files for your extension and your config JSON values.

Templatize your extension

I start out by taking each extension file and saving it with a .template extension. This way, I can recognize that it needs to be parsed to replace values.

Files that do not need any customizations can be left as is in the folder.

E.g. the Chrome extension manifest.json.template file will look like this with template fields:

{
    "name": "{{.Name}}",
    "version": "{{.Version}}",
    "manifest_version": 2,
    "default_locale": "en",
    "description": "{{.Description}}",
    "background": {
        "page": "background.html"
    },
    "browser_action": {
        "default_title": "{{.Tooltip}}",
        "default_icon": "icon.png"
    },
    "content_scripts": [
        {
            "js": ["contentscript.js"]
        }
    ],
    "icons": {
        "16": "icon16.png",
        "48": "icon48.png",
        "128": "icon128.png"
    },
    "homepage_url": "{{.Protocol}}://{{.Domain}}",
    "permissions": [
        "tabs",
        "{{.Protocol}}://{{.Domain}}/"
    ]
}

Similarly, we write template files for all the extension files like popup.js, etc.

To build a basic Chrome extension, I also define the following in a global.json file. This is external to your extension files and will be used by the Go build system.

{
    "production": "true",
    "author": "Maria De Souza",
    "author_home": "https://mariadesouza.com/",
    "protocol": "https",
    "version": "0.0.0.1",
    "domain": "www.mysite.com",
    "name": "testExtension",
    "description": "This is a test extension",
    "tooltip": "Click here to view settings",
    "title": "Test extension"
}

Gather Customized Settings

I create a folder per customer product where I keep the images related to the customer. This source folder is used later when creating the extension.

The global settings can be overridden by a product-specific JSON file:

{
    "title": "My cool new extension",
    "version": "0.0.0.2",
    "domain": "www.myotherwebsite.com"
}

The customized product.json can be a subset of the original global.json.

Write the build Script

Now we can get to the fun part, building a script to generate the extensions.

Templates are executed by applying them to a data structure. We first define a struct to unmarshal our config JSON into and use it in our build script. Notice that the JSON tags correspond to the JSON field names in the global.json and product.json files.

type Config struct {
    Production  string `json:"production,omitempty"`
    Author      string `json:"author,omitempty"`
    AuthorHome  string `json:"author_home,omitempty"`
    Version     string `json:"version,omitempty"`
    Domain      string `json:"domain,omitempty"`
    Protocol    string `json:"protocol,omitempty"`
    Name        string `json:"name,omitempty"`
    Description string `json:"description,omitempty"`
    Tooltip     string `json:"tooltip,omitempty"`
    Title       string `json:"title,omitempty"`
    Browser     string `json:"browser,omitempty"`
    ProductDir  string `json:"product_dir,omitempty"`
    HomePage    string `json:"home_page,omitempty"`
}

The script starts by unmarshalling the global file into a struct value as below. Note that I have left out error handling to reduce noise. The second part unmarshals the custom product values.

var globalConfig Config
configFile, _ := ioutil.ReadFile("global.json")
json.Unmarshal(configFile, &globalConfig)

var productConfig Config
productConfigFile, _ := ioutil.ReadFile("product.json")
json.Unmarshal(productConfigFile, &productConfig)

Using reflect, I merge the global values into the product config, filling in any fields the product file did not override:

func mergeWithGlobal(globalConfig, productConfig *Config) {

    st := reflect.TypeOf(*globalConfig)

    for i := 0; i < st.NumField(); i++ {
        tag := strings.Split(st.Field(i).Tag.Get("json"), ",")
        v2 := reflect.ValueOf(globalConfig).Elem().FieldByName(st.Field(i).Name)
        v := reflect.ValueOf(productConfig).Elem().FieldByName(st.Field(i).Name)

        // only fill in product fields that were left empty
        if tag[0] != "" && v.String() == "" && v2.String() != "" {
            v.SetString(v2.String())
        }
    }
}

Using this Config struct, I can now populate the template files. To do this I read all files with extension .template in the source directory, execute the template using the populated Config struct, and save the result in the destination directory.

Here is what the code would look like. Again, I removed some error handling so the main flow is understandable, but error handling should definitely be part of your code.


func populateTemplateFiles(source, dest string, config *Config) error {
    // Make destination directory
    os.MkdirAll(dest, 0755)

    re := regexp.MustCompile(`(.*)\.template$`)
    files, _ := ioutil.ReadDir(source)

    for _, file := range files {

        // if it is a template file, read and populate tags
        filename := file.Name()
        if re.MatchString(filename) {

            buf, _ := ioutil.ReadFile(filepath.Join(source, filename))

            tmpl, _ := template.New("extensions").Parse(string(buf))

            // the final file will drop the .template extension
            targetfilename := strings.Split(filename, ".template")
            destFile := targetfilename[0]

            targetfile := filepath.Join(dest, destFile)

            f, err := os.OpenFile(targetfile, os.O_WRONLY|os.O_CREATE, 0755)
            if err != nil {
                fmt.Println("Failed to create file", targetfile, err)
                continue
            }
            w := bufio.NewWriter(f)

            tmpl.Execute(w, config)

            w.Flush()

        } else {

            // not a template file - copy as is
            copyFile(filepath.Join(dest, filename), filepath.Join(source, filename))
        }
    }
    return nil
}

The customized images in the product directory are copied into the destination directory.

The destination directory will then contain all the customized JavaScript files, the files copied as is, and the custom images.

We can then upload a zipped version of the destination directory to the Chrome Web Store. I have a Makefile that also generates a .pem file and zips the directory.

Extending build system for Firefox

I build Firefox extensions from the same set of Chrome extension files.

This is the link to developing a Firefox browser extension:

https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions

Firefox and Chrome extensions have very subtle differences in the way they are implemented. They are both based largely on the WebExtensions API for cross-browser compatibility.

Notice that my earlier declaration of the Config struct had the following field:

 Browser     string `json:"browser,omitempty"`

I dynamically change this in my Go script to instruct the system to generate a new set of extension files for Firefox.

I set the value for Browser in my code and then build for Firefox.

productConfig.Browser = "Firefox"

In fact, you can parameterize the Go script to accept a command line argument that will control which browser you want to build for. The default could be all.
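
A minimal sketch of that, using the standard flag package inside the build script's main function (the flag name and values are just one possible choice):

// browser selects which extension variant to build; "all" builds every one.
browser := flag.String("browser", "all", "browser to build for: chrome, firefox, or all")
flag.Parse()

if *browser == "all" || *browser == "firefox" {
    productConfig.Browser = "Firefox"
    // run populateTemplateFiles into the Firefox destination directory here
}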

The differences between Firefox and Chrome will be conditionals in your template files. It depends on what functionality you are using within your extensions.

E.g. to send a message back to the background script from the content script, I use the template system to conditionally generate code for Firefox vs Chrome using the config value.

{{if eq .Browser "Firefox"}}
browser.runtime.sendMessage({ name: "options", value: event.target.value });
{{else}}
chrome.extension.sendRequest({ name: "options", value: event.target.value });
{{end}}

Another example of this would be to add a FirefoxID field to my Config struct; then in my manifest.json.template, I add:

{{if eq .Browser "Firefox"}}
"applications": {
    "gecko": {
        "id": "{{.FirefoxID}}",
        "strict_min_version": "42.0"
    }
},
{{end}}

Microsoft Edge extensions are also similar, and the same set of template files can be used to auto-generate Edge extensions with a few tweaks.


Launch a Golang web server using Docker

If you want to create a web server using Go, the simplest way to deploy it is using Docker. Golang code is compiled to a binary and does not need a special environment to run.

Here is the simplest web server code in Go to get started. Save this as webserver.go

package main

import (
    "fmt"
    "log"
    "net/http"
    "runtime"
)

func main() {
    http.HandleFunc("/", indexHandlerHelloWorld)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

func indexHandlerHelloWorld(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello world, I'm running search on %s with an %s CPU ", runtime.GOOS, runtime.GOARCH)
}

We can use the simplest Docker base image, scratch, and add a directive to copy the binary into the image. Save this as Dockerfile.

FROM scratch 
MAINTAINER Maria De Souza <maria.g.desouza@gmail.com>

ADD go-webserver go-webserver
ENTRYPOINT ["/go-webserver"]
EXPOSE 8080

We set up a start shell script to build the binary and bring up the Docker container:

#!/bin/bash 
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -a -o go-webserver webserver.go || exit 1

if [ "$(docker ps -q -f name=go-web-server)" ]; then
docker stop $(docker ps -a -q --filter name=go-web-server --format="{{.ID}}")
fi

docker build -t go-web-server . || exit 1

docker run -p 8000:8080 go-web-server || exit 1

You can use the -h option of docker run to set the hostname of the container.
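
For example (the hostname value here is just illustrative):

docker run -h go-web-host -p 8000:8080 go-web-server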

Voila! Now your webserver is running. Navigate to http://localhost:8000/ to test.

Known Issue

If you make any SSL requests from your webserver, you will see the following error when running the webserver using Docker:

x509: failed to load system roots and no roots provided

This is because /etc/ssl/certs/ca-certificates.crt, which is read by the Go tls package, is missing from the scratch image.

To avoid this, copy the cert into your Docker image from your local system. I normally add this to my bash script so it copies the right cert based on the OS:

if [ ! -e ca-certificates.crt ]; then
    if [[ $(uname) = "Darwin" ]]; then
        cp /usr/local/etc/openssl/cert.pem ca-certificates.crt || exit 1
    else
        cp /etc/ssl/certs/ca-certificates.crt ca-certificates.crt || exit 1
    fi
fi

This is what your Dockerfile should then look like:

FROM scratch  
MAINTAINER Maria De Souza <maria.g.desouza@gmail.com>

COPY ca-certificates.crt /etc/ssl/certs/ca-certificates.crt

ADD go-webserver go-webserver
ENTRYPOINT ["/go-webserver"]
EXPOSE 8080

Meltdown and Spectre Security Flaw

Although most news sites initially reported the Meltdown and Spectre security flaws as present only in Intel processors, they are now known to affect other processors like AMD and ARM as well. That means all devices using these processors – PCs, MacBooks, servers, Android and iOS devices – are affected.

What are the Spectre and Meltdown security flaws?

These are security holes introduced by two different optimization techniques used by processors, namely speculative execution and out-of-order execution.

The technique used to exploit speculative execution is called Spectre and has two variants: one takes advantage of bounds check bypass, and the other performs branch target injection by altering the branch target buffer so that rogue code is executed.

The technique used to exploit the out-of-order execution performance feature is called Meltdown. It enables a rogue process to read the memory of another process or virtual machine in the cloud without permission or privileges.

How to protect yourself?

Most of the tech industry giants responded quickly. Google in particular developed a mitigation technique to protect against Spectre and shared it with industry partners.

Android

The Android 2018-01-05 Security Patch Level (SPL) includes mitigations that reduce access to high-precision timers, limiting attacks on all known variants on ARM processors. These changes were released to Android partners in December 2017.

Chromebooks

OS versions prior to 63 are not patched. Chrome OS systems started receiving version 63 on 12/15/2017.

Go to the Google FAQ for steps to take on Google cloud and other Google products.

Linux

After being vocal in his criticism of Intel’s responses to the problems, Linus Torvalds released the first new Linux kernel of 2018 on Jan. 28, after the longest development cycle for a new Linux kernel in seven years. Linux 4.15 was released with improved Meltdown and Spectre patches. Read the release announcement from Linus Torvalds here.

Microsoft Windows 10

After Intel’s buggy fix for Spectre, Microsoft issued a patch to address it.

Microsoft has announced more security updates for Windows 10 devices in its March Patch Tuesday; these can be found here. They have also lifted the compatibility check put in place earlier for antivirus software that made calls into kernel memory.

Apple Devices: MacBook, iPhone, iPad, Apple TV

As of Jan 4th, Apple confirmed that it had addressed the recent Meltdown and Spectre vulnerabilities in the previously released iOS 11.2.2, macOS High Sierra 10.13.2, and tvOS 11.2.

Browsers

Firefox 57.0.4, released on Jan 4, 2018, includes the two mitigations.

Microsoft Edge: Microsoft has released an update to Windows Client to fix the vulnerability on Edge (KB4056890). Check details.

Chrome 64, due to be released on January 23, will contain mitigations to protect against exploitation.

Safari: Apple has released new security updates aimed at protecting Safari and WebKit from the Spectre attack. Check details here.

Amazon Cloud

The Amazon Web Services (AWS) team put out a security bulletin on Jan 03, 2018 with instructions for customers on protecting their servers against the vulnerability.

These are only the first set of software mitigations. Under increasing pressure, Intel admitted that these updates do not totally eliminate the risks. They are now implementing hardware mitigations directly in their chips.