Write a Kubernetes-ready service from zero step-by-step
If you have ever tried Go, you probably know that writing services with Go is easy: we need only a few lines of code to run an HTTP service. But what do we need to add if we want to prepare our service for production? Let’s discuss this using the example of a service which is ready to be run in Kubernetes.
You can find all the examples from this article under a single tag, and you can also follow the article commit by commit.
Step 1. The simplest service
So, we have a very simple application here:
main.go
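Something like this is enough to start with (a minimal sketch using only the standard library; the greeting text is just an illustration):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Register a single handler and start listening on port 8000.
	http.HandleFunc("/home", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello! Your request was processed.")
	})
	http.ListenAndServe(":8000", nil)
}
```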
If we want to try it for the first time, go run main.go might be enough. If we want to see how it works, we can use a simple curl command: curl -i http://127.0.0.1:8000/home. But when we run this application, we see no information about its state in the terminal.
Step 2. Add a logger
First of all, let’s add a logger so that we can understand what is going on and log errors and other important events. In this example we will use the simplest logger from the standard Go library, but for a production-ready service you might be interested in more powerful solutions such as glog or logrus.
For example, we might want to log three situations: when the service is starting, when the service is ready to handle requests, and when http.ListenAndServe returns an error. As a result we will have something like this:
main.go
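A sketch of how main.go might look now; log.Fatal covers the error case, because it logs whatever http.ListenAndServe returns and exits:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	log.Print("Starting the service...")

	http.HandleFunc("/home", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello! Your request was processed.")
	})

	log.Print("The service is ready to listen and serve.")
	// ListenAndServe always returns a non-nil error; log.Fatal logs it and exits.
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```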
Looks better!
Step 3. Add a router
Now, if we write a real application, we might want to add a router to handle different URIs and HTTP methods and to match other rules in an easy way. There is no router in the standard Go library, so let’s use gorilla/mux, which is fully compatible with the standard net/http library.
If your service needs a significant number of different routing rules, it makes sense to move everything routing-related into separate functions or even a separate package. Let’s move the router initialization and the rules into the package handlers (see the full change here).
Let’s add a Router function which returns a configured router, and a home function which handles the /home path. Personally, I prefer separate files for such things:
handlers/handlers.go
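A sketch of the router setup (gorilla/mux handlers use the same signature as net/http):

```go
package handlers

import "github.com/gorilla/mux"

// Router registers all handlers and returns a configured router.
func Router() *mux.Router {
	r := mux.NewRouter()
	r.HandleFunc("/home", home).Methods("GET")
	return r
}
```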
handlers/home.go
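And the handler itself; the response text is illustrative:

```go
package handlers

import (
	"fmt"
	"net/http"
)

// home handles GET /home and writes a simple greeting.
func home(w http.ResponseWriter, _ *http.Request) {
	fmt.Fprint(w, "Hello! Your request was processed.")
}
```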
And then we need some small changes in the main.go file:
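Something along these lines (the import path of the handlers package depends on your repository, so treat it as a placeholder):

```go
package main

import (
	"log"
	"net/http"

	"github.com/yourname/yourservice/handlers" // placeholder: use your own module path
)

func main() {
	log.Print("Starting the service...")

	r := handlers.Router()

	log.Print("The service is ready to listen and serve.")
	log.Fatal(http.ListenAndServe(":8000", r))
}
```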
Step 4. Tests
It is time to add some tests. Let’s use the httptest package for this. For the Router function we might add something like this:
handlers/handlers_test.go:
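A sketch of such a test, using httptest.NewServer to run the router on a random local port:

```go
package handlers

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestRouter(t *testing.T) {
	ts := httptest.NewServer(Router())
	defer ts.Close()

	// GET /home is registered, so we expect 200.
	res, err := http.Get(ts.URL + "/home")
	if err != nil {
		t.Fatal(err)
	}
	if res.StatusCode != http.StatusOK {
		t.Errorf("status should be 200, got %d", res.StatusCode)
	}

	// POST /home is not allowed, so we expect 405.
	res, err = http.Post(ts.URL+"/home", "text/plain", nil)
	if err != nil {
		t.Fatal(err)
	}
	if res.StatusCode != http.StatusMethodNotAllowed {
		t.Errorf("status should be 405, got %d", res.StatusCode)
	}

	// An unknown route should return 404.
	res, err = http.Get(ts.URL + "/not-exists")
	if err != nil {
		t.Fatal(err)
	}
	if res.StatusCode != http.StatusNotFound {
		t.Errorf("status should be 404, got %d", res.StatusCode)
	}
}
```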
Here we check that a GET request to /home returns code 200, while a POST request to the same path returns 405 and, finally, a route which does not exist returns 404. Actually, this example might be a bit “verbose”, because the router is already well-tested as part of the gorilla/mux package, so you might want to check even fewer things.
For home it makes sense to check the response code and the response body:
handlers/home_test.go:
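A sketch using httptest.NewRecorder; the expected body matches the illustrative greeting above:

```go
package handlers

import (
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestHome(t *testing.T) {
	w := httptest.NewRecorder()
	home(w, nil)

	resp := w.Result()
	if resp.StatusCode != http.StatusOK {
		t.Errorf("status should be 200, got %d", resp.StatusCode)
	}

	got, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		t.Fatal(err)
	}
	want := "Hello! Your request was processed."
	if string(got) != want {
		t.Errorf("body should be %q, got %q", want, string(got))
	}
}
```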
Let’s run go test to check that our tests work:
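On a successful run the output looks roughly like this (the package path and timing are illustrative):

```
$ go test ./handlers/
ok      github.com/yourname/yourservice/handlers    0.018s
```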
Step 5. Configuration
The next important question is the ability to configure our service. Right now it always listens on port 8000, and it would be useful to make this value configurable. The Twelve-Factor App manifesto, which represents a really great approach to writing services, tells us that configuration should be stored in the environment. So, let’s use environment variables for it:
main.go
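A sketch of main.go reading the port from the environment (the import path is still a placeholder):

```go
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/yourname/yourservice/handlers" // placeholder: use your own module path
)

func main() {
	log.Print("Starting the service...")

	port := os.Getenv("PORT")
	if port == "" {
		// Fail fast: there is no sense in continuing with a broken configuration.
		log.Fatal("Port is not set.")
	}

	r := handlers.Router()

	log.Print("The service is ready to listen and serve.")
	log.Fatal(http.ListenAndServe(":"+port, r))
}
```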
In this example, if the port is not set, the application simply exits with an error. There is no sense in trying to continue working if the configuration is wrong.
Step 6. Makefile
A few days ago there was an article about the make tool, which is very helpful for automating repeatable routines. Let’s see how we can use it for our application. Currently we have two actions: running the tests, and compiling and running the service. Let’s add these actions to a Makefile. But instead of a simple go run we will use go build and run the compiled binary, because this approach suits our production-readiness goals better:
Makefile
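A sketch of such a Makefile; the binary name advent is an assumption, pick your own:

```make
APP?=advent
PORT?=8000

clean:
	rm -f ${APP}

build: clean
	go build -o ${APP}

run: build
	PORT=${PORT} ./${APP}

test:
	go test -v -race ./...
```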
In this example we moved the binary name into a separate variable APP so that we don’t repeat it several times. If we want to run the application, we need to delete the old binary (if it exists), compile the code and run the new binary with the right environment variable; make run does all of these things for us.
Step 7. Versioning
The next technique we will add to our service is versioning. Sometimes it is very useful to know exactly which build and commit is running in production, and when the binary was built.
To store this information, let’s add a new package, version:
version/version.go
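The package only needs a few exported variables; the default values make it obvious when a binary was built without the linker flags:

```go
package version

var (
	// BuildTime is a timestamp of the moment when the binary was built.
	BuildTime = "unset"
	// Commit is the hash of the commit the binary was built from.
	Commit = "unset"
	// Release is the semantic version of the current build.
	Release = "unset"
)
```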
We can log these variables when the application starts:
main.go
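A sketch of the updated main.go (import paths are placeholders):

```go
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/yourname/yourservice/handlers" // placeholders: use your own module path
	"github.com/yourname/yourservice/version"
)

func main() {
	log.Printf(
		"Starting the service...\ncommit: %s, build time: %s, release: %s",
		version.Commit, version.BuildTime, version.Release,
	)

	port := os.Getenv("PORT")
	if port == "" {
		log.Fatal("Port is not set.")
	}

	r := handlers.Router()

	log.Print("The service is ready to listen and serve.")
	log.Fatal(http.ListenAndServe(":"+port, r))
}
```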
We may also add them to the home handler (don’t forget to change the test!):
handlers/home.go
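A sketch of the handler returning the build information as JSON (the field names are illustrative):

```go
package handlers

import (
	"encoding/json"
	"net/http"

	"github.com/yourname/yourservice/version" // placeholder: use your own module path
)

// home returns build information about the running service.
func home(w http.ResponseWriter, _ *http.Request) {
	info := struct {
		BuildTime string `json:"buildTime"`
		Commit    string `json:"commit"`
		Release   string `json:"release"`
	}{
		version.BuildTime, version.Commit, version.Release,
	}

	body, err := json.Marshal(info)
	if err != nil {
		http.Error(w, http.StatusText(http.StatusInternalServerError),
			http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.Write(body)
}
```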
We will use the Go linker to set the BuildTime, Commit and Release variables during compilation.
Let’s add the new variables to the Makefile:
Makefile
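For example (RELEASE is set by hand, while COMMIT and BUILD_TIME are derived from git and date):

```make
RELEASE?=0.0.1
COMMIT?=$(shell git rev-parse --short HEAD)
BUILD_TIME?=$(shell date -u '+%Y-%m-%d_%H:%M:%S')
```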
Here COMMIT and RELEASE are defined by the given shell commands, and we can use semantic versioning for RELEASE.
Now let’s rewrite the build target to be able to use these variables:
Makefile
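The -X linker flag sets a string variable in a given package to a given value, so the build target might become:

```make
build: clean
	go build \
		-ldflags "-s -w \
		-X ${PROJECT}/version.Release=${RELEASE} \
		-X ${PROJECT}/version.Commit=${COMMIT} \
		-X ${PROJECT}/version.BuildTime=${BUILD_TIME}" \
		-o ${APP}
```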
I also defined the PROJECT variable at the beginning of the Makefile so as not to repeat the same value several times:
Makefile
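It points to the repository path (a placeholder here):

```make
PROJECT?=github.com/yourname/yourservice
```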
You can find all the changes made during this step here. Feel free to try make run and check how it works.
Step 8. Let’s have fewer dependencies
There is one thing I do not like about our code: the handlers package depends on the version package. It is easy to change this: we just need to make the home handler configurable:
handlers/home.go
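A sketch where home becomes a function returning a handler; the caller passes in the version values, so the package no longer imports version:

```go
package handlers

import (
	"encoding/json"
	"net/http"
)

// home returns a handler which reports the build information
// passed in by the caller instead of importing the version package.
func home(buildTime, commit, release string) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		info := struct {
			BuildTime string `json:"buildTime"`
			Commit    string `json:"commit"`
			Release   string `json:"release"`
		}{
			buildTime, commit, release,
		}

		body, err := json.Marshal(info)
		if err != nil {
			http.Error(w, http.StatusText(http.StatusInternalServerError),
				http.StatusInternalServerError)
			return
		}

		w.Header().Set("Content-Type", "application/json")
		w.Write(body)
	}
}
```

Router then takes the same three strings from main and registers the handler with home(buildTime, commit, release).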
And again, do not forget to fix the tests and make all the other necessary changes.
Step 9. Health checks
If we want to run a service in Kubernetes, we usually need to add health checks: a liveness probe and a readiness probe. The purpose of the liveness probe is to report that the application is running; if it fails, the service will be restarted. The purpose of the readiness probe is to report whether the application is ready to serve traffic; if it fails, the container is removed from the service load balancers.
To define the liveness probe, we usually just need a simple handler which always returns response code 200:
handlers/healthz.go
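As long as the process can answer at all, we report that it is alive:

```go
package handlers

import "net/http"

// healthz is a liveness probe: it returns 200 as long as
// the process is able to handle requests at all.
func healthz(w http.ResponseWriter, _ *http.Request) {
	w.WriteHeader(http.StatusOK)
}
```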
The readiness probe is often similar, but sometimes we need to wait for some event (e.g. the database becoming ready) before we can serve traffic:
handlers/readyz.go
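A sketch using an atomic flag which the rest of the application flips once it is ready:

```go
package handlers

import (
	"net/http"
	"sync/atomic"
)

// readyz is a readiness probe: it returns 200 only after
// the isReady flag has been switched to true.
func readyz(isReady *atomic.Value) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		if isReady == nil || isReady.Load() != true {
			http.Error(w, http.StatusText(http.StatusServiceUnavailable),
				http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}
```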
In this example we return 200 only if the variable isReady is set and equal to true.
Let’s see how we can use it:
handlers/handlers.go
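A sketch of the updated Router; the 10-second sleep stands in for real warm-up work:

```go
package handlers

import (
	"log"
	"sync/atomic"
	"time"

	"github.com/gorilla/mux"
)

// Router registers all handlers, including the probes, and
// flips the readiness flag once the imitated warm-up is done.
func Router(buildTime, commit, release string) *mux.Router {
	isReady := &atomic.Value{}
	isReady.Store(false)
	go func() {
		log.Print("Readyz probe is negative by default...")
		time.Sleep(10 * time.Second) // imitate warm-up, e.g. cache filling
		isReady.Store(true)
		log.Print("Readyz probe is positive.")
	}()

	r := mux.NewRouter()
	r.HandleFunc("/home", home(buildTime, commit, release)).Methods("GET")
	r.HandleFunc("/healthz", healthz)
	r.HandleFunc("/readyz", readyz(isReady))
	return r
}
```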
Here we mark the application as ready to serve traffic after 10 seconds. Of course, in real life there is no sense in waiting 10 seconds; instead you might warm up a cache here (if your application uses one) or do something similar.
As usual, you can find all the changes from this step on GitHub.
Note. If your application gets too much traffic, its endpoints may start responding unreliably, e.g. the liveness probe might fail because of timeouts. This is why some engineers prefer not to use a liveness probe at all. Personally, I think it is better to scale resources when you see the number of requests growing. For example, you might scale pods with the HPA.
Step 10. Graceful shutdown
When the service needs to be stopped, it is good not to interrupt connections, requests and other operations immediately, but to finish them properly. Go supports graceful shutdown of http.Server since version 1.8. Let’s see how we can use it:
main.go
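A sketch of the final main.go: the server runs in a goroutine, while the main goroutine blocks until SIGINT or SIGTERM arrives and then calls Shutdown (import paths are placeholders):

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"

	"github.com/yourname/yourservice/handlers" // placeholders: use your own module path
	"github.com/yourname/yourservice/version"
)

func main() {
	log.Printf(
		"Starting the service...\ncommit: %s, build time: %s, release: %s",
		version.Commit, version.BuildTime, version.Release,
	)

	port := os.Getenv("PORT")
	if port == "" {
		log.Fatal("Port is not set.")
	}

	r := handlers.Router(version.BuildTime, version.Commit, version.Release)

	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, syscall.SIGINT, syscall.SIGTERM)

	srv := &http.Server{
		Addr:    ":" + port,
		Handler: r,
	}

	go func() {
		// ErrServerClosed is expected after a graceful shutdown.
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()
	log.Print("The service is ready to listen and serve.")

	killSignal := <-interrupt
	switch killSignal {
	case syscall.SIGINT:
		log.Print("Got SIGINT...")
	case syscall.SIGTERM:
		log.Print("Got SIGTERM...")
	}

	log.Print("The service is shutting down...")
	srv.Shutdown(context.Background())
	log.Print("Done")
}
```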
In this example we catch operating system signals: if SIGINT or SIGTERM arrives, we shut the service down gracefully.
Note. When I was writing this code, I tried to catch SIGKILL here. I had seen this a few times in different libraries and was sure it worked. But, as Sandor Szücs pointed out, it is not possible to catch SIGKILL; in that case, the application is stopped immediately.
Step 11. Dockerfile
Our application is almost ready to be run in Kubernetes. Now we need to dockerize it.
The simplest Dockerfile we could define here might look like this:
Dockerfile
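A sketch, assuming the binary is called advent and is statically linked (scratch contains no libc):

```dockerfile
FROM scratch

ENV PORT 8000
EXPOSE $PORT

COPY advent /
CMD ["/advent"]
```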
We create the smallest possible container, copy the binary into it and run it (not forgetting the PORT configuration variable).
Let’s change the Makefile a bit so that it can build an image and run a container. Here it is useful to define two new variables, GOOS and GOARCH, which we will use for cross-compilation in the build goal.
Makefile
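A sketch of the relevant Makefile parts; CGO_ENABLED=0 gives us a static binary which can run in a scratch container:

```make
GOOS?=linux
GOARCH?=amd64

build: clean
	CGO_ENABLED=0 GOOS=${GOOS} GOARCH=${GOARCH} go build \
		-ldflags "-s -w \
		-X ${PROJECT}/version.Release=${RELEASE} \
		-X ${PROJECT}/version.Commit=${COMMIT} \
		-X ${PROJECT}/version.BuildTime=${BUILD_TIME}" \
		-o ${APP}

container: build
	docker build -t ${APP}:${RELEASE} .

run: container
	docker stop ${APP} || true
	docker run --name ${APP} -p ${PORT}:${PORT} --rm \
		-e "PORT=${PORT}" ${APP}:${RELEASE}
```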
We also added the container goal to build an image and the run goal to run our application from the container. All changes are available here.
Now let’s try make run to check the whole process.
Step 12. Vendoring
We have an external dependency (github.com/gorilla/mux) in our project, which means that for production readiness we definitely need dependency management. If we use dep, the only thing our service needs is dep init:
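The whole step is a single command, after which the project gains the files described below (the listing is illustrative):

```sh
$ dep init
$ ls
Dockerfile  Gopkg.lock  Gopkg.toml  Makefile  handlers  main.go  vendor  version
```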
It creates the Gopkg.toml and Gopkg.lock files and the vendor directory. Personally, I prefer to push vendor to git, especially for important projects.
Step 13. Kubernetes
The last step. Let’s run our application in Kubernetes. The simplest way to run it locally is to install and configure minikube.
Kubernetes pulls images from a Docker registry. In our case, we will work with the public Docker registry, Docker Hub. We need to add one more variable and one more command to the Makefile:
Makefile
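A sketch; the container goal now tags the image with the full repository name (the webdeva username comes from the example, replace it with your own):

```make
CONTAINER_IMAGE?=docker.io/webdeva/${APP}

container: build
	docker build -t ${CONTAINER_IMAGE}:${RELEASE} .

push: container
	docker push ${CONTAINER_IMAGE}:${RELEASE}
```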
The CONTAINER_IMAGE variable defines the Docker registry repository we will use to push and pull our service images. As you can see, in this case it includes a username (webdeva). If you do not have an account at hub.docker.com yet, create one and log in with the docker login command. After that you will be able to push images.
Let’s try make push:
It works! Now you can find the image in the registry.
Let’s define the necessary Kubernetes configuration (manifests). Usually, even for the simplest service, we need to define at least the deployment, service and ingress configurations. By default, manifests are static: you cannot use any variables in them. Fortunately, you can use helm to create flexible configurations.
In this example we will not use helm, but it is still useful to define a couple of variables, ServiceName and Release, which give us more flexibility. Later we will use the sed command to replace these “variables” with the real values.
Let’s look at the deployment configuration:
deployment.yaml
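A sketch of the manifest; {{ .ServiceName }} and {{ .Release }} are the placeholders we will substitute later, the image repository matches the push step above, and the API version may need adjusting for your cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ .ServiceName }}
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
      - name: {{ .ServiceName }}
        image: docker.io/webdeva/{{ .ServiceName }}:{{ .Release }}
        env:
        - name: PORT
          value: "8000"
        ports:
        - containerPort: 8000
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
          initialDelaySeconds: 3
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8000
          initialDelaySeconds: 3
```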
Kubernetes configuration deserves a separate article, but, as you can see, among other things we define here where the container image can be found and how to reach the liveness and readiness probes.
A typical service looks simpler:
service.yaml
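Here the service maps port 80 to the container port 8000:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
    name: http
  selector:
    app: {{ .ServiceName }}
```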
And, finally, the ingress. Here we define the rules for accessing the service from outside Kubernetes. Assume that we want to “attach” our service to the domain advent.test (which is actually fake):
ingress.yaml
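A sketch using the extensions/v1beta1 API current at the time; newer clusters express the same rules with networking.k8s.io/v1:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .ServiceName }}
spec:
  rules:
  - host: advent.test
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .ServiceName }}
          servicePort: 80
```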
Now, to check how this works, we need to install and run minikube (the official documentation is here). We also need the kubectl tool to apply the configuration and check the service. To start minikube, enable ingress and prepare kubectl, we run a few commands:
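For example:

```sh
minikube start
minikube addons enable ingress
kubectl config use-context minikube
```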
Now let’s add a new Makefile goal to be able to install the service on minikube:
Makefile
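A sketch of the goal, assuming the manifests live in a kubernetes/ directory (gsed is GNU sed as installed on macOS; on Linux plain sed works the same way):

```make
minikube: push
	for t in $(shell find ./kubernetes -type f -name "*.yaml"); do \
		cat $$t | \
			gsed -E "s/\{\{(\s*)\.Release(\s*)\}\}/$(RELEASE)/g" | \
			gsed -E "s/\{\{(\s*)\.ServiceName(\s*)\}\}/$(APP)/g"; \
		echo ---; \
	done > tmp.yaml
	kubectl apply -f tmp.yaml
```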
These commands “compile” all the *.yaml configurations into a single file, replace the Release and ServiceName “variables” with the real values (note that here I use gsed instead of the standard sed) and run kubectl apply to install the application into Kubernetes.
Let’s check if our configuration works:
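For example, we can ask kubectl for every piece we created:

```sh
kubectl get deployment
kubectl get pods
kubectl get service
kubectl get ingress
```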
Now we can try to send requests to the service. But first of all, we need to add our fake domain advent.test to the /etc/hosts file:
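One way to do it is to point the domain at the minikube IP:

```sh
echo "$(minikube ip) advent.test" | sudo tee -a /etc/hosts
```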
And now we can finally check our service:
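With everything in place, we expect a 200 response carrying the JSON build information:

```sh
curl -i http://advent.test/home
```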
Yeah, it works!
You can find all the steps here; two versions are available: commit-by-commit and all steps in one. If you have any questions, please create an issue, ping me on Twitter (@webdeva) or just leave a comment here.
You might also wonder what a more flexible service, prepared for real production use, can look like. In that case, take a look at takama/k8sapp, a Go application template which meets the Kubernetes requirements.
P.S. Many thanks to Natalie Pistunovich, Paul Brousseau, Sandor Szücs and others for their review and comments.