Run experiments in local Kubernetes

This getting-started guide shows you how to install and use steadybit locally with Kubernetes on minikube. We will run an e-commerce application in Kubernetes and find out how it handles network latency: using steadybit, we will slow down individual Kubernetes pods at the network level.

Kubernetes, also known as k8s, is an open-source system for automating the deployment, scaling, and management of containerized applications. We use minikube to set up a local Kubernetes cluster on macOS, Linux, or Windows.

The getting started is split into four steps. If you have already completed some of them, feel free to jump straight to the step you need.


Step 1 - Start your minikube cluster

From a terminal, run:

minikube start

If you already have kubectl installed, you can access your cluster with:

kubectl get po -A

If you don't have kubectl installed yet, check this out: How to install kubectl

Step 2 - Install steadybit agent

We take an agent-based approach to help you discover targets and run experiments. Installing the steadybit agents is simple: in Kubernetes, you install them as a DaemonSet.

You can either install our agent directly using a Helm chart or use the YAML file to install it using kubectl.

Step 2.1 - Helm

If you haven't installed Helm yet, go here to get started. Once Helm is installed and configured, the next steps are to add the repo and install the agent.

Add the repo for the steadybit Helm chart:

helm repo add steadybit <replace-with-steadybit-helm-repo-url>
helm repo update

In the steadybit platform, you will find your agent key under .../settings/agents/setup.

copy steadybit agent key

Copy the agent key and replace the placeholder below. You also need to set the name of the cluster you are installing the agents into:

helm install steadybit-agent \
--namespace steadybit-agent \
--create-namespace \
--set agent.key=<replace-with-agent-key> \
--set cluster.name=<replace-with-cluster-name> \
steadybit/steadybit-agent

That's all, ready to start your first experiment!

Step 2.2 - DaemonSet YAML

In the steadybit platform, you will find all details for installing agents in your system under .../settings/agents/setup. Select the Kubernetes tab and copy the YAML file prepared there.

install steadybit agent

Create a DaemonSet based on the YAML file:

kubectl apply -f YOUR-FILE-NAME.yaml
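If you want to review the manifest before applying it, an agent DaemonSet generally has the following shape. This is only an illustrative sketch: the image and environment variable below are placeholders, not steadybit's actual manifest, so always use the file generated by the platform:

```yaml
# Illustrative DaemonSet sketch -- all values here are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: steadybit-agent
  namespace: steadybit-agent
spec:
  selector:
    matchLabels:
      app: steadybit-agent
  template:
    metadata:
      labels:
        app: steadybit-agent
    spec:
      containers:
        - name: steadybit-agent
          image: <replace-with-agent-image>
          env:
            - name: AGENT_KEY   # placeholder name; the real manifest defines its own variables
              value: <replace-with-agent-key>
```

A DaemonSet ensures that one agent pod runs on every node of the cluster, which is why it is a good fit for an infrastructure agent.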

That's all, ready to start your first experiment!

Step 3 - Deploying the steadybit shopping-demo

To give you a quick and easy start, we have developed a small demo application. The shopping demo is a small product catalog served by four distributed backend services and a simple UI.


If you want to learn more about the demo, take a look at our GitHub repository.

First, download the shopping demo app by running the following git clone command:

git clone <replace-with-shopping-demo-repository-url>

Now deploy the demo with kubectl by running the following command:

kubectl apply -f k8s-manifests.yml

Verify that all Shopping Demo pods are running:

kubectl get pods --namespace steadybit-demo

You should see output like the following; all pods are ready once their status is Running:

NAME                                  READY   STATUS    RESTARTS   AGE
fashion-bestseller-79b9698f88-557vt   1/1     Running   0          11s
gateway-7fc74f7f9b-tshzg              1/1     Running   0          11s
hot-deals-75cb898ff7-wrnxc            1/1     Running   0          10s
postgres-68f9db56cc-wxxth             1/1     Running   0          10s
toys-bestseller-6df5bd864f-kzrt9      1/1     Running   0          11s

The command minikube tunnel creates a route to services deployed with type LoadBalancer and sets their Ingress to their ClusterIP. Run it in a separate terminal, since it keeps running in the foreground:

minikube tunnel

With the following command you can now determine the external IP and port to access the gateway service:

kubectl get svc -n steadybit-demo

Example response (IP columns redacted):

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
fashion-bestseller   NodePort       -------------   <none>          ----:-----/---   ---
gateway              LoadBalancer   -------------   {EXTERNAL-IP}   80:30131/TCP     3h15m
hot-deals            NodePort       -------------   <none>          ----:-----/---   ---
product-db           NodePort       -------------   <none>          ----:-----/---   ---
toys-bestseller      NodePort       -------------   <none>          ----:-----/---   ---
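If you want to grab the gateway's external IP in a script, you can filter the same output with awk. This is a sketch: the here-doc stands in for the live `kubectl get svc` output, and the IP is an example value (with minikube tunnel, the external IP equals the ClusterIP). Against a real cluster, pipe the actual command instead:

```shell
# Stand-in for live `kubectl get svc -n steadybit-demo` output (example IP).
svc_output=$(cat <<'EOF'
NAME      TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
gateway   LoadBalancer   10.96.45.238   10.96.45.238   80:30131/TCP   3h15m
EOF
)
# Print the EXTERNAL-IP column (4th field) of the gateway row.
echo "$svc_output" | awk '$1 == "gateway" { print $4 }'
```

On a live cluster, `kubectl get svc gateway -n steadybit-demo -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` yields the same value without text parsing.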

Visit http://{EXTERNAL-IP}:{PORT}/products in your browser to retrieve the aggregated list of all products or just use curl:

curl http://{EXTERNAL-IP}:{PORT}/products

The result is an aggregated list of all products from the toys, hot-deals, and fashion services:

{
  "fashionResponse": {
    "responseType": "REMOTE_SERVICE",
    "products": [
      {
        "id": "e9f0bec4-989c-4b9f-8bf9-334622e915ad",
        "name": "Bob Mailor Slim Jeans",
        "category": "FASHION"
      },
      {
        "id": "b110185b-d808-4104-b605-08a90b1248ce",
        "name": "Lewi's Jeanshose 511 Slim Fit",
        "category": "FASHION"
      },
      {
        "id": "222d7084-3cc7-43c3-890f-4598aa44eb2f",
        "name": "Urban Classics Shirt Shaped Long Tee",
        "category": "FASHION"
      }
    ]
  },
  "toysResponse": {
    "responseType": "REMOTE_SERVICE",
    "products": [ ... ]
  },
  "hotDealsResponse": {
    "responseType": "REMOTE_SERVICE",
    "products": [ ... ]
  },
  "duration": 112,
  "statusFashion": "REMOTE_SERVICE",
  "statusToys": "REMOTE_SERVICE",
  "statusHotDeals": "REMOTE_SERVICE"
}
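If you want to watch these status fields during an experiment, you can extract them from the response with plain grep, without any extra tooling. This is a sketch: the here-doc below is a trimmed stand-in for the live response; against the running demo you would pipe `curl -s http://{EXTERNAL-IP}:{PORT}/products` instead:

```shell
# Trimmed stand-in for a live /products response.
response=$(cat <<'EOF'
{
  "duration": 112,
  "statusFashion": "REMOTE_SERVICE",
  "statusToys": "REMOTE_SERVICE",
  "statusHotDeals": "REMOTE_SERVICE"
}
EOF
)
# Pull out the three per-service status fields.
echo "$response" | grep -o '"status[A-Za-z]*": "[A-Z_]*"'
```

This prints one line per backend status, which makes it easy to spot when a service stops reporting REMOTE_SERVICE during an experiment.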

Step 4 - Run your first experiment

We will now use steadybit to find out how our shopping demo behaves when there is increased network latency for a backend service. The latency should affect only a specific type of container: among other reasons, we don't want to needlessly impact our colleagues when we run this kind of test later in a real Kubernetes cluster.

Please create a new experiment in the infrastructure section:

Create experiment step 1

Like everything in life, our experiment needs a fitting name:

Create experiment step 2

Our target for our experiment is a container running in Kubernetes. Therefore, we select Container for the type of targets. We want our experiment to be repeatable as often as we like, so we describe our target container with attributes and not by a unique name:

Create experiment step 3

In the current deployment, none of the services are scaled; this isn't realistic, but it keeps the demo simple. For this reason, only 1 container falls within the attack radius. In real life you would see more than 1 affected container and could then control how many of them should be affected by this experiment:

Create experiment step 4

Our experiment is to inject latency into the network in order to find out how this affects our application and more importantly what our customers' experience is in this case.

Create experiment step 5

For now, let's skip the Execution and Monitoring section. Normally, you would create a suitable load test here to run during the experiment and connect your monitoring solution.

You can read more about this in our docs.

Create experiment step 6

Now everything is ready and we can start the experiment. For the next 30 seconds there will be increased latency in the fashion-bestseller service.

You can track this by checking the response of the shopping-demo endpoint /products.

Create experiment step 7

You should notice that the fashionResponse section shows only the fallback and no products. The good news: the latency we injected does not also negatively impact the gateway service. Still, the current behavior can be improved, for example by scaling the services.


You have now successfully run an experiment with steadybit in a Kubernetes environment and seen how big an impact even a little latency can have in a non-scaled system.

What are the next steps?

How about scaling the fashion-bestseller service and then running your new experiment again to increase availability and resilience?

kubectl scale deploy fashion-bestseller --replicas=3 --namespace steadybit-demo

Verify by running:

kubectl get deployments -A

Example output:

NAMESPACE        NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kube-system      coredns              1/1     1            1           128d
steadybit-demo   fashion-bestseller   3/3     3            3           1h49m
steadybit-demo   gateway              1/1     1            1           1h49m
steadybit-demo   hot-deals            1/1     1            1           1h49m
steadybit-demo   postgres             1/1     1            1           1h49m
steadybit-demo   toys-bestseller     1/1     1            1           1h49m

One big advantage is that you can re-run your experiment stored in steadybit at any time.

Need help? Get in touch with us.