Installing NGINX Service Mesh
First, you need to install the monitoring components for NGINX Service Mesh.
Clone this repo https://github.com/mitzenjeremywoo/nginx-service-mesh-setup.git and then run the following command.
kubectl apply -f prometheus.yaml -f grafana.yaml -f otel-collector.yaml -f jaeger.yaml
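The manifests deploy Prometheus, Grafana, an OpenTelemetry collector and Jaeger. Judging by the Prometheus address used in the Helm install below, they land in the nsm-monitoring namespace, so a quick sanity check (namespace name assumed from that address) would be:

kubectl get pods -n nsm-monitoring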
Next, use Helm to install the service mesh.
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
helm install nsm nginx-stable/nginx-service-mesh \
  --namespace nginx-mesh --create-namespace --wait \
  --set prometheusAddress=prometheus.nsm-monitoring.svc:9090 \
  --set telemetry.exporters.otlp.host=otel-collector.nsm-monitoring.svc \
  --set telemetry.exporters.otlp.port=4317 \
  --set telemetry.samplerRatio=1
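Once the Helm release finishes, you can confirm the control plane pods are running and, if you have the nginx-meshctl CLI installed, print its version:

kubectl get pods -n nginx-mesh
nginx-meshctl version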
Deploying the bookinfo sample app
We need to label the namespace first to enable automatic sidecar injection, just like we do in Istio.
kubectl label namespaces default injector.nsm.nginx.com/auto-inject=enabled
Then run this command to deploy bookinfo.
kubectl apply -f bookinfo.yaml
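If auto-injection worked, each bookinfo pod should have an extra sidecar container. A quick check (assuming the app was deployed to the default namespace labeled above) is to list the pods and look for 2/2 (or higher) in the READY column:

kubectl get pods -n default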
You can validate the traffic by port-forwarding the Grafana service:
kubectl -n <grafana-namespace> port-forward svc/grafana 3000
The Grafana dashboard will look something like this.
Traffic splitting
Next, we will set up the gateway and make it accessible. We will be deploying a series of deployments and services.
kubectl apply -f target-svc.yaml -f target-v1.0.yaml -f gateway.yaml
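The manifests themselves are not reproduced in this post, but for traffic splitting to work you typically need a root Service that clients call plus one Service and Deployment per version to act as split backends. A rough sketch of what target-svc.yaml and target-v1.0.yaml might contain (all names, labels, ports and the echo image are assumptions inferred from the file names; gateway.yaml, which exposes the root service, is omitted):

# target-svc.yaml - root service that clients (and the gateway) talk to
apiVersion: v1
kind: Service
metadata:
  name: target-svc
spec:
  selector:
    app: target
  ports:
    - port: 80
      targetPort: 8080
---
# target-v1.0.yaml - versioned service + deployment used as a split backend
apiVersion: v1
kind: Service
metadata:
  name: target-v1-0
spec:
  selector:
    app: target
    version: v1-0
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: target-v1-0
spec:
  replicas: 1
  selector:
    matchLabels:
      app: target
      version: v1-0
  template:
    metadata:
      labels:
        app: target
        version: v1-0
    spec:
      containers:
        - name: target
          # hypothetical image that simply echoes "target v1.0"
          image: hashicorp/http-echo
          args: ["-listen=:8080", "-text=target v1.0"]
          ports:
            - containerPort: 8080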
Our load balancer service is now set up, and when we curl the localhost endpoint, we get a response from target v1.0.
To see the traffic, we can run the following:
nginx-meshctl top
If you don't see this, you may need to generate more traffic.
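A simple way to generate traffic is to curl the gateway endpoint in a loop (the URL below is an assumption; replace it with whatever your load balancer actually exposes):

while true; do curl -s http://localhost/ > /dev/null; sleep 0.5; done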
Next, we are going to split the traffic using the following YAML. We are doing the same thing as before, but being explicit by saying that we want 100% of the traffic going to target-v1-0.
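NGINX Service Mesh implements traffic splitting through the SMI TrafficSplit resource. A minimal sketch, assuming the root service is named target-svc and the v1 backend service is target-v1-0 (names inferred from the manifests above):

apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: target-ts
spec:
  service: target-svc        # root service that clients call
  backends:
    - service: target-v1-0   # all traffic stays on v1 for now
      weight: 100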
When we run curl, we can still see the same message.
Next, we will deploy version 2 of the target service. This version returns a successful response (status 200).
kubectl apply -f target-v2.1-successful.yaml
Next, we will split the traffic 50/50 between v1 and v2 of the service.
kubectl apply -f traffic-split-v2.yaml
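The contents of traffic-split-v2.yaml are not shown here, but based on the 50/50 description it would look roughly like the earlier split with a second backend added (the target-v2-1 service name is an assumption based on target-v2.1-successful.yaml):

apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: target-ts
spec:
  service: target-svc
  backends:
    - service: target-v1-0
      weight: 50
    - service: target-v2-1
      weight: 50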
You will be able to see the output below, where traffic is split between them.
You can see that we have successful requests here.
If you apply target-v2.0-failing.yaml, you will see that requests to target v2 fail and the view looks slightly different.
From my testing, although I configured the traffic split to send 100% of traffic to target v2, traffic still seems to be routed to both services, as shown here.