AKS - setting up Istio ingress (external and internal)

 

Enabling External Ingress

First, we need to ensure the Istio service mesh is installed on our cluster. I typically run the following commands to enable it on an existing cluster:

# enable the mesh on my cluster
az aks mesh enable --resource-group istio-rg --name my-istio-cluster

# check whether the cluster's service mesh is enabled
az aks show --resource-group istio-rg --name my-istio-cluster --query 'serviceMeshProfile.mode'

External ingress

Next, we activate the Istio external ingress gateway.

You may get a 400 Bad Request error when running:

az aks mesh enable-ingress-gateway --resource-group istio-rg --name my-istio-cluster --ingress-gateway-type external

If so, ensure your cluster's managed identity has the Network Contributor role on your node resource group.

# Get the principal ID for the system-assigned managed identity.
PRINCIPAL_ID=$(az aks show --name my-istio-cluster --resource-group istio-rg --query identity.principalId --output tsv)

# Get the resource ID for the node resource group.
RG_SCOPE=$(az group show --name MC_istio-rg_my-istio-cluster_australiaeast --query id --output tsv)

# Assign the Network Contributor role to the managed identity,
# scoped to the node resource group.
az role assignment create --assignee ${PRINCIPAL_ID} --role "Network Contributor" --scope ${RG_SCOPE}

Then proceed to run the following command again:

az aks mesh enable-ingress-gateway --resource-group istio-rg --name my-istio-cluster --ingress-gateway-type external


This time the command should run successfully.




To verify your external ingress gateway, run the following command:

kubectl get svc aks-istio-ingressgateway-external -n aks-istio-ingress



When I looked at my resources, I could see that a new public IP was created rather than reusing an existing one.

 

So now I have two public IPs.

Next, we will create a Gateway associated with our newly created load balancer. At this point, before the Gateway exists, hitting "http://20.227.92.229:80/productpage" returns a connection timeout error.
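You can confirm the timeout from a terminal. A quick sketch; substitute your own gateway IP for the one shown here:

```shell
# Before the Gateway resource exists, the request should hang and time out
# (--max-time caps the wait at 10 seconds instead of waiting indefinitely).
curl --max-time 10 -v "http://20.227.92.229:80/productpage"
```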

Next, we will run kubectl apply -f gateway.yaml, where the YAML looks like this:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway-external
spec:
  selector:
    istio: aks-istio-ingressgateway-external
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: bookinfo-vs-external
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway-external
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080


The key point to notice is the selector istio: aks-istio-ingressgateway-external, which binds this Gateway to the AKS-managed external ingress gateway.



Let's try to load the page again. This time the product page should render, something like this:
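If you prefer the command line, here is a quick sketch to confirm the route works, assuming the bookinfo sample application is deployed:

```shell
# Look up the external gateway's public IP, then fetch the product page title.
GATEWAY_IP_EXTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-external \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$GATEWAY_IP_EXTERNAL/productpage" | grep -o "<title>.*</title>"
```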

Additional info

For how to customize your Istio service mesh configuration, please refer to the article linked here.


Enabling Internal Ingress

To enable internal ingress, run the following command:

az aks mesh enable-ingress-gateway --resource-group $RESOURCE_GROUP --name $CLUSTER --ingress-gateway-type internal



To verify our installation, run:

kubectl get svc aks-istio-ingressgateway-internal -n aks-istio-ingress



Next, capture the internal gateway's address and port by running the following commands:


export INGRESS_HOST_INTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-internal -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT_INTERNAL=$(kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-internal -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL_INTERNAL=$INGRESS_HOST_INTERNAL:$INGRESS_PORT_INTERNAL


To validate, ensure there's output from the following command:

kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS  "http://$GATEWAY_URL_INTERNAL/productpage"  | grep -o "<title>.*</title>"


It should look something like this:



Notice that an internal load balancer has now been created for me.



Also notice that the service carries the annotation "service.beta.kubernetes.io/azure-load-balancer-internal: true". Please refer to this article for additional configuration-related info.
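To see the annotation for yourself, you can query the service directly. A sketch; note that the dots in the annotation key must be escaped in the jsonpath expression:

```shell
# Prints "true" if the internal load balancer annotation is set.
kubectl -n aks-istio-ingress get service aks-istio-ingressgateway-internal \
  -o jsonpath='{.metadata.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-internal}'
```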


And if I look at my service, I can see a private IP address has been allocated, as shown here.


You may not have an internal load balancer when a Service is first created, but one will be created automatically. If I were to create another internal service, such as the "internal1-load-balancer" below, a new private IP gets allocated to me. Notice I am using the annotation "service.beta.kubernetes.io/azure-load-balancer-internal" here.

apiVersion: v1
kind: Service
metadata:
  annotations:   
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: internal1-load-balancer
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-load-balancer
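Assuming the manifest above is saved as internal-lb.yaml (a hypothetical filename), applying it and watching the service shows the private IP being assigned:

```shell
kubectl apply -f internal-lb.yaml
# The EXTERNAL-IP column will show a private address from the
# cluster's subnet once the internal load balancer is provisioned.
kubectl get svc internal1-load-balancer --watch
```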


Also notice that the frontend IP configuration of the internal load balancer matches my Kubernetes services.







