Posts

argocd bluegreen deployment example

To deploy blue-green, we start off by deploying the following YAML. As you may have noticed, it references two services, which we will create next.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: bluegreen-demo
  labels:
    app: bluegreen-demo
spec:
  replicas: 3
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: bluegreen-demo
  template:
    metadata:
      labels:
        app: bluegreen-demo
    spec:
      containers:
      - name: bluegreen-demo
        image: argoproj/rollouts-demo:blue
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
          ...
```
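The manifest is truncated before the strategy section. A minimal sketch of the two Services and the blueGreen strategy block that wires them together (the service names bluegreen-demo-active and bluegreen-demo-preview are assumptions):

```yaml
# Active service - receives production (blue) traffic; name is an assumption
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-demo-active
spec:
  selector:
    app: bluegreen-demo
  ports:
  - port: 80
    targetPort: 8080
---
# Preview service - exposes the new (green) version for testing before promotion
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-demo-preview
spec:
  selector:
    app: bluegreen-demo
  ports:
  - port: 80
    targetPort: 8080
---
# Strategy section of the Rollout referencing both services
spec:
  strategy:
    blueGreen:
      activeService: bluegreen-demo-active
      previewService: bluegreen-demo-preview
      autoPromotionEnabled: false   # require a manual promote step
```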

Using Cloudrun to update your Vertex AI RAG

In this example, we are going to use Google Cloud Run to update a Vertex AI RAG corpus. When we have a new document, we upload it to Cloud Storage; this generates an event that triggers Cloud Run, which updates the RAG corpus.

To set up our Cloud Run service with an event trigger, we give it a name and then configure the target resource. As you can see, we are using the "google.cloud.storage.object.v1.finalized" event type, which fires when we create a new object or overwrite an existing object (Cloud Storage creates a new generation of that object).

This is the code in our Cloud Run service:

```python
import functions_framework
import os
from google.cloud import aiplatform
from vertexai.preview import rag
import vertexai

# Configuration from Environment Variables
PROJECT_ID = os.environ.get("PROJECT_ID")
LOCATION = os.environ.get("LOCATION", "us-central1")
RAG_CORPUS_ID = os.environ.get("RAG_CORPUS_ID")

vertexai.init(project=PROJECT_ID, location=LOCATION)

# Triggered by a ...
```
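The handler body is cut off above. As a hedged sketch, the handler typically reads the bucket and object name from the CloudEvent payload and builds a gs:// URI to import. A minimal, standalone version of that URI-building step (the `bucket` and `name` fields match the Cloud Storage object-finalized event schema; everything else here is illustrative):

```python
def gcs_uri_from_event(data: dict) -> str:
    """Build the gs:// URI for the object that fired the event.

    `bucket` and `name` are the standard fields in a Cloud Storage
    object-finalized CloudEvent payload.
    """
    return f"gs://{data['bucket']}/{data['name']}"

# Example payload shaped like a storage.object.v1.finalized event
event_data = {"bucket": "my-rag-docs", "name": "reports/q3.pdf"}
print(gcs_uri_from_event(event_data))  # gs://my-rag-docs/reports/q3.pdf
```

In the real handler, this URI would then be passed to the RAG import call (for example `rag.import_files(corpus, [uri])`; the exact call shape is an assumption, so check the Vertex AI RAG SDK version you are pinned to).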

private endpoint with multiple vnet how do you connect

Suppose we set up multiple VNETs and peer them, say a marketing VNET and a research VNET, and then create a storage account with a private link in the research VNET. At this point, a VM in the marketing VNET trying to resolve our storage account will get a public IP. How do we configure things so this VM resolves the internal IP?

All we need to do is go to the Private DNS zone that we created, add a "Virtual Network link", and choose the "marketing" VNET as the target network. After that, from the VM we created in the marketing VNET, you can see that we resolve to a local IP address instead of the public one. And if we look at the subnet, no additional IP has been taken up.
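The same Virtual Network link can be created from the CLI. A hedged sketch (resource group, zone, link and VNET names are all assumptions; the zone name depends on which private endpoint sub-resource you created):

```shell
# Link the marketing VNET to the existing Private DNS zone
# so VMs in it resolve the storage account to the private IP
az network private-dns link vnet create \
  --resource-group myRG \
  --zone-name "privatelink.blob.core.windows.net" \
  --name marketing-link \
  --virtual-network marketing-vnet \
  --registration-enabled false
```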

Kubernetes ValidatingAdmissionPolicy basic example

To use this, we need to create a "ValidatingAdmissionPolicy" and a "ValidatingAdmissionPolicyBinding". This example will prevent a pod from being created if it does not have a label called "environment".

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "require-env-label"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   [""]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["pods"]
  validations:
    - expression: "has(object.metadata.labels) && 'environment' in object.metadata.labels"
      message: "The 'environment' label is required for all Pods."
```

And then we have the ValidatingAdmissionPolicyBinding:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
meta...
```
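The binding is truncated above. A minimal sketch of what it usually looks like (the binding name and the empty namespace selector are assumptions):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "require-env-label-binding"   # name is an assumption
spec:
  policyName: "require-env-label"     # must match the policy above
  validationActions: [Deny]           # reject requests that fail validation
  matchResources:
    namespaceSelector: {}             # apply in all namespaces (assumption)
```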

Creating hub spoke network using Azure Network Manager

First we should create our VNETs; in this example we are going to create three: research, marketing and 3rd party. Then we are going to create a Network Manager and add our management scope, which is the subscription accessible to this Network Manager. Please ensure you select Feature => Connectivity.

Network Group. Next we will create a network group and give it a name, "Hub-Spoke-Network-Group". After it is created, click on "Add Virtual Network" manually and select the VNETs here.

Network Configuration. This is where we create the hub-spoke topology. Then go next and click "Review and create" when ready.

Deployment. Before you can have your hub-and-spoke network, we need to click on "Deployment" and then on "Deploy configuration". After we deploy it, it should look like this :-

argocd rollout basic setup

First we are going to need the following files, which we then apply:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rollouts-demo
spec:
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: rollouts-demo
```

And this is the rollout file; notice the image we are using is 'blue':

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  replicas: 5
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {}
      - setWeight: 40
      - pause: {}
      - setWeight: 60
      - pause: {}
      - setWeight: 80
      - pause: {}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      ...
```
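Once applied, the canary can be driven from the CLI, assuming the Argo Rollouts kubectl plugin is installed (the container name in `set image` is an assumption; it must match the container in the Rollout's pod template):

```shell
# Watch the rollout progress through the canary steps
kubectl argo rollouts get rollout rollouts-demo --watch

# Trigger a new rollout by switching the image tag
kubectl argo rollouts set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow

# Resume past a pause step (promote the canary)
kubectl argo rollouts promote rollouts-demo
```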

installing extension argo rollout as a plugin for kubectl on linux

To set up the Argo Rollouts plugin for kubectl, we need to run the following commands on our Linux machine:

```shell
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x ./kubectl-argo-rollouts-linux-amd64
sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
```

Finally, test if it is working:

```shell
kubectl argo rollouts version
```

kubectl installing krew plugins

Krew is a plugin manager which allows us to download and use plugins and extend the current functionality of kubectl. To install it on bash, run the following command :-

```shell
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
```

And remember to add this to your path:

```shell
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
```

Then run this to update the plugin index:

```shell
kubectl krew update
```

Next, run this to list installed plugins:

```shell
kubectl krew list
```

k8s rbac role, rolebinding and testing

In AKS we can set up RBAC for a service account. Let's create a service account and then use the following YAML definition to grant it access to list pods.

```shell
kubectl create serviceaccount pod-watcher -n dev
```

Next we will create a Role that only allows this service account to get, list and watch pods in the dev namespace. A Role is reusable.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader-only
rules:
- apiGroups: [""] # The core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"] # No "delete", "create", or "update"
```

Here are the common apiGroups that you can use. Please ensure you do not omit the plural "s" in resource names, as the rule will NOT work otherwise.

| API Group | Common Resources |
|---|---|
| "" (Core) | pods, services, nodes, namespaces, configmaps, secrets, persistentvolumeclaims |
| apps | deployments, statefulsets, daemonsets, replicasets |
| batch | jobs, cronjobs |
| ne... | |
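The RoleBinding and testing steps from the post title are cut off above. A hedged sketch of what they usually look like (the binding name is an assumption):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-watcher-binding   # name is an assumption
  namespace: dev
subjects:
- kind: ServiceAccount
  name: pod-watcher
  namespace: dev
roleRef:
  kind: Role
  name: pod-reader-only
  apiGroup: rbac.authorization.k8s.io
```

We can then test it by impersonating the service account:

```shell
kubectl auth can-i list pods -n dev --as=system:serviceaccount:dev:pod-watcher    # expect: yes
kubectl auth can-i delete pods -n dev --as=system:serviceaccount:dev:pod-watcher  # expect: no
```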

Federating AKS workload identity step by step guide

This is a step-by-step guide showing how we can federate our workload identity in a Kubernetes cluster. First we enable OIDC on our cluster. Next, we create a managed identity in Azure. Then we create a service account and federate the managed identity.

```shell
# 1. Enable OIDC and Workload Identity on your cluster
az aks update -g myRG -n myCluster --enable-oidc-issuer --enable-workload-identity

# 2. Get the OIDC Issuer URL (needed for the trust)
AKS_OIDC_ISSUER=$(az aks show -n myCluster -g myRG --query "oidcIssuerProfile.issuerUrl" -otsv)

# 3. Create the Managed Identity in Azure
az identity create --name "my-app-identity" --resource-group myRG

# 4. Create the Kubernetes Service Account (SA)
# Note: You MUST annotate it with the Client ID of the Managed Identity
CLIENT_ID=$(az identity show --name "my-app-identity" --resource-group myRG --query clientId -o tsv)
kubectl create serviceaccount my-app...
```
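The guide is truncated before the federation step itself. A hedged sketch of the remaining command (the credential name, namespace and service-account name are assumptions; the subject must match the SA created in step 4):

```shell
# 5. Federate the Managed Identity with the Kubernetes Service Account
az identity federated-credential create \
  --name "my-app-fedcred" \
  --identity-name "my-app-identity" \
  --resource-group myRG \
  --issuer "$AKS_OIDC_ISSUER" \
  --subject "system:serviceaccount:default:my-app-sa" \
  --audiences "api://AzureADTokenExchange"
```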

azure vnet service endpoint what happens when you turn it on and how do you whitelist your resources?

This question might sound hard, but it is quite easy to answer. When we enable a service endpoint for a storage account or other resources, we will be using the Microsoft backbone to route requests to the storage account. When this happens, we are no longer going through NAT, and hence our public IP is no longer what the storage account sees, so whitelisting it will not work. In that case, how do you whitelist it? Simple: just whitelist the subnet that your resources reside in.
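As a hedged sketch, whitelisting the subnet from the CLI might look like this (the resource group, storage account, VNET and subnet names are all assumptions):

```shell
# Enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update -g myRG --vnet-name myVnet -n mySubnet \
  --service-endpoints Microsoft.Storage

# Whitelist that subnet on the storage account firewall
az storage account network-rule add -g myRG --account-name mystorageacct \
  --vnet-name myVnet --subnet mySubnet
```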

terraform state list, taint and untaint

Terraform taint marks a resource as 'bad' and requires it to be re-created the next time we run terraform plan or terraform apply. An example use case would be something like this :- Here we are listing the state file. From there, we can easily taint and untaint our resources.
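A hedged sketch of the workflow (the resource address aws_instance.web is an assumption; note that newer Terraform versions recommend `terraform apply -replace=...` over taint):

```shell
# List every resource tracked in the state file
terraform state list

# Mark one resource for re-creation on the next apply
terraform taint aws_instance.web

# Changed your mind? Clear the taint
terraform untaint aws_instance.web
```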