Posts

google adk deploying to gcp cloud run

We can deploy an ADK agent to Google Cloud with the adk command line, which becomes available when we install the adk module as part of your deployment requirements. To deploy your application, run the following command. If you are deploying the sample app from its root folder, set AGENT_PATH=. , and $SERVICE_NAME is the name of your Cloud Run service.

```shell
adk deploy cloud_run \
  --project=$GOOGLE_CLOUD_PROJECT \
  --region=$GOOGLE_CLOUD_LOCATION \
  --service_name=$SERVICE_NAME \
  --app_name=$APP_NAME \
  --with_ui \
  $AGENT_PATH
```

If you do not have an existing container registry, it will ask whether you want one to be created. You will also be prompted whether to allow anonymous (unauthenticated) invocations. The output would be something like this:-

After you have successfully deployed it, your agentic AI can be tested via the deployed URL. If you click on the Cloud Run endpoint, the chat UI appears and you can type away.
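The command references several environment variables; a minimal sketch of setting them up before running adk deploy (the values here are placeholders, substitute your own):

```shell
# Placeholder values -- substitute your own project details.
export GOOGLE_CLOUD_PROJECT=my-gcp-project
export GOOGLE_CLOUD_LOCATION=us-central1   # region for the Cloud Run service
export SERVICE_NAME=adk-demo-service       # name of the Cloud Run service
export APP_NAME=adk-demo-app               # app name for the deployed agent
export AGENT_PATH=.                        # run from the sample app's root folder
echo "$SERVICE_NAME"
```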

google adk deploying to Agentic Platform

To deploy the ADK agent to the Agentic Platform (Vertex AI), we need to set up our .env file.

Pre-requisite: https://github.com/GoogleCloudPlatform/agent-starter-pack

```
GOOGLE_CLOUD_PROJECT=YOUR-PROJECT-ID
GOOGLE_CLOUD_LOCATION=us-central1
GOOGLE_GENAI_USE_VERTEXAI=TRUE
MAPS_API_KEY=YOUR-MAP-API-KEY
```

You can change the name of your agentic application by modifying the file "app/app_utils/deploy.py":

```python
@click.option(
    "--display-name",
    default="property-location-strategy-uat",  # change your application name here
    help="Display name for the agent engine",
)
```

There are other configuration options such as location and service account. Run the following command:-

```shell
make deploy IAP=true
```

Example output: behind the scenes, it is using Vertex AI's agent_engines.update to do its deployment:

```python
remote_agent = client.agent_engines.update(
    name=matching_agents[0].api...
```
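The .env entries above are plain KEY=VALUE pairs. As a rough illustration only (a minimal parser I wrote for this post, not what agent-starter-pack actually uses to load the file), this is how such a file maps to a dict:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
GOOGLE_CLOUD_PROJECT=YOUR-PROJECT-ID
GOOGLE_CLOUD_LOCATION=us-central1
GOOGLE_GENAI_USE_VERTEXAI=TRUE
"""
print(parse_env(sample)["GOOGLE_CLOUD_LOCATION"])  # us-central1
```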

postgres find out the successful connections via Azure Logs

To uncover the connections being made to a Postgres server, including historical data, we can use the following query to find out who is connected, as which user, and to which database. This handy query is illustrated here - notice that we are pulling out information for a specific window of time.

```
AzureDiagnostics
| where Category == "PostgreSQLLogs"
| where TimeGenerated between (datetime(2026-04-23 15:02:00) .. datetime(2026-04-23 15:09:00))
| where Message contains "connection authorized"
| extend DbUser = extract(@"user=([^\s]+)", 1, Message)
| extend DbName = extract(@"database=([^\s]+)", 1, Message)
| summarize ConnectionCount = count() by DbUser, DbName
| order by ConnectionCount desc
```
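The same user=/database= extraction that the KQL extract() calls perform can be sketched in Python (the log line below is a made-up example of a Postgres "connection authorized" message, not a real log):

```python
import re

def parse_connection(message: str):
    """Pull the user and database out of a 'connection authorized' log line."""
    user = re.search(r"user=([^\s]+)", message)
    db = re.search(r"database=([^\s]+)", message)
    return (user.group(1) if user else None, db.group(1) if db else None)

msg = "connection authorized: user=app_user database=salesdb SSL enabled"
print(parse_connection(msg))  # ('app_user', 'salesdb')
```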

argocd bluegreen deployment example

To deploy blue-green, we start off by deploying the following YAML. As you may have noticed, it references two Services, which we will create next.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: bluegreen-demo
  labels:
    app: bluegreen-demo
spec:
  replicas: 3
  revisionHistoryLimit: 1
  selector:
    matchLabels:
      app: bluegreen-demo
  template:
    metadata:
      labels:
        app: bluegreen-demo
    spec:
      containers:
      - name: bluegreen-demo
        image: argoproj/rollouts-demo:blue
        imagePullPolicy: Always
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        resources:
...
```
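The two Services the Rollout switches between could look like this - a sketch assuming the standard Argo Rollouts blue-green setup, where the Rollout's spec.strategy.blueGreen names an activeService and a previewService (the service names here are mine, not from the original post):

```yaml
# Sketch: active and preview Services for the blue-green strategy.
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-demo-active
spec:
  selector:
    app: bluegreen-demo
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: bluegreen-demo-preview
spec:
  selector:
    app: bluegreen-demo
  ports:
  - port: 80
    targetPort: 8080
```

The controller rewrites each Service's selector with a pod-template hash, so traffic on the active Service stays on the old ReplicaSet until the new one is promoted.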

Using Cloud Run to update your Vertex AI RAG

In this example, we are going to use Google Cloud Run to update a Vertex AI RAG corpus. When we have a new document, we upload it to Cloud Storage; this generates an event that triggers Cloud Run, which updates the RAG corpus.

To set up our Cloud Run service with an event trigger, we give it a name and then configure the target resource. As you can see, we are using "google.cloud.storage.object.v1.finalized", which gets fired when we create a new object or overwrite an existing object, and Cloud Storage creates a new generation of that object.

This is the code in our Cloud Run service:

```python
import functions_framework
import os
from google.cloud import aiplatform
from vertexai.preview import rag
import vertexai

# Configuration from environment variables
PROJECT_ID = os.environ.get("PROJECT_ID")
LOCATION = os.environ.get("LOCATION", "us-central1")
RAG_CORPUS_ID = os.environ.get("RAG_CORPUS_ID")

vertexai.init(project=PROJECT_ID, location=LOCATION)

# Triggered by a...
```
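The snippet is cut off before the handler, but the CloudEvent delivered for a finalized object carries the bucket and object name in its data payload. A minimal sketch of turning that payload into the gs:// path you would hand to the RAG import (the helper name build_gcs_uri is mine, not from the original code):

```python
def build_gcs_uri(event_data: dict) -> str:
    """Build the gs:// URI for a finalized object from a storage event payload.

    A google.cloud.storage.object.v1.finalized event includes at least
    'bucket' and 'name' fields in its data.
    """
    bucket = event_data["bucket"]
    name = event_data["name"]
    return f"gs://{bucket}/{name}"

# Example (abridged, hypothetical) payload shape from a finalized-object event:
data = {"bucket": "my-docs-bucket", "name": "reports/q1.pdf"}
print(build_gcs_uri(data))  # gs://my-docs-bucket/reports/q1.pdf
```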

private endpoint with multiple vnet how do you connect

When we set up multiple VNETs and peer them - let's say we have a marketing VNET and a research VNET - and then create a storage account with a private link in the research VNET, a VM trying to resolve our storage account from the marketing VNET will get a public IP. How do we configure things so that this VM in marketing resolves the internal IP?

All we need to do is go to the private DNS zone that we created, add a "Virtual Network link", and choose the "marketing" VNET as the target network. Then, in the VM we created in the marketing VNET, you can see that we now resolve to a private IP address instead of the public one. And if we look at the subnet, no extra IP has been taken up.

Kubernetes ValidatingAdmissionPolicy basic example

To use this we need to create a "ValidatingAdmissionPolicy" and a "ValidatingAdmissionPolicyBinding". This example will prevent a Pod from being created if it does not have a label called "environment".

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: "require-env-label"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   [""]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["pods"]
  validations:
    - expression: "has(object.metadata.labels) && 'environment' in object.metadata.labels"
      message: "The 'environment' label is required for all Pods."
```

And then we have the ValidatingAdmissionPolicyBinding:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
meta...
```
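The CEL expression in the policy reads as plain logic: reject unless metadata.labels exists and contains the key "environment". A small Python equivalent of that check, for illustration only - the real evaluation happens inside the API server:

```python
def has_environment_label(pod: dict) -> bool:
    """Mirror of: has(object.metadata.labels) && 'environment' in object.metadata.labels"""
    labels = pod.get("metadata", {}).get("labels")
    return labels is not None and "environment" in labels

# A Pod with the label passes; one without labels is rejected.
print(has_environment_label({"metadata": {"labels": {"environment": "dev"}}}))  # True
print(has_environment_label({"metadata": {"name": "no-labels-pod"}}))           # False
```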

Creating hub spoke network using Azure Network Manager

First we should create our VNETs; in this context we are going to create 3 VNETs - research, marketing and 3rd party. Then we create a Network Manager and add our management scope, which is the subscription accessible to this Network Manager. Please ensure you have selected Feature => Connectivity.

Network Group
Then we will create a network group and give it a name, "Hub-Spoke-Network-Group". After it is created, click on "Add Virtual Network" manually and select the VNETs here.

Network Configuration
This is where we create the Hub-Spoke topology. Then go next and click "Review and create" when ready.

Deployment
Before you can have your hub and spoke network, we need to click on "Deployment", and then click on "Deploy configuration". After we deploy it, it should look like this:-

argocd rollout basic setup

First we are going to need the following files and apply them.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rollouts-demo
spec:
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: rollouts-demo
```

And this is the rollout file - notice the image we are using is 'blue':

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  replicas: 5
  strategy:
    canary:
      steps:
      - setWeight: 20
      - pause: {}
      - setWeight: 40
      - pause: {}
      - setWeight: 60
      - pause: {}
      - setWeight: 80
      - pause: {}
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      ...
```
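Once applied, the canary can be observed and stepped forward with the Argo Rollouts kubectl plugin (these run against your own cluster; rollouts-demo matches the names above):

```shell
# Watch the rollout progress through the canary steps
kubectl argo rollouts get rollout rollouts-demo --watch

# Resume past a pause step once you are happy with the canary
kubectl argo rollouts promote rollouts-demo
```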

installing extension argo rollout as a plugin for kubectl on linux

To set up the Argo Rollouts plugin for kubectl, we need to run the following commands on our Linux machine:

```shell
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x ./kubectl-argo-rollouts-linux-amd64
sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
```

Finally, test that it is working:

```shell
kubectl argo rollouts version
```

kubectl installing krew plugins

Krew is a plugin manager which allows us to download and use plugins that extend the current functionality of kubectl. To install it on bash, run the following command:-

```shell
(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
```

And remember to add this to your PATH:

```shell
export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
```

Then run this to update the plugin index:

```shell
kubectl krew update
```

Next, run this to list installed plugins:

```shell
kubectl krew list
```
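The OS/ARCH detection in that one-liner just normalizes uname output into Krew's release asset names; the mapping it performs can be reproduced like this (a sketch mirroring the sed expressions above):

```python
import re

def krew_asset(os_name: str, machine: str) -> str:
    """Reproduce the uname -> krew-<os>_<arch> normalization from the install snippet."""
    os_part = os_name.lower()                      # uname | tr '[:upper:]' '[:lower:]'
    arch = re.sub(r"^x86_64$", "amd64", machine)   # s/x86_64/amd64/
    arch = re.sub(r"(arm)(64)?.*", r"\1\2", arch)  # s/\(arm\)\(64\)\?.*/\1\2/
    arch = re.sub(r"aarch64$", "arm64", arch)      # s/aarch64$/arm64/
    return f"krew-{os_part}_{arch}"

print(krew_asset("Linux", "x86_64"))    # krew-linux_amd64
print(krew_asset("Darwin", "arm64"))    # krew-darwin_arm64
print(krew_asset("Linux", "aarch64"))   # krew-linux_arm64
```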

k8s rbac role, rolebinding and testing

In AKS we can set up RBAC for a service account. Let's create a service account and then use the following YAML definition to set up its access to list pods.

```shell
kubectl create serviceaccount pod-watcher -n dev
```

Next we will create a Role that only allows listing and watching pods in the dev namespace. A Role is reusable.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader-only
rules:
- apiGroups: [""] # The core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"] # No "delete", "create", or "update"
```

Here are the common apiGroups that you can use. Please ensure you do not omit the "s" - resource names are plural, and the singular form will NOT work.

| API Group | Common Resources |
| --- | --- |
| "" (Core) | pods, services, nodes, namespaces, configmaps, secrets, persistentvolumeclaims |
| apps | deployments, statefulsets, daemonsets, replicasets |
| batch | jobs, cronjobs |
| ne... | |
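The post is cut off before the binding, but a RoleBinding tying the Role to the service account would follow the usual shape (a sketch - the binding name here is mine, the Role and service account names match the ones above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-only-binding   # hypothetical name
  namespace: dev
subjects:
- kind: ServiceAccount
  name: pod-watcher
  namespace: dev
roleRef:
  kind: Role
  name: pod-reader-only
  apiGroup: rbac.authorization.k8s.io
```

You can then test the grant with `kubectl auth can-i list pods -n dev --as=system:serviceaccount:dev:pod-watcher`, which should answer yes, while the same check with `delete pods` should answer no.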