Posts

gke - enabling service mesh

To get the state of your current mesh, you can run the following command:

gcloud container fleet mesh describe --project your-project-fleet-name

When you run "kubectl get controlplanerevisions -n istio-system" you should get the following outputs. And as you can see here, my pod has been injected with the Istio sidecar proxy.

Observability for your mesh
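As a quick check that injection actually happened, you can list the containers in one of your pods; a minimal sketch, assuming a pod named my-app-pod in the default namespace (the pod name is illustrative):

# list container names in the pod; an injected pod shows istio-proxy alongside your app
kubectl get pod my-app-pod -o jsonpath='{.spec.containers[*].name}'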

gcp cloud sql - unable to delete

When trying to remove a SQL database instance in GCP, the option is greyed out. After investigating, I found that I had deletion protection enabled by default. Go to your instance and edit it. Ensure the deletion protection option is unchecked, then save the new settings. And finally I was able to remove it.
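If you prefer the CLI, deletion protection can also be switched off with gcloud; a sketch, assuming an instance named my-instance:

# turn off deletion protection so the instance can be deleted
gcloud sql instances patch my-instance --no-deletion-protection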

getting the size of your etcd

When running etcd in a Kubernetes cluster, there is a size limit of 6 GB, at least in GCP. To see how much space your cluster's etcd is using, you can run the following commands.

Get the etcd pods:

kubectl -n kube-system get pods | grep etcd

Get the size used so far:

kubectl -n kube-system exec -it etcd-<your-master-node> -- \
  etcdctl --endpoints=https://127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  endpoint status --write-out=table
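If the reported DB size is creeping toward the limit, a compaction followed by a defragment can reclaim space; a minimal sketch, assuming the same pod and certificate paths as above (the revision number is hypothetical - take the current revision from the endpoint status output first):

# compact the keyspace history up to a given revision
kubectl -n kube-system exec -it etcd-<your-master-node> -- \
  etcdctl --endpoints=https://127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  compact 123456

# then defragment to release the freed space back to the filesystem
kubectl -n kube-system exec -it etcd-<your-master-node> -- \
  etcdctl --endpoints=https://127.0.0.1:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  defrag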

gcp global health check status

Here is the link to Google Cloud's global service health status page: https://status.cloud.google.com/

google application load balancer - configure to route traffic to cloud run

This is a common pattern in application load balancer setups: we configure a custom DNS name, then route traffic via the application load balancer to our Cloud Run service through a serverless network endpoint group (NEG).

Create the Cloud Run app

Let's create a simple Cloud Run application first and call it crtest (cloud run test).

Create the load balancer

To get started, let's create our network. Go to VPC network and create a VPC named "lb-network". Set the subnet creation mode to custom, then in the new subnet enter:
a) Name - lb-subnet
b) Select a region
c) IP address range 10.1.2.0/24
Click done, then create.

Next, go to Load balancing, click "Create load balancer" and select Application Load Balancer. For public facing or internal, select Internal and click "Next". Then select "Cross-region or single region deployment", choose "Best for regional workloads" and click "Configure". We are halfway through with our load balancer; a CLI sketch of the NEG step follows below.

Configuring app...
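The NEG wiring can also be done from the CLI; a minimal sketch, assuming the crtest service above (the NEG name and region are illustrative):

# create a serverless NEG pointing at the Cloud Run service
gcloud compute network-endpoint-groups create crtest-neg \
  --region=REGION \
  --network-endpoint-type=serverless \
  --cloud-run-service=crtest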

aws cloudformation - sharing stack outputs with other stacks

We can share stack outputs from one deployment with another, separate deployment. Let's say we create an S3 storage stack with outputs; we can then use those outputs as inputs to another app stack later.

To see this in action, let's create a "storage.yaml" S3 stack as shown here:

AWSTemplateFormatVersion: '2010-09-09'
Description: Storage stack exporting an S3 bucket name and ARN
Resources:
  MyDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "my-data-bucket-${AWS::AccountId}"
Outputs:
  BucketName:
    Value: !Ref MyDataBucket
    Export:
      Name: !Sub "${AWS::StackName}-BucketName"
  BucketArn:
    Value: !GetAtt MyDataBucket.Arn
    Export:
      Name: !Sub "${AWS::StackName}-BucketArn"

Then deploy it:

aws cloudformation create-stack \
  --stack-name storageStack \
  --region ap-southeast-2 \
...
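To round out the picture, a consuming stack can pull those values in with Fn::ImportValue; a minimal sketch, assuming the storageStack exports above (the SSM parameter is just an illustrative consumer, not part of the original post):

AWSTemplateFormatVersion: '2010-09-09'
Description: App stack importing the storage stack's exports
Resources:
  BucketNameParam:
    Type: AWS::SSM::Parameter
    Properties:
      Name: /app/bucket-name   # hypothetical parameter name
      Type: String
      Value: !ImportValue storageStack-BucketName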

aws cloudformation debugging command

Here is a handy command to identify issues with CloudFormation. You just need to plug in the name of your stack and the region.

aws cloudformation describe-stack-events \
  --stack-name appStack \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId, ResourceStatusReason]" \
  --region ap-southeast-2 \
  --output text
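A quick way to see the overall stack state before digging into events is describe-stacks; a sketch, assuming the same stack name and region:

# print just the current status, e.g. ROLLBACK_COMPLETE
aws cloudformation describe-stacks \
  --stack-name appStack \
  --region ap-southeast-2 \
  --query "Stacks[0].StackStatus" \
  --output text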

aws cloudformation hello world

CloudFormation is quite cool, and this is an example of a simple S3 bucket creation.

AWSTemplateFormatVersion: '2010-09-09'
Description: Simple Lambda Hello World example

Resources:
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "jeremywootestbucketstore123"

Resources is followed by MyS3Bucket (this can be any meaningful name). Type is mandatory and has to be a pre-defined resource type; here we're creating an S3 bucket. As for the properties, they have to be referenced from the documentation.

And then we have Outputs, where we can see the ARN and website URL for our S3 bucket. Note that an output on its own isn't shared with other stacks; the Export is what makes that possible.

Outputs:
  BucketName:
    Description: The name of the S3 bucket.
    Value: !Ref MyS3Bucket
    Export:
      Name: MyS3Bucket
  BucketArn:
    Description: The ARN of the S3 bucket...
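To try it out, the template can be deployed with the CLI; a sketch, assuming the template is saved as hello.yaml (the stack name and region are illustrative):

aws cloudformation create-stack \
  --stack-name helloWorldStack \
  --template-body file://hello.yaml \
  --region ap-southeast-2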

aws cli - An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid.

While trying to configure the AWS CLI, I stumbled into this error: "An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid." I validated that my access key was correct and ran "aws configure" a couple of times to ensure things were set up properly, yet the error kept coming back. Finally, I was able to resolve it by not specifying a specific region.
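After reconfiguring, the quickest sanity check is the same call the error came from; a sketch:

# confirm the credentials resolve to a valid account and ARN
aws sts get-caller-identity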

snowflake load data into database using stage objects

A stage object provides shared internal storage used by Snowflake to store and stage files that can later be loaded into a table.

CREATE OR REPLACE DATABASE mydatabase;

CREATE OR REPLACE TEMPORARY TABLE mycsvtable (
  id INTEGER,
  last_name STRING,
  first_name STRING,
  company STRING,
  email STRING,
  workphone STRING,
  cellphone STRING,
  streetaddress STRING,
  city STRING,
  postalcode STRING);

CREATE OR REPLACE TEMPORARY TABLE myjsontable (
  json_data VARIANT);

-- Create a warehouse
CREATE OR REPLACE WAREHOUSE mywarehouse WITH
  WAREHOUSE_SIZE = 'X-SMALL'
  AUTO_SUSPEND = 120
  AUTO_RESUME = TRUE
  INITIALLY_SUSPENDED = TRUE;

Loading goes through a stage, created with the following commands. First we create a file format:

CREATE OR REPLACE FILE FORMAT mycsvformat
  TYPE = 'CSV'
  FIELD_DELIMITER = '|'
  SKIP_HEADER = 1;

Next we create our stage called my_csv_stage:

CREATE OR REPLAC...
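The snippet is cut off here, but for context the usual remaining steps are staging the file and copying it into the table; a minimal sketch, assuming the stage ends up named my_csv_stage and a local file contacts1.csv (the file path is hypothetical):

-- upload a local file into the internal stage
PUT file:///tmp/contacts1.csv @my_csv_stage;

-- load the staged file into the table using the CSV file format
COPY INTO mycsvtable
  FROM @my_csv_stage
  FILE_FORMAT = (FORMAT_NAME = 'mycsvformat')
  ON_ERROR = 'skip_file';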

google cloud helloworld pipeline

We can easily create a build pipeline using Google Cloud Build. Create a file called cloudbuild.yaml and paste the following content into it.

steps:
- name: 'ubuntu'
  script: |
    #!/usr/bin/env bash
    echo "Hello $_USER"
- name: 'ubuntu'
  script: |
    #!/usr/bin/env bash
    echo "Your project ID is $PROJECT_ID"
options:
  automapSubstitutions: true
substitutions:
  _USER: "Google Cloud"

Every step in Cloud Build runs a specific Docker image; in my case it is ubuntu. To have Google Cloud Build start running your code, use the following command:

gcloud builds submit --config cloudbuild.yaml .

The command line prints the build output, and the same logs appear in the Google Cloud Build console.
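To check on past runs from the terminal afterwards, the builds list command works; a sketch:

# show the most recent build with its status and ID
gcloud builds list --limit=1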

gcp - creating alerts from logs

To create a log-based alert in GCP, go to https://console.cloud.google.com/logs/query. Then locate the query window where you can specify different types of queries. The language here is the Logging query language, and you can find more details here. Once you have entered your query, you can create an alert from the actions drop-down. Follow the prompts and provide a name to configure your alert.
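As an example of the kind of query you might alert on, an illustrative Logging query language filter that matches Cloud Run errors (the resource type is an assumption, swap in your own):

resource.type="cloud_run_revision"
severity>=ERROR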

gke - deploying adk agent

To deploy an ADK agent to a GKE cluster, first we need to create the required resources.

Set up the variables:

gcloud config set project PROJECT_ID
export GOOGLE_CLOUD_LOCATION=REGION
export PROJECT_ID=PROJECT_ID
export GOOGLE_CLOUD_PROJECT=$PROJECT_ID
export WORKLOAD_POOL=$PROJECT_ID.svc.id.goog
export PROJECT_NUMBER=$(gcloud projects describe --format json $PROJECT_ID | jq -r ".projectNumber")

And then clone this repository:

git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples.git
cd kubernetes-engine-samples/ai-ml/adk-vertex

Next we set up our cluster:

gcloud container clusters create-auto CLUSTER_NAME \
  --location=$GOOGLE_CLOUD_LOCATION \
  --project=$PROJECT_ID

And then create an Artifact Registry repository for our container images:

gcloud artifacts repositories create adk-repo \
  --repository-format=docker \
  --location=$GOOGLE_CLOUD_LOCATION \
  --project=$PROJECT_ID

Next, permission and role assignment - please ensure you provided the right proj...
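Before deploying anything to the new cluster, kubectl needs credentials for it; a sketch of that likely next step, assuming the cluster name above:

# fetch kubeconfig credentials for the newly created cluster
gcloud container clusters get-credentials CLUSTER_NAME \
  --location=$GOOGLE_CLOUD_LOCATION \
  --project=$PROJECT_ID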