Posts

Showing posts from 2025

gke installing asmcli

You can easily install asmcli by using the following commands:

curl https://storage.googleapis.com/csm-artifacts/asm/asmcli_1.20 > asmcli
chmod +x asmcli
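Once it is executable, a typical install invocation looks roughly like this. This is a hedged sketch: the project, cluster name, location and output directory are placeholders, and the exact flags can vary between asmcli versions, so check the asmcli documentation for yours.

# values below are placeholders for your own project / cluster / location
./asmcli install \
  --project_id my-project \
  --cluster_name my-cluster \
  --cluster_location us-central1-c \
  --output_dir ./asm-output \
  --enable_all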

gke mesh - getting sidecar to work on your cluster

After you have enabled mesh in the cluster, the next thing to do is to deploy your application. But before that, you need to ensure your namespace has been labelled and annotated correctly using the following commands:

kubectl label namespace test1 istio-injection=enabled --overwrite
kubectl annotate --overwrite namespace test1 mesh.cloud.google.com/proxy='{"managed":"true"}'

The mesh.cloud.google.com/proxy annotation enables the Google-managed data plane, which contrasts with a traditional Istio setup where sidecar injection is controlled only by the istio-injection=enabled label. The managed data plane gives you seamless proxy upgrades.
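Workloads that were already running in the namespace need to be restarted to pick up the sidecar. A quick check, assuming your deployments live in test1:

kubectl rollout restart deployment -n test1
kubectl get pods -n test1   # READY should show 2/2 once the proxy is injected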

gke getting service mesh implementation

You can use the following command to get more details of your GKE cluster mesh, including which control plane implementation it is using:

gcloud container fleet mesh describe --project FLEET_PROJECT_ID

Possible values for the implementation are:
TRAFFIC_DIRECTOR: Google Cloud's core infrastructure acts as the control plane for Cloud Service Mesh.
ISTIOD: A managed instance of istiod functions as the Cloud Service Mesh control plane.

gke autopilot setting up istio (unmanaged mesh) and getting it to run

When setting up Istio separately on a GKE Autopilot cluster (not using the GKE managed service mesh), you can install it, but the Istio sidecar won't run when you deploy your application. If you look in Log Explorer you will see an error such as:

"'istio-init' not allowed; Autopilot only allows the capabilities: 'AUDIT_WRITE,CHOWN,DAC_OVERRIDE,FOWNER,FSETID,KILL,MKNOD,NET_BIND_SERVICE,NET_RAW,SETFCAP,SETGID,SETPCAP,SETUID,SYS_CHROOT,SYS_PTRACE'."

This is by design, but you can override it by running the following command, which will get your pods running in an Istio-injected namespace:

gcloud container clusters update $CLUSTER_NAME --workload-policies=allow-net-admin

After running the gcloud command above, the pod and its sidecar come up as expected.
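One way to confirm the sidecar is actually there, using a placeholder pod name and namespace:

# list container names in the pod; istio-proxy should appear alongside the app container
kubectl get pod my-app-pod -n demo -o jsonpath='{.spec.containers[*].name}'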

gcloud set region

You can use the following commands to set the default region and zone:

gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
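To confirm what is currently configured:

gcloud config get-value compute/region
gcloud config get-value compute/zone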

nodejs starter app samples that support different setups like graphql, knex databases and many more

This is a good starter app that supports many different ORMs and implementations such as GraphQL. Check it out: https://github.com/ljlm0402/typescript-express-starter
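If I remember correctly, the project can be scaffolded with npx; treat the exact command as an assumption and check the repo README for the current usage:

# assumption: the npx package name matches the repo; see the README
npx typescript-express-starter "my-app"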

gcp creating cloud nat gateway for a custom vpc and getting a public ip

While creating either a VPC or a Cloud NAT gateway is not really a big deal, it can be confusing when you try to work out where the public IP associated with it is. It is allocated automatically. All you need to do is:
- create a custom VPC, say mytest-vpc
- create a Cloud NAT gateway for that network
- place a VM instance in the subnet and you will see the public IP address appearing
When you SSH into that box, you will notice there is a public IP address being used for outbound traffic.
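A rough command-line sketch of the same steps; the network, subnet, router and NAT names and the region are placeholders:

gcloud compute networks create mytest-vpc --subnet-mode=custom
gcloud compute networks subnets create mytest-subnet \
    --network=mytest-vpc --region=us-central1 --range=10.10.0.0/24
gcloud compute routers create mytest-router --network=mytest-vpc --region=us-central1
gcloud compute routers nats create mytest-nat --router=mytest-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges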

gcp vm (compute) - forcing it to use a cloud NAT router

We can configure the VM to use the Cloud NAT router instead of its own public external IP. First we have to create a Cloud NAT gateway and Cloud Router; in this configuration we will attach them to the default VPC network. That should give you a public IP. Next, let's create our VM and configure it to:
- use the default network
- disable public IP creation by setting External IP to None
When you then check the egress IP used by the VM, you will see it is the Cloud NAT router's IP.
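A minimal sketch of creating such a VM; the VM name and zone are placeholders:

gcloud compute instances create nat-test-vm \
    --zone=us-central1-a --network=default --subnet=default --no-address
# then, from inside the VM, the reported source IP should be the Cloud NAT IP
curl -s https://checkip.amazonaws.com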

kubectl run ubuntu image shell in your kubernetes pod

 kubectl run -it ubuntu --image=ubuntu -- bash
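If you only want a throwaway shell, --rm and --restart=Never clean the pod up when you exit:

kubectl run -it --rm ubuntu --image=ubuntu --restart=Never -- bash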

quick and fast way to check your source ip address via curl

  curl -s https://checkip.amazonaws.com

angular creating service

To create a service we can just use the following demo code:

import { Injectable } from "@angular/core";

@Injectable()
export class Logger {
  log(message: string) {
    console.log(message);
  }
}

Then to use that in the component, we can include the following code:

import { Component, inject } from '@angular/core';
import { Logger } from "./betterLogger";

@Component({
  selector: 'app-ssrcounter',
  imports: [],
  templateUrl: './ssrcounter.component.html',
  styleUrl: './ssrcounter.component.css',
  providers: [Logger]
})
export class SsrcounterComponent {
  // alternative: private logger = inject(Logger);
  constructor(private logger: Logger) {
    this.logger.log("testing");
  }
}

angular creating pipe

To create an Angular pipe, you can use the following code. We create a myCustomTransformation pipe by adding a file that has the following code:

import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: 'myCustomTransformation',
})
export class MyCustomTransformationPipe implements PipeTransform {
  transform(value: string): string {
    return `My custom transformation of ${value}.`;
  }
}

To use the pipe above in your Angular component, you need to import it as shown below:

import { Component, inject } from '@angular/core';
import { UserService } from "./userservice";
import { Logger, EvenBetterLogger } from "./betterLogger";
import { MyCustomTransformationPipe } from './userpipe';

@Component({
  selector: 'app-ssrcounter',
  imports: [MyCustomTransformationPipe],
  templateUrl: './ssrcounter.component.html',
  styleUrl: './ssrcounter.component...

gcp - figure out what accelerator type is supported in different regions

To get a list of accelerator types available in each zone, you can use the following command:

gcloud compute accelerator-types list
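To narrow the output down to a particular region, a zone filter works; the region below is just an example:

gcloud compute accelerator-types list --filter="zone ~ us-central1"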

azure devops calling rest api using workload identity

In this setup we are trying to call an Azure DevOps endpoint to list files in a repository, or get a file's content, using the REST API without a PAT token. Instead we are fully relying on workload identity and Bearer authorization. Let's see how we can do that. Before starting we need to ensure:
1. your pod is running with a specific managed identity (workload identity)
2. you have granted this managed identity permission to access your Azure DevOps repositories / project

$AzureDevopsApplicationId = "499b84ac-1321-427f-aa17-267ca6975798"
# we are using az cli to get us the access token
$token = az account get-access-token --resource $AzureDevopsApplicationId | ConvertFrom-Json
$headers = SetupAuthorizationHeader $token.accessToken

function SetupAuthorizationHeader($usertoken)
{
  Write-Host("SetupAuthorizationHeader function/module")
  Write-Host($usertoken)
  $headers = New-Object "System.Collect...
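As a rough sketch of the eventual REST call from a shell; the organisation, project, repository and file path below are placeholders, and the api-version may differ in your environment:

# placeholders: myorg / myproject / myrepo; the resource id is the Azure DevOps application id above
TOKEN=$(az account get-access-token --resource 499b84ac-1321-427f-aa17-267ca6975798 --query accessToken -o tsv)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://dev.azure.com/myorg/myproject/_apis/git/repositories/myrepo/items?path=/README.md&api-version=7.1"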

azure function app - isolated process model - service bus / event hub

Azure Function App code for setting up Service Bus and Event Hubs under the isolated process model requires the following setup. First we will have the following packages:

<ItemGroup>
    <FrameworkReference Include="Microsoft.AspNetCore.App" />
    <PackageReference Include="Azure.Messaging.ServiceBus" Version="7.18.4" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="2.0.0" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.EventHubs" Version="6.3.6" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore" Version="2.0.1" />
    <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.ServiceBus" Version="5.22.1" />
    <PackageReference Include="Microsoft.Azure.Functions.Wo...

azure function app - Can't find app with name "xxxxx"

One of the confusing things when working with a Function App is trying to figure out the "function app name" after you have set up the publish profile.

func azure functionapp list-functions pricingfunction

It turns out that the name required here is the actual Function App name in Azure. After fixing this, my command worked correctly.
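If you are not sure of that name, listing the Function Apps in your subscription is one way to find it; the resource group below is a placeholder:

az functionapp list --query "[].name" -o table
az functionapp list --resource-group my-rg --query "[].name" -o table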

gcp - cloudsql disable instance protection

We can easily disable instance deletion protection by going into your target Cloud SQL instance, selecting Edit, then scrolling down to the instance deletion protection setting and unticking it. After that you can delete your database instance.
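The same can be done from the command line, assuming an instance named my-instance:

gcloud sql instances patch my-instance --no-deletion-protection
gcloud sql instances delete my-instance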

github - changing repository visibility

To change a repository's visibility from public to private or vice versa, go to your repository -> Settings, then scroll down to "Change visibility".
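The same change can be made with the gh CLI; the repository name is a placeholder, and newer gh versions may ask for an extra confirmation flag:

gh repo edit myuser/myrepo --visibility private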

react/redux toolkit sample app - quick start

Setting up React/Redux for an application can be quite an interesting experience. Begin by installing the libraries and setting up the hook and store. Ensure your package.json has the following libraries:

"dependencies": {
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-redux": "^9.1.0",
    "@reduxjs/toolkit": "^2.1.0"
},

hook.ts (this file is pretty much always the same):

import { useDispatch, useSelector } from "react-redux";
import type { AppDispatch, RootState } from "./store";

// Use throughout your app instead of plain `useDispatch` and `useSelector`
export const useAppDispatch = useDispatch.withTypes<AppDispatch>();
export const useAppSelector = useSelector.withTypes<RootState>();

Then set up your counterslice.ts:

import { createAsyncThunk, createSlice, PayloadAction } from ...

keda - getting error reading from azure keyvault with context failed

This can be a confusing error from KEDA. On one hand it says it is unable to read from Key Vault, while authentication for other triggers works without issues. Let's say I have a ScaledObject with 4 Event Hub scalers configured: two are happy while the other 2 are not working. All are using the same Key Vault and authentication approach. When I check the KEDA operator logs I see "unable to read from keyvault", yet 2 of those scalers are working. I noticed the underlying reason it is failing is "context cancelled". This can mean it failed while performing other operations, such as trying to connect to Event Hubs via a specific connection string. After debugging those Event Hub connection strings, I found that this was indeed the issue. All I had to do was run

kubectl describe scaledobject/my-scaled-app -n mynamespace

figure out which Event Hub scalers are not working / happy, and then focus on fixing those instead of trying to review the KEDA operator logs ...

llm coding leaderboards

These are the coding leaderboards for different models: BigCodeBench, LiveCodeBench, Aider Leaderboard, lmsys

gke - good way to spin up a pod and test workload identity

First create a pod under the namespace that you would like to test. Here we are using the test namespace and the service account sa.

apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: test
spec:
  serviceAccountName: sa
  containers:
  - name: test-pod
    image: google/cloud-sdk:slim
    command: ["sleep", "infinity"]
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
        ephemeral-storage: 10Mi

Next, we will exec into the pod:

kubectl exec -it pods/test-pod --namespace=test -- /bin/bash

And then run the following command:

curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" "https://storage.googleapis.com/storage/v1/b/jerwotestbuckety/o"
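A quick way to confirm which identity the pod actually resolved to, from inside that same shell, is to ask the metadata server:

# should print the Google service account email your Kubernetes service account is bound to
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"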