Posts

terraform Error: building account: getting authenticated object ID: listing Service Principals: autorest.DetailedError

If you're getting the following error when running terraform plan, you've hit an issue with an old version of azurerm that is no longer supported. The error is really confusing to start with, but it went away after I upgraded my azurerm provider.

```
Error: building account: getting authenticated object ID: listing Service Principals: autorest.DetailedError{Original:(*azure.RequestError)(0xc001ab65a0), PackageType:"graphrbac.ServicePrincipalsClient", Method:"List", StatusCode:403, Message:"Failure responding to request", ServiceError:[]uint8(nil), Response:(*http.Response)(0xc001ab6510)}
```

So you need to update the provider version constraint:

```hcl
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0" # <-- newer version available
    }
  }
  required_version = ">= 1.0.0"
}

provider "azurerm" {
  features {}
}
```
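After changing the version constraint, the lock file still pins the old provider, so Terraform needs to be told to pick up the newer one. A minimal sketch using standard Terraform CLI commands:

```shell
# Re-initialize and allow provider versions to move within the new constraint.
terraform init -upgrade

# Confirm which provider versions the configuration now resolves to.
terraform providers
```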
flux - authenticate to another git repository with ssh keys

To authenticate to another git repository that uses a different key, we can run the following command to generate a secret:

```shell
flux create secret git jeremywoo119-auth \
  --url=ssh://git@github.com/jeremywoo119/my-demo-app \
  --namespace=flux-system
```

Next, we need to add the generated SSH public key to GitHub by going to Settings -> SSH Keys -> New SSH Key. Then reference the secret from the GitRepository:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: demo-app
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@github.com/jeremywoo119/my-demo-app
  secretRef:
    name: jeremywoo119-auth # Matches the secret name above
  ref:
    branch: main
```

If everything goes well, the source will show as ready. To force reconciliation for whatever reason, please run the following command:

```shell
flux reconcile source git demo-app -n flux-system
```
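To check that the secret was created and the source is actually syncing, a quick sketch using standard Flux and kubectl commands (the names match the example above):

```shell
# Confirm the generated SSH secret exists in the flux-system namespace.
kubectl -n flux-system get secret jeremywoo119-auth

# Check that the GitRepository source reports Ready.
flux get sources git demo-app -n flux-system
```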

flux - how to deploy app that leverages a helm chart repository (standard service)

Let's say you would like to deploy your app with FluxCD and it needs to use a Helm chart. You can do that by creating a GitRepository and a HelmRelease. The GitRepository is for your Helm chart (you only need to create it once); you then create multiple HelmReleases, as more than one application can use this standard chart. This is what my chart repo looks like; if you have a sub-directory, please specify that in your HelmRelease setup. And these are my YAMLs:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: standardservice
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/kepungnzai/helloworld-chart
  ref:
    branch: main
```

And the actual app deployment using a HelmRelease:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: my-configmap
  namespace: test
spec:
  interval: 1m
  releaseName: my-configmap
  chart: ...
```
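The excerpt truncates the `chart:` section. In a HelmRelease that pulls a chart from a GitRepository, that section typically points back at the source. A minimal sketch of what it could look like, assuming the chart sits at the repository root (the `./` path is an assumption, not from the original post):

```yaml
spec:
  interval: 1m
  releaseName: my-configmap
  chart:
    spec:
      chart: ./          # path to the chart inside the git repository (assumed)
      sourceRef:
        kind: GitRepository
        name: standardservice
        namespace: flux-system
```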

flux force a reconciliation

Instead of waiting for the HelmRelease to be reconciled on its next interval, we can force a Helm reconciliation with:

```shell
flux reconcile helmrelease [name] --force
```
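If the chart or values have changed in git, it can help to refresh the source before forcing the release. A sketch with placeholder names in square brackets, matching the style above:

```shell
# Pull the latest revision of the chart's source first.
flux reconcile source git [source-name] -n flux-system

# Then force the Helm release to upgrade.
flux reconcile helmrelease [name] -n [namespace] --force
```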

fluxcd example of deploying the keda chart as a helm release with values files from another repository

This is a typical setup for anyone trying to use Helm to deploy KEDA. It deploys KEDA using the chart. The twist is getting our values files from another repository. Given that a HelmRelease in Flux won't be able to read values files directly, we use a Kustomization. You can get access to the files here: https://github.com/kepungnzai/keda-flux-chart-demo/blob/main/README.md. The values files repository is https://github.com/kepungnzai/keda-aks-deploy. You need to run the following in sequence (the order is important):

```shell
kubectl apply -f keda-values-gitrepo.yaml
kubectl apply -f keda-values-kustomization.yaml
kubectl apply -f keda-helmrepo.yaml
kubectl apply -f keda-helmrelease.yaml
```

keda-values-gitrepo.yaml:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: keda-values
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/kepungnzai/keda-aks-deploy
  ref:
    branch: main
```

keda...
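Since the four objects have to come up in order, it is worth watching each piece reach Ready before applying the next. A sketch using standard Flux CLI status commands:

```shell
# The values GitRepository should report Ready first.
flux get sources git -n flux-system

# Then the Kustomization that materializes the values files.
flux get kustomizations -n flux-system

# Finally the HelmRelease itself.
flux get helmreleases -A
```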

fluxcd installation and bootstrapping

To install Flux's key components for testing and dev purposes, you can run:

```shell
flux install
```

For bootstrapping, please ensure you use your own GitHub user ID in place of the owner, and that the repository has been created as well. Bootstrapping is basically lining up Flux to sync changes from your git repository. In my setup here, my repository is called my-flux-bootstrap; you can name it differently.

```shell
flux bootstrap github \
  --token-auth \
  --owner=kepungnzai \
  --repository=my-flux-bootstrap \
  --branch=main \
  --path=clusters/my-cluster \
  --personal
```

Please ensure you have configured "read and write" permission accordingly. Given that you have the right permission, you should be able to integrate it with your GitHub.
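Once the bootstrap finishes, a quick way to confirm everything is wired up (a sketch using standard Flux CLI commands):

```shell
# Verify cluster prerequisites and that all controllers are healthy.
flux check

# Confirm the bootstrap Kustomization is syncing from the repository.
flux get kustomizations -n flux-system
```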

homebrew install on linux

To install Homebrew on Linux, we can do this:

```shell
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

And then add it to your shell environment:

```shell
echo >> /home/nzai/.bashrc
echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"' >> /home/nzai/.bashrc
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv bash)"
```

Once you have included that in your .bashrc, you should be able to use it:

```shell
brew install fluxcd/tap/flux
```
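To confirm the installation took effect in your current shell, a quick sketch:

```shell
# Should print the brew binary's path under /home/linuxbrew.
command -v brew

# Should print the installed Homebrew version.
brew --version
```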

docker desktop not starting up again

First we need to run the following command:

```shell
wsl --shutdown
```

Then go to Docker -> Option -> Kubernetes -> Reset cluster. Start your WSL distribution again (for example, Ubuntu 24.04). And then go back to Docker -> Option -> Kubernetes and ensure that the cluster is up and running.
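Before and after the shutdown, it can help to confirm the WSL and Docker state. A sketch using standard WSL and Docker CLI commands:

```shell
# List WSL distributions and check they show as Stopped after the shutdown.
wsl --list --verbose

# Once Docker Desktop is back up, the engine should respond.
docker info
```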

kubernetes 1.32 migration to kubernetes 1.33 with endpoints migration to endpointslices

In this setup, I would like to focus on the Kubernetes 1.32 to 1.33 migration, to see whether Endpoints that have been created automatically get migrated to EndpointSlices. As you can see below, I have my cluster provisioned on 1.32 with some Endpoints automatically created, and then I attempt to upgrade it to 1.33. I have installed Istio and then added an ingress gateway. After the upgrade, I proceed to delete an Endpoints object and run kubectl port-forward against the service: it still works, but I also noticed that the Endpoints object gets re-created again.
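To compare the two APIs before and after the upgrade, a quick sketch using standard kubectl commands:

```shell
# The automatically created Endpoints objects, one per Service.
kubectl get endpoints -A

# The corresponding EndpointSlices, the replacement API.
kubectl get endpointslices -A
```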