
Showing posts from May, 2023

using variables and secrets in a github calling workflow

Accessing variables

If you defined your variables via GitHub -> Settings -> Secrets and variables -> Actions -> Variables, then you can reference them using ${{ vars.my_variable }}. Pretty straightforward.

Accessing secrets

Accessing secrets is a bit tricky for a calling workflow. You need to declare the secrets to be used, as shown below.

1. Pass the secrets when calling the reusable workflow:

  build-image:
    uses: mitzen/dotnet-gaction/.github/workflows/buildimage.yaml@main
    needs: build-project
    with:
      artifact: build-artifact
      repository: kepung
      tags: "dotnetaction"
    secrets:
      docker_username: ${{ secrets.DOCKERUSER }}
      docker_token: ${{ secrets.DOCKERUSER }}

2. Next, declare the secrets as parameters in the called (reusable) workflow and use them there:

on:
  workflow_call:
    secrets:
      docker_username:
        required: true
      docker_token:
        required: true
    inputs:
      artifact:
        required: true
        type: string
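For completeness, a rough sketch of how the reusable workflow might consume these secrets in a docker login step (the step itself is an assumption, not part of the original snippet):

jobs:
  buildimage:
    runs-on: ubuntu-latest
    steps:
      # log in to the registry using the secrets declared under workflow_call
      - name: Docker login
        run: echo "${{ secrets.docker_token }}" | docker login -u "${{ secrets.docker_username }}" --password-stdin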

AKS - a quick way to prevent pods being scheduled on the system pool via CriticalAddonsOnly=true:NoSchedule

The taint CriticalAddonsOnly=true:NoSchedule is used to prevent application pods from being scheduled on a node. This is useful for system node pools, which host pods that are essential to the operation of the Kubernetes cluster. By keeping application pods off the system node pool, you help ensure the cluster remains stable and available. You can find out more here: https://learn.microsoft.com/en-us/azure/aks/use-system-pools?tabs=azure-cli
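A minimal sketch of applying the taint when adding a system node pool - the cluster, resource group and pool names here are placeholders:

az aks nodepool add \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name systempool \
  --mode System \
  --node-taints CriticalAddonsOnly=true:NoSchedule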

setting up argocd - quickstart

Install ArgoCD

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Next, expose the UI by patching the service:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

Open your browser and go to http://localhost

Alternatively, you can do the same using port forwarding:

kubectl port-forward svc/argocd-server -n argocd 8080:443

IMPORTANT: getting your default admin password. The username defaults to "admin".

argocd admin initial-password -n argocd

Copy the password; we are going to use it to log in via the command line. For example:

argocd login localhost --username admin --password PNUgOGiZsH4snOZj

Once you have done this, you will be able to use argocd to register an application - guestbook.

Deploy your application

kubectl config get-contexts -o name
argocd cluster add docker-desktop

Setting the default context:

kubectl config set-context --curr
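As a rough sketch of registering the guestbook example (following the commands from the official getting-started guide, with the example apps repository assumed as the source):

argocd app create guestbook \
  --repo https://github.com/argoproj/argocd-example-apps.git \
  --path guestbook \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default
argocd app sync guestbook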

prometheus install with helm chart

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

Quite a lot of charts are available:

helm search repo prometheus-community

To install it:

helm install prometheus prometheus-community/prometheus

Once you have installed it, you should be able to see the pods with:

kubectl get pods -A

Look for the alertmanager pods. To access the UI, simply run the following command:

kubectl port-forward prometheus-server-568d75474d-gck62 5000:9090

Open your browser at http://localhost:5000. This gives you access to the prometheus server UI, where you can look at service discovery. In your browser go to Graph -> Expression -> enter a metric, for example kube_pod_info, then click "Execute"; you will see the output in the table view.

Unsure if this works - trying to include the pods into prometheus:

kubectl label pod httpbin-9dbd644c7-6sx7f -n default prometheus.io/scrape=true
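For what it's worth, the default scrape configuration in the community prometheus chart typically discovers pods via annotations rather than labels. A minimal sketch of annotating a deployment's pod template (the port value here is just an example):

  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "80"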

k8s pdb eviction policy - AlwaysAllow - for 1.27

An example where we set a PDB to evict pods although they are not necessarily in a ready state - this is useful for evicting pods that are buggy:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  selector:
    matchLabels:
      app: nginx
  maxUnavailable: 1
  unhealthyPodEvictionPolicy: AlwaysAllow

flyte dask task erroring out with the following message

If you face the following error message, you are probably missing some core kubernetes setup, such as installing an operator. In this case, I was trying to deploy and run a Dask task and was missing the dask operator. The error went away after I installed the operator.

"currentAttempt done. Last Error: USER::Pod failed. No message received from kubernetes."

No module named dask when running flyte dask task

Getting the following error trying to run a flyte + dask task. Apparently my custom-built image was missing dask.

[1/1] currentAttempt done. Last Error: USER::Pod failed. No message received from kubernetes. [f7ff659ca25e040d89db-n0-0] terminated with exit code (1). Reason [Error]. Message:
otstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/root/workflows/hellodask.py", line 7, in <module>
from dask import array as da
ModuleNotFoundError: No module named 'dask'

All you need to do is ensure your requirements.txt is updated:

flytekit >= 1.2.7
pandas
scikit-learn
dask
flytekitplugins-dask

Rebuild your image and re-run your workflow.

flyte one-stop reference to set up plugins like dask, spark, ray, etc.

You can refer to this section to set up all the required operators, instead of going to each project's homepage to do it: https://docs.flyte.org/en/latest/deployment/plugins/k8s/index.html#deployment-plugin-setup-k8s

installing dask operator

These are the steps for installing the dask operator:

helm repo add dask https://helm.dask.org
helm repo update
helm install --create-namespace -n dask-operator --generate-name dask/dask-kubernetes-operator

Finally, to see the dask operator running:

kubectl get pods -A -l app.kubernetes.io/name=dask-kubernetes-operator

where can you find kubernetes REST API reference?

The main reference link is available here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/

Some common specs can be reached directly. If you're looking for the job spec used by a cronjob, you can quickly refer to it here: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#jobspec-v1-batch

As you may have noticed when going through the deployment spec, it also has an attribute of type pod template spec - the template used to create the pods: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#podtemplate-v1-core
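If you just want a quick field lookup without leaving the terminal, kubectl explain prints the same reference documentation straight from the cluster, for example:

kubectl explain deployment.spec.template
kubectl explain cronjob.spec.jobTemplate --recursive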

vscode loses intellisense for golang or probably most programming languages

Recently I uninstalled the golang SDK from my laptop, and vscode lost all intellisense when I tried to use it on a cloned repo. So I reinstalled the SDK and installed all the good stuff like go-def and other tools - dlv, staticcheck, gopls (automatically prompted by vscode) - and I regained intellisense features like Go to Definition.
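If vscode does not prompt you, the tools can also be installed manually - a rough sketch using the usual module paths:

go install golang.org/x/tools/gopls@latest
go install github.com/go-delve/delve/cmd/dlv@latest
go install honnef.co/go/tools/cmd/staticcheck@latest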

when upgrading an AKS node pool fails

If your upgrade fails, just run the following command:

az aks nodepool upgrade --cluster-name myclustername --resource-group resourcegroup --name mynodepoolname

Wait for it to complete, and you will have your cluster upgraded. Sometimes I had to run it 5-6 times to get an upgrade completed - at times it took up to 12 hours.
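To check where the node pool is at before retrying, something like this should work (names are placeholders):

az aks nodepool show --cluster-name myclustername --resource-group resourcegroup --name mynodepoolname --query provisioningState -o tsv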

Backstage - creating a template and replacing values in your template

1. First, you need to create an input form.

apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
# some metadata about the template itself
metadata:
  name: v1beta3-demo
  title: Test
  description: TestandTest
spec:
  owner: testowner
  type: service
  # these are the steps which are rendered in the frontend with the form input
  parameters:
    - title: Fill in some steps
      required:
        - name
      properties:
        name:
          title: short application name
          type: string
          description: Please provide a name for your application
          ui:autofocus: true
          maxLength: 10
          ui:options:
            rows: 2
        appname:
          title: Application Name
          type: string
          description: Please provide a name for your application
          ui:autofocus: true
          maxLength: 40
          ui:options:
            rows: 2
        k8sname
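A rough sketch of the part the title refers to - the steps section, where the collected values typically get substituted into your skeleton files (this assumes the built-in fetch:template action and a ./skeleton directory, neither of which is shown in the snippet above):

spec:
  # ...parameters as above...
  steps:
    - id: fetch-base
      name: Fetch skeleton
      action: fetch:template
      input:
        url: ./skeleton
        values:
          # the form inputs defined above are available as parameters
          name: ${{ parameters.name }}
          appname: ${{ parameters.appname }}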

k8s secret with docker

In order to create this secret, you need to log in to docker to obtain the config.json file, and remember to set the type to "kubernetes.io/dockerconfigjson":

kubectl create secret generic regcred \
  --from-file=.dockerconfigjson=<path/to/.docker/config.json> \
  --type=kubernetes.io/dockerconfigjson

Or alternatively, use yaml:

apiVersion: v1
kind: Secret
metadata:
  name: regcred
data:
  .dockerconfigjson: UmVhbGx5IHJlYWxseSByZWVlZWVlZWVlZWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWFhYWxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGxsbGx5eXl5eXl5eXl5eXl5eXl5eXl5eSBsbGxsbGxsbGxsbGxsbG9vb29vb29vb29vb29vb29vb29vb29vb29vb25ubm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg==
type: kubernetes.io/dockerconfigjson

How do you use it?
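A minimal sketch of using it - reference the secret under imagePullSecrets in the pod spec so the kubelet can pull from the private registry (the image name is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod
spec:
  containers:
    - name: app
      image: kepung/dotnetaction:latest   # placeholder private image
  imagePullSecrets:
    - name: regcred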

Failed with Unknown Exception Reason: PYTHON_3_11

Received this error while playing around with a python app. To resolve it, I used conda to create a new python 3.11 environment. After that, the error was fixed.

conda installing python 3.11

Creating an environment with conda couldn't be easier:

conda create -n my_conda_env_with_py311 python=3.11
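To start using it, activate the environment and check the interpreter version:

conda activate my_conda_env_with_py311
python --version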

k8s - opaque secret

As you already know, k8s has many secret types - opaque, tls (certificate), service account, ssh, docker registry, bootstrap token.

Opaque secret

Creating a secret with a single key (username in this case):

kubectl create secret generic user-info --from-literal=user_secret=mysecret

Creating a secret with multiple keys:

kubectl create secret generic user-info --from-literal=user_secret=mysecret --from-literal=user_password=password

From file

We can create a secret from files using the following command:

kubectl create secret generic user-info-file --from-file=./username.txt --from-file=password.txt

The filename will be used as the key.

Using secrets as environment variables

The yaml below shows you how to reference/use secrets in your pod.
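A minimal sketch, assuming the user-info secret created above - each key becomes an environment variable via secretKeyRef (the pod and container names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  containers:
    - name: app
      image: nginx
      env:
        - name: USER_SECRET
          valueFrom:
            secretKeyRef:
              name: user-info
              key: user_secret
        - name: USER_PASSWORD
          valueFrom:
            secretKeyRef:
              name: user-info
              key: user_password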

deleting pods from cronjob

Sometimes after your k8s cronjob runs, you will be left with pods in a Completed (terminated) state. In order to remove these pods, you can configure the job history to be cleaned up via yaml:

spec:
  failedJobsHistoryLimit: 0
  successfulJobsHistoryLimit: 0

You can also delete them by running the following command:

kubectl delete pod --field-selector=status.phase==Succeeded
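For context, a minimal sketch of where those fields sit in a CronJob manifest (the name, schedule and image are placeholders):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-demo
spec:
  schedule: "*/5 * * * *"          # placeholder schedule
  failedJobsHistoryLimit: 0
  successfulJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: job
              image: busybox
              command: ["sh", "-c", "echo done"]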