Posts

Showing posts from 2023

Creating a hello world lambda app using the aws template that supports dotnet 8 didn't work

I was having quite a bit of trouble deploying an AWS Lambda created with dotnet 8 - I just wanted to see whether that was going to be supported. The creation template worked just fine:

dotnet new lambda.EmptyFunction --name myDotnetFunction

The deployment, however, was really messed up:

dotnet lambda deploy-function myDotnetFunction

It kept saying that I was unable to deploy, even though my aws cli works - I was able to list my s3 buckets.

Error retrieving configuration for function
The security token included in the request is invalid.

I then switched back to the dotnet 6 template and it started working again.

Example using aws lambda to create a dotnet core app

To create an AWS lambda from a dotnet core app:

aws lambda create-function --function-name my-function --zip-file fileb://webapi.zip --handler index.handler --runtime dotnet6 --role arn:aws:iam::your-subscription:role/mylambda

c# aws lambda surprisingly good info link :)

I thought building an app on AWS Lambda was going to be tough. https://docs.aws.amazon.com/lambda/latest/dg/csharp-package-cli.html

installing dotnet core 8 for visual studio

To get the dotnet 8 SDK installer that you can just run as an executable, try this: https://dotnet.microsoft.com/en-us/download/visual-studio-sdks. This updates the CLI so you can run dotnet version commands etc. You still need to update VS2022 to be able to support dotnet 8.
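A quick way to confirm the CLI picked up the new SDK after installing (standard dotnet CLI commands, shown here just as a sanity check):

dotnet --list-sdks
dotnet --version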

!! parameter null checking is removed from C# 11

As discussed here: https://github.com/dotnet/csharplang/blob/main/meetings/2022/LDM-2022-04-13.md Instead you get this - :) https://learn.microsoft.com/en-us/visualstudio/ide/reference/add-null-checks-for-parameters?view=vs-2022
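Roughly what that means in code - the dropped !! syntax versus the throw helper you can use instead (method and parameter names here are just for illustration):

// The proposed (and later rejected) C# 11 syntax would have been:
// public void Greet(string name!!) { ... }

// What you can write instead on .NET 6+ (the VS refactoring produces an equivalent null check):
public void Greet(string name)
{
    ArgumentNullException.ThrowIfNull(name); // throws ArgumentNullException if name is null
    Console.WriteLine($"Hello, {name}");
}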

helm rendering k8s resources like annotations or labels

I think the best way to handle annotations is to have the values files shaped similar to the k8s API spec, for example when working with annotation data. Of course labels and annotations have different purposes. What I am saying is the chart has to stay close to the k8s API spec to avoid confusion and duplication of charts - someone might think a service annotation does not exist, but when it does, it breaks your k8s deployment - not at helm chart rendering but right at the very end. Sometimes the deployment does happen but the yaml looks corrupted, and obviously your deployment will not work. For example, if we needed an annotation for a service, it is best to keep it like this in the template, and then the values file is given below:

{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "simple.serviceAccountName" . }}
  labels:
    {{- include "simple.labels" . | nindent 4 }}
  {{- if .Values.serviceAccount.annotations }}
  {{- wi
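For illustration, a values.yaml shaped like the k8s API spec for the template above might look like this (the annotation key is a made-up example, not from the original post):

serviceAccount:
  create: true
  annotations:
    example.com/owner: "platform-team"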

Keycloak CIBA setup

What is CIBA? It is Client Initiated Backchannel Authentication, and it allows your keycloak client to initiate the login against a 3rd party. A use case for this could be a mobile app that wants to sign in to an application. In the traditional code flow model, the user is required to log in and enter their password. With CIBA, we use your client to log in automatically.

Start up your keycloak docker instance by replacing the <your-host-ip> below. If you're using windows, you can run ipconfig and you should get something like 192.168.1.70. Then replace it so keycloak can discover your node app.

docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:22.0.5 start-dev --spi-ciba-auth-channel-ciba-http-auth-channel-http-authentication-channel-uri=http://<your-host-ip>:3000/request --log-level=DEBUG

Then clone the following program from https://github.com/mitzen/keycloak-ciba-sample. Run:

npm install
npm run start

This will start up the sample app (the authentication channel endpoint on port 3000 that keycloak was pointed at above).
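Once that is running, kicking off a CIBA flow against keycloak looks roughly like this - realm, client and user values are placeholders, and the endpoint paths are from my reading of the Keycloak/OIDC CIBA docs, so verify them against your realm's well-known configuration:

# 1. Client initiates the backchannel authentication request
curl -X POST "http://localhost:8080/realms/myrealm/protocol/openid-connect/ext/ciba/auth" \
  -d "client_id=my-ciba-client" \
  -d "client_secret=my-secret" \
  -d "login_hint=myuser" \
  -d "scope=openid" \
  -d "binding_message=login-1234"

# 2. Poll the token endpoint with the auth_req_id returned above
curl -X POST "http://localhost:8080/realms/myrealm/protocol/openid-connect/token" \
  -d "client_id=my-ciba-client" \
  -d "client_secret=my-secret" \
  -d "grant_type=urn:openid:params:grant-type:ciba" \
  -d "auth_req_id=<auth_req_id-from-step-1>"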

coredns - proxy no longer valid - it is being replaced by forward

In the documentation, it says we can use coredns to do dns forwarding with this:

. {
  proxy . 8.8.8.8:53 {
    protocol https_google
  }
  prometheus
  errors
  log
}

This no longer works, as proxy has been replaced with forward, as shown below:

. {
  forward . 8.8.8.8 9.9.9.9
  log
}

Coredns - setup on windows

Download coredns from https://github.com/coredns/coredns/releases/. Extract it and then run it with a Corefile:

example.org {
  file example.org
  prometheus  # enable metrics
  errors      # show errors
  log         # enable query logs
}

.\coredns -conf Corefile

This will run it on port 53. Then use the nslookup command to test it:

nslookup example.org localhost

CMU - PhD theses for machine learning

  https://www.ml.cmu.edu/research/phd-dissertation-pdfs/

core dns enabling logging

Enable coredns logging with this yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  log.override: | # you may select any name here, but it must end with the .override file extension
    log

kubectl apply -f corednsms.yaml
kubectl -n kube-system rollout restart deployment coredns
kubectl logs --namespace kube-system -l k8s-app=kube-dns

Always remember to set ASPNETCORE_URLS http://+:5000

Always remember to set and configure this in your environment variables. I left this out one time in my k8s deployment and spent 4 hours troubleshooting it. :(
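For reference, in a k8s deployment this ends up as an env entry on the container spec - a minimal sketch (the container name and image are placeholders):

    spec:
      containers:
      - name: myapp                      # placeholder
        image: myregistry/myapp:latest   # placeholder
        env:
        - name: ASPNETCORE_URLS
          value: "http://+:5000"
        ports:
        - containerPort: 5000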

installing istio operator version 1.15.7 causes operator to reconcile forever

1. The istioctl version that you're using dictates which version you will be deploying. For example, if you're using istioctl 1.16 then you will be deploying istio-operator 1.16.

Then run:

istioctl operator init

It will create the istio-operator and istio-system namespaces. Then use the following yaml to create the related gateways:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: demo

Then try to add additional serviceAnnotations to the load balancer:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: demo
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
          service.beta.kubernetes.io/azure-load-balancer-

istioctl dashboard controlz

 By using the following command, you will have access to istio controlz dashboard. istioctl dashboard controlz istiod-5865f5bdc5-lgt5r -n istio-system

istioctl manifest generate - generate the installation manifests

k8s service yaml - back to basics

Given the following deployment yaml, the service selector has to match ALL labels. If you have 2 labels to match, they must all match before the service can successfully link up to the pod. Please have a look at the service selector:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
    version: v1
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    required: pr2   ## selector here
    app: httpbin1   ## selector here
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin1
      version: v1
      required: pr
  template:
    metadata:
      labels:
        app: httpbin1
        version: v1
        required: pr
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kennethreitz/httpbin
        imageP

using kubectl auth can-i to test for group and users permission

You can use the following command:

kubectl auth can-i list sa --group=my-group-id-name --as=ef6982c9-ed49-4259-962b-488cffbca659 -A

or, to test just a single user:

kubectl auth can-i list sa --as=ef6982c9-ed49-4259-962b-488cffbca659 -A

az aks get-credentials - unable to login

If you have this issue, go to your c:\users\your-user\.azure folder and remove the cache folder. Then try to re-run az aks get-credentials again.

Using C# integrate with AKS API Server endpoint via AzureCredential

You can use the following code to call your AKS API Server endpoint. At the end of the day, it is just an API plus Kubernetes RBAC. The HTTPS API code is given below.

As for setting up the Kubernetes RBAC, we need to create the group and user as you would do normally. Then you need to set up the RBAC. Remember, if you need to call the API Server, there's a webhook that validates and checks whether you are permitted to call it. This has to be configured using Kubernetes RBAC - role/rolebinding or clusterrole/clusterrolebinding.

As for the credential - it can be a managed identity or your az login credential. For this test I am using the az cli credential, but I will be deploying this with a workload identity.

// ignore certificate issues
var handler = new HttpClientHandler()
{
    ServerCertificateCustomValidationCallback = HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};

// Get the proper token for your login credentials
var creds = new DefaultAzureCredential();
var token = await creds.GetTo
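A minimal end-to-end sketch of the same idea - the token scope (the well-known AKS AAD server application id) and the API server address are assumptions/placeholders, so treat this as an outline rather than the original code:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Azure.Core;
using Azure.Identity;

// Scope for the AKS AAD server application - assumption based on common usage with kubelogin
var scope = "6dae42f8-4368-4678-94ff-3960e28e3630/.default";

var handler = new HttpClientHandler
{
    // dev/test only: skip certificate validation for the API server
    ServerCertificateCustomValidationCallback =
        HttpClientHandler.DangerousAcceptAnyServerCertificateValidator
};

var creds = new DefaultAzureCredential();
AccessToken token = await creds.GetTokenAsync(new TokenRequestContext(new[] { scope }));

using var client = new HttpClient(handler);
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

// <your-apiserver-fqdn> is a placeholder for the AKS API server address
var response = await client.GetAsync("https://<your-apiserver-fqdn>/api/v1/namespaces/default/pods");
Console.WriteLine(await response.Content.ReadAsStringAsync());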

using kubectl to test a user permission

You can use the following to test whether a user has permission to do something. In this example, I am testing cluster role permissions for a group and user:

kubectl auth can-i list pod --as-group=b1fe0e96-4c9e-4075-b8d7-7fcf713e536d --as=5e127f6f-220d-414e-af95-d0b5c3fbf3ca -A

The output is just a yes or no. Or test just as a user:

kubectl auth can-i list pod --as=5e127f6f-220d-414e-af95-d0b5c3fbf3ca -A

aks cluster rbac is confusing and not working

Despite adding Kubernetes Service RBAC Reader and Kubernetes Service RBAC Cluster Admin, my users don't have access to get or list pods or replicas as indicated here: https://learn.microsoft.com/en-us/azure/aks/concepts-identity. I had to create a role and rolebinding to make the whole thing work.
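A minimal sketch of the kind of Role/RoleBinding that made it work (the namespace and the AAD group object id are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
- kind: Group
  name: "<aad-group-object-id>"   # placeholder
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io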

a story of kubelogin mess up my aks setup

Following the AKS connect instructions, I ran the following command:

kubelogin convert-kubeconfig -l azurecli

Then, when I try to run something like:

kubectl get deployments --all-namespaces=true

I get the following error:

Error: failed to get token: expected an empty error but received: AzureCLICredential: exit status 1

To resolve this, I had to re-run the entire setup (clearing my kubeconfig along the way):

az aks get-credentials --resource-group myresourcegroup --name mycluster

and skip the kubelogin step shown above (just continue to kubectl get pods or something).

az cli getting WARNING: Please select the account you want to log in with.

First I get this warning:

WARNING: Please select the account you want to log in with.

Then when I tried to use

az account set --subscription my-expensive-subscription

it tells me the subscription does not exist. I had to clear the files in the .azure folder - in this case, I removed all the files under c:\users\myuser\.azure

AKS not using local account - Azure AD authentication with Kubernetes RBAC

When you disable local accounts and use Azure AD authentication with Kubernetes RBAC, this means we can't do az aks get-credentials --resource-group rbacakscluster --name myrbacdev --admin.

AzureCredential credential sequence and flow

The following are used by DefaultAzureCredential, in order:

- Environment - refer to this link for the env variables used: https://learn.microsoft.com/en-us/dotnet/api/overview/azure/identity-readme?view=azure-dotnet#environment-variables
- Workload Identity
- Managed Identity
- Visual Studio
- Visual Studio Code
- Azure CLI
- Azure PowerShell
- Azure Developer CLI
- Interactive browser

As shown in the graphic above, the API server calls the AKS webhook server and performs the following steps:

1. kubectl uses the Microsoft Entra client application to sign in users with the OAuth 2.0 device authorization grant flow.
2. Microsoft Entra ID provides an access_token, id_token, and a refresh_token.
3. The user makes a request to kubectl with an access_token from kubeconfig.
4. kubectl sends the access_token to the API Server.
5. The API Server is configured with the Auth WebHook Server to perform validation.
6. The authentication webhook server confirms the JSON Web Token signature is valid by checking the Microsoft Entra public signing key.

Getting kubernetes certificates data - quick start

# Extract the Cluster Certificate Authority
$ kubectl config view --minify --raw --output 'jsonpath={..cluster.certificate-authority-data}' | base64 -d | openssl x509 -text -out -
...
# Extract the Client Certificate
$ kubectl config view --minify --raw --output 'jsonpath={..user.client-certificate-data}' | base64 -d | openssl x509 -text -out -
...
# Extract the Client Private Key
$ kubectl config view --minify --raw --output 'jsonpath={..user.client-key-data}' | base64 -d
...

kubernetes ca.crt

The kubernetes ca.crt data looks something like this (missing the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- markers), so you might not be able to use openssl to view it unless you add those in, as shown below:

MIIC/jCCAeagAwIBAgIBADANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwprdWJl
cm5ldGVzMB4XDTIzMTEwOTE1MzYwMloXDTMzMTEwNjE1MzYwMlowFTETMBEGA1UE
AxMKa3ViZXJuZXRlczCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL6k
oE+D6EywDSjyJYbp6kSqpTAg59IN2NXZVKzF7fp3ePJcF+jsHXZlqi++aFA3DA3y
jPv8KVrT8h0kJLt6Tcyzt6oU/MxhEs4VIb1hdaqoaubuydFa++npM6Ca4uPAHhuW
xFQdOmNOuIrgk5zmZWwoYIlxPNdXt0DYZXs2ohdSG949OOYUGrKhkXGNTjsjlZIk
N69FCnNr5KjLFBEq/w48DY9tcwnIsUqNt4HICGtl0ogb+n/OPCcU3uxaqws/HYns
VV9NYY+msoczuYhtwGVjpHd7LZbzvR6mw9/tK38m1PT1GEo30/0cj0+vKIrqZDuM
YqnOK2Eok+loudCnkoUCAwEAAaNZMFcwDgYDVR0PAQH/BAQDAgKkMA8GA1UdEwEB
/wQFMAMBAf8wHQYDVR0OBBYEFNJ8x9zzVLYmRfhVC/MJPWBqzjboMBUGA1UdEQQO
MAyCCmt1YmVybmV0ZXMwDQYJKoZIhvcNAQELBQADggEBAAXtnuLkTdHv/UZgKP0H
l5X4txvmpbUbEXcCjWdVTNC8kHxbbky2REaCm2z5fX86CG/LFF9UrHMSAcWa5hB+
4z89p4W
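If you only have the raw base64 body (saved here as ca.b64 - a placeholder file name), one way to wrap it and inspect it:

{ echo '-----BEGIN CERTIFICATE-----'; fold -w 64 ca.b64; echo '-----END CERTIFICATE-----'; } > ca.crt
openssl x509 -in ca.crt -text -noout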

kubernetes - accessing API server via pod

The following are instructions on how to do it:

# Point to the internal API server hostname
APISERVER=https://kubernetes.default.svc
# Path to ServiceAccount token
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
# Read this Pod's namespace
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
# Read the ServiceAccount bearer token
TOKEN=$(cat ${SERVICEACCOUNT}/token)
# Reference the internal certificate authority (CA)
CACERT=${SERVICEACCOUNT}/ca.crt
# Explore the API with TOKEN
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api

References: https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/

kubernetes API using curl + http client - really awesome coverage by the author

https://iximiuz.com/en/posts/kubernetes-api-call-simple-http-client/  

using azure sdk to get managed identities from Microsoft Entra

The SDK can be downloaded here (under Azure Management): https://azure.github.io/azure-sdk/releases/latest/mgmt/dotnet.html - look for Resource Management - Managed Service Identity. Install this library and get started using the code snippet from the website, which looks something like this:

using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Compute;
using Azure.ResourceManager.Resources;

ArmClient client = new ArmClient(new DefaultAzureCredential());

You also need the required permission/role assignment set up in Microsoft Entra. The next best place to get started is to look at the sample code. The sdk itself is pretty straightforward. https://github.com/Azure/azure-sdk-for-net/tree/Azure.ResourceManager.ManagedServiceIdentities_1.1.0/sdk/managedserviceidentity/Azure.ResourceManager.ManagedServiceIdentities/
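As a rough sketch of where that leads - listing the user-assigned identities in a resource group. The resource group name is a placeholder and the method names are from my reading of the Azure.ResourceManager.ManagedServiceIdentities package, so double-check them against the current docs:

using System;
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.ManagedServiceIdentities;
using Azure.ResourceManager.Resources;

var client = new ArmClient(new DefaultAzureCredential());
SubscriptionResource subscription = await client.GetDefaultSubscriptionAsync();

// "my-resource-group" is a placeholder
var resourceGroup = (await subscription.GetResourceGroupAsync("my-resource-group")).Value;

// Enumerate the user-assigned managed identities in that resource group
await foreach (UserAssignedIdentityResource identity in resourceGroup.GetUserAssignedIdentities().GetAllAsync())
{
    Console.WriteLine($"{identity.Data.Name} clientId={identity.Data.ClientId} principalId={identity.Data.PrincipalId}");
}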

mitmproxy - getting certificate issues and unable to browse https enabled website

After you follow the instructions to install mitmproxy, you also need to install the certificate by browsing to this link: http://mitm.it/. Install the certificate for your platform - Windows/Linux etc. Then restart your mitmproxy and you should be good to go. Refer to this link: https://docs.mitmproxy.org/stable/concepts-certificates/#quick-setup

rust - &'static str

This is one of the WTF moments in rust. How am I going to remember to write a return type that looks like &'static str just to ensure my returned string slice lives forever and won't be deallocated? Yep, rust doesn't use a garbage collector; instead it uses ownership to manage memory allocations.

async fn root() -> &'static str {
    "Hello, World!"
}

writing rust module

Let's say you have a file called "mymodule.rs" in the folder and you want to import it in main.rs.

Approach #1

mymodule.rs:

pub mod hello {
    pub fn hello() -> String {
        return String::from("hello world")
    }

    pub fn hello2() {
        println!("hello world")
    }
}

main.rs - it starts off with mod mymodule (which matches the filename of your module). Notice I added another layer by having the hello2() function under mod hello - this gives you mymodule::hello::hello2():

mod mymodule;

fn main() {
    // initialize tracing
    // tracing_subscriber::fmt::init();
    mymodule::hello::hello2();
}

Approach #2

This could be another solution:

#[path = "mymodule.rs"]
mod thing;

fn main() {
    thing::hello::hello2();
}

There are a couple of other solutions as well: https://stackoverflow.com/questions/26388861/how-can-i-include-a-module-from-another-file-from-the-same-project

keda pause - proper way to do it

The proper way to pause keda from taking in any messages from a queue/topic or event is by using the annotations below:

autoscaling.keda.sh/paused-replicas: "0"   # Optional. Use to pause autoscaling of objects
autoscaling.keda.sh/paused: "true"         # Optional. Use to pause autoscaling of objects explicitly
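For context, these annotations go on the ScaledObject metadata - a minimal sketch (object names are placeholders, and the cron trigger is only there to make the example complete):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject            # placeholder
  annotations:
    autoscaling.keda.sh/paused-replicas: "0"
spec:
  scaleTargetRef:
    name: my-deployment            # placeholder
  triggers:
  - type: cron
    metadata:
      timezone: Etc/UTC
      start: 0 8 * * *
      end: 0 18 * * *
      desiredReplicas: "2"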

kubeconfig location

The default location for the kubeconfig used by kubectl or kubelogin is under this folder: C:\Users\X1100543\.kube

K8s API server endpoints references

You can refer to this link for the API Server endpoints: https://docs.openshift.com/container-platform/3.11/rest_api/index.html How do you get information specific to service accounts? https://docs.openshift.com/container-platform/3.11/rest_api/core/serviceaccount-core-v1.html#serviceaccount-core-v1 Another reference (but a bit confusing): https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.27/#serviceaccountlist-v1-core

rust book https://doc.rust-lang.org/book/ch20-00-final-project-a-web-server.html

rust - quick start with docker windows

To start a docker instance of rust:

docker run -it -v c:\work\rust\app:/app rust /bin/sh

Then using vscode, attach to a running container and choose the rust dev container. Then run the following commands to create a simple project:

cargo new hello_cargo
cd hello_cargo
cargo build
cargo run

To debug, ensure you have installed the following extensions:
- rust-analyzer
- codelldb

Then use vscode -> Open Folder -> and locate your folder in the container. You should be able to go to Run -> Debug to debug your application. Ensure the intellisense is working as well.

Example of my launch.json that was automatically generated:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "type": "lldb",

nextjs - codemods - applying code transformations to your code base

It is a very specific set of transformations that can be applied to your code via cli commands. For example:

npx @next/codemod@latest next-og-import .

transforms imports from next/server to next/og for usage of Dynamic OG Image Generation. This provides a quick way to upgrade/change your codebase without modifying each file. Other codemods can be found at this link: https://nextjs.org/docs/app/building-your-application/upgrading/codemods

nextjs 14 turbo mode - check out the performance differences

Almost 100% perf improvement.

kubernetes connection issues are often due to SNAT problems

k8s autoscaler reaction time to scale up or scale down

You can change the behaviour of the autoscaler scale-up delay by adding this annotation at the pod level:

"cluster-autoscaler.kubernetes.io/pod-scale-up-delay": "600s"

helm adopt - creating new charts with existing k8s resources

Let's say you have deployed k8s resources into a cluster and you want a way to generate a chart from them and deploy it - helm adopt is a good option: https://github.com/HamzaZo/helm-adopt (based on https://github.com/helm/helm/issues/2730). You need to specify all the resources, like deployments and services, in one go, otherwise you get errors. For example, use the following command:

helm adopt resources deployment:mytest-demo services:mytest-demo --output frontend

instead of:

helm adopt resources deployment:mytest-demo --output frontend
helm adopt resources services:mytest-demo --output frontend

helm - unable to change label as it is immutable

I tested this with helm v3.10 - it doesn't seem to happen there, and it only happened to me when I was using helm version 3.7. Perhaps upgrading the helm runtime version might help.

nextjs 13 upgrade to 14

You can do that using the following command - magic:

npm i next@latest react@latest react-dom@latest eslint-config-next@latest

Unfortunately it was not a straightforward upgrade for me as I am using the apollo server integration package. Anyway, I forced it and will be testing that out.

Nextjs 14 : next dev --turbo

Really fast start up after installing nextjs 14 and starting dev with the --turbo option.

class NextRequest extends Request { ^ ReferenceError: Request is not defined

This happens when trying to run Nextjs 14 on node 16 :) After you install a newer node (18+), all the errors go away.

use the following to stress test - create memory and cpu intensive load for your workload

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: progrium
spec:
  replicas: 2
  selector:
    matchLabels:
      apptype: backend
  template:
    metadata:
      labels:
        app: progrium
        version: v1
        apptype: backend
    spec:
      containers:
      - image: progrium/stress
        imagePullPolicy: IfNotPresent
        name: progium
        command: ["stress"]   # command must be a list
        args: ["--cpu", "100", "--io", "10", "--vm", "2", "--vm-bytes", "928M"]
        ports:
        - containerPort: 80

aks how does node auto-scaling works

According to the documentation, a node is considered for scale-down if its memory and cpu requests fall below 50% - so to keep a node, both CPU and memory need to be above 50%. I also noticed that you need to have a k8s deployment; if you do not have a deployment, those nodes just keep on throttling. There can be a wait of between 10-30 minutes before you start seeing auto-scaling. You don't necessarily need to get your pods into the "pending" state for it to scale.

FAQ for how autoscaling works: https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work

You can also view the status via kubectl top - which is an accurate account of what's happening. You can get the status of the node auto-scaler using the following command:

kubectl describe configmap --namespace kube-system cluster-autoscaler-status

Not using a k8s deployment yaml: pods need to be in the pending state and meet the cpu/memory utilisation factor of above 50% to scale. If pods can be scheduled, running and NOT

Simulating loads using kubectl

You can try the following command to create some load/stress on your nodes:

kubectl run progrium4 --image=progrium/stress -- --cpu 80 --io 10 --vm 2 --vm-bytes 928M

referencing a variable from another stage requires dependsOn to be specified

To use a variable output from another stage, you need a direct (1 level) reference to that stage. Let's say you have created a 3 stage pipeline that looks like this:

Stage A -> Stage B -> Stage C

Variable X is output in Stage A. Stage B is able to reference Stage A, but Stage C will not be able to reference variable X unless it explicitly declares a dependsOn for Stage A.
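A minimal sketch of what that looks like in Azure Pipelines yaml (stage, job and variable names are just examples):

pool:
  vmImage: ubuntu-latest

stages:
- stage: A
  jobs:
  - job: JA
    steps:
    - bash: echo "##vso[task.setvariable variable=X;isOutput=true]hello"
      name: setvar
- stage: B
  dependsOn: A
  variables:
    varX: $[ stageDependencies.A.JA.outputs['setvar.X'] ]
  jobs:
  - job: JB
    steps:
    - bash: echo "from A: $(varX)"
- stage: C
  dependsOn:
  - B
  - A   # without this explicit dependency on A, varX would come back empty here
  variables:
    varX: $[ stageDependencies.A.JA.outputs['setvar.X'] ]
  jobs:
  - job: JC
    steps:
    - bash: echo "from A: $(varX)"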

k8s prevent node from being scaled down

The following annotation can be used to prevent a node from being scaled down:

"cluster-autoscaler.kubernetes.io/scale-down-disabled": "true"

helm - using conditional if to reliably check for nil values

If you're within a scope:

{{- if and ($.Values.envs) ($.Values.envs.demo) }}
    - name: ok
      value: oktoo
{{- end }}

If you're not within any scope:

{{- if and (.Values.envs) (.Values.envs.demo) }}
    - name: ok
      value: oktoo
{{- end }}

golang - net ListenIP network type

It can be difficult to get right. Perhaps the docs should be better. If you try to use net.ListenIP, the network is a string parameter: it should be one of the values listed in the source linked below, but it can also be something like "ip4:icmp" - and I guess that's what trips a lot of developers up.

// CONN_TYPE and CONN_HOST are defined elsewhere, e.g. "ip4" and "0.0.0.0"
ipServer, err := net.ResolveIPAddr(CONN_TYPE, CONN_HOST)
if err != nil {
    fmt.Println("unable to resolve ip address", err.Error())
    os.Exit(1)
}

l, err := net.ListenIP("ip4:icmp", ipServer)
if err != nil {
    fmt.Println("Error listening:", err.Error())
    os.Exit(1)
}
defer l.Close()

For a complete list of options available, you can check this link out: https://github.com/golang/go/blob/9341fe073e6f7742c9d61982084874560dac2014/src/net/lookup.go#L22

keycloak 22.0.3 k8s CRDS

Go and clone the keycloak git repository. Then go into the operator folder and run "mvn package". It will generate all the deployment files required. Please make sure you have installed maven and jdk17.

Unhandled exception. System.ArgumentException: The connection string used for an Event Hub client must specify the Event Hubs namespace host, and either a Shared Access Key (both the name and value) or Shared Access Signature to be valid

I got an error that looks like the one below. Basically you're using a ROOT access key connection string that doesn't come with an eventhub name. You probably need to use another constructor (one that takes the event hub name separately) if you're using the root/namespace-level connection string.

-----
Unhandled exception. System.ArgumentException: The connection string used for an Event Hub client must specify the Event Hubs namespace host, and either a Shared Access Key (both the name and value) or Shared Access Signature to be valid. The path to an Event Hub must be included in the connection string or specified separately. (Parameter 'connectionString')
at Azure.Messaging.EventHubs.EventHubsConnectionStringProperties.Validate(String explicitEventHubName, String connectionStringArgumentName)
at Azure.Messaging.EventHubs.Primitives.EventProcessor`1..ctor(Int32 eventBatchMaximumCount, String consumerGroup, String connectionString, String eventHubName, EventProcessorOptions options)
at Azure.Messaging.EventHubs.EventProcessorClient..ctor(BlobContain
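A minimal sketch of passing the hub name separately when the connection string is namespace-level (all names and connection strings here are placeholders):

using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Storage.Blobs;

// A namespace-level (RootManageSharedAccessKey) connection string has no EntityPath,
// so the event hub name must be supplied as its own argument.
var storageClient = new BlobContainerClient("<storage-connection-string>", "checkpoints");

var processor = new EventProcessorClient(
    storageClient,
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<eventhubs-namespace-connection-string>",   // no EntityPath here
    "<event-hub-name>");                         // supplied separately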

updating k8s deployment label can caused helm future deployment to fail

After we tried to update an existing helm deployment's labels, it complained about label immutability (the deployment's selector labels are immutable in k8s) and then stopped all deployments. Unfortunately, there's no easy fix, because helm stores this information in its namespace secrets.

golang : no new variables on left side of

It so happens that my code redeclared the same variable name.

go: cannot determine module path for source directory

To resolve this:

go mod init your-module-name/your-module-subname

mongodb university - free course on index design

  https://learn.mongodb.com/courses/mongodb-indexes

Setting up your ts project quickly for node

npm init -y
npm install typescript --save-dev
npm install @types/node --save-dev
npx tsc --init --rootDir src --outDir dist --esModuleInterop --resolveJsonModule --lib es6,dom --module commonjs
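To sanity check the setup, a tiny src/index.ts (the path matches the --rootDir above; the code itself is just an example) followed by a build and run:

// src/index.ts
const greet = (name: string): string => `Hello, ${name}`;
console.log(greet("world"));

npx tsc
node dist/index.js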