Posts

Showing posts from September, 2024

extending azure nat gateway with additional public ip

We can associate an additional public IP with our NAT gateway by going into the NAT gateway -> Settings -> Outbound IP. Then click 'Edit', and under Public IP, create a public IP address: provide an IP name and create it. I think for testing purposes this would suffice, but for production extensibility it is best to use "Public IP prefixes". When I clicked "Ok" I bumped into this error. After a few retries I was able to get it working. So this is what our NAT gateway would look like.
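For reference, roughly the same association can be scripted with the Azure CLI; this is only a sketch, and the resource group, gateway, and IP names below are placeholders rather than the ones used here.

# create an additional standard, static public IP (names are hypothetical)
az network public-ip create --resource-group my-rg --name natgw-ip-2 --sku Standard --allocation-method Static

# attach both public IPs to the existing NAT gateway (update replaces the list, so list them all)
az network nat gateway update --resource-group my-rg --name my-natgw --public-ip-addresses natgw-ip-1 natgw-ip-2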

istio setting up local rate limit

In this setup, we are going to use the httpbin app from the Istio samples folder. The steps are: 1. deploy httpbin, 2. enforce the local rate limit, 3. test. A local rate limit means the limit is applied to the workloads themselves. In this context, we are deploying to istio-system, which means it gets applied to all workloads in all namespaces. Let's deploy httpbin. It is pretty plain; I didn't deploy the config map that is specified in the Istio documentation.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  ...
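To sanity-check the limit once the local rate limit filter is in place, a loop of requests against httpbin should start returning HTTP 429 after the token bucket is drained. This is just a sketch; it assumes a client pod such as the Istio 'sleep' sample is running in the same namespace as httpbin.

# deploy the sample client used to generate traffic
kubectl apply -f samples/sleep/sleep.yaml

# fire 20 requests and print only the status codes; expect 429s once the limit kicks in
kubectl exec deploy/sleep -- sh -c 'i=0; while [ $i -lt 20 ]; do curl -s -o /dev/null -w "%{http_code}\n" http://httpbin:8000/get; i=$((i+1)); done'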

k3s find and clear images

We can use this command to show the images used by k3s:
sudo k3s crictl images
To free up some space, run:
sudo k3s crictl rmi --prune
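If you only want to remove a specific image instead of pruning everything unused, crictl can also delete by image ID; the ID below is a placeholder taken from the images listing.

# remove a single image by its ID (placeholder value from `crictl images`)
sudo k3s crictl rmi 3b25b682ea82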

testing reachability of private endpoint for a vnet

Private endpoint lab to test out storage account reachability from a VNet.
1. Set up the VNet.
2. Set up the storage account, enable a private link, and place it in the default subnet.
3. Set up a Linux VM in the default subnet - test if the private link is resolvable.
4. Set up a Linux VM in another subnet - test if the private link is resolvable.
Doing an nslookup on a non-VNet machine. nslookup from vm1 on the same VNet and subnet - you can see it resolves to an internal IP. nslookup from vm2 on the same VNet but a different subnet (subnet2) - you can see it resolves to an internal IP. The wire server with address 168.63.129.16 is scoped to a VNet. This means you just need to use it to resolve a private endpoint within a VNet.
- Hostname - probably not a good idea as it might trip you up at some point.
- Conditional forwarders - scoped to a VNet.
- Forward lookup zone - if you're trying to resolve from a different VNet. Say you might have multiple VNets and you need your application to be...
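As a quick reference, the resolution test from the lab VMs is just an nslookup against the storage account's blob hostname; the account name below is a placeholder, and the second query goes explicitly through the VNet-scoped wire server.

# from a VM inside the VNet this should return the private endpoint's internal IP
nslookup mystorageacct.blob.core.windows.net

# same query, explicitly against the Azure wire server
nslookup mystorageacct.blob.core.windows.net 168.63.129.16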

az cli - unrecognized arguments: command requires the extension amg

To install the amg extension - the extension for Grafana - run:
az upgrade
az extension add --name amg
Additional info/details can be found here: https://github.com/Azure/azure-cli-extensions/tree/main/src/amg
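Once the extension is installed, the az grafana command group should be available; as a quick smoke test (the resource group name is a placeholder):

# list Azure Managed Grafana instances in a resource group
az grafana list --resource-group my-rg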

troubleshooting connectivity to azure pipeline artifacts for npm

Quite often we need to use npm to connect to an Azure DevOps artifact feed to download packages. First, the user needs to be authenticated. In order to do that, we need to run some commands depending on which OS we're currently using. If, after running the command above, you still get a 403 when connecting to the feed, you can run the following to see which user id is being selected:
vsts-npm-auth -config .npmrc -v detailed
If a different user is selected, then you can try to:
1. For older systems, delete the registry entry here - HKEY_CURRENT_USER\SOFTWARE\Microsoft\VSCommon\14.0\ClientServices\TokenStorage\VisualStudio\VssApp
2. Delete the folder for the user selected by vsts-npm-auth in your cached credentials folder.
Once you have done this, re-run:
vsts-npm-auth -config .npmrc -F
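For context, the .npmrc that vsts-npm-auth is pointed at typically looks something like the snippet below; the organization and feed names are placeholders.

; project .npmrc pointing npm at an Azure DevOps feed (org/feed names are hypothetical)
registry=https://pkgs.dev.azure.com/my-org/_packaging/my-feed/npm/registry/
always-auth=true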

kubernetes too many pods when scheduling pod to run on a tainted node

When I was trying to schedule a pod to run on a node, I bumped into this message - "too many pods" - and the pod was always stuck in a pending state. When I looked at the pods by node, the node had fewer than 20 pods and I still couldn't get the pod to schedule onto it. I started reviewing my nodeSelector to make sure everything was fine, then looked at my node taints and pod tolerations. They seemed to be fine.
Getting the node labels
kubectl get node --show-labels
Getting the node taints
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints --no-headers
Listing out the pods running on a node
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<your-node-name>
Then I started to check my deployment to verify everything was working as expected.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: with-node-affinity
spec:
  replicas: 4
  selector:
    matchLabels:
      app: with-node-affinity
  ...
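One other thing worth checking when the scheduler reports "too many pods" is the node's own pod capacity, since that message typically means the node's allocatable pod count (not CPU or memory) is exhausted. A quick way to inspect it, with the node name as a placeholder:

# show how many pods the node is allowed to run
kubectl get node <your-node-name> -o jsonpath='{.status.allocatable.pods}'

# or print the Capacity and Allocatable sections, which include the pods limit
kubectl describe node <your-node-name> | grep -A 15 Capacity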