Posts

Showing posts from June, 2024

azure function app - how to check on your app scaling

To check whether your function app is scaling, go to your function app -> Monitoring -> Metrics, and under Metrics select "Automatic Instance Count Scaling".

azure function app troubleshooting cold start and availability

Under your function app, go to "Help and support" -> Support and Troubleshoot -> Availability and Performance. Here you can see various reports about your function app. Choose "Function App Down or Reporting Error" to see whether your function app has gone down for some reason. There are other reports of interest too, such as High CPU and High Memory.

Azure function time trigger crontab configuration - getting it right

An Azure function app that uses a timer trigger needs to be configured correctly, otherwise it will behave differently from what you expect. The crontab 0 */5 * * * * runs as expected, every 5 minutes. With the 0 5 * * * * configuration, we notice it runs every hour instead of every 5 minutes. You can investigate further by using the NCrontab library with the following code (NCrontab's default parser uses the five-field format, so the seconds field is dropped):

var cron = NCrontab.CrontabSchedule.Parse("5 * * * *");
var start = DateTime.Now;
var end = DateTime.Now.AddHours(1);
var occurrences = cron.GetNextOccurrences(start, end);
foreach (var occurrence in occurrences)
{
    Console.WriteLine(occurrence);
}

You will notice only 1 occurrence. If you update it to */5 * * * * (the equivalent of 0 */5 * * * *), you will typically get 11 occurrences:

var cron = NCrontab.CrontabSchedule.Parse("*/5 * * * *");
var start = DateTime.Now;
var end = DateTime.Now.AddHours(1);
var occurrences = cron.GetNextOccurrences(start, end);
foreach (var occurrence in occurrences)
{
    Console.WriteLine(occurrence);
}
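Note that the Azure Functions timer trigger itself uses the six-field NCRONTAB format ({second} {minute} {hour} {day} {month} {day-of-week}), e.g. [TimerTrigger("0 */5 * * * *")] for every 5 minutes. A small sketch, assuming the NCrontab NuGet package is installed, showing how to check a six-field expression directly by opting in to seconds:

using System;
using NCrontab;

class Program
{
    static void Main()
    {
        // opt in to the six-field (seconds-first) format used by Azure Functions timer triggers
        var options = new CrontabSchedule.ParseOptions { IncludingSeconds = true };
        var cron = CrontabSchedule.Parse("0 */5 * * * *", options);

        var start = DateTime.Now;
        var end = DateTime.Now.AddHours(1);
        foreach (var occurrence in cron.GetNextOccurrences(start, end))
        {
            Console.WriteLine(occurrence); // roughly 11-12 occurrences in the next hour
        }
    }
}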

github actions outputs

The following workflow provides an example of using outputs. step1 generates the outputs. The name is defined by the outputs block in job1. Then job2 refers to it using needs.job1.outputs.output1. needs is used to define dependencies between jobs.

jobs:
  job1:
    runs-on: ubuntu-latest
    # Map a step output to a job output
    outputs:
      output1: ${{ steps.step1.outputs.test }}
      output2: ${{ steps.step2.outputs.test }}
    steps:
      - id: step1
        run: echo "test=hello" >> "$GITHUB_OUTPUT"
      - id: step2
        run: echo "test=world" >> "$GITHUB_OUTPUT"
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - env:
          OUTPUT1: ${{ needs.job1.outputs.output1 }}
          OUTPUT2: ${{ needs.job1.outputs.output2 }}
        run: echo "$OUTPUT1 $OUTPUT2"

After running, the output will be "hello world".

github - creating a basic composite action

What is a GitHub composite action? It allows you to group multiple commonly used tasks/steps/actions into a single action, which prevents repetition of commonly used tasks. To get started, create a repository that will host your action/template and create a file called action.yml. For example, this can be your template: https://github.com/mitzenjeremywoo/hello-world-composite-action

Next, to test out your composite action, create another repository and create your workflow. It should look like this. Notice how it refers to hello-world-composite-action@v1:

on: [push]
jobs:
  hello_world_job:
    runs-on: ubuntu-latest
    name: A job to say hello
    steps:
      - uses: actions/checkout@v4
      - id: foo
        uses: mitzenjeremywoo/hello-world-composite-action@v1
        with:
          who-to-greet: 'Mona the Octocat'
      - run: echo random-number "$RANDOM_NUMBER"
        shell: bash
        env:
          RANDOM_NUMBER: ${{ steps.foo.outputs.random-number }}
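For reference, the action repository itself only needs an action.yml at its root. A sketch of what it typically looks like, modelled on the standard hello-world composite action example (the exact inputs/outputs in the linked repo may differ):

# action.yml
name: 'Hello World'
description: 'Greet someone and output a random number'
inputs:
  who-to-greet:
    description: 'Who to greet'
    required: true
    default: 'World'
outputs:
  random-number:
    description: 'Random number'
    value: ${{ steps.random-number-generator.outputs.random-number }}
runs:
  using: 'composite'
  steps:
    - run: echo "Hello ${{ inputs.who-to-greet }}"
      shell: bash
    - id: random-number-generator
      run: echo "random-number=$RANDOM" >> "$GITHUB_OUTPUT"
      shell: bash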

github actions sample to deploy to aks cluster

To use GitHub Actions to deploy to an AKS cluster, you can try the sample below, which uses kubectl. To deploy with a Helm chart instead, the same pattern applies, swapping the kubectl step for a Helm step.
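A minimal sketch of such a workflow, assuming an Azure service principal JSON stored in the AZURE_CREDENTIALS secret and Kubernetes manifests under a manifests/ folder (resource group, cluster name and paths are placeholders):

name: deploy-to-aks
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # log in to Azure and point kubectl at the target cluster
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/aks-set-context@v3
        with:
          resource-group: my-resource-group
          cluster-name: my-aks-cluster

      # apply the Kubernetes manifests
      - run: kubectl apply -f manifests/

For Helm, the kubectl step would typically be replaced with a helm upgrade --install command after installing Helm (azure/setup-helm is one option).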

azure devops agent - ##[error]No connection could be made because the target machine actively refused it.(ip number)

While trying to deploy an application to a target server with the Azure DevOps agent installed, we were getting this error message. If you get this error, it is not an issue with sending instructions/tasks to the target agent; it is the target server failing to download artifacts (or something else) from Azure DevOps. So it is outgoing traffic, and the IP number above is probably your proxy on the way to Azure DevOps. To resolve this, check with your internal network admin to figure out which proxy to use, and update the .proxy file. Updating the proxy file and restarting the agent didn't work for me, so I had to re-configure (re-install) the agent with the right proxy.
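As a rough sketch, re-configuring a self-hosted agent with an explicit proxy looks something like this on a Windows target (organization URL, PAT and proxy address are placeholders; on Linux use config.sh instead of config.cmd):

# from the agent folder on the target server
.\config.cmd remove

# re-configure the agent, passing the proxy explicitly
.\config.cmd --url https://dev.azure.com/{your-organization} --auth pat --token {your-pat} --proxyurl http://your-proxy:8080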

bicep example to create service bus

Sample Bicep file to create a Service Bus. Because I am provisioning the Basic Service Bus tier, there are a couple of properties that need to be commented out.

@description('Location for all resources.')
param location string = resourceGroup().location

resource serviceBusNamespace 'Microsoft.ServiceBus/namespaces@2021-06-01-preview' = {
  name: 'mytestservicebusjerwo'
  location: location
  sku: {
    name: 'Basic'
    capacity: 1
    tier: 'Basic'
  }
}

resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2022-01-01-preview' = {
  parent: serviceBusNamespace
  name: 'myqueue'
  properties: {
    lockDuration: 'PT5M'
    maxSizeInMegabytes: 1024
    requiresDuplicateDetection: false
    requiresSession: false
    //defaultMessageTimeToLive: 'P10675199DT2H48M5.4775807S'
    deadLetteringOnMessageExpiration: false
    //duplicateDetectionHistoryTimeWindow: 'PT10M'
  }
}
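Assuming the file is saved as servicebus.bicep (the name is a placeholder), it can be deployed with the Azure CLI like this:

az deployment group create \
  --resource-group my-resource-group \
  --template-file servicebus.bicep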

bicep resources creation references

When you're trying to create a service bus queue but are not entirely sure what schema/properties you can tweak, please check out the link below:
https://learn.microsoft.com/en-us/azure/templates/microsoft.servicebus/namespaces/queues?pivots=deployment-language-bicep

azure service bus - how do you ensure messages are delivered sequentially

By using sessions, of course :) For more info, please check out the link below:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-sessions
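A minimal sketch using the Azure.Messaging.ServiceBus SDK, assuming a session-enabled queue named "myqueue" and a placeholder connection string; messages that share a SessionId are locked to one receiver and delivered in order:

using System;
using Azure.Messaging.ServiceBus;

var client = new ServiceBusClient("<connection-string>"); // placeholder

// send: messages with the same SessionId are kept in order
var sender = client.CreateSender("myqueue");
for (var i = 0; i < 3; i++)
{
    await sender.SendMessageAsync(new ServiceBusMessage($"message {i}")
    {
        SessionId = "order-12345" // placeholder session id
    });
}

// receive: lock the next available session and read its messages sequentially
ServiceBusSessionReceiver receiver = await client.AcceptNextSessionAsync("myqueue");
ServiceBusReceivedMessage message;
while ((message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(5))) != null)
{
    Console.WriteLine($"{receiver.SessionId}: {message.Body}");
    await receiver.CompleteMessageAsync(message);
}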

azure ad create service principals with role and scopes predefined

Quite a handy command to create a service principal in Microsoft Entra with the role and scopes predefined. ✌

az ad sp create-for-rbac --name {app-name} --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/exampleRG --json-auth

dotnet tool list argument with package-id does not work on dotnet 6.

The .NET 6 CLI does not really support "dotnet tool list -g <package-id>". When I ran this on a .NET 6 build agent, it gave me "unknown command or unrecognizable argument". Running it on .NET 8 works for me.
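If you're stuck on a .NET 6 agent, a simple workaround sketch is to list all global tools and filter the output yourself (dotnet-ef is just an example package id):

# on a .NET 8 SDK this works directly:
dotnet tool list -g dotnet-ef

# on a .NET 6 SDK, list everything and filter:
dotnet tool list -g | grep dotnet-ef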

github actions - starters for ci/cd

  https://github.com/actions/starter-workflows/tree/main/ci

github actions - running your workflow manually

To be able to run your workflow manually, you need to have the 'workflow_dispatch' trigger configured in your default/main branch. If you only have it in a feature branch, that won't work; you will need to create a merge request to merge it into your default/main branch. Once you have done that, you should see the "Run workflow" button in the Actions tab and be able to trigger it.
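For reference, a minimal sketch of the trigger block; workflow_dispatch can sit alongside other triggers and can optionally declare inputs (the environment input here is just an example):

on:
  push:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Environment to deploy to'
        required: false
        default: 'dev'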

github actions - debug logging mode

GitHub Actions supports step (task) and runner debugging modes. You can turn these on as follows.

Step debugging level: set the environment variable ACTIONS_STEP_DEBUG to true.

  build:
    env:
      MyEnvVar: Hello
      ACTIONS_STEP_DEBUG: true

Runner debugging level: set the environment variable ACTIONS_RUNNER_DEBUG to true.

  build:
    env:
      MyEnvVar: Hello
      ACTIONS_RUNNER_DEBUG: true

After running the pipeline, go to your repository -> Actions -> select your workflow -> expand/open your job and then download the logs. If you expand the log file, you will see additional logs compared to a normal log (without debugging turned on).

dockerfile - running a bash command and then resetting to the default shell

You can use the following to switch to bash for running a command (for example, "source my-execution-file") and then switch back to sh.

SHELL ["/bin/bash", "-c"]
RUN nvm install 17
SHELL ["/bin/sh", "-c"]

To find out more about bash vs sh: https://stackoverflow.com/questions/5725296/difference-between-sh-and-bash

github actions resources for quick examples

https://github.com/mitzenjeremywoo/githubaction_template/actions/new?category=deployment  

Github variable $GITHUB_WORKSPACE vs ${{ github.workspace }}

I was using $GITHUB_WORKSPACE in my yaml as shown below:

  - name: build dotnet
    run: |
      cd $GITHUB_WORKSPACE

While it does work, it is recommended to use the ${{ github.workspace }} variable instead.

  - name: build dotnet
    run: |
      cd ${{ github.workspace }}

Apparently, $GITHUB_WORKSPACE is kept and maintained for backward-compatibility reasons. The full list of default environment variables is defined in the GitHub documentation, which also describes the proper way to use environment variables in a workflow.

dotnet pack with version number

To build a NuGet package with a different version number you can use the following, where the /p:Version switch provides the option to specify a new version number:

dotnet pack -c Release /p:Version=1.0.${{ github.run_number }}

github actions reusable workflow example

To create a template or reusable workflow, we probably need to understand how to:
1. create the template
2. call or use the template

In this example, we will create a pipeline that does some fake pre-requisite thing and then performs a dotnet build.

Creating the template

To create a template that accepts some inputs, you can use the example here. Notice how we use the input variables.

# dotnet-workflow.yaml
on:
  workflow_call:
    inputs:
      message:
        type: string
        required: true
      message2:
        type: string
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: build dotnet
        run: |
          echo ${{ inputs.message }}
          echo ${{ inputs.message2 }}

Calling the template

To use/apply this template from a pipeline named "dotnet package" that triggers on push, the job called prerequisite uses dotnet-workflow.yaml, which is defined in a branch called "feat/template-resable-workflow" (the complete caller workflow is sketched below).
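A sketch of that caller workflow, assuming the template lives in a repository under the same owner (the repository name is not shown in the excerpt, so <template-repo> is a placeholder):

name: dotnet package
on: [push]
jobs:
  prerequisite:
    uses: mitzenjeremywoo/<template-repo>/.github/workflows/dotnet-workflow.yaml@feat/template-resable-workflow
    with:
      message: 'hello from the caller'
      message2: 'second message'
  build:
    needs: prerequisite
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: build dotnet
        run: dotnet build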

Github action - setting up GITHUB_TOKEN permission to publish

GitHub allows users to publish NuGet packages to nuget.pkg.github.com. To be able to publish packages from a repository, you need write permission configured for your GITHUB_TOKEN. GITHUB_TOKEN is automatically injected into your GitHub Actions run and normally only has read permission; you can easily find that out by reviewing your job. This means you won't be able to publish your packages until you configure write permission. To do that, go to your repository -> Settings -> Actions -> General, then under Workflow permissions choose "Read and write permissions" and click Save. After that, the job's GITHUB_TOKEN will show write permissions. To view your packages, go to the Packages section of your repository.
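Alternatively, you can grant the permission per workflow instead of at the repository level; a minimal sketch of a publish job requesting packages: write for the GITHUB_TOKEN (the owner in the package source URL is a placeholder):

jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # allow GITHUB_TOKEN to push packages
    steps:
      - uses: actions/checkout@v4
      - run: dotnet pack -c Release -o out
      - run: >
          dotnet nuget push "out/*.nupkg"
          --source "https://nuget.pkg.github.com/<owner>/index.json"
          --api-key ${{ secrets.GITHUB_TOKEN }}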

Github action :- Unrecognized named-value: 'GITHUB_REPOSITORY'

While trying to use GitHub Actions, I tried to use an environment variable like this, and got the "Unrecognized named-value: 'GITHUB_REPOSITORY'" issue.

  - name: build dotnet
    run: |
      cd ${{ GITHUB_WORKSPACE }}
      cd src/ConsoleApp1
      dotnet restore
      dotnet build

Then I changed it to $GITHUB_WORKSPACE and was able to get it working.

  - name: build dotnet
    run: |
      cd $GITHUB_WORKSPACE
      cd src/ConsoleApp1
      dotnet restore
      dotnet build

So what is the difference between these? ${{ ... }} is an expression evaluated by GitHub Actions, which only understands contexts such as github.workspace, not runner environment variables; $GITHUB_WORKSPACE is an environment variable and is used within shell scripts.

sockperf for linux and latte for windows - network performance tool

Latte (Windows): https://github.com/microsoft/latte
Sockperf (Linux): https://manpages.ubuntu.com/manpages/jammy/man1/sockperf.1.html

keycloak - when importing realm json and you're getting unable-to-import errors

One of the reasons this could happen is the "authorizationSettings" section. You need to remove those settings from the realm export JSON file. After that, you should be able to import it into your Keycloak realm.
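A quick sketch for stripping those settings with jq before importing, assuming the authorizationSettings blocks live under the clients entries of the export (adjust the path if your file differs):

jq 'del(.clients[]?.authorizationSettings)' realm-export.json > realm-export-clean.json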

pytorch fundamental from microsoft

 https://learn.microsoft.com/en-us/training/paths/pytorch-fundamentals/?wt.mc_id=aiml-26954-cxa

Azure SQLServer on VM - High availability setup

A two-part tutorial that guides you through setting up SQL Server high availability on Azure VMs.

Part 1: Prerequisites
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-manually-configure-multi-subnet-multiple-regions?view=azuresql

Part 2:
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-manually-configure-tutorial-multi-subnet?view=azuresql

kubernetes awesome tools

This link is quite good, as it tracks a list of interesting and powerful tools for kubernetes https://collabnix.github.io/kubetools/

c# 12 new features on my repo

 Just need to track and understand new features in c# 12 with a git repo.  https://github.com/mitzenjeremywoo/csharp12-feature

istio debugging and troubleshooting

Taken from https://github.com/istio/istio/wiki/Troubleshooting-Istio

To get configuration and stats from a proxy (gateway or sidecar):

Stats: kubectl exec $POD -c istio-proxy -- curl 'localhost:15000/stats' > stats
Config dump: kubectl exec $POD -c istio-proxy -- curl 'localhost:15000/config_dump' > config_dump.json OR istioctl proxy-config all $POD -ojson > config_dump.json
Clusters dump: kubectl exec $POD -c istio-proxy -- curl 'localhost:15000/clusters' > clusters
Logs: kubectl logs $POD -c istio-proxy > proxy.log

To enable debug logging, which may be useful if the default log does not provide enough information:

At runtime: istioctl proxy-config log $POD --level=debug
For a pod, set the annotation: sidecar.istio.io/logLevel: "debug"
For the whole mesh, install with --set values.global.proxy.logLevel=debug