Posts

Showing posts from July, 2024

cloud trail bug for logging s3 events

  Had some issues trying to get S3 event logging working well with CloudTrail. Then found this GitHub issue, which provides some good advice. :) https://github.com/hashicorp/terraform-provider-aws/issues/24596

AWS stack name cannot be found for elastic beanstalk environment

 You can refer to the documentation to get the stack names supported by your platform configuration. For example, if you're using Python, you can select a stack name such as "64bit Amazon Linux 2023 v4.1.2 running Python 3.11". The complete list can be found here: https://docs.aws.amazon.com/elasticbeanstalk/latest/platforms/platforms-supported.html

AWS elastic beanstalk - getting error - Environment must have instance profile associated with it

 What does this error mean? It basically means that Elastic Beanstalk is missing the required EC2 instance profile. All you need to do is choose EC2InstanceRole, and then you are sorted.

aws beanstalk delete environment disabled

Problem: after trying to create an AWS Beanstalk environment, it failed, and when I tried to remove it, the remove option was disabled. The error messages that were logged were "Environment must have instance profile associated with it. Failed to launched environment". Resolution: wait for 1 hour and the environment will automatically remove itself.

github actions env variable not supported in reusable workflow

GitHub Actions does not pass `env` variables from a caller workflow down to a reusable workflow it calls. This is documented in the limitations section of the docs: https://docs.github.com/en/actions/using-workflows/reusing-workflows#limitations
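A common workaround is to declare an input on the reusable workflow and pass the value explicitly via `with:` instead of relying on `env`. A minimal sketch (the workflow file names and the `environment-name` input are illustrative, not from the docs):

```yaml
# .github/workflows/caller.yml -- pass the value as an input, not an env variable
jobs:
  call-reusable:
    uses: ./.github/workflows/reusable.yml
    with:
      environment-name: production   # illustrative input name

# .github/workflows/reusable.yml -- declare the input and read it via the inputs context
on:
  workflow_call:
    inputs:
      environment-name:
        required: true
        type: string
jobs:
  show:
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying to ${{ inputs.environment-name }}"
```

Secrets have a similar mechanism via the `secrets:` key (or `secrets: inherit`) on the caller.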

aws setting up autoscaling group - to scale your instance

To create autoscaling, first we need to:
1. Create a launch template
2. Create an EC2 Auto Scaling group

1. Create your launch template. Go to your AWS EC2 page and select Launch templates -> Create launch template. Provide a name for your template. Then, under Auto Scaling guidance, select the checkbox. Select your image - Amazon Linux 2 (HVM) would be good. As for instance type, select the free tier for testing purposes. Select your key pair. Under network settings - this is a quite tricky part - add a network interface. In the Auto-assign public IP section, choose Disable. For the security group, select one in the VPC for your Auto Scaling group. For Delete on termination, choose "Yes". Skip to review and then choose Create launch template.

2. Create a single-instance Auto Scaling group. In EC2, select Create Auto Scaling group and enter a name for it. Choose your instance launch options. For the network, select the default VPC for your AWS region. As for Availability Zones and

Sql server error code for troubleshooting

  I really like this link, which helps troubleshoot SQL connection issues. Really awesome for figuring out what's wrong with a SQL instance when you are unable to connect: https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/connect/network-related-or-instance-specific-error-occurred-while-establishing-connection

windows crowdstrike issue workaround

Looks like it is not going to be an automatic fix like we all would have wanted. Due to an issue with a CrowdStrike content update that kept hosts rebooting and stalling, there is a workaround: https://www.crowdstrike.com/blog/statement-on-falcon-content-update-for-windows-hosts/ In short, we have to:
1. Boot into safe mode
2. Navigate to the %WINDIR%\System32\drivers\CrowdStrike directory
3. Remove all files matching C-00000291*.sys
4. Reboot

Azure and AWS equivalent for networking components

This article provides a one-to-one mapping of networking components in Azure and their equivalents in AWS: https://learn.microsoft.com/en-us/azure/architecture/aws-professional/networking

github action deploying to azure function

Using a GitHub Action to deploy an Azure Function is pretty straightforward. All we need is to download and set up the publish profile. Once we have that, deployment is done with another GitHub action called Azure/functions-action@v1. The snippet below was cut off in the original post; the build and deploy steps follow the standard Azure/functions-action sample:

```yaml
name: Deploy DotNet project to Azure Function App

on: [push]

env:
  AZURE_FUNCTIONAPP_NAME: 'MyTimeScheduler'   # set this to your function app name on Azure
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.'         # set this to the path to your function app project, defaults to the repository root
  DOTNET_VERSION: '6.0.x'                     # set this to the dotnet version to use (e.g. '2.1.x', '3.1.x', '5.0.x')

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
    - name: 'Checkout GitHub Action'
      uses: actions/checkout@v3

    - name: Setup DotNet ${{ env.DOTNET_VERSION }} Environment
      uses: actions/setup-dotnet@v3
      with:
        dotnet-version: ${{ env.DOTNET_VERSION }}

    - name: 'Build project'
      shell: bash
      run: |
        pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}'
        dotnet build --configuration Release --output ./output
        popd

    - name: 'Deploy to Azure Function App'
      uses: Azure/functions-action@v1
      with:
        app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
        package: '${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}/output'
        # secret containing the publish profile downloaded from the portal
        publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
```

azure storage account blob - using http and sas to access your container

To list the blobs in a container via HTTPS using your browser, first generate the SAS token from your Azure storage account. Then construct the following HTTP request: https://functionrg94ba.blob.core.windows.net/your-storage-container-name?restype=container&comp=list&your-sas-generated-key You should get back some XML.
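A small sketch that assembles the same list-blobs URL in Python (the account, container, and SAS token values here are made-up placeholders; to actually call the URL you would issue an HTTP GET against the result):

```python
# Build a "list blobs" URL for an Azure storage container.
# Account name, container name, and SAS token below are hypothetical
# placeholders -- substitute your own values.
ACCOUNT = "functionrg94ba"
CONTAINER = "your-storage-container-name"
SAS_TOKEN = "sv=2022-11-02&ss=b&sig=abc123"  # made-up SAS query string

def list_blobs_url(account: str, container: str, sas_token: str) -> str:
    # restype=container&comp=list asks the Blob service to enumerate
    # the container's blobs and return them as an XML document.
    return (
        f"https://{account}.blob.core.windows.net/{container}"
        f"?restype=container&comp=list&{sas_token}"
    )

print(list_blobs_url(ACCOUNT, CONTAINER, SAS_TOKEN))
```

Opening the printed URL in a browser (with a real SAS token) should return the container listing as XML.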

azure private dns zone and vnet

  Given that we have a private DNS zone and VNet routing capabilities, if I were to look up a server name, say db.private.contoso.com, how does it get resolved? By default the query goes through Azure's default resolution, and if that fails, it falls back to a DNS zone - in my case a private DNS zone. Azure routing: MyVM01 is auto-registered after the DNS zone is associated with a VNet; in this case it uses cloudapp. Private DNS: when I look up db.private.contoso.com, it tries cloudapp first, then falls back to my private DNS zone, as you can see. So the key takeaways: when you create a DNS zone, remember to link it to your VNet, and Azure default routing is used before falling back to the DNS zone. To learn how to create a private DNS zone, you can follow the guide here: https://learn.microsoft.com/en-us/azure/dns/private-dns-getstarted-portal

python function as parameters

 An example of passing in a function as a parameter. Also notice that we have defined typings.

```python
from typing import Callable

def hello(x):
    return x.upper()

def a(func: Callable[[str], str]):
    print(func("abc"))

a(hello)
```

what registry is supported by github

 It is really nice to be able to publish your library on GitHub so it can be consumed by others. Some of the registries that are currently supported are:
- Container registry for Docker images
- RubyGems registry
- NuGet
- Maven
- Gradle
- npm
For more info on working with them, please go to: https://docs.github.com/en/packages/working-with-a-github-packages-registry

bicep resource deletion using --mode complete

Let's say you created Azure resources using the following Bicep file and deployed it using: az deployment group create -f ./servicebus.bicep -g exampleRG

```bicep
@description('Location for all resources.')
param location string = resourceGroup().location

resource serviceBusNamespace 'Microsoft.ServiceBus/namespaces@2021-06-01-preview' = {
  name: 'mytestservicebusjerwo'
  location: location
  sku: {
    name: 'Basic'
    capacity: 1
    tier: 'Basic'
  }
}

resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2022-01-01-preview' = {
  parent: serviceBusNamespace
  name: 'myqueue'
  properties: {
    lockDuration: 'PT5M'
    maxSizeInMegabytes: 1024
    requiresDuplicateDetection: false
    requiresSession: false
    //defaultMessageTimeToLive: 'P10675199DT2H48M5.4775807S'
    deadLetteringOnMessageExpiration: false
    duplicateDetectionHistoryTimeWindow: 'PT10M'
  }
}
```

If you later remove a resource from the file and redeploy with az deployment group create -f ./servicebus.bicep -g exampleRG --mode Complete, any resource in the resource group that is no longer defined in the template will be deleted.

ssl - unable to get local issuer certificate issue

This typically means OpenSSL is unable to verify the certificate's authenticity against the certificate store. Some OpenSSL versions are unable to validate against the installed certificates, which are normally stored in /etc/ssl/certs.
To verify the certificate using a bundle file: openssl s_client -connect example.com:443 -CAfile bundle.crt
To verify the certificate by providing a truststore directory: openssl s_client -connect example.com:443 -CApath /path/to/truststore
Another command that might be of use - you can check a certificate's fingerprint using: openssl x509 -in stca.crt -sha256 -fingerprint -noout
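To see where your local OpenSSL build actually looks for trusted certificates (the /etc/ssl/certs location mentioned above varies between systems), here is a quick sketch using Python's stdlib ssl module, which queries the underlying OpenSSL library:

```python
import ssl

# Ask OpenSSL where it looks for trusted CA certificates by default.
# On many Linux systems the directory is /etc/ssl/certs; the exact
# paths depend on how OpenSSL was built and can be overridden with
# the SSL_CERT_FILE / SSL_CERT_DIR environment variables.
paths = ssl.get_default_verify_paths()

print("CA bundle file:", paths.openssl_cafile)
print("CA directory: ", paths.openssl_capath)
print("env overrides: ", paths.openssl_cafile_env, "/", paths.openssl_capath_env)
```

If `openssl s_client` fails with "unable to get local issuer certificate" but a `-CAfile` bundle works, comparing these paths with where your certificates are actually installed is a good first check.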

viking cloud and root certificate download

 You can download the VikingCloud root certificates here: https://certs.securetrust.com/support/support-root-download.php

curl ifconfig.me - what is another way to check the public ip address that your vm uses?

  You can try to use "curl ifconfig.me".

azure devops - Pipeline does not have permissions to use the referenced

  Bumped into this error and found out that the pipeline does not have the right permission on the service connection. All you need to do is identify the service connection and grant the pipeline permission to use it.

bicep - using conditional when creating resources

To use a conditional in Bicep, you can just specify "if" as shown below. We can also use a param that is passed in from the command line during deployment, e.g. az deployment group create -f ./servicebus.bicep -g exampleRG --parameters deploy=true

```bicep
param deploy bool

@description('Location for all resources.')
param location string = resourceGroup().location

resource serviceBusNamespace 'Microsoft.ServiceBus/namespaces@2021-06-01-preview' = if (deploy) {
  name: 'mytestservicebusjerwo'
  location: location
  sku: {
    name: 'Basic'
    capacity: 1
    tier: 'Basic'
  }
}

resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2022-01-01-preview' = if (deploy) {
  parent: serviceBusNamespace
  name: 'myqueue'
  properties: {
    lockDuration: 'PT5M'
    maxSizeInMegabytes: 1024
    requiresDuplicateDetection: false
    requiresSession: false
    //defaultMessageTimeToLive: 'P10675199DT2H48M5.4775807S'
    deadLetteringOnMessageExpiration: false
  }
}
```

bicep - we can specify different API to use for different resources

 We can specify which API version to use in our Bicep file and it will still work with existing resources. For example, if I created my Azure Service Bus resources using Microsoft.ServiceBus/namespaces/queues@2022-01-01-preview, I would still be able to change/update them using a different API version, for example Microsoft.ServiceBus/namespaces/queues@2022-10-01-preview. These API versions are real and not made up. On top of that, we can even use different API versions for different resources within one Bicep file. For example, the Service Bus namespace can use Microsoft.ServiceBus/namespaces@2022-01-01-preview while the queue uses Microsoft.ServiceBus/namespaces/queues@2022-10-01-preview - we are still able to get it to work.
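As a minimal sketch of the mixed-version idea (reusing the hypothetical resource names from the other Bicep posts on this page):

```bicep
// parent namespace declared with one API version...
resource sb 'Microsoft.ServiceBus/namespaces@2022-01-01-preview' = {
  name: 'mytestservicebusjerwo'
  location: resourceGroup().location
  sku: {
    name: 'Basic'
  }
}

// ...while the child queue uses a newer API version in the same file
resource q 'Microsoft.ServiceBus/namespaces/queues@2022-10-01-preview' = {
  parent: sb
  name: 'myqueue'
}
```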

bicep to see what are the infrastructure changes with --what-if

If you want to see the changes that will be made to the Azure resources defined in your Bicep files, use the --what-if flag as shown below: az deployment group create --resource-group exampleRg --template-file servicebus.bicep --what-if The command prints a summary of the resources that will be created, modified, or deleted.

app gateway lab1 - setting up app gateway for vm

  To get started:
1. Create your app gateway. Go to Azure -> Create Application Gateway.
Resource group: aprg
Application Gateway Name: ap
Tier: Standard V2
Enable autoscaling: No
Instance Count: 1
Availability zone: Zone 1
Under Virtual Network, create your virtual network if you do not have one. Please note you need to create a default subnet and another subnet for the backend pool; the portal does not allow you to add another subnet here, so you may need to add it later.
In the front-end tab, ensure you create a public IP by clicking on "Add new".
Under the backend tab, click on "Add a backend pool":
Name: APBackendPool
Add backend pool without adding target: Yes
Click Add.
Under the configuration tab, click on "Add a routing rule". We will only be configuring HTTP rules to port 80.
Rule name: HttpRoutingRule
Priority: 100
In the Listener tab, use the following settings for a listener:
Listener Name: HTTP
Front end IP: Public IPv4
Port: 80

Azure Kubernetes Service Automatic

Azure Kubernetes Service Automatic is an AKS cluster that comes with pre-configured components such as Azure Linux OS, Container Insights, managed Grafana, managed Prometheus, and many other components that you can see here. It even performs automated upgrades for you, and this is something that you can configure as well - like every weekend. FYI: Container Insights, managed Grafana, and managed Prometheus can be disabled during setup.

Azure load balancer - setting up load balancer and vm as backend pool

  In this example, we are going to set up an Azure public load balancer backed by an Azure VM backend pool. To get started we will do the following:
- Create a virtual network
Go to Virtual Network -> Create, then provide the following configuration:
Resource group: mylbrg
Virtual network name: lbvnet
Security: do not change anything here
IP Addresses:
Default: 10.0.0.0/24
BackendPoolSubnet: 10.0.2.0/24
Review and create.
- Create 1 VM with a public IP and set up IIS
Go to Virtual Machines -> Create
Resource group: mylbrg
Virtual machine name: VM1
In the Networking tab, ensure you create a new public IP for your VM.
Subnet: BackendPoolSubnet
Enable inbound ports for: HTTP, HTTPS and RDP
Leave the rest as it is. Click on Review and Create.
Once your VM is up and running, go to Cloud Shell and run the following command to set up IIS:
Set-AzVMExtension -ResourceGroupName mylbrg -ExtensionName IIS -VMName VM1 -Publisher Microsoft.Compute -ExtensionType CustomScriptExtension -TypeHa