Posts

Showing posts from 2019

Encrypting and decrypting using Azure Key Vault with an RSA KEY

Let's say you have created an Azure Key Vault and created a KEY (not a secret or certificate) that is an RSA key.

The following code can be used to encrypt and decrypt a value.
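The original snippet isn't shown in this archive, so here is a minimal sketch of how it can be done with the newer Azure.Security.KeyVault.Keys and Azure.Identity packages (the vault URL and key name are made up, and the original post may well have used the older Microsoft.Azure.KeyVault SDK instead):

using System;
using System.Text;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

public static class KeyVaultRsaSample
{
    public static void Run()
    {
        // Hypothetical key identifier of the RSA key created in Key Vault.
        var keyId = new Uri("https://myvault.vault.azure.net/keys/my-rsa-key");
        var crypto = new CryptographyClient(keyId, new DefaultAzureCredential());

        byte[] plaintext = Encoding.UTF8.GetBytes("hello key vault");

        // Encrypt using the RSA key held in Key Vault.
        EncryptResult encrypted = crypto.Encrypt(EncryptionAlgorithm.RsaOaep, plaintext);

        // Decryption happens inside Key Vault; the private key never leaves the vault.
        DecryptResult decrypted = crypto.Decrypt(EncryptionAlgorithm.RsaOaep, encrypted.Ciphertext);

        Console.WriteLine(Encoding.UTF8.GetString(decrypted.Plaintext));
    }
}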

PowerShell : creating your PEM on a DevOps pipeline

Sometimes you might want to expose or generate your public key for 3rd-party apps. This is code written entirely in PowerShell to help you export a public key (modulus and exponent) to a legit PEM file:

To use it,


$pemPublicKey = GeneratePublicKey $your_modulus_base64encoded $your_exponent
Write-Host($pemPublicKey)





How C# / .NET Core uses a PEM file (public side of things)

A public key is typically stored in a PEM file, which contains the modulus and exponent values. These get converted into RSAParameters so that they can be used by RSA to do its magic. It looks something like this :-

General Idea 

var rsa = new RSACryptoServiceProvider();
rsa.ImportParameters(p);

Use a NuGet package called PemUtil, which allows you to read a PEM file into RSAParameters. Install it, use the following code, and then plug the result into the code above.
public static RSAParameters ReadPEMFileB(string PemFilePath)
{
    RSAParameters rsaParameters;
    using (var stream = File.OpenRead(PemFilePath))
    using (var reader = new PemReader(stream))
    {
        rsaParameters = reader.ReadRsaKey();
    }
    return rsaParameters;
}


A sample PEM file might look like this :-
-----BEGIN PUBLIC KEY----- MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCxSxqCESCdpeXWCprQGMrpwlEg 91EF…

RSA Public Key PEM File

What is this? A PEM file which contains your public key. The public key contains your modulus and exponent (n and e). It is different from the private key, which contains a lot more information (including the primes p and q).

-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCxSxqCESCdpeXWCprQGMrpwlEg
91EF4Qt1R247gTGaUMgEjFEq9c/UffmtYyccmoIq3n8m/ZcsiCNJNFfGx5h/YEdR
ZyzQjSLg9zt3zu5JbBNx4zb63vvhpCGoMf65y/RvMs+XjSBl8ybl0vbbrC62YH1I
7ZJYSbKhXr3ILHyUSwIDAQAB
-----END PUBLIC KEY-----

OpenSSL

To generate an RSA key

openssl genrsa -out private.pem 1024
To view your private key

openssl pkey -in private.pem -text

To generate a public key (your private key already contains your public key)

openssl pkey -in private.pem -out public.pem -pubout
To view your public key (notice the additional -pubin argument)

openssl pkey -in public.pem -pubin -text





ModuleNotFoundError: No module named 'transforms.data.processors.squad'

If you encounter this issue, it is because the latest code base has not been packaged to PyPI for some reason. So I used a rather hacky way to resolve this.

If you're running on your computer

Clone https://github.com/huggingface/transformers and locate your installed Python package for "transformers". It should be somewhere like /usr/lib/python/dist-packages.

Replace all the code in the dist-packages transformers folder with your cloned folder. Yes, replace all of it, as there are a lot of dependencies. :(


Using Colab

This is almost the same: clone the repository and upload it to the destination folder. Loads of work; hope you have fun.


To be honest, I didn't get it to do any hardcore training. It just stalls after a few runs.

OpenAI GPT2 - Generating your first content

Perhaps the best way to get started is to look at this Colab notebook. Just run it and you will see loads of fake content being generated.

https://colab.research.google.com/drive/1OqF99q0-G1RCfmvcEw_qZvjuiJL_066Q#scrollTo=TsNXf-0zgaHn

Azure function weird way of setting up logger

Not really sure why we need to specify a type when creating / initializing the logger (as indicated by the code below).
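The code screenshot is gone, so this is roughly the pattern I assume the post is referring to (my own sketch): the type, or a category string, you pass in becomes the logger category.

using Microsoft.Extensions.Logging;

public class MetricsCollector
{
    private readonly ILogger _logger;

    // CreateLogger needs a type parameter (or a category string); the type name
    // becomes the log category you later filter on.
    public MetricsCollector(ILoggerFactory loggerFactory)
    {
        _logger = loggerFactory.CreateLogger<MetricsCollector>();
        // or: loggerFactory.CreateLogger("MetricsCollector");
    }

    public void Record(string name) => _logger.LogInformation("Recorded {Name}", name);
}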


azure app insights :- understanding the Performance charts

Sometimes things are not so clear, and they suddenly turn mysterious when another person says something else. All you ever get is a huge mess. This is one of those cases.

Operation Tab

In this tab, you get to see "what operations are running on my server and how long they are taking".




Operation times show you how long a spike lasts and when the peak operation time happened on the server.

Just underneath "Operation times", you can sub-divide by Requests, CPU, Available Memory and Storage.

The story it is telling is: during the times when a spike occurs, how are my resources holding up?
Did memory get sucked up, and is the CPU reaching a certain limit? Since it shows a number like 0, 1, 2, I am thinking it means the average across the available CPUs.

The distribution chart shows where the percentiles of operations lie. In my case, 50% of the time there are over 100 requests processed within 64 milliseconds, and 90% of the time under 10 requests are served within a 6.8-second window.

Roles tab
Role ta…

Simple QA with BERT

This is the easiest Q and A example that you can find.
Workflow goes something like this,

1. Use the tokenizer to convert the input into a torch tensor. You need to encode the input beginning with [CLS] and separated by [SEP]. The statement or knowledge segment is encoded with 1; the question segment is 0.

2. Create the BERT QA model and run it.

3. Get the scores out. Interpreting the scores is key: you get the start and end index of an answer, represented by start_scores and end_scores below. It is like using a highlighter to mark out the answer right in front of our face.


## initialize the model
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')

### Please note here that encodings are given as
### 0 0 0 0 0 0 0 (denotes question) while 1 1 1 1 1 1 1 (denotes answer / knowledge)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] "…

Azure function v2 - Logger injection

Here is the code for the startup function that does the logger injection :-
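The screenshot is missing from this archive, so here is a minimal sketch of what a Functions v2 Startup enabling logger injection can look like (class and namespace names are mine, assuming the Microsoft.Azure.Functions.Extensions package):

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.LoggingStartup))]

namespace MyFunctionApp
{
    public class LoggingStartup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Registers ILogger<T> / ILoggerFactory so they can be constructor-injected.
            builder.Services.AddLogging();
        }
    }
}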





And then in your function app constructor, this is where you start to get instances of your logger.
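Something along these lines (my own sketch; the function and class names are made up):

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public class PingFunction
{
    private readonly ILogger<PingFunction> _logger;

    // The logger instance arrives through the constructor.
    public PingFunction(ILogger<PingFunction> logger)
    {
        _logger = logger;
    }

    [FunctionName("Ping")]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        _logger.LogInformation("Ping received");
        return new OkResult();
    }
}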

function app v2 - appsettings local and remote

Azure Functions v2 uses a standardized approach for getting app settings / app configuration information.

This is what your startup function looks like :-
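The screenshot is gone, so here is a minimal sketch of the usual Functions v2 pattern (section and class names such as MetricSettings are my own assumption, not necessarily what the original post used):

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Bind the "MetricSettings" section of configuration (local.settings.json
            // locally, application settings in Azure) to a strongly typed class.
            builder.Services.AddOptions<MetricSettings>()
                .Configure<IConfiguration>((settings, configuration) =>
                    configuration.GetSection("MetricSettings").Bind(settings));
        }
    }
}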




A settings class (here, MetricSettings) is the place to hold your settings info :-
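For example, a plain class like this (the property names are hypothetical):

public class MetricSettings
{
    // Hypothetical settings; use whatever your app actually needs.
    public string MetricNamespace { get; set; }
    public int FlushIntervalSeconds { get; set; }
}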



And finally, wiring it up to your function app
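Roughly like this (my own sketch): the settings class registered in Startup gets injected via IOptions<T>.

using Microsoft.Extensions.Options;

public class ReportMetricsFunction
{
    private readonly MetricSettings _settings;

    // IOptions<MetricSettings> is resolved from the binding registered in Startup.
    public ReportMetricsFunction(IOptions<MetricSettings> options)
    {
        _settings = options.Value;
    }
}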



This is what your configuration looks like :-
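Locally that lives in local.settings.json (the values below are made up); in Azure the same keys go into the function app's application settings:

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "MetricSettings:MetricNamespace": "orders",
    "MetricSettings:FlushIntervalSeconds": "30"
  }
}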






Method not found: 'Microsoft.Extensions.DependencyInjection.IServiceCollection

I faced this issue after I upgraded Microsoft.Extensions.Logging. Geez man, I was just trying to add more visibility.

Remove it and you're set to go, but if you really need to do logging and stuff, add a reference to the correct NuGet package.

For my case it is

"Microsoft.ApplicationInsights.AspNetCore" Version="2.8.2"

COLA - Benchmark top model ranking

BERT for dummies

I will keep this short. BERT is a pre-trained model (trained on Wikipedia), which means we skip all the time-consuming work of training a language model from scratch. Everything here is ripped off without modification from a Google Colab example (please refer to the link below for all the code).

This example is a classification problem and it will classify if a sentence is grammatically correct or not.

Dataset used :- COLA (Corpus of Linguistic Acceptability), a dataset for benchmarking grammatical acceptability.

Summary of tasks before getting lost in the crazy codes maze.


- Load all the necessary library - which includes hugging face's BERT and pytorch

- Load this COLA dataset and transform it into a correct format

- Load BERT model

- Predict and Evaluate

The result is true or false.


Done :)


Reference link

https://colab.research.google.com/drive/1ywsvwO6thOVOrfagjjfuxEf6xVRxbUNO


Configuring Azure diagnostic logging retention period

It is sometimes confusing for developers to specify the correct retention period for Azure diagnostic settings, especially when there is a retention period option next to the different types of logs / metrics.
Which one is which?

If you decide to send the logs to a storage account, then that retention period applies.
If you send them to a Log Analytics workspace, then the retention period is whatever you specify on the workspace.

How do you go about changing this?

Go to -> Azure diagnostic logging workspace -> Cost and billing -> Data Retention.





Google AI Natural Questions dataset and model ranking

Link to the Google AI Natural Questions dataset and model ranking:


https://ai.google.com/research/NaturalQuestions/dataset

Appinsights : Do you know what you're doing

This link offers a great practical approach to training yourself on App Insights.

https://www.azuredevopslabs.com/labs/azuredevops/appinsights/

Conversational Question Answering Challenge : CoQa

Check out the leader board rankings here :

https://stanfordnlp.github.io/coqa/

azure devops - build the proper way

It's been a while since I did anything with the Azure DevOps build pipeline.

A successful and proven way to do a build is using the following YAML. The output can easily be picked up by Web Deploy for releases.

Git - Getting and changing remote url

I happened to update my username and password for a git repo that I work with. The thing is, the old info is persisted in the git repository config and it throws errors whenever I try to push stuff across.


To view config url 

git config --get remote.origin.url


To Update config url for your repo

git remote set-url origin https://username:password@dev.azure.com/someorg/repository_url

powerbi service - creating streaming dataset

Go to -> My Workspace -> (you will see your dashboard) -> go to the Datasets tab and then look for "+ Create" at the top right of your screen.

Please have a look at the screenshots below :-





patch - python unit test

To use patch (to spy on arguments and methods called), here is a code example :-


simple mocking in python

There are basically 2 main mock objects, Mock and MagicMock.
Here is a simple example of Python mocking :-



Appinsights ASP.Net- Non http apps / background apps / any newer version of console app

Regardless of whether it's a web / console / service app that you're running, you typically call the same piece of code, which is the telemetry client, TrackMetric() for example, and pass in the required parameters.

The only difference is how you create your startup (entry point) code and start setting up your telemetry client.

For web, you need only a single package (Microsoft.ApplicationInsights.AspNetCore, as shown further down this page).




For function app v2, you need







And configure your FunctionsStartup using the following code :-
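The original code isn't shown; this is my assumption of one way to wire it up (it relies on the Functions host already registering TelemetryConfiguration when APPINSIGHTS_INSTRUMENTATIONKEY is set):

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyFunctionApp.TelemetryStartup))]

namespace MyFunctionApp
{
    public class TelemetryStartup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Reuse the host's TelemetryConfiguration so custom TrackMetric() calls
            // land in the same App Insights instance as the function logs.
            builder.Services.AddSingleton(sp =>
                new TelemetryClient(sp.GetRequiredService<TelemetryConfiguration>()));
        }
    }
}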


For a console app, the package required is here (typically Microsoft.ApplicationInsights.WorkerService for non-HTTP apps) :-




You need the following startup code (which you can easily create using dotnet new worker).
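Roughly like this, assuming the Microsoft.ApplicationInsights.WorkerService package (a sketch based on the dotnet new worker template, not the original post's code):

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                // Registers TelemetryClient for non-HTTP / background apps.
                services.AddApplicationInsightsTelemetryWorkerService();

                // Worker is the background service generated by "dotnet new worker".
                services.AddHostedService<Worker>();
            })
            .Build()
            .Run();
}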





Assets file project.assets.json not found. Run a NuGet package restore

Bumped into this error and, thanks to Stack Overflow, was able to resolve it by going to
Tools -> NuGet Package Manager -> Package Manager Console
and typing
"dotnet restore"
Or you can go into the command prompt of that project and type
"dotnet restore"
Whichever works faster for you, I guess.




Sending custom metrics data to Azure

Sometimes you might want to send custom metrics data to Azure. A use case would be pushing metric information, or a signal that something has been processed, which then gets displayed in an Azure Monitor dashboard.



The following code snippet works using TrackMetric, which is part of the App Insights library. And yes, the metrics (whatever metrics you created) are visible in App Insights. Sometimes it is like a witch hunt trying to figure out where the metrics will come out.
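The snippet is missing from this archive, so here is a minimal sketch of a TrackMetric call (the instrumentation key and metric name are placeholders):

using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

public static class MetricSender
{
    public static void Send()
    {
        var config = TelemetryConfiguration.CreateDefault();
        config.InstrumentationKey = "<your-instrumentation-key>";

        var client = new TelemetryClient(config);

        // Shows up under customMetrics in App Insights (after some delay).
        client.TrackMetric("ItemsProcessed", 42);
        client.Flush();
    }
}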

For me, it took about a day before it is visible.

Azure AppInsights ASP.net core

ASP.Net Core

<ItemGroup>
  <PackageReference Include="Microsoft.ApplicationInsights.AspNetCore" Version="2.8.0" />
</ItemGroup>


Startup.cs


public void ConfigureServices(IServiceCollection services)
{
    // The following line enables Application Insights telemetry collection.
    services.AddApplicationInsightsTelemetry();

    // This code adds other services for your application.
    services.AddMvc();
}






Understanding python import

I think this is the best article that helps with importing a sub-directory as a module:

https://timothybramlett.com/How_to_create_a_Python_Package_with___init__py.html

The only thing I wanted to say is, if you use

__all__ = ['mymodule1', 'mymodule2']

that's to accommodate syntax such as this:

from sub-directory import *

See the magical asterisk there...... all of this is for that little baby. :)





C#8 Getting around optional interface task

Perhaps not the best way to work with optional interface members, but here is something I came up with to stop errors coughing up around optional interface members in C# 8.
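The original code isn't shown, so here is a sketch of what I assume is meant: C# 8 default interface members, which let you add an "optional" method to an interface without breaking existing implementers.

public interface INotifier
{
    void Notify(string message);

    // Optional member: implementers get this default implementation for free
    // unless they choose to override it.
    void NotifyUrgent(string message) => Notify("URGENT: " + message);
}

public class ConsoleNotifier : INotifier
{
    public void Notify(string message) => System.Console.WriteLine(message);
}

public static class NotifierDemo
{
    public static void Run()
    {
        // Default members are only reachable through the interface type.
        INotifier notifier = new ConsoleNotifier();
        notifier.NotifyUrgent("disk almost full");
    }
}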




Python using stream io

You should make use of Python's io module, as defined here, to work with stream data. There are basically 3 main types of stream I/O: text I/O, binary I/O and raw I/O.

Instead of reading everything into memory, you are able to read this line by line as a stream....




import subprocess

## assuming r.csv is a large file
self.streamedOutput = subprocess.Popen(['cat', 'r.csv'], stdout=subprocess.PIPE)
self.sendOutput(self.streamedOutput)

while True:
    executionResult = streamedOutput.stdout.readline()
    if executionResult:
        print(executionResult)
    else:
        break

Linux getting port already in use

Was having an issue trying to bind to a specific port on my machine.
Then this command came about :)


sudo netstat -nlp | grep :80

Getting gunicorn to run on a different port

As simple as using the following command :-


gunicorn --bind 0.0.0.0:8888 FalconRestService:api

vscode - enabling python library / module debugging

You can start using vscode to step into Python libraries / imports. All you need is the configuration below.

Once you have it in your launch.json, keep pressing F11 :)


{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File (Integrated Terminal)",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": false   // <-- the important bit
        }
    ]
}

Kafka setup the scalable way ...

Great link for setting up Kafka the way it should be: scalable and reliable.

https://success.docker.com/article/getting-started-with-kafka


However, the images no longer exist, and it requires some tuning :-

sudo docker service create --network kafka-net --name broker \
     --hostname="{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}" \
     -e KAFKA_BROKER_ID={{.Task.Slot}} -e ZK_SERVERS=tasks.zookeeper \
     qnib/plain-kafka:2019-01-28_2.1.0



Python serializing and deserializing to json

Here is some quick sample code to work with JSON objects in Python.

Deserializing to object


commands = json.loads(commandstrings)
commandResultList = []

for command in commands:
    o = CommandType(**command)
    commandResultList.append(o)


Serializing to Json


import json

cmd1 = CommandType("list", "ls", "al")
cmd2 = CommandType("list", "pwd", "")
cmd3 = CommandType("list", "ls", "www.google.com")

cmds = [cmd1, cmd2, cmd3]
a = json.dumps(cmds, default=lambda o: o.__dict__)

git squash - interactive rebasing

To squash last 3 commits

git rebase -i HEAD~3


Then you get something like this. Keep at least one commit as "pick" and change the ones you want squashed to "s" (squash). In this case, I keep "c8659b4" as the pick:

pick c8659b4 Merged PR 1914: ensure strongly type response messages are factor in.
s 986cad8 Updated azure-pipelines.yml
s bdb2086 1 2 3

As long as you keep at least one pick (the first commit in the list must stay a pick), it will be good. You should be able to rebase (squash) your commits.





python celery first steps

If you follow the Python Celery first-steps guide from the official site, you are probably going to get a heart attack trying to get it to work.


Please use the following docker command :


docker run -d -p 5672:5672 rabbitmq


First you need to tell Celery which methods you would like to register. This allows Celery to handle task registration and execution.

This is an example of tasks.py


from time import sleep
from celery import Celery

app = Celery('tasks', broker='amqp://guest:guest@localhost:5672')

@app.task
def add(x, y):
    sleep(4)
    print("executing stuff")
    return x + y



Now that you have it registered, next is to run it. This queues the "add" task (defined earlier) and puts it into motion (remember to start a worker first, e.g. celery -A tasks worker).

from tasks import add
add.delay(4, 4)





Azure diagnostic logging - Getting better understanding of what log type means

When I go into Azure Monitor diagnostic settings and set up what logs I would like to include in my workspace, I have difficulty figuring out what information I will be getting. To make matters worse, how do I know what type of activities I should be querying or expect to appear in my workspace?

The thing is, it is hard. You can try to have a go at this site here.....

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-monitor/platform/resource-logs-overview.md

spark - pyspark reading from excel files

I guess a common mistake is not loading the right JAR when reading an Excel file. Yes, you have to use the Scala 2.11 build and not 2.12 :)



You can try using the following command line


pyspark --packages com.crealytics:spark-excel_2.11:0.11.1


And use the following code to load an excel file in a data folder. If you have not created this folder, please create it and place an excel file in it.


## the spark-excel connector is loaded via --packages; no Python import of com.crealytics is needed
## using spark-submit with an option to execute the script from the command line
## spark-submit --packages com.crealytics:spark-excel_2.11:0.11.1 excel_email_datapipeline.py
## pyspark --packages com.crealytics:spark-excel_2.11:0.11.1

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("excel-email-pipeline").getOrCreate()

df = spark.read.format("com.crealytics.spark.excel") \
    .option("useHeader", "true") \
    .option("inferSchema", "true") \
    .load("data/excel.xlsx")
df.sh…

Az cli passing the correct data type with some examples

When you are working with Az cli, you often need to pass in the correct data type. This can be pretty time consuming to figure out the correct syntax and what the data type looks like.

For demo purposes, I assume we are working with an event hub


Update resource 

When you update, you need to use the following syntax. Notice the equals sign, setting the value 'Deny' on a nested property called properties.defaultAction.

az resource update -n "myeventhubnamespace/networkrulesets/default" -g "myfakeresourcegroup" --resource-type "Microsoft.EventHub/namespaces" --set properties.defaultAction=Deny

Adding resources 

When you add, you use the following syntax. Notice there is no more equals sign. In this scenario, we want to add entries to an array / list of key-value pairs. The syntax looks something like this :-

az resource update -n "myeventhubnamespace/networkrulesets/default" -g "myfakeresourcegroup" --resource-type "Microsoft.Ev…

spark - connecting to mongodb using mongodb connector

One way of connecting to a MongoDB database from Spark is using the MongoDB Spark connector (not the usual MongoDB driver), with the following code.

First it starts off with the command line (to download the connector from the Maven repository) and then you run the code to connect and show the data.


# Start the code with the following:
# pyspark --conf "spark.mongodb.input.uri=mongodb://127.0.0.1/apptest.product?readPreference=primaryPreferred" \
#         --conf "spark.mongodb.output.uri=mongodb://127.0.0.1/apptest.product" \
#         --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.1

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mongo-email-pipeline") \
    .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/apptest.product") \
    .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/apptest.product") \
    .getOrCreate()

df = spark.read.format("mongo").load()
df.show()

python import

Looking at the different ways you can import Python modules.


Generally it begins with the path to a module. For example, if you have a custom module, you can do something like this :-


from email.emailer import *


That is what the 'from' keyword is for. The asterisk tells Python to import everything.


Don't forget the __init__.py file







openssl - where you can find some examples code

Did some digging and found out that OpenSSL does have some example code, which you can find here :-
https://github.com/appcoreopc/openssl/tree/master/demos/bio (this is a forked repo; you can find more recent code in the upstream repo)


Have fun


openssl client example - compiling and running

I assume you have built and installed OpenSSL. The next step would be to compile the client example....






To compile it, linking against the OpenSSL shared libraries:


gcc -o client client.c -lssl -lcrypto -L /opt/openssl/lib -I /opt/openssl/include


Setting up your library path

export LD_LIBRARY_PATH=/opt/openssl/lib

And finally running it using 

./client 


openssl - compiling in the docker ubuntu

To set up OpenSSL, run the following commands


1. docker run -it ubuntu /bin/bash


2. apt-get update && apt-get install git


3. apt install g++


4. apt install build-essential


5. git clone https://github.com/appcoreopc/openssl.git


6. cd openssl && ./config --prefix=/opt/openssl --openssldir=/opt/openssl enable-ec_nistp_64_gcc_128


7. make depend


8. make











javascript ; different ways of writing return types

Definitely wouldn't think about how to write the same statement multiple ways..

Ripped off from stackoverflow :-




react with custom hooks

As you can see from the code below, this control uses React hooks with a custom flavour. Pretty easy to understand.

useState gives you the value and a method for setting it.

This control exposes onChange, which just updates the state.

And finally it also exposes value, which you can use to show / render in the control what info has been typed by a user.

What is the spread operator doing in there?

It is just doing a mapping and renders something that looks like this






async function - different ways of writing

Interestingly, there are many ways of writing an async function, which I do forget ... :)

Just to make sure i have documented the right ways....



The specified initialization vector (IV) does not match the block size for this algorithm

If you're getting an exception when trying to assign an IV to a symmetric key algorithm and you see the error message below :-

The specified initialization vector (IV) does not match the block size for this algorithm

And your code probably looks something like the code below :-


Then you need to go into debug mode and start looking at the supported size for the IV, as shown in the diagram below :-

As you can see, the IV is 16 bytes, so you need to provide 16 bytes. The same goes for the Key field too, if you're assigning anything to it.


As long as you provide a valid IV and key size, we're good and your code will be ready.
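For reference, a minimal sketch of valid sizes for AES (my own example, not the post's code): the block size is 128 bits, so the IV must always be 16 bytes, whatever key size you pick.

using System;
using System.Security.Cryptography;

public static class AesSizesDemo
{
    public static void Run()
    {
        using (var aes = Aes.Create())
        {
            // Use GenerateKey()/GenerateIV() for real keys; the zero-filled arrays
            // are just here to show the required lengths.
            aes.Key = new byte[32]; // 16, 24 or 32 bytes are valid key sizes
            aes.IV = new byte[16];  // must be BlockSize / 8 = 16 bytes, or you get the error above

            Console.WriteLine(aes.BlockSize / 8); // prints 16
        }
    }
}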





npm install package using proxy

Sometimes you might not have direct internet access, and when that happens, you can be limited.
To get around this, you can authenticate yourself through a proxy. Use the following commands,
where proxyserver.com is your proxy.

npm config set proxy http://"dev.username:password"@proxyserver.com:80
npm config set https-proxy http://"dev.username:password"@proxyserver.com:80

I also noticed that setting the following results in a 418 or possibly a 404.

npm config set registry "http://registry.npmjs.org/"

So if you have this, try to remove it using npm config edit


Also, if you have some issues with certificates, you can probably set

npm set strict-ssl false

apollo server mutation sample

Sample mutation query for testing on apollo server machine :-














Azure key vault - enabling protection from delete and purging

Soft delete means your item is marked for deletion but your key is not removed from the system.

Purging is like 'emptying' your recycle bin: purge everything and you won't see it again. If you have a soft-deleted key, you can still purge it and your key is gone for good.

That's why purge protection is important too.

Here are some considerations when working with soft delete and purging on a vault:

1. You cannot undo it. You cannot set purge protection back to false, and you cannot set soft delete back to false once you have enabled them.

2. You need to use the CLI to recover a soft-deleted item.

3. If you purge your vault, you still get charged for it until it is really removed.



React Test Util - Find control by Id

The following code takes advantage of React test utils to find a control by id :-


react-test-renderer - Cannot read property 'current' of undefined

You need to have the same version for react and react-test-renderer to work.
As you can see from my package.json file :-

"react": "16.8.6",
"react-test-renderer": "16.8.6"

a react library called react-scripts...

react-scripts is a package that provides a zero-configuration way of setting up and running your React project.

It provides a dev server that supports hot reloading of your React app.


deploying static website (react) to storage account using az cli

The following script takes compiled web assets and deploys them into a storage account in Azure.
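The script itself isn't shown in this archive; the commands involved look roughly like this (the account name and build folder are made up):

az storage blob service-properties update --account-name mystaticsite --static-website --index-document index.html --404-document 404.html

az storage blob upload-batch --account-name mystaticsite --source ./build --destination '$web'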

Live sharing with visual studio

To fire off your Live Share session, just click on the icon on the far right of your Visual Studio.



It will generate a link which you can copy and share with your team members.

Your team member will then copy and paste that into a browser which fires up a visual studio. And then you can see who is changing what....... if you open the same file. :)


Sweet! that's it



az cli number of workers used for a webapp

I use the following script to configure the number of workers used for a web app.







terraform - specifying the size of instance for a service plan

You can specify the number of instances you want for a service plan, as shown in the code below :-

The magic number can be set / configured via a field named "capacity".


Redux - is there any way to explain it easier

First you need a store. This store is the central repository of data and you can work with data here.

#1 Code

import { createStore } from 'redux'
const reduxstore = createStore(appreducers)
What is a reducer? It sets the value of our data. It could be a single record update or appending to a list.

<Provider store={reduxstore}> </Provider>
When you set this up, you're making the store available to the components underneath it.


#2 - When you have your store, what do you do next?

You want your component to be able to work with it, meaning you send data and get certain data that you need.

First you need to use "connect" from react-redux and wire it to your component.

Importing the connect function


import { connect } from 'react-redux'

Next you define a simple component like you normally would.
And finally you make your store accessible to your component using the code below.


export default connect(mapStateToProps, mapDispatchToProps)(AddToCart)

Azure linking up different resource groups into a service plan

In your deployment, you might want to link a function app in one resource group to a service plan in a different resource group. Working with az cli blindly (especially with dodgy documentation) can be a challenge.

Here is a tested az cli script to tie a function app to a specific service plan (that you might have created somewhere else) that might help.





 az functionapp create --name $env:environment$env:functionapp_name --resource-group $env:environment$env:resource_group_name --storage-account $env:environment$env:storage_accountname -p /subscriptions/$env:Subscription/resourceGroups/$($env:environment)$env:serviceplan_resource_group/providers/Microsoft.Web/serverFarms/$($env:environment)$env:serviceplan_name --app-insights-key $appInsightId --runtime dotnet --os-type Windows

$env:xxx are PowerShell environment variables, and you can tell that the script is written in PowerShell.



git pull and push with username password

Just use the command below and provide your username and password :-

git pull https://yourUsername:yourPassword@github.com/repoInfo

When you're trying to push upstream and getting some errors :

git push --set-upstream origin feature/stored-proc-refactor

And then it prompts you for the password of an invalid username, you can do something like this

git remote set-url origin https://jeremy@dev.azure.com/some_distant_repo/_git/super_awesome_project_name


javascript prototype

__proto__ is an internal property of an object pointing to its prototype. It is like Python's way of using double underscores to refer to internal object attributes.

The prototype in JavaScript is used to reference the parent. You can find a very clear and good description of this here.














Recent Http2/DOS attack


CVE-2019-9511 HTTP/2 Data Dribble
CVE-2019-9512 HTTP/2 Ping Flood
CVE-2019-9513 HTTP/2 Resource Loop
CVE-2019-9514 HTTP/2 Reset Flood
CVE-2019-9515 HTTP/2 Settings Flood
CVE-2019-9516 HTTP/2 0-Length Headers Leak
CVE-2019-9518 HTTP/2 Request Data/Header Flood

Nextjs - How do you handle an onClick which does something

You can use the following code to do it :-






Whitelisting Azure event hub

To decide which IP addresses to whitelist for your event hub, you can use the following command to get a legit IP address

nslookup <yournamespace>.servicebus.windows.net

nextjs - how I deploy it on a docker container running nginx

You need to have something like the prod setup below :-





Next, run your docker container and mount your build output to the nginx default folder:


sudo docker run  -p 8008:80 -v $(pwd):/usr/share/nginx/html nginx


Fire up your browser and then type localhost:8008



Azure autoscaling - how it works from my observation

These are my observations from scaling out an app service plan in an ASE environment.

How long does it take to scale?
30 to 40 minutes to scale out by one instance. Your service plan status will be 'pending'.

Where do you view your metrics / logs?
Go to App Insights in your function app or apps. For some reason it is not updated straight away.

What metric to choose?

That depends. In my case, I had to scale a function app, so I set it to scale if the function execution count is greater than 8. I noticed that regardless of how many instances you are already running, for example if you have 3 instances, as long as the function execution count hits 8 it will scale up to 4 instances.




It is not like the function execution count is shared across the current 3 instances. If you have 9 function executions, then you get a new instance.

Of course it differs depending on the metric type.

Event hubs

As I push more load to event hubs, partitioning is pretty much fixed. If you have 2 VMs, then those function apps will be processing the same partitions. Process wpw3 is…

Scaling with metrics

To get a list of metrics supported, please go here.

Azure functions plans types

Typically we can have the following service plan configurations when setting our terraform azurerm_app_service_plan's kind setting. These are :-

a) App - dedicated / isolated. Scaling out is manual / autoscale, controlled by your service plan.

b) FunctionApp - free, sometimes it goes into idle mode.

c) Linux

d) Windows


These plans dictate how your function app scales. The scale controller decides how your function app gets scaled, and the conditions vary depending on the type of trigger that you're running.

For example, if you're running event hub, your app can scale depending on number of messages.

Here are some interesting service limits for scaling out:

-Consumption plan - event driven 

-Premium plan - event driven 

-App service - manual / auto (depends on how you configure your service plan) 


Max Instances

-Consumption plan - 200. I think this is going to backfire. Imagine 200 instances creating new connections to your database.

-Premium pla…

Azure event hub namespace connection string or event hub connection string?

It depends. If you have a requirement to write to, say, different event hubs in the namespace, then use the event hub namespace connection string. The problem is, if that connection string is compromised, then the application can potentially send to every event hub in the namespace. It is always better to have finer-grained control. :)

So I would use the event-hub-scoped connection string.


terraform creating rule for auto scaling for service plan

It seems like Terraform just doesn't like to create the rule. But if you go and create it manually in the portal, giving it a name, then the Terraform autoscaling for the service plan works.

Here is my terraform :-

Some points to note - I had to use the "azurerm_app_service_plan" reference here, as opposed to manually copying and pasting the resource id. And remember to create a rule called "devspapptest-Autoscale-9680" so Terraform is able to find it.

So strange ......











To see what metrics are available, go here. Also don't forget to go to App Service Plan -> Scale out -> JSON to try to match or copy some of the operator values or statistics used. You can pretty much copy and paste them into your terraform code.

Terraform secret rule of thumb - keyvault has to depend on policy :)

Did you know that when you create a new secret, you need to add "depends_on" to associate it with a key vault access policy? That means Vault -> Policy -> Secret (you need to specifically add "depends_on" in your secret resource provisioning section).



Understanding TLS and its cipher suite - Part 1

Key exchange algorithms protect the information required to create shared keys. These algorithms are asymmetric (public key algorithms) and perform well for relatively small amounts of data.
Bulk encryption algorithms encrypt messages exchanged between clients and servers. These algorithms are symmetric and perform well for large amounts of data. Message authentication algorithms generate message hashes and signatures that ensure the integrity of a message. Other schemes used here include HMAC.










MAC - Common message authentication schemes are HMAC, OMAC, CBC-MAC and PMAC. Newer and better ones would be AES-GCM and ChaCha20-Poly1305.



Setup azure function app to a service plan

Yes, sometimes you do need a lot of testing to make sure you get this right.
Let's say you have already set up an ASE (isolated environment) and you would like to associate that service plan (in resource group A) with a function app in resource group B.

How do you do that?




With Az Powershell?










azure service plan - scaling out and scaling up

Scaling up means increasing your computing resources: instead of running your app on 4 GB, you are saying I want to run it on a 16 GB machine.

Scaling out means increasing the number of VMs running your existing application. You may have 1 VM running your app right now; let's increase this to, say, 2 or 4. There is a limit of 100 if you're on an isolated plan.

How does this relate to a service plan? Well, the service plan controls the scaling of your resources.

nextjs optimization

Lazy loading a module is achieved through




Lazy loading components






terraform azurerm_app_service_plan

This is to create an application web service and a service plan. Not to be confused with App Service Environment.


Some useful port in Azure

Some ports that you will often work with in Azure.


Use                              Ports
HTTP/HTTPS                       80, 443
FTP/FTPS                         21, 990, 10001-10020
Visual Studio remote debugging   4020, 4022, 4024
Web Deploy service               8172


ASE app service plan - 1 instance per service plan

Did you know that in an ASE, one service plan typically means you are running at least 1 VM?
Well, you know now.... that's going to cost.

Probably merge all into a single service plan.. :)

Also, turning on diagnostic logging is expensive

react : functionalcomponent vs class component

Was playing around with Next.js and the code that keeps popping up is the functional component (as shown in the code below).

The difference between functional and class component.


Functional component 



Class component 





nextjs - pushing client-side javascript

When using SSR, you probably need to push some client side code. This requires some manual configuration.

First you need a folder called static (it has to be called static, otherwise it won't work). Place your JavaScript code in it and then reference it from your index.js or index.tsx.





And oh, you need to "Reload that page" / F5

Microsoft Seal - Setting up your docker on Ubuntu

If you're thinking of working with the Microsoft SEAL library, then get your hands dirty with a Linux build.

To setup your dev environment

docker run -it mcr.microsoft.com/dotnet/core/sdk:2.2-bionic /bin/bash


Then start to install

apt-get update

 apt-get install software-properties-common
 apt-get install cmake
 apt-get install git
 apt-get install g++



 git clone https://github.com/microsoft/SEAL.git
 cd SEAL

Build the goodies:

 cd native/src
 cmake .
 make
 sudo make install
 cd ../..

Don't forget the samples..... good fun :)

 cd native/examples
 cmake .
 make
 cd ../..





warning: Error disabling address space randomization: Operation not permitted

When trying to run gdb in a docker container, I got this nice error :-


This is a solution, which i will try later ..  :)

docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined

CMAKE_CXX_COMPILER-NOTFOUND "was not found."

Solution : apt-get install g++