Sometimes you might want to expose or generate your public key for 3rd-party apps. Here is how to export a public key (modulus and exponent) to a legit PEM file, and how to read it back.
Public keys are typically stored in a PEM file, which contains the modulus and exponent values. These get converted into RSAParameters so they can be used by RSA to do its magic. It looks something like this :-
var rsa = new RSACryptoServiceProvider();
rsa.ImportParameters(rsaParameters);
The easiest way is to use a NuGet package called PemUtil, which allows you to read a PEM file into RSAParameters. Go and install it, then use the following code. Then you can plug the result into the code above.
public static RSAParameters ReadPEMFileB(string pemFilePath)
{
    using (var stream = File.OpenRead(pemFilePath))
    using (var reader = new PemReader(stream))
    {
        return reader.ReadRsaKey();
    }
}
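A minimal sketch of plugging the two pieces together (the file path and message are placeholders of my own, and the usual System.Text / System.Security.Cryptography usings are assumed):

var rsaParameters = ReadPEMFileB(@"C:\keys\public.pem");
using (var rsa = new RSACryptoServiceProvider())
{
    rsa.ImportParameters(rsaParameters);
    var cipher = rsa.Encrypt(Encoding.UTF8.GetBytes("hello"), true); // true = OAEP padding
}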
A sample PEM file might look like this :-
-----BEGIN PUBLIC KEY-----
…
-----END PUBLIC KEY-----
What is this? A PEM file which contains your public key. The public key contains your modulus and exponent. It is different from the private key, which contains a lot more information (including the primes P and Q).
To generate an RSA key
openssl genrsa -out private.pem 1024
To view your private key
openssl pkey -in private.pem -text
To generate a public key (your private key already contains your public key)
openssl pkey -in private.pem -out public.pem -pubout
To view your public key (notice the additional -pubin argument)
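openssl pkey -in public.pem -pubin -text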
Sometimes things are not so clear, and they turn mysterious when another person says something different; all you ever get is a huge mess. This is one of those cases.
In this tab, you get to see what operations are running on my server and how long they take.
Operation times show you how long a spike lasts and when the peak operation time happened on the server.
Just underneath "Operation times", you can sub-divide by Request, CPU, Available Memory, and Storage.
The story it is telling is: during the times when a spike occurs, how are my resources holding up?
Did memory get sucked out, and is the CPU reaching a certain limit? (Since it shows a number like 0, 1, 2, I am thinking it means the average across the available CPUs.)
The distribution chart shows where the percentiles of operations lie. In my case, 50% of the time the over-100 requests were processed within 64 milliseconds, and 90% of the time requests were served within a 6.8 second window.
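If you'd rather pull those numbers yourself, a rough Kusto sketch over the standard App Insights requests table (the query is my own, not what the tab runs) would be:

requests
| summarize percentiles(duration, 50, 90) by name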
This is the easiest Q and A example that you can find.
The workflow goes something like this:
1. Use the tokenizer to convert the text into torch tensors. You need to encode it beginning with [CLS] and separated by [SEP]. The statement or knowledge is encoded with 1; the question with 0.
2. Create the BERT QA model and run it.
3. Get the scores out. Interpreting the scores is key: you get the start and end index of an answer, represented by start_scores and end_scores below. It is like using a highlighter to mark out the answer right in our face.
## initialize the model
import torch
from transformers import BertTokenizer, BertForQuestionAnswering
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
### encoding is given as 0 0 0 0 (denotes question) while 1 1 1 1 (denotes answer / knowledge)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
input_ids = tokenizer.encode(input_text, add_special_tokens=False)  # [CLS]/[SEP] already added by hand
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]  # 102 = [SEP]
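Running the model and reading off the highlighted answer could then look like this (a sketch assuming an older transformers version, where the model call returns the (start_scores, end_scores) tuple directly):

start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores) + 1])
print(answer)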
I will keep this short. BERT is a pre-trained model (trained on Wikipedia), which means we skip all the time-consuming work of training and building a vocabulary from scratch. Everything here is ripped off without modification from a Google Colab example (please refer to the link below for all the code).
This example is a classification problem: it classifies whether a sentence is grammatically correct or not.
Dataset used :- CoLA, a dataset for benchmarking grammatical acceptability.
A summary of tasks before getting lost in the crazy code maze:
- Load all the necessary libraries - which include Hugging Face's BERT and pytorch
- Load the CoLA dataset and transform it into the correct format (a sketch follows this list)
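A minimal sketch of those first steps, assuming the CoLA TSV layout used in the Colab example (the file path and column names are assumptions):

import pandas as pd
import torch
from transformers import BertTokenizer

df = pd.read_csv("cola_public/raw/in_domain_train.tsv", delimiter="\t",
                 header=None, names=["sentence_source", "label", "label_notes", "sentence"])
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# encode() adds the [CLS] / [SEP] special tokens for us
input_ids = [tokenizer.encode(s, add_special_tokens=True) for s in df.sentence.values]
labels = torch.tensor(df.label.values)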
It is sometimes just confusing for a developer to specify the correct retention period for Azure diagnostic settings, especially when there is a retention period option next to each type of log / metric.
Which one is which?
If you decided to send the logs to a storage account, then that retention period makes sense.
If you are sending them to an Azure diagnostics (Log Analytics) workspace, then the retention period is whatever you specify for the workspace.
How do you go about changing this?
Go to the Azure diagnostic logging workspace -> Cost and billing -> Data Retention.
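If you prefer the CLI, something like this should do the same (the workspace and resource group names are placeholders of my own):

az monitor log-analytics workspace update --resource-group myRg --workspace-name myWorkspace --retention-time 30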
I happened to have updated my username and password for a git repo that I work with. The thing is, the old info is persisted in the repository settings, giving exceptions whenever I tried to push stuff across.
I bumped into this error and, thanks to Stack Overflow, was able to resolve it via
Tools -> NuGet Package Manager -> Package Manager Console
and typing "dotnet restore".
Or you can go into the command prompt of that project and type "dotnet restore".
Whichever works faster for you, I guess.
Sometimes you might want to send custom metrics data to Azure. A use case would be pushing metric information to signal that something has been processed, so that it gets displayed in an Azure Monitor dashboard.
The following code snippet works using TrackMetric, which is part of the Application Insights library. And yes, the metric (whatever metric you created) is visible in App Insights. Sometimes it is like a witch hunt trying to figure out where the metrics will come out.
// The following line enables Application Insights telemetry collection.
services.AddApplicationInsightsTelemetry();

// This code adds other services for your application.
services.AddControllers();
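And a minimal sketch of actually emitting the custom metric (the class and metric names here are my own; TrackMetric lives on the injected TelemetryClient):

public class Worker
{
    private readonly TelemetryClient _telemetry;
    public Worker(TelemetryClient telemetry) => _telemetry = telemetry;

    public void Process()
    {
        // whatever name you pick here is what you hunt for in App Insights
        _telemetry.TrackMetric("ItemsProcessed", 1);
    }
}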
When I go into Azure Monitor log diagnostics and set up which logs I would like to include in my workspace, I have difficulty figuring out what information I will be getting. To make matters worse, how do I know what types of activities I should be querying, or which will appear in my workspace?
The thing is, it is hard. You can try to have a go at this site here.....
Sometimes you might not have direct internet access, and when that happens, you can be limited.
To get around this, you can authenticate yourself through the proxy. Use the following commands, where proxyserver.com is your proxy :-
npm config set proxy http://"dev.username:password"@proxyserver.com:80
npm config set https-proxy http://"dev.username:password"@proxyserver.com:80
I also noticed that having the following registry setting results in a 418 (or possibly a 404):
npm config set registry "http://registry.npmjs.org/"
So if you have this, try to remove it using npm config edit.
Also, if you have some issues with certificates, you can probably turn off strict SSL checking (at your own risk):
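npm config set strict-ssl false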
In your deployment, you might want to link a function app to a specific service plan in a different resource group. Working with az cli blindly (especially with dodgy documentation) can be a challenge.
Here is a tested az cli script to tie a function app to a specific service plan (that you might have created somewhere else) that might help.
az functionapp create --name $env:environment$env:functionapp_name --resource-group $env:environment$env:resource_group_name --storage-account $env:environment$env:storage_accountname -p /subscriptions/$env:Subscription/resourceGroups/$($env:environment)$env:serviceplan_resource_group/providers/Microsoft.Web/serverFarms/$($env:environment)$env:serviceplan_name --app-insights-key $appInsightId --runtime dotnet --os-type Windows
$xxx are PowerShell variables - you can tell that the script is written in PowerShell.
These are my observations from scaling out an App Service plan in an ASE environment.
How long does it take to scale?
30 to 40 minutes to scale up one instance. Your service plan status will be pending in the meantime.
Where do you view your metrics / logs?
Go to App Insights in your function app or apps. For some reason it is not always up to date.
What metric to choose?
That depends. In my case, I had to scale a function app, so I set it to scale if the function app execution count is greater than 8. I noticed that regardless of how many instances you are already running, for example 3 instances, as long as the function app execution count hits 8 it will scale up to 4 instances.
It is not as if the function app execution units are shared by the current 3 instances: if you have 9 function executions, you get a new instance.
Of course it differs depending on the metric type.
As I push more load into Event Hubs, the partitioning stays pretty much fixed. If you have 2 VMs, those function apps will be processing the same partitions. Process w3wp is…
Typically we set the service plan configuration through the terraform azurerm_app_service_plan's kind setting. These settings are :-
a) App - isolated. Scaling out is manual / autoscale, controlled by your service plan.
b) FunctionApp - free (consumption); sometimes it goes into idle mode.
These plans dictate how your function apps scale. The scale controller decides how your function app gets scaled, and the conditions can vary depending on the type of trigger your function app is running.
For example, if you're running an Event Hub trigger, your app can scale depending on the number of messages.
Here are some interesting service limits for scaling out:
- Consumption plan - event driven
- Premium plan - event driven
- App Service - manual / auto (depends on how you configure your service plan)
Maximum instances:
- Consumption plan - 200 - I think this is going to backfire. Imagine 200 instances each creating a new connection to your database.
- Premium pla…
It depends. If you have the requirement to write to, say, different event hubs in the namespace, then use the event hub namespace connection string. The problem is that if your connection string is compromised, the application can potentially send to all your event hubs. It is always better to have finer control. :)
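For that finer control, you can scope a send-only authorization rule to a single event hub instead of the whole namespace (the names below are placeholders of my own):

az eventhubs eventhub authorization-rule create --resource-group myRg --namespace-name myNamespace --eventhub-name myHub --name SendOnly --rights Send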
It seems like terraform just doesn't like to create the autoscale rule. But if you go and create it manually in the portal, giving it a name, then terraform's autoscaling of the service plan works.
Here is my terraform :-
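Roughly, a minimal sketch of the shape of that autoscale setting (the resource references and the metric are my assumptions; the setting name matches the manually created rule):

resource "azurerm_monitor_autoscale_setting" "example" {
  name                = "devspapptest-Autoscale-9680"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  target_resource_id  = azurerm_app_service_plan.example.id

  profile {
    name = "default"
    capacity {
      default = 1
      minimum = 1
      maximum = 4
    }
    rule {
      metric_trigger {
        metric_name        = "FunctionExecutionCount"
        metric_resource_id = azurerm_function_app.example.id
        time_grain         = "PT1M"
        statistic          = "Average"
        time_window        = "PT5M"
        time_aggregation   = "Average"
        operator           = "GreaterThan"
        threshold          = 8
      }
      scale_action {
        direction = "Increase"
        type      = "ChangeCount"
        value     = "1"
        cooldown  = "PT5M"
      }
    }
  }
}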
Some points to note - I had to reference "azurerm_app_service_plan" here as opposed to manually copying and pasting the resource id. And remember to create a rule called "devspapptest-Autoscale-9680" first, so terraform is able to find it.
So strange ......
To see what metrics are available, go here. Also don't forget to go to App Service Plan -> Scale out -> JSON and match or copy some of the operator values or statistics used; you can practically copy and paste them into your terraform code.
Did you know that when you create a new secret, you need to add "depends_on" to associate it with a key vault access policy? That means Vault -> Policy -> Secret (you need to explicitly add "depends_on" in your secret resource provisioning section).
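A minimal sketch of what that looks like (resource names are placeholders of my own):

resource "azurerm_key_vault_secret" "example" {
  name         = "my-secret"
  value        = "s3cr3t"
  key_vault_id = azurerm_key_vault.example.id

  # without this, terraform may try to create the secret before the
  # access policy that grants write permission exists
  depends_on = [azurerm_key_vault_access_policy.example]
}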
Key exchange algorithms protect the information required to create shared keys. These algorithms are asymmetric (public key algorithms) and perform well for relatively small amounts of data.
Bulk encryption algorithms encrypt the messages exchanged between clients and servers. These algorithms are symmetric and perform well for large amounts of data. Message authentication algorithms generate message hashes and signatures that ensure the integrity of a message.
MAC - common message authentication schemes are HMAC, OMAC, CBC-MAC and PMAC. Newer and better options would be the authenticated encryption modes AES-GCM and ChaCha20-Poly1305.
Yes, sometimes you do need a lot of testing to make sure you get this right.
Let's say you have already set up an ASE (isolated environment) and you would like to associate that single service plan (in resource group A) with a function app in resource group B.
git clone https://github.com/microsoft/SEAL.git
Build the goodies :-
cd native/src
cmake . && make && sudo make install
Don't forget the samples..... good fun :)
cd native/examples
cmake . && make