Posts

Showing posts from 2019

Python using stream io

You should make use of Python's io module, as defined here, to work with stream data. There are basically three main types of stream I/O: text I/O, binary I/O and raw I/O.

Instead of reading everything into memory, you can read the data line by line as a stream....




import subprocess

## assuming r.csv is a large file
self.streamedOutput = subprocess.Popen(['cat', 'r.csv'], stdout=subprocess.PIPE)
self.sendOutput(self.streamedOutput)

while True:
    executionResult = self.streamedOutput.stdout.readline()
    if executionResult:
        print(executionResult)
    else:
        break
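For plain files you can get the same streaming behaviour with Python's built-in io layer, without spawning a subprocess. A minimal sketch (reusing the file name from above):

## minimal sketch: stream a large file line by line using the built-in io layer
with open('r.csv', 'r') as stream:      # text I/O; use 'rb' for binary I/O
    for line in stream:
        print(line.rstrip())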

Linux getting port already in use

Was having an issue trying to bind to a specific port on my machine.
Then this command came to the rescue, :)


sudo netstat -nlp | grep :80

Getting gunicorn to run on a different port

As simple as using the following command :-


gunicorn --bind 0.0.0.0:8888 FalconRestService:api

vscode - enabling python library / module debugging

You can get vscode to step into python library code / imports. All you need is the configuration below.

Once you have it in your launch.json, keep on pressing F11 :)


{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File (Integrated Terminal)",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": false
        }
    ]
}

Kafka setup the scalable way ...

Great link for setting up kafka the way it should be - scalable and reliable. This link is really useful.

https://success.docker.com/article/getting-started-with-kafka


However the images no longer exist, and the commands require some tuning :-

sudo docker service create --network kafka-net --name broker \
    --hostname="{{.Service.Name}}.{{.Task.Slot}}.{{.Task.ID}}" \
    -e KAFKA_BROKER_ID={{.Task.Slot}} -e ZK_SERVERS=tasks.zookeeper \
    qnib/plain-kafka:2019-01-28_2.1.0



Python serializing and deserializing to json

Here is some quick sample code to work with json objects in python.

Deserializing to object


commands = json.loads(commandstrings)
commandResultList = []

for command in commands:
    o = CommandType(**command)
    commandResultList.append(o)


Serializing to Json


import json
cmd1 = CommandType("list", "ls", "al")
cmd2 = CommandType("list", "pwd", "")
cmd3 = CommandType("list", "ls", "www.google.com")

cmds = [cmd1, cmd2, cmd3]
a = json.dumps(cmds, default=lambda o: o.__dict__)
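The snippets above assume a CommandType class that is not shown in the post. A minimal sketch of what it could look like (the field names are my assumption; they must match the keys in the JSON for CommandType(**command) to work):

class CommandType:
    ## hypothetical fields - they need to match the JSON keys exactly
    def __init__(self, category, command, arguments):
        self.category = category
        self.command = command
        self.arguments = arguments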

git squash - interactive rebasing

To squash last 3 commits

git rebase -i HEAD~3


Then you get something like this. Keep one commit as "pick" and change the rest to "s" (squash). In this case, I keep "c8659b4" as the pick.

pick c8659b4 Merged PR 1914: ensure strongly type response messages are factor in.
s 986cad8 Updated azure-pipelines.yml
s bdb2086 1 2 3

As long as you keep at least one pick (the first commit must stay a pick), it will be good. You should then be able to rebase (squash) your commits.





python celery first steps

If you follow the python celery first-steps guide from the official site, you're probably going to get a heart attack trying to get it to work.


Please use the following docker command :


docker run -d -p 5672:5672 rabbitmq


First you need to tell celery which method you would like to register. This allows celery to handle task registration and execution for it.

This is an example of tasks.py


from time import sleep
from celery import Celery

app = Celery('tasks', broker='amqp://guest:guest@localhost:5672')

@app.task
def add(x, y):
    sleep(4)
    print("executing stuff")
    return x + y



Now that you have it registered, the next step is to run it. This queues the "add" task (defined earlier) and puts it into motion.

from tasks import add
add.delay(4, 4)
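Note that the task only runs once a worker is up. Assuming the module above is saved as tasks.py, a typical way to start the worker is:

celery -A tasks worker --loglevel=info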





Azure diagnostic logging - Getting better understanding of what log type means

When I go into Azure Monitor log diagnostics and set up which logs I would like to include in my workspace, I have difficulty figuring out what information I will be getting. To make matters worse, how do I know what types of activities I should be querying for or expecting to appear in my workspace?

The thing is, it is hard. You can try to have a go at this site here.....

https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/azure-monitor/platform/resource-logs-overview.md

spark - pyspark reading from excel files

I guess a common mistake is loading the wrong jar file when reading an excel file. Yes, you have to use the Scala 2.11 build and not 2.12, :)



You can try using the following command line


pyspark --packages com.crealytics:spark-excel_2.11:0.11.1


And use the following code to load an excel file from a data folder. If you have not created this folder, please create it and place an excel file in it.


## the spark-excel package is pulled in via --packages, not via a python import
## using spark-submit to execute the script from the command line:
## spark-submit --packages com.crealytics:spark-excel_2.11:0.11.1 excel_email_datapipeline.py
## or interactively:
## pyspark --packages com.crealytics:spark-excel_2.11:0.11.1

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("excel-email-pipeline").getOrCreate()

df = spark.read.format("com.crealytics.spark.excel") \
    .option("useHeader", "true") \
    .option("inferSchema", "true") \
    .load("data/excel.xlsx")

df.show()

Az cli passing the correct data type with some examples

When you are working with Az cli, you often need to pass in the correct data type. This can be pretty time consuming to figure out the correct syntax and what the data type looks like.

For demo purposes, I assume we are working with an event hub


Update resource 

When you update, you need to use the following syntax. Notice the equals sign, setting the value 'Deny' on a nested property called properties.defaultAction.

az resource update -n "myeventhubnamespace/networkrulesets/default" -g "myfakeresourcegroup" --resource-type "Microsoft.EventHub/namespaces" --set properties.defaultAction=Deny

Adding resources 

When you add, you use the following syntax. Notice there is no equals sign any more. In this scenario, we wanted to add entries to an array / list of key value pairs. The syntax looks something like this :-

az resource update -n "myeventhubnamespace/networkrulesets/default" -g "myfakeresourcegroup" --resource-type "Microsoft.Ev…

spark - connecting to mongodb using mongodb connector

One way of connecting to a mongodb database from Spark (rather than via the usual mongodb driver) is to use the MongoDB Spark connector, as in the following code.

It starts off with the command line (to download the connector from the maven repository), and then you run the code to connect and show the data.


# Start pyspark with the following:
# pyspark --conf "spark.mongodb.input.uri=mongodb://127.0.0.1/apptest.product?readPreference=primaryPreferred" \
#         --conf "spark.mongodb.output.uri=mongodb://127.0.0.1/apptest.product" \
#         --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.1

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mongo-email-pipeline") \
    .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/apptest.product") \
    .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/apptest.product") \
    .getOrCreate()

df = spark.read.format("mongo").load()
df.show()

python import

Looking at the different ways you can import python modules.


Generally it begins with the path to a module. For example, if you have a custom module, you can do something like this ::


from email.emailer import *


That is what the 'from' keyword is for. The asterisk tells python to import everything.


Don't forget the __init__.py file
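For the import above to work, the layout would look something like this (a sketch; the module and function names are assumptions):

email/
    __init__.py          ## makes the folder a package
    emailer.py           ## defines, for example, send_mail()

Be aware that a package literally named email will shadow Python's standard library email module.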







openssl - where you can find some examples code

Did some digging and found out that openssl does have some code examples that you can find here :-
https://github.com/appcoreopc/openssl/tree/master/demos/bio (this is a forked repo; you can find more recent code in the upstream master repo)


Have fun


openssl client example - compiling and running

I assume you have built and installed openssl. So the next step would be to compile the client....






To compile it, linking against the OpenSSL shared libraries,


gcc -o client client.c -lssl -lcrypto -L /opt/openssl/lib -I /opt/openssl/include


Setting up your library path

export LD_LIBRARY_PATH=/opt/openssl/lib

And finally running it using 

./client 


openssl - compiling in the docker ubuntu

To set up openssl, run the following commands


1. docker run -it ubuntu /bin/bash


 2. apt-get update && apt-get install git


3. apt install g++


4. apt install build-essential


5. git clone https://github.com/appcoreopc/openssl.git


6. cd openssl && ./config --prefix=/opt/openssl --openssldir=/opt/openssl enable-ec_nistp_64_gcc_128


7. make depend


8. make











javascript ; different ways of writing return types

I definitely wouldn't have thought about how many ways you can write the same statement..

Ripped off from stackoverflow :-




react with custom hooks

As you can see from the code below, this control uses react hooks with a custom flavour. Pretty easy to understand.

useState gives you the current value and a method to set it

This control exposes onChange which just updates the state

And finally it also exposes value which you can use to show / render into the control what info has been typed by a user.

What is the spread operator doing in there?

It is just doing a mapping and renders something that looks like this
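The original snippet and rendered output were images. As a stand-in, here is a minimal sketch (hook and component names are my assumptions) of the pattern described above - the spread ends up rendering roughly an input with value and onChange wired up:

import { useState } from 'react';

// custom hook: owns the state and exposes value + onChange
function useInput(initialValue) {
  const [value, setValue] = useState(initialValue);
  const onChange = (event) => setValue(event.target.value);
  return { value, onChange };
}

// usage: the spread operator maps { value, onChange } onto the control
function NameInput() {
  const name = useInput('');
  return <input type="text" {...name} />;
}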






async function - different ways of writing

Interestingly there are many ways of writing an async function, which I do forget ... :)

Just to make sure I have documented the right ways....
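The original snippets were images, so for the record here are the common forms (a sketch, not the original code):

// async function declaration
async function loadA() { return 1; }

// async function expression
const loadB = async function () { return 2; };

// async arrow function
const loadC = async () => 3;

// async method shorthand inside an object or class
const api = {
  async loadD() { return 4; }
};

// all of them return a Promise
loadA().then(console.log);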



The specified initialization vector (IV) does not match the block size for this algorithm

If you're getting an exception when trying to assign an IV to a symmetric key algorithm, with the error message below :-

The specified initialization vector (IV) does not match the block size for this algorithm

And your code probably looks something like the below :-


Then you need to go into debug mode and look at the supported sizes for the IV, as shown in the diagram below :-

As you can see, the IV is 16 bytes, so you need to provide 16 bytes. The same goes for the Key field, if you're assigning anything to it.


As long as you provide a valid key size, then we're good and your code will be ready.





npm install package using proxy

Sometimes you might not have direct internet access, and when that happens you can be limited.
To get around this, you can authenticate yourself through the proxy using the following commands,
where proxyserver.com is your proxy.

npm config set proxy http://"dev.username:password"@proxyserver.com:80
npm config set https-proxy http://"dev.username:password"@proxyserver.com:80

I also noticed that setting the registry below results in a 418 (or possibly a 404) error.

npm config set registry "http://registry.npmjs.org/"

So if you have this, try to remove it using npm config edit


Also if you have some issues with certificate, you can probably set

npm set strict-ssl false

apollo server mutation sample

Sample mutation query for testing against an apollo server :-
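The original sample was lost with the post's images. A generic mutation of the kind you would paste into the apollo server playground looks like this (the addBook field and its arguments are assumptions - they depend entirely on your schema):

mutation {
  addBook(title: "Fox in Socks", author: "Dr. Seuss") {
    title
    author
  }
}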














Azure key vault - enabling protection from delete and purging

Soft delete - means your item is marked for deletion but your keys are not removed from the system.

Purging is like 'emptying' your recycle bin. Purge everything and then you won't see it again. If you have a soft-deleted key, you can still purge it and your key still goes missing.

That's why purge protection is important too.

Here are some considerations when working with soft delete and purging a vault

1. You cannot undo it. You cannot change purging back to false. You cannot change soft delete back to false once you have enabled it.

2. You need to use the CLI to recover a deleted key or vault.

3. If you purge your vault, you still get charged for it until it is really removed
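A hedged sketch of the az cli calls involved (the vault name is a placeholder; check your az version, as the soft delete / purge protection flags have changed over time):

## turn on purge protection for an existing vault (cannot be turned off again)
az keyvault update --name myvault --enable-purge-protection true

## recover a soft-deleted vault
az keyvault recover --name myvault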



React Test Util - Find control by Id

The following code takes advantage of the react test utils to find a control by id :-


react-test-renderer - Cannot read property 'current' of undefined

You need to have the same version of react and react-test-renderer for this to work.
As you can see from my package.json file :-

"react": "16.8.6",
"react-test-renderer": "16.8.6"

a react library called create scripts...

react-scripts is a package that provides a zero-configuration way of setting up and running your react project.

It provides a dev server that supports hot reloading of your react app.


deploying static website (react) to storage account using az cli

The following script deploys compiled web assets into a storage account in Azure.
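The script itself was an image. A minimal sketch of the core command (the account name and build folder are placeholders), assuming static website hosting is already enabled so the $web container exists:

az storage blob upload-batch --account-name mystorageaccount -d '$web' -s ./build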

Live sharing with visual studio

To fire off your live share, just click on the icon at the far right of your visual studio window.



It will generate a link which you can copy and share with your team members.

Your team member then copies and pastes that into a browser, which fires up visual studio. And then you can see who is changing what....... if you open the same file. :)


Sweet! that's it



az cli number of workers used for a webapp

I use the following script to configure the number of workers used for a webapp
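The script was an image, so as a stand-in: one way to do this (an assumption, not necessarily what the original script did) is to set the worker count on the service plan that hosts the webapp:

az appservice plan update --name myserviceplan --resource-group myresourcegroup --number-of-workers 3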







terraform - specifying the size of instance for a service plan

You can specify the number of instances you want for a service plan, as shown in the code below :-

The magic number can be set / configured via a field named "capacity"
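The original snippet was an image; a minimal sketch (names are placeholders) of where capacity sits:

resource "azurerm_app_service_plan" "example" {
  name                = "example-plan"
  location            = "westeurope"
  resource_group_name = "example-rg"

  sku {
    tier     = "Standard"
    size     = "S1"
    capacity = 2   ## the magic number - how many instances the plan runs
  }
}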


Redux - is there any way to explain it easier

First you need a store. This store is the central repository of data and you can work with data here.

#1 Code

import { createStore } from 'redux'
const reduxstore = createStore(appreducers)

What is a reducer? It sets the value of our data. It could be a single record update or appending to a list.

<Provider store={reduxstore}> </Provider>

When you set this up, you're making the store available to the components underneath it.


#2 - When you have your store, what do you do next?

You want your component to be able to work with it, meaning you send data and get certain data that you need.

First you need to use "connect" from react-redux and wire it to your component.

Importing the connect function


import { connect } from 'react-redux'

Next you define a simple component like you would normally.
And finally you make your store accessible to your component using the code below.


export default connect(mapStateToProps, mapDispatchToProps)(AddToCart)
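For completeness, the mapStateToProps and mapDispatchToProps referenced above could look something like this (a sketch - the state shape and action type are assumptions):

// pick the slice of the store this component cares about
const mapStateToProps = (state) => ({
  cartItems: state.cart.items
});

// wrap dispatch calls so the component can just call props.addToCart(item)
const mapDispatchToProps = (dispatch) => ({
  addToCart: (item) => dispatch({ type: 'ADD_TO_CART', payload: item })
});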

Azure linking up different resource groups into a service plan

In your deployment, you might want to link a function app from a different resource group to a specific service plan. Working with az cli blindly (especially with dodgy documentation) can be a challenge.

Here is a tested az cli script to tie a function app to a specific service plan (that you might have created somewhere else) that might help.





 az functionapp create --name $env:environment$env:functionapp_name --resource-group $env:environment$env:resource_group_name --storage-account $env:environment$env:storage_accountname -p /subscriptions/$env:Subscription/resourceGroups/$($env:environment)$env:serviceplan_resource_group/providers/Microsoft.Web/serverFarms/$($env:environment)$env:serviceplan_name --app-insights-key $appInsightId --runtime dotnet --os-type Windows

$xxx are powershell variables, which is how you can tell that the script is written in powershell.



git pull and push with username password

Just use the command below and provide your username and password :-

git pull https://yourUsername:yourPassword@github.com/repoInfo

When you're trying to push upstream and getting some errors :

git push --set-upstream origin feature/stored-proc-refactor

And then it prompts you for a password for an invalid username, you can do something like this

git remote set-url origin https://dev.azure.com/jeremy/some_distant_repo/_git/super_awesome_project_name


javascript prototype

__proto__ is an internal property of an object pointing to its prototype. It is like the python way of using double underscores to refer to internal object attributes.

Prototype in javascript is used to reference the parent. You can find a very clear and good description of this here.














Recent Http2/DOS attack


CVE-2019-9511 HTTP/2 Data Dribble
CVE-2019-9512 HTTP/2 Ping Flood
CVE-2019-9513 HTTP/2 Resource Loop
CVE-2019-9514 HTTP/2 Reset Flood
CVE-2019-9515 HTTP/2 Settings Flood
CVE-2019-9516 HTTP/2 0-Length Headers Leak
CVE-2019-9518 HTTP/2 Request Data/Header Flood

Nextjs - How do you handle onclick which do something

You can use the following code to do it :-
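The code was an image; a minimal sketch (component and handler names are assumptions):

function HomePage() {
  // plain event handler - put whatever the click should trigger in here
  const handleClick = () => {
    console.log('button clicked');
  };

  return <button onClick={handleClick}>Do something</button>;
}

export default HomePage;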






Whitelisting Azure event hub

To decide which IP addresses to whitelist for your event hub, you can use the following command to get some legitimate ip addresses

nslookup <your-namespace>.servicebus.windows.net

nextjs - how i deploy it on a docker container running nginx

You need to have something like the prod setup below :-
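The prod setup was an image. One possible version of it (an assumption): export the app to static files first, then serve the output folder with nginx as below.

npx next build
npx next export        ## writes static files to ./out by default
cd out                 ## so that $(pwd) in the docker command below points at the assets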





Next, run your docker container and mount the current folder to the nginx default html folder,


sudo docker run  -p 8008:80 -v $(pwd):/usr/share/nginx/html nginx


Fire up your browser and then type localhost:8008



Azure autoscaling - how it works from my observation

This is my observation with scaling out app service plan in an ASE environment.

How long does it take to scale?
30 to 40 min to scale out one instance. Your service plan status will be pending.

Where do you view your metrics / logs?
Go to app insights in your function app or apps. For some reason it is not always up to date.

What metric to choose?

That depends. In my case, I had to scale a function app, so I set it to scale if the function app execution count is greater than 8. I noticed that regardless of how many instances you are already running - for example 3 instances - as long as the function app execution count hits 8 it will scale up to 4 instances.




It is not as if the function app execution units are shared by the current 3 instances. If there are 9 function executions, then you get a new instance.

Of course it differs depending on the metric type.

Event hubs

As I push more load to event hubs, partitioning is pretty much fixed. If you have 2 VMs, then those function apps will be processing the same partition. Process wpw3 is…

Scaling with metrics

To get a list of metrics supported, please go here.

Azure functions plans types

Typically we can have the following service plan configurations when setting the kind field of terraform's azurerm_app_service_plan (see the sketch after this list). The settings are :-

a) App - isolated. Scaling out is manual / autoscale, controlled by your service plan

b) FunctionApp - free, sometimes it goes into idle mode.

c) Linux

d) Windows
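A minimal sketch of where that kind setting lives (resource names and sku values are placeholders; Dynamic/Y1 is the usual pairing for a consumption FunctionApp plan):

resource "azurerm_app_service_plan" "fn_plan" {
  name                = "example-fn-plan"
  location            = "westeurope"
  resource_group_name = "example-rg"
  kind                = "FunctionApp"   ## one of the options listed above

  sku {
    tier = "Dynamic"
    size = "Y1"
  }
}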


These plans dictate how your function app scales. The scale controller decides how your function app gets scaled, and the conditions vary depending on the type of trigger that you're running.

For example, if you're using an event hub trigger, your app can scale depending on the number of messages.

Here are some interesting service limits for scaling out

-Consumption plan - event driven 

-Premium plan - event driven 

-App service - manual / auto (depends on how you configure your service plan) 


Max Instances

-Consumption plan - 200 - I think this is going to backfire. Imagine you have 200 instances creating new connections to your database.

-Premium pla…

Azure event hub namespace connection string or event hub connection string?

It depends. If you have a requirement to write to, say, different event hubs in the namespace, then use the event hub namespace connection string. The problem is that if your connection string is compromised, the application can potentially send to all your event queues. It is always better to have finer control. :)

So I would use the event hub level connection string.


terraform creating rule for auto scaling for service plan

It seems like terraform just doesn't like creating the rule. But if you go and create it manually in the portal, giving it a name, then the terraform auto scaling of the service plan works.

Here is my terraform :-
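The original terraform was lost with the post's images; a sketch of the general shape of an autoscale setting (the metric, thresholds and resource references are placeholders, not the original values) looks like this:

resource "azurerm_monitor_autoscale_setting" "example" {
  name                = "devspapptest-Autoscale-9680"
  resource_group_name = "${azurerm_resource_group.example.name}"
  location            = "${azurerm_resource_group.example.location}"
  target_resource_id  = "${azurerm_app_service_plan.example.id}"

  profile {
    name = "defaultProfile"

    capacity {
      default = 1
      minimum = 1
      maximum = 3
    }

    rule {
      metric_trigger {
        metric_name        = "CpuPercentage"
        metric_resource_id = "${azurerm_app_service_plan.example.id}"
        time_grain         = "PT1M"
        statistic          = "Average"
        time_window        = "PT5M"
        time_aggregation   = "Average"
        operator           = "GreaterThan"
        threshold          = 75
      }

      scale_action {
        direction = "Increase"
        type      = "ChangeCount"
        value     = "1"
        cooldown  = "PT5M"
      }
    }
  }
}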

Some points to note - I have to use the "azurerm_app_service_plan" reference here as opposed to manually copying and pasting the resource id. And remember to create a rule called "devspapptest-Autoscale-9680" so terraform will be able to find it.

So strange ......











To see what metrics are available, go here. Also don't forget to go to App Service Plan -> Scale out -> Json and try to match or copy some of the operator values or statistics used. You can pretty much copy and paste them into your terraform code.

Terraform secret rule of thumb - keyvault has to depend on policy :)

Did you know that when you create a new secret, you need to add "depends_on" to associate it with the key vault access policy? That means Vault -> Policy -> Secret (you need to explicitly add "depends_on" in your secret resource provisioning section).
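A sketch of what that looks like in terraform (resource names are placeholders):

resource "azurerm_key_vault_secret" "example" {
  name         = "example-secret"
  value        = "example-value"
  key_vault_id = "${azurerm_key_vault.example.id}"

  ## wait for the access policy, not just the vault, or the write can fail
  depends_on = ["azurerm_key_vault_access_policy.example"]
}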



Understanding TLS and its cipher suite - Part 1

Key exchange algorithms protect the information required to create shared keys. These algorithms are asymmetric (public key algorithms) and perform well for relatively small amounts of data.

Bulk encryption algorithms encrypt messages exchanged between clients and servers. These algorithms are symmetric and perform well for large amounts of data.

Message authentication algorithms generate message hashes and signatures that ensure the integrity of a message. Other schemes used here include HMAC.










MAC - Common message authentication schemes are HMAC, OMAC, CBC-MAC and PMAC. Newer and better ones would be AES-GCM and ChaCha20-Poly1305.



Setup azure function app to a service plan

Yes, sometimes you do need a lot of testing to make sure you get this right.
Let's say you have already set up an ASE (isolated environment) and you would like to associate a single service plan (in resource group A) with a function app in resource group B.

How do you do that?
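With az cli, one approach that works (a sketch - the names and ids are placeholders) is to point --plan at the full resource id of the plan sitting in resource group A:

az functionapp create --name myfunctionapp \
    --resource-group resource-group-B \
    --storage-account mystorageaccount \
    --plan "/subscriptions/<subscription-id>/resourceGroups/resource-group-A/providers/Microsoft.Web/serverFarms/my-ase-plan"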




With Az Powershell?










azure service plan - scaling out and scaling up

Scaling up means increasing your computing resources - instead of running your app on, say, 4 GB, you say I want to run it on a 16 GB machine.

Scaling out means increasing the number of VMs running your existing application. You may have 1 VM running your app right now; let's increase this to, say, 2 or 4. There is a limit of 100 if you're on an isolated plan.

How does this relate to a service plan? Well, the service plan controls the scaling of your resources.

nextjs optimization

Lazy loading a module is achieved through a dynamic import (see the sketch below)




Lazy loading components
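Both snippets were images; here is a sketch covering the two cases (the module and component names are assumptions):

import dynamic from 'next/dynamic';

// lazy loading a module: import it only when it is actually needed
async function search(term) {
  const Fuse = (await import('fuse.js')).default;
  return new Fuse([]).search(term);
}

// lazy loading a component with next/dynamic
const LazyChart = dynamic(() => import('../components/Chart'));

export default function Page() {
  return <LazyChart />;
}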






terraform azurerm_app_service_plan

This creates an application web service and a service plan. Not to be confused with an App Service Environment.


Some useful port in Azure

Some ports that you will often work with in Azure.


Use                              Ports
HTTP/HTTPS                       80, 443
FTP/FTPS                         21, 990, 10001-10020
Visual Studio remote debugging   4020, 4022, 4024
Web Deploy service               8172


ASE app service plan - 1 instance per service plan

Did you know that in an ASE, one service plan typically means you are running at least 1 VM?
Well, you know now.... that's going to cost.

It's probably best to merge everything into a single service plan.. :)

Also, turning on diagnostic logging is expensive

react : functional component vs class component

Was playing around with Nextjs and the code that keeps popping up is functional components (as shown in the code below).

The difference between a functional and a class component:


Functional component 
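The original snippet was an image; a minimal sketch:

function Welcome(props) {
  return <h1>Hello, {props.name}</h1>;
}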



Class component 
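Again the snippet was an image; the equivalent class component sketch:

import React from 'react';

class Welcome extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}</h1>;
  }
}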





nextjs - pushing client side javascript

When using SSR, you probably need to push some client side code. This requires some manual configuration.

First you need a folder called static (it has to be static, otherwise it won't work), place your javascript code in it, and then reference it from your index.js or index.tsx.
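The snippet was an image; a sketch of referencing the file (the file name is an assumption) from a page via next/head:

import Head from 'next/head';

export default function Index() {
  return (
    <div>
      <Head>
        {/* served by Next.js from the static/ folder */}
        <script src="/static/client.js"></script>
      </Head>
      <p>Hello from Next.js</p>
    </div>
  );
}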





And oh, you need to "Reload that page" / F5

Microsoft Seal - Setting up your docker on Ubuntu

If you're thinking of working with Microsoft Seal library, then get your hands dirty with a Linux build.

To setup your dev environment

docker run -it mcr.microsoft.com/dotnet/core/sdk:2.2-bionic /bin/bash


Then start to install

apt-get update

 apt-get install software-properties-common
 apt-get install cmake
 apt-get install git
 apt-get install g++



 git clone https://github.com/microsoft/SEAL.git
Build the goodies 
cd SEAL
cd native/src
cmake .
make
sudo make install
cd ../..

Don't forget the samples..... good fun :)

cd native/examples
cmake .
make
cd ../..





warning: Error disabling address space randomization: Operation not permitted

When trying to run gdb in a docker container, I got this nice error :-


This is a solution, which i will try later ..  :)

docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined

CMAKE_CXX_COMPILER-NOTFOUND "was not found."

Solution : apt-get install g++

PALISADE - Library Compile for Linux Ubuntu

Get the tools 

apt-get install software-properties-common
sudo add-apt-repository ppa:george-edison55/cmake-3.x
sudo apt-get update
sudo apt-get install cmake
sudo apt-get install g++
sudo apt-get install git
Clone the repo

git clone https://git.njit.edu/palisade/PALISADE.git


Configure and build 

Go into your cloned directory

./configure
./make

Azure - how do you upload your react assets (actually for any assets) into static storage web enabled account

This is the script that I used to deploy my react assets from azure devops into an azure storage account :






Webdeploy to ASE environment breaks after forcing TLS / SSL upgrade

If your deployment suddenly stops working when someone sets the minimum to TLS 1.2/1.3 or prevents TLS 1.0 from being used,

then devops code deployment will keep on complaining that it was cut off from the tcp stream.

https://support.microsoft.com/en-nz/help/3206898/enabling-iis-manager-and-web-deploy-after-disabling-ssl3-and-tls-1-0


Enabling debug for Azure Dev ops

Set the variable name System.Debug  to true


Enable c# with c++ dll debugging options in visual studio

In your c# project, right click -> Properties -> Debug -> Enable Native Code debugging. You're all set.



az cli - setting variable on a windows machine build

Did you know that you need to use the following command to set variable if you're using Azure DevOps Az Cli running on a Windows machine?


For /f %%i in ('az keyvault secret show --vault-name "Your-KeyVault-Name" --name "Your-Secret-Name" --query "value"') do set "password=%%i"
Don't ask me why..just weird

Azure WebDeploy and Kudu

Regardless of whether it is Azure or not, when you use WebDeploy, you're using port 8172 to do your deployment. Unlike zip deployment, webdeploy does not use Kudu service.

That also means any service, like VSTS, that uses webdeploy does not use Kudu.

Why is this important? When the security team starts to knock on your door asking for everything to be locked down, you need to know which ports are important.

Kudu service uses port 80 / 443.

How does Kudu deploy



Function app ASE

ASE (App Service Environment) is an isolated environment for you to run your code on.


If you use Terraform 

Unfortunately if you're using terraform, you get an error message trying to provision a function app that ties to a service plan (ASE).

Status code nil, nil - not a very helpful message

Issue is tracked here.

If you still want to use terraform, get it to create the service plan and stop. Don't provision your function app. The service plan you just created is tied to an environment id.

Then use az cli to create a function app and manually tie it to the service plan created earlier.

You also need to setup system identity and somehow add that into resources like keyvault and all the goodies.

If you use Az Cli

Creating a service plan that ties to an ASE is not supported. Look up the service plan options - you cannot create an Isolated plan using az cli.

Microsoft has just moved this into their backlog

Powershell AZ 2.4

The only option is to use Powershell AZ.


My case 

Since i am using Terraf…

Azure devops - debugging pipeline using System.Debug

One cool feature that you can turn on whenever you try to troubleshoot build issues in Azure Devops is "System.Debug". Create a new variable called "System.Debug" and set it to true.

Run your pipeline and you will see a bunch of messages.


git apply patch done properly

Totally agree with the way this has been done.

Getting to know what changes the patch makes:
git apply --stat 0001-file.patch

Initiate a dry run to detect errors:
git apply --check 0001-file.patch

Finally, use git am to apply your patch as a commit: it allows you to sign off an applied patch, which can be useful for later reference.
git am --signoff < 0001-file.patch

Terraform - configuring function app to use existing ASE

It is somewhat hard to get the settings right for terraform if you don't run it multiple times.
In this example, I had so many errors, and I found out that if you're referencing an existing ASE plan, you'd better make sure it matches - in terms of TIER and SIZE. Otherwise your service plan is as good as no service plan.





resource "azurerm_app_service_plan" "ase-service-plan" {
  name                         = "${var.environment}${var.service_plan_name}"
  resource_group_name          = "${azurerm_resource_group.azure-rg.name}"
  location                     = "${var.location}"
  kind                         = "FunctionApp"
  app_service_environment_id   = "/subscriptions/your-subscription-id/resourceGroups/yourResourceGroup/providers/Microsoft.Web/hostingEnvironments/your-ASE-name"
  maximum_elastic_worker_count = 1   ## required - Isolated ASEV2

  ## Best to match the existing ASE plan
  sku {
    tier     = "Isolated"
    size     = "I1"
    capacity = 1
  }
}

Nodejs - Loading from node_modules from a parent directory

Interestingly, node_modules libraries are loaded from the child folder first, then node works its way up through the parent folders until the root.

That's not really what the docs say.

To resolve this, either add the package dependencies to the parent node_modules or remove that folder.

:)

npm audit error - package vulnerabilities

Ran into this error during my npm build. Awesome treat of the day, it seems.
Time to get cracking with resolving inflight npm library issues.

https://gist.github.com/appcoreopc/7b335ac237cd7198027a3af43d895bef

When I encountered this issue, none of my builds worked. It just showed the following error and exited.

Good thing it asked me to use

npm audit fix

to fix stuff and it works. Obey the npm cli :)

npm react script error - The react script package provided by Create React App requires a dependency

Bumped into this error today:

The react-scripts package provided by Create React App requires a dependency:

"jest" : "24.7.1"

To resolve this, apply the changes suggested here.



You also need to run the following.......


npx npm-force-resolutions
npm install



Funny thing is that this doesn't work for me.

Az cli - setting diagnostic logs for event hub

This is a script that allows you to set up diagnostic logging for keyvault and event hub. You can easily use it for other resources as well.

First of all you start off with something simple like this, to enable diagnostic logging for a vault called "myvault". Unless you pass a full resource id, you need to provide the resource group info. (Please note - the resource group is the one the vault resides in.)

When it comes to --workspace, ideally it is best to pass the full resource path to your workspace (see below).




az monitor diagnostic-settings create -n "lalala" --resource "myvault" -g "devrgpmtengine" --resource-type "Microsoft.KeyVault/vaults" --workspace "mydevworkspace" --metrics '[{"category": "AllMetrics","enabled": true,"retentionPolicy": {"enabled": false, "days": 0 }}]'




When it comes to --workspace, it is best to have something that looks like this - the full resource path to your workspace. It looks like the figure b…