Posts

hardhat deploying application to sepolia testnet

To deploy to a testnet, let's use an Alchemy endpoint as the RPC URL - https://eth-sepolia.g.alchemy.com/v2/xxxxxxxxxxxxxxxxxxxxxxxxx. Please register a free account and select all the testnets. We also need to set the wallet private key (please do not share your actual private key). This account must have some $$$$. If you don't have any, you can use the Sepolia faucet to send some over to your MetaMask or any other wallet: https://cloud.google.com/application/web3/faucet/ethereum/sepolia Why use Google? Others like Alchemy require at least some $$$ in your wallet before you can deploy anything.

Ensure that we have the following network configuration:

    hardhatOp: {
      type: "edr-simulated",
      chainType: "op",
    },
    sepolia: {
      type: "http",
      chainType: "l1",
      url: configVariable("SEPOLIA_RPC_URL"),
      accounts: ...
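The accounts line is cut off above; it would typically pull the private key from a config variable as well. Here is a minimal sketch of what the full config might look like, assuming the key is stored under a variable named SEPOLIA_PRIVATE_KEY (that name is my assumption - use whatever you set):

    import type { HardhatUserConfig } from "hardhat/config";
    import { configVariable } from "hardhat/config";

    const config: HardhatUserConfig = {
      networks: {
        // Local simulated OP-stack network, as in the snippet above.
        hardhatOp: {
          type: "edr-simulated",
          chainType: "op",
        },
        // Sepolia over HTTP, reading both secrets from config variables.
        // SEPOLIA_PRIVATE_KEY is an assumed variable name.
        sepolia: {
          type: "http",
          chainType: "l1",
          url: configVariable("SEPOLIA_RPC_URL"),
          accounts: [configVariable("SEPOLIA_PRIVATE_KEY")],
        },
      },
    };

    export default config;

With that in place, the deployment would typically go through Hardhat Ignition, e.g. npx hardhat ignition deploy ignition/modules/Counter.ts --network sepolia.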

hardhat setting up project

To start off, you need to run "pnpm dlx hardhat --init". Next it will ask you a series of questions, for which you can just choose yes. You should then be asked to install the necessary dependencies using the following command:

    pnpm add --save-dev "hardhat@^3.1.7" "@nomicfoundation/hardhat-toolbox-viem@^5.0.2" "@nomicfoundation/hardhat-ignition@^3.0.7" "@types/node@^22.8.5" "forge-std@foundry-rs/forge-std#v1.9.4" "typescript@~5.8.0" "viem@^2.43.0"

Otherwise you will get weird issues like forge-std missing. If all goes well, the project scaffolding and dependencies are in place.
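For orientation, the generated hardhat.config.ts looks roughly like the sketch below. This is an assumption about the scaffold, not a verbatim copy - the exact contents vary by Hardhat version:

    import type { HardhatUserConfig } from "hardhat/config";
    import hardhatToolboxViem from "@nomofoundation/hardhat-toolbox-viem".replace ? // see note
    // Correct import:
    // import hardhatToolboxViem from "@nomicfoundation/hardhat-toolbox-viem";

    const config: HardhatUserConfig = {
      // The toolbox bundles the viem helpers, test runner and Ignition support.
      plugins: [hardhatToolboxViem],
      // Solidity version is my assumption; the init picks a recent 0.8.x.
      solidity: "0.8.28",
    };

    export default config;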

hardhat npx hardhat test Error HHE902: There was an error while resolving the import "forge-std/Script.sol" from "./contracts/script/Counter.s.sol": The package "forge-std" is not installed.

When running the following command:

    npx hardhat test

    Error HHE902: There was an error while resolving the import "forge-std/Script.sol" from "./contracts/script/Counter.s.sol": The package "forge-std" is not installed.

This happens because the forge-std package is not installed as a project dependency. Please ensure that your package.json has the following settings:

    {
      "name": "hardhat2-example",
      "version": "1.0.0",
      "type": "module",
      "devDependencies": {
        "@nomicfoundation/hardhat-ignition": "^3.0.7",
        "@nomicfoundation/hardhat-toolbox-viem": "^5.0.2",
        "@types/node": "^22.19.9",
        "forge-std": "github:foundry-rs/forge-std#v1.9.4",
        "hardhat": "^3.1.7",
        "typescript": "~5.8.3",
        "viem": ...
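After updating package.json, re-install and re-run the test to confirm the error is gone (assuming pnpm, as used for the project setup above):

    pnpm install
    npx hardhat test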

hardhat Cannot find package '@nomicfoundation/hardhat-toolbox-viem' imported from /home/nzai/work/ether/hardhat-example/hardhat.config.ts

When running "npx hardhat test", I ran into this error:- Cannot find package '@nomicfoundation/hardhat-toolbox-viem' imported from /home/nzai/work/ether/hardhat-example/hardhat.config.ts To resolve this npm install --save-dev @nomicfoundation/hardhat-toolbox-viem viem   

kubectl ai setup and getting started

Setup and getting started with kubectl-ai: https://github.com/GoogleCloudPlatform/kubectl-ai?tab=readme-ov-file#installation

getting started with Google GenAI SDK

To get started with the Google GenAI SDK, first you need to create an API key here: https://aistudio.google.com/app/api-keys After that, you can fire up your Python env and install the SDK:

    pip install -q -U google-genai

Set up the API key in your environment:

    export GOOGLE_API_KEY="your-key-here"

And then do a hello-world Python script with the following code:

    from google import genai

    # The client picks up the API key from the environment
    # variable GOOGLE_API_KEY, as exported above.
    client = genai.Client()

    response = client.models.generate_content(
        model="gemini-3-flash-preview",
        contents="Explain how AI works in a few words",
    )
    print(response.text)
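If you want the answer to arrive incrementally instead of in one block, the SDK also has a streaming variant. A minimal sketch, assuming the same client and model as above:

    from google import genai

    client = genai.Client()

    # Stream the answer chunk by chunk instead of waiting for the full reply.
    for chunk in client.models.generate_content_stream(
        model="gemini-3-flash-preview",
        contents="Explain how AI works in a few words",
    ):
        print(chunk.text, end="")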

rancher deleting service from the cluster the right way

After de-registering the cluster, we still have some namespaces, rolebindings, roles and pods in the cluster that need to be cleaned up. To do that without your resources waiting indefinitely on finalizers, do the following:

    curl -O https://raw.githubusercontent.com/rancher/rancher/master/cleanup/user-cluster.sh
    chmod +x user-cluster.sh
    ./user-cluster.sh rancher/rancher-agent:v2.13-head

If everything runs well, the leftover Rancher resources are removed.
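To confirm the cleanup worked, you can check that the Rancher-managed namespaces are gone (assuming the usual cattle-* naming Rancher uses for its resources):

    kubectl get namespaces | grep cattle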

keda 2.18.3 upgrade issue

I guess this can happen on any version of KEDA: the metrics API server reports this error after an upgrade/deployment.

    grpc: addrConn.createTransport failed to connect to {Addr: "10.0.224.114:9666", ServerName: "keda-operator.keda.svc.cluster.local:9666", }. Err: connection error: desc = "transport: authentication handshake failed: read tcp 10.244.43.200:60114->10.0.224.114:9666: read: connection reset by peer"

The solution: delete a secret called kedaorg-certs in the keda namespace and let KEDA re-generate it by doing a deployment rollout for all KEDA pods.

Check that the certificate secret exists:

    kubectl get secrets -n keda

Proceed to delete the certificate:

    kubectl delete secret kedaorg-certs -n keda

Redeploy the instances:

    kubectl rollout restart deployment keda-operator -n keda
    kubectl rollout restart deployment keda-operator-metrics-apiserver -n keda

Right at the end you will see the connection being established.
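To verify, you can tail the metrics API server logs and check that the handshake errors have stopped:

    kubectl logs deployment/keda-operator-metrics-apiserver -n keda --tail=50 -f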

aws role based policy understanding

In this setup, you create a role and then give it some permissions. Then whenever a user would like to use it, they assume this role. Typically this can be done simply with:

    aws sts assume-role --role-arn arn:aws:iam::my-aws-id:role/s3-power-user --role-session-name jeremy-session

And you can test it out simply by running the following commands:

    aws s3 ls s3://appjerwo-demo-test
    aws s3 cp test.txt s3://appjerwo-demo-test/
    aws s3 cp s3://appjerwo-demo-test/test.txt .

A typical trust policy on the role would look like this. The key here is Action: "sts:AssumeRole".

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::(my-aws-id):root"
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
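Note that assume-role only returns temporary credentials in its JSON output; to actually run the s3 commands above under the role, export them first. A minimal sketch, with placeholder values:

    # Values come from the Credentials block of the assume-role output.
    export AWS_ACCESS_KEY_ID="ASIA..."
    export AWS_SECRET_ACCESS_KEY="..."
    export AWS_SESSION_TOKEN="..."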

aws s3 bucket policy mistake

When configuring an AWS policy, it can get tricky, as I am using this policy on my bucket:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Principal": {
                    "AWS": "arn:aws:iam::(masked-not-actual):user/jeremydev"
                },
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:*"
                ],
                "Resource": [
                    "arn:aws:s3:::appjerwo-demo-test"
                ]
            }
        ]
    }
    ...
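A common pitfall with a policy like this: object-level actions such as s3:GetObject and s3:PutObject match object ARNs, not the bucket ARN, so with Resource pointing only at the bucket those actions never apply. Assuming that is the mistake in question, the Resource would need both ARNs:

    "Resource": [
        "arn:aws:s3:::appjerwo-demo-test",
        "arn:aws:s3:::appjerwo-demo-test/*"
    ]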

coredns - beginner guide

To get started with coredns, we can edit its config map in kube-system:

    kubectl edit cm/coredns -n kube-system

And then we are going to add the following server block:

    apiVersion: v1
    data:
      Corefile: |
        hello.test:53 {
          errors
          log
          hosts {
            10.0.0.42 hello.test
          }
          reload
        }
        .:53 {
            errors
            health {
               lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
               pods insecure
               fallthrough in-addr.arpa ip6.arpa
               ttl 30
            }
            prometheus :9153
            ...
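To check that the new zone resolves, a quick test from inside the cluster (the busybox image is just a convenient choice here):

    kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- nslookup hello.test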