Posts

Showing posts from July, 2023

vercel deployment of my nextjs app is so easy

Just go to vercel.com, create a user and link it up to your GitHub account. From there, it can detect your project and set up your deployment. Under deployment, click on Deploy. Once it is done, just click on "Visit site" and you're done. Easy!

Function declarations are not allowed inside blocks in strict mode when targeting 'ES3' or 'ES5'. Modules are automatically in strict mode.

Getting this error, and it is quite easy to resolve: just convert

    async function getJobListing()

to use a const, as shown below:

    const getJobListing = async function () {
        const jobs = await prisma.jobs.findMany({
            // ...
        })
        return jobs
    }
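For context, the compiler raises this error when a function declaration sits inside a block (an if, a try, etc.) while targeting ES3/ES5. A minimal before/after sketch - the surrounding handler, error handling, and the prisma instance are my own framing, not from the original snippet:

    import { PrismaClient } from '@prisma/client'
    import { NextApiRequest, NextApiResponse } from 'next'

    const prisma = new PrismaClient()

    export default async function handler(req: NextApiRequest, res: NextApiResponse) {
        try {
            // Not allowed here when targeting ES3/ES5:
            // async function getJobListing() { ... }

            // Allowed: a function expression assigned to a const
            const getJobListing = async function () {
                return prisma.jobs.findMany()
            }
            res.json(await getJobListing())
        } catch (e) {
            res.status(500).json({ error: 'failed to load jobs' })
        }
    }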

Missing "key" prop for element in iterator react/jsx-key

This blog post solved it for me: https://bobbyhadz.com/blog/react-missing-key-prop-for-element-in-iterator
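In short, every element returned from an iterator needs a key prop. A sketch of the fix - the Job shape and names here are assumptions for illustration:

    type Job = { id: string; title: string }   // illustrative shape

    export function JobList({ jobs }: { jobs: Job[] }) {
        return (
            <ul>
                {/* Each element produced by .map() needs a stable, unique key */}
                {jobs.map(job => <li key={job.id}>{job.title}</li>)}
            </ul>
        )
    }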

nextjs how do you get value from your query parameters?

Let's say you hit /api/jobs/1234 on your nextjs endpoint. How do you get the 1234 value? Since 1234 is automatically mapped to id (assuming a dynamic route file like pages/api/jobs/[id].ts), you access it via req.query.id as shown below:

    import { PrismaClient } from '@prisma/client'
    import { NextApiRequest, NextApiResponse } from 'next'

    export default async function handler(req: NextApiRequest, res: NextApiResponse) {
        console.log(req.query.id)
        res.json("hiring party id")
    }

What if you have something like this: /api/jobs/1234?name=testdemoapp? Since the query string is parsed as a dictionary, you can access it using the code below:

    import { PrismaClient } from '@prisma/client'
    import { NextApiRequest, NextApiResponse } from 'next'

    export default async function handler(req: NextApiRequest, res: NextApiResponse) {
        console.log(req.query.id)
        console.log(req.query.name)
        res.json("hiring party id")
    }

vscode node / npm debugging - the most versatile options

The most versatile option for debugging node / npm projects in vscode is, I think, the following configuration. Other setups sometimes hit weird stuff like npm.cmd, etc.

    {
        // Configure your terminal to use bash terminal as the default profile.
        "configurations": [
            {
                "command": "npm run dev",
                "name": "Run npm start",
                "request": "launch",
                "type": "node-terminal"
            }
        ]
    }

prisma requires your MongoDB server to be run as a replica set

If you bump into this error while trying to write to Mongodb, we basically need to set up our mongodb replica set. Use the following link to set up the cluster: https://www.mongodb.com/compatibility/deploying-a-mongodb-cluster-with-docker

On Windows, you may need to make some slight changes to the command that configures your replica set. Try this - I have replaced the double quotes with single quotes near --eval:

    docker exec -it mongo1 mongosh --eval 'rs.initiate({_id: \"myReplicaSet\", members: [ {_id: 0, host: \"mongo1\"}, {_id: 1, host: \"mongo2\"}, {_id: 2, host: \"mongo3\"} ] })'

After you run the command above, you will get a connection string. Use MongoDB Compass to try to connect via this connection string:

    mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1

If you're able to connect, you're all set. You can start connecting to it from your app.
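For Prisma itself, the DATABASE_URL in your .env then points at the replica set. A sketch of what that entry might look like - the database name mydb is a placeholder, and the replica-set name comes from the rs.initiate command above:

    DATABASE_URL="mongodb://127.0.0.1:27017/mydb?replicaSet=myReplicaSet&directConnection=true"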

Prisma - re-generating your client when you make changes to your schema, so your code intellisense works

Let's say you're adding a new collection called "Test"; your schema.prisma would look something like this:

    generator client {
      provider = "prisma-client-js"
    }

    datasource db {
      provider = "mongodb"
      url      = env("DATABASE_URL")
    }

    model Post {
      id     String @id @default(auto()) @map("_id") @db.ObjectId
      title  String
      userId String @db.ObjectId
      user   User   @relation(fields: [userId], references: [id])
    }

    model User {
      id    String @id @default(auto()) @map("_id") @db.ObjectId
      email String
      posts Post[]
    }

    model Test {
      id    String @id @default(auto()) @map("_id") @db.ObjectId
      email String
    }

To ensure these schema changes are reflected in your code, run:

    npx prisma generate
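Once the client is regenerated, the new Test model shows up on the client with full typing. A quick sketch of using it (hypothetical usage, not from the original post):

    import { PrismaClient } from '@prisma/client'

    const prisma = new PrismaClient()

    async function main() {
        // After npx prisma generate, prisma.test exists and is fully typed
        const rows = await prisma.test.findMany()
        console.log(rows)
    }

    main()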

Creating a simple API route for nextjs 13

Create your nextjs 13 app via the command line, then create a file called route.ts at your-next-js-app/app/api/route.ts and paste the following content into it. Then you can browse to localhost:3000/api - it should return "Ok".

    import { NextResponse } from 'next/server'

    export async function GET(request: Request) {
        return NextResponse.json("Ok")
    }
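If you also want the route to accept data, a POST handler can live in the same route.ts. This is just a sketch - the response shape is my own choice, not from the post:

    import { NextResponse } from 'next/server'

    export async function POST(request: Request) {
        const body = await request.json()             // parse the JSON payload
        return NextResponse.json({ received: body })
    }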

Connecting to Mongodb on your docker image

First, you must have docker and mongosh (https://www.mongodb.com/try/download/shell). Then start up your docker container with this command:

    docker run --name mongo -p 27017:27017 -d mongo mongod

Then, to connect, run mongosh - just press enter when prompted for a connection string. You should be able to connect.
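If you'd rather connect from code than from mongosh, here is a minimal sketch using the official Node.js driver - assumes npm install mongodb, and the database name testdb is a placeholder:

    import { MongoClient } from 'mongodb'

    const client = new MongoClient('mongodb://127.0.0.1:27017')

    async function main() {
        await client.connect()                        // talks to the dockerized mongod
        const db = client.db('testdb')                // placeholder database name
        console.log(await db.listCollections().toArray())
        await client.close()
    }

    main()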

nextjs 13 - deploying a production build (standalone)

You can create a production build by running this command:

    npm run build

To run your production build in standalone mode, run the following command:

    node .next/standalone/server.js

You also need to copy the static files into your .next/standalone/.next/static folder. As stated in the docs (https://nextjs.org/docs/pages/api-reference/next-config-js/output), the static folder is meant to be served from a cdn; if you don't use one, you need to do the copying manually.
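Note that, per the linked docs, standalone output first has to be enabled in next.config.js:

    /** @type {import('next').NextConfig} */
    const nextConfig = {
        output: 'standalone',   // emit a self-contained server under .next/standalone
    }

    module.exports = nextConfig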

an example of how we can extend react custom component
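One common pattern is to wrap a native element and extend its props. A sketch - all names here (FancyButton, variant) are illustrative:

    import React from 'react'

    // Extend the native button props with our own additions
    type FancyButtonProps = React.ComponentProps<'button'> & {
        variant?: 'primary' | 'secondary'
    }

    export function FancyButton({ variant = 'primary', ...rest }: FancyButtonProps) {
        // Everything else (onClick, disabled, children, ...) is forwarded
        return <button data-variant={variant} {...rest} />
    }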

Getting this error: refers to a value, but is being used as a type here

Instead of using the .ts extension, please use .tsx. TypeScript only parses JSX in .tsx files, so in a .ts file the JSX angle brackets are read as type assertions, which produces this error.

nextjs templates : https://github.com/vercel/next.js/tree/canary/examples/

Cloudformation unable to delete stack - role is invalid or cannot be assumed

The reason for this error is that the role used by Cloudformation is not available or has been deleted; therefore Cloudformation can't use it to do its work. The solution is documented here: https://repost.aws/knowledge-center/cloudformation-role-arn-error

How to install starship on Windows

Install nerd-fonts:

    choco install nerd-fonts-hack

Then install starship:

    choco install starship

Let's say you want to customize using this preset: https://starship.rs/presets/pastel-powerline.html. To write that preset into your config file, run:

    starship preset pastel-powerline -o ~/.config/starship.toml

recall vs precision

Precision

Precision measures how good the model is at getting things right. For example, if a spam filter has a precision of 0.9, it means that 90% of the emails it predicts as spam are actually spam.

    Precision = TP / (TP + FP)

In precision, we want, straight up, how many the model got right and how many it got wrong - that gives us how precise it is.

Spam filtering: a spam filter with high precision will have few false positives, which means it will not send many ham emails to the spam folder. A spam filter with high precision may still have some false negatives, though, which means that some spam emails will not be caught.

Recall

Recall measures how good the model is at avoiding false negatives. For example, if a spam filter has a recall of 0.8, it means that 80% of the actual spam emails are predicted as spam.

    Recall = TP / (TP + FN)

In recall, we want to know what the model predicted correctly and what it missed when the result is actually positive.
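To make the two formulas concrete, here is a tiny worked sketch - the counts are made up so that precision works out to 0.9 and recall to 0.8, matching the examples above:

    // Hypothetical confusion-matrix counts from a spam-filter evaluation
    const TP = 72   // spam correctly flagged as spam
    const FP = 8    // ham wrongly flagged as spam (hurts precision)
    const FN = 18   // spam that slipped through (hurts recall)

    const precision = TP / (TP + FP)   // 72 / 80 = 0.9
    const recall = TP / (TP + FN)      // 72 / 90 = 0.8

    console.log({ precision, recall }) // { precision: 0.9, recall: 0.8 }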

Updating from such a repository can't be done securely, and is therefore disabled by default.

This has to do with one of the following: you did something to your /tmp folder, you accidentally mounted something over /tmp, or you are using an out-of-date OS that contains old repositories. Please go through your settings / setup.

Type hint for a dict gives TypeError: 'type' object is not subscriptable

If you're using python 3.8, then you need to import Dict from typing:

    from typing import Dict

    memo: Dict[int, int] = {0: 0, 1: 1}

If you're using python 3.9 and above, you can use the built-in dict directly for your type hinting:

    memo: dict[int, int] = {0: 0, 1: 1}

Transformers list and codes

A Length-Extrapolatable Transformer
https://arxiv.org/abs/2212.10554
https://github.com/sunyt32/torchscale

Longformer: The Long-Document Transformer
https://arxiv.org/pdf/2004.05150.pdf
https://github.com/allenai/longformer

A Length-Extrapolatable Transformer
https://arxiv.org/pdf/2212.10554.pdf
https://github.com/microsoft/torchscale

Efficient Content-Based Sparse Attention with Routing Transformers
https://arxiv.org/pdf/2003.05997.pdf
https://github.com/google-research/google-research/tree/master/routing_transformer

Pytorch gradient update issues and possible solution

tutorial about a simple model and how this applies to more complex ml model

Really like this example here:

T5 parameters

Open service mesh doesn't work on windows node

Gave OSM a go in my AKS cluster. I installed it as an add-on component, but unfortunately it didn't like the Windows node and won't schedule anything on it after I enabled my namespace with osm:

    osm namespace add default

I then proceeded to deploy using the yaml below and nothing happened. Then I removed my default namespace from osm control and re-ran the same yaml - I could see the pods getting created. In the docs, I think they say Windows Server containers are not supported. I also get an error like this:

    Error creating: Internal error occurred: failed calling webhook "osm-inject.k8s.io": received invalid webhook response: expected response.uid="50cc3ac7-2f16-4e60-9efe-edd71e82b448", got ""

This is the sample yaml used:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: sample
      labels:
        app: sample
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: sample
      template:
        metadata:
          name: sample
          labels:
            app: sample
        spec:
          nodeSelector:
            "kubernetes.io/os": windows

OSM - setup locally on your machine

Get the OSM binaries and extract them:

    wget https://github.com/openservicemesh/osm/releases/download/v1.2.4/osm-v1.2.4-windows-amd64.zip -O osm.zip

Then run the following command:

    osm install --mesh-name "osm" --osm-namespace "osm-system" --set=osm.enablePermissiveTrafficPolicy=true --set=osm.deployPrometheus=true --set=osm.deployGrafana=true --set=osm.deployJaeger=true

OSM - TrafficSplit, TrafficTarget - where to get the docs for all these

https://github.com/servicemeshinterface/smi-spec/blob/main/apis/traffic-access/v1alpha2/traffic-access.md

helm command to search and get manifest by chart version

helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm search repo "keda" -l
helm template test kedacore/keda --version 2.7.0

aks - running windows node pool: the image and the underlying windows host running the image must match

As stated in the documentation, we need to match the container image with the underlying host OS. Otherwise we will get issues like those shown below. This is going to be interesting: as we upgrade our AKS cluster, the underlying OS gets updated. The only problem is we also need to upgrade all of our application images. That's going to be tricky.

    ltsc2019" in 3m50.4987807s (3m50.4987807s including waiting)
    Normal   Created    8s (x2 over 13s)  kubelet            Created container helloworld
    Normal   Pulled     8s                kubelet            Container image "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019" already present on machine
    Warning  Failed     2s (x2 over 8s)   kubelet            Error: failed to create containerd task: failed to create shim task: hcs::CreateComputeSystem helloworld: The container operating system does not match the host operating system.: unknown
    Warning  BackOff    0s (x2 over

istio can't work on a windows node pool : 9-July-2023

I guess it is not supported, as has been claimed in many blogs: Istio is not able to run on a windows node pool. Getting the following error:

    Events:
      Type     Reason     Age                    From               Message
      ----     ------     ----                   ----               -------
      Normal   Scheduled  33m                    default-scheduler  Successfully assigned test/helloworld to akswin000001
      Normal   Pulling    31m (x4 over 33m)      kubelet            Pulling image "docker.io/istio/proxyv2:1.17.2"
      Normal   BackOff    7m58s (x102 over 32m)  kubelet            Back-off pulling image "docker.io/istio/proxyv2:1.17.2"
      Warning  Failed     2m48s (x10 over 32m)   kubelet            Failed to pull image "docker.io/istio/proxyv2:1.17.2": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/istio/proxyv2:1.17.2": no match for platform in manifest: not found

creating custom dataset from huggingface dataset

Sample code can be found here. You can create a dataset by using the python module datasets. If you don't have it, please run:

    pip install datasets

Next, ensure you have files with content called my_train.txt and my_test.txt. Both files need content, otherwise it won't work.

    from datasets import load_dataset

    dataset = load_dataset('text', data_files={'train': ['my_train.txt'], 'test': 'my_test.txt'})

To access these datasets, you can either loop or reference them directly. As you can see here, we have train and test:

    >>> print(dataset['train'][0])
    {'text': 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'}
    >>> print(dataset['test'][0])
    {'text': 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa'}

Other supported formats are csv and json:

    # loading a json dataset
    dataset = load_dataset('json', data_files={'train': 'my_train.json'})