Posts

NX Error while loading rule '@typescript-eslint/no-unused-vars': structuredClone is not defined

Ran into this error the other day and resolved it by upgrading my Node.js to version 18. The structuredClone global only landed in Node.js 17, so version 16 and below don't have it.
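If you manage Node versions with nvm (an assumption on my part; any installer works), the upgrade is quick:

nvm install 18
nvm use 18
node --version   # should now report v18.x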

hugging face error uploading dataset or model - getting http status: 403 / 416

Sometimes when working with Hugging Face from Colab and pushing a dataset or model to a Hugging Face repository, you might run into this error: 403 Forbidden: you must use a write token to upload to a repository. At times it can be a 416 instead. I logged in multiple times with login("hf_your-write-token") and still bumped into this error; I even tried setting the HF_TOKEN environment variable in Google Colab and still ended up with it. To resolve the error, pass the write token explicitly to the upload call. Say you're uploading a dataset; you can do dataset.push_to_hub("kepung/hellodataset", token="hf_your-write-token"), where kepung is my profile name and the dataset is called hellodataset. Make sure you include the write token here.
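For reference, a minimal end-to-end sketch (assuming the datasets and huggingface_hub packages are installed; the sample dataset and the token are placeholders, the repo name is the one from this post):

from datasets import load_dataset
from huggingface_hub import login

login(token="hf_your-write-token")  # must be a write token, not a read token

dataset = load_dataset("imdb", split="train[:100]")  # any small dataset to test with
# passing the token here works even when login() / HF_TOKEN did not stick
dataset.push_to_hub("kepung/hellodataset", token="hf_your-write-token")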

nodejs - ReferenceError: fetch is not defined

Getting the Node.js error above; the node version in use was 16. The option for me was to upgrade to a newer Node.js: the fetch global only ships built in from version 18 onwards, so 18 and above works.

Error: main exceeded maximum budget. Budget 1.20 MB was not met by 786 bytes with a total of 1.20 MB.

When building an Angular app, I bumped into this error, and we didn't have an angular.json. The solution is pretty straightforward: create an angular.json and specify budget limits (a sketch follows below). References: https://angular.io/guide/build#configure-size-budgets
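A minimal sketch of the budgets section of angular.json (the project name my-app is a placeholder; pick limits that fit your bundle, e.g. above the 1.20 MB this error reports):

"projects": {
  "my-app": {
    "architect": {
      "build": {
        "configurations": {
          "production": {
            "budgets": [
              { "type": "initial", "maximumWarning": "1.5mb", "maximumError": "2mb" }
            ]
          }
        }
      }
    }
  }
}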

azure aks system node pool reconfiguration - does it cause node pool to be recreated?

We can typically choose between a manual and an automatic approach to scaling our cluster when we create our AKS cluster. So if we chose the manual approach, and later want to change it to auto-scale, do we need to re-create the node pool? Here are my findings.

System node pool
- Can I change the node pool from manual to auto without recreating the node pool? Yes, you can (a CLI sketch follows below).
- Can I change the VM SKU? From the portal no, but technically yes: that would be an operation to delete the existing pool and recreate a new one. I think someone requested this feature to be added: https://github.com/Azure/AKS/issues/1574

User node pool
- Can I change the node pool from manual to auto without recreating the node pool? Yes, you can.
- Can I change the VM SKU? From the portal no, but technically yes: again a delete-and-recreate operation.

What if we would like to change the SKU of our system pool? We can create a new system pool with the desired vm sku...
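The manual-to-autoscale switch referenced above can be done in place from the CLI (a sketch; the resource group, cluster, pool name and counts are placeholders):

az aks nodepool update \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name nodepool1 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3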

aws sns - setting up local development environment to publish messages to aws sns c#

When developing locally, we should first configure our development environment (there are many options available); one way is to run aws configure. You will be prompted for your access key id, secret access key and region. If you have already set this up, you can proceed to create an SNS topic.

In this exercise, we will create an SNS topic and then send a message to it via a dotnet app. Then we use an SQS queue to subscribe to this topic. When creating your SNS topic, ensure that you allow anyone to subscribe to it; this is required for the easy SQS setup later. Next, we will set up who can publish to this topic.

Setting up the SNS publish policy for a local development AWS user: go to your AWS console and create a topic. Ensure you have configured:

{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::your-aws-account-id:user/jeremydev"
  },
  "Action":...
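The app in this post is dotnet, but as a quick sanity check that the policy lets your local user publish, the equivalent call in Python with boto3 looks like this (a sketch; the topic ARN and region are placeholders):

import boto3

# uses the credentials set up earlier via `aws configure`
sns = boto3.client("sns", region_name="us-east-1")

response = sns.publish(
    TopicArn="arn:aws:sns:us-east-1:your-aws-account-id:my-topic",
    Message="hello from local dev",
)
print(response["MessageId"])  # getting a message id back means the publish succeeded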

langchain arxivWrapper - a demo to showcase llm integrating with external parties

Langchain docs link: https://www.llm-agents.info/agents/65d55c720e711b83655b13df
The code: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/utilities/arxiv.py
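A minimal sketch of the wrapper in use (assuming the langchain-community and arxiv packages are installed; the paper id is just an example):

from langchain_community.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=1)
# accepts a free-text query or an arxiv id
summary = arxiv.run("1706.03762")
print(summary)  # published date, title, authors and abstract of the match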

ollama basic intro

Ollama, with a double l :), makes it easy to download and run your llm on your local machine. No boilerplate code, and you get your llm model set up as a service ready to accept REST requests. Not only that, once you have ollama running, you can just type away to ask it about anything and get a response. For example, I started my ollama instance, then typed something like exit(). It gave me a bunch of details about python. I think llama 3.2 is pretty good with programming :) It hides the complexities involved in many ways. If you're thinking of trying out the llama 3.2 model, you can try this out. Download the binaries and then proceed to download your model. In my case, I am running llama 3.2: ollama run llama3.2. Then I just have to issue a curl command against it. Notice it has updated my model accordingly.

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
...
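The same chat endpoint can be called from Python (a sketch, assuming the requests package and that ollama run llama3.2 is already serving on the default port):

import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llama3.2",
        "messages": [{"role": "user", "content": "what does exit() do in python?"}],
        "stream": False,  # ask for one complete JSON reply instead of streamed chunks
    },
)
print(resp.json()["message"]["content"])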

llama stack - setting up using wsl ubuntu 24 and downloading model

Python 3.12 is installed by default in Ubuntu 24. You can install pip by running "sudo apt install python3-pip". You should have received an email from Meta; go to https://github.com/meta-llama/llama-models and follow the readme instructions. Next, install llama stack with "pip install llama-stack". Run pip show llama-stack to see where the global executable file was installed, and ensure your path points to the proper folder; in my case it is export PATH=$PATH:/home/jeremy/.local/bin. Then run "llama model list" and you get the list of available models. The llama version you have requested access to dictates which models you can download; for example, I requested llama 3.3, but when I tried to download a llama 3.2 model, it didn't allow me to do so. Llama 3.3 is really huge, so I think I should be requesting llama 3.2 instead.

python pip - This environment is externally managed

If you're getting this error when running pip install llama-stack, you can append --break-system-packages and run "pip install llama-stack --break-system-packages", but this can be potentially harmful. So it might be better to create your own python environment instead:

python3 -m venv myllm
source myllm/bin/activate
pip install llama-stack

windows python installed but not accessible via powershell

Some time ago, I installed python using the Microsoft Store. I could get access if I chose the command icon, but it was not accessible from powershell, so I needed to update my path accordingly. I added the following to my path (note that \Scripts is mostly for allowing access to pip):

C:\Users\usertest\AppData\Local\Programs\Python\Python312
C:\Users\usertest\AppData\Local\Programs\Python\Python312\Scripts

gcp authenticating using client sdk - service account approach.

Create a service account in IAM and Admin. Once you have done that, go to your newly created service account and click on Create Key Pair; it will automatically download the private key json. Create your gcp storage bucket, then under the permission tab, grant your service account access, perhaps with the role "storage object creator". Let's switch to your laptop and fire up a console c# project. Add the google bucket sdk libraries by running: dotnet add package Google.Cloud.Storage.V1 --version 4.10.0. Then configure the environment variable GOOGLE_APPLICATION_CREDENTIALS=path-to-your-json-key, for example "C:\\work\\gcp-cred\\gcp-myprivate-project-14197e38ef68.json". Then use the following code to test out your configuration:

using Google.Cloud.Storage.V1;

// upload
var objectName = "readme.md";
var storage = StorageClient.Create();
using var fileStream = File.OpenRead("c:\\work\\foctest\\README.md");
storage.UploadObject("m...

gcp cloud run function - configuring scaling and concurrency

To change your cloud run function cpu and scaling configurations, go to your cloud run function and select "Edit and Deploy new revision". Proceed to update the memory and cpu allocations, and update the concurrency if required to take advantage of the higher cpu. According to the docs, each instance handles up to 80 concurrent requests by default, but you can raise this to 1000. More details: https://cloud.google.com/run/docs/about-concurrency
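The same settings can also be applied from the CLI (a sketch; the service name, region and values are placeholders):

gcloud run services update my-service \
  --region us-central1 \
  --cpu 2 \
  --memory 1Gi \
  --concurrency 200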