Posts

deep research agentic app - google sample

This screenshot is from Google's sample code and illustrates what the UI looks like for the agent. Here we asked it to research "best EV cars"; the agent then comes up with a plan and produces reports.

To create this sample, all you need to do is:

```shell
uvx agent-starter-pack create my-fullstack-agent -a adk@deep-search
```

And to run it:

```shell
cd my-fullstack-agent && make install && make dev
```

Then open http://localhost:5173/app in your browser and ask it to do research.

How do you test out your deployed A2A agent instance in Vertex AI

How do you test out your deployed A2A agent instance in Vertex AI? I assume the agent has already been deployed to Vertex AI. All you have to do is run the following code.

Import the required libraries:

```python
import asyncio
import json
import os
import requests
import uuid

import vertexai
from a2a.types import (
    Message,
    MessageSendParams,
    Part,
    Role,
    SendStreamingMessageRequest,
    TextPart,
)
from IPython.display import Markdown, display
from google.adk.artifacts import InMemoryArtifactService
from google.adk.sessions import InMemorySessionService

from app.agent_engine_app import AgentEngineApp
from tests.helpers import (
    build_get_request,
    build_post_request,
    poll_task_completion,
)
```

Set up your Vertex AI client:

```python
LOCATION = "us-central1"
client = vertexai.Client(
    location=LOCAT...
```
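The snippet above is cut off before the message is actually sent. For orientation, here is a minimal, stdlib-only sketch of the JSON-RPC payload that an A2A streaming request carries. The field names follow my reading of the A2A spec; treat the exact shape as an assumption and defer to the `a2a.types` models imported above, which build this for you.

```python
import json
import uuid

def build_stream_payload(text: str) -> dict:
    """Hand-rolled illustration of what SendStreamingMessageRequest /
    Message / TextPart serialize to; not a replacement for the SDK."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/stream",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

payload = build_stream_payload("What is the weather in Paris?")
print(json.dumps(payload, indent=2))
```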

github gcp workload pool identity federation

To allow passwordless authentication to GCP from GitHub, we need to set up a workload identity pool. To do that we can use the following command:

```shell
gcloud iam workload-identity-pools create "github-pool" \
    --location="global" \
    --display-name="GitHub Actions Pool"
```

Then set up the integration with your repo:

```shell
gcloud iam workload-identity-pools providers create-oidc "kepungnzai" \
    --location="global" \
    --workload-identity-pool="github-pool" \
    --display-name="GitHub Actions Provider" \
    --issuer-uri="https://token.actions.githubusercontent.com" \
    --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository,attribute.actor=assertion.actor" \
    --attribute-condition="attribute.repository == 'kepungnzai/agentic-a2a-weather-currency'"
```

After that, all you need is a YAML workflow to deploy. Please replace the variables in the YAML. Project-ID is not the ...
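The deployment YAML will need the provider's full resource name. As a small illustration, this sketch assembles that name from its parts; note that it takes the project *number*, not the project ID, and `123456789` here is just a placeholder:

```python
def provider_resource_name(project_number: str, pool: str, provider: str) -> str:
    """Build the full resource name of a workload identity pool provider,
    the value a GitHub Actions auth step expects for the provider input."""
    return (
        f"projects/{project_number}/locations/global/"
        f"workloadIdentityPools/{pool}/providers/{provider}"
    )

name = provider_resource_name("123456789", "github-pool", "kepungnzai")
print(name)
```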

terraform rancher/rancher2 provider upgrade in terraform workspace

Ran into an issue after a Rancher upgrade to version 2.12.3 and found out that the terraform rancher2 provider was out of date (by probably a few years). I then proceeded to upgrade it in terraform. Change the provider version constraint to 8.5.0, since that is the recommended version (I actually ended up with v12). Then run:

```shell
terraform init -upgrade
```

This will upgrade the .terraform.lock.hcl. Remove the older version from it if you need to. You may also need to generate a token to connect to the terraform workspace. Once that is done, all you need to do is run terraform plan to see if anything needs fixing, then raise a PR.

kubernetes agentic AI enforcement for mcp servers running in the cluster

  https://kube-agentic-networking.sigs.k8s.io/guides/quickstart/

Running mcp server in a more realistic environment - close to production

Here is a way of productionizing a simple MCP server application. The code for this is here:

```python
import logging

from fastapi import FastAPI
from mcp.server.fastmcp import FastMCP

# 1. Initialize your MCP logic
mcp = FastMCP("ProductionToolbox")

# Add a sample tool
@mcp.tool()
def greeting_time(server_id: str) -> str:
    """Returns a greeting hello."""
    return f"Server hello {server_id}."

# 2. Create your production web app (FastAPI)
app = FastAPI(title="MCP Cloud Gateway")

# --- THE INTEGRATION POINT ---
# This line connects the MCP protocol to the web server.
# It automatically creates endpoints like /sse and /messages.
app.mount("/mcp", mcp.sse_app())
# -----------------------------

@app.get("/health")
def health_check():
    """A standard production health check endpoint."""
    return {"status"...
```
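Since `mcp.sse_app()` speaks Server-Sent Events, it helps to know what a client actually reads off that endpoint. Here is a minimal, stdlib-only sketch that parses one already-delimited SSE event; the endpoint path in the example is hypothetical, and real SSE streams arrive incrementally, so this is illustration only:

```python
def parse_sse_event(chunk: str) -> dict:
    """Parse a single Server-Sent Events block into its fields.

    Ignores blank lines and ":" comment lines; splits each remaining
    line into a field name and value, as the SSE format defines.
    """
    fields: dict = {}
    for line in chunk.splitlines():
        if not line or line.startswith(":"):
            continue
        key, _, value = line.partition(":")
        fields[key.strip()] = value.lstrip()
    return fields

event = parse_sse_event("event: endpoint\ndata: /mcp/messages?session_id=abc123")
print(event)
```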

langchain using gemini flash 2.5

We can use LangChain to build our agent:

```python
import os
from typing import Annotated, TypedDict, List

# CHANGE 1: Import Google GenAI instead of OpenAI
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.tools import tool
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

# --- SETUP ---
# CHANGE 2: Use GOOGLE_API_KEY instead of OPENAI_API_KEY
os.environ["GOOGLE_API_KEY"] = "your-google-api-key-here"

# --- 1. DEFINE TOOLS ---
@tool
def get_stock_price(symbol: str) -> str:
    """Get the current stock price for a given ticker symbol."""
    price = 150.00 if symbol.upper() == "AAPL" else 100.00
    return f"The current price of {symbol.upper()} is ${price}"

tools = [get_stock_price]

# CHANGE 3: Initialize Gemini Model
# 'gemini-1.5-flash' is fast and cheap for te...
```
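Under the hood, tool execution in a graph like this boils down to looking the requested tool up by name and invoking it with the model's arguments. Here is a stripped-down, framework-free sketch of that dispatch step (a hypothetical helper for intuition, not LangGraph's actual implementation):

```python
def get_stock_price(symbol: str) -> str:
    """Toy tool mirroring the one registered with @tool above."""
    price = 150.00 if symbol.upper() == "AAPL" else 100.00
    return f"The current price of {symbol.upper()} is ${price}"

# Registry of callable tools, keyed by the name the model uses.
TOOLS = {"get_stock_price": get_stock_price}

def dispatch_tool_call(name: str, args: dict) -> str:
    """What a tool node does conceptually: route a model-requested
    call to the matching Python function and return its result."""
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return TOOLS[name](**args)

print(dispatch_tool_call("get_stock_price", {"symbol": "aapl"}))
```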

vertex ai agent builder deploy langchain app

We will be using Google Colab to demonstrate the deployment of a simple app to Vertex AI Agent Builder. Create your notebook and then run the following code:

```python
!pip install --upgrade --quiet "google-cloud-aiplatform[agent_engines,langchain]>=1.112"
```

Next we do some authentication and initialization:

```python
import vertexai

vertexai.init(
    project="project-xxxxxxx",             # Your project ID.
    location="us-central1",
    staging_bucket="gs://staging-bucket",  # Replace with your GCS bucket.
)

client = vertexai.Client(
    project="project-xxxxxxxxx",  # Your project ID.
    location="us-central1",
)
```

Create a simple function:

```python
def get_exchange_rate(
    currency_from: str = "USD",
    currency_to: str = "EUR",
    currency_date: str = "latest",
):
    """Retrieves the exchange rate between two currencies on a specifi...
```

python new workflow with pyproject.toml with poetry

Newer Python development workflows use pyproject.toml. To get started, let's install poetry:

```shell
pip install poetry
```

Then we create a project with the modern project layout:

```shell
poetry new my-python-app
```

And it creates this:

```
my-python-app/
├── pyproject.toml
├── README.md
├── src/my_python_app/        # Source code
│   └── __init__.py
└── tests/                    # Tests (automatically created!)
    └── __init__.py
```

To add dependencies, run the following:

```shell
poetry add --group dev pytest pytest-cov
```

To run tests:

```shell
poetry run pytest
```

To package your app, run:

```shell
poetry build
```

To remove dependencies:

```shell
poetry remove --group dev pytest
```

To create a different group, we can use:

```shell
poetry add --group linting black flake8
```

And then run black to format your Python code:

```shell
poetry run black src
```

To install specific dependency groups:

```shell
# Install main + dev + linting
poetry install --with linting
# Install ONLY main (no dev, ...
```

azure function hosting mcp server

Run the following command to create the Azure Function with MCP:

```shell
azd init --template remote-mcp-functions-python -e mcpserver-python
```

Once you have finished, go to the root folder, where you will see pyproject.toml, azure.yaml, etc. Then start VS Code. Make sure you have the prerequisites like the Azure Functions developer tools and Azurite all set up. From VS Code, select "Debug". It will prompt you to create a virtual environment; please select "yes". When it asks you to run Azurite, please select yes.

Running the MCP function app

Next, run the function app. You have to go into the src folder first and run VS Code there. You will then be prompted to create an environment. Hit F5 or Run -> Debug; that will install your Python modules from requirements.txt.

Launch MCP Server

Next we will launch our MCP server by creating a file called mcp.json:

```json
{
    "inputs": [
        {
            "type": "promptString...
```

saving and loading safe tensors

While current LLM models mostly ship as safetensors, we can force an existing model to save and load safetensors using the following commands:

```python
!pip install torch
!pip install -U transformers datasets evaluate accelerate timm
```

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-2B", dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-2B")

model_inputs = tokenizer(
    ["The secret to baking a good cake is "], return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**model_inputs, max_length=30)
tokenizer.batch_decode(generated_ids)[0]

## This is where we save the model as safetensors
model.save_pretrained("model", safe_serialization=True)
```

Then we can reload it using this code:

```python
model_safe = AutoModelForCausalLM.from_pretrained(
    "./model",
    trust_remote_code=True
)
gener...
```
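What does `safe_serialization=True` actually write? A .safetensors file is just an 8-byte little-endian header length, a JSON header describing each tensor's dtype, shape, and byte offsets, then the raw tensor bytes. Here is a stdlib-only toy that writes and reads that layout for a single float32 tensor; it is for illustration only, and real files should go through the `safetensors` library:

```python
import json
import struct

def write_toy_safetensors(path, name, values):
    """Serialize one float32 tensor using the safetensors on-disk layout."""
    data = struct.pack(f"<{len(values)}f", *values)
    header = {name: {"dtype": "F32", "shape": [len(values)],
                     "data_offsets": [0, len(data)]}}
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))  # 8-byte header size
        f.write(header_bytes)                          # JSON metadata
        f.write(data)                                  # raw tensor bytes

def read_toy_safetensors(path):
    """Read back the tensor name and values from the toy file."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
        payload = f.read()
    name = next(iter(header))
    n = header[name]["shape"][0]
    return name, list(struct.unpack(f"<{n}f", payload))

write_toy_safetensors("toy.safetensors", "weight", [1.0, 2.0, 3.0])
print(read_toy_safetensors("toy.safetensors"))
```

This layout is why safetensors loading is considered safe: unlike pickle, reading the file only ever parses JSON and copies raw bytes, with no code execution.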

Creating a simple hello world Lambda that uses AWS API Gateway

First we will create an AWS Lambda function, then configure our app to point to it via API Gateway.

Create the AWS Lambda function using the following steps:

1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.
2. Choose Create function. For Function name, enter my-function.
3. For all other options, use the default settings, then choose Create function.

API Gateway

Configuring the API gateway is a bit tricky. Go to https://console.aws.amazon.com/apigateway and click on "Create API". Then select HTTP API and click "Build". Provide a name, in this case we will call it "my-http-api", and add the required integration to Lambda as shown here. Then in the route settings, ensure you have set up the following. In the "Define stage" step, click "Next", verify that the configuration is OK, and click "Create". To test it out, go to API Gateway -> APIs -> "my-http-api", then go to Deploy -> Stage and click on the I...
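The console's default hello-world handler can be replaced with something like the sketch below, which returns a response in the shape an API Gateway HTTP API proxy integration expects (statusCode, headers, and a JSON body). The greeting text is my own placeholder:

```python
import json

def lambda_handler(event, context):
    """Minimal handler for an HTTP API -> Lambda proxy integration.

    `event` carries the incoming request (path, headers, body); returning
    a dict with statusCode/headers/body is what API Gateway translates
    into the HTTP response sent back to the caller.
    """
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from my-function!"}),
    }

# Local smoke test with an empty event and no context
response = lambda_handler({}, None)
print(response["statusCode"], response["body"])
```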