
Running mcp server in a more realistic environment - close to production

Here is a way to productionize a simple MCP server application by mounting it inside a FastAPI app. The code for this is here:

```python
import logging

from fastapi import FastAPI
from mcp.server.fastmcp import FastMCP

# 1. Initialize your MCP logic
mcp = FastMCP("ProductionToolbox")

# Add a sample tool
@mcp.tool()
def greeting_time(server_id: str) -> str:
    """Returns a greeting hello."""
    return f"Server hello {server_id}."

# 2. Create your production web app (FastAPI)
app = FastAPI(title="MCP Cloud Gateway")

# --- THE INTEGRATION POINT ---
# This line connects the MCP protocol to the web server.
# It automatically creates endpoints like /sse and /messages.
app.mount("/mcp", mcp.sse_app())
# -----------------------------

@app.get("/health")
def health_check():
    """A standard production health check endpoint."""
    return {"status"...
```

langchain using gemini flash 2.5

We can use LangChain (with LangGraph) to build our agent backed by Gemini:

```python
import os
from typing import Annotated, TypedDict, List

# CHANGE 1: Import Google GenAI instead of OpenAI
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.tools import tool
from langgraph.graph import StateGraph, END
from langgraph.graph.message import add_messages

# --- SETUP ---
# CHANGE 2: Use GOOGLE_API_KEY instead of OPENAI_API_KEY
os.environ["GOOGLE_API_KEY"] = "your-google-api-key-here"

# --- 1. DEFINE TOOLS ---
@tool
def get_stock_price(symbol: str) -> str:
    """Get the current stock price for a given ticker symbol."""
    price = 150.00 if symbol.upper() == "AAPL" else 100.00
    return f"The current price of {symbol.upper()} is ${price}"

tools = [get_stock_price]

# CHANGE 3: Initialize Gemini Model
# 'gemini-1.5-flash' is fast and cheap for te...
```
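LangGraph wires up the tool-execution step for you, but conceptually it is just a name-based dispatch from the model's tool call to a Python function. A minimal stdlib sketch of that idea (the `TOOLS` registry and `run_tool` helper are illustrative names, not LangChain or LangGraph APIs):

```python
# Illustrative sketch of the tool-execution step an agent framework performs.
# TOOLS and run_tool are made-up names, not LangChain/LangGraph APIs.

def get_stock_price(symbol: str) -> str:
    """Same toy tool as above: fixed demo prices."""
    price = 150.00 if symbol.upper() == "AAPL" else 100.00
    return f"The current price of {symbol.upper()} is ${price}"

# Registry mapping tool names to callables
TOOLS = {"get_stock_price": get_stock_price}

def run_tool(tool_call: dict) -> str:
    """Dispatch a model-produced tool call of shape {'name': ..., 'args': {...}}."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["args"])
```

When the model emits a tool call, the framework does the equivalent of `run_tool({"name": "get_stock_price", "args": {"symbol": "aapl"}})` and feeds the string result back into the conversation.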

vertex ai agent builder deploy langchain app

We will be using Google Colab to demonstrate the deployment of a simple app to Vertex AI Agent Builder. Create your notebook and then run the following code:

```python
!pip install --upgrade --quiet "google-cloud-aiplatform[agent_engines,langchain]>=1.112"
```

Next we will do some authentication and initialization:

```python
import vertexai

vertexai.init(
    project="project-xxxxxxx",             # Your project ID.
    location="us-central1",
    staging_bucket="gs://staging-bucket",  # Replace with your GCS bucket
)

client = vertexai.Client(
    project="project-xxxxxxxxx",  # Your project ID.
    location="us-central1",
)
```

Creating a simple function:

```python
def get_exchange_rate(
    currency_from: str = "USD",
    currency_to: str = "EUR",
    currency_date: str = "latest",
):
    """Retrieves the exchange rate between two currencies on a specifi...
```
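Before deploying, it is worth unit-testing the tool function's contract locally. Here is a hedged sketch with a stubbed rate table (`FAKE_RATES` and `get_exchange_rate_stub` are illustrative stand-ins, not part of the Vertex AI SDK; the real function would fetch live rates):

```python
# Illustrative local stand-in so you can test the tool's input/output shape
# without network calls. FAKE_RATES values are made up.
FAKE_RATES = {("USD", "EUR"): 0.92, ("USD", "SGD"): 1.34}

def get_exchange_rate_stub(
    currency_from: str = "USD",
    currency_to: str = "EUR",
    currency_date: str = "latest",
) -> dict:
    """Returns the same shape of payload the deployed tool would return."""
    rate = FAKE_RATES.get((currency_from, currency_to))
    return {
        "base": currency_from,
        "date": currency_date,
        "rates": {currency_to: rate},
    }
```

Checking the function locally like this catches signature and schema mistakes before you pay for a failed Agent Engine deployment.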

python new workflow with pyproject.toml with poetry

Newer Python development workflows use pyproject.toml. To get started, let's install Poetry:

```shell
pip install poetry
```

Then we create a project with the modern project layout:

```shell
poetry new my-python-app
```

And it creates this:

```
my-python-app/
├── pyproject.toml
├── README.md
├── src/my_python_app/        # Source code
│   └── __init__.py
├── tests/                    # Tests (automatically created!)
│   └── __init__.py
```

To add dependencies, run the following:

```shell
poetry add --group dev pytest pytest-cov
```

To run tests:

```shell
poetry run pytest
```

To package your app, run:

```shell
poetry build
```

To remove dependencies:

```shell
poetry remove --group dev pytest
```

To create a different group, we can use:

```shell
poetry add --group linting black flake8
```

And then run black to format your Python code:

```shell
poetry run black src
```

To install specific dependency groups:

```shell
# Install main + dev + linting
poetry install --with linting
# Install ONLY main (no dev, ...
```
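The commands above all end up recorded in pyproject.toml. A sketch of roughly what the file might look like after adding the dev and linting groups (the exact layout varies with your Poetry version, and the version constraints here are illustrative):

```toml
[project]
name = "my-python-app"
version = "0.1.0"
description = ""
requires-python = ">=3.10"

# Group created by: poetry add --group dev pytest pytest-cov
[tool.poetry.group.dev.dependencies]
pytest = "^8.0"
pytest-cov = "^5.0"

# Group created by: poetry add --group linting black flake8
[tool.poetry.group.linting.dependencies]
black = "^24.0"
flake8 = "^7.0"

[build-system]
requires = ["poetry-core>=2.0"]
build-backend = "poetry.core.masonry.api"
```

Groups other than `dev` are opt-in, which is why `poetry install --with linting` is needed to pull the linters in.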

azure function hosting mcp server

Run the following command to create the Azure Function with MCP:

```shell
azd init --template remote-mcp-functions-python -e mcpserver-python
```

Once you have finished, go to the root folder, where you will see pyproject.toml, azure.yaml, etc. Then start VS Code. Make sure you have the prerequisites, like the Azure Functions developer tools (e.g. Core Tools) and Azurite, all set up.

From VS Code, select "Debug". It will prompt you to create a virtual environment; please select "yes". When it asks you to run Azurite, please select yes.

Running the MCP function app

Next, run the function app. You have to go into the src folder first and run VS Code. Then you will be prompted to create an environment. Hit F5 or Run -> Debug; that will install your Python modules under requirements.txt.

Launch MCP Server

Next we will launch our MCP server by creating a file called mcp.json:

```json
{
    "inputs": [
        {
            "type": "promptString...
```

saving and loading safe tensors

While current LLM models mostly use safetensors, we can force an existing model to save and load safetensors by using the following commands:

```python
!pip install torch
!pip install -U transformers datasets evaluate accelerate timm

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.5-2B", dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-2B")

model_inputs = tokenizer(
    ["The secret to baking a good cake is "], return_tensors="pt"
).to(model.device)
generated_ids = model.generate(**model_inputs, max_length=30)
tokenizer.batch_decode(generated_ids)[0]

# This is where we save the model as safetensors
model.save_pretrained("model", safe_serialization=True)
```

Then we can reload it using this code:

```python
model_safe = AutoModelForCausalLM.from_pretrained(
    "./model",
    trust_remote_code=True,
)
gener...
```

Creating simple hello world lambda that uses AWS Gateway API

First we will create an AWS Lambda function and then configure our app to point to it via API Gateway.

Create the AWS Lambda function using the following steps:

1. Sign in to the Lambda console at https://console.aws.amazon.com/lambda.
2. Choose "Create function". For Function name, enter my-function.
3. For all other options, use the default settings. Choose "Create function".

API Gateway

Configuring the API gateway is a bit tricky. Go to https://console.aws.amazon.com/apigateway and click on "Create API". Then select HTTP API and click on "Build". Provide a name, in this case we will call it "my-http-api", and add the required integration to Lambda as shown here. Then, in the route settings, ensure you have set up the following. In the "Define stage" step, click "Next" and verify the configuration is OK. Click "Create". Then, to test it out, go to API Gateway -> APIs -> "my-http-api". Then go to Deploy -> Stage and click on the I...
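The console generates a default handler for you; for an HTTP API integration, a minimal hello-world handler might look like the sketch below (the response shape follows the Lambda proxy integration convention; the message body is illustrative):

```python
import json

def lambda_handler(event, context):
    """Minimal hello-world handler for an API Gateway HTTP API integration.

    API Gateway expects a dict with statusCode, headers, and a string body.
    """
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from Lambda!"}),
    }
```

Once the route is wired up, hitting the stage's invoke URL should return this JSON payload.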

vertex ai agent runtime deployment fail - reasoning engine fail to start and cannot serve traffic

Ran into this error when trying to deploy an agent to the Vertex AI agent runtime. Based on the tutorial, we are told to deploy using this command:

```python
remote_agent = client.agent_engines.create(
    agent=app,
    config={
        "requirements": ["google-cloud-aiplatform[agent_engines,adk]"],
        "staging_bucket": "STAGING_BUCKET",
    },
)
```

And that's when I bumped into this error. To resolve it, I used the following code:

```python
remote_agent = agent_engines.create(
    agent,
    requirements=[
        "google-cloud-aiplatform[agent_engines,langchain]==1.140.0",
        "pydantic==2.12.3",
        "cloudpickle==3.1.2",
    ],
)
```

And then I was able to see that my agent was deployed.

bicep how do i consume public avm modules

In this example, we will be using an Azure Verified Module (AVM) to create a storage account. First, we reference the public module from the Bicep registry:

```bicep
module cheapStorage 'br/public:avm/res/storage/storage-account:0.31.2' = {
  name: 'storageDeployment'
  params: {
    // Required: Must be 3-24 lowercase letters/numbers and globally unique
    name: 'stg${uniqueString(resourceGroup().id)}'
    location: resourceGroup().location

    skuName: 'Standard_LRS'
    kind: 'StorageV2'
    accessTier: 'Hot' // 'Hot' is better if you'll actually look at the files; 'Cool' is cheaper for long-term storage

    allowBlobPublicAccess: false
    networkAcls: {
      defaultAction: 'Allow'
      bypass: 'AzureServices'
    }
  }
}

// Output the primary endpoint so you can actually find your new toy
output storageUri string = cheapStorage.outputs.primar...
```

bicep extending the template earlier to support azure service bus

In the previous post we created an Azure storage account using Bicep. Now we are going to create a Service Bus module and call it. We will place our Service Bus module under the path modules/servicebus/main.bicep, and its contents are given here. This will create a Service Bus namespace and a queue:

```bicep
@description('Name of the Service Bus namespace')
param serviceBusNamespaceName string

@description('Name of the Queue')
param serviceBusQueueName string

@description('Location for all resources.')
param location string = resourceGroup().location

resource serviceBusNamespace 'Microsoft.ServiceBus/namespaces@2022-01-01-preview' = {
  name: serviceBusNamespaceName
  location: location
  sku: {
    name: 'Standard'
  }
  properties: {}
}

resource serviceBusQueue 'Microsoft.ServiceBus/namespaces/queues@2022-01-01-preview' = {
  parent: serviceBusNamespace
  name: serviceBusQueueName
  properties: {
    lockDuration: 'PT5M'
    ...
```
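A sketch of how the root main.bicep might consume this module, following the same module-call pattern used for the storage account (the module symbolic name, deployment name, and parameter values here are illustrative):

```bicep
// main.bicep (root) - illustrative call site for the Service Bus module
module serviceBusModule './modules/servicebus/main.bicep' = {
  name: 'serviceBusDeploy'
  params: {
    serviceBusNamespaceName: 'sbns${uniqueString(resourceGroup().id)}'
    serviceBusQueueName: 'myqueue'
  }
}
```

Since the module gives `location` a default of `resourceGroup().location`, only the namespace and queue names are required at the call site.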

bicep creating storage account using module

To create a storage account, we can make our Bicep code more reusable by placing it into modules. So we have this as our directory structure: root (main.bicep) -> modules -> storage -> (we call this main.bicep too).

The root main.bicep contains the following code:

```bicep
// main.bicep
module storageModule './modules/storage/main.bicep' = {
  name: 'storageDeploy-${uniqueString(resourceGroup().id)}'
  params: {
    storageSku: 'Standard_LRS'
  }
}

// Accessing an output from the module
output storageId string = storageModule.outputs.saName
```

And our storage module (modules/storage/main.bicep):

```bicep
// main.bicep
@description('Storage Account Name (must be globally unique)')
param storageName string = 'store${uniqueString(resourceGroup().id)}'

@description('The location for the resource')
param location string = resourceGroup().location

@description('Storage SKU')
@allowed([
  'Standard_LRS'
  'Standard_GRS'
])
param storage...
```

azure bicep quickstart to create many different resources in Azure

This link provides lots of resources and sample code for working with Bicep: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts

model safe tensor - how to determine if model uses safe tensor

Potentially, you can check the model's main branch on Hugging Face. For example, for Qwen it is here: https://huggingface.co/Qwen/Qwen3.5-2B/tree/main. In the file listing you can see the .safetensors weight files.

aws sqs to read and write to a queue

To create an SQS queue with CloudFormation, we can use the following template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Creates SQS Queue with specific User Access'

Parameters:
  AccountId:
    Type: String
    Description: Your 12-digit AWS Account ID
    Default: '00000000'
  UserName:
    Type: String
    Description: IAM User to grant access
    Default: 'jeremydev'

Resources:
  MySQSQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: mytestsqs1
      VisibilityTimeout: 300

  MySQSQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref MySQSQueue
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: Allo...
```
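Once the queue exists, reading and writing is done through the SQS API. A hedged sketch of send/receive helpers (the helper names are mine; the functions accept any boto3-style SQS client, which also makes them easy to unit-test with a stub):

```python
def send_to_queue(sqs_client, queue_url: str, body: str) -> str:
    """Send one message and return its MessageId (boto3-style client assumed)."""
    resp = sqs_client.send_message(QueueUrl=queue_url, MessageBody=body)
    return resp["MessageId"]

def read_from_queue(sqs_client, queue_url: str, max_messages: int = 1) -> list:
    """Receive up to max_messages bodies, deleting each message after reading."""
    resp = sqs_client.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=2,
    )
    bodies = []
    for msg in resp.get("Messages", []):
        bodies.append(msg["Body"])
        # Delete so the message is not redelivered after the visibility timeout
        sqs_client.delete_message(
            QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
        )
    return bodies

# Against real AWS you would create the client roughly like this:
# import boto3
# sqs = boto3.client("sqs", region_name="us-east-1")
# queue_url = sqs.get_queue_url(QueueName="mytestsqs1")["QueueUrl"]
# send_to_queue(sqs, queue_url, "hello")
```

Deleting after a successful read is what makes this behave like a consume; skip the delete and the message reappears once its visibility timeout (300 seconds in the template above) expires.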