Posts

opencode setting up skills

opencode can be extended with skills. To set this up, create an `opencode.json` config file in the folder where you run opencode, then place the following content in it. It sets up obra's superpowers and opencode-devcontainers. superpowers is a skills plugin that supports application development with your model: it helps with writing plans, requesting code reviews, and brainstorming. devcontainers provide workspace isolation by running your code in a container instead of touching the host. To install both of these, add this to your `opencode.json` file:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": [
    "superpowers@git+https://github.com/obra/superpowers.git",
    "opencode-devcontainers"
  ]
}
```

More info:
https://github.com/athal7/opencode-devcontainers
https://github.com/obra/superpowers

opencode using qwen 3.6 code for free

Opencode supports qwen 3.6 and you can use it for free. Here is how. Download and install opencode:

```shell
curl -fsSL https://opencode.ai/install | bash
```

Then fire it up in your terminal. Press / (slash) and choose qwen 3.6 as your model. And that's it. Give it a go, ask it to write some code.

Google ADK - taking a look at LLMAgent, Agent, AgentTool, SubAgent, SequentialAgent and ParallelAgent

This covers some common confusion around agents in Google ADK. So what are the differences between LlmAgent, Agent, SequentialAgent and ParallelAgent?

LlmAgent is used for reasoning, understanding your intention, making decisions, and interacting with tools. Here is an example of using LlmAgent:

```python
capital_agent = LlmAgent(
    model="gemini-2.5-flash",
    name="capital_agent",
    description="Answers user questions about the capital city of a given country."
    # instruction and tools will be added next
)
```

Agent is an alias for LlmAgent.

SequentialAgent, on the other hand, runs a pre-defined sequential workflow carried out by an orchestration agent, in this case an Agent / LlmAgent. A code example of SequentialAgent:

```python
location_strategy_pipeline = SequentialAgent(
    name="LocationStrategyPipeline",
    description="""Comprehensive retail location strategy analysis pipeline."...
```
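The excerpt cuts off before ParallelAgent. As a minimal sketch of the contrast with SequentialAgent: a ParallelAgent fans out to its sub-agents concurrently rather than running them in order. The agent names below (`weather_agent`, `news_agent`) are hypothetical.

```python
from google.adk.agents import LlmAgent, ParallelAgent

# Two independent agents (hypothetical names) that do not depend on each other
weather_agent = LlmAgent(
    model="gemini-2.5-flash",
    name="weather_agent",
    description="Looks up current weather for a city.",
)
news_agent = LlmAgent(
    model="gemini-2.5-flash",
    name="news_agent",
    description="Summarises today's headlines for a city.",
)

# ParallelAgent runs its sub_agents concurrently instead of sequentially
city_briefing = ParallelAgent(
    name="city_briefing",
    sub_agents=[weather_agent, news_agent],
)
```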

bicep using user defined function

User defined functions can be quite handy and save us some time. Here is an example of how we can define a user defined function (similar to Go):

```bicep
func buildResourceName(prefix string, env string, suffix string) string =>
  toLower('${prefix}-${env}-${suffix}')
```

And to use it, we simply call it, here in our storage container example:

```bicep
resource container 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01' = [for name in containers: {
  parent: blobService
  name: buildResourceName(name, 'dev', 'aue')
  properties: {
    publicAccess: 'None'
    metadata: {
      project: 'audit-2026'
    }
  }
}]
```

You can find all the built-in bicep functions here: https://learn.microsoft.com/en-us/azure/azure-resource-manager/bicep/bicep-functions

Placing your function in another file

You can place all your functions in a separate file and import them in. functions....
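The excerpt cuts off at the separate-file part. As a sketch, assuming a file named `functions.bicep`, the pattern uses the `@export()` decorator on the function and a compile-time import in the consuming file (the storage account below is a hypothetical consumer):

```bicep
// functions.bicep
@export()
func buildResourceName(prefix string, env string, suffix string) string =>
  toLower('${prefix}-${env}-${suffix}')
```

```bicep
// main.bicep
import { buildResourceName } from 'functions.bicep'

param location string = 'australiaeast'

resource storageAccount 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: buildResourceName('audit', 'dev', 'aue')
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```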

bicep using if condition and for keyword

We can use if to control whether we should create a resource in bicep. For example, in this code we are using "if" to decide whether the resource gets created:

```bicep
resource container 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01' = if (deploy) {
  parent: blobService
  name: name
  properties: {
    publicAccess: 'None'
    metadata: {
      project: 'audit-2026'
    }
  }
}
```

And then we can use "for" to create multiple containers. Here we loop over our array called containers to create multiple containers in our storage account:

```bicep
param containers array = [
  'container1'
  'container2'
  'container3'
]

resource container 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01' = [for name in containers: {
  parent: blobService
  name: name
  properties: {
    publicAcc...
```
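The two keywords can also be combined, so a loop only creates its resources when a condition holds. A minimal sketch, reusing the `deploy` parameter and `containers` array from above:

```bicep
// Creates the containers only when deploy evaluates to true
resource container 'Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01' = [for name in containers: if (deploy) {
  parent: blobService
  name: name
  properties: {
    publicAccess: 'None'
  }
}]
```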

bicep working with existing resources and creating additional resources out of it

We can use bicep with an existing resource. Let's say we have a storage account called "mystorage2026jw". To reference it, we can use the existing keyword. Bicep is very particular about the placement of the curly braces after the equals sign; otherwise you will get a BCP007 error.

storage-account.bicep:

```bicep
param resourceName string = 'mytest-kv-rg'
param name string = 'mystorage2026jw'

resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: name
  scope: resourceGroup(resourceName)
}

output id string = storageAccount.id
output currentStorageAccountName string = storageAccount.name
```

Run az bicep to validate that our code is good:

```shell
az bicep build --file storage-account.bicep
```

To see your what-if plan:

```shell
az deployment group what-if -g mytest-kv-rg --template-file storage-account.bicep --query "properties.outputs"
```

And to run the actual deployment, you do:

```shell
az deployment group create -g mytest-kv-rg --templ...
```
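The title also mentions creating additional resources out of the existing one. A minimal sketch of that idea, deployed into the same resource group as the existing account; the container name below is hypothetical:

```bicep
// Reference the existing storage account
resource storageAccount 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
  name: 'mystorage2026jw'
}

// Reference its (always-present) default blob service
resource blobService 'Microsoft.Storage/storageAccounts/blobServices@2021-09-01' existing = {
  parent: storageAccount
  name: 'default'
}

// Create a new blob container under the existing account
resource auditContainer 'Microsoft.Storage/storageAccounts/blobServices/containers@2021-09-01' = {
  parent: blobService
  name: 'audit-logs' // hypothetical container name
  properties: {
    publicAccess: 'None'
  }
}
```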

terraform provider alias

Terraform provider aliases allow us to have different configurations when provisioning resources. In Azure terms, that means creating resources in different subscriptions. Let's illustrate this quickly. Here we are configuring the provider with different subscriptions:

```terraform
# Default Provider (Subscription A)
provider "azurerm" {
  features {}
  subscription_id = "00000000-0000-0000-0000-000000000000"
}

# Secondary Provider (Subscription B)
provider "azurerm" {
  alias           = "sub_b"
  features {}
  subscription_id = "11111111-1111-1111-1111-111111111111"
}
```

And to use these providers, we can try to create resource groups:

```terraform
resource "azurerm_resource_group" "rg_in_sub_a" {
  name     = "primary-resources"
  location = "East US"
  # No provider specified, so it uses the default
}

resource "azurerm_resource_group" "rg_in_sub_b" {
  provi...
```
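The excerpt cuts off at the second resource group. A sketch of how an aliased provider is typically selected, via the `provider` meta-argument (the group name below is hypothetical):

```terraform
resource "azurerm_resource_group" "rg_in_sub_b" {
  # provider = <provider>.<alias> routes this resource to Subscription B
  provider = azurerm.sub_b
  name     = "secondary-resources" # hypothetical name
  location = "East US"
}
```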

terraform moved block - a better way to let terraform know you have moved your resources

Sometimes we refactor our terraform code and move resources around. Let's say we have the following code:

```terraform
# main.tf (Root level)
resource "azurerm_storage_account" "old_storage" {
  name                     = "mystorage2026"
  resource_group_name      = "my-rg"
  location                 = "East US"
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

And then we refactor our code into the following structure.

The module code (modules/storage/main.tf). Notice that inside the module, I've given the resource a new local name: modular_storage.

```terraform
resource "azurerm_storage_account" "modular_storage" {
  name                     = "mystorage2026jw"
  resource_group_name      = "m...
```
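The excerpt stops before the moved block itself. A sketch of what it would look like in the root module, assuming the module is instantiated as `module "storage"`:

```terraform
# main.tf (Root level), after the refactor
module "storage" {
  source = "./modules/storage"
}

# Tell terraform the old root-level resource now lives inside the module,
# so it rewrites state instead of destroying and recreating the account
moved {
  from = azurerm_storage_account.old_storage
  to   = module.storage.azurerm_storage_account.modular_storage
}
```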

gcp running serverless dataproc with a hello world python script

Pyspark Pi calculation. To get things started, we need some pyspark code. This is a simple pi.py; upload it to your google bucket.

```python
import sys
from random import random
from operator import add

from pyspark.sql import SparkSession

if __name__ == "__main__":
    """
        Usage: pi [partitions]
    """
    spark = SparkSession \
        .builder \
        .appName("PythonPi") \
        .getOrCreate()

    partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    n = 100000 * partitions

    def f(_: int) -> float:
        x = random() * 2 - 1
        y = random() * 2 - 1
        return 1 if x ** 2 + y ** 2 <= 1 else 0

    count = spark.sparkContext.parallelize(range(1, n...
```
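The excerpt stops before the submit step. Assuming the script was uploaded to a bucket (the bucket, region, and argument below are hypothetical placeholders), a Dataproc Serverless batch is typically submitted with:

```shell
# bucket, region and partition count are placeholders
gcloud dataproc batches submit pyspark gs://my-bucket/pi.py \
  --region=australia-southeast1 \
  -- 10
```

The trailing `-- 10` passes the partition count through to the script's `sys.argv`.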

ADK using Vertex AI RAG

To use RAG with ADK, we need to set up our RAG engine corpus and tell our agent to use it as a point of reference. As you can see here, RAG_CORPUS contains the details of this endpoint.

Then we use a tool called VertexAiRagRetrieval:

```python
from google.adk.tools.retrieval.vertex_ai_rag_retrieval import (
    VertexAiRagRetrieval,
)
```

Then we hook it up to our agent to make this knowledge accessible:

```python
# Initialize tools list
tools = []

# Only add RAG retrieval tool if RAG_CORPUS is configured
rag_corpus = os.environ.get("RAG_CORPUS")
if rag_corpus:
    ask_vertex_retrieval = VertexAiRagRetrieval(
        name="retrieve_rag_documentation",
        description=(
            "Use this tool to retrieve documentation and reference materials for the question from the RAG corpus, "
        ),
        rag_resources...
```
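The excerpt cuts off at `rag_resources`. A sketch of how the tool is commonly completed and attached to an agent, assuming the `vertexai` RAG types and the tuning values shown (check them against the SDK version you are on):

```python
import os

from vertexai.preview import rag
from google.adk.agents import LlmAgent
from google.adk.tools.retrieval.vertex_ai_rag_retrieval import VertexAiRagRetrieval

# Full resource name of the corpus, e.g. projects/.../ragCorpora/...
rag_corpus = os.environ["RAG_CORPUS"]

ask_vertex_retrieval = VertexAiRagRetrieval(
    name="retrieve_rag_documentation",
    description="Retrieve documentation and reference materials from the RAG corpus.",
    rag_resources=[rag.RagResource(rag_corpus=rag_corpus)],
    similarity_top_k=10,          # hypothetical tuning values
    vector_distance_threshold=0.6,
)

root_agent = LlmAgent(
    model="gemini-2.5-flash",
    name="rag_agent",
    instruction="Answer using the retrieval tool when relevant.",
    tools=[ask_vertex_retrieval],
)
```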

ADK app deployment to Vertex AI and consuming it

This is routine work: we constantly need to deploy, and then consume, the agentic apps we push to Vertex AI. To deploy, we can use the following code:

```python
import logging
import os

import vertexai
from dotenv import set_key
from vertexai import agent_engines
from vertexai.preview.reasoning_engines import AdkApp

from rag.agent import root_agent

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

GOOGLE_CLOUD_PROJECT = os.getenv("GOOGLE_CLOUD_PROJECT")
GOOGLE_CLOUD_LOCATION = os.getenv("GOOGLE_CLOUD_LOCATION")
STAGING_BUCKET = os.getenv("STAGING_BUCKET")

# Define the path to the .env file relative to this script
ENV_FILE_PATH = os.path.abspath(
    os.path.join(os.path.dirname(__file__), "..", ".env")
)

vertexai.init(
    project=GOOGLE_CLOUD_PROJECT,
    location=GOOGLE_CLOUD_LOCATION,
    staging_bucket=S...
```
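The excerpt stops mid-deployment. On the consuming side, a minimal sketch of querying an already-deployed agent engine, assuming you kept its resource name from the deploy step (the project, location, and engine ID below are placeholders):

```python
import vertexai
from vertexai import agent_engines

vertexai.init(project="my-project", location="us-central1")  # placeholder values

# Resource name returned by the deployment step (placeholder ID)
remote_agent = agent_engines.get(
    "projects/my-project/locations/us-central1/reasoningEngines/1234567890"
)

# Create a session, then stream the agent's response events
session = remote_agent.create_session(user_id="user-1")
for event in remote_agent.stream_query(
    user_id="user-1",
    session_id=session["id"],
    message="What does the RAG corpus say about deployment?",
):
    print(event)
```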