AutoGen - getting started with the Gemini API

By default, the OpenAI client is initialized against the OpenAI platform (https://platform.openai.com/account/api-keys).

To use Gemini, initialize the client as shown below; otherwise requests go to the OpenAI endpoints instead of the Google AI endpoints (https://ai.google.dev).

import asyncio
import os

from autogen_core.models import ModelInfo, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # Point the OpenAI-compatible client at a Gemini model. The explicit
    # ModelInfo is required because this model family is not built in.
    # Older models such as "gemini-1.5-flash-8b" can be used the same way.
    model_client = OpenAIChatCompletionClient(
        model="gemini-2.0-flash-lite",
        model_info=ModelInfo(
            vision=True,
            function_calling=True,
            json_output=True,
            structured_output=True,
            family="unknown",
        ),
        api_key=os.environ["GEMINI_API_KEY"],  # never hard-code your key
    )

    response = await model_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(response)
    await model_client.close()


if __name__ == "__main__":
    asyncio.run(main())



You can find more models in the Gemini API docs: https://ai.google.dev/gemini-api/docs/models

