Ollama basic intro
Ollama (with a double l :)) makes it easy to download and run an LLM on your local machine. No boilerplate code, and you get your model set up as a service ready to accept REST requests. Not only that, once you have Ollama running, you can just type away to ask it about anything and get a response.
For example, I started my Ollama instance and typed something like exit(). It came back with a bunch of details about Python. I think Llama 3.2 is pretty good with programming :)
It hides a lot of the complexity involved. If you're thinking of trying out the Llama 3.2 model, you can try this out.
Download the binaries and then proceed to download your model. In my case, I am running Llama 3.2:
ollama run llama3.2
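If you only want to download the model without dropping into the interactive chat, or the background service isn't running yet, these two commands should cover it (a minimal sketch; exact behavior depends on how Ollama was installed, since some installers start the service for you):
ollama pull llama3.2   # download the model without starting a chat session
ollama serve           # start the REST API server (listens on port 11434 by default)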
Then I just have to issue a curl command against it. Notice that I have set the model name accordingly.
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [
    {
      "role": "user",
      "content": "who wrote the book godfather?"
    }
  ],
  "stream": false
}'
Since streaming is off, this returns a single JSON response, with the model's answer in the message.content field.
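If you'd rather call the API from Python than curl, here's a minimal sketch using the requests library (assuming you have it installed via pip install requests):
import requests

# Same payload as the curl command above, sent to the local Ollama server
payload = {
    "model": "llama3.2",
    "messages": [
        {"role": "user", "content": "who wrote the book godfather?"}
    ],
    "stream": False,
}

response = requests.post("http://localhost:11434/api/chat", json=payload)
response.raise_for_status()

# With streaming off, the reply is one JSON object; the answer lives in message.content
print(response.json()["message"]["content"])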
To use LangChain for this, first install the necessary libraries by running this in your command prompt:
pip install langchain langchain-community langchain-core
Next, you can use the following code to set up a chat prompt template:
from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOllama(model="llama3.2", temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm

chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
From the code above, notice how we place placeholder parameters in the template, such as {input_language} and {output_language} in ChatPromptTemplate. When we invoke the chain, the actual values are substituted in.
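To see that substitution on its own, without calling the model at all, you can render the template directly using ChatPromptTemplate's format_messages method:
# Renders the template into concrete messages without invoking the LLM
messages = prompt.format_messages(
    input_language="English",
    output_language="German",
    input="I love programming.",
)
print(messages)
# Expect something like:
# [SystemMessage(content='You are a helpful assistant that translates English to German.'),
#  HumanMessage(content='I love programming.')]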
Invoking the chain returns an AIMessage, whose content holds the German translation.
Here's another sample, getting the LLM to return only the difference between two texts:
from langchain_community.chat_models import ChatOllama
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOllama(model="llama3.2", temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that understands the diff between the given input: {input_language} and the expected output: {output_language}. Ignore everything else, including newlines and whitespace. Highlight only the difference.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm

response = chain.invoke(
    {
        "input_language": "the brown fox jump over the fence",
        "output_language": "the brown fox jump over the gate",
        "input": "show difference",
    }
)
print(response.content)
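The chain can also stream tokens as they are generated instead of waiting for the full reply; here is a minimal sketch using the chain's stream method (each chunk carries a piece of the response in its content attribute):
# Stream the reply token by token instead of blocking on the full response
for chunk in chain.stream(
    {
        "input_language": "the brown fox jump over the fence",
        "output_language": "the brown fox jump over the gate",
        "input": "show difference",
    }
):
    print(chunk.content, end="", flush=True)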
Reference: https://python.langchain.com/docs/integrations/chat/ollama/