Creating an Ollama model from a Hugging Face model

First, we will need to install the Hugging Face CLI and set an environment variable to optimize the download speed.

pip install -U "huggingface_hub[cli]"

pip install -U "huggingface_hub[hf_transfer]"

export HF_HUB_ENABLE_HF_TRANSFER=1
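The export form above is for Linux/macOS shells. On Windows, the equivalents would be:

set HF_HUB_ENABLE_HF_TRANSFER=1          (Command Prompt)

$env:HF_HUB_ENABLE_HF_TRANSFER=1         (PowerShell)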


Next, we will download the model we want. For example:

huggingface-cli download kepung/llama-fibo-gguf

Then note down the local path of the downloaded model; the CLI prints it when the download finishes.
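If you did not catch the path, you can locate the GGUF file in the Hugging Face cache yourself (assuming the default cache location under ~/.cache/huggingface):

find ~/.cache/huggingface/hub -name "*.gguf"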

Then create a Modelfile with the following content. Change the path to match the model you downloaded.


FROM /root/.cache/huggingface/hub/models--kepung--llama-fibo-gguf/snapshots/3b583d43ffab9421fbf272464c26ed40dca13a15/unsloth.Q8_0.gguf
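Optionally, a Modelfile can also set inference parameters and a system prompt. A minimal sketch (the values and prompt text below are illustrative, not required):

FROM /root/.cache/huggingface/hub/models--kepung--llama-fibo-gguf/snapshots/3b583d43ffab9421fbf272464c26ed40dca13a15/unsloth.Q8_0.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096
SYSTEM You are a helpful assistant.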


Once you save the Modelfile, ensure Ollama is running. If it is not, start it with:

ollama serve


Then, in another command prompt, run:

ollama create kepung/ollama-llama3-2 -f Modelfile 

You should see Ollama import the GGUF file and create the model layers, ending with a success message.
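Once it reports success, you can verify that the model is registered and try it out:

ollama list

ollama run kepung/ollama-llama3-2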