Creating an Ollama model from a Hugging Face model
First, we need to install the Hugging Face CLI and set an environment variable that enables faster downloads via hf_transfer.
pip install -U "huggingface_hub[cli]"
pip install -U "huggingface_hub[hf_transfer]"
export HF_HUB_ENABLE_HF_TRANSFER=1
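The export only applies to the current shell session. If you want fast transfers to be the default, you can append it to your shell profile (this assumes bash; adjust the file for your shell):
echo 'export HF_HUB_ENABLE_HF_TRANSFER=1' >> ~/.bashrc
source ~/.bashrc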
Next, we download the model we want. For example:
huggingface-cli download kepung/llama-fibo-gguf
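By default the file lands in the Hugging Face cache under a hashed snapshot directory. If you prefer a predictable location, huggingface-cli download also accepts specific filenames and a --local-dir flag; a sketch, assuming the GGUF file is named unsloth.Q8_0.gguf as in the cache path below:
huggingface-cli download kepung/llama-fibo-gguf unsloth.Q8_0.gguf --local-dir ./models
Either way, the command prints the local path of each downloaded file, which is exactly the path we need for the next step.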
Then we note down the path of the downloaded GGUF file and create a file named Modelfile whose FROM directive points to it:
FROM /root/.cache/huggingface/hub/models--kepung--llama-fibo-gguf/snapshots/3b583d43ffab9421fbf272464c26ed40dca13a15/unsloth.Q8_0.gguf
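The FROM line alone is enough, but the same Modelfile can also set inference parameters and a system prompt. A minimal sketch (the temperature value and system prompt are illustrative additions, not part of the original model):
# Modelfile
FROM /root/.cache/huggingface/hub/models--kepung--llama-fibo-gguf/snapshots/3b583d43ffab9421fbf272464c26ed40dca13a15/unsloth.Q8_0.gguf
PARAMETER temperature 0.7
SYSTEM You are a helpful coding assistant.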
Once you save the Modelfile, ensure Ollama is running. If it is not, run ollama serve.
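If you are not sure whether the server is up, you can probe its HTTP endpoint first (this assumes Ollama's default port, 11434; a running server replies with "Ollama is running"):
curl http://localhost:11434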
Then, in another terminal, run:
ollama create kepung/ollama-llama3-2 -f Modelfile
You should see output showing Ollama transferring the model data, creating the layers, writing the manifest, and finishing with a success message.
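After the create step succeeds, a quick way to confirm the model is registered and working is to list it and send it a test prompt (the prompt here is only an example):
ollama list
ollama run kepung/ollama-llama3-2 "Write a Python function that returns the nth Fibonacci number"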