Use Ollama with any GGUF Model on Hugging Face Hub


Ollama is an application based on llama.cpp that lets you interact with LLMs directly on your computer. You can use any GGUF quants created by the community (bartowski, MaziyarPanahi and many more) on Hugging Face directly with Ollama, without creating a new Modelfile. At the time of writing there are 45K public GGUF checkpoints on the Hub; you can run any of them with a single ollama run command. We also provide customisations like choosing the quantization type, the system prompt and more to improve your overall experience.

Getting started is as simple as:

ollama run hf.co/{username}/{repository}

Please note that you can use both hf.co and huggingface.co as the domain name.
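
For example, both commands below pull the same model:

ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
ollama run huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF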

Here are some models you can try:

ollama run hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
ollama run hf.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated-GGUF
ollama run hf.co/arcee-ai/SuperNova-Medius-GGUF
ollama run hf.co/bartowski/Humanish-LLama3-8B-Instruct-GGUF

Custom Quantization

By default, the Q4_K_M quantization scheme is used, when it's present inside the model repo. If not, we default to picking one reasonable quant type present inside the repo.

To select a different scheme, simply add it as a tag:

ollama run hf.co/{username}/{repository}:{quantization}

For example:

ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:IQ3_M
ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Q8_0


The quantization name is case-insensitive, so this will also work:

ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:iq3_m


You can also use the exact GGUF filename as the tag:

ollama run hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF:Llama-3.2-3B-Instruct-IQ3_M.gguf

Custom Chat Template and Parameters

By default, a chat template is selected automatically from a list of commonly used templates. The choice is based on the built-in tokenizer.chat_template metadata stored inside the GGUF file.
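
If you want to check which template metadata your GGUF file carries, one option is the gguf-dump script that ships with the gguf Python package from the llama.cpp project; the invocation below is a sketch assuming that tool is installed, and the file path is a placeholder:

pip install gguf
gguf-dump path/to/model.gguf

Look for the tokenizer.chat_template key in the printed metadata.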

If your GGUF file doesn’t have a built-in template, or if you want to customize your chat template, you can create a new file called template in the repository. The template must be a Go template, not a Jinja template. Here’s an example:

{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>

To learn more about the Go template format, please refer to this documentation.

You can optionally configure a system prompt by putting it into a new file named system in the repository.
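
The system file is plain text containing the prompt itself and nothing more; the wording below is only an illustration:

You are a concise assistant. Answer in English and say so when you are unsure.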

To change sampling parameters, create a file named params in the repository. The file must be in JSON format. For the list of all available parameters, please refer to this documentation.
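
As a minimal sketch, a params file that sets a few commonly documented Ollama parameters (temperature, top_p and the context window size num_ctx) could look like this; the values are arbitrary examples, not recommendations:

{
  "temperature": 0.7,
  "top_p": 0.9,
  "num_ctx": 4096
}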
