nosia-ai/nosia: Nosia is a platform that allows you to run an AI model on your own data. It is designed to be easy to install and use.


Nosia is a platform that allows you to run an AI model on your own data.
It is designed to be easy to install and use.


POC-Nosia-install.mp4



POC-RAG-AI-Rails-8.mp4


macOS, Debian or Ubuntu one command installation

It will install Docker, Ollama, and Nosia on a macOS, Debian, or Ubuntu machine.

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh | sh

You should see the following output:

✅ Setting up environment
✅ Setting up Docker
✅ Setting up Ollama
✅ Starting Ollama
✅ Starting Nosia

You can now access Nosia at https://nosia.localhost
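
If you would rather not pipe a remote script straight into your shell, you can also download the installer first, review it, and then run it:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh -o nosia-install.sh
less nosia-install.sh
sh nosia-install.sh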

macOS installation with a Debian or Ubuntu VM

On macOS, install Homebrew:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then install Ollama with Homebrew:

brew install ollama

Replace <HOST_IP> with the IP address of the host machine and run the following command:

OLLAMA_HOST=<HOST_IP>:11434 OLLAMA_MAX_LOADED_MODELS=3 ollama serve
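
Before moving on, you can sanity-check that Ollama is reachable over the network from another machine; its root endpoint replies with "Ollama is running":

curl http://<HOST_IP>:11434/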

On the Debian/Ubuntu VM:

Replace <HOST_IP> with the IP address of the host machine and run the following command:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh | OLLAMA_BASE_URL=http://<HOST_IP>:11434 sh

You should see the following output:

✅ Setting up environment
✅ Setting up Docker
✅ Setting up Ollama
✅ Starting Ollama
✅ Starting Nosia

From the VM, you can access Nosia at https://nosia.localhost
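
A quick check from the VM's shell should also get a response (-k skips certificate verification, in case the local certificate is not trusted by curl):

curl -k -I https://nosia.localhost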

If you want to access Nosia from the host machine, you may need to forward the port from the VM to the host machine.

Replace <USER> with the username of the VM, <VM_IP> with the IP address of the VM, and <LOCAL_PORT> with the port you want to use on the host machine (8443, for example), and run the following command:

ssh <USER>@<VM_IP> -L <LOCAL_PORT>:localhost:443

After running the command, you can access Nosia at https://nosia.localhost:<LOCAL_PORT>.
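
For example, with a hypothetical user ubuntu, a VM at 192.168.64.5, and local port 8443:

# hypothetical values; substitute your own
ssh ubuntu@192.168.64.5 -L 8443:localhost:443

You would then browse to https://nosia.localhost:8443 on the host machine.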

Installation with custom models

By default, Nosia uses the qwen2.5 completion model, the nomic-embed-text embeddings model, and the bespoke-minicheck checking model.

You can use any completion model available on Ollama by setting the LLM_MODEL environment variable during the installation.

For example, to use the mistral model, replace <HOST_IP> with the IP address of the host machine and run the following command:

curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh | OLLAMA_BASE_URL=http://<HOST_IP>:11434 LLM_MODEL=mistral sh

At this time, the nomic-embed-text embeddings model is required for Nosia to work.
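
If you want a model available on the Ollama host ahead of time, or want to add another one later, you can fetch it by name with ollama pull:

ollama pull mistral
ollama pull nomic-embed-text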

Starting, upgrading, and stopping the services

You can start, upgrade, and stop the services with the following commands:

cd nosia
./script/start
./script/upgrade
./script/stop
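
To see what state the services are in, one option is to ask Docker Compose directly, using the same compose file referenced in the troubleshooting section below:

docker compose -f ./docker-compose.yml ps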

OpenAI chat compatible API

  1. Go as a logged-in user to https://nosia.localhost/api_tokens

  2. Generate and copy your token

  3. Use your favorite OpenAI chat API client, configuring the API base to https://nosia.localhost/v1 and the API key with your token.

  1. Install HTTPie CLI or use any HTTP client of your choice.

  2. Try the stream API by creating a test-stream.json file with the following content:

{"messages":[{"role":"user","content":"When Ruby 3.3.7 will be released?"}],"model":"qwen2.5","stream":true,"top_p":0.9,"top_k":40.0,"temperature":0.1}

  3. Replace <token> with your token and run the following command (a curl equivalent is shown after this list):

http -A bearer -a <token> --stream POST https://nosia.localhost/v1/completions < test-stream.json

  4. Try the API without streaming by creating a test-non-stream.json file with the following content:

{"messages":[{"role":"user","content":"When Ruby 3.3.7 will be released?"}],"model":"qwen2.5","stream":false,"top_p":0.9,"top_k":40.0,"temperature":0.1}

  5. Replace <token> with your token and run the following command:

http -A bearer -a <token> POST https://nosia.localhost/v1/completions < test-non-stream.json

  6. In your ~/.continue/config.json, configure a nosia model:

  "models": [
    {
      "model": "nosia",
      "provider": "openai",
      "apiBase": "https://nosia.localhost/v1",
      "apiKey": "token",
      "title": "Nosia"
    }
  ]

  7. Enjoy!
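
If you prefer curl over HTTPie, an equivalent call to the streaming endpoint might look like this, reusing the same <token> placeholder and JSON file as above (-N disables output buffering so streamed chunks appear as they arrive; add -k if the local certificate is not trusted):

curl -N -H "Authorization: Bearer <token>" -H "Content-Type: application/json" -d @test-stream.json https://nosia.localhost/v1/completions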

If you come across any issue:

  • during the installation, you can check the logs at ./log/production.log
  • during use, while waiting for an AI response, you can check the jobs at http://<HOST>:3000/jobs
  • with Nosia, you can check the logs with docker compose -f ./docker-compose.yml logs -f
  • with the Ollama server, you can check the logs at ~/.ollama/logs/server.log

If you need further assistance, please open an issue!
