  - Persona: This is where you can put the AvaDroid's backstory. You can change this; about a paragraph will do. You don't have to do anything right away here.
  - This is where you put your UUID; if you want to add the UUIDs of other administrators, separate them with commas (see the example after this list). Cut and paste what Firestorm calls the "Key" from YOUR profile.
  - The LLM that Ollama will use. I recommend llama3.2 to start with, as it runs well on a GPU with 8 GB of VRAM or more and uses about 5 GB. You can just pull the LLM, but I recommend running it instead, which pulls it and starts a command-line session. Via SSH or a console on the machine running Ollama (it may be the same one as your Docker host or a totally different machine), run ''ollama run llama3.2''. Wait for the LLM to be pulled and the chat prompt to appear, then have a chat to test that it works. Type /bye to close. If you want to list your LLMs, type ''ollama list''. (A sample session is shown after this list.) Cut and paste the model name, ''llama3.2'' or whatever model you want to use, and leave off the tag, e.g. :latest in llama3.2:latest. llama2-uncensored will not work; disregard what the graphic says.
  - Edit the IP number of your Ollama server. It should look like %%http://%%<fc #ffff00>192.168.1.11</fc>:11434/api/generate . Change just the IP and leave **:11434/api/generate** as it is. (A quick way to test the endpoint is shown after this list.)
  - This section should reflect the settings you made previously, directly in the config.json file.
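For the administrator field, the value is just your avatar key, or several keys separated by commas. A minimal sketch with made-up placeholder UUIDs (use the real Keys copied from Firestorm profiles):

<code>
a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d, f0e1d2c3-b4a5-4968-8776-655443322110
</code>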
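Here is what that first Ollama session looks like on the command line, using the commands named above:

<code bash>
# Pull llama3.2 (if not already present) and open an interactive chat
ollama run llama3.2

# At the >>> prompt, chat a little to confirm it responds, then type:
#   /bye

# List the models installed locally; copy the name without the :latest tag
ollama list
</code>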
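If you want to check the endpoint before saving your settings, you can send a test prompt with curl. A minimal sketch, assuming your Ollama server is at 192.168.1.11 (substitute your own IP and model):

<code bash>
# POST a one-off prompt to the Ollama generate endpoint;
# "stream": false returns one JSON response instead of a token stream
curl http://192.168.1.11:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Say hello", "stream": false}'
</code>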