A machine (it can be the same one) running Ollama and a selected LLM. This machine requires an NVIDIA GPU with at least 8GB of VRAM; the more the better, and at least 12GB is recommended. For Second Life, I recommend starting with an LLM such as llama3.1. Ollama is preferred because it can dynamically load and unload LLMs, allowing the AI engine to be used for other tasks. The Ollama server also needs a fixed IP address. I do not recommend Dockerising Ollama, as it runs well on the host machine, “closer” to the GPU. I recommend Ubuntu 24.04 LTS server.
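As a rough sketch of that setup on Ubuntu 24.04: by default Ollama binds only to localhost, so to reach it from other machines you need to change its listen address. The IP address below is a placeholder for your server's fixed address, and the systemd override approach assumes Ollama was installed as a service via the official install script.

```shell
# Install Ollama directly on the host (official install script, no Docker).
curl -fsSL https://ollama.com/install.sh | sh

# Pull the model now so the first chat request doesn't stall on a download.
ollama pull llama3.1

# Ollama listens on 127.0.0.1:11434 by default. To serve other machines,
# add a systemd override setting OLLAMA_HOST, then restart the service:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl restart ollama

# Quick check from another machine (substitute your server's fixed IP):
curl http://192.168.1.50:11434/api/tags
```

Because Ollama unloads idle models after a timeout, the same GPU remains available for other workloads between requests.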