To set up Ollama on Windows, follow these detailed steps:
Download and Install Ollama
- Download the Installer:
  - Visit the Ollama download page and click the “Download for Windows (Preview)” button. Ensure your system meets the minimum requirement: Windows 10 or later[1][3].
- Run the Installer:
  - Locate the downloaded `OllamaSetup.exe` file and double-click it to start the installation process.
  - Follow the on-screen instructions to complete the installation. The installer does not require administrator rights and installs Ollama in your user account[4].
- Verify Installation:
  - After the installation, you should see an Ollama icon in the system tray.
  - Open a terminal (Command Prompt, PowerShell, or any other terminal application) and type `ollama` to check if the command-line interface is accessible[6].
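If you prefer to script the verification step, it can be sketched in Python. The default API address `http://localhost:11434` is Ollama's documented default; the rest is a plain PATH and HTTP probe:

```python
import shutil
import urllib.error
import urllib.request


def ollama_cli_available() -> bool:
    """True if the `ollama` executable is on PATH."""
    return shutil.which("ollama") is not None


def ollama_server_running(base_url: str = "http://localhost:11434") -> bool:
    """True if the Ollama API server answers on its default address."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            # The root endpoint replies with a plain "Ollama is running" page.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Both checks return `False` rather than raising, so they are safe to call before Ollama is installed or started.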
Running Models with Ollama
- Open a Terminal:
  - Open Command Prompt or PowerShell.
- Run a Model:
  - Use the command `ollama run <model_name>` to run a model. For example, to run the LLaVA model, type: `ollama run llava`
  - If the model is not already downloaded, Ollama will automatically download it before running[3][6].
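The same `ollama run` step can be driven from a script. Here is a minimal sketch using Python's subprocess module; the model name is whatever you would type on the command line:

```python
import subprocess
from typing import List


def build_run_command(model: str, prompt: str) -> List[str]:
    """Argv for a one-shot `ollama run <model> "<prompt>"` invocation."""
    return ["ollama", "run", model, prompt]


def run_prompt(model: str, prompt: str) -> str:
    """Run a single prompt through the CLI and return the reply from stdout."""
    result = subprocess.run(
        build_run_command(model, prompt),
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Passing the prompt as a trailing argument makes `ollama run` answer once and exit instead of opening an interactive session, which is what makes it scriptable.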
Using the Ollama API (Optional)
- API Access:
  - Ollama runs an API server on `http://localhost:11434` by default. You can interact with it using HTTP requests.
  - Example using PowerShell:
    `(Invoke-WebRequest -Method POST -Body '{"model":"llama2", "prompt":"Why is the sky blue?", "stream": false}' -Uri http://localhost:11434/api/generate).Content | ConvertFrom-Json`
  - This command sends a prompt to the Llama 2 model and returns the response[3][4].
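The same request can be made from Python instead of PowerShell. A minimal sketch against the `/api/generate` endpoint using only the standard library; the payload fields mirror the PowerShell example above:

```python
import json
import urllib.request

API_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(model: str, prompt: str) -> bytes:
    """Encode the JSON body expected by /api/generate (non-streaming)."""
    body = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(body).encode("utf-8")


def generate(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return its reply text."""
    req = urllib.request.Request(
        API_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `"stream": False` the server returns a single JSON object whose `response` field holds the full answer; without it, the reply arrives as a stream of JSON lines.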
Setting Environment Variables (Optional)
- Configure Environment Variables:
  - If you need Ollama to listen on all interfaces, set the `OLLAMA_HOST` environment variable.
  - Open the Control Panel (Windows 10) or Settings (Windows 11), navigate to System > Advanced system settings > Environment Variables.
  - Add a new user variable:
    - Variable name: `OLLAMA_HOST`
    - Variable value: `0.0.0.0:8080` (or another port if 8080 is in use)[13][14].
  - Quit and restart Ollama from the system tray so the new value takes effect[13].
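To sanity-check the variable from a script, here is a hedged sketch. The `host:port` parsing below is an assumption that matches the `0.0.0.0:8080` form used above, not Ollama's exact parsing code, which also accepts host-only and scheme-prefixed forms:

```python
import os


def ollama_bind_address(default: str = "127.0.0.1:11434") -> "tuple[str, int]":
    """(host, port) requested via OLLAMA_HOST, falling back to the default."""
    value = os.environ.get("OLLAMA_HOST", default)
    host, _, port = value.partition(":")
    # Assume host:port; if no port is given, fall back to Ollama's 11434.
    return host, int(port) if port else 11434
```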
Using Ollama with Open WebUI (Optional)
- Install Docker:
  - Ensure Docker Desktop is installed and running on your system.
- Run Open WebUI:
  - Use the following Docker command to deploy Open WebUI:
    `docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main`
  - Access Open WebUI at `http://localhost:3000` in your web browser. Follow the instructions to set up and connect it to Ollama[5][10].
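Because the container can take a moment to start, a small poll loop is handy before opening the browser. A sketch, assuming the published port 3000 from the `docker run` command above:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout_s: float = 30.0) -> bool:
    """Poll until a TCP port accepts connections, or give up after timeout_s."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)  # not up yet; retry shortly
    return False


# e.g. wait_for_port("127.0.0.1", 3000) before visiting http://localhost:3000
```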
By following these steps, you can successfully install and run Ollama on a Windows system, allowing you to leverage various AI models locally.
Citations:
[1] https://ollama.com/download/windows
[2] https://www.linkedin.com/pulse/ollama-windows-here-paul-hankin-2fwme
[3] https://ollama.com/blog/windows-preview
[4] https://github.com/ollama/ollama/blob/main/docs/windows.md
[5] https://www.gpu-mart.com/blog/how-to-install-and-use-ollama-webui-on-windows
[6] https://www.jeremymorgan.com/blog/generative-ai/ollama-windows/
[7] https://www.reddit.com/r/ollama/comments/1buv5ct/made_a_quick_tutorial_on_installing_ollama_on/
[8] https://www.youtube.com/watch?v=3t_P0tDvRCE
[9] https://www.youtube.com/watch?v=3W-trR0ROUY
[10] https://www.doprax.com/tutorial/a-step-by-step-guide-for-installing-and-running-ollama-and-openwebui-locally-part-1/
[11] https://www.youtube.com/watch?v=0eTu2pirOcA
[12] https://www.youtube.com/watch?v=AI2OLoPsn7E
[13] https://github.com/ollama/ollama/blob/main/docs/faq.md
[14] https://www.thetechnerd.org/articles/mastering-ollama-a-step-by-step-guide-to-installing-ollama-and-the-open-webui-frontendmastering-ollama-a-step-by-step-guide-to-installing-ollama-and-the-open-webui-frontend