How to Run an AI on Your Own PC
Artificial intelligence (AI) is evolving rapidly, and we’re now in an era where everyday users interact with multiple AI models, much like mobile apps. Today, assistant chatbots are built into almost every website and application. Although there’s no centralized app market like Google Play or the App Store for AI models, there are still ways to run them on your own devices. In this article, we’ll explore the simplest and most efficient way to run advanced open-source models, such as Llama, on your own computer.
First, it’s important to note that most well-known AI services run on powerful data-center hardware. Companies like OpenAI rely on state-of-the-art GPUs such as the NVIDIA A100 and terabytes of memory, so your own computer will feel slow by comparison. However, don’t worry: there are open-source models with fewer parameters designed to run on standard computers.
Advantages of Running AI on Your Own Computer
While there are many reasons to run AI locally, here are the top benefits of setting up AI on your own device.
Data Privacy & Security
Your data remains on your device, ensuring maximum privacy and security. Running AI locally minimizes the risk of data breaches, as no information is shared with external servers or third parties.
Control & Independence
Running AI on your computer gives you complete control over the entire process. You’re not reliant on external services, allowing you to work offline and manage the AI environment according to your preferences.
Customization & Flexibility
You can tailor AI models to suit your specific needs, enhancing performance and accuracy. The open-source nature of many AI models allows for extensive customization, giving you the flexibility to adapt solutions to a wide range of applications.
Cost Efficiency
Running AI locally on your computer is a cost-effective solution, as it eliminates the need for expensive cloud services. By utilizing your own hardware, you can control expenses and avoid recurring subscription fees.
How to Set Up AI on Your Own Computer
As mentioned earlier, there isn’t a single application marketplace for this. Let’s look at the simplest way to achieve it. I use three main tools for this process. In this guide, you’ll learn step-by-step how to run AI on your computer using Docker, Ollama, and Open WebUI. Let’s explore why we use these tools and how to set them up:
What is Docker and How to Install It?
Docker is a container platform that allows us to package and run applications along with their dependencies. In AI projects, Docker enables us to run applications consistently across different operating systems, isolate all necessary dependencies, and distribute them easily. This facilitates the quick installation and execution of AI models. Follow the steps for installing Docker based on your operating system.
Installing Docker on Windows
1. Download Docker Desktop from the official Docker website.
2. After downloading, run the installer and follow the installation instructions.
3. Once installed, start Docker and verify it’s running by executing docker --version in the terminal.
Installing Docker on macOS
1. Download Docker Desktop for macOS from the official Docker website.
2. Once the download is complete, open the DMG file and drag Docker to the Applications folder.
3. Start Docker and verify it’s running by executing docker --version in the terminal.
Installing Docker on Linux
1. Open the terminal and install Docker. The docker-ce, docker-ce-cli, and containerd.io packages are not in the default Ubuntu/Debian repositories, so the easiest route is Docker’s official convenience script, which sets up Docker’s package repository and installs everything for you:
curl -fsSL https://get.docker.com | sh
2. Start Docker and verify it’s running by executing docker --version in the terminal.
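Beyond checking the version, a quick way to confirm that the Docker daemon itself is working is to run Docker’s tiny hello-world test image. This is a minimal sanity check and assumes Docker Desktop (or the Docker service on Linux) has already been started:

```shell
# Check that the client is installed
docker --version

# Check that the daemon is reachable (docker info fails if it is not running)
docker info > /dev/null && echo "Docker daemon is running"

# Pull and run Docker's official test image; it prints a greeting and exits
docker run --rm hello-world
```

If the last command prints “Hello from Docker!”, your installation can pull images and run containers, which is everything the rest of this guide needs.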
What is Ollama and How to Install It?
Ollama is a tool optimized for running AI models locally. It runs text-based models directly on your machine, with no cloud connection required, and delivers responses with low latency. Follow the installation steps for Ollama based on your operating system.
Installing Ollama on Windows
1. Download the Windows version of Ollama from the official Ollama website.
2. Run the installer and complete the installation.
3. Open the terminal and verify the installation by running ollama --version.
Installing Ollama on macOS
1. Download the macOS version of Ollama from the official Ollama website.
2. Open the DMG file and drag Ollama to the Applications folder.
3. Open the terminal and verify the installation by running ollama --version.
Installing Ollama on Linux
1. Ollama is not available in the default apt repositories, so open the terminal and run the official install script instead:
curl -fsSL https://ollama.com/install.sh | sh
2. Verify the installation by running ollama --version in the terminal.
What is Open WebUI and How to Install It?
Open WebUI is a tool that allows you to control and interact with AI models through a web interface. It provides users with the ability to manage models via a graphical interface, analyze model outputs, and monitor their performance. It offers ease of use and flexibility.
If you want to install Open WebUI on your computer using Docker, open the command line and run the following command:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
This command will start a container in Docker and make Open WebUI accessible at http://localhost:3000.
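If the page at http://localhost:3000 doesn’t load, you can inspect the container from the terminal. The container name and ports below match the docker run command above:

```shell
# Confirm the container exists and is running
docker ps --filter "name=open-webui" --format "{{.Names}}: {{.Status}}"

# Look at recent logs if something seems wrong
docker logs --tail 20 open-webui

# Restart the container after a crash or a host reboot
docker restart open-webui
```

Because the container was started with --restart always, Docker will normally bring it back up on its own after a reboot; the manual restart is only needed if you stopped it yourself.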
If you are installing Open WebUI on another device, you can refer to the GitHub page for installation instructions.
How to Install an AI Model on Your Own Computer
When it comes to finding and downloading models, we’ll use Ollama. As an example, the steps below show how to obtain the command needed to install Llama 3.1. By following the same steps, you can install any model you wish.
How to Find a Model with Ollama
- Visit the Ollama Library page.
- Enter the name of the AI model you want to install.
- Select the version of the model you wish to install and copy the command shown on the right.
- Paste the copied command into your terminal to install the model.
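For example, the command copied from the Ollama Library for Llama 3.1 looks like the following (the exact model tag depends on the version and size you selected on the library page):

```shell
# Download the model weights to your machine (several GB, one-time download)
ollama pull llama3.1

# Start an interactive chat with the model directly in the terminal
ollama run llama3.1

# List every model installed locally, along with its size on disk
ollama list
```

Models pulled this way also become selectable in Open WebUI, since it talks to the same local Ollama instance.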
How to Run the AI Model Installed on Your Own Computer
After completing all the steps and installing the desired model, you can run the AI model on your computer by following these steps:
- Check Docker: Open Docker and ensure that the Open WebUI container is running.
- Access Open WebUI: Navigate to http://localhost:3000/. If everything is set up correctly, the Open WebUI page should appear.
- Register and Log In: Sign up and log in to Open WebUI. The interface is designed to manage AI models in a simple way, similar to ChatGPT.
- Select Your Model: Click on “Select Model” at the top left corner and choose the AI model you want to run.
By following these steps, you can run AI smoothly on your own computer.
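Besides the Open WebUI interface, Ollama also exposes a local REST API on port 11434, which is handy for scripting. Here is a minimal sketch that assumes the llama3.1 model from the earlier example is installed and the Ollama service is running:

```shell
# Send a one-off prompt to the local Ollama server and print the reply.
# "stream": false makes the server return one complete JSON response
# instead of streaming the answer token by token.
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "Explain Docker in one sentence.",
  "stream": false
}'
```

This is the same API that Open WebUI uses under the hood, so anything you can do in the web interface you can also automate from a script.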
Which AI Should I Install on My Computer?
The answer depends on your needs. You can narrow it down by answering the following two questions:
- What area do I need the AI to be trained in?
You can find a suitable model by browsing the categories on the Hugging Face and Ollama Library pages.
- What hardware do I need to run the model effectively?
Running models with many parameters on a personal computer can be challenging, which is why many companies release smaller variants for general users. Each model has different hardware requirements, which you can look up online for the specific model you wish to install.
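As a rough rule of thumb (an approximation of my own, not an official figure), a 4-bit quantized model needs on the order of half a gigabyte of RAM per billion parameters, plus some overhead for the runtime and context. A quick back-of-the-envelope check in the shell:

```shell
# Back-of-the-envelope RAM estimate for a 4-bit quantized model.
# Assumption: ~0.5 GB per billion parameters, plus ~1 GB of overhead.
params_b=8                      # e.g. an 8-billion-parameter model
ram_gb=$(( params_b / 2 + 1 ))
echo "An ${params_b}B model needs roughly ${ram_gb} GB of free RAM"
```

Always double-check the actual download size that the Ollama Library lists for your chosen model tag; quantization level and context length both change the real requirement.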