Introduction
DeepSeek, an AI company that took the world by storm, has released high-performing artificial intelligence (AI) models that transform data analysis and insight generation. You can deploy and run DeepSeek models on your cloud server using Ollama, an open-source tool that lets users run large language models (LLMs) directly on their own systems.
This guide shows you how to run DeepSeek AI models on Ubuntu 24.04.
Prerequisites
Before you get started:
- Deploy an Ubuntu 24.04 server with at least 16 GB of RAM, for example a DigitalOcean VPS.
- Create a non-root sudo user. Read our guide on How to Create a Non-Root Sudo User on Ubuntu 24.04.
Install Ollama
You'll run a command that securely downloads and executes a shell script from Ollama's website to automate the installation process. This process simplifies the installation by eliminating manual steps, ensuring you have Ollama correctly installed and ready to use.
- SSH to your server and update the package information index.
CONSOLE$ sudo apt update
- Download and install Ollama. The command may take a few minutes to complete.
CONSOLE$ curl -fsSL https://ollama.com/install.sh | sh
In the above command:
- curl: transfers data from the Ollama website to your server.
- -f: instructs curl to fail silently on server errors.
- -s: runs curl in silent mode, suppressing the progress meter and error messages.
- -S: makes curl show error messages even when -s is used.
- -L: instructs curl to follow any redirects.
- https://ollama.com/install.sh: the URL of the script to download.
- | sh: pipes (|) the downloaded script to the shell (sh) for execution.
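Piping a remote script straight into sh is convenient, but you can also download the script first and inspect it before running it. The sketch below demonstrates that pattern using a harmless local stand-in file rather than the real installer:

```shell
# Hypothetical stand-in for the downloaded installer; with the real script you
# would instead run: curl -fsSL https://ollama.com/install.sh -o install.sh
printf 'echo "install steps run here"\n' > install.sh

# Inspect the script before executing it
cat install.sh

# Execute it only after you are satisfied with its contents
sh install.sh
```

This keeps a copy of exactly what you executed, which is useful for auditing.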
- Check the Ollama version.
CONSOLE$ ollama --version
Output:
ollama version is 0.5.7
Download and Run the DeepSeek Model
Ollama ships with different commands for working with models. You'll pull the model, confirm the installation, and prompt the model to get a response. Follow the steps below.
- Pull the deepseek-llm:7b model.
CONSOLE$ ollama pull deepseek-llm:7b
Output:
verifying sha256 digest
writing manifest
success
- List the installed models to confirm the model is available.
CONSOLE$ ollama list
Output:
NAME               ID              SIZE      MODIFIED
deepseek-llm:7b    9aab369a853b    4.0 GB    2 minutes ago
- Run the deepseek-llm:7b model.
CONSOLE$ ollama run deepseek-llm:7b
Output:
CONSOLE>>> Send a message (/? for help)
- Write a prompt, for example: what is AI?
CONSOLE>>> what is AI?
Output:
AI stands for Artificial Intelligence, which refers to the development of computer systems or software that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
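Besides the interactive prompt, the Ollama server exposes an HTTP API on port 11434 that you can call with curl, which is useful for scripting. The sketch below builds the JSON request body for Ollama's /api/generate endpoint; the commented curl line shows how you would send it once the server is running:

```shell
# Request body for Ollama's /api/generate endpoint; "stream": false asks for
# a single JSON reply instead of a token stream.
body='{"model": "deepseek-llm:7b", "prompt": "what is AI?", "stream": false}'
echo "$body"

# With the Ollama server running locally, send the request like this:
# curl -s http://localhost:11434/api/generate -d "$body"
# The reply is JSON; the generated text is in its "response" field.
```

This lets you integrate the model into scripts and applications without the interactive shell.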
DeepSeek Model System Requirements
Understand the system requirements of each DeepSeek model for efficient deployment and performance. Each model variant has specific RAM, vCPU, and disk space requirements, and hardware that meets these specifications helps the models run at their best in your projects. The following table lists the system requirements for some of the DeepSeek models.
| Model Name | RAM | vCPU | Disk Space |
|------------------|------|------|------------|
| DeepSeek-LLM:1.5B| 8GB | 2 | 50GB |
| DeepSeek-LLM:7B | 16GB | 4 | 100GB |
| DeepSeek-LLM:33B | 32GB | 8 | 200GB |
| DeepSeek-LLM:67B | 64GB | 16 | 500GB |
| DeepSeek-LLM:671B| 128GB| 32 | 1TB |
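A rough rule of thumb behind these figures: the 4-bit quantized weights that Ollama pulls by default take roughly half a gigabyte per billion parameters, which is consistent with the 4.0 GB size shown for the 7B model in the ollama list output above. The 0.57 factor below is an assumed estimation value, not an official figure:

```shell
# Estimate the on-disk weight size of a 4-bit quantized model.
# 0.57 GB per billion parameters is an assumed rough factor, not an exact spec.
params_billion=7
awk -v p="$params_billion" 'BEGIN { printf "%.1f GB\n", p * 0.57 }'
```

Actual RAM needs run higher than the weight size because of context and runtime overhead, which is why the table recommends 16 GB for the 7B model.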
Conclusion
In this guide, you installed Ollama using its installation script, pulled a DeepSeek model, and ran the model on Ubuntu 24.04. You're now ready to use DeepSeek models in your projects. For advanced setups, consider deploying the model on a GPU server to significantly improve response time and performance, making your AI applications more robust and scalable.