
Deploy and run LLM on Raspberry Pi 5 vs Raspberry Pi 4B (LLaMA, LLaMA2, Phi-2, Mixtral-MOE, etc)

DFRobot · Jan 02, 2024

This article shows how to deploy and run several recently popular LLMs (large language models), including LLaMA, LLaMA2, Phi-2, Mixtral-MoE, and mamba-gpt, on the Raspberry Pi 5 8GB. Compared to the Raspberry Pi 4 Model B, the Raspberry Pi 5 has an upgraded processor, memory, and other components, which leads to real differences in performance. We compare running speed, resource usage, and model performance across these LLMs to help you choose the right device for your needs and to provide a reference for AI research on limited hardware. We also cover the key steps and caveats so that you can experience and test the performance of LLMs on the Raspberry Pi 5 yourself.

 

Specifications of Raspberry Pi 5 vs Raspberry Pi 4B

[Image: specification comparison table for Raspberry Pi 5 and Raspberry Pi 4B]
 

Benchmarks on Raspberry Pi 5 8GB and Raspberry Pi 4 8GB

[Image: benchmark results for Raspberry Pi 5 8GB and Raspberry Pi 4B 8GB]

(From Alasdair Allan)

 

How to Choose LLM

Most LLM projects state minimum CPU/GPU requirements up front. Since GPU inference for LLMs is not currently available on the Raspberry Pi 5, we need to prioritize models that can run on the CPU. Because of the Raspberry Pi 5's RAM limitation, we also need to favor models with a small memory footprint: as a rule of thumb, a model needs roughly twice its size in RAM to run normally. Quantized models have much lower memory requirements; for example, a 7B-parameter model stored at fp16 (2 bytes per weight) needs about 14GB, while the same model quantized to 4 bits needs roughly 3.5GB. We therefore recommend an 8GB Raspberry Pi 5 and a quantized small-scale model for experiencing and testing LLMs on the Raspberry Pi.
The following table lists a selection of smaller models from the open_llm_leaderboard on the Hugging Face website, along with some of the latest popular models.

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | License |
| --- | --- | --- | --- | --- | --- | --- |
| mixtral_7bx4_moe | 68.83 | 65.27 | 85.28 | 62.84 | 59.85 | MistralAI |
| Phi-2 | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | Non-commercial |
| mamba-gpt-7b-v1 | 58.61 | 61.26 | 84.1 | 63.46 | 46.34 | Apache License 2.0 |
| LLaMA2-7B-chat-hf | 56.4 | 52.9 | 78.6 | 48.3 | 45.6 | Meta |
| LLaMA-13B | 56.1 | 56.2 | 80.9 | 47.7 | 39.5 | Non-commercial |
| LLaMA-7B | 49.7 | 51 | 77.8 | 35.7 | 34.3 | Non-commercial |
| ChatGLM-6B | 48.2 | 38.8 | 59 | 46.7 | 48.1 | Non-commercial |
| Alpaca-7b | 31.9 | 28.1 | 25.8 | 25.3 | 48.5 | Non-commercial |

 

How to Run LLM

In our testing, the GPU cannot be used for LLM inference on the Raspberry Pi 5, so for now we use llama.cpp and the Raspberry Pi 5's CPU to run each LLM. Below, we use Phi-2 as an example to walk through deploying and running an LLM on a Raspberry Pi 5 with 8GB of RAM. Along the way we point out the key steps and caveats so that you can quickly experience and test LLM performance on the Raspberry Pi 5.

PS: If you want to try Mixtral-MoE, see the dedicated branch: https://github.com/ggerganov/llama.cpp/tree/mixtral

 

Environment Deployment

1. Deploy a Python virtual environment on the Raspberry Pi 5

sudo apt update && sudo apt install git

mkdir my_project

cd my_project

python -m venv env

source env/bin/activate

 

2. Install dependencies

python3 -m pip install torch numpy sentencepiece

sudo apt install g++ build-essential

 

3. Download the phi-2 branch of llama.cpp: https://github.com/ggerganov/llama.cpp/tree/gg/phi-2
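A minimal way to fetch that branch from the command line (assuming the gg/phi-2 branch is still available; the llama.cpp-phi directory name is chosen to match the build path used in the next step):

git clone --branch gg/phi-2 https://github.com/ggerganov/llama.cpp.git llama.cpp-phi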

 

4. Build

cd /home/dfrobot/Desktop/llama.cpp-phi

make
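The build can take a while; make's standard -j flag lets it use all four of the Raspberry Pi 5's CPU cores:

make -j4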

[Image: LLM Environment Deployment]
 

Quantization

Model quantization reduces hardware requirements by lowering the numerical precision of the weight parameters of each neuron in a deep neural network. GGUF is a commonly used format for quantized models that allows you to run LLMs on the CPU or on CPU + GPU. In general, the fewer bits used for quantization, the smaller and faster the model becomes, at the expense of accuracy. For example, Q4 in a GGUF model file name means the model's weights are quantized to 4-bit integers.

The Raspberry Pi 5's 8GB of RAM is not enough to perform the quantization itself, so we recommend quantizing the model on a Linux PC first and then copying the quantized file to the Raspberry Pi for deployment. On the Linux PC, after setting up the environment as in the previous steps, use convert-hf-to-gguf.py from llama.cpp to convert the original Microsoft phi-2 model to GGUF format. The original Microsoft phi-2 model can be downloaded from: https://huggingface.co/microsoft/phi-2

 

# convert hf model to GGUF

python convert-hf-to-gguf.py phi-2

# fp16 inference

./main -m phi-2/ggml-model-f16.gguf -p "Question: Write a python function to print the first n numbers in the fibonacci series"
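The conversion above produces an f16 file; to get the 4-bit file used on the Pi, llama.cpp provides a quantize tool. A minimal sketch (the output file name, username, hostname, and destination path below are examples, not from the original article):

# quantize the f16 GGUF down to 4-bit (Q4_0)
./quantize phi-2/ggml-model-f16.gguf phi-2/phi-2.Q4_0.gguf q4_0

# copy the quantized model to the Raspberry Pi (adjust user/host/path to your setup)
scp phi-2/phi-2.Q4_0.gguf dfrobot@raspberrypi.local:/home/dfrobot/Desktop/llama.cpp-phi/models/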

 

You can also search Hugging Face directly for already-quantized GGUF model files and use llama.cpp to try out a model quickly.

The original model is just under 6GB, while the Q4-quantized GGUF file is only 1.6GB. Q4 GGUF model URL: https://huggingface.co/TheBloke/phi-2-GGUF/tree/main

After downloading, place the model file in the directory: "llama.cpp-phi/models/".
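For example, the Q4_0 file can be downloaded directly with wget (assuming the file name on that repository matches the one used in the run command below):

wget -P models/ https://huggingface.co/TheBloke/phi-2-GGUF/resolve/main/phi-2.Q4_0.gguf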

[Image: LLM Quantization]
 

 

Model Deployment

Run the following command in a terminal on the Raspberry Pi 5:

./main -m models/phi-2.Q4_0.gguf -p "Question: Write a python function to print the first n numbers in the fibonacci series"
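llama.cpp's main binary also accepts a few flags that are useful on the Pi: -t sets the number of CPU threads (the Pi 5 has four cores) and -n caps the number of generated tokens. For example:

./main -m models/phi-2.Q4_0.gguf -t 4 -n 256 -p "Question: Write a python function to print the first n numbers in the fibonacci series"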

 

Summary

Test for Raspberry Pi 5 (8GB) & LLM

| Model | File Size | Compatibility | Out of Memory | Token Speed |
| --- | --- | --- | --- | --- |
| phi-2-Q4 | 1.7GB | | | 5.13 tokens/s |
| LLaMA-7B-Q4 | < 4GB | | | 2.2 tokens/s |
| LLaMA2-7B-Q4 | < 7GB | | | 2.3 tokens/s |
| LLaMA2-13B-Q4 | < 4GB | | | 2.02 tokens/s |
| mixtral_7bx2_moe_Q4 | < 8GB | use llama.cpp | | < 1 tokens/s |
| mamba-gpt-7b | < 13GB | | | |


Test for Raspberry Pi 4B (8GB) & LLM

| Model | File Size | Compatibility | Out of Memory | Token Speed |
| --- | --- | --- | --- | --- |
| LLaMA-7B-Q4 | < 4GB | | | ~0.1 tokens/s |
| Alpaca-7B-Q4 | < 4GB | | | |
| LLaMA2-7B-Q4 | < 7GB | | | ~0.83 tokens/s |
| LLaMA-13B-Q4 | < 8GB | | | |
| ChatGLM-6B-Q4 | 13GB | | | |

Analyzing the tables above, it is easy to see that LLMs run significantly faster on the Raspberry Pi 5 than on the Raspberry Pi 4B. [Deploy and run LLM on Raspberry Pi 4B (LLaMA, Alpaca, LLaMA2, ChatGLM)]

This indicates that the Raspberry Pi 5 has stronger processing capabilities. For such a resource-limited device, phi-2-Q4 performs particularly well, with an eval-time speed of 5.13 tokens/s, demonstrating excellent processing speed.

Beyond phi-2-Q4, LLaMA-7B-Q4, LLaMA2-7B-Q4, and LLaMA2-13B-Q4 also run satisfactorily on the Raspberry Pi 5. Note, however, that the Raspberry Pi 5 still cannot load LLMs larger than 8GB, which highlights its RAM capacity constraint.

For LLM applications that demand higher performance, the LattePanda Sigma is worth considering: running LLaMA2-7B-Q4, it reaches an impressive 6 tokens/s. [Deploy and run LLM on LattePanda Sigma (LLaMA, Alpaca, LLaMA2, ChatGLM)]

Overall, the Raspberry Pi 5 offers a significant speed improvement over the Raspberry Pi 4B, but its RAM capacity still limits it when dealing with large LLMs, while the LattePanda Sigma provides the higher performance needed by more demanding LLM applications. These constraints present both challenges and opportunities for future development.

 

More About AI Models

1. Deploy and run LLM on Raspberry Pi 4B (LLaMA, Alpaca, LLaMA2, ChatGLM)

2. Deploy and run LLM on LattePanda Sigma (LLaMA, Alpaca, LLaMA2, ChatGLM)
