Run the following to start generating text with Llama 3 8B:
```bash
cd ./examples/llama

# Download the model
bash ./setup/setup.sh

# Run the model
cargo run --release --features metal  # MacOS (Recommended)
cargo run --release --features cuda   # Nvidia
cargo run --release                   # CPU
```
Luminal currently isn’t well optimized for CPU usage, so running large models like Llama 3 on CPU isn’t recommended.
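The `--features` flags above pick the backend at compile time rather than at runtime. The sketch below only illustrates that Cargo feature-gating pattern; the backend labels are illustrative and it does not show how the example actually wires the chosen feature into Luminal's GPU compilers.

```rust
// Minimal sketch of compile-time backend selection via Cargo features.
// In the real example, the enabled feature determines which GPU compiler
// the graph is compiled with; here we only report which build was chosen.

fn backend_name() -> &'static str {
    // `cfg!` resolves to a constant boolean based on the enabled features.
    if cfg!(feature = "metal") {
        "Metal (Apple GPU)"
    } else if cfg!(feature = "cuda") {
        "CUDA (Nvidia GPU)"
    } else {
        "CPU (currently slow for large models)"
    }
}

fn main() {
    println!("Llama 3 8B would run on the {} backend", backend_name());
}
```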