Quickstart
Start running ML models in minutes.
Clone the repo
Clone the codebase locally by running the following:
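A minimal sketch of the clone step — the URL below assumes the project's public GitHub repository; substitute your fork if you have one:

```shell
# Clone the Luminal source and enter the repo root
git clone https://github.com/jafioti/luminal
cd luminal
```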
Hello World
A simple example shows how a library works without diving in too deep. Run your first Luminal code like so:
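The sketch below follows the style of Luminal's README example; exact type and method names (`R2`, `retrieve`, `GenericCompiler`) are assumptions that may differ between versions, so check the repo's current examples:

```rust
use luminal::prelude::*;

fn main() {
    // Build a computation graph; ops are recorded lazily
    let mut cx = Graph::new();

    // Two small tensors: a 3x1 and a 1x4
    let a = cx.tensor::<R2<3, 1>>().set([[1.0], [2.0], [3.0]]);
    let b = cx.tensor::<R2<1, 4>>().set([[1.0, 2.0, 3.0, 4.0]]);

    // Record a matmul and mark the output for retrieval; nothing runs yet
    let mut c = a.matmul(b).retrieve();

    // Compile the graph, then execute it
    cx.compile(GenericCompiler::default(), &mut c);
    cx.execute();

    // Read back the 3x4 result
    println!("Result: {:?}", c);
}
```

Because the graph is built lazily, Luminal can optimize the whole computation at compile time before `execute` runs anything.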
Great! You’ve run your first Luminal model!
Run Llama 3
Run the following to start generating text with Llama 3 8B:
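One plausible invocation, assuming the example lives under `examples/llama` with a weight-download script, as in the repo layout; the paths, script name, and `metal` feature flag are assumptions, so check the example's own README:

```shell
# From the repo root, enter the Llama example
cd examples/llama

# Download the model weights (the example ships a setup script)
bash setup/setup.sh

# Build and run with optimizations; enable a GPU backend feature if available
cargo run --release --features metal
```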
Luminal currently isn’t well optimized for CPU usage, so running large models like Llama 3 on CPU isn’t recommended.