Forget expensive NVIDIA GPUs, unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, Linux, pretty much any device!
## Get Involved
exo is experimental software. Expect bugs early on. Create issues so they can be fixed. The exo labs team will strive to resolve issues quickly.
We also welcome contributions from the community. We have a list of bounties in this sheet.
## Features

### Wide Model Support
exo supports LLaMA and other popular models.
### Dynamic Model Partitioning
exo optimally splits up models based on the current network topology and device resources available. This enables you to run larger models than you would be able to on any single device.
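To make the idea concrete, here is a minimal sketch of one plausible partitioning strategy, not exo's actual implementation: assign each device a contiguous range of layers proportional to its available memory. The device names and memory figures below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    memory_gb: float  # available memory reported by the device

def partition_layers(devices: list[Device], num_layers: int) -> dict[str, range]:
    """Assign each device a contiguous layer range proportional to its memory."""
    total_mem = sum(d.memory_gb for d in devices)
    assignments, start = {}, 0
    for i, d in enumerate(devices):
        # The last device takes the remainder so every layer is assigned.
        count = (num_layers - start if i == len(devices) - 1
                 else round(num_layers * d.memory_gb / total_mem))
        assignments[d.name] = range(start, start + count)
        start += count
    return assignments

# An 80-layer model split across three hypothetical devices:
print(partition_layers([Device("mac", 32), Device("iphone", 6), Device("linux-box", 16)], 80))
# {'mac': range(0, 47), 'iphone': range(47, 56), 'linux-box': range(56, 80)}
```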
### Automatic Device Discovery
exo will automatically discover other devices using the best method available. Zero manual configuration.
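As an illustration of what zero-configuration discovery can look like (exo's actual mechanism may differ; the port number and message format here are invented), a node can broadcast its presence over UDP and listen for peers doing the same:

```python
import json
import socket
import time

DISCOVERY_PORT = 52415  # hypothetical port, not necessarily exo's

def announce(node_id: str) -> None:
    """Broadcast this node's presence on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    msg = json.dumps({"node_id": node_id, "ts": time.time()}).encode()
    sock.sendto(msg, ("<broadcast>", DISCOVERY_PORT))
    sock.close()

def listen(timeout: float = 5.0) -> list[dict]:
    """Collect peer announcements for `timeout` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout)
    peers = []
    try:
        while True:
            data, addr = sock.recvfrom(1024)
            peers.append({**json.loads(data), "addr": addr[0]})
    except socket.timeout:
        pass
    finally:
        sock.close()
    return peers
```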
### ChatGPT-compatible API
exo provides a ChatGPT-compatible API for running models. It's a one-line change in your application to run models on your own hardware using exo.
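For example, with the official openai Python client, repointing `base_url` at a local exo node is the only change (model name taken from the curl example below; the API key is a placeholder, since a local exo node presumably does not check it):

```python
from openai import OpenAI

# Point the client at the local exo node instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama-3-70b",
    messages=[{"role": "user", "content": "What is the meaning of exo?"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```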
## Installation
The current recommended way to install exo is from source.
### From source

```sh
git clone https://github.com/exo-explore/exo.git
cd exo
pip install -r requirements.txt
```
## Documentation

### Example Usage on Multiple Devices

Device 1:

```sh
python3 main.py
```

Device 2:

```sh
python3 main.py
```

That's it! No configuration required - exo will automatically discover the other device(s).
The native way to access models running on exo is using the exo library with peer handles. See how in this example for Llama 3.
exo also starts a ChatGPT-compatible API endpoint on http://localhost:8000. Note: this is currently only supported by tail nodes (i.e. nodes selected to be at the end of the ring topology). Example request:
```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3-70b",
    "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
    "temperature": 0.7
  }'
```
A ChatGPT-like web interface will be available on each device on port 8000 (http://localhost:8000), and a ChatGPT-compatible API on port 8001 (currently not working; see https://github.com/exo-explore/exo/issues/6):

```sh
curl -X POST http://localhost:8001/api/v1/chat -H "Content-Type: application/json" -d '{"messages": [{"role": "user", "content": "What is the meaning of life?"}]}'
```
## Inference Engines

exo supports the following inference engines:

- MLX
- tinygrad
## Networking Modules

- GRPC
## Known Issues

- 🚧 Because the library is evolving so quickly, the iOS implementation has fallen behind the Python implementation. This is being addressed, and longer term we will unify the implementations so that separate ones no longer need to be maintained.
