+++
title = "Linux Installation"
description = "Install LocalAI on Linux using binaries"
weight = 3
url = "/installation/linux/"
+++
## Manual Installation

### Download Binary

You can manually download the appropriate binary for your system from the releases page:

- Go to [GitHub Releases](https://github.com/mudler/LocalAI/releases)
- Download the binary for your architecture (amd64, arm64, etc.)
- Make it executable:
  ```bash
  chmod +x local-ai-*
  ```
- Run LocalAI:
  ```bash
  ./local-ai-*
  ```
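Taken together, the steps above can be sketched as a short script. The `local-ai-Linux-<arch>` asset naming used below is an assumption for illustration; check the releases page for the exact filename for your architecture before downloading.

```shell
# Pick a plausible release-asset name from the machine architecture.
# NOTE: the "local-ai-Linux-<arch>" naming is an assumption here;
# confirm the real filename on the GitHub Releases page.
arch="$(uname -m)"
case "$arch" in
  x86_64)  asset="local-ai-Linux-x86_64" ;;
  aarch64) asset="local-ai-Linux-arm64" ;;
  *)       asset="local-ai-Linux-$arch" ;;
esac

echo "Would download: https://github.com/mudler/LocalAI/releases/latest/download/$asset"
# curl -LO "https://github.com/mudler/LocalAI/releases/latest/download/$asset"
# chmod +x "$asset"
# ./"$asset"
```

The download itself is left commented out so you can confirm the asset name first.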
## System Requirements
Hardware requirements vary based on:
- Model size
- Quantization method
- Backend used
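As a rough illustration of how model size and quantization interact, the back-of-the-envelope estimate below multiplies parameter count by bytes per weight. The figures (a 7B model at ~4-bit quantization) are assumptions for illustration, and real memory usage also includes the KV cache and runtime overhead.

```shell
# Back-of-the-envelope RAM estimate for model weights only.
# Assumed figures: a 7B-parameter model at ~4-bit (Q4) quantization.
params=7000000000      # 7 billion parameters (assumption for illustration)
bytes_per_weight=0.5   # ~4 bits per weight under Q4-style quantization
estimate="$(awk -v p="$params" -v b="$bytes_per_weight" \
  'BEGIN { printf "%.1f", p * b / (1024^3) }')"
echo "~${estimate} GiB for weights alone"
```

A lower-bit quantization shrinks this linearly, which is why the same model can fit on very different hardware depending on the quantization method.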
For performance benchmarks with different backends like llama.cpp, visit this link.
## Configuration
After installation, you can:
- Access the WebUI at `http://localhost:8080`
- Configure models in the models directory
- Customize settings via environment variables or config files
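As one way to take the environment-variable route, the sketch below exports a models directory and bind address before launching. The `LOCALAI_MODELS_PATH` and `LOCALAI_ADDRESS` names are assumptions here, not verified against a specific release; check your binary's `--help` output for the exact variable names it honors.

```shell
# Configure via environment variables before starting the binary.
# Variable names are assumed, not verified; check `./local-ai-* --help`.
export LOCALAI_MODELS_PATH="$PWD/models"
export LOCALAI_ADDRESS=":8080"
mkdir -p "$LOCALAI_MODELS_PATH"
echo "models dir: $LOCALAI_MODELS_PATH"
# ./local-ai-*   # start LocalAI with the settings above
```

Using exported variables rather than flags keeps the launch command identical across setups; the same values can also live in a config file if you prefer.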