---
title: "Linux Installation"
description: "Install LocalAI on Linux using binaries"
weight: 3
url: '/installation/linux/'
---

## Manual Installation
### Download Binary
You can manually download the appropriate binary for your system from the [releases page](https://github.com/mudler/LocalAI/releases):
1. Go to [GitHub Releases](https://github.com/mudler/LocalAI/releases)
2. Download the binary for your architecture (amd64, arm64, etc.)
3. Make it executable:

```bash
chmod +x local-ai-*
```

4. Run LocalAI:

```bash
./local-ai-*
```

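The download-and-run steps above can be sketched as a single shell session. The asset name `local-ai-Linux-x86_64` and the `releases/latest/download` URL pattern are assumptions; check the releases page for the exact file matching your architecture:

```shell
# Compose the download URL. ASSET is an assumed example name; pick the
# real asset for your architecture from the GitHub releases page.
ASSET="local-ai-Linux-x86_64"
URL="https://github.com/mudler/LocalAI/releases/latest/download/${ASSET}"
echo "$URL"

# Fetch the binary, mark it executable, and start the server:
# curl -LO "$URL"
# chmod +x "$ASSET"
# "./${ASSET}"
```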
### System Requirements
Hardware requirements vary based on:

- Model size
- Quantization method
- Backend used

For performance and memory benchmarks of backends such as `llama.cpp`, see the [llama.cpp memory/disk requirements](https://github.com/ggerganov/llama.cpp#memorydisk-requirements).
## Configuration

After installation, you can:

- Access the WebUI at `http://localhost:8080`
- Configure models in the models directory
- Customize settings via environment variables or config files

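Environment-variable configuration can be sketched as below. The variable names are assumptions drawn from common LocalAI settings; verify them against `./local-ai --help` for your version:

```shell
# Assumed variable names; confirm with `./local-ai --help` before relying on them.
export LOCALAI_ADDRESS=":8080"             # interface/port the server binds to
export MODELS_PATH="$HOME/localai/models"  # directory scanned for model files

# Then start the binary, which reads these at boot:
# ./local-ai
```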
## Next Steps

- [Try it out with examples](/basics/try/)
- [Learn about available models](/models/)
- [Configure GPU acceleration](/features/gpu-acceleration/)
- [Customize your configuration](/advanced/model-configuration/)