Complete setup guide for Windows, macOS, and Linux, including troubleshooting the most common errors.
Last updated: February 2026
| Component | Minimum | Recommended |
|---|---|---|
| GPU | NVIDIA RTX 2080 (8GB VRAM) | NVIDIA RTX 3090/4090 (24GB VRAM) |
| CPU | Intel i7-8700 / AMD Ryzen 7 3700X | Intel i9-12900K / AMD Ryzen 9 5900X |
| RAM | 16GB DDR4 | 32GB+ DDR4 |
| Storage | 20GB SSD | 50GB+ NVMe SSD |
| CUDA Version | 11.8 | 12.1 |
| Python | 3.9 | 3.10 or 3.11 |
| OS | Windows 10, macOS 12, Ubuntu 20.04 | Ubuntu 22.04 LTS (best performance) |
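Before installing anything, you can sanity-check that your interpreter meets the minimum from the table. A minimal sketch using only the standard library (the `python_ok` helper is illustrative, not part of ACE-Step):

```python
import sys

# Minimum supported Python per the table above; 3.10 or 3.11 recommended.
MINIMUM = (3, 9)

def python_ok(version=None):
    """Return True if the interpreter meets the minimum Python version."""
    version = tuple((version or sys.version_info)[:2])
    return version >= MINIMUM

if __name__ == "__main__":
    print("Python", sys.version.split()[0], "meets minimum:", python_ok())
```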
Download and install Miniconda from conda.io. This manages your Python environment and prevents dependency conflicts.
Clone the repository:

```bash
git clone https://github.com/ace-step/ACE-Step.git
cd ACE-Step
```

Create and activate the conda environment:

```bash
conda env create -f environment.yml
conda activate ace-step
```

For CUDA 11.8, install the matching PyTorch build:

```bash
pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2+cu118 \
  -f https://download.pytorch.org/whl/cu118/torch_stable.html
```

Run the download script to fetch the ~15GB model checkpoint from HuggingFace:

```bash
python download_models.py
```

Start the Gradio web interface, which serves at http://localhost:7860:

```bash
python app.py
```

For faster generation, use FP16 precision (the --fp16 flag) on supported GPUs. Enable xFormers attention by installing xformers for a 30-40% speed improvement. For GPUs with less than 12GB of VRAM, always use 8-bit quantization.
```bash
pip install xformers
python app.py --fp16
```

If you encounter CUDA out-of-memory (OOM) errors, enable 8-bit quantization by adding --quantize int8 to your launch command, or enable CPU offload with --cpu-offload to move some layers to system RAM.
```bash
python app.py --quantize int8
# or
python app.py --cpu-offload --fp16
```

ACE-Step requires specific versions of PyTorch and CUDA, so always use the provided conda environment file or requirements.txt. Mixing pip and conda installs frequently causes conflicts.
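If you script your launches, the VRAM guidance above can be folded into a small helper. A sketch that assumes the thresholds quoted in this guide (8-bit quantization below 12GB) are the right cutoffs; `launch_flags` is hypothetical, not an ACE-Step API:

```python
def launch_flags(vram_gb):
    """Suggest app.py launch flags for a given amount of GPU VRAM.

    Thresholds follow the guidance above: 8-bit quantization below 12GB,
    CPU offload for very small cards, FP16 otherwise.
    """
    flags = []
    if vram_gb < 12:
        flags.append("--quantize int8")  # 8-bit weights for <12GB cards
    if vram_gb < 8:
        flags.append("--cpu-offload")    # spill layers to system RAM
    else:
        flags.append("--fp16")           # half precision on supported GPUs
    return " ".join(["python app.py"] + flags)
```

For example, `launch_flags(8)` returns `"python app.py --quantize int8 --fp16"`, while a 24GB card gets plain `--fp16`.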
If port 7860 is busy, add --port 7861 (or any available port) to the Gradio launch command.
```bash
python app.py --port 7861
```

Ensure you have enough disk space (the model weights are ~15GB), and run the download script from the project root directory, not a subdirectory.
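The disk-space check can be automated before running the download script. A minimal sketch using only the standard library (the 20GB figure leaves headroom over the ~15GB weights; `enough_disk_space` is illustrative, not part of ACE-Step):

```python
import shutil

def enough_disk_space(path=".", required_gb=20):
    """Return True if the filesystem holding `path` has `required_gb` free."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= required_gb

if __name__ == "__main__":
    if not enough_disk_space():
        raise SystemExit("Need at least 20GB free before downloading weights.")
```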
Not a fan of terminal commands and dependency hell? FM9 gives you cloud-powered, ACE-Step-compatible music generation in your browser. No GPU, no setup, no waiting.
Start Creating Free