Installation¶
Autoware-ML runs well in a Docker container with GPU support. We encourage you to use Docker for the smoothest experience during Early Alpha.
Prerequisites¶
- Autoware-ML
- NVIDIA GPU with CUDA support (Compute Capability 8.0+)
- NVIDIA Driver version 570 or higher
- Docker Engine version 20.10 or higher
- NVIDIA Container Toolkit for GPU support in Docker
- NVIDIA CUDA Toolkit for local development and building CUDA-backed native dependencies and ops
- Pixi for managing locked development environments and dependencies
- Bash Completion for autoware-ml CLI command completion on the host
Host Setup¶
We provide separate Ansible playbooks for Docker-based and local development:
# Remove apt-installed Ansible (the version packaged with Ubuntu 22.04 is outdated)
sudo apt purge ansible
# Install pip
sudo apt -y update
sudo apt -y install python3-pip
# Install Ansible (if not already installed)
sudo python3 -m pip install ansible==10.7.0
# Install required Ansible collections
cd ~/autoware-ml
ansible-galaxy collection install -f -r ansible-galaxy-requirements.yaml
# Pick one of the two playbooks below depending on your workflow:
# Docker-based development host
ansible-playbook ansible/playbooks/setup_docker_host.yaml -K
# Local pixi development host
ansible-playbook ansible/playbooks/setup_local_host.yaml -K
If you prefer to install components individually, see the tabs below. Follow only the tabs that match your workflow.
Scope: Docker and Local
Check if you have a compatible NVIDIA driver installed:
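For example (a minimal check, assuming the NVIDIA driver utilities are already present on the host):
# Print the driver version and detected GPUs
nvidia-smi
# Optionally query just the fields relevant to the prerequisites above
nvidia-smi --query-gpu=driver_version,name,compute_cap --format=csv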
If the driver is not installed or is outdated:
# Add NVIDIA driver repository
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get update
# Install prerequisites
sudo apt-get install -y software-properties-common build-essential dkms
# Install NVIDIA driver (version 580 recommended)
sudo apt-get install -y nvidia-driver-580
# Reboot required
sudo reboot
After rebooting, verify with nvidia-smi.
Scope: Docker
Remove any old Docker installations:
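A typical cleanup, following Docker's official uninstall guidance (skip any package that is not installed):
# Remove distro-packaged Docker variants that can conflict with docker-ce
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do
  sudo apt-get remove -y "$pkg"
done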
Install Docker from the official repository:
# Install dependencies
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
# Create keyrings directory
sudo mkdir -p /etc/apt/keyrings
# Add Docker's GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod 644 /etc/apt/keyrings/docker.asc
# Add the repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Verify installation
sudo docker run hello-world
Post-installation steps to run Docker without sudo:
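A common approach is adding your user to the docker group (note that this grants that user root-equivalent access to the Docker daemon):
# Create the group if it does not exist, then add the current user
sudo groupadd docker 2>/dev/null || true
sudo usermod -aG docker "$USER"
# Apply the new group membership (or log out and back in)
newgrp docker
# Verify Docker works without sudo
docker run hello-world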
Scope: Docker
This enables Docker to access your GPU:
# Add NVIDIA GPG key
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
# Add repository
echo "deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] \
https://nvidia.github.io/libnvidia-container/stable/deb/$(dpkg --print-architecture) /" | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
# Install
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
# Configure Docker runtime
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
Verify GPU access in Docker:
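For example (a plain Ubuntu image is enough, since the toolkit mounts the driver utilities into the container; the image below is only an example):
docker run --rm --gpus all ubuntu nvidia-smi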
You should see your GPU information printed.
Scope: Local
This gives you nvcc and CUDA libraries for local development and building CUDA-backed packages from source:
UBUNTU_MAJOR_VERSION="$(. /etc/os-release && echo "${VERSION_ID%%.*}")"
if [ "$(uname -m)" != "x86_64" ]; then
echo "Unsupported architecture: $(uname -m)" >&2
exit 1
fi
sudo apt-get update
sudo apt-get install -y wget
wget "https://developer.download.nvidia.com/compute/cuda/repos/ubuntu${UBUNTU_MAJOR_VERSION}04/x86_64/cuda-keyring_1.1-1_all.deb"
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
# NVIDIA's guide uses `cuda-toolkit`; Autoware-ML pins the 12.8 series
# to match the repository's current CUDA stack.
sudo apt-get install -y cuda-toolkit-12-8
cat <<'EOF' | sudo tee /etc/profile.d/cuda-toolkit.sh >/dev/null
export CUDA_HOME=/usr/local/cuda
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
EOF
sudo reboot
# After reboot, open a new shell
nvcc --version
Scope: Local
Install Bash completion support on the host:
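On Ubuntu this is the bash-completion package:
sudo apt-get update
sudo apt-get install -y bash-completion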
Scope: Local
Install pixi for local development:
if [ "$(uname -m)" != "x86_64" ]; then
echo "Unsupported architecture: $(uname -m)" >&2
exit 1
fi
PIXI_VERSION="0.66.0"
PIXI_ARCHIVE="pixi-x86_64-unknown-linux-musl.tar.gz"
PIXI_BASE_URL="https://github.com/prefix-dev/pixi/releases/download/v${PIXI_VERSION}"
mkdir -p "$HOME/.pixi/bin"
curl -fsSLo "/tmp/${PIXI_ARCHIVE}" "${PIXI_BASE_URL}/${PIXI_ARCHIVE}"
curl -fsSLo "/tmp/${PIXI_ARCHIVE}.sha256" "${PIXI_BASE_URL}/${PIXI_ARCHIVE}.sha256"
(cd /tmp && sha256sum -c "${PIXI_ARCHIVE}.sha256")
tar -xzf "/tmp/${PIXI_ARCHIVE}" -C "$HOME/.pixi/bin" pixi
chmod +x "$HOME/.pixi/bin/pixi"
rm -f "/tmp/${PIXI_ARCHIVE}" "/tmp/${PIXI_ARCHIVE}.sha256"
export PATH="$HOME/.pixi/bin:$PATH"
Or, if you want to use the latest pixi version, run:
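One option is the upstream installer script (review the script before piping it to a shell, and see the pixi documentation for the current command):
curl -fsSL https://pixi.sh/install.sh | bash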
If you prefer, you can start a new shell instead of exporting PATH
manually in the current session.
Project Setup¶
Pull the latest Docker image from our registry:
If you need to modify the Docker image or can't pull from our registry:
Then run with:
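As a sketch, assuming the ./docker/container.sh helper referenced under Dataset Setup exposes a --run entry point (check the script's help output for the actual flags):
./docker/container.sh --run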
The container image builds the full locked contributor pixi
environment (dev) on top of an Ubuntu 24.04 CUDA/cuDNN development
base. PyTorch and the rest of the ML stack come from the lockfile rather
than from a preloaded PyTorch image.
Not Recommended for Alpha
We recommend Docker for the smoothest experience during Early Alpha.
Local installation uses the same locked pixi environments as Docker.
Before running pixi, make sure the machine-level GPU prerequisites are
already installed:
- NVIDIA driver compatible with CUDA 12.8
- CUDA toolkit with nvcc available on PATH
The local dev environment can still build CUDA-backed native
dependencies and Autoware-ML ops, so the CUDA toolkit is a required local
prerequisite even though Docker keeps that system layer inside the image.
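A quick sanity check before running pixi (both commands should succeed on a correctly prepared host):
# Driver visible and compatible with CUDA 12.8
nvidia-smi
# CUDA toolkit with nvcc on PATH
nvcc --version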
Then choose one of the two environments below, matching your workflow:
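As a sketch for the contributor workflow, assuming the locked dev environment and the setup-project task mentioned on this page (names are taken from the surrounding text, not verified against the repository):
# Resolve and install the locked dev environment
pixi install -e dev
# Run the project setup task inside it
pixi run -e dev setup-project
# Open a shell in the environment
pixi shell -e dev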
The separate docs environment is reserved for documentation-only
workflows and CI — you do not need to install it manually.
The setup-project task installs Bash completion automatically. Open a
new shell after it finishes so the completion file is loaded.
Dataset Setup¶
We assume all datasets are stored in the same directory. You can organize paths as you prefer, but you will need to update our configuration files to match your dataset paths. The recommended structure is:
You can set the internal environment variable AUTOWARE_ML_DATA_PATH using the provided script:
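If you prefer to set it manually instead of using the script, export the variable yourself (the path below is only an example):
export AUTOWARE_ML_DATA_PATH="$HOME/autoware_ml_data"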
The following files will use this variable to locate your datasets:
- ./docker/container.sh --run
- .devcontainer/devcontainer.json
- Model config files
Next Steps¶
Navigate to Quick Start to start training your first model.
Dev Containers
For the best development experience, see Dev Containers first.