If you develop AI or machine learning projects inside VS Code Dev Containers, you often need GPU acceleration for training models. By default, Dev Containers do not expose your host GPU to the container environment. This means GitHub Copilot works fine for code suggestions, but any GPU-dependent code inside the container fails. This article explains how to configure a Dev Container that has GPU access while keeping GitHub Copilot fully functional. You will learn the exact Docker settings, the devcontainer.json modifications, and the host prerequisites required.
Key Takeaways: GPU-Enabled Dev Container with Copilot
- NVIDIA Container Toolkit installed on the host: required to expose GPU hardware to Docker containers.
- `runArgs` with `--gpus all` in `devcontainer.json`: grants the container access to all host GPUs.
- GitHub Copilot extension in the `extensions` array: ensures Copilot is installed inside the Dev Container.
Why GPU Access Requires Special Dev Container Configuration
Dev Containers run inside Docker containers by default. Docker containers do not have direct access to host hardware unless explicitly configured. GPU access requires the NVIDIA Container Toolkit on the host system. This toolkit enables Docker to interact with NVIDIA GPU drivers from within a container.
Without this toolkit, commands like `nvidia-smi` fail inside the container. Frameworks such as TensorFlow or PyTorch fall back to CPU execution, which slows model training significantly. GitHub Copilot itself does not require a GPU. However, if your project uses GPU-dependent libraries, the container must have GPU support configured before that code can actually run, even though Copilot can still suggest it.
The configuration involves three layers: the host operating system, Docker, and the Dev Container definition file. Each layer must allow GPU passthrough. The following sections cover the exact steps for each layer.
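Each of these layers can be sanity-checked from a host shell before touching any Dev Container configuration. The sketch below is a read-only diagnostic; `probe_gpu_stack` is a hypothetical helper name, not part of any tool:

```shell
# Report the status of each GPU-passthrough layer without changing anything.
probe_gpu_stack() {
  # Layer 1: host NVIDIA driver (nvidia-smi ships with the driver)
  if command -v nvidia-smi >/dev/null 2>&1; then
    echo "host driver: nvidia-smi present"
  else
    echo "host driver: nvidia-smi missing"
  fi

  # Layer 2: Docker itself
  if command -v docker >/dev/null 2>&1; then
    echo "docker: present"
  else
    echo "docker: missing"
  fi

  # Layer 3: NVIDIA Container Toolkit (registers an 'nvidia' runtime with Docker)
  if docker info 2>/dev/null | grep -qi nvidia; then
    echo "container toolkit: nvidia runtime registered"
  else
    echo "container toolkit: not detected"
  fi
}

probe_gpu_stack
```

If any line reports missing or not detected, fix that layer first; the Dev Container steps below assume all three pass.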
Steps to Set Up GitHub Copilot in a Dev Container with GPU Access
Prerequisites
- Install NVIDIA drivers on the host
  Download and install the appropriate NVIDIA driver for your GPU from the NVIDIA website. Verify the driver works by running `nvidia-smi` in a terminal on the host. The output must show your GPU model and driver version.
- Install Docker
  Install Docker Desktop or Docker Engine on your host. For Windows, use the WSL 2 backend. For Linux, follow the official Docker installation guide for your distribution.
- Install NVIDIA Container Toolkit
  Follow the NVIDIA Container Toolkit installation guide for your OS. On Ubuntu, run `sudo apt-get install -y nvidia-container-toolkit`. After installation, register the runtime with `sudo nvidia-ctk runtime configure --runtime=docker`, then restart Docker with `sudo systemctl restart docker`.
- Install VS Code and the Dev Containers extension
  Install Visual Studio Code and the Dev Containers extension by Microsoft. Also install the GitHub Copilot extension in VS Code.
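On an Ubuntu host, the full toolkit setup can be run as the command sequence below. These commands follow the NVIDIA Container Toolkit install guide at the time of writing; check that guide for the current repository URLs before running them, and run them on the GPU host, not inside a container:

```shell
# Add NVIDIA's apt repository and signing key (per the NVIDIA install guide)
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey \
  | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \
  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \
  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install the toolkit, register the runtime with Docker, and restart Docker
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```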
Create the Dev Container Configuration
- Open your project in VS Code
  Open the folder containing your project. If you do not have a project, create a new folder and open it in VS Code.
- Add a .devcontainer folder
  In the root of your project, create a folder named `.devcontainer`. Inside it, create a file named `devcontainer.json`.
- Set the base image
  In `devcontainer.json`, set the `image` property. A general-purpose image such as `mcr.microsoft.com/devcontainers/base:ubuntu-22.04` works for CPU-only projects; for GPU support, use an image with CUDA preinstalled, such as `nvidia/cuda:12.2.0-devel-ubuntu22.04`.
- Add runArgs for GPU access
  Add a `runArgs` array with the value `["--gpus", "all"]`. This passes all host GPUs to the container. Example: `"runArgs": ["--gpus", "all"]`.
- Install the GitHub Copilot extension
  Add the `extensions` array and include the Copilot extension identifier: `"extensions": ["GitHub.copilot"]`. Also add `"GitHub.copilot-chat"` if you want Copilot Chat. (Newer versions of the Dev Container spec nest this list under `customizations.vscode.extensions`; VS Code accepts both.)
- Set the container user
  Add `"remoteUser": "vscode"` to avoid permission issues with Copilot authentication. This ensures the Copilot token is stored correctly. Note that plain `nvidia/cuda` images do not ship with a `vscode` user, so create that user in a Dockerfile if you use one of those images.
- Full devcontainer.json example
  Paste this complete configuration into your `devcontainer.json` file:

  ```json
  {
    "name": "GPU Dev Container",
    "image": "nvidia/cuda:12.2.0-devel-ubuntu22.04",
    "runArgs": ["--gpus", "all"],
    "extensions": ["GitHub.copilot", "GitHub.copilot-chat"],
    "remoteUser": "vscode",
    "postCreateCommand": "pip install --upgrade pip"
  }
  ```

- Rebuild the container
  Open the Command Palette with Ctrl+Shift+P, type `Dev Containers: Rebuild and Reopen in Container`, and select it. VS Code builds the container with GPU access and installs Copilot.
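Because stock `nvidia/cuda` images lack a `vscode` user, one way to satisfy `remoteUser` is to build from a small Dockerfile instead of referencing the image directly. This is a sketch, assuming the Dockerfile sits inside the `.devcontainer` folder:

```dockerfile
FROM nvidia/cuda:12.2.0-devel-ubuntu22.04

# Create a non-root user matching "remoteUser" in devcontainer.json
ARG USERNAME=vscode
RUN useradd --create-home --shell /bin/bash ${USERNAME}
USER ${USERNAME}
```

In `devcontainer.json`, replace the `image` property with `"build": { "dockerfile": "Dockerfile" }` so the Dev Container builds this image on first open.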
Verify GPU Access Inside the Container
- Open a terminal in VS Code
  After the container starts, open the integrated terminal with Ctrl+` (Control plus backtick).
- Run nvidia-smi
  Type `nvidia-smi` and press Enter. The output should list your GPU and driver details. If you see an error, GPU access is not configured correctly.
- Test Copilot
  Open a Python file and start typing a comment such as `# load a PyTorch model on GPU`. Copilot should suggest code that uses `.cuda()` or `.to('cuda')`.
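The same check can be scripted so it degrades gracefully on machines without a GPU. The sketch below shells out to `nvidia-smi` with its standard `--query-gpu` options; `gpu_name` is a hypothetical helper name:

```python
import shutil
import subprocess


def gpu_name():
    """Return the first visible GPU's name via nvidia-smi, or None if no GPU is visible."""
    if shutil.which("nvidia-smi") is None:
        return None  # driver utilities are not mounted into this environment
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True, timeout=10,
        )
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return None  # nvidia-smi exists but reported no usable GPU
    names = [line.strip() for line in out.stdout.splitlines() if line.strip()]
    return names[0] if names else None


print(gpu_name() or "no GPU visible")
```

Running this inside a correctly configured container prints your GPU model; in a standard container it prints `no GPU visible` instead of raising an exception.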
Common Issues and Their Fixes
nvidia-smi returns “command not found” inside the container
This error means the NVIDIA Container Toolkit is not installed on the host, or Docker was not restarted after installation. Verify the toolkit is installed by running `dpkg -l | grep nvidia-container-toolkit` on the host. If the package is missing, install it. If it is installed, re-run `sudo nvidia-ctk runtime configure --runtime=docker`, restart Docker with `sudo systemctl restart docker`, and rebuild the container.
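To rule out VS Code entirely, GPU passthrough can be tested with a bare `docker run`. The helper below is a hypothetical sketch; it assumes the CUDA base image has already been pulled (`--pull=never` prevents a surprise multi-gigabyte download):

```shell
# One-shot check of GPU passthrough outside VS Code.
check_gpu_passthrough() {
  if ! command -v docker >/dev/null 2>&1; then
    echo "docker not found on this machine"
  elif docker run --rm --pull=never --gpus all \
      nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi >/dev/null 2>&1; then
    echo "GPU passthrough works"
  else
    echo "GPU passthrough failed: check the toolkit install and restart Docker"
  fi
}

check_gpu_passthrough
```

If this prints a failure but `nvidia-smi` works on the host, the problem is in the Docker or toolkit layer, not the Dev Container definition.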
Copilot does not activate inside the container
Copilot requires authentication with a GitHub account that has an active Copilot subscription. Open the Command Palette and run `GitHub Copilot: Sign In`, then follow the authentication flow. If the extension is missing, check that the `extensions` array in `devcontainer.json` includes `"GitHub.copilot"`.
Container fails to start with “--gpus all” on Windows
Docker Desktop on Windows requires WSL 2 backend for GPU passthrough. Ensure WSL 2 is installed and set as the default backend in Docker Desktop settings. Also install the NVIDIA driver for WSL from the NVIDIA website. Rebuild the container after these changes.
TensorFlow or PyTorch does not detect GPU
The base image may lack CUDA libraries. Use an official NVIDIA CUDA image such as `nvidia/cuda:12.2.0-devel-ubuntu22.04`. Verify CUDA is installed inside the container by running `nvcc --version`. If it is missing, install CUDA in the Dockerfile or use a `postCreateCommand`.
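Even with CUDA in the image, the Python environment may hold a CPU-only wheel of the framework. One hedged fix is to install a CUDA build at container creation time; the sketch below assumes PyTorch's CUDA 12.1 wheel index, so check pytorch.org for the index matching your CUDA version:

```json
{
  "postCreateCommand": "pip install torch --index-url https://download.pytorch.org/whl/cu121"
}
```

After the rebuild, `python -c "import torch; print(torch.cuda.is_available())"` inside the container should print `True`.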
Dev Container with GPU vs Standard Dev Container: Key Differences
| Item | GPU-Enabled Dev Container | Standard Dev Container |
|---|---|---|
| Host requirement | NVIDIA driver + NVIDIA Container Toolkit | Docker only |
| devcontainer.json runArgs | Includes ["--gpus", "all"] | No GPU arguments |
| Base image | CUDA-enabled image (e.g., nvidia/cuda) | Any base image |
| nvidia-smi inside container | Works and shows GPU details | Command not found |
| GPU frameworks (TensorFlow, PyTorch) | GPU acceleration available | CPU-only execution |
| Copilot functionality | Fully functional | Fully functional |
The only difference between the two configurations is the GPU-related setup. Copilot works identically in both. Choose the GPU-enabled container only when your project requires GPU acceleration for model training or inference.
You can now set up a Dev Container that combines GPU access with GitHub Copilot support. Start by installing the NVIDIA Container Toolkit on your host. Then create a `devcontainer.json` with `--gpus all` and the Copilot extension. After rebuilding the container, verify GPU access with `nvidia-smi`. For advanced scenarios, consider adding a Dockerfile that installs additional CUDA libraries or Python packages automatically.