Set up and customize a deep learning environment in seconds.

Deepo is an open framework for painlessly assembling specialized Docker images for deep learning research. It provides a “Lego set” of dozens of standard components for preparing deep learning tools, along with a framework for composing them into custom Docker images.
At the core of Deepo is a Dockerfile generator that can assemble a customized Dockerfile from the components you select. We also provide a series of pre-built Docker images that can be used directly. To obtain the all-in-one GPU image from Docker Hub:
docker pull ufoym/deepo
Verify that GPU access works inside a container:
docker run --gpus all --rm ufoym/deepo nvidia-smi
If this does not work, check the issues section of the NVIDIA Container Toolkit GitHub — many solutions are already documented. To launch an interactive shell in a persistent container:
docker run --gpus all -it ufoym/deepo bash
To share data and configuration between the host (your machine or VM) and the container, use the -v option:
docker run --gpus all -it -v /host/data:/data -v /host/config:/config ufoym/deepo bash
This makes /host/data on the host visible as /data inside the container, and /host/config as /config. This isolation helps prevent containerized experiments from accidentally overwriting or reading the wrong data.
Note that some frameworks (e.g., PyTorch) use shared memory for inter-process communication. If you use multiprocessing, the container’s default shared memory size may be insufficient. Increase it with --ipc=host or --shm-size:
docker run --gpus all -it --ipc=host ufoym/deepo bash
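The shared-memory IPC these frameworks rely on can be sketched with Python's standard library (an illustration, not Deepo or PyTorch code): a child process writes into a named shared-memory segment and the parent reads it back. On Linux, such segments are backed by /dev/shm, which is exactly the space that --shm-size enlarges.

```python
from multiprocessing import Process, shared_memory

def worker(name: str) -> None:
    # Attach to the existing segment by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

if __name__ == "__main__":
    # Create a small named shared-memory segment (lives in /dev/shm on Linux).
    shm = shared_memory.SharedMemory(create=True, size=1024)
    p = Process(target=worker, args=(shm.name,))
    p.start()
    p.join()
    print(bytes(shm.buf[:5]).decode())  # prints "hello"
    shm.close()
    shm.unlink()  # free the segment
```

A data-loading worker pool passes tensors between processes the same way, so many small segments like this one must fit into the container's shared-memory allotment.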
For machines without a GPU, pull the CPU-only version of the image:
docker pull ufoym/deepo:cpu
Launch an interactive shell:
docker run -it ufoym/deepo:cpu bash
To share data and configuration between the host (your machine or VM) and the container, use the -v option:
docker run -it -v /host/data:/data -v /host/config:/config ufoym/deepo:cpu bash
This makes /host/data on the host visible as /data inside the container, and /host/config as /config. This isolation helps prevent containerized experiments from accidentally overwriting or reading the wrong data.
Note that some frameworks (e.g., PyTorch) use shared memory for inter-process communication. If you use multiprocessing, the container’s default shared memory size may be insufficient. Increase it with --ipc=host or --shm-size:
docker run -it --ipc=host ufoym/deepo:cpu bash
You are now ready to begin your journey.
$ python
>>> import tensorflow
>>> import torch
>>> import keras
>>> import mxnet
>>> import chainer
>>> import paddle
$ darknet
usage: darknet <function>
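The interactive session above can also be automated. A small script (hypothetical, not shipped with Deepo) can probe each framework without crashing on any that are absent, which is handy for checking a slimmed-down custom image:

```python
import importlib.util

# Python frameworks bundled in the all-in-one image (darknet is a CLI tool,
# not an importable module, so it is not listed here).
FRAMEWORKS = ["tensorflow", "torch", "keras", "mxnet", "chainer", "paddle"]

def available(module: str) -> bool:
    """Return True if `module` can be imported in this environment."""
    return importlib.util.find_spec(module) is not None

if __name__ == "__main__":
    for name in FRAMEWORKS:
        status = "ok" if available(name) else "missing"
        print(f"{name:12s} {status}")
```

Running it inside the all-in-one container should report every framework as present; inside a single-framework image, only that framework's module will show "ok".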
The docker pull ufoym/deepo command from Quick Start gives you a standard image containing every available deep learning framework. You can also customize your own environment.
If you prefer a single framework instead of the all-in-one image, simply append a tag with the framework name. For example, to pull TensorFlow only:
docker pull ufoym/deepo:tensorflow
To use Jupyter inside a container, pull the all-in-one image and launch Jupyter Lab with port 8888 published and a host directory mounted as the working directory:
docker pull ufoym/deepo
docker run --gpus all -it -p 8888:8888 -v /home/u:/root --ipc=host ufoym/deepo jupyter lab --no-browser --ip=0.0.0.0 --allow-root --LabApp.allow_origin='*' --LabApp.root_dir='/root'
To build your own customized image, first clone the repository and enter the generator directory:
git clone https://github.com/ufoym/deepo.git
cd deepo/generator
The generator takes an output Dockerfile path followed by the modules to include. For example, to create an image with pytorch and keras:
python generate.py Dockerfile pytorch keras
Or with CUDA 11.3 and cuDNN 8:
python generate.py Dockerfile pytorch keras --cuda-ver 11.3.1 --cudnn-ver 8
This generates a Dockerfile with everything needed to build pytorch and keras. The generator automatically resolves dependencies and topologically sorts them, so you don’t need to worry about missing packages or ordering.
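The dependency resolution the generator performs can be sketched as a topological sort over a module-dependency graph. The sketch below uses a hypothetical dependency map and Python's stdlib graphlib (3.9+); it is a simplified illustration, not the generator's actual code:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each module lists what must be installed first.
DEPS = {
    "python": [],
    "pytorch": ["python"],
    "tensorflow": ["python"],
    "keras": ["tensorflow"],
}

def install_order(targets):
    """Return all required modules, ordered so dependencies come first."""
    ts = TopologicalSorter()
    seen = set()

    def add(mod):
        if mod in seen:
            return
        seen.add(mod)
        ts.add(mod, *DEPS.get(mod, []))  # predecessors sort before mod
        for dep in DEPS.get(mod, []):
            add(dep)

    for target in targets:
        add(target)
    return list(ts.static_order())

if __name__ == "__main__":
    # Requesting pytorch and keras pulls in python and tensorflow as well.
    print(install_order(["pytorch", "keras"]))
```

Because the order is derived from the graph, requesting `pytorch keras` transparently pulls in their prerequisites in a valid installation order, which is why missing packages and mis-ordered build steps are not a concern.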
You can also specify the Python version:
python generate.py Dockerfile pytorch keras python==3.8
Once the Dockerfile is generated, build the image from it:
docker build -t my/deepo .
This may take several minutes, as some libraries are compiled from source.
| . | modern-deep-learning | dl-docker | jupyter-deeplearning | Deepo |
|---|---|---|---|---|
| ubuntu | 16.04 | 14.04 | 14.04 | 20.04 |
| cuda | X | 8.0 | 6.5-8.0 | 11.3/None |
| cudnn | X | v5 | v2-5 | v8 |
| onnx | X | X | X | O |
| tensorflow | O | O | O | O |
| pytorch | X | X | X | O |
| keras | O | O | O | O |
| mxnet | X | X | X | O |
| chainer | X | X | X | O |
| darknet | X | X | X | O |
| paddlepaddle | X | X | X | O |
| . | CUDA 11.3 / Python 3.8 | CPU-only / Python 3.8 |
|---|---|---|
| all-in-one | latest all all-py38 py38-cu113 all-py38-cu113 | all-py38-cpu all-cpu py38-cpu cpu |
| TensorFlow | tensorflow-py38-cu113 tensorflow-py38 tensorflow | tensorflow-py38-cpu tensorflow-cpu |
| PyTorch | pytorch-py38-cu113 pytorch-py38 pytorch | pytorch-py38-cpu pytorch-cpu |
| Keras | keras-py38-cu113 keras-py38 keras | keras-py38-cpu keras-cpu |
| MXNet | mxnet-py38-cu113 mxnet-py38 mxnet | mxnet-py38-cpu mxnet-cpu |
| Chainer | chainer-py38-cu113 chainer-py38 chainer | chainer-py38-cpu chainer-cpu |
| Darknet | darknet-cu113 darknet | darknet-cpu |
| PaddlePaddle | paddle-cu113 paddle | paddle-cpu |
| . | CUDA 11.3 / Python 3.6 | CUDA 11.1 / Python 3.6 | CUDA 10.1 / Python 3.6 | CUDA 10.0 / Python 3.6 | CUDA 9.0 / Python 3.6 | CUDA 9.0 / Python 2.7 | CPU-only / Python 3.6 | CPU-only / Python 2.7 |
|---|---|---|---|---|---|---|---|---|
| all-in-one | py36-cu113 all-py36-cu113 | py36-cu111 all-py36-cu111 | py36-cu101 all-py36-cu101 | py36-cu100 all-py36-cu100 | py36-cu90 all-py36-cu90 | all-py27-cu90 all-py27 py27-cu90 | | all-py27-cpu py27-cpu |
| all-in-one with jupyter | | | | | all-jupyter-py36-cu90 | all-py27-jupyter py27-jupyter | | all-py27-jupyter-cpu py27-jupyter-cpu |
| Theano | theano-py36-cu113 | theano-py36-cu111 | theano-py36-cu101 | theano-py36-cu100 | theano-py36-cu90 | theano-py27-cu90 theano-py27 | | theano-py27-cpu |
| TensorFlow | tensorflow-py36-cu113 | tensorflow-py36-cu111 | tensorflow-py36-cu101 | tensorflow-py36-cu100 | tensorflow-py36-cu90 | tensorflow-py27-cu90 tensorflow-py27 | | tensorflow-py27-cpu |
| Sonnet | sonnet-py36-cu113 | sonnet-py36-cu111 | sonnet-py36-cu101 | sonnet-py36-cu100 | sonnet-py36-cu90 | sonnet-py27-cu90 sonnet-py27 | | sonnet-py27-cpu |
| PyTorch | pytorch-py36-cu113 | pytorch-py36-cu111 | pytorch-py36-cu101 | pytorch-py36-cu100 | pytorch-py36-cu90 | pytorch-py27-cu90 pytorch-py27 | | pytorch-py27-cpu |
| Keras | keras-py36-cu113 | keras-py36-cu111 | keras-py36-cu101 | keras-py36-cu100 | keras-py36-cu90 | keras-py27-cu90 keras-py27 | | keras-py27-cpu |
| Lasagne | lasagne-py36-cu113 | lasagne-py36-cu111 | lasagne-py36-cu101 | lasagne-py36-cu100 | lasagne-py36-cu90 | lasagne-py27-cu90 lasagne-py27 | | lasagne-py27-cpu |
| MXNet | mxnet-py36-cu113 | mxnet-py36-cu111 | mxnet-py36-cu101 | mxnet-py36-cu100 | mxnet-py36-cu90 | mxnet-py27-cu90 mxnet-py27 | | mxnet-py27-cpu |
| CNTK | cntk-py36-cu113 | cntk-py36-cu111 | cntk-py36-cu101 | cntk-py36-cu100 | cntk-py36-cu90 | cntk-py27-cu90 cntk-py27 | | cntk-py27-cpu |
| Chainer | chainer-py36-cu113 | chainer-py36-cu111 | chainer-py36-cu101 | chainer-py36-cu100 | chainer-py36-cu90 | chainer-py27-cu90 chainer-py27 | | chainer-py27-cpu |
| Caffe | caffe-py36-cu113 | caffe-py36-cu111 | caffe-py36-cu101 | caffe-py36-cu100 | caffe-py36-cu90 | caffe-py27-cu90 caffe-py27 | | caffe-py27-cpu |
| Caffe2 | | | | | caffe2-py36-cu90 caffe2-py36 caffe2 | caffe2-py27-cu90 caffe2-py27 | caffe2-py36-cpu caffe2-cpu | caffe2-py27-cpu |
| Torch | torch-cu113 | torch-cu111 | torch-cu101 | torch-cu100 | torch-cu90 | torch-cu90 torch | | torch-cpu |
| Darknet | darknet-cu113 | darknet-cu111 | darknet-cu101 | darknet-cu100 | darknet-cu90 | darknet-cu90 darknet | | darknet-cpu |
If you find Deepo useful in your research, please consider citing:
@misc{ming2017deepo,
author = {Ming Yang},
title = {Deepo: Set up a deep learning environment with a single command line.},
year = {2017},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/ufoym/deepo}}
}
We appreciate all contributions. If you are planning to contribute bug fixes, please go ahead and open a pull request directly. If you plan to contribute new features, utility functions, or extensions, please open an issue first to discuss your idea with us.
Deepo is MIT licensed.