Basic Docker Commands
- `docker version` - Displays Docker version information.
```bash
docker version
```
- `docker info` - Provides more detailed information about the Docker installation.
```bash
docker info
```
- `docker run` - Runs a command in a new container.
```bash
docker run hello-world
```
- `docker ps` - Lists running containers. Use `-a` to list all containers (running and stopped).
```bash
docker ps
docker ps -a
```
- `docker stop` - Stops one or more running containers.
```bash
docker stop <container_id_or_name>
```
- `docker start` - Starts one or more stopped containers.
```bash
docker start <container_id_or_name>
```
- `docker restart` - Restarts a running container.
```bash
docker restart <container_id_or_name>
```
- `docker rm` - Removes one or more containers.
```bash
docker rm <container_id_or_name>
```
- `docker rmi` - Removes one or more images.
```bash
docker rmi <image_id_or_name>
```
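Taken together, a typical container lifecycle using these commands might look like the sketch below; the container name `web` and the `nginx` image are illustrative examples only:
```bash
docker run -d --name web nginx   # start a container in the background from the nginx image
docker ps                        # confirm it is running
docker stop web                  # stop the running container
docker rm web                    # remove the stopped container
docker rmi nginx                 # remove the image once no container depends on it
```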
Image Management
- `docker images` - Lists the Docker images available locally.
```bash
docker images
```
- `docker pull` - Pulls an image or a repository from a registry.
```bash
docker pull ubuntu
```
- `docker build` - Builds Docker images from a Dockerfile.
```bash
docker build -t myimage .
```
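For context, `docker build` expects a Dockerfile in the build directory. A minimal, hypothetical Dockerfile and the matching build-and-run commands are sketched below; the file contents and image name are illustrative only:
```bash
# Write a minimal, illustrative Dockerfile into the current directory.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
CMD ["echo", "hello from myimage"]
EOF

docker build -t myimage .   # build an image tagged "myimage" from the Dockerfile
docker run --rm myimage     # run it once; --rm removes the container afterwards
```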
Network Management
- `docker network ls` - Lists networks.
```bash
docker network ls
```
- `docker network create` - Creates a new network.
```bash
docker network create my-network
```
- `docker network rm` - Removes one or more networks.
```bash
docker network rm my-network
```
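To show why a user-defined network is useful, the sketch below attaches two containers to the same network so they can reach each other by name; the container names and images are illustrative examples:
```bash
docker network create my-network                           # create the user-defined network
docker run -d --name db --network my-network redis:7       # start a container attached to it
docker run --rm --network my-network alpine ping -c 3 db   # a second container resolves "db" by name
docker stop db && docker rm db                             # remove the attached container first
docker network rm my-network                               # then the now-unused network can be removed
```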
Docker Compose Commands
- `docker-compose up` - Builds, (re)creates, starts, and attaches to containers for a service.
```bash
docker-compose up
```
- `docker-compose down` - Stops and removes the containers, networks, volumes, and images created by `up` (by default only containers and networks are removed; pass `-v` to also remove volumes and `--rmi` to remove images).
```bash
docker-compose down
```
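For context, `docker-compose up` and `docker-compose down` read a `docker-compose.yml` file in the current directory. A minimal, hypothetical example is sketched below; the service name and image are illustrative only:
```bash
# Write a minimal, illustrative docker-compose.yml into the current directory.
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
EOF

docker-compose up -d   # create and start the "web" service in the background
docker-compose down    # stop and remove the containers and network it created
```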
These commands are quite common for daily use in Docker environments and are essential for managing Docker containers and images effectively. Remember to replace placeholders (like `<container_id_or_name>` or `<image_id_or_name>`) with actual values from your Docker environment.
With GPU
When you have already created and started a Docker container with NVIDIA GPU support, using it from a terminal works much like accessing any other Docker container, as described previously. The difference lies in ensuring that the container was set up to use the NVIDIA GPU in the first place, which requires the NVIDIA Container Toolkit on the host and a GPU flag such as `--gpus all` when the container is created.
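If you have not created such a container yet, a typical starting point might look like the sketch below; the image tag and container name are illustrative examples, not taken from any specific setup, and it assumes the NVIDIA Container Toolkit is already installed on the host:
```bash
# Start a long-running container with all GPUs exposed to it (name and tag are examples).
docker run -d --gpus all --name my_gpu_container \
  nvidia/cuda:12.2.0-base-ubuntu22.04 sleep infinity
```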
Below are detailed steps on how to access and use your NVIDIA GPU-enabled Docker container from a terminal:
1. Verify GPU Access in the Container
Before diving into accessing the container, it's useful to first confirm that your container has access to the GPU. You can check this by running a command like `nvidia-smi` inside the container:
```bash
docker exec -it <container_name_or_id> nvidia-smi
```
This command should output information about the GPU, indicating that the container has access to it. If it does, you can proceed to interact with the container normally.
2. Accessing the Container
To access the container, you use the `docker exec` command to start an interactive shell session:
```bash
docker exec -it <container_name_or_id> /bin/bash
```
Replace `<container_name_or_id>` with the actual name or ID of your container. You can find this by listing all running containers with `docker ps`.
3. Running GPU-Accelerated Programs
Inside the container, you can execute any installed GPU-accelerated programs. For example, if you have TensorFlow installed in a container configured for GPU, you can start a Python session and import TensorFlow to verify it recognizes the GPU:
```python
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))
```
This Python code should list the available GPUs if TensorFlow is set up correctly to use the GPU.
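Equivalently, the same check can be run non-interactively from the host; this assumes the container is named `my_gpu_container` (as in the example session below) and that `python` is on the container's PATH:
```bash
docker exec -it my_gpu_container python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```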
4. Exiting the Container
To exit the container terminal without stopping the container, you can simply type `exit` or press `Ctrl-D`.
Example Session
Here's a quick recap of how the flow might look:
- List Containers (to find your specific container):
```bash
docker ps
```
- Check GPU Access (using `nvidia-smi`):
```bash
docker exec -it my_gpu_container nvidia-smi
```
- Access the Container:
```bash
docker exec -it my_gpu_container /bin/bash
```
- Run Python and Check TensorFlow GPU (inside the container):
```bash
python
>>> import tensorflow as tf
>>> print(tf.config.list_physical_devices('GPU'))
```
- Exit When Done:
```bash
exit
```
Troubleshooting
If the `nvidia-smi` command does not show the GPUs, or if TensorFlow does not recognize the GPU, ensure that:
- Your container was started with the `--gpus all` flag or a similar GPU specification (see the checks sketched after this list).
- The NVIDIA Docker runtime is correctly installed and configured on your host system.
- The Docker image you are using is CUDA-capable and has the necessary NVIDIA libraries.
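As a rough illustration of the first two checks, the commands below verify that the host's Docker daemon knows about the NVIDIA runtime and that a GPU is reachable from a throwaway container; the image tag is an example, and any CUDA base image should behave similarly:
```bash
# Confirm the NVIDIA runtime is registered with the Docker daemon on the host.
docker info | grep -i runtime

# One-shot sanity check: run nvidia-smi in a temporary container with GPU access.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```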
By following these steps, you can effectively use and interact with your NVIDIA GPU-accelerated Docker container from the terminal.