Docker is a software platform that helps us build, test, and deploy our applications quickly. Docker bundles our software into standardized units called containers, which include all the elements the software needs to run: libraries, system tools, code, and a runtime. Using Docker, we can quickly deploy and scale applications into any environment and be confident our code will run.

1. Docker Images:

A Docker image is a read-only file that packages an application together with all the elements it needs to run correctly.

Images are built as a sequence of layers. Every instruction in a Dockerfile creates a layer in the image. 

To build our own image, we create a Dockerfile, which has a simple syntax for defining the steps needed to create the image and run it. When we change the Dockerfile and rebuild the image, only the layers that have changed are rebuilt.
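As a minimal sketch of the build steps described above, a Dockerfile for a small Python application might look like this (the base image, file names, and command are illustrative assumptions, not taken from this article):

```dockerfile
# Each instruction below produces one layer in the resulting image.
FROM python:3.12-slim            # base image layer
WORKDIR /app                     # working directory for later instructions
COPY requirements.txt .          # copied early so the dependency layer can be cached
RUN pip install -r requirements.txt
COPY . .                         # application code changes most often, so it comes last
CMD ["python", "app.py"]         # default command when a container starts
```

Such an image would be built with `docker build -t myapp .` (the tag `myapp` is also illustrative). Because layers are cached top-down, ordering the Dockerfile from least- to most-frequently-changed files means editing only the application code reuses the cached dependency layer.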

This layering is part of what makes Docker images so lightweight, small, and fast compared to other virtualization technologies. By analogy, we can say that a Docker image is like a class, and a container is like an instance of that class.

2. Docker Containers:

A container is a runnable instance of an image. We can create, start, stop, move, or delete a container using the Docker API or CLI. We are able to connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
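The lifecycle operations above can be sketched with the Docker CLI; a minimal example, where the image name `nginx` and container name `web` are just illustrative choices:

```shell
docker create --name web nginx    # create a container from an image (not yet running)
docker start web                  # start the container
docker stop web                   # stop it
docker commit web web-snapshot    # create a new image from the container's current state
docker rm web                     # delete the container
```

`docker run` combines the `create` and `start` steps into a single command, which is the more common way to launch a container.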

By default, a container is relatively well isolated from other containers and its host machine. We can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.

A container is defined by its image as well as any configuration options we provide when we create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
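To keep state beyond a container's lifetime, data can be written to a volume. A short sketch, where the volume name, mount path, and image name are illustrative assumptions:

```shell
docker volume create appdata                     # named volume managed by Docker
docker run --rm -v appdata:/data myapp           # anything written under /data persists
docker run --rm -v appdata:/data myapp           # a later container sees the same data
```

Here `--rm` deletes each container when it exits, yet the data in the `appdata` volume survives and is available to the next container that mounts it.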

Docker's ability to run multiple containers from the same image at the same time is a great advantage, because it gives us an easy way of scaling applications.
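As a simple sketch of this kind of scaling, several containers can be started from one image, each mapped to a different host port (the `nginx` image, container names, and port numbers are illustrative):

```shell
docker run -d --name web1 -p 8081:80 nginx   # first instance on host port 8081
docker run -d --name web2 -p 8082:80 nginx   # second instance on host port 8082
docker run -d --name web3 -p 8083:80 nginx   # third instance on host port 8083
```

In practice a load balancer or an orchestrator such as Docker Compose or Kubernetes would distribute traffic across such replicas, but the underlying mechanism is the same: many containers, one image.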