Docker – A quick introduction

In the old days, IT users and developers ran everything directly on a host machine. Each machine needed an operating system that provided all of the computing, I/O, and memory management expected of a PC. The virtualization era came next: a single machine could host multiple operating systems, either through hypervisor servers (Hyper-V, ESX, etc.) or desktop virtualization clients (VMware Workstation, VirtualBox). Virtualization was a revolution in getting the most out of hardware, but every virtual machine still carried its own OS and the overhead that comes with it. Containerization followed, offering a much lighter way to get an isolated execution environment.

In summary, a container image is a lightweight, standalone, executable package of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings. Available for both Linux- and Windows-based apps, containerized software always runs the same, regardless of the environment. Containers isolate software from its surroundings (for example, differences between development and staging environments) and help reduce conflicts between teams running different software on the same infrastructure.
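To make that concrete, here is a minimal sketch using the Docker SDK for Python (the docker package). The image tag and echo command are only placeholder examples, and a local Docker daemon is assumed to be running.

```python
# Minimal sketch: run a container from an image with the Docker SDK for Python.
# Assumes the Docker daemon is running locally and the "docker" package is
# installed (pip install docker). The tag "alpine:3.19" is just an example.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pull the image: a self-contained package of code, runtime, libraries and settings.
image = client.images.pull("alpine", tag="3.19")
print("Pulled:", image.tags)

# Run a throwaway container from that image; it behaves the same on any host.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"], remove=True)
print(output.decode().strip())
```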

Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel, each running as an isolated process in user space. Containers take up less space than VMs (container images are typically tens of MBs in size) and start almost instantly.
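The following sketch (again using the Docker SDK for Python, with purely illustrative container names) spins up a few containers from the same image on one host; each is just an isolated process sharing the host kernel, which is why they start almost instantly.

```python
# Sketch: several isolated containers sharing one host kernel, via the Docker
# SDK for Python. Names and the sleep command are illustrative only.
import docker

client = docker.from_env()

# Start a few long-running containers from the same small image.
for name in ("web-1", "web-2", "web-3"):
    client.containers.run(
        "alpine:3.19",
        ["sleep", "300"],   # keep the container alive for a while
        name=name,
        detach=True,        # return immediately; containers start almost instantly
    )

# Each container is an isolated process in user space on the shared kernel.
for c in client.containers.list():
    print(c.name, c.status, c.image.tags)

# Clean up the demo containers.
for c in client.containers.list(filters={"name": "web-"}):
    c.remove(force=True)
```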

containerd is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. It is available as a daemon for both Linux and Windows, which can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage and network attachments.
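As a rough illustration of that lifecycle, the sketch below shells out to containerd's bundled ctr CLI to peek at the namespaces, containers and tasks the daemon is tracking. It assumes ctr is installed, that the process has permission to reach the containerd socket, and that a Docker engine is running on top of containerd (whose resources typically live in the "moby" namespace).

```python
# Rough sketch: inspect what the containerd daemon is managing on this host by
# shelling out to its bundled "ctr" CLI. Assumes ctr is on PATH, the script has
# enough privileges to reach the containerd socket, and Docker's resources are
# kept under the "moby" namespace (the usual default for a Docker engine).
import subprocess

def ctr(*args: str) -> str:
    """Run a ctr subcommand and return its stdout."""
    result = subprocess.run(["ctr", *args], capture_output=True, text=True, check=True)
    return result.stdout

print("containerd namespaces on this host:")
print(ctr("namespaces", "list"))

print("containers supervised by containerd for the Docker engine:")
print(ctr("--namespace", "moby", "containers", "list"))

print("running tasks (container processes) in that namespace:")
print(ctr("--namespace", "moby", "tasks", "list"))
```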


Read more @ https://goo.gl/moTigz