
I see that a lot of Docker images on Docker Hub are built from an Ubuntu base.

What does that mean? Does each container bundle a stripped-down version of the Linux kernel?

Do containers sit on top of their own kernels? But I thought containers share the kernel of the host (which in some cases is boot2docker, a custom Tiny Core Linux build, and in others something like CoreOS).

EDIT: Clarifying the question a bit. Yes, I know Docker is a process container, not a full VM. But since there are "Ubuntu" containers in the official Docker Hub registry, as well as other OSes like CentOS, what does it mean to run Ubuntu in a container?

Answer: Ahh, it just dawned on me. It is the Ubuntu user land processes, including apt-get and the other configuration tools for a particular Ubuntu build; similarly for CentOS. A Docker container is not limited to a single process, just a single entry point. So for these distribution images, the entry point is some sort of init process that spawns other processes.
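In other words, `FROM ubuntu` pulls in Ubuntu's filesystem, libraries, and tools, never a kernel. A minimal sketch of such an image (the `14.04` tag and the `curl` package are just illustrative choices, not anything from the original posts):

```dockerfile
# The base image supplies Ubuntu's userland: filesystem, shared
# libraries, and tools (apt-get, dpkg, bash, ...) -- but no kernel.
# At runtime the container uses the host's kernel.
FROM ubuntu:14.04

# apt-get works because the Ubuntu userland is present in the image
RUN apt-get update && apt-get install -y curl

# A single entry point; it may spawn further processes
CMD ["bash"]
```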

hippietrail
stewart99
  • "It is the Ubuntu user land processes," - not only processes, but also libraries. – osgx Mar 14 '15 at 19:08

3 Answers


Docker uses the host OS kernel; there is no custom or additional kernel inside a container. All containers running on a machine share this "host" kernel.
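A quick way to see this for yourself, sketched below (assumes a Linux host with the `ubuntu` image available; the `docker` call runs only if the CLI is installed):

```python
# Sketch: the kernel release reported inside a container is the host's,
# because the container image brings no kernel of its own.
import platform
import shutil
import subprocess

# Kernel release of the host (e.g. "3.13.0-24-generic")
print("host kernel:", platform.release())

# If Docker is available, the same release string is printed from
# inside an Ubuntu container.
if shutil.which("docker"):
    subprocess.run(["docker", "run", "--rm", "ubuntu", "uname", "-r"],
                   check=False)
```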

The Wikipedia article on Docker (http://en.wikipedia.org/wiki/Docker_(software)) says that

Docker uses resource isolation features of the Linux kernel such as cgroups and kernel namespaces to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.

cgroups, namespaces, and LXC are Linux kernel features for isolating groups of processes; there is still a single kernel, a single scheduler, and one instance of the kernel memory manager.
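On a Linux host you can observe these kernel namespaces directly under `/proc`; a small sketch (returns an empty dict on non-Linux systems, where `/proc/self/ns` does not exist):

```python
# Sketch: namespaces are kernel objects, listed per process under /proc.
# Two processes in the same namespace see the same identifier; all
# containers on one machine still share the single host kernel.
import os

def current_namespaces(ns_dir="/proc/self/ns"):
    """Map namespace name -> identifier, e.g. 'pid' -> 'pid:[4026531836]'."""
    if not os.path.isdir(ns_dir):  # not a Linux /proc
        return {}
    return {name: os.readlink(os.path.join(ns_dir, name))
            for name in sorted(os.listdir(ns_dir))}

for name, ident in current_namespaces().items():
    print(name, "->", ident)
```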

Boot2docker and CoreOS are just lightweight Linux distributions that supply a host kernel; they can be used to run Docker containers.

http://boot2docker.io/

boot2docker is a lightweight Linux distribution based on Tiny Core Linux made specifically to run Docker containers. It runs completely from RAM, weighs ~27MB and boots in ~5s (YMMV).

http://en.wikipedia.org/wiki/CoreOS

A single control host (CoreOS instance) runs multiple isolated Linux systems (containers), using Docker as an additional layer of abstraction and interface[14] to the underlying operating-system-level virtualization features of the Linux kernel. ... This approach relies on the Linux kernel's cgroups functionality, which provides namespace isolation and abilities to limit, account and isolate resource usage (CPU, memory, disk I/O, etc.) for the collections of processes.

Scott Stensland
osgx
  • This doesn't answer the question. – EML Mar 26 '17 at 10:10
  • EML, which question? By default all Docker containers have no own kernels. There is only one host kernel for all Docker containers. – osgx Mar 26 '17 at 14:00
  • sure, your answer is fine in as far as it goes, but the OP wanted to know why he must `FROM` a Ubuntu base image when the host is *already* running Ubuntu. To answer the question you have to explain what a base image is. – EML Mar 26 '17 at 14:50
  • EML, the question was edited with "Clarifying" (https://superuser.com/posts/889472/revisions) after my answer had been posted, so it is an answer to the original question without the clarification. If you have some information to share about base images and kernels in Docker, add another answer. – osgx Mar 26 '17 at 15:05
  • Is the boot2docker link dead? It is telling me something about washing backpacks... – SOFe Dec 08 '20 at 13:51

In almost all cases, the host OS kernel is shared. To run a different kernel you need to use virtualization; this is rare, and used only when necessary, because of the performance penalty it carries.

"The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient."

This might help explain how it works:

[image: diagram comparing virtual machines and Docker containers]

Source: https://www.docker.com/whatisdocker/

JeremiahBarrar
  • Any source for "Packages may use different kernels"??? Docker itself can't use several kernels, there is always only a single host kernel. Only when combined with a hypervisor (virtualization) can we start several hosts, each with its own kernel version, and run one Docker per virtual host; but for any host there will be only one kernel for the host and for its Docker containers. – osgx Mar 14 '15 at 06:46
  • I clarified my answer. – JeremiahBarrar Mar 14 '15 at 06:52
  • What is package? Docker container has no kernel inside it; it just installed and started on the kernel which is used on the host. So: one Docker = one server = one kernel, just as shown in picture. No way to use two kernels with single Docker engine; all containers inside this engine will use same kernel. I think correct answer is "No, Docker containers can't use different kernels within single instance of Docker Engine" – osgx Mar 14 '15 at 06:58
  • Each docker container can run whatever code it wants, including virtualization software that is able to load any kernel your software may require. You can run Windows in a container if you want to. – JeremiahBarrar Mar 14 '15 at 07:40
  • JeremiahBarrar, Got it, thank you for explanation. Is running virtualization software from inside of Docker container documented and is it supported by Docker? What kind of virtualization will work from Docker (software qemu, qemu+kvm, xen,...)? – osgx Mar 14 '15 at 08:10
  • Most virtualization software that runs on linux will work when properly configured. Here is an example: https://github.com/Ulexus/docker-qemu – JeremiahBarrar Mar 14 '15 at 10:48
  • This is a misleading answer. – Chris Beach Sep 28 '18 at 08:35

All Docker containers use the host's kernel.

This also means that, in theory, an incompatibility between the host kernel and the container's distribution could cause problems. For example, if containerized software wanted to use a kernel feature that was not compiled into the host kernel, it would not work.

In practice, this does not happen. The main source of trouble with Docker containers is the (often hard-to-explain) limitations of Docker itself, not host-guest kernel incompatibility.

(P.S. This is because practically all Linux distros work with any Linux kernel.)
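One consequence of sharing the host kernel: a containerized application that needs a minimum kernel version must check the *host's* kernel, since that is the only kernel it will ever see. A small sketch (the 3.10 threshold is just an example; Docker itself historically required kernel 3.10 or newer on the host):

```python
# Sketch: check the (host) kernel version from inside a container.
import os

def kernel_at_least(major, minor, release=None):
    """True if `release` (default: the running kernel) is >= major.minor."""
    if release is None:
        release = os.uname().release      # e.g. "3.13.0-24-generic"
    version = release.split("-")[0]       # strip the distro suffix: "3.13.0"
    got = tuple(int(p) for p in version.split(".")[:2])
    return got >= (major, minor)

# A CentOS 7 host kernel would fail a hypothetical ">= 4.0" requirement
print(kernel_at_least(4, 0, "3.10.0-1160.el7"))   # prints False
```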

peterh
  • "practically all Linux distros work with any Linux kernel" - this is too broad a generalisation – scytale Nov 28 '21 at 00:34
  • @scytale That is the reality. – peterh Nov 28 '21 at 09:02
  • An image containing windows software will not run on a linux system only able to create linux kernel based containers from it. – Oliver Meyer Jul 28 '22 at 07:29
  • That is why running linux software in docker on windows ist not that natural. In the past the linux kernel was run in a virtual machine. Today it is run via Hyper-V in the windows subsystem for linux (wsl). – Oliver Meyer Jul 28 '22 at 08:04
  • @OliverMeyer I fully agree - I think the problem was what m$ did to wsl2. I think it was not a conceptual change, but it was an [intentional harm](https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish). – peterh Jul 28 '22 at 11:18
  • "is**t** not natural" :-) Willkommen :-) – peterh Jul 28 '22 at 12:32
  • @peterh sorry but... why would all linux distros work with any kernel? if one distro is using a feature from new kernel, how can it work with older kernel? – Summer Sun Oct 14 '22 at 03:29
  • @SummerSun It won't. But it happens *very* rarely - practically all the tools and softwares what an ordinary user (and most developer) needs is supported by the current mainstream kernel. If a new, patched kernel X has a function Y, which is used by an essential software of your distro, then yes, that won't work. But I did not see such a thing since decades (although nothing prevents them to exist). – peterh Oct 14 '22 at 13:01
  • @SummerSun ...and it is practically impossible in docker. The essence of the docker is that it limits the kernel functionality of the processes running inside the containers. That a containerized process would need some new, non-standard kernel feature, that would be imho closed out (and it would be a crap solution even if it would exist). – peterh Oct 14 '22 at 13:02
  • thanks @peterh! I get it. I just went through a cross-platform docker based project. For windows containers the host OS version (even windows build no.) must match with image OS closely, but on linux I can run ubuntu 18.04 container (kernel 4.15) on centos 7 host (kernel 3.10.0) with nothing wrong. That's why i have a doubt for kernel compability problem and find your answer. So I am not going to consider the compability issue for my project anymore XDDDD – Summer Sun Oct 17 '22 at 06:53