
Surprised to see Docker-in-Docker mentioned so deeply down here. It’s an extremely valid way of doing things, and non-trivial to implement a caching layer for.


Isn't Docker-in-Docker actually using the host's Docker daemon? I mount the docker socket into all my Docker-in-Docker containers, so all the build tasks running on the same host can share the caches.

I guess one could have docker containers that actually run docker, but I don't see a reason to do that...


No, Docker-in-Docker generally refers to running a new dockerd inside of a container.
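A minimal sketch of what that looks like, using the official `docker:dind` image (the container name `dind` is just an example):

```shell
# Start an inner dockerd inside a container. --privileged is required so the
# inner daemon can manage its own namespaces, cgroups and mounts.
docker run -d --name dind --privileged docker:dind

# Builds run against the *inner* daemon, with its own image/layer cache,
# completely separate from the host daemon's cache:
docker exec dind docker info
```

Because the inner daemon's cache is isolated, CI systems using this pattern typically need extra work (a persistent volume, a registry cache, etc.) to avoid rebuilding everything from scratch.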


I was wondering how Docker-in-Docker works, but I couldn't find it dockermented anywhere. If it's using the host's Docker daemon, why do you need to mount the docker socket?


> If it's using the host's Docker daemon, why do you need to mount the docker socket?

Docker has two components: the daemon and the client tool used to send it commands. For that tool to send commands to the daemon, it needs a way to communicate with it. Mounting the daemon's socket into the container is the easiest method.
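A sketch of that socket-mount approach (sometimes called "docker outside of docker"), using the official CLI-only `docker:cli` image as an example:

```shell
# The container only needs the docker CLI; every command it issues goes to the
# host's daemon through the mounted socket, so images built or pulled here are
# cached by the host daemon and visible to any other container doing the same.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:cli docker ps
```

Note that this grants the container root-equivalent control over the host, since anything with access to the socket can start arbitrary privileged containers.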

I have a "tooling" image that consists of a set of scripts (Python code) to do various ops-related things. One of those is building new images when required: given a git commit, a script detects the images that need to be built and builds them. Having my tooling code in a container makes it easier to deploy and use new versions of it. I don't need anything on the host apart from Docker itself. No build scripts, no Python.

As I said, I could be running the docker daemon inside the container, but that breaks one of my rules for containers: containers are not virtual machines, they should run only one process, and that process's output should go to stdout.


Very interesting, thanks for sharing! I found a good article about it: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-d...

At the end he describes mounting the socket. The tooling image, which has all the dependencies needed to build, will also have the docker CLI installed, which is what I'm assuming you are doing.

I might just use this. Cheers!


Docker-in-Docker (DinD) doesn't piggy back on the host's Docker daemon, but instead runs a stripped-down Docker daemon inside of the container. The major downside is that I/O is quite slow, since you're going through two virtualization layers (the DinD one, plus the host Docker daemon).


This is not true.

There is, effectively, no "virtualization" layer here. A few things can add overhead if you need them, such as bridge networking (which really shouldn't be a bottleneck for the majority of people) and the CoW filesystem, which docker won't be (or shouldn't be) running on top of anyway, since, for example, overlayfs on top of overlayfs is not supported.

There is also nothing stripped down about the daemon inside of the container.


Sure, I was speaking off the cuff based on my experience from a few years ago. Maybe I messed up and somehow had the DinD daemon not use a volume mount, and that's what caused it to build images slowly?


Very well could be, since it would have to fall back to the naive vfs graphdriver, which just copies files around instead of using copy-on-write layers.
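One way to check which storage driver an inner daemon ended up with (assuming a DinD container named `dind`, as a hypothetical example):

```shell
# Ask the inner daemon which graphdriver it selected; "overlay2" is the fast
# CoW path, while "vfs" is the slow full-copy fallback.
docker exec dind docker info --format '{{.Driver}}'
```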


Will mounting the socket, as the person I replied to suggested, make it use the host's docker daemon?


Yes, that's the point.


Usually docker outside of docker is used, no? If the image is cached on the host, it would be available to any container having access to the docker daemon socket as well since it's the same daemon.


No that’s only the case if you mount the Docker socket into the container, which is not what Docker-in-Docker is.



