Can create a deeply-nested directory structure that can't be deleted #13451
Environment details (AWS, VirtualBox, physical, etc.):

I'm not sure what's expected here.
Docker allows a directory structure to be created that cannot be deleted. This is a bad thing: a defect. It has downstream effects on other projects (coreutils, OpenEmbedded), which is not ideal either. How serious is it? Not so bad - at least there are some workarounds available. So what is expected? A fix, perhaps, but it's up to you.
@DouglasRoyds I don't see how Docker is allowing this, or how it could prevent it.
According to https://docs.docker.com/reference/run, if "We set nothing about memory, this means the processes in the container can use as much memory and swap memory as they need" - which I hadn't. As mentioned in my original report (above), I ran exactly the same experiment outside of the Docker container, generating a directory tree 10 times as deep without any problem.
@DouglasRoyds this is not an entirely accurate statement, as any process started by the docker daemon is going to inherit certain settings (like ridiculously high ulimit values) by default, which you as a user probably do not have set (and as such are comparatively low)... note I'm just using this as an example. I'm curious whether you tried to make this fail on a different graphdriver?
... uh, no, sorry. I'd have to do quite a lot of work here to set up a separate partition to try btrfs, for instance. You don't happen to have a btrfs (or even vfs) instance available yourself? My test script above is pretty straightforward. Sorry I can't be more helpful.
Just hit this, annoying. Ubuntu 12.04, kernel 3.13.0-57-generic (pretty sure this is a 14.04 kernel), docker 1.7.1. Devicemapper works:
Aufs fails:
For completeness I also tried AUFS on Ubuntu 14.04 with kernel 3.19.0-25 (i.e. from 15.04). The result is exactly the same as for AUFS on 12.04. I don't know why you still recommend AUFS! I've been having a streak of bad luck with it recently :(
@aidanhs the main reason for recommending AUFS is that, at this moment, it's the most stable solution out of the box (compare the number of issues labeled devicemapper with aufs). Devicemapper certainly can be a good driver, but requires manual configuration to be useful in production (the default settings are certainly not recommended for that). In the long term, we hope to make overlay the default driver, as it is part of recent kernels. However, there are some show-stoppers there, making this not possible currently.
Yeah, you're between a rock and a hard place really. I'm sure if I were on devicemapper I would be asking why aufs wasn't the default!
Just found this line in /var/log/kern.log, for what it's worth. The word-wrap is mine, it was all on one line in the logfile:
I'm using the first workaround from the list in my original posting, so I assume the second and third complaints are related to that. I have no familiarity with apparmor, so I'm open to suggestions for experiments.
ping @ewindisch, can you have a look here?
This problem with being unable to remove deeply nested confdir3 directories also happens when building GNU Octave. @DouglasRoyds's patch for getcwd-path-max.m4 is a good workaround. I was also able to run the following line at the bash terminal.
I solved this problem. Try this.
http://wiki.apparmor.net/index.php/FAQ#Failed_name_lookup_-_name_too_long
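For reference, the fix behind that FAQ entry is to raise AppArmor's path_max module parameter on the host (the default of 8192 matches the figure discussed later in this thread). This is a sketch of host configuration, not something from the original comment: it assumes the parameter is runtime-writable via sysfs, as the later "bump up the path_max" comments here suggest, and it requires root. The value 16384 is an arbitrary example.

```shell
# Inspect the current AppArmor path_max (default 8192 on most systems)
cat /sys/module/apparmor/parameters/path_max

# Raise it on the host; 16384 is an illustrative value, pick per workload
echo 16384 | sudo tee /sys/module/apparmor/parameters/path_max
```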
Thanks @yuseitahara |
Changing the apparmor path_max just moves the goalposts. Outside the container (ie. on the host machine):
Inside the container:
So we just added some 800 bytes to the too-long-to-delete directory path. On the other hand, if I create the overly-long directory path first, and then bump up the path_max (outside the container), I can subsequently delete the directory. The longest path I can delete with the default path_max = 8192 is 8086 characters long, ie. 106 characters shorter than the path_max. The longest path I can create is 8099 characters, ie. 93 shorter than path_max. Sure enough, if I then increase path_max by 13 characters (over the default 8192), I can delete the directory from inside the container:
We can create a directory path that is 13 characters longer than we can delete. Interestingly, if I create a file deep down in this (deep) directory tree, I can delete the longest filename that I am able to create, eg:
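When experimenting with these limits, the shell's `${#var}` length expansion is a quick way to check how close a path is to the boundary. A trivial sketch (the 8192 figure is AppArmor's default path_max, as discussed above; the helper itself is mine, not from the thread):

```shell
# Print the length of the current absolute path, and the headroom left
# under AppArmor's default path_max of 8192.
limit=8192
p=$(pwd)
printf 'path length: %d, headroom: %d\n' "${#p}" $((limit - ${#p}))
```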
I cannot replicate this on a recent aufs (4.4 kernel series); it accepts paths of at least 44128 characters without issues, so it looks like this was resolved upstream.
if this is an aufs issue outside of docker, I don't think there's much we can do here indeed. I'll close this issue, but feel free to continue the discussion.
Problem still applies under 4.4 kernel:
Just to be clear, this is an interaction problem between aufs and docker, as opposed to just an aufs problem: the defect does not happen outside of a docker container (directly in an aufs mount). |
I just ran into this as well while trying to |
I have the same problem while building the octave 4.0.0 docker image.
For me, the workaround was to change the Docker storage driver from aufs to overlay2. System information:
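For anyone wanting to make the same switch, the storage driver is normally set in the daemon configuration file. A minimal sketch (this is standard Docker daemon configuration, not from the comment above; the file location is the usual one on Linux, and the daemon needs a restart afterwards - note that changing storage driver hides existing images and containers):

```json
{
  "storage-driver": "overlay2"
}
```

Saved as `/etc/docker/daemon.json`, followed by restarting the Docker daemon.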
While trying to build coreutils inside a Docker container, I found that the coreutils ./configure step left behind a very deep directory structure that could not be deleted:
Users could conceivably be affected in other cases in which an application (presumably accidentally) creates an excessively deep directory structure that cannot be deleted. It does need to be quite deep: 8000 characters generally represent quite a few directories.
I can reproduce this simply with the following script:
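The script itself didn't survive the copy here, so this is a hypothetical sketch of the kind of reproduction described: the directory name, depth, and structure are my own stand-ins, not the original script. On a plain filesystem the final `rm -rf` succeeds; the thread reports that inside a Docker container on aufs it fails with a "name too long" error.

```shell
#!/bin/sh
# Hypothetical reproduction sketch: build a chain of "confdir3" directories
# whose absolute path exceeds PATH_MAX (4096 on Linux), then try to remove it.
# Depth and names are illustrative, not taken from the original report.
set -e
top=$(mktemp -d)
cd "$top"
i=0
while [ $i -lt 500 ]; do    # 500 levels of "confdir3/" is roughly 4500 chars
    mkdir confdir3
    cd confdir3
    i=$((i + 1))
done
cd "$top"
rm -rf confdir3 && echo "deleted" || echo "rm -rf failed"
```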
Initially, I was tempted to blame aufs, but I can run the same experiment outside of a Docker container without hitting the same problem. I generated a directory structure over 10000 directories deep (aufs running on top of ext4), with a path length in excess of 90000 characters, but ran out of patience before finding any upper limit to aufs' path length.
The coreutils configuration runs code from `getcwd-path-max.m4` (from the gnulib project), which includes C code to probe the limits of `PATH_MAX` for `getcwd()`. It does this by generating a directory structure that runs out to `PATH_MAX`, and testing that `getcwd()` remains well-behaved. At the end, it politely attempts to remove this directory structure, but silently fails. This does not have any immediate impact on the configuration of coreutils, though a second attempt to run `./configure` does have a different result (as `mkdir confdir3/` fails). It leaves the user unable to delete the coreutils directory after building, triggering (for instance) this downstream OpenEmbedded/Yocto defect: https://bugzilla.yoctoproject.org/show_bug.cgi?id=7338

Work-arounds:
- From the top, manually move the deep structure up by one level, and then delete it, eg:
- For the coreutils build, patch `getcwd-path-max.m4` (and regenerate the `./configure` script).
- Build coreutils in a data volume.
- Delete the directory structure from the underlying fs (not ideal - requires root privilege).
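The first workaround (move the deep structure up one level at a time, so the remaining path shrinks until it can be deleted) can be sketched as a loop. The five-level `a/a/...` chain and the `tmp_lift` name below are illustrative stand-ins, not the actual confdir3 tree or commands from the report:

```shell
#!/bin/sh
# Sketch of the "move up one level, then delete" workaround.
# Assumes the tree is a simple chain of nested directories.
set -e
top=$(mktemp -d)
cd "$top"
mkdir -p a/a/a/a/a           # stand-in for a deep confdir3 chain
while [ -d a/a ]; do
    mv a/a tmp_lift          # lift the child (and everything below it) up
    rmdir a                  # the parent is now empty; remove it
    mv tmp_lift a            # the chain is now one level shorter
done
rmdir a && echo "gone"
```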
Docker 1.5.0
Host Ubuntu 15.04
Guest Ubuntu 14.04