
Feature request: expand Dockerfile ENV $VARIABLES in WORKDIR #2637

Closed
brickZA opened this issue Nov 10, 2013 · 45 comments

Comments

@brickZA

brickZA commented Nov 10, 2013

The following Dockerfile snippet:

ENV APP_PATH /app
WORKDIR $APP_PATH/workspace

results in the PWD being set to '/$APP_PATH/workspace' and not '/app/workspace' as expected.

@crosbymichael
Contributor

@brickZA Yes, this won't work because we don't execute cd in a shell; we make a syscall to change the directory.

@brickZA
Author

brickZA commented Dec 14, 2013

Whoa there, could we keep this open as a feature request then? Seems like a useful and expected feature; not expanding env vars for cd seems to violate the principle of least surprise.

@tianon
Member

tianon commented Dec 17, 2013

Docker never expands environment variables. You'll see the same behavior if you try something like CMD ["echo","$SOME_ENV_VAR"], because $SOME_ENV_VAR is only expanded by the shell, and the [""] syntax bypasses the shell entirely (that's the whole point). The WORKDIR argument is (and has to be) more like the [""] syntax than the shell, because executing cd in a shell wouldn't have any useful effect for us anyhow. This is why RUN cd ... doesn't work, which you could also quite easily argue violates the principle of least surprise.
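To make the contrast concrete (a minimal sketch; only the last CMD in a Dockerfile actually takes effect, so treat these as alternatives):

ENV SOME_ENV_VAR hello
CMD echo $SOME_ENV_VAR          # shell form: run as /bin/sh -c '...', the shell prints "hello"
CMD ["echo", "$SOME_ENV_VAR"]   # exec form: no shell involved, prints the literal string "$SOME_ENV_VAR"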

We also now have the issue of this being a backwards-incompatible change if introduced, so it'd need to be carefully vetted and transitioned somehow. I think you'll find more support if you can provide a clean transition path from where we are now to shell variable expansion (and a nice simple patch that does so), but IANTM for this area, so take all this for what it's worth (it's simply my two cents on the matter).

@tianon
Member

tianon commented Dec 17, 2013

I stand corrected: https://github.com/dotcloud/docker/blob/master/buildfile.go#L141 (ReplaceEnvMatches in buildfile.go)

We currently expand ENV vars inside other ENV vars, AFAIK mostly so that things like ENV PATH $PATH:/something/more work.
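For example, that inner-ENV expansion is what makes this idiom behave (a sketch):

ENV PATH $PATH:/opt/tools/bin
# the builder substitutes the current value of PATH on the right-hand side,
# so the stored PATH ends with :/opt/tools/bin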

@tianon
Member

tianon commented Dec 29, 2013

And you know, we also currently do this for ADD (not just ENV), which makes it genuinely surprising for WORKDIR not to. I'm reopening this on the grounds that ADD and WORKDIR should at least be consistent, especially given that they're related to one another.

Whether we remove the support from ADD or add it to WORKDIR is still up for debate IMO. I think it's questionable at best in both cases, but I can see the appeal.

@tianon tianon reopened this Dec 29, 2013
@brianclements

I can vouch for the usefulness of expanding WORKDIR vars. I have large builds in which I like to set a version number for a piece of software and use that var to download files, make directories, build, change directories again, add files, etc. I think this would be very useful and have run up against this issue myself.

As far as backward compatibility goes: since WORKDIR never expanded variables before, no one would have them in existing Dockerfiles, so implementing this now wouldn't create any unexpected behavior, no?

@deeky666

I have exactly the same use case as @brianclements: using an environment variable for the version number of a package to build. So I'm 👍 on this :)

@SvenDowideit
Contributor

+1

FROM debian:jessie
ENV KERNEL_VERSION  3.14.2

ADD https://www.kernel.org/pub/linux/kernel/v3.x/linux-$KERNEL_VERSION.tar.xz /
RUN tar Jxf /linux-$KERNEL_VERSION.tar.xz
WORKDIR /linux-$KERNEL_VERSION

would be nice.

@SvenDowideit SvenDowideit changed the title Dockerfile ENV $VARIABLES not expanded for WORKDIR Feature request: expand Dockerfile ENV $VARIABLES in WORKDIR May 2, 2014
@thaJeztah
Member

OFF TOPIC @SvenDowideit, just out of interest: I'm a bit confused by your example. Docker containers use the host kernel, so what does your example do?

@rogaha
Contributor

rogaha commented May 9, 2014

@tianon +1 for the consistency between ADD and WORKDIR

@brianclements

Ran into this again today. I have a universally available container to pool all my logs into so that every app runs with --volumes-from my log container. But in order for it to work, I need to access the already universal and unique $HOSTNAME var that each container is launched with.

# Because I can't pass $HOSTNAME to supervisord for its logs, I need
# to somehow make this directory my PWD
WORKDIR /log/$HOSTNAME # doesn't work! So instead we have to...
# get ready for it....
CMD mkdir -p /log/$HOSTNAME &&\
    echo "/log/$HOSTNAME" >> /tmp/startpath &&\
    cd $(cat /tmp/startpath) &&\
    supervisord -c /config/supervisor/supervisord.conf &&\
    supervisorctl -c /config/supervisor/supervisord.conf
# ugly
# as
# sin

Any word on this feature? As I mentioned before, I can't see how it would be backwards incompatible, at least from the user's perspective. I don't know the code's internals.
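(A less contorted sketch of that workaround, since $HOSTNAME only exists once the container starts: move the logic into a script shipped in the build context. start.sh is an illustrative name, not something from this thread.)

Dockerfile

ADD start.sh /start.sh
CMD ["/bin/sh", "/start.sh"]

start.sh

#!/bin/sh
# $HOSTNAME is set when the container starts, so expand it here
mkdir -p "/log/$HOSTNAME"
cd "/log/$HOSTNAME"
exec supervisord -c /config/supervisor/supervisord.conf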

@creack
Contributor

creack commented May 9, 2014

@brianclements Having $HOSTNAME in WORKDIR won't work. Something we could do for consistency with ADD is resolve the variables "inline", i.e., $HOSTNAME would resolve to whatever $HOSTNAME is at the time you call WORKDIR. If you don't have an ENV HOSTNAME <something> before it, it will not resolve.
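That inline behavior would look like this (a sketch; APP_HOME is an illustrative name):

ENV APP_HOME /srv/app
WORKDIR $APP_HOME/current
# resolves to /srv/app/current at build time; without the prior ENV,
# or for run-time-only variables like $HOSTNAME, the text stays literal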

@brianclements

@creack I think that's a good middle ground for sure. In all but the example I gave above, your solution would have satisfied my need. As a side note: I realized that even if full expansion were available, in my example at least, it would expand to the name of the container that built my Dockerfile, not the one that ends up running it, which is not what I want.

@SvenDowideit
Contributor

@thaJeztah I work on boot2docker amongst other things :)

@thaJeztah
Member

@SvenDowideit I hadn't thought of building 'distros' like that inside a Docker container, but it shows how flexible Docker is! Would actually make a nice blog post?

@mindscratch

+1

@vvalgis

vvalgis commented Jul 14, 2014

➕1️⃣

@llonchj

llonchj commented Jul 21, 2014

+1

It would be nice to be able to build different image versions:

ADD https://www.kernel.org/pub/linux/kernel/v3.x/linux-$KERNEL_VERSION.tar.xz /

emmanuel added a commit to emmanuel/docker-kafka that referenced this issue Jul 21, 2014
@richburroughs

Just ran into this tonight. I'm doing a similar thing to some other folks: trying to set an app version with ENV and then plug that into some install commands.

It would be great to be able to do this.

ENV PE_INSTALL_VERSION 3.3.0
ADD puppet-enterprise-${PE_INSTALL_VERSION}-el-6-x86_64.tar.gz /tmp
WORKDIR /tmp/puppet-enterprise-${PE_INSTALL_VERSION}-el-6-x86_64

The ADD part works fine.

sreinhardt pushed a commit to sreinhardt/Docker-Nagios that referenced this issue Sep 8, 2014
Apparently WORKDIR uses a syscall and does not expand variables, even though other commands do. Hopefully this will change in the future. Everything else can be left variablized.

moby/moby#2637
@tianon
Member

tianon commented Sep 11, 2014

I think this was fixed by the Dockerfile parser rewrite. 😄

@mindscratch

+1

@erikh
Contributor

erikh commented Sep 17, 2014

It was. :)


@erikh
Contributor

erikh commented Sep 17, 2014

Closing this. If you still see the issue in master (or in 1.3), please reopen the issue.

@sjackman

Using $HOME in ENV PATH does not work; for example, ENV PATH $HOME/bin:/usr/bin:/bin. Most oddly, RUN echo $PATH gives the impression it worked and prints /root/bin:/usr/bin:/bin as expected, when in fact the actual contents of PATH reported by docker inspect are $HOME/bin:/usr/bin:/bin, and executables in ~/bin will not be found by the shell.

Dockerfile

FROM ubuntu
ENV PATH $HOME/bin:/usr/bin:/bin
RUN echo $PATH

docker build .

Sending build context to Docker daemon 15.36 kB
Sending build context to Docker daemon 
Step 0 : FROM ubuntu
 ---> eca7633ed783
Step 1 : ENV PATH $HOME/bin:/usr/bin:/bin
 ---> Running in a2fb2bbed404
 ---> 5158ee37ada4
Removing intermediate container a2fb2bbed404
Step 2 : RUN echo $PATH
 ---> Running in 3893ab154307
/root/bin:/usr/bin:/bin
 ---> f470fb06aca4
Removing intermediate container 3893ab154307
Successfully built f470fb06aca4

WAT

❯❯❯ docker inspect f470fb06aca4 |jq '.[0].Config.Env'
[
  "PATH=$HOME/bin:/usr/bin:/bin"
]

Expected

❯❯❯ docker inspect f470fb06aca4 |jq '.[0].Config.Env'
[
  "PATH=/root/bin:/usr/bin:/bin"
]
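Until that's sorted out, a sketch of the workaround (assuming the builder only substitutes variables previously declared with ENV): declare HOME yourself before referencing it, or skip the variable entirely:

FROM ubuntu
ENV HOME /root
ENV PATH $HOME/bin:/usr/bin:/bin
# docker inspect should now report PATH=/root/bin:/usr/bin:/bin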

@tianon
Member

tianon commented Oct 23, 2014

Looks like a really, really solid case for why we shouldn't do our env var substitution in RUN lines. 😉 (i.e., it hides bugs)

@sjackman

There's a strange double expansion going on here, I'm guessing. I think what's happening is that Docker expands RUN echo $PATH to RUN echo $HOME/bin:/usr/bin:/bin, which the shell then expands to echo /root/bin:/usr/bin:/bin. The fix would be for Docker to expand RUN echo $PATH to RUN echo '$HOME/bin:/usr/bin:/bin' (note the single quotes around the expanded argument) to prevent this unexpected second expansion by the shell.

@brianclements

@sjackman double check that you have actually set the initial value of $HOME. By default the variable doesn't exist in the ubuntu image. This might be the source of the behavior you're seeing.

@brianclements

Never mind, it is set by default. I think at some point it wasn't, which is why I remember having to set it manually. Or maybe it was set to / or something. Anyway, @tianon, his env var substitution was using the ENV directive, not RUN; is that not a correct way to do it?

@erikh
Contributor

erikh commented Oct 24, 2014

In 1.3, ENV replacements happen in all statements. This was unfortunately elided from the release notes, so it's creating a lot of confusion right now.

There is an escaping fix coming in 1.3.1 which should resolve your inability to work with them. This was my mistake, sorry.
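Once that lands, a backslash should keep a dollar sign away from the builder's substitution; a sketch of the intended behavior:

ENV A hello
ENV B $A
ENV C \$A
# B is stored as "hello" (substituted); C is stored as the literal "$A" (escaped)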


@rorysavage77

It would be nice if there were a better way to handle variables in all cases. Feature request, please?

@erikh
Contributor

erikh commented Oct 30, 2014

1.3.1 should have the ENV fixes most people want out of this thread. It should be released today (if not already).


@stongo

stongo commented Nov 26, 2014

I'm still a little confused about this issue. I'm trying to create a deploy script with a build worker in mind. I was hoping I could pass environment variables to the Dockerfile from docker build.

Dockerfile:

FROM debian:wheezy
ENV DEPLOY_BRANCH $BRANCH
RUN echo "$DEPLOY_BRANCH"

Build command:

BRANCH=test docker build -t test/test .

Result:

Step 2 : RUN echo "$DEPLOY_BRANCH"
 ---> Running in 70a1c682e83a
$BRANCH

Is there any way to do this using Docker 1.3.2?
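(For later readers: build-time parameters for exactly this were eventually added in Docker 1.9 as ARG / --build-arg, long after this exchange; a minimal sketch:

FROM debian:wheezy
ARG BRANCH=master
ENV DEPLOY_BRANCH $BRANCH
RUN echo "$DEPLOY_BRANCH"

built with docker build --build-arg BRANCH=test -t test/test .)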

@brianclements

@stongo Double check that it actually works IN your container by touching a file or something and checking afterward. I'm pretty sure it doesn't visually expand the values in the log output, but it expands them as expected when running the commands.

@erikh
Contributor

erikh commented Nov 26, 2014

Hi Marcus,

What you’re seeing is intended behavior. You cannot supply external environment variables to a docker build for a variety of technical and design reasons that I just don’t have the time to get into right now, sorry.

However, what you can do to handle this at runtime is create a boot.sh that starts your app with your environment variable, and stuff that into CMD. Then you can do this:

docker run -e BRANCH=production -it myimage

Does that help?

-Erik


@stongo

stongo commented Nov 27, 2014

@erikh I am definitely very familiar with passing runtime environment variables to containers, but it doesn't really help in my use case.
I'm trying to create a node.js application build worker that essentially builds and packages an application branch after a successful CI test run, for example.
As it stands now, the application Dockerfile fetches the code and runs npm install to install all dependencies. I'm going for a "build once, deploy many times" type of architecture.
Having to use the environment flag in docker run would mean the application code would be fetched and npm install run every time docker run is called, which kind of defeats the purpose of "build once, deploy many times."
I can definitely think of many use cases for having some sort of variable substitution in Dockerfiles.
In the meantime I'll obviously have to re-think how the build worker works :)

@zoechi
Contributor

zoechi commented Nov 27, 2014

@stongo using sed/awk/... to replace a string in the Dockerfile before docker build could work.
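Concretely, that could look like this (a sketch; Dockerfile.template and the @BRANCH@ token are illustrative names):

sed "s/@BRANCH@/$BRANCH/g" Dockerfile.template > Dockerfile
docker build -t test/test .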

@brianclements

@stongo One workaround I've used is to have a file in my context called "build-env". I source it and run my desired command in the same RUN step. For example:

build-env

VERSION=stable

Dockerfile

FROM radial/axle-base:latest
ADD build-env /build-env
RUN source build-env && mkdir /$VERSION
RUN ls /

I've needed to do something like this before, and I liked the idea of a designated drop-in file for specifying my build envs, which is easier to override via the command line IMHO than a series of sed/awk commands. It's hackish, but I don't have to edit my Dockerfile, which feels very wrong to do.

@stongo

stongo commented Nov 27, 2014

@brianclements that seems like the least smelly way to do it - Thanks for the tip!

@erikh
Contributor

erikh commented Nov 27, 2014

You should be able to populate the image with all copies of your code, not just one environment, then use the -e flag to determine which codebase to use.

The problem with your approach is that we can’t guarantee the image will be built the same every time, which breaks our model of how the builder works. This will never be a supported use case, so I suggested another method. I hope this helps.

-Erik
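A sketch of what that layout might look like (boot.sh and the branches/ directory are assumptions, not from this thread):

Dockerfile

ADD branches /srv/branches
ADD boot.sh /boot.sh
CMD ["/bin/sh", "/boot.sh"]

boot.sh

#!/bin/sh
# select the codebase at run time: docker run -e BRANCH=production myimage
exec node "/srv/branches/${BRANCH:-master}/server.js"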


@zoechi
Contributor

zoechi commented Nov 28, 2014

@erikh I just started working with Docker recently. Are the reasons for this decision documented somewhere, or is there a thread where this was discussed?

To some extent this seems reasonable, but what I've spent the most time on since I started working with Docker is working around intentional limitations.
What's the purpose of a repeatable build when I'm not able to build what I need in the first place?
In the long run I see only a few ways this can work out: someone creates a fork of Docker with fewer restrictions, or users start collecting and sharing workarounds more efficiently so that not every user has to waste so much time figuring out how to get what they need.

@stongo

stongo commented Dec 2, 2014

@zoechi @erikh I think this thread is a worthwhile conversation. If people are working around and hacking together solutions to bypass imposed limitations, it could be argued that facilitating inconsistent builds as a non-default option in Docker is at least worth discussing.

@zoechi
Contributor

zoechi commented Dec 2, 2014

@stongo As mentioned, I'm quite new to Docker, and there's a lot going on all around it, but I see people struggling often enough with missing flexibility when building images and running containers to recognize this is not a minority.
I did quite a bit of research to learn how to use Docker and have read quite a lot already, but all the explanation I've found so far is "security" and "repeatable builds"; I haven't found extensive discussions of the advantages or disadvantages, just that nobody is interested in allowing more flexibility.
I appreciate a lot that the people working on Docker care about security, and I understand that repeatable builds are important in some scenarios, but I also think that most of the use cases where people struggle with the limitations are valid.
Ignoring how users struggle invites alternatives, and I guess Rocket is not the last initiative we'll see (fragmentation will bring a lot of other problems, of course, but I guess that's unavoidable anyway).

I also think there should be a setting that lifts the restrictions (for example, a directive in the Dockerfile). Public registries, container-running environments, docker pull (by default), and so on should be free to reject images whose Dockerfile contains such a directive.

@stongo

stongo commented Dec 2, 2014

@zoechi +1
I must agree there are many valid use cases where more flexibility is necessary. As long as a user understands the risks of certain directives or options, and they are not default behavior, it's hard to justify not allowing such flexibility IMHO.

@brianclements

hi @erikh, I'm really not trolling; I honestly want to know why you think my method above can't be considered "reproducible", especially if "build-env" is packaged along with my source code in my repo. Conceptually, it's specifying a value in a file and uploading it into the container at build time; it affects the end result, which is captured as a Docker image with a tag. How is that any different from changing a configuration file for my binary and then rebuilding?

I would argue, actually, that this is more of a "templating" strategy: instead of editing my Dockerfile manually, or forking and modifying it, I do it with a drop-in file at build time. What is the effective difference between three different Dockerfiles, each with a manually selected $VERSION variable, and one Dockerfile that uses my hack above to produce three different image results? I always thought the reproducibility concern was about things like SSHing into a running container and tinkering while it's running, rather than about how the image was built.

@erikh
Contributor

erikh commented Dec 2, 2014

Hey folks, this has deviated significantly from "expand environment variables in WORKDIR", which has already been resolved; hence this issue's "closed" status.

If you wish to debate the merits of our current Dockerfile system, or propose an alternative to it, please do so in a new PR or issue. I would also like to remind you to understand how caching and Dockerfiles work in general, via our builder tutorial: https://docs.docker.com/reference/builder/. There are also our forums at http://forums.docker.com and a mailing list that can be used to have this conversation.

As it stands now, however, we're having a discussion that's way off course, largely due to a misconception of how Dockerfiles work, on a ticket that was closed at least a month ago. The people mentioned in the original ticket, plus about 1300 others, get emailed every single time a message is posted here. This is not the place for a tutorial or "Dockerfile tips", nor is it the place to discuss changes to the Dockerfile execution model.

Since this conversation is rapidly devolving into an "I'm going to repeat myself" festival, I'm going to lock it.

@moby moby locked and limited conversation to collaborators Dec 2, 2014