
Proposal: Dockerfile add INCLUDE #735

Closed
dysinger opened this issue May 28, 2013 · 176 comments
Labels
area/builder kind/feature Functionality or other elements that the project doesn't currently have. Features are new and shiny

Comments

@dysinger

No description provided.

@shykes

shykes commented Jun 21, 2013

+1

1 similar comment
@keeb-zz

keeb-zz commented Jun 21, 2013

+1

@ptone

ptone commented Jun 21, 2013

Yes this would be great to see +1

@jpfuentes2

I think this would be a great feature as I want to leverage some of my knowledge/experience with systems like Chef whereby you can compose complex builds using smaller/simpler building blocks.

Does anyone have a way they're implementing this now?

@crosbymichael

Can someone give me a few examples on how they would use this? ping @ptone @keeb @dysinger

@ptone

ptone commented Aug 12, 2013

Let's say I have build-file snippets for different components of a web architecture.

I may want to include 1 or more of them on a single container image - or in general bundle things that always go together.

Simple examples might be nginx and varnish always go on the same container, or redis and pyredis.

I might have a "Scientific Python" list of python packages and linked libs, that I may want to include in other images.

The problem with FROM and base images is that you can't remix things the same way you can with includes. A base image is all or nothing - if you don't want something, your only hope is that you can go back to a 'lower' base image, and re-add stuff manually that you want, skipping the stuff you don't.

It is essentially a case of inheritance vs composition in many ways.

@binocarlos

+1 to this - @crosbymichael @ptone @keeb @dysinger @shykes

I am at this exact point where I have an appserver base image (which is just node.js).

I also have a collection of Dockerfiles representing little bits of services that have deps - so:

ImageMagick:

from appserver
run apt-get install imagemagick

RedisSession:

from appserver
run apt-get install redis-server

To have a container that is both ImageMagick and RedisSession:

from appserver
run apt-get install imagemagick
run apt-get install redis-server

Whereas the following syntax means I can build up a folder of modules and include them by name in the application Dockerfile:

from appserver
include ./modules/imagemagick
include ./modules/redis-server

Now, because Docker is so darn brilliantly good : ) this is currently trivial to do (i.e. read module file and inject here) and so I'm not sure if for me this is a required feature.

However - it would make a user app's Dockerfile (i.e. the last thing in the chain for this scenario) much more about composing modules together than about creating a strict inheritance tree where some combinations are not possible - my 2 cents on what is a joy to work with otherwise :)
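The "read the module file and inject it here" workaround binocarlos mentions can be sketched as a tiny shell preprocessor. This is a sketch with made-up file names, not a Docker feature:

```shell
#!/bin/sh
# Sketch: expand "include <path>" lines in a Dockerfile template by
# splicing in the named file's contents (hypothetical module layout).
set -eu

mkdir -p modules
printf 'run apt-get install imagemagick\n' > modules/imagemagick
printf 'run apt-get install redis-server\n' > modules/redis-server

cat > Dockerfile.tpl <<'EOF'
from appserver
include ./modules/imagemagick
include ./modules/redis-server
EOF

# Copy every line through, except "include <path>" lines, which are
# replaced by the contents of the file they name.
while IFS= read -r line; do
  case "$line" in
    include\ *) cat "${line#include }" ;;
    *)          printf '%s\n' "$line" ;;
  esac
done < Dockerfile.tpl > Dockerfile
```

The generated Dockerfile is exactly the combined snippet shown earlier: the base line followed by both module bodies.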

@Thermionix

+1. It would also be good to be able to reference generic blocks externally (possibly via a GitHub raw link?)

@frankscholten

+1

flavio added a commit to flavio/docker that referenced this issue Oct 15, 2013
Added the 'include' command to dockerfile build as suggested by issue moby#735.
Right now the include command works only with files in the same directory as the main Dockerfile, or with remote ones.
@joelreymont

I'll re-implement this on top of #2266.

@prologic

+1 Turns Docker and the Dockerfile into a rudimentary, portable configuration management system for the construction of portable Docker images :)

@ghost

ghost commented Feb 14, 2014

This would help a lot. +1

@deeky666

+1

@peenuty

peenuty commented Mar 10, 2014

+1

1 similar comment
@newhoggy

+1

@jfinkhaeuser

Not to sound negative here (pun intended), but -1.

I completely get the use cases for an include statement. But then I also get the need for parametrized Dockerfiles, and then for conditionals, etc. Continue down that path, and you'll end up implementing a programming language in Dockerfiles, which may even become Turing complete. The cynicism in that statement is free of charge, by the way ;)

Why not use a preprocessing step instead? You don't even have to program your own, you could use m4 (used in autotools for this purpose) or the C preprocessor (used in IMake, which does a similar job as autotools but is pretty much defunct these days).

Makefile:

Dockerfile: Dockerfile.in *.docker
  cpp -o Dockerfile Dockerfile.in

build: Dockerfile
  docker build -rm -t my/image .

Dockerfile:

FROM ubuntu:latest
MAINTAINER me

#include "imagemagick.docker"
#include "redis.docker"

Run make and it'll re-build the Dockerfile if any input file changed. Run make build and it'll re-build the Dockerfile if any input file changed, and continue to build the image.

Look ma, no code!

@prologic

On Tue, Mar 11, 2014 at 6:45 PM, Jens Finkhaeuser notifications@github.com wrote:

Not to sound negative here (pun intended), but -1.

I completely get the use cases for an include statement. But then I also
get the need for parametrized Dockerfiles, and then for conditionals, etc.
Continue down that path, and you'll end up implementing a programming
language in Dockerfiles, which may even become turing complete. The
cynicism in that statement is free of charge, by the way ;)

Why not use a preprocessing step instead? You don't even have to program
your own, you could use m4 (used in autotools for this purpose) or the C
preprocessor (used in IMake, which does a similar job as autotools but is
pretty much defunct these days).

I'm in agreement with this. Having toyed with the design and implementation of a few languages myself over the years, turning Dockerfile(s) into a "scripting" language, even if it's not Turing complete, sounds like something Docker should not do.

As Jens clearly points out there are better more appropriate tools for this
job.

cheers
James

James Mills / prologic

E: prologic@shortcircuit.net.au
W: prologic.shortcircuit.net.au

@hunterloftis

+1

The slippery-slope argument for a Turing-complete scripting language in Dockerfiles seems a bit extreme. INCLUDE (or FROM ./relative/path) just lets you create a common base image locally so you can reference a file system (for example, in your app's repository) instead of relying on the Docker registry for what should be a self-contained app definition.

@prologic

prologic commented Apr 9, 2014

On Wed, Apr 9, 2014 at 12:15 PM, Hunter Loftis notifications@github.com wrote:

The slippery-slope argument for a turing-complete scripting language in
Dockerfiles seems a bit extreme. INCLUDE (or FROM ./relative/path) just
lets you create a common base image locally so you can reference a file
system (for example, in your app's repository) instead of relying on the
Docker registry for what should be a self-contained app definition.

I don't agree with this. The very notion of referencing and building a base image is already there, and it only accesses the public registry (or a private one, if you're so inclined) if you don't have said image.

I'm still -1 on this -- it adds more complexity for little gain. I'd rather see Docker pick up some YAML-style configuration format for "configuring one or more containers" à la fig et al.

cheers
James

James Mills / prologic

E: prologic@shortcircuit.net.au
W: prologic.shortcircuit.net.au

@jfinkhaeuser

The slippery-slope argument stems from experience with a bunch of other DSLs. There's a general trend for DSLs to become Turing complete over time.

The include statement in itself presents little danger here, but consider that in almost every language, include or import statements are linked to the concept of an include path, quite often set via an environment variable.

There's a reason for that: having include statements means you can collect building blocks into reusable libraries, and having reusable libraries means you'll want to use the same building blocks in multiple Dockerfiles. The best way to do that is to be agnostic to the actual file location in your include statement, but instead have a "canonical" name for a building block, which is usually a file path relative to some externally provided include path.

So what's wrong with that? Nothing, except (and I'm quoting you here):

(...) instead of relying on the Docker registry for what should be a self-contained app definition.

I agree. A Dockerfile should be a self-contained app definition. Once you include stuff from any location that's shared between projects, though, you have anything but that - the needs of one project will lead to modifications of the shared file and those may not reflect the needs of the other project any longer. Worse, any kind of traceability - this version of the Dockerfile should build the same image again - is completely gone.

Besides, once you have re-usable building blocks, parametrizing them is the obvious next step. That's how libraries work.

@hunterloftis

Then follow a common, well-understood example that has certainly not become turing complete:

https://developer.mozilla.org/en-US/docs/Web/CSS/@import

@ryedog

ryedog commented Apr 14, 2014

+1 on include (or even variables that can be declared on the command line on build would be helpful)

@ChristianKniep

+1 on include as well

I have to deal with one site where I have to set http_proxy and one without.

@donn

donn commented May 18, 2022

It's now becoming even more of a problem given that the docker-container driver of buildx cannot access the local image cache, meaning a temporary common image has to be pushed to Docker Hub to be used in a later build. I urge the maintainers to reconsider.

@mccolljr

mccolljr commented May 18, 2022

It has been almost 10 years since this issue was originally opened.
It has seen overwhelming support from users in the issue thread.
It seems that the only "real" counterpoint is that... this might eventually lead to more features that would make the Dockerfile description language Turing complete? It also appears that the idea of FROM ./relative/path has been rejected?

Am I understanding the state of affairs correctly? If so, this is both surprising and disappointing.

@endrift

endrift commented May 18, 2022

It's quite clear that upstream doesn't care at all, despite the outpouring of public requests. The only thing at this point that surprises me is that they haven't made their disdain clear by locking the issue.

@perllaghu

It's quite clear that upstream doesn't care at all, despite the outpouring of public requests. The only thing at this point that surprises me is that they haven't made their disdain clear by locking the issue.

.... or, perhaps, they're waiting for someone else to do some coding & create a Pull Request with [some part of] this solved?

@endrift

endrift commented May 19, 2022

I don't think anyone's gonna give that much free labor to an open source project that's commercially backed already.

@vlad-ivanov-name

I don't think anyone's gonna give that much free labor to an open source project that's commercially backed already.

The labor spent on implementing this is nothing compared to potential payoff :) But when it comes to open source projects, very often it's not a question of implementing some feature, it's a question of getting it reviewed and merged. The design and the implementation strategy should be agreed upon; criteria for merging should be defined before someone can start implementing, otherwise it will be wasted effort.

That said, I think some bigger software shops could benefit from this feature even if it's not merged upstream by maintaining a fork.

@IamTheCarl

Not to sound negative here (pun intended), but -1.

I completely get the use cases for an include statement. But then I also get the need for parametrized Dockerfiles, and then for conditionals, etc. Continue down that path, and you'll end up implementing a programming language in Dockerfiles, which may even become Turing complete. The cynicism in that statement is free of charge, by the way ;)

Why not use a preprocessing step instead? You don't even have to program your own, you could use m4 (used in autotools for this purpose) or the C preprocessor (used in IMake, which does a similar job as autotools but is pretty much defunct these days).

Makefile:

Dockerfile: Dockerfile.in *.docker
  cpp -o Dockerfile Dockerfile.in

build: Dockerfile
  docker build -rm -t my/image .

Dockerfile:

FROM ubuntu:latest
MAINTAINER me

#include "imagemagick.docker"
#include "redis.docker"

Run make and it'll re-build the Dockerfile if any input file changed. Run make build and it'll re-build the Dockerfile if any input file changed, and continue to build the image.

Look ma, no code!

I got desperate and decided to give this a try, and then quickly realized that cpp can't tell the difference between a C preprocessor directive and a Docker comment.

Has anyone actually tried this method and gotten it to work?

@tanzislam

tanzislam commented Jun 25, 2022

@IamTheCarl You need to use C-style comments (/* ... */) instead. The Buildah project does it, and I've also done it at work (with a wrapper makefile to run the C preprocessor before running docker compose build, etc.).

@IamTheCarl

That's just bonkers... I didn't know Docker supported C-style comments. Or does cpp just strip those out before they can be a problem?

Well, I tried out edrevo's tool, which was not only easier to set up than expected (you don't need to install anything on the host) but also gave me all the benefits I was hoping to find.

@endrift

endrift commented Jun 26, 2022

I believe CPP strips them out.

@vovech

vovech commented Sep 23, 2022

Please don't use edrevo's. It's not maintained and doesn't work with BuildKit (which will become the default soon):

Parallel builds are completely messed up (including docker compose build):
edrevo/dockerfile-plus#13

So far I haven't found any acceptable solution.

@sffc

sffc commented Sep 28, 2022

If you are using Podman, according to the podman-build man page, you can build from a Containerfile.in that gets preprocessed into a Containerfile complete with include statements, presumably with inspiration from how a Makefile.in gets preprocessed into a Makefile.

@c33s

c33s commented Dec 15, 2022

working example:

Containerfile.in:

FROM alpine:latest

#include "Containerfile.defaults"

RUN ls -lsa

Containerfile.defaults:

RUN echo "hello world from included file"

Build with:

podman build -f Containerfile.in .

@amohar

amohar commented Jan 22, 2023

I think the reasons for not implementing this feature are completely bonkers. I understand that having functionality like INCLUDE can lead to bad coding practices, as @jfinkhaeuser mentioned (like Dockerfile not being self-contained), but then we should simply throw out computing altogether, as there is not a single programming language or scripting language that doesn't introduce the possibility of bad practices. Not having the functionality implemented is not the way to mitigate them.

I, for instance, am having an issue where I want to have many (a few dozen) different Dockerfiles for many different containers, but each starts with the same base installation procedure. Then I need the possibility to do additional env setups for each one. And I need this to work locally and in the cloud. You can't convince me that having such a setup separately for each container is a good practice because it is not; the potential for human error while updating the base part is huge.

So now I'm left with having to implement one of the hacks. I either have to have a bunch of repositories that cost money for a bunch of base images that I'm not actually using or figure out a hack that will create and drop them during the build process. Or I have to figure out a 3rd-party way to preprocess them, which is a pain in the ass doing for both local and cloud. And in the end, I'm ending up with a hack instead of an accepted practice for doing an obviously often-seen scenario.

My opinion is that the reason for not implementing this is simply the team's unwillingness to do it, rather than some logical reasoning about why it would be a bad idea; if there were such reasoning, there would be an official explanation. And if this is not the case, as somebody said, this is both surprising and disappointing.

@jfinkhaeuser

It's kind of amusing that I'm being quoted here in 2023. I believe my argument is still sound today, but it's not as if I care enough.

Trust me on one thing, though: nobody listens to me. I doubt my argument is the reason this is still not done a decade later.

@creative-resort

It looks like an equivalent of this functionality is now provided by Docker "Bake" and BuildKit, outside of Dockerfiles.
https://www.docker.com/blog/dockerfiles-now-support-multiple-build-contexts/
(Create Build Pipelines by Linking bake Targets)

Reference:
https://docs.docker.com/build/bake/build-contexts/#using-a-result-of-one-target-as-a-base-image-in-another-target

@gkpln3

gkpln3 commented May 6, 2023

I've encountered the same thing with so many projects I ended up creating my own Dockerfile generation framework to provide a solution for this problem.

It's called Sand and can be found here: https://github.com/gkpln3/Sand

You can try it

pip3 install docker-sand

Star the repo if it has helped 😀

@rod-dot-codes

I've encountered the same thing with so many projects I ended up creating my own Dockerfile generation framework to provide a solution for this problem.

It's called Sand and can be found here: https://github.com/gkpln3/Sand

You can try it

pip3 install docker-sand

Star the repo if it has helped 😀

Thanks for the tool, but I hope ten years of subscribing to this issue will not end up among my top 10 biggest disappointments in life. At the moment it's probably around 30th, but the fact that I'm forgetting some of my other disappointments is pushing it further up the list.

I really don't think it's adequate to have to work around this, especially if Podman implements the ability to include files. I mean, CI pipelines are complicated enough; why not allow us to source different parts of a multi-stage Dockerfile from separate files, even in the same folder?

By chance, this issue has also served me well as a periodic Github health-check for my non-work Github email subscriptions - the flurry every 3 months or so has provided ample shaking of head time. Third place medal feels like the right emoji here.🥉

@black-snow

Didn't go through the whole thing but let me quickly add my current use case:

I have one Dockerfile to build a DB image with a reasonable health check frequency for production.
For my testcontainers I want a more aggressive probing, though, to wait for the health check to be green and run the tests as quickly as possible.

Now I don't want to copy-paste the rest of the build file and maintain two of them; I just want a different healthcheck in the latter. I also don't want to rely on any existing image (multi-stage).

@devthejo

devthejo commented Sep 2, 2023

Hello folks, to address this need, and more generally the need for factoring out common Dockerfile pieces, I developed this BuildKit custom-syntax frontend plugin: devthefuture/dockerfile-x

Just one line at the top of the file, # syntax = devthefuture/dockerfile-x, and that's it!

# syntax = devthefuture/dockerfile-x

FROM ./base/dockerfile

COPY --from=./build/dockerfile#build-stage /app /app

INCLUDE ./other/dockerfile

@rishiraj88

Wow! @devthejo


@devthejo

devthejo commented Sep 2, 2023

@devthejo the link shows 404 for me - perhaps the repo is private?

@vlad-ivanov-name Nope, but the org was; good catch. It's fixed now, thanks.

@vlad-ivanov-name

vlad-ivanov-name commented Sep 2, 2023

Thanks for posting your work!

To me a major problem with those alternative frontends is that as far as I understand, there is no way to "layer" them on top of the official docker/dockerfile. So all of the additional features of the official frontend would be unavailable.

@devthejo

devthejo commented Sep 2, 2023

Thanks for posting your work!

To me a major problem with those alternative frontends is that as far as I understand, there is no way to "layer" them on top of the official docker/dockerfile. So all of the additional features of the official frontend would be unavailable.

Here is how I developed this tool: the Node.js part is responsible for compiling the custom-syntax Dockerfile to a standard Dockerfile, as a superset; then the BuildKit frontend service translates this final standard Dockerfile to LLB (with minimal code, relying on official BuildKit packages).
So if new features are developed in docker/dockerfile (which doesn't happen so often, IMHO), there are two ways to address this. The first is to upgrade the package in the Go part of dockerfile-x (as I was saying, there isn't much code, so the maintenance cost seems very light to me).
The second is not to use the custom syntax as a frontend but to precompile the Dockerfile-X to a standard Dockerfile using only the Node.js part of the project, which is a ready-to-use CLI (I will add it to the documentation); you can run it via npx dockerfile-x (the command line is documented via --help). If new features are added to docker/dockerfile as new keywords, etc., they will be supported without any change to this lib. You do have to precompile your Dockerfile, so you can think of it as a template engine specially designed for Dockerfiles ;-)
