
Proposal: docker embedded DNS server #17195

Closed
phemmer opened this issue Oct 20, 2015 · 53 comments
Labels
area/networking kind/feature

Comments

@phemmer
Contributor

phemmer commented Oct 20, 2015

There are numerous open issues in regards to docker and DNS handling within containers: (#17190 #16619 #15978 #14627 #15819 and likely many others which I was fuzzy on)

One solution which I think would solve all these issues is for docker to act as a DNS server. It would answer lookup requests for linked containers, and when a request isn't for a linked container, it would forward it upstream (to the host's name servers).

The /etc/hosts file inside the container would then be static, containing only the container itself.
We could also not touch /etc/hosts at all, and leave the container's entry to DNS. This would allow image builds to manipulate the file and persist the changes.

For performance, it would probably be good if docker cached the upstream DNS records. Records come back with a TTL, so docker should cache the record until this TTL expires.
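As a very rough sketch of that answer-or-forward behaviour (illustrative only: this assumes the third-party github.com/miekg/dns package, a hard-coded container table, and a placeholder upstream address; the TTL cache is only noted in a comment):

```go
package main

import (
	"log"
	"net"
	"strings"

	"github.com/miekg/dns"
)

// containers maps container names to IPs; in docker this would come from the
// daemon's own state, the map here is just a placeholder.
var containers = map[string]string{"foo": "172.17.0.2"}

func handle(w dns.ResponseWriter, r *dns.Msg) {
	q := r.Question[0] // assumes a single question, as resolvers normally send
	name := strings.TrimSuffix(strings.ToLower(q.Name), ".")

	// Answer directly for known (linked) containers.
	if ip, ok := containers[name]; ok && q.Qtype == dns.TypeA {
		m := new(dns.Msg)
		m.SetReply(r)
		m.Answer = append(m.Answer, &dns.A{
			Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0},
			A:   net.ParseIP(ip),
		})
		w.WriteMsg(m)
		return
	}

	// Everything else goes to the host's name server; a cache keyed on each
	// record's TTL could sit in front of this call.
	resp, err := dns.Exchange(r, "8.8.8.8:53") // placeholder upstream
	if err != nil {
		m := new(dns.Msg)
		m.SetRcode(r, dns.RcodeServerFailure)
		w.WriteMsg(m)
		return
	}
	w.WriteMsg(resp)
}

func main() {
	dns.HandleFunc(".", handle)
	log.Fatal((&dns.Server{Addr: ":5353", Net: "udp"}).ListenAndServe())
}
```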

@thaJeztah thaJeztah added the kind/feature and area/networking labels Oct 20, 2015
@thaJeztah
Member

/cc @mrjana @mavenugo

@thockin
Contributor

thockin commented Oct 20, 2015

I agree DNS would be better than mucking (non-atomically!!) with /etc/hosts. However, if we go this route, please make it optional. Some of us already have DNS solutions at a higher-level and don't want this granularity.

@phemmer
Contributor Author

phemmer commented Oct 20, 2015

@thockin what do you mean by optional? For docker to continue doing what it does today (modifying /etc/hosts)?
Also, if you already have a DNS solution, wouldn't it be acceptable to forward DNS queries for which docker doesn't have an answer on to the server providing the custom solution?

The only scenario I can think of where modifying /etc/hosts works and a DNS server doesn't is if the container is modifying /etc/resolv.conf, and isn't using the nameserver provided by docker.

@thockin
Contributor

thockin commented Oct 20, 2015

I mean I don't want docker participating in name resolution, either by DNS or by managing /etc/hosts - AT ALL. The name that docker has for a container is NOT the name I want reflected in DNS. The stuff that is happening to /etc/hosts right now is terrifying - I am pretty sure it cannot possibly be done safely (atomically).

I understand how people who are not using higher-level management systems might think built-in DNS is useful. But the higher levels already handle things like DNS with broader scope and visibility. It needs to be "batteries included, but removable"

@phemmer
Contributor Author

phemmer commented Oct 20, 2015

Can you point me at an example of this "higher-level management system"? It would be nice if we had something that worked with custom DNS solutions without having to provide another docker run flag, or at least a single-purpose one.

What if we had a flag to control the domain docker responds to queries on? E.g. --dnsdomain docker.local. If you have a container named foo, docker would only respond to foo.docker.local. We could then provide an /etc/resolv.conf with search docker.local and ndots:0, so that attempting to resolve just foo would pass through docker to the upstream DNS server, but when no result is found, the container would append the search domain and try again, to which docker would answer.
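For illustration, the /etc/resolv.conf handed to a container under that scheme would look roughly like this (the nameserver address is just a placeholder for wherever docker's resolver listens):

```
nameserver 172.17.0.1
search docker.local
options ndots:0
```

With ndots:0, a bare foo is first tried as-is (and forwarded upstream), and only on failure does the resolver append the docker.local search suffix and ask again.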

@thockin
Contributor

thockin commented Oct 20, 2015

@phemmer Kubernetes runs a real DNS server and populates it with the Kubernetes names of all the running containers. The name that is given to a container is NOT the name exposed in DNS. The container name includes things like a prefix to delineate user-defined containers from system-defined containers, a checksum, etc. On top of that our cluster space is chopped up into sub-domains (namespaces) wherein names are unique, but they are not unique across namespaces - I can have a 'foo' container in the 'a' subdomain and in the 'b' subdomain. There are more issues with docker doing it automatically, but I won't bore you. The point is that it is a layering violation and we need it off.

I think "should manage DNS" should be a docker daemon flag (and should default to false).

@pidster
Contributor

pidster commented Oct 21, 2015

@phemmer Weave has DNS too and would run into similar issues as k8s

(Disclaimer, I work for Weave).

@phemmer
Contributor Author

phemmer commented Oct 21, 2015

As far as I can see, the --dnsdomain proposal would solve both use cases, without having to introduce another code path we have to follow (by disabling the feature entirely in certain cases).

@thockin
Contributor

thockin commented Oct 21, 2015

@phemmer without a real spec to riff on, it's hard to know exactly what you mean :)

I am saying that I want Docker doing exactly nothing in this space. It's a waste of cycles and RAM for Docker to muss with DNS (for our use case).

@mavenugo
Contributor

@phemmer 👍 on the proposal.
But there are lots of details to discuss to make it a reality. I can share more inputs on this after the 1.9 release.

@mfischer-zd

@thockin I don't see an inherent conflict here, depending on the options available to the user.

I think it would be very useful in the default case to run a host DNS server exposed only to the containers, have it publish local container names, and configure /etc/resolv.conf in each container to point to the server.

However, it would also be useful to do the following to accommodate bespoke configurations:

  1. Make the /etc/resolv.conf configuration in the container optional via docker run. (The resolver IP should be exposed to the container via an environment variable, though, so that people can run their own resolvers in their containers and use the Docker-provided one as a forwarder as needed.)
  2. Make the domain search order customizable via docker run.
  3. Make the publication of the container name into the host DNS service optional via docker run.
  4. Allow the user to customize the host DNS server's forwarder via docker daemon.

@thaJeztah
Member

@mfischer-zd docker run actually has --dns and --dns-search options.
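For example (the image, address, and search domain here are arbitrary):

```
docker run --dns 10.0.0.2 --dns-search internal.example busybox nslookup web
```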

@mfischer-zd

@thaJeztah Thanks, updated comment

@phemmer
Contributor Author

phemmer commented Oct 23, 2015

I've actually started working on this on my own. The version I'm working on maintains the same functionality present today (including --dns, --dns-search, --add-host, etc), with no additional features. (it also lives entirely in the libnetwork package, nothing at all in docker. so kinda the wrong repo :-P)
I might create a PR for it, but dunno. It depends on how pretty I get the code (I'm a harsh critic of my own work).

@thockin
Contributor

thockin commented Oct 24, 2015

This sounds like it will be useful for a lot of people. But I still ask that it can be completely turned off. It's a 100% waste on our machines, unless it is a totally generically programmable DNS server (which I don't think is right for Docker).

@mariusGundersen
Contributor

Wouldn't it make sense for service discovery to be plugin-based, like networking and volumes are today? That way you could set up per container what kind of service discovery driver you want (/etc/hosts, host dns, kubernetes dns, custom solution, etc.).

@phemmer
Contributor Author

phemmer commented Oct 24, 2015

@mariusGundersen We already have pluggable service discovery via the --dns=... option.

@amoghe
Contributor

amoghe commented Oct 26, 2015

Some datapoints on this topic:

I've been using containers for testing and development starting from the 1.3 release

  1. I started using tonistiigi/dnsdock along with docker for our discovery needs (on a single host) because --dns and --link aren't sufficient for us due to constraints on the ordering of container launch. dnsdock works by listening for new containers being brought up by docker. This means needing to launch dnsdock first, with the docker socket mapped into it (not terrible, since it is an upfront operation; see the example command after this list). It's also quite configurable (hackable, really) because it exposes DNS records based on the container name and some special env vars in the container. But I cannot emphasize enough how well this stupidly simple approach has worked for us.
  2. There are cases where we need this functionality completely turned off. One such case is multiple users bringing up their dev containers on a shared machine. In this case there is no need for these containers to know about each other.
  3. An earlier comment (from @thockin) about "... unless it's a generically programmable DNS server" is one that resonates with me. For us, Case 2 is more common than Case 1, and hence we explicitly wire in the DNS container when deployments need it. Other times we're happy with isolated containers.
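The setup from point 1 looks roughly like this (the exact flags, port mapping, and bridge address are assumptions that vary by dnsdock version and host setup):

```
docker run -d --name dnsdock \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 172.17.42.1:53:53/udp \
  tonistiigi/dnsdock
```

Containers are then pointed at that address for DNS (e.g. via --dns).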

@thockin
Contributor

thockin commented Oct 26, 2015

To be specific, what I meant with generically programmable was things like

  • can I add SRV records? Can I add multi-record A responses? Can I control PTR? TXT?
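For concreteness, these are the kinds of records meant, in ordinary zone-file notation (names, addresses, and values here are made up for illustration):

```
_http._tcp.web.docker.local. 60 IN SRV 10 50 8080 web1.docker.local.
web.docker.local.            60 IN A   172.17.0.2
web.docker.local.            60 IN A   172.17.0.3
2.0.17.172.in-addr.arpa.     60 IN PTR web1.docker.local.
web.docker.local.            60 IN TXT "role=frontend"
```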

@mfischer-zd

@mavenugo @thaJeztah I'd be happy to try to contribute some resources to this, but would need to know where to begin. Feel free to drop me a line.

@thaJeztah
Member

@mfischer-zd thanks for offering; reading back here, @phemmer was already working on a PoC, so I suggest helping out once that PR is created (we try to avoid having multiple "competing" PRs).

@mavenugo I know this has been talked about, so I'm not sure if work was already done on this internally?

@mfischer-zd

@phemmer let me know how I can help.

@phemmer
Contributor Author

phemmer commented Oct 26, 2015

PoC exists here which I submitted to the docker-dev mailing list soliciting feedback on the integration points. Been pretty quiet though.

@thaJeztah
Member

@phemmer apologies for that, it's been busy times, but I'll be my usual PITA and try to get some maintainers to have a look at your proposal.

(thanks a lot for that PoC, please don't see the "quiet" as it not being appreciated)

@phemmer
Contributor Author

phemmer commented Oct 26, 2015

No worries, it's only been 1 working day. I've been around here enough to know it can usually take quite a few :-)

@tiborvass
Contributor

@phemmer ❤️

@ibuildthecloud
Contributor

I like the idea but making it optional is absolutely required (as mentioned by @thockin). It would make sense to me that per network you could set the default value of whether you want this Docker embedded DNS, and then overriding it at a container level would also be nice (basically at this point it's --dns). The "default docker0 network" can have this turned on by default so that users use and get used to the DNS behaviour.

@mfischer-zd

A DNS server pointed to by nothing costs little. I don't think it's necessary to have a master shut-off switch.

@thockin
Contributor

thockin commented Oct 31, 2015

Sure, the Docker daemon is already the single largest non-user process (in RSS), why not bloat it more?

@mfischer-zd

@thockin Please check your sarcasm at the door. Your opinions and experiences are valuable, but your attitude is not.

As for the substantive concerns:

First, there aren't really ways to have your cake and eat it too with respect to adding a DNS service into the agent. AFAIK Go doesn't support shared libraries, so in order to gain text pages back by disabling the DNS service, either the DNS service has to be moved into a separate process (this would be a first for agent mode, and would add a significant amount of complexity to it), or it would have to be disabled at compile time.

Second, as a practical matter, Docker memory consumption tends to be noise compared to the actual applications run on the system. (My current 1.8.3 RSS is approximately 45MB.) Modern servers are configured standard with at least 1GB RAM (t2.micro) in virtual instances; bare metal to 256GB and beyond. My experience tells me that most users will not miss a few megabytes.

People running in extremely memory-constrained environments are more than welcome to run custom Docker builds if they desire to leave out features they don't need. If DNS service can be excluded via a compile-time switch, that would make it very easy to do.

@thockin
Contributor

thockin commented Nov 1, 2015

First, there aren't really ways to have your cake and eat it too with respect to adding a DNS service into the agent. AFAIK Go doesn't support shared libraries, so in order to gain text pages back by disabling the DNS service, either the DNS service has to be moved into a separate process (this would be a first for agent mode, and would add a significant amount of complexity to it), or it would have to be disabled at compile time.

First, code pages that are not executed are not faulted in, so that's a red herring. Second, code pages that ARE faulted in are reclaimable, so that ALSO is a red herring. My point wasn't about code pages but everything else that has to happen to run DNS. Consuming port 53, for example. Also CPU and memory.

Running in-process vs out of process is a more relevant discussion.
Composability is powerful and processes are a really nice boundary.

Second, as a practical matter, Docker memory consumption tends to be noise compared to the actual applications run on the system. (My current 1.8.3 RSS is approximately 45MB.) Modern servers are configured standard with at least 1GB RAM (t2.micro) in virtual instances; bare metal to 256GB and beyond. My experience tells me that most users will not miss an extra few megabytes.

45 MB of 1GB is 4.3%. That's "just a few megabytes" but it's actually a substantial fraction of the machine. Add to that kernel overhead and other things and you'll be pushing 10% of your machine just for the privilege of walking in the door. This way of thinking is dangerous. Infrastructure devs need to be cognizant of the cost of their solutions.

@mfischer-zd

First, code pages that are not executed are not faulted in, so that's a red herring. Second, code pages that ARE faulted in are reclaimable, so that ALSO is a red herring. My point wasn't about code pages but everything else that has to happen to run DNS. Consuming port 53, for example. Also CPU and memory.

You're right about text pages being demand-faulted in, so that's actually a point in my favor. That leaves the following, assuming the server isn't actually being addressed by any clients:

  • Heap consumption - possibly some fixed allocation, but demand-based allocation should be minimal
  • Opening port 53 on the vifs assigned to the containers - what's the problem here, exactly?
  • CPU - shouldn't be implicated

I suggest we actually instrument the impact before debating this further.

@mrjana
Contributor

mrjana commented Nov 2, 2015

First, code pages that are not executed are not faulted in, so that's a red herring. Second, code pages that ARE faulted in are reclaimable, so that ALSO is a red herring. My point wasn't about code pages but everything else that has to happen to run DNS.

Every page is reclaimable unless you have locked all your pages into memory using mlock. Trying to understand the memory usage of a process using RSS (as compared to the RSS of other processes) is a tricky matter. A bigger RSS for one process when compared to other processes just means that that process has been more active than the others. If you consider the whole kernel as a user process for a minute and monitor the RSS of the kernel, it is more than likely to be higher than any other process, since it will be the most active, as everybody requests something or other from the kernel. My point is that in many instances docker is the most active process and its RSS is likely to be higher. But that doesn't mean there is no room for improvement in docker memory usage. Every component including the kernel has room for improvement in terms of memory usage.

Consuming port 53, for example. Also CPU and memory.

Why should it consume port 53 in the host namespace? That's a question the design should answer, but it's not a given. In fact, for achieving SD within the network context it may be necessary to open and bind that socket inside the container namespace.

Running in-process vs out of process is a more relevant discussion.
Composability is powerful and processes are a really nice boundary.

With the static linking of go programs, let's not assume that splitting things into many processes is automatically going to save us memory. Remember the kernel has no idea how to share the exact same text pages which are duplicated in every go program, because as far as the kernel is concerned they are different text pages. Unless KSM is enabled, which can get super heavyweight on CPU in its own right.

45 MB of 1GB is 4.3%. That's "just a few megabytes" but it's actually a substantial fraction of the machine. Add to that kernel overhead and other things and you'll be pushing 10% of your machine just for the privilege of walking in the door. This way of thinking is dangerous. Infrastructure devs need to be cognizant of the cost of their solutions.

In my 2GB linux machine, the kernel consumes about 368M of physical memory, and this is after dropping all caches. This is approximately 17% of my total physical memory. So what should we do?

@thockin
Contributor

thockin commented Nov 2, 2015

You're right about text pages being demand-faulted in, so that's actually a point in my favor. That leaves the following, assuming the server isn't actually being addressed by any clients:

Heap consumption - possibly some fixed allocation, but demand-based allocation should be minimal

Since so many people have DNS solutions, anything > 0 is a waste, but I'm fine to try it and see.

Opening port 53 on the vif assigned to the containers - what's the problem here, exactly?

Good point - it can be virtual.

CPU - shouldn't be implicated

See above - anything > 0 is pointless for many people with existing solutions.

@thockin
Contributor

thockin commented Nov 2, 2015

Every page is reclaimable unless you have locked in all your pages into memory using mlock.

Every CLEAN page is reclaimable. Some dirty pages are reclaimable (through writeback, very expensive by comparison). Some dirty pages are anonymous and are just not.

Running in-process vs out of process is a more relevant discussion.
Composability is powerful and processes are a really nice boundary.

With the static linking of go programs let's not assume that splitting things into many processes is automatically going to save us memory. Remember the kernel has no idea how to share the same exact text pages which are duplicated in every go program because as far as kernel is concerned they are different text pages. Unless KSM is enabled which can get super heavy weight on CPU by it's own right.

It means the people who want the functionality spend money and the people who don't don't.

In my 2GB linux machine, the kernel consumes about 368M of physical memory, and this is after dropping all caches. This is approximately 17% of my total physical memory. So what should we do?

No offense, but I have a LOT more faith in the kernel's memory frugality than I do in Docker's or in Kubernetes's (and Go's).

@aidanhs
Contributor

aidanhs commented Nov 16, 2015

A couple of weeks ago I submitted a PR to libnetwork to permit enabling and disabling service discovery on networks - moby/libnetwork#722. It needs design review to decide whether service discovery toggling should 1) be for the whole network or 2) just affect containers (aka services) started after the toggle point.

If the decision is 1 (which I'm leaning towards), you can just keep track of how many networks have service discovery enabled and, if there aren't any, shut down the dns server.
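A minimal sketch of that bookkeeping, assuming option 1 (the type and method names below are made up for illustration; this is not libnetwork's actual API):

```go
package main

import "sync"

// embeddedDNS stands in for the daemon's DNS server; Start/Stop are stubs.
type embeddedDNS struct{ running bool }

func (d *embeddedDNS) Start() { d.running = true }
func (d *embeddedDNS) Stop()  { d.running = false }

// sdTracker counts networks with service discovery enabled and keeps the
// DNS server running only while that count is non-zero.
type sdTracker struct {
	mu      sync.Mutex
	enabled int
	dns     embeddedDNS
}

func (t *sdTracker) networkEnabledSD() {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.enabled == 0 {
		t.dns.Start()
	}
	t.enabled++
}

func (t *sdTracker) networkDisabledSD() {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.enabled > 0 {
		t.enabled--
	}
	if t.enabled == 0 {
		t.dns.Stop()
	}
}

func main() {
	t := &sdTracker{}
	t.networkEnabledSD()  // first SD-enabled network starts the server
	t.networkDisabledSD() // last one shuts it down
}
```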

There are going to be people who will want the resolution done via /etc/hosts rather than dns, so it'll need to be configurable regardless.

@mfischer-zd

There are going to be people who will want the resolution done via /etc/hosts rather than dns, so it'll need to be configurable regardless

But it doesn't work reliably. It seems to me that it's better to drop support for it than tell people not to use it when it gets corrupted.

@thockin
Contributor

thockin commented Nov 16, 2015

Agree. We should not offer people solutions that are broken-by-design if we have better alternatives.

@gesellix
Contributor

I would prefer to make it an optional feature; it's ok to disable it by default. All the alternatives rely on a special service running somewhere, while /etc/hosts is easily consumable by the most basic tools. With the Docker 1.9 overlay network it has become even more convenient.
I'm not running containers in a highly dynamic environment, so the risk of a broken /etc/hosts file is not so important to me. Maybe I'm the only one with such a setup, but I hope I'm not ;-)
Especially keeping in mind that every host resolution needs to be prepared for retries (since either /etc/hosts may be corrupt or the external dns server may be unavailable), the issues of using /etc/hosts are not so different from those of a dns server.
That said: I also want some built-in dns service, with awareness of containers being added or removed, and optionally configurable to always listen on the same ip address.

@mfischer-zd

Especially keeping in mind that every host resolution needs to be prepared for retries (since either /etc/hosts may be corrupt or the external dns server may be unavailable), the issues of using /etc/hosts are not so different from those of a dns server.

Once /etc/hosts is corrupted, it's likely to be corrupted for the lifetime of the container, since updates are edge-triggered. The temporary failure of a DNS server, on the other hand, could be due to any number of issues, and the probability of restoration during the lifetime of a container is much higher.

I don't follow your consumption argument -- is there any evidence that users are regularly consuming /etc/hosts manually? My experience is that most applications are just using the standard resolver libraries, in which case they don't really care what the data source is as long as the nsswitch.conf file is correct, which it usually is by default.

@mavenugo
Contributor

Yes. In order to address all the issues mentioned in the proposal, the clean solution is to cut over seamlessly to DNS-based discovery without impacting the applications. As @mfischer-zd pointed out, in most cases applications should not be tied to /etc/hosts based resolution (@gesellix please correct me if this is an incorrect assumption that impacts your use-case).

We have been discussing and designing a proper /etc/hosts replacement with DNS targeting 1.10. Also, thanks to @phemmer's initial proposal (and initial implementation), the discussion is happening in docker-dev. @sanimej is working on a proposal with libnetwork design goals for this requirement. @mfischer-zd please participate in the discussion & also please reach out in #docker-network channel.

@gesellix
Contributor

is there any evidence that users are regularly consuming /etc/hosts manually?

Currently I do use /etc/hosts, but I would love not to have to read the file manually.

In my use case it's most important to have a stable configuration, i.e. I either need to rely on /etc/hosts being updated by Docker, or on configuring a static dns ip address, where the dns server should be updated by Docker as easily as possible (which is what the proposals are addressing?).

Now you'll tell me that there exist solutions which listen to Docker events and update a dns server, but when running the dns server in Docker its ip address might change, so in my use case I still prefer the /etc/hosts file. My problem is not that I prefer /etc/hosts, but the need for a stable/static container ip address.

Maybe I'm missing something, so if you have any suggestions, I'd be glad. Otherwise I'd prefer if the proposals kept such a use case in mind - I'll try to participate in the discussions.

@aidanhs
Contributor

aidanhs commented Nov 17, 2015

Agree. We should not offer people solutions that are broken-by-design if we have better alternatives.

I don't agree that /etc/hosts is broken by design - it's broken by implementation.

On reflection, if the move to DNS comes with the ability to easily query services (either via AXFR or docker service ls $NETWORK returning an IP) then actually I'm probably ok with getting rid of /etc/hosts.

I'm interested in how you're planning to allow different backend DNS servers (due to the --dns argument to docker run) for individual containers in a network, but I suppose that's 'just' an implementation detail.

@dhananjaysathe

@mavenugo taking off from the discussion on Twitter, and after having a read through the discussion here https://groups.google.com/forum/#!topic/docker-dev/WXkMiPJqh7I, I have a suggestion to make. I may be wrong and may be missing something, so please forgive my errors and correct me if I'm wrong.

From what I understand, libnetwork uses a distributed KV store to store node-IP information or host records.
Without running into the questions about which port is used, or resorting to other hacks to make a "dynamic" dns server, we could directly use this data to write a resolver as a libc NSS module (e.g. https://github.com/ryandoyle/nss-etcd, perhaps drawing on https://github.com/phemmer/libnetwork/compare/master...dns by @phemmer).

This way we don't need to mess with either resolv.conf or /etc/hosts. Not just that: with this data in the resolver, the NS query overhead & latency will probably be lower. Most of the backing KV stores are fairly robust and reliable, eliminating the potential SPOF that would be incurred by an explicit instance running a dns server. Additionally, there is even potential to modify this (perhaps out of the scope of this discussion) to provide basic load balancing like Yahoo's l3dsr.

Also, as far as the user is concerned, they have pristine hosts and resolv files. We could potentially ensure that nsswitch.conf adds this mechanism into its resolver list, which very few people modify anyway; see the sketch below.
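To make that concrete, the change would just be an extra source on the hosts line of nsswitch.conf (the module name docker_libnetwork below is hypothetical, only to show where it would slot in):

```
# /etc/nsswitch.conf (hosts line only)
hosts: files docker_libnetwork dns
```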
Let me know what you folks think.

@aidanhs
Contributor

aidanhs commented Nov 19, 2015

The problem with that is that go binaries (a popular choice for creating stripped down docker containers) don't look at nsswitch.conf because they're typically statically compiled and can't load the shared libraries.
Even if go binaries could load shared libs, nsswitch.conf wouldn't work in single binary containers anyway because libc isn't present (and won't work in alpine by default because it uses musl, which doesn't respect nsswitch).

Concrete example of nsswitch.conf being ignored by a go binary we're all familiar with: #1715

I do like the idea. But the current 'docker best practices' and ecosystem of containers means that this probably isn't even an 80% solution.

@sanimej

sanimej commented Nov 19, 2015

@dhananjaysathe That's an interesting idea. In addition to the issue @aidanhs mentioned, it also requires changing nsswitch.conf. Ideally we want a solution that replaces /etc/hosts based discovery as transparently as possible.

In the docker-dev mail thread I mentioned that libnetwork has all the required host/ip info without any driver dependency. Some of it could be from the distributed KV store (may not be etcd). But for local scope networks the distributed KV store is not used. Not sure if that will work for nss-etcd, but I have to take a better look at its implementation.

@jethroborsje

I am not sure if Docker should have this DNS server ability on board, because I agree there are lots of use cases where you don't want Docker messing with DNS stuff at all. Furthermore, I think it is pretty easy to fix this problem by using: https://github.com/gliderlabs/resolvable. I use it in my local dev environment and it works straight out of the box.

@sanimej

sanimej commented Nov 23, 2015

@phemmer moby/libnetwork#767 explains the proposal for embedded DNS server with the design we discussed last week in docker-dev mailer. PTAL

@sanimej

sanimej commented Nov 23, 2015

@jethroborsje Not sure what you mean by Docker 'messing with DNS stuff'. Take a look at the proposal which has a different design. moby/libnetwork#767

As explained there, the docker embedded DNS server is not a general purpose DNS server and will not interfere with other external name servers.

The problem with resolvable is that all the existing deployments have to do things differently for their existing containers to work, which we want to avoid. In its current form it doesn't look like it will work for multi-host networks.

@erikh
Contributor

erikh commented Jan 15, 2016

https://github.com/docker/dnsserver.

I'm a visionary. :P

@bboreham
Contributor

The current implementation does not seem to have a CLI- or API-exposed way to disable it. Please add this, at least to provide a get-out option if any bugs surface.

@Johnxiaoyi

The same question as above: do we have some flag or API option to disable the embedded dns? Thanks very much.

@thaJeztah
Member

I think this can be closed now, because the embedded DNS server was implemented
