This page explains how to configure your DNS Pod(s) and customize the DNS resolution process in your cluster.
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds.
Your cluster must be running the CoreDNS add-on.
Your Kubernetes server must be at or later than version v1.12.
To check the version, enter `kubectl version`.
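Before continuing, you can confirm that the cluster DNS add-on is up. A quick check, assuming the conventional `k8s-app=kube-dns` label that the CoreDNS add-on applies to its Pods:

```shell
# List the DNS Pods in the kube-system namespace.
# The CoreDNS Pods carry the k8s-app=kube-dns label for
# compatibility with the legacy kube-dns add-on.
kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
```

You should see one or more CoreDNS Pods in the `Running` state.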
DNS is a built-in Kubernetes service launched automatically using the addon manager cluster add-on.

Note: The CoreDNS Service is named `kube-dns` in the `metadata.name` field. The intent is to ensure greater interoperability with workloads that relied on the legacy `kube-dns` Service name to resolve addresses internal to the cluster. Using a Service named `kube-dns` abstracts away the implementation detail of which DNS provider is running behind that common name.

If you are running CoreDNS as a Deployment, it will typically be exposed as a Kubernetes Service with a static IP address. The kubelet passes DNS resolver information to each container with the `--cluster-dns=<dns-service-ip>` flag.
DNS names also need domains. You configure the local domain in the kubelet with the flag `--cluster-domain=<default-local-domain>`.
The DNS server supports forward lookups (A and AAAA records), port lookups (SRV records), reverse IP address lookups (PTR records), and more. For more information, see DNS for Services and Pods.
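As a sketch of these lookups in action, you can run a short-lived Pod and query the cluster DNS directly; the Pod name and busybox image tag here are illustrative:

```shell
# Forward (A/AAAA) lookup for the kube-dns Service itself,
# resolved through the cluster DNS from inside a Pod.
kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never \
  -- nslookup kube-dns.kube-system.svc.cluster.local
```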
If a Pod's `dnsPolicy` is set to `default`, it inherits the name resolution configuration from the node that the Pod runs on. The Pod's DNS resolution should behave the same as the node's. But see Known issues.

If you don't want this, or if you want a different DNS config for Pods, you can use the kubelet's `--resolv-conf` flag. Set this flag to `""` to prevent Pods from inheriting DNS. Set it to a valid file path to specify a file other than `/etc/resolv.conf` for DNS inheritance.
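If your kubelets are configured through a KubeletConfiguration file rather than command-line flags, the equivalent fields are `clusterDNS`, `clusterDomain`, and `resolvConf`. A minimal sketch, with illustrative values:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Equivalent of --cluster-dns; usually the kube-dns Service IP.
clusterDNS:
  - "10.96.0.10"
# Equivalent of --cluster-domain.
clusterDomain: "cluster.local"
# Equivalent of --resolv-conf; the file Pods inherit DNS from.
resolvConf: "/etc/resolv.conf"
```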
CoreDNS is a general-purpose authoritative DNS server that can serve as cluster DNS, complying with the DNS specifications.
CoreDNS is a DNS server that is modular and pluggable, with plugins adding new functionalities. The CoreDNS server can be configured by maintaining a Corefile, which is the CoreDNS configuration file. As a cluster administrator, you can modify the ConfigMap for the CoreDNS Corefile to change how DNS service discovery behaves for that cluster.
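For example, you can open the ConfigMap for in-place editing:

```shell
# Edit the CoreDNS Corefile; the reload plugin picks up the change
# automatically, typically within a couple of minutes.
kubectl -n kube-system edit configmap coredns
```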
In Kubernetes, CoreDNS is installed with the following default Corefile configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```
The Corefile configuration includes the following plugins of CoreDNS:

- errors: Errors are logged to stdout.
- health: Health of CoreDNS is reported to http://localhost:8080/health. In this extended syntax `lameduck` will make the process unhealthy, then wait for 5 seconds before the process is shut down.
- ready: An HTTP endpoint on port 8181 will return 200 OK when all plugins that are able to signal readiness have done so.
- kubernetes: CoreDNS will reply to DNS queries based on the IPs of the Services and Pods.
  - `ttl` allows you to set a custom TTL for responses. The default is 5 seconds. The minimum TTL allowed is 0 seconds, and the maximum is capped at 3600 seconds. Setting TTL to 0 will prevent records from being cached.
  - The `pods insecure` option is provided for backward compatibility with `kube-dns`.
  - You can use the `pods verified` option, which returns an A record only if there exists a Pod in the same namespace with a matching IP.
  - The `pods disabled` option can be used if you don't use Pod records.
- prometheus: Metrics of CoreDNS are available at http://localhost:9153/metrics in the Prometheus format (also known as OpenMetrics).
- forward: Any queries that are not within the Kubernetes cluster domain are forwarded to predefined resolvers (/etc/resolv.conf).
- cache: This enables a frontend cache.
- loop: Detects simple forwarding loops and halts the CoreDNS process if a loop is found.
- reload: Allows automatic reload of a changed Corefile. After you edit the ConfigMap configuration, allow two minutes for your changes to take effect.
- loadbalance: This is a round-robin DNS load balancer that randomizes the order of A, AAAA, and MX records in the answer.

You can modify the default CoreDNS behavior by modifying the ConfigMap.
CoreDNS has the ability to configure stub-domains and upstream nameservers using the forward plugin.
Suppose a cluster operator has a Consul domain server located at "10.150.0.1", and all Consul names have the suffix ".consul.local". To configure it in CoreDNS, the cluster administrator creates the following stanza in the CoreDNS ConfigMap.
```
consul.local:53 {
    errors
    cache 30
    forward . 10.150.0.1
}
```
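With the stanza in place, a name under the stub domain can be tested from inside the cluster; the hostname `myapp.consul.local` and the busybox image tag below are purely illustrative:

```shell
# Resolve a Consul name through CoreDNS; queries for
# *.consul.local are forwarded to 10.150.0.1.
kubectl run consul-test --image=busybox:1.36 --rm -it --restart=Never \
  -- nslookup myapp.consul.local
```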
To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the `forward` to the nameserver instead of `/etc/resolv.conf`:

```
forward . 172.16.0.1
```
The final ConfigMap along with the default `Corefile` configuration looks like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . 172.16.0.1
        cache 30
        loop
        reload
        loadbalance
    }
    consul.local:53 {
        errors
        cache 30
        forward . 10.150.0.1
    }
```
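If you keep this manifest in a file (the filename below is an assumption), the change can be applied declaratively and then verified:

```shell
# Apply the updated ConfigMap, then print the active Corefile
# to confirm the new stanzas are in place.
kubectl apply -f coredns-configmap.yaml
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
```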