Switch DNS addons from skydns to kubedns

Unified the skydns templates using a simple underscore-based template and
added sed transform scripts to generate the Salt and sed YAML
templates.

Moved all content out of cluster/addons/dns into build/kube-dns and
saltbase/salt/kube-dns
pull/6/head
Girish Kalele 2016-05-27 12:05:24 -07:00
parent 5762ebfc63
commit 4c1047d359
31 changed files with 450 additions and 1736 deletions


@ -933,6 +933,9 @@ function kube::release::package_kube_manifests_tarball() {
local objects
objects=$(cd "${KUBE_ROOT}/cluster/addons" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
tar c -C "${KUBE_ROOT}/cluster/addons" ${objects} | tar x -C "${dst_dir}"
objects=$(cd "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns" && find . \( -name \*.yaml -or -name \*.yaml.in -or -name \*.json \) | grep -v demo)
mkdir -p "${dst_dir}/dns"
tar c -C "${KUBE_ROOT}/cluster/saltbase/salt/kube-dns" ${objects} | tar x -C "${dst_dir}/dns"
# This is for coreos only. ContainerVM, GCI, and Trusty do not use it.
cp -r "${KUBE_ROOT}/cluster/gce/coreos/kube-manifests"/* "${release_stage}/"


@ -4,3 +4,6 @@ Tim Hockin <thockin@google.com>
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/dns/MAINTAINERS.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/kube-dns/MAINTAINERS.md?pixel)]()


@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Makefile for the Docker image gcr.io/google_containers/kube2sky
# Makefile for the Docker image gcr.io/google_containers/kubedns-<ARCH>
# MAINTAINER: Tim Hockin <thockin@google.com>
# If you update this image please bump the tag value before pushing.
#
@ -22,7 +22,7 @@
# Default registry, arch and tag. This can be overwritten by arguments to make
PLATFORM?=linux
ARCH?=amd64
TAG?=1.1
TAG?=1.2
REGISTRY?=gcr.io/google_containers
GOLANG_VERSION=1.6


@ -148,8 +148,8 @@ set:
```
Second, you need to start the DNS server ReplicationController and Service. See
the example files ([ReplicationController](skydns-rc.yaml.in) and
[Service](skydns-svc.yaml.in)), but keep in mind that these are templated for
the example files ([ReplicationController](../../cluster/saltbase/salt/skydns-rc.yaml.in) and
[Service](../../cluster/saltbase/salt/skydns-svc.yaml.in)), but keep in mind that these are templated for
Salt. You will need to replace the `{{ <param> }}` blocks with your own values
for the config variables mentioned above. Other than the templating, these are
normal kubernetes objects, and can be instantiated with `kubectl create`.
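For a quick manual sketch (the pillar values shown here are examples, not
release defaults), the placeholders can be filled in with sed before creating
the objects:

```console
$ sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g" \
      -e "s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
      skydns-rc.yaml.in > skydns-rc.yaml
$ sed -e "s/{{ pillar\['dns_server'\] }}/10.0.0.10/g" \
      skydns-svc.yaml.in > skydns-svc.yaml
$ kubectl create -f skydns-rc.yaml -f skydns-svc.yaml
```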
@ -217,14 +217,14 @@ If you see that, DNS is working correctly.
## How does it work?
SkyDNS depends on etcd for what to serve, but it doesn't really need all of
what etcd offers (at least not in the way we use it). For simplicity, we run
etcd and SkyDNS together in a pod, and we do not try to link etcd instances
across replicas. A helper container called [kube2sky](kube2sky/) also runs in
the pod and acts as a bridge between Kubernetes and SkyDNS. It finds the
Kubernetes master through the `kubernetes` service (via environment
variables), pulls service info from the master, and writes that to etcd for
SkyDNS to find.
## Inheriting DNS from the node
When running a pod, kubelet will prepend the cluster DNS server and search
@ -252,11 +252,14 @@ some of those settings will be lost. As a partial workaround, the node can run
entries. You can also use kubelet's `--resolv-conf` flag.
## Making changes
Please observe the release process for making changes to the `kube2sky`
image that is documented in [RELEASES.md](kube2sky/RELEASES.md). Any significant changes
to the YAML template for `kube-dns` should result in a bump of the version number
for the `kube-dns` replication controller as well as the `version` label. This
will permit a rolling update of `kube-dns`.
The container containing the kube-dns binary needs to be built for every
architecture and pushed to the registry manually whenever the kube-dns binary
has code changes. Every significant change to the functionality should result
in a bump of the TAG in the Makefile.
Any significant changes to the YAML template for `kube-dns` should result in a bump
of the version number for the `kube-dns` replication controller as well as the
`version` label. This will permit a rolling update of `kube-dns`.
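As an illustrative sketch (the names here are hypothetical, and the updated
template must carry the bumped version in its name and labels for the update
to be accepted):

```console
$ kubectl rolling-update kube-dns-v11 --namespace=kube-system -f skydns-rc.yaml
```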
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/dns/README.md?pixel)]()
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/build/kube-dns/README.md?pixel)]()


@ -1 +0,0 @@
kube2sky


@ -1,40 +0,0 @@
## Version 1.15 (Apr 7 2016 Lucas Käldström <lucas.kaldstrom@hotmail.co.uk>)
- No code changes since 1.14
- Built in a dockerized env instead of using go on host to make it more reliable. `1.15` was built with `go1.6`
- Made it possible to compile this image for multiple architectures, so the main naming of this image is
now `gcr.io/google_containers/kube2sky-arch:tag`. `arch` may be one of `amd64`, `arm`, `arm64` or `ppc64le`.
`gcr.io/google_containers/kube2sky:tag` is still pushed for backward compatibility
## Version 1.14 (Mar 4 2016 Abhishek Shah <abshah@google.com>)
- If Endpoint has hostnames-map annotation (endpoints.net.beta.kubernetes.io/hostnames-map),
the hostnames supplied via the annotation will be used to generate A Records for Headless Service.
## Version 1.13 (Mar 1 2016 Prashanth.B <beeps@google.com>)
- Synchronously wait for the Kubernetes service at startup.
- Add a SIGTERM/SIGINT handler.
## Version 1.12 (Dec 15 2015 Abhishek Shah <abshah@google.com>)
- Gave pods their own cache store. (034ecbd)
- Allow pods to have dns. (717660a)
## Version 1.10 (Jun 19 2015 Tim Hockin <thockin@google.com>)
- Fall back on service account tokens if no other auth is specified.
## Version 1.9 (May 28 2015 Abhishek Shah <abshah@google.com>)
- Add SRV support.
## Version 1.8 (May 28 2015 Vishnu Kannan <vishnuk@google.com>)
- Avoid making connections to the master insecure by default
- Let users override the master URL in kubeconfig via a flag
## Version 1.7 (May 25 2015 Vishnu Kannan <vishnuk@google.com>)
- Adding support for headless services. All pods backing a headless service are
addressable via DNS RR.
## Version 1.4 (Fri May 15 2015 Tim Hockin <thockin@google.com>)
- First Changelog entry


@ -1,86 +0,0 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Makefile for the Docker image gcr.io/google_containers/kube2sky
# MAINTAINER: Tim Hockin <thockin@google.com>
# If you update this image please bump the tag value before pushing.
#
# Usage:
# [ARCH=amd64] [TAG=1.14] [REGISTRY=gcr.io/google_containers] [BASEIMAGE=busybox] make (build|push)
# Default registry, arch and tag. This can be overwritten by arguments to make
ARCH?=amd64
TAG?=1.15
REGISTRY?=gcr.io/google_containers
GOLANG_VERSION=1.6
GOARM=6
KUBE_ROOT=$(shell pwd)/../../../..
TEMP_DIR:=$(shell mktemp -d)
ifeq ($(ARCH),amd64)
BASEIMAGE?=busybox
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armel/busybox
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/busybox
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/busybox
endif
all: container
kube2sky: kube2sky.go
# Only build kube2sky. This requires go and godep in PATH
CGO_ENABLED=0 GOARCH=$(ARCH) GOARM=$(GOARM) go build -a -installsuffix cgo --ldflags '-w' ./kube2sky.go
container:
# Copy the content in this dir to the temp dir
cp ./* $(TEMP_DIR)
# Build the binary dockerized. Mount the whole Kubernetes source first, and then the temporary dir to kube2sky source.
# It runs "make kube2sky" inside the docker container, and the binary is put in the temporary dir.
docker run -it \
-v $(KUBE_ROOT):/go/src/k8s.io/kubernetes \
-v $(TEMP_DIR):/go/src/k8s.io/kubernetes/cluster/addons/dns/kube2sky \
golang:$(GOLANG_VERSION) /bin/bash -c \
"go get github.com/tools/godep \
&& make -C /go/src/k8s.io/kubernetes/cluster/addons/dns/kube2sky kube2sky ARCH=$(ARCH)"
# Replace BASEIMAGE with the real base image
cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile
# And build the image
docker build -t $(REGISTRY)/kube2sky-$(ARCH):$(TAG) $(TEMP_DIR)
push: container
gcloud docker push $(REGISTRY)/kube2sky-$(ARCH):$(TAG)
ifeq ($(ARCH),amd64)
# Backward compatibility. TODO: deprecate this image tag
docker tag -f $(REGISTRY)/kube2sky-$(ARCH):$(TAG) $(REGISTRY)/kube2sky:$(TAG)
gcloud docker push $(REGISTRY)/kube2sky:$(TAG)
endif
clean:
rm -f kube2sky
test: clean
go test -v --vmodule=*=4
.PHONY: all kube2sky container push clean test


@ -1,37 +0,0 @@
# kube2sky
==============
A bridge between Kubernetes and SkyDNS. This will watch the kubernetes API for
changes in Services and then publish those changes to SkyDNS through etcd.
For now, this is expected to be run in a pod alongside the etcd and SkyDNS
containers.
## Namespaces
Kubernetes namespaces become another level of the DNS hierarchy. See the
description of `--domain` below.
## Flags
`--domain`: Set the domain under which all DNS names will be hosted. For
example, if this is set to `kubernetes.io`, then a service named "nifty" in the
"default" namespace would be exposed through DNS as
"nifty.default.svc.kubernetes.io".
`--v`: Set logging level
`--etcd-mutation-timeout`: How long the application will keep retrying an etcd
mutation (insertion or removal of a DNS entry) before giving up and crashing.
`--etcd-server`: The etcd server that is being used by skydns.
`--kube-master-url`: URL of kubernetes master. Required if `--kubecfg_file` is not set.
`--kubecfg-file`: Path to kubecfg file that contains the master URL and tokens to authenticate with the master.
`--log-dir`: If non-empty, write log files in this directory
`--logtostderr`: Logs to stderr instead of files
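As an illustrative example, a typical in-pod invocation combines these flags
like so:

```console
$ /kube2sky --domain=cluster.local --etcd-server=http://127.0.0.1:4001 --v=2
```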
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/dns/kube2sky/README.md?pixel)]()


@ -1,43 +0,0 @@
# Cutting a release
Until we have a proper setup for building this automatically with every binary
release, here are the steps for making a release. We make releases when they
are ready, not on every PR.
1. Build the container for testing: `make container PREFIX=<your-docker-hub> TAG=rc`
2. Manually deploy this to your own cluster by updating the replication
controller and deleting the running pod(s).
3. Verify it works.
4. Update the TAG version in `Makefile` and update the `Changelog`. Update the
`*.yaml.in` to point to the new tag. Send a PR but mark it as "DO NOT MERGE".
5. Once the PR is approved, build and push the container for real for all architectures:
```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google_containers/kube2sky-amd64:TAG
# ---> gcr.io/google_containers/kube2sky:TAG (image with backwards-compatible naming)
$ make push ARCH=arm
# ---> gcr.io/google_containers/kube2sky-arm:TAG
$ make push ARCH=arm64
# ---> gcr.io/google_containers/kube2sky-arm64:TAG
$ make push ARCH=ppc64le
# ---> gcr.io/google_containers/kube2sky-ppc64le:TAG
```
6. Manually deploy this to your own cluster by updating the replication
controller and deleting the running pod(s).
7. Verify it works.
8. Allow the PR to be merged.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/dns/kube2sky/RELEASES.md?pixel)]()


@ -1,697 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// kube2sky is a bridge between Kubernetes and SkyDNS. It watches the
// Kubernetes master for changes in Services and manifests them into etcd for
// SkyDNS to serve as DNS records.
package main
import (
"encoding/json"
"fmt"
"hash/fnv"
"net/http"
"net/url"
"os"
"os/signal"
"strings"
"sync"
"syscall"
"time"
etcd "github.com/coreos/go-etcd/etcd"
"github.com/golang/glog"
skymsg "github.com/skynetservices/skydns/msg"
flag "github.com/spf13/pflag"
kapi "k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/api/endpoints"
"k8s.io/kubernetes/pkg/api/unversioned"
kcache "k8s.io/kubernetes/pkg/client/cache"
"k8s.io/kubernetes/pkg/client/restclient"
kclient "k8s.io/kubernetes/pkg/client/unversioned"
kclientcmd "k8s.io/kubernetes/pkg/client/unversioned/clientcmd"
kframework "k8s.io/kubernetes/pkg/controller/framework"
kselector "k8s.io/kubernetes/pkg/fields"
etcdutil "k8s.io/kubernetes/pkg/storage/etcd/util"
utilflag "k8s.io/kubernetes/pkg/util/flag"
"k8s.io/kubernetes/pkg/util/validation"
"k8s.io/kubernetes/pkg/util/wait"
)
// The name of the "master" Kubernetes Service.
const kubernetesSvcName = "kubernetes"
var (
argDomain = flag.String("domain", "cluster.local", "domain under which to create names")
argEtcdMutationTimeout = flag.Duration("etcd-mutation-timeout", 10*time.Second, "crash after retrying etcd mutation for a specified duration")
argEtcdServer = flag.String("etcd-server", "http://127.0.0.1:4001", "URL to etcd server")
argKubecfgFile = flag.String("kubecfg-file", "", "Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens")
argKubeMasterURL = flag.String("kube-master-url", "", "URL to reach kubernetes master. Env variables in this flag will be expanded.")
healthzPort = flag.Int("healthz-port", 8081, "port on which to serve a kube2sky HTTP readiness probe.")
)
const (
// Maximum number of attempts to connect to etcd server.
maxConnectAttempts = 12
// Resync period for the kube controller loop.
resyncPeriod = 30 * time.Minute
// A subdomain added to the user specified domain for all services.
serviceSubdomain = "svc"
// A subdomain added to the user specified domain for all pods.
podSubdomain = "pod"
)
type etcdClient interface {
Set(path, value string, ttl uint64) (*etcd.Response, error)
RawGet(key string, sort, recursive bool) (*etcd.RawResponse, error)
Delete(path string, recursive bool) (*etcd.Response, error)
}
type nameNamespace struct {
name string
namespace string
}
type kube2sky struct {
// Etcd client.
etcdClient etcdClient
// DNS domain name.
domain string
// Etcd mutation timeout.
etcdMutationTimeout time.Duration
// A cache that contains all the endpoints in the system.
endpointsStore kcache.Store
// A cache that contains all the services in the system.
servicesStore kcache.Store
// A cache that contains all the pods in the system.
podsStore kcache.Store
// Lock for controlling access to headless services.
mlock sync.Mutex
}
// Removes 'subdomain' from etcd.
func (ks *kube2sky) removeDNS(subdomain string) error {
glog.V(2).Infof("Removing %s from DNS", subdomain)
resp, err := ks.etcdClient.RawGet(skymsg.Path(subdomain), false, true)
if err != nil {
return err
}
if resp.StatusCode == http.StatusNotFound {
glog.V(2).Infof("Subdomain %q does not exist in etcd", subdomain)
return nil
}
_, err = ks.etcdClient.Delete(skymsg.Path(subdomain), true)
return err
}
func (ks *kube2sky) writeSkyRecord(subdomain string, data string) error {
// Set with no TTL, and hope that kubernetes events are accurate.
_, err := ks.etcdClient.Set(skymsg.Path(subdomain), data, uint64(0))
return err
}
// Generates skydns records for a headless service.
func (ks *kube2sky) newHeadlessService(subdomain string, service *kapi.Service) error {
// Create an A record for every pod in the service.
// This record must be periodically updated.
// Format is as follows:
// For a service x, with pods a and b create DNS records,
// a.x.ns.domain. and, b.x.ns.domain.
ks.mlock.Lock()
defer ks.mlock.Unlock()
key, err := kcache.MetaNamespaceKeyFunc(service)
if err != nil {
return err
}
e, exists, err := ks.endpointsStore.GetByKey(key)
if err != nil {
return fmt.Errorf("failed to get endpoints object from endpoints store - %v", err)
}
if !exists {
glog.V(1).Infof("Could not find endpoints for service %q in namespace %q. DNS records will be created once endpoints show up.", service.Name, service.Namespace)
return nil
}
if e, ok := e.(*kapi.Endpoints); ok {
return ks.generateRecordsForHeadlessService(subdomain, e, service)
}
return nil
}
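// getSkyMsg builds the skydns record payload for the given ip and port, with
// a fixed priority/weight of 10 and a 30 second TTL.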
func getSkyMsg(ip string, port int) *skymsg.Service {
return &skymsg.Service{
Host: ip,
Port: port,
Priority: 10,
Weight: 10,
Ttl: 30,
}
}
func (ks *kube2sky) generateRecordsForHeadlessService(subdomain string, e *kapi.Endpoints, svc *kapi.Service) error {
// TODO: remove this after v1.4 is released and the old annotations are EOL
podHostnames, err := getPodHostnamesFromAnnotation(e.Annotations)
if err != nil {
return err
}
for idx := range e.Subsets {
for subIdx := range e.Subsets[idx].Addresses {
address := &e.Subsets[idx].Addresses[subIdx]
endpointIP := address.IP
b, err := json.Marshal(getSkyMsg(endpointIP, 0))
if err != nil {
return err
}
recordValue := string(b)
var recordLabel string
if hostLabel, exists := getHostname(address, podHostnames); exists {
recordLabel = hostLabel
} else {
recordLabel = getHash(recordValue)
}
recordKey := buildDNSNameString(subdomain, recordLabel)
glog.V(2).Infof("Setting DNS record: %v -> %q\n", recordKey, recordValue)
if err := ks.writeSkyRecord(recordKey, recordValue); err != nil {
return err
}
for portIdx := range e.Subsets[idx].Ports {
endpointPort := &e.Subsets[idx].Ports[portIdx]
portSegment := buildPortSegmentString(endpointPort.Name, endpointPort.Protocol)
if portSegment != "" {
err := ks.generateSRVRecord(subdomain, portSegment, recordLabel, recordKey, int(endpointPort.Port))
if err != nil {
return err
}
}
}
}
}
return nil
}
func getHostname(address *kapi.EndpointAddress, podHostnames map[string]endpoints.HostRecord) (string, bool) {
if len(address.Hostname) > 0 {
return address.Hostname, true
}
if hostRecord, exists := podHostnames[address.IP]; exists && len(validation.IsDNS1123Label(hostRecord.HostName)) == 0 {
return hostRecord.HostName, true
}
return "", false
}
func getPodHostnamesFromAnnotation(annotations map[string]string) (map[string]endpoints.HostRecord, error) {
hostnames := map[string]endpoints.HostRecord{}
if annotations != nil {
if serializedHostnames, exists := annotations[endpoints.PodHostnamesAnnotation]; exists && len(serializedHostnames) > 0 {
err := json.Unmarshal([]byte(serializedHostnames), &hostnames)
if err != nil {
return nil, err
}
}
}
return hostnames, nil
}
func (ks *kube2sky) getServiceFromEndpoints(e *kapi.Endpoints) (*kapi.Service, error) {
key, err := kcache.MetaNamespaceKeyFunc(e)
if err != nil {
return nil, err
}
obj, exists, err := ks.servicesStore.GetByKey(key)
if err != nil {
return nil, fmt.Errorf("failed to get service object from services store - %v", err)
}
if !exists {
glog.V(1).Infof("could not find service for endpoint %q in namespace %q", e.Name, e.Namespace)
return nil, nil
}
if svc, ok := obj.(*kapi.Service); ok {
return svc, nil
}
return nil, fmt.Errorf("got a non service object in services store %v", obj)
}
func (ks *kube2sky) addDNSUsingEndpoints(subdomain string, e *kapi.Endpoints) error {
ks.mlock.Lock()
defer ks.mlock.Unlock()
svc, err := ks.getServiceFromEndpoints(e)
if err != nil {
return err
}
if svc == nil || kapi.IsServiceIPSet(svc) {
// No headless service found corresponding to endpoints object.
return nil
}
// Remove existing DNS entry.
if err := ks.removeDNS(subdomain); err != nil {
return err
}
return ks.generateRecordsForHeadlessService(subdomain, e, svc)
}
func (ks *kube2sky) handleEndpointAdd(obj interface{}) {
if e, ok := obj.(*kapi.Endpoints); ok {
name := buildDNSNameString(ks.domain, serviceSubdomain, e.Namespace, e.Name)
ks.mutateEtcdOrDie(func() error { return ks.addDNSUsingEndpoints(name, e) })
}
}
func (ks *kube2sky) handlePodCreate(obj interface{}) {
if e, ok := obj.(*kapi.Pod); ok {
// If the pod ip is not yet available, do not attempt to create.
if e.Status.PodIP != "" {
name := buildDNSNameString(ks.domain, podSubdomain, e.Namespace, santizeIP(e.Status.PodIP))
ks.mutateEtcdOrDie(func() error { return ks.generateRecordsForPod(name, e) })
}
}
}
func (ks *kube2sky) handlePodUpdate(old interface{}, new interface{}) {
oldPod, okOld := old.(*kapi.Pod)
newPod, okNew := new.(*kapi.Pod)
// Validate that the objects are good
if okOld && okNew {
if oldPod.Status.PodIP != newPod.Status.PodIP {
ks.handlePodDelete(oldPod)
ks.handlePodCreate(newPod)
}
} else if okNew {
ks.handlePodCreate(newPod)
} else if okOld {
ks.handlePodDelete(oldPod)
}
}
func (ks *kube2sky) handlePodDelete(obj interface{}) {
if e, ok := obj.(*kapi.Pod); ok {
if e.Status.PodIP != "" {
name := buildDNSNameString(ks.domain, podSubdomain, e.Namespace, santizeIP(e.Status.PodIP))
ks.mutateEtcdOrDie(func() error { return ks.removeDNS(name) })
}
}
}
func (ks *kube2sky) generateRecordsForPod(subdomain string, service *kapi.Pod) error {
b, err := json.Marshal(getSkyMsg(service.Status.PodIP, 0))
if err != nil {
return err
}
recordValue := string(b)
recordLabel := getHash(recordValue)
recordKey := buildDNSNameString(subdomain, recordLabel)
glog.V(2).Infof("Setting DNS record: %v -> %q, with recordKey: %v\n", subdomain, recordValue, recordKey)
if err := ks.writeSkyRecord(recordKey, recordValue); err != nil {
return err
}
return nil
}
func (ks *kube2sky) generateRecordsForPortalService(subdomain string, service *kapi.Service) error {
b, err := json.Marshal(getSkyMsg(service.Spec.ClusterIP, 0))
if err != nil {
return err
}
recordValue := string(b)
recordLabel := getHash(recordValue)
recordKey := buildDNSNameString(subdomain, recordLabel)
glog.V(2).Infof("Setting DNS record: %v -> %q, with recordKey: %v\n", subdomain, recordValue, recordKey)
if err := ks.writeSkyRecord(recordKey, recordValue); err != nil {
return err
}
// Generate SRV Records
for i := range service.Spec.Ports {
port := &service.Spec.Ports[i]
portSegment := buildPortSegmentString(port.Name, port.Protocol)
if portSegment != "" {
err = ks.generateSRVRecord(subdomain, portSegment, recordLabel, subdomain, int(port.Port))
if err != nil {
return err
}
}
}
return nil
}
func santizeIP(ip string) string {
return strings.Replace(ip, ".", "-", -1)
}
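// buildPortSegmentString returns the SRV name segment for a named port, e.g.
// ("http", "TCP") yields "_http._tcp"; unnamed ports produce no segment.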
func buildPortSegmentString(portName string, portProtocol kapi.Protocol) string {
if portName == "" {
// we don't create a random name
return ""
}
if portProtocol == "" {
glog.Errorf("Port Protocol not set. port segment string cannot be created.")
return ""
}
return fmt.Sprintf("_%s._%s", portName, strings.ToLower(string(portProtocol)))
}
func (ks *kube2sky) generateSRVRecord(subdomain, portSegment, recordName, cName string, portNumber int) error {
recordKey := buildDNSNameString(subdomain, portSegment, recordName)
srv_rec, err := json.Marshal(getSkyMsg(cName, portNumber))
if err != nil {
return err
}
if err := ks.writeSkyRecord(recordKey, string(srv_rec)); err != nil {
return err
}
return nil
}
func (ks *kube2sky) addDNS(subdomain string, service *kapi.Service) error {
// if ClusterIP is not set, a DNS entry should not be created
if !kapi.IsServiceIPSet(service) {
return ks.newHeadlessService(subdomain, service)
}
if len(service.Spec.Ports) == 0 {
glog.Infof("Unexpected service with no ports, this should not have happened: %v", service)
}
return ks.generateRecordsForPortalService(subdomain, service)
}
// Implements retry logic for arbitrary mutator. Crashes after retrying for
// etcd-mutation-timeout.
func (ks *kube2sky) mutateEtcdOrDie(mutator func() error) {
timeout := time.After(ks.etcdMutationTimeout)
for {
select {
case <-timeout:
glog.Fatalf("Failed to mutate etcd for %v using mutator: %v", ks.etcdMutationTimeout, mutator())
default:
if err := mutator(); err != nil {
delay := 50 * time.Millisecond
glog.V(1).Infof("Failed to mutate etcd using mutator: %v due to: %v. Will retry in: %v", mutator, err, delay)
time.Sleep(delay)
} else {
return
}
}
}
}
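// buildDNSNameString prepends each successive label, so
// buildDNSNameString("cluster.local.", "svc", "ns", "name") returns
// "name.ns.svc.cluster.local.".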
func buildDNSNameString(labels ...string) string {
var res string
for _, label := range labels {
if res == "" {
res = label
} else {
res = fmt.Sprintf("%s.%s", label, res)
}
}
return res
}
// Returns a cache.ListWatch that gets all changes to services.
func createServiceLW(kubeClient *kclient.Client) *kcache.ListWatch {
return kcache.NewListWatchFromClient(kubeClient, "services", kapi.NamespaceAll, kselector.Everything())
}
// Returns a cache.ListWatch that gets all changes to endpoints.
func createEndpointsLW(kubeClient *kclient.Client) *kcache.ListWatch {
return kcache.NewListWatchFromClient(kubeClient, "endpoints", kapi.NamespaceAll, kselector.Everything())
}
// Returns a cache.ListWatch that gets all changes to pods.
func createEndpointsPodLW(kubeClient *kclient.Client) *kcache.ListWatch {
return kcache.NewListWatchFromClient(kubeClient, "pods", kapi.NamespaceAll, kselector.Everything())
}
func (ks *kube2sky) newService(obj interface{}) {
if s, ok := obj.(*kapi.Service); ok {
name := buildDNSNameString(ks.domain, serviceSubdomain, s.Namespace, s.Name)
ks.mutateEtcdOrDie(func() error { return ks.addDNS(name, s) })
}
}
func (ks *kube2sky) removeService(obj interface{}) {
if s, ok := obj.(*kapi.Service); ok {
name := buildDNSNameString(ks.domain, serviceSubdomain, s.Namespace, s.Name)
ks.mutateEtcdOrDie(func() error { return ks.removeDNS(name) })
}
}
func (ks *kube2sky) updateService(oldObj, newObj interface{}) {
// TODO: We shouldn't leave etcd in a state where it doesn't have a
// record for a Service. This removal is needed to completely clean
// the directory of a Service, which has SRV records and A records
// that are hashed according to oldObj. Unfortunately, this is the
// easiest way to purge the directory.
ks.removeService(oldObj)
ks.newService(newObj)
}
func newEtcdClient(etcdServer string) (*etcd.Client, error) {
var (
client *etcd.Client
err error
)
for attempt := 1; attempt <= maxConnectAttempts; attempt++ {
if _, err = etcdutil.GetEtcdVersion(etcdServer); err == nil {
break
}
if attempt == maxConnectAttempts {
break
}
glog.Infof("[Attempt: %d] Attempting access to etcd after 5 second sleep", attempt)
time.Sleep(5 * time.Second)
}
if err != nil {
return nil, fmt.Errorf("failed to connect to etcd server: %v, error: %v", etcdServer, err)
}
glog.Infof("Etcd server found: %v", etcdServer)
// loop until we have > 0 machines && machines[0] != ""
poll, timeout := 1*time.Second, 10*time.Second
if err := wait.Poll(poll, timeout, func() (bool, error) {
if client = etcd.NewClient([]string{etcdServer}); client == nil {
return false, fmt.Errorf("etcd.NewClient returned nil")
}
client.SyncCluster()
machines := client.GetCluster()
if len(machines) == 0 || len(machines[0]) == 0 {
return false, nil
}
return true, nil
}); err != nil {
return nil, fmt.Errorf("Timed out after %s waiting for at least 1 synchronized etcd server in the cluster. Error: %v", timeout, err)
}
return client, nil
}
func expandKubeMasterURL() (string, error) {
parsedURL, err := url.Parse(os.ExpandEnv(*argKubeMasterURL))
if err != nil {
return "", fmt.Errorf("failed to parse --kube-master-url %s - %v", *argKubeMasterURL, err)
}
if parsedURL.Scheme == "" || parsedURL.Host == "" || parsedURL.Host == ":" {
return "", fmt.Errorf("invalid --kube-master-url specified %s", *argKubeMasterURL)
}
return parsedURL.String(), nil
}
// TODO: evaluate using pkg/client/clientcmd
func newKubeClient() (*kclient.Client, error) {
var (
config *restclient.Config
err error
masterURL string
)
// If the user specified --kube-master-url, expand env vars and verify it.
if *argKubeMasterURL != "" {
masterURL, err = expandKubeMasterURL()
if err != nil {
return nil, err
}
}
if masterURL != "" && *argKubecfgFile == "" {
// Only --kube-master-url was provided.
config = &restclient.Config{
Host: masterURL,
ContentConfig: restclient.ContentConfig{GroupVersion: &unversioned.GroupVersion{Version: "v1"}},
}
} else {
// We either have:
// 1) --kube-master-url and --kubecfg-file
// 2) just --kubecfg-file
// 3) neither flag
// In any case, the logic is the same. If (3), this will automatically
// fall back on the service account token.
overrides := &kclientcmd.ConfigOverrides{}
overrides.ClusterInfo.Server = masterURL // might be "", but that is OK
rules := &kclientcmd.ClientConfigLoadingRules{ExplicitPath: *argKubecfgFile} // might be "", but that is OK
if config, err = kclientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig(); err != nil {
return nil, err
}
}
glog.Infof("Using %s for kubernetes master", config.Host)
glog.Infof("Using kubernetes API %v", config.GroupVersion)
return kclient.New(config)
}
func watchForServices(kubeClient *kclient.Client, ks *kube2sky) kcache.Store {
serviceStore, serviceController := kframework.NewInformer(
createServiceLW(kubeClient),
&kapi.Service{},
resyncPeriod,
kframework.ResourceEventHandlerFuncs{
AddFunc: ks.newService,
DeleteFunc: ks.removeService,
UpdateFunc: ks.updateService,
},
)
go serviceController.Run(wait.NeverStop)
return serviceStore
}
func watchEndpoints(kubeClient *kclient.Client, ks *kube2sky) kcache.Store {
eStore, eController := kframework.NewInformer(
createEndpointsLW(kubeClient),
&kapi.Endpoints{},
resyncPeriod,
kframework.ResourceEventHandlerFuncs{
AddFunc: ks.handleEndpointAdd,
UpdateFunc: func(oldObj, newObj interface{}) {
// TODO: Avoid unwanted updates.
ks.handleEndpointAdd(newObj)
},
},
)
go eController.Run(wait.NeverStop)
return eStore
}
func watchPods(kubeClient *kclient.Client, ks *kube2sky) kcache.Store {
eStore, eController := kframework.NewInformer(
createEndpointsPodLW(kubeClient),
&kapi.Pod{},
resyncPeriod,
kframework.ResourceEventHandlerFuncs{
AddFunc: ks.handlePodCreate,
UpdateFunc: func(oldObj, newObj interface{}) {
ks.handlePodUpdate(oldObj, newObj)
},
DeleteFunc: ks.handlePodDelete,
},
)
go eController.Run(wait.NeverStop)
return eStore
}
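// getHash returns the 32-bit FNV-1a hash of text as hex; it serves as a
// stable record label when no pod hostname is available.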
func getHash(text string) string {
h := fnv.New32a()
h.Write([]byte(text))
return fmt.Sprintf("%x", h.Sum32())
}
// waitForKubernetesService waits for the "kubernetes" master service.
// Since the health probe on the kube2sky container is essentially an nslookup
// of this service, we cannot serve any DNS records if it doesn't show up.
// Once the Service is found, we start replying on this container's readiness
// probe endpoint.
func waitForKubernetesService(client *kclient.Client) (svc *kapi.Service) {
name := fmt.Sprintf("%v/%v", kapi.NamespaceDefault, kubernetesSvcName)
glog.Infof("Waiting for service: %v", name)
var err error
servicePollInterval := 1 * time.Second
for {
svc, err = client.Services(kapi.NamespaceDefault).Get(kubernetesSvcName)
if err != nil || svc == nil {
glog.Infof("Ignoring error while waiting for service %v: %v. Sleeping %v before retrying.", name, err, servicePollInterval)
time.Sleep(servicePollInterval)
continue
}
break
}
return
}
// setupSignalHandlers runs a goroutine that waits on SIGINT or SIGTERM and logs it
// before exiting.
func setupSignalHandlers() {
sigChan := make(chan os.Signal)
signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
// This program should always exit gracefully logging that it received
// either a SIGINT or SIGTERM. Since kube2sky is run in a container
// without a liveness probe as part of the kube-dns pod, it shouldn't
// restart unless the pod is deleted. If it restarts without logging
// anything it means something is seriously wrong.
// TODO: Remove once #22290 is fixed.
go func() {
glog.Fatalf("Received signal %s", <-sigChan)
}()
}
// setupHealthzHandlers sets up a readiness and liveness endpoint for kube2sky.
func setupHealthzHandlers(ks *kube2sky) {
http.HandleFunc("/readiness", func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "ok\n")
})
}
func main() {
flag.CommandLine.SetNormalizeFunc(utilflag.WarnWordSepNormalizeFunc)
flag.Parse()
var err error
setupSignalHandlers()
// TODO: Validate input flags.
domain := *argDomain
if !strings.HasSuffix(domain, ".") {
domain = fmt.Sprintf("%s.", domain)
}
ks := kube2sky{
domain: domain,
etcdMutationTimeout: *argEtcdMutationTimeout,
}
if ks.etcdClient, err = newEtcdClient(*argEtcdServer); err != nil {
glog.Fatalf("Failed to create etcd client - %v", err)
}
kubeClient, err := newKubeClient()
if err != nil {
glog.Fatalf("Failed to create a kubernetes client: %v", err)
}
// Wait synchronously for the Kubernetes service and add a DNS record for it.
ks.newService(waitForKubernetesService(kubeClient))
glog.Infof("Successfully added DNS record for Kubernetes service.")
ks.endpointsStore = watchEndpoints(kubeClient, &ks)
ks.servicesStore = watchForServices(kubeClient, &ks)
ks.podsStore = watchPods(kubeClient, &ks)
// We declare kube2sky ready when:
// 1. It has retrieved the Kubernetes master service from the apiserver. If this
// doesn't happen skydns will fail its liveness probe assuming that it can't
// perform any cluster local DNS lookups.
// 2. It has set up the 3 watches above.
// Once ready this container never flips to not-ready.
setupHealthzHandlers(&ks)
glog.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", *healthzPort), nil))
}


@ -1,467 +0,0 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"encoding/json"
"fmt"
"net/http"
"path"
"strings"
"testing"
"time"
"github.com/coreos/go-etcd/etcd"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
kapi "k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/client/cache"
)
type fakeEtcdClient struct {
// TODO: Convert this to real fs to better simulate etcd behavior.
writes map[string]string
}
func (ec *fakeEtcdClient) Set(key, value string, ttl uint64) (*etcd.Response, error) {
ec.writes[key] = value
return nil, nil
}
func (ec *fakeEtcdClient) Delete(key string, recursive bool) (*etcd.Response, error) {
for p := range ec.writes {
if (recursive && strings.HasPrefix(p, key)) || (!recursive && p == key) {
delete(ec.writes, p)
}
}
return nil, nil
}
func (ec *fakeEtcdClient) RawGet(key string, sort, recursive bool) (*etcd.RawResponse, error) {
values := ec.Get(key)
if len(values) == 0 {
return &etcd.RawResponse{StatusCode: http.StatusNotFound}, nil
}
return &etcd.RawResponse{StatusCode: http.StatusOK}, nil
}
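// Get returns the values stored at the shallowest matching path depth under
// key, mimicking a non-recursive etcd directory read.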
func (ec *fakeEtcdClient) Get(key string) []string {
values := make([]string, 0, 10)
minSeparatorCount := 0
key = strings.ToLower(key)
for path := range ec.writes {
if strings.HasPrefix(path, key) {
separatorCount := strings.Count(path, "/")
if minSeparatorCount == 0 || separatorCount < minSeparatorCount {
minSeparatorCount = separatorCount
values = values[:0]
values = append(values, ec.writes[path])
} else if separatorCount == minSeparatorCount {
values = append(values, ec.writes[path])
}
}
}
return values
}
const (
testDomain = "cluster.local."
basePath = "/skydns/local/cluster"
serviceSubDomain = "svc"
podSubDomain = "pod"
)
func newKube2Sky(ec etcdClient) *kube2sky {
return &kube2sky{
etcdClient: ec,
domain: testDomain,
etcdMutationTimeout: time.Second,
endpointsStore: cache.NewStore(cache.MetaNamespaceKeyFunc),
servicesStore: cache.NewStore(cache.MetaNamespaceKeyFunc),
}
}
func getEtcdPathForA(name, namespace, subDomain string) string {
return path.Join(basePath, subDomain, namespace, name)
}
func getEtcdPathForSRV(portName, protocol, name, namespace string) string {
return path.Join(basePath, serviceSubDomain, namespace, name, fmt.Sprintf("_%s", strings.ToLower(protocol)), fmt.Sprintf("_%s", strings.ToLower(portName)))
}
type hostPort struct {
Host string `json:"host"`
Port int `json:"port"`
}
func getHostPort(service *kapi.Service) *hostPort {
return &hostPort{
Host: service.Spec.ClusterIP,
Port: int(service.Spec.Ports[0].Port),
}
}
func getHostPortFromString(data string) (*hostPort, error) {
var res hostPort
err := json.Unmarshal([]byte(data), &res)
return &res, err
}
func assertDnsServiceEntryInEtcd(t *testing.T, ec *fakeEtcdClient, serviceName, namespace string, expectedHostPort *hostPort) {
key := getEtcdPathForA(serviceName, namespace, serviceSubDomain)
values := ec.Get(key)
//require.True(t, exists)
require.True(t, len(values) > 0, "entry not found.")
actualHostPort, err := getHostPortFromString(values[0])
require.NoError(t, err)
assert.Equal(t, expectedHostPort.Host, actualHostPort.Host)
}
func assertDnsPodEntryInEtcd(t *testing.T, ec *fakeEtcdClient, podIP, namespace string) {
key := getEtcdPathForA(podIP, namespace, podSubDomain)
values := ec.Get(key)
//require.True(t, exists)
require.True(t, len(values) > 0, "entry not found.")
}
func assertDnsPodEntryNotInEtcd(t *testing.T, ec *fakeEtcdClient, podIP, namespace string) {
key := getEtcdPathForA(podIP, namespace, podSubDomain)
values := ec.Get(key)
//require.True(t, exists)
require.True(t, len(values) == 0, "entry found.")
}
func assertSRVEntryInEtcd(t *testing.T, ec *fakeEtcdClient, portName, protocol, serviceName, namespace string, expectedPortNumber, expectedEntriesCount int) {
srvKey := getEtcdPathForSRV(portName, protocol, serviceName, namespace)
values := ec.Get(srvKey)
assert.Equal(t, expectedEntriesCount, len(values))
for i := range values {
actualHostPort, err := getHostPortFromString(values[i])
require.NoError(t, err)
assert.Equal(t, expectedPortNumber, actualHostPort.Port)
}
}
func newHeadlessService(namespace, serviceName string) kapi.Service {
service := kapi.Service{
ObjectMeta: kapi.ObjectMeta{
Name: serviceName,
Namespace: namespace,
},
Spec: kapi.ServiceSpec{
ClusterIP: "None",
Ports: []kapi.ServicePort{
{Port: 0},
},
},
}
return service
}
func newService(namespace, serviceName, clusterIP, portName string, portNumber int) kapi.Service {
service := kapi.Service{
ObjectMeta: kapi.ObjectMeta{
Name: serviceName,
Namespace: namespace,
},
Spec: kapi.ServiceSpec{
ClusterIP: clusterIP,
Ports: []kapi.ServicePort{
{Port: int32(portNumber), Name: portName, Protocol: "TCP"},
},
},
}
return service
}
func newPod(namespace, podName, podIP string) kapi.Pod {
pod := kapi.Pod{
ObjectMeta: kapi.ObjectMeta{
Name: podName,
Namespace: namespace,
},
Status: kapi.PodStatus{
PodIP: podIP,
},
}
return pod
}
func newSubset() kapi.EndpointSubset {
subset := kapi.EndpointSubset{
Addresses: []kapi.EndpointAddress{},
Ports: []kapi.EndpointPort{},
}
return subset
}
func newSubsetWithOnePort(portName string, port int, ips ...string) kapi.EndpointSubset {
subset := newSubset()
subset.Ports = append(subset.Ports, kapi.EndpointPort{Port: int32(port), Name: portName, Protocol: "TCP"})
for _, ip := range ips {
subset.Addresses = append(subset.Addresses, kapi.EndpointAddress{IP: ip})
}
return subset
}
func newSubsetWithTwoPorts(portName1 string, portNumber1 int, portName2 string, portNumber2 int, ips ...string) kapi.EndpointSubset {
subset := newSubsetWithOnePort(portName1, portNumber1, ips...)
subset.Ports = append(subset.Ports, kapi.EndpointPort{Port: int32(portNumber2), Name: portName2, Protocol: "TCP"})
return subset
}
func newEndpoints(service kapi.Service, subsets ...kapi.EndpointSubset) kapi.Endpoints {
endpoints := kapi.Endpoints{
ObjectMeta: service.ObjectMeta,
Subsets: []kapi.EndpointSubset{},
}
for _, subset := range subsets {
endpoints.Subsets = append(endpoints.Subsets, subset)
}
return endpoints
}
func TestHeadlessService(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
service := newHeadlessService(testNamespace, testService)
assert.NoError(t, k2s.servicesStore.Add(&service))
endpoints := newEndpoints(service, newSubsetWithOnePort("", 80, "10.0.0.1", "10.0.0.2"), newSubsetWithOnePort("", 8080, "10.0.0.3", "10.0.0.4"))
// We expect 4 records.
expectedDNSRecords := 4
assert.NoError(t, k2s.endpointsStore.Add(&endpoints))
k2s.newService(&service)
assert.Equal(t, expectedDNSRecords, len(ec.writes))
k2s.removeService(&service)
assert.Empty(t, ec.writes)
}
func TestHeadlessServiceWithNamedPorts(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
service := newHeadlessService(testNamespace, testService)
assert.NoError(t, k2s.servicesStore.Add(&service))
endpoints := newEndpoints(service, newSubsetWithTwoPorts("http1", 80, "http2", 81, "10.0.0.1", "10.0.0.2"), newSubsetWithOnePort("https", 443, "10.0.0.3", "10.0.0.4"))
// We expect 10 records. 6 SRV records. 4 POD records.
expectedDNSRecords := 10
assert.NoError(t, k2s.endpointsStore.Add(&endpoints))
k2s.newService(&service)
assert.Equal(t, expectedDNSRecords, len(ec.writes))
assertSRVEntryInEtcd(t, ec, "http1", "tcp", testService, testNamespace, 80, 2)
assertSRVEntryInEtcd(t, ec, "http2", "tcp", testService, testNamespace, 81, 2)
assertSRVEntryInEtcd(t, ec, "https", "tcp", testService, testNamespace, 443, 2)
endpoints.Subsets = endpoints.Subsets[:1]
k2s.handleEndpointAdd(&endpoints)
// We expect 6 records. 4 SRV records. 2 POD records.
expectedDNSRecords = 6
assert.Equal(t, expectedDNSRecords, len(ec.writes))
assertSRVEntryInEtcd(t, ec, "http1", "tcp", testService, testNamespace, 80, 2)
assertSRVEntryInEtcd(t, ec, "http2", "tcp", testService, testNamespace, 81, 2)
k2s.removeService(&service)
assert.Empty(t, ec.writes)
}
func TestHeadlessServiceEndpointsUpdate(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
service := newHeadlessService(testNamespace, testService)
assert.NoError(t, k2s.servicesStore.Add(&service))
endpoints := newEndpoints(service, newSubsetWithOnePort("", 80, "10.0.0.1", "10.0.0.2"))
expectedDNSRecords := 2
assert.NoError(t, k2s.endpointsStore.Add(&endpoints))
k2s.newService(&service)
assert.Equal(t, expectedDNSRecords, len(ec.writes))
endpoints.Subsets = append(endpoints.Subsets,
newSubsetWithOnePort("", 8080, "10.0.0.3", "10.0.0.4"),
)
expectedDNSRecords = 4
k2s.handleEndpointAdd(&endpoints)
assert.Equal(t, expectedDNSRecords, len(ec.writes))
k2s.removeService(&service)
assert.Empty(t, ec.writes)
}
func TestHeadlessServiceWithDelayedEndpointsAddition(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
service := newHeadlessService(testNamespace, testService)
assert.NoError(t, k2s.servicesStore.Add(&service))
// Headless service DNS records should not be created since the
// corresponding endpoints object doesn't exist.
k2s.newService(&service)
assert.Empty(t, ec.writes)
// Add an endpoints object for the service.
endpoints := newEndpoints(service, newSubsetWithOnePort("", 80, "10.0.0.1", "10.0.0.2"), newSubsetWithOnePort("", 8080, "10.0.0.3", "10.0.0.4"))
// We expect 4 records.
expectedDNSRecords := 4
k2s.handleEndpointAdd(&endpoints)
assert.Equal(t, expectedDNSRecords, len(ec.writes))
}
// TODO: Test service updates for headless services.
// TODO: Test headless service addition with delayed endpoints addition
func TestAddSinglePortService(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
service := newService(testNamespace, testService, "1.2.3.4", "", 0)
k2s.newService(&service)
expectedValue := getHostPort(&service)
assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
}
func TestUpdateSinglePortService(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
service := newService(testNamespace, testService, "1.2.3.4", "", 0)
k2s.newService(&service)
assert.Len(t, ec.writes, 1)
newService := service
newService.Spec.ClusterIP = "0.0.0.0"
k2s.updateService(&service, &newService)
expectedValue := getHostPort(&newService)
assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
}
func TestDeleteSinglePortService(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
service := newService(testNamespace, testService, "1.2.3.4", "", 80)
// Add the service
k2s.newService(&service)
assert.Len(t, ec.writes, 1)
// Delete the service
k2s.removeService(&service)
assert.Empty(t, ec.writes)
}
func TestServiceWithNamePort(t *testing.T) {
const (
testService = "testservice"
testNamespace = "default"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
// create service
service := newService(testNamespace, testService, "1.2.3.4", "http1", 80)
k2s.newService(&service)
expectedValue := getHostPort(&service)
assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
assertSRVEntryInEtcd(t, ec, "http1", "tcp", testService, testNamespace, 80, 1)
assert.Len(t, ec.writes, 2)
// update service
newService := service
newService.Spec.Ports[0].Name = "http2"
k2s.updateService(&service, &newService)
expectedValue = getHostPort(&newService)
assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
assertSRVEntryInEtcd(t, ec, "http2", "tcp", testService, testNamespace, 80, 1)
assert.Len(t, ec.writes, 2)
// Delete the service
k2s.removeService(&service)
assert.Empty(t, ec.writes)
}
func TestBuildDNSName(t *testing.T) {
expectedDNSName := "name.ns.svc.cluster.local."
assert.Equal(t, expectedDNSName, buildDNSNameString("local.", "cluster", "svc", "ns", "name"))
newExpectedDNSName := "00.name.ns.svc.cluster.local."
assert.Equal(t, newExpectedDNSName, buildDNSNameString(expectedDNSName, "00"))
}
func TestPodDns(t *testing.T) {
const (
testPodIP = "1.2.3.4"
sanitizedPodIP = "1-2-3-4"
testNamespace = "default"
testPodName = "testPod"
)
ec := &fakeEtcdClient{make(map[string]string)}
k2s := newKube2Sky(ec)
// create pod without ip address yet
pod := newPod(testNamespace, testPodName, "")
k2s.handlePodCreate(&pod)
assert.Empty(t, ec.writes)
// create pod
pod = newPod(testNamespace, testPodName, testPodIP)
k2s.handlePodCreate(&pod)
assertDnsPodEntryInEtcd(t, ec, sanitizedPodIP, testNamespace)
// update pod with same ip
newPod := pod
newPod.Status.PodIP = testPodIP
k2s.handlePodUpdate(&pod, &newPod)
assertDnsPodEntryInEtcd(t, ec, sanitizedPodIP, testNamespace)
// update pod with a different ip
newPod = pod
newPod.Status.PodIP = "4.3.2.1"
k2s.handlePodUpdate(&pod, &newPod)
assertDnsPodEntryInEtcd(t, ec, "4-3-2-1", testNamespace)
assertDnsPodEntryNotInEtcd(t, ec, "1-2-3-4", testNamespace)
// Delete the pod
k2s.handlePodDelete(&newPod)
assert.Empty(t, ec.writes)
}
func TestSanitizeIP(t *testing.T) {
expectedIP := "1-2-3-4"
assert.Equal(t, expectedIP, santizeIP("1.2.3.4"))
}


@ -1,130 +0,0 @@
# This file should be kept in sync with cluster/images/hyperkube/dns-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
replicas: {{ pillar['dns_replicas'] }}
selector:
k8s-app: kube-dns
version: v11
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
image: gcr.io/google_containers/etcd-amd64:2.2.1
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky-amd64:1.15
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
# Kube2sky watches all pods.
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
- --domain={{ pillar['dns_domain'] }}
- name: skydns
image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
args:
# command = "/skydns"
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain={{ pillar['dns_domain'] }}.
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.


@ -1,21 +0,0 @@
# This file should be kept in sync with cluster/images/hyperkube/dns-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP


@ -1,78 +0,0 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Build skydns
#
# Usage:
# [ARCH=amd64] [TAG=1.0] [REGISTRY=gcr.io/google_containers] [BASEIMAGE=busybox] make (build|push)
# Default registry and arch. This can be overwritten by arguments to make
ARCH?=amd64
# Version of this image, not the version of skydns
TAG=1.0
REGISTRY?=gcr.io/google_containers
GOLANG_VERSION=1.6
GOARM=6
TEMP_DIR:=$(shell mktemp -d)
ifeq ($(ARCH),amd64)
BASEIMAGE?=busybox
GOBIN_DIR=/go/bin/
else
# If GOARCH is not the host arch, the binaries end up in this directory
GOBIN_DIR=/go/bin/linux_$(ARCH)
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armel/busybox
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/busybox
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/busybox
endif
# Do not change this default value
all: skydns
skydns:
# Only build skydns. This requires go in PATH
CGO_ENABLED=0 GOARCH=$(ARCH) GOARM=$(GOARM) go get -a -installsuffix cgo --ldflags '-w' github.com/skynetservices/skydns
container:
# Copy the content in this dir to the temp dir
cp ./* $(TEMP_DIR)
# Build skydns in a container. We mount the temporary dir
docker run -it -v $(TEMP_DIR):$(GOBIN_DIR) golang:$(GOLANG_VERSION) /bin/bash -c "make -C $(GOBIN_DIR) skydns ARCH=$(ARCH)"
# Replace BASEIMAGE with the real base image
cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile
# And build the image
docker build -t $(REGISTRY)/skydns-$(ARCH):$(TAG) $(TEMP_DIR)
push: container
gcloud docker push $(REGISTRY)/skydns-$(ARCH):$(TAG)
ifeq ($(ARCH),amd64)
# Backward compatibility. TODO: deprecate this image tag
docker tag -f $(REGISTRY)/skydns-$(ARCH):$(TAG) $(REGISTRY)/skydns:$(TAG)
gcloud docker push $(REGISTRY)/skydns:$(TAG)
endif
clean:
rm -f skydns


@ -1,30 +0,0 @@
# skydns for kubernetes
=======================
This container only exists until the upstream skydns image is slimmed down; at
the time of this writing, it is over 600 MB in size.
#### How to release
This image is compiled for multiple architectures.
If you're rebuilding the image, please bump the `TAG` in the Makefile.
```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google_containers/skydns-amd64:TAG
# ---> gcr.io/google_containers/skydns:TAG (image with backwards-compatible naming)
$ make push ARCH=arm
# ---> gcr.io/google_containers/skydns-arm:TAG
$ make push ARCH=arm64
# ---> gcr.io/google_containers/skydns-arm64:TAG
$ make push ARCH=ppc64le
# ---> gcr.io/google_containers/skydns-ppc64le:TAG
```
If you don't want to push the images, run `make` or `make build` instead.
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/dns/skydns/README.md?pixel)]()


@ -12,32 +12,32 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/addons/dns/skydns-rc.yaml.in
# This file should be kept in sync with cluster/saltbase/salt/kube-dns/skydns-rc.yaml.in
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v11
name: kube-dns-v13
namespace: kube-system
labels:
k8s-app: kube-dns
version: v11
version: v13
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-dns
version: v11
version: v13
template:
metadata:
labels:
k8s-app: kube-dns
version: v11
version: v13
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: etcd
- name: kubedns
# ARCH will be replaced with the architecture it's built for. Check out the Makefile for more details
image: gcr.io/google_containers/etcd-ARCH:2.2.5
image: gcr.io/google_containers/kubedns-ARCH:1.2
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
@ -45,33 +45,6 @@ spec:
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 500Mi
requests:
cpu: 100m
memory: 50Mi
command:
- /usr/local/bin/etcd
- -data-dir
- /var/etcd/data
- -listen-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -advertise-client-urls
- http://127.0.0.1:2379,http://127.0.0.1:4001
- -initial-cluster-token
- skydns-etcd
volumeMounts:
- name: etcd-storage
mountPath: /var/etcd/data
- name: kube2sky
image: gcr.io/google_containers/kube2sky-ARCH:1.15
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
# Kube2sky watches all pods.
memory: 200Mi
requests:
cpu: 100m
@ -95,26 +68,22 @@ spec:
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube2sky"
# command = "/kube-dns"
- --domain=cluster.local
- name: skydns
image: gcr.io/google_containers/skydns-ARCH:1.0
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
- --dns-port=10053
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/dnsmasq-ARCH:1.1
args:
- -machines=http://127.0.0.1:4001
- -addr=0.0.0.0:53
- -ns-rotate=false
- -domain=cluster.local.
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
ports:
- containerPort: 53
name: dns
@ -138,7 +107,4 @@ spec:
ports:
- containerPort: 8080
protocol: TCP
volumes:
- name: etcd-storage
emptyDir: {}
dnsPolicy: Default # Don't use cluster DNS.
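For clusters moving off the old etcd/kube2sky/skydns pod, a quick way to confirm that the new layout landed is to inspect the pod's containers. A sketch, assuming a 1.2-era `kubectl` and the labels defined in this file:

```console
# List the kube-dns pod by its label selector
$ kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
# The new pod should carry the kubedns, dnsmasq, and healthz containers
$ kubectl describe pods --namespace=kube-system -l k8s-app=kube-dns | grep -E 'kubedns|dnsmasq|healthz'
```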

View File

@ -0,0 +1,34 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Makefile for transforming the skydns underscore templates into Salt/Pillar and other formats.
# If you update the *.base templates, please run this Makefile before pushing.
#
# Usage:
# make
all: transform
# .base -> .in pattern rule
%.in: %.base
sed -f transforms2salt.sed $< | sed s/__SOURCE_FILENAME__/$</g > $@
# .base -> .sed pattern rule
%.sed: %.base
sed -f transforms2sed.sed $< | sed s/__SOURCE_FILENAME__/$</g > $@
transform: skydns-rc.yaml.in skydns-svc.yaml.in skydns-rc.yaml.sed skydns-svc.yaml.sed
.PHONY: transform
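For reference, running `make` here should expand the two pattern rules into roughly the following commands (a sketch derived from the rules above; `$<` is the `.base` file and `$@` the generated file):

```console
$ make
sed -f transforms2salt.sed skydns-rc.yaml.base | sed s/__SOURCE_FILENAME__/skydns-rc.yaml.base/g > skydns-rc.yaml.in
sed -f transforms2salt.sed skydns-svc.yaml.base | sed s/__SOURCE_FILENAME__/skydns-svc.yaml.base/g > skydns-svc.yaml.in
sed -f transforms2sed.sed skydns-rc.yaml.base | sed s/__SOURCE_FILENAME__/skydns-rc.yaml.base/g > skydns-rc.yaml.sed
sed -f transforms2sed.sed skydns-svc.yaml.base | sed s/__SOURCE_FILENAME__/skydns-svc.yaml.base/g > skydns-svc.yaml.sed
```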

View File

@ -0,0 +1,28 @@
# SkyDNS Replication Controllers and Service templates
This directory contains the base UNDERSCORE templates used to generate the
skydns-rc.yaml.in and skydns-svc.yaml.in files needed in Salt format.
Because templating-language preferences vary, the transform Makefile in this
directory should be extended to generate all required formats from the base
underscore templates.
## Base Template files
These are the authoritative base templates.
Run `make` to generate the Salt and sed YAML templates from these.
skydns-rc.yaml.base
skydns-svc.yaml.base
## Generated Salt files
skydns-rc.yaml.in
skydns-svc.yaml.in
## Generated Sed files
skydns-rc.yaml.sed
skydns-svc.yaml.sed
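As a concrete illustration of the two transforms, here is one placeholder from the Service template in each of its three forms (lines taken from the templates in this commit):

```console
$ grep clusterIP skydns-svc.yaml.base skydns-svc.yaml.in skydns-svc.yaml.sed
skydns-svc.yaml.base:  clusterIP: __PILLAR__DNS__SERVER__
skydns-svc.yaml.in:  clusterIP: {{ pillar['dns_server'] }}
skydns-svc.yaml.sed:  clusterIP: $DNS_SERVER_IP
```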
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/saltbase/salt/kube-dns/README.md?pixel)]()

View File

@ -1,20 +0,0 @@
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -0,0 +1,112 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/images/hyperkube/dns-rc.yaml
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v13
namespace: kube-system
labels:
k8s-app: kube-dns
version: v13
kubernetes.io/cluster-service: "true"
spec:
replicas: __PILLAR__DNS__REPLICAS__
selector:
k8s-app: kube-dns
version: v13
template:
metadata:
labels:
k8s-app: kube-dns
version: v13
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.2
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only set up the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube-dns"
- --domain=__PILLAR__DNS__DOMAIN__
- --dns-port=10053
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/dnsmasq:1.1
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.__PILLAR__DNS__DOMAIN__ 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default # Don't use cluster DNS.
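The liveness and readiness endpoints declared above can be exercised from inside the pod; a minimal sketch, assuming `curl` is available in the container:

```console
$ curl http://127.0.0.1:8080/healthz    # served by the exechealthz sidecar
$ curl http://127.0.0.1:8081/readiness  # served by kubedns once the master service is reachable
```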

View File

@ -1,27 +1,45 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/images/hyperkube/dns-rc.yaml
# Warning: This is a file generated from the base underscore template file: skydns-rc.yaml.base
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v12
name: kube-dns-v13
namespace: kube-system
labels:
k8s-app: kube-dns
version: v12
version: v13
kubernetes.io/cluster-service: "true"
spec:
replicas: {{ pillar['dns_replicas'] }}
selector:
k8s-app: kube-dns
version: v12
version: v13
template:
metadata:
labels:
k8s-app: kube-dns
version: v12
version: v13
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.1
image: gcr.io/google_containers/kubedns-amd64:1.2
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
@ -53,7 +71,7 @@ spec:
timeoutSeconds: 5
args:
# command = "/kube-dns"
- --domain={{ pillar['dns_domain'] }}.
- --domain={{ pillar['dns_domain'] }}
- --dns-port=10053
ports:
- containerPort: 10053
@ -76,7 +94,7 @@ spec:
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz:1.0
image: gcr.io/google_containers/exechealthz-amd64:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:

View File

@ -0,0 +1,112 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/images/hyperkube/dns-rc.yaml
# Warning: This is a file generated from the base underscore template file: skydns-rc.yaml.base
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-dns-v13
namespace: kube-system
labels:
k8s-app: kube-dns
version: v13
kubernetes.io/cluster-service: "true"
spec:
replicas: $DNS_REPLICAS
selector:
k8s-app: kube-dns
version: v13
template:
metadata:
labels:
k8s-app: kube-dns
version: v13
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.2
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 100m
memory: 50Mi
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only set up the /readiness HTTP server once that's available.
initialDelaySeconds: 30
timeoutSeconds: 5
args:
# command = "/kube-dns"
- --domain=$DNS_DOMAIN
- --dns-port=10053
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/dnsmasq:1.1
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:1.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
args:
- -cmd=nslookup kubernetes.default.svc.$DNS_DOMAIN 127.0.0.1 >/dev/null
- -port=8080
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default # Don't use cluster DNS.
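Unlike the Salt variant, this file is instantiated by substituting the `$DNS_*` variables with concrete values before handing the result to `kubectl`. A hedged sketch (the consuming deployment scripts may do this differently):

```console
$ sed -e "s/\$DNS_REPLICAS/1/g" \
      -e "s/\$DNS_DOMAIN/cluster.local/g" \
      skydns-rc.yaml.sed | kubectl create -f -
```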

View File

@ -12,8 +12,27 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
MAINTAINER Tim Hockin <thockin@google.com>
ADD kube2sky /
ADD kube2sky.go /
ENTRYPOINT ["/kube2sky"]
# This file should be kept in sync with cluster/addons/dns/skydns-svc.yaml.in
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: __PILLAR__DNS__SERVER__
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
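Once instantiated, the Service should report the pillar-provided cluster IP. A quick check (a sketch):

```console
$ kubectl get svc kube-dns --namespace=kube-system
```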

View File

@ -0,0 +1,38 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/addons/dns/skydns-svc.yaml.in
# Warning: This is a file generated from the base underscore template file: skydns-svc.yaml.base
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -12,7 +12,27 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM BASEIMAGE
MAINTAINER Tim Hockin <thockin@google.com>
ADD skydns /
ENTRYPOINT ["/skydns"]
# This file should be kept in sync with cluster/addons/dns/skydns-svc.yaml.in
# Warning: This is a file generated from the base underscore template file: skydns-svc.yaml.base
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: $DNS_SERVER_IP
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -0,0 +1,4 @@
s/__PILLAR__DNS__SERVER__/{{ pillar['dns_server'] }}/g
s/__PILLAR__DNS__REPLICAS__/{{ pillar['dns_replicas'] }}/g
s/__PILLAR__DNS__DOMAIN__/{{ pillar['dns_domain'] }}/g
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g
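A minimal sanity check of the Salt transform, assuming the script above is saved as `transforms2salt.sed` (the placeholder line is taken from `skydns-rc.yaml.base`):

```console
$ echo 'replicas: __PILLAR__DNS__REPLICAS__' | sed -f transforms2salt.sed
replicas: {{ pillar['dns_replicas'] }}
```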

View File

@ -0,0 +1,4 @@
s/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g
s/__PILLAR__DNS__REPLICAS__/$DNS_REPLICAS/g
s/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g
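The equivalent check for the shell-variable transform, assuming the script above is saved as `transforms2sed.sed`:

```console
$ echo 'clusterIP: __PILLAR__DNS__SERVER__' | sed -f transforms2sed.sed
clusterIP: $DNS_SERVER_IP
```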

View File

@ -189,7 +189,7 @@ KUBE_DNS_DOMAIN="cluster.local"
KUBE_DNS_REPLICAS=1
```
To know more on DNS service you can look [here](http://issue.k8s.io/6667). Related documents can be found [here](../../cluster/addons/dns/#how-do-i-configure-it)
To learn more about the DNS service, see [here](http://issue.k8s.io/6667). Related documents can be found [here](../../build/kube-dns/#how-do-i-configure-it)
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->

View File

@ -38,7 +38,7 @@ This is a toy example demonstrating how to use kubernetes DNS.
### Step Zero: Prerequisites
This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/). Make sure DNS is enabled in your setup, see [DNS doc](../../cluster/addons/dns/).
This example assumes that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/). Make sure DNS is enabled in your setup; see the [DNS doc](../../build/kube-dns/).
```sh
$ cd kubernetes

View File

@ -98,7 +98,7 @@ this example.
* Kubernetes version 1.2 is required due to using newer features, such
as PV Claims and Deployments. Run `kubectl version` to see your
cluster version.
* [Cluster DNS](../../cluster/addons/dns/) will be used for service discovery.
* [Cluster DNS](../../build/kube-dns/) will be used for service discovery.
* An [external load balancer](http://kubernetes.io/docs/user-guide/services/#type-loadbalancer)
will be used to access WordPress.
* [Persistent Volume Claims](http://kubernetes.io/docs/user-guide/persistent-volumes/)