Rework `cluster/juju` to reflect current work

This commit imports the latest development focus from the Charmer team
working to deliver Kubernetes charms with Juju.

Notable Changes:

- The charm is now assembled from layers in $JUJU_ROOT/layers
- Previously, the juju provider would compile and fat-pack the charms; this
  new approach delivers the entirety of Kubernetes via the hyperkube image.
- Adds KubeDNS as part of `cluster/kube-up.sh` and its verification
- Removes the hard-coded port 8080 for the Kubernetes master
- Includes TLS validation
- Validates the Kubernetes config from the leader charm
- Targets Juju 2.0 commands
Charles Butler 2016-03-07 12:23:01 -05:00
parent a750bf667f
commit ba113ea30b
23 changed files with 1148 additions and 121 deletions


@@ -1,53 +1,18 @@
-kubernetes-local:
-  services:
-    kubernetes-master:
-      charm: local:trusty/kubernetes-master
-      annotations:
-        "gui-x": "600"
-        "gui-y": "0"
-      expose: true
-      options:
-        version: "local"
-    docker:
-      charm: cs:trusty/docker
-      num_units: 2
-      options:
-        latest: true
-      annotations:
-        "gui-x": "0"
-        "gui-y": "0"
-    flannel-docker:
-      charm: cs:~kubernetes/trusty/flannel-docker
-      annotations:
-        "gui-x": "0"
-        "gui-y": "300"
-    kubernetes:
-      charm: local:trusty/kubernetes
-      annotations:
-        "gui-x": "300"
-        "gui-y": "300"
-    etcd:
-      charm: cs:~kubernetes/trusty/etcd
-      annotations:
-        "gui-x": "300"
-        "gui-y": "0"
-  relations:
-    - - "flannel-docker:network"
-      - "docker:network"
-    - - "flannel-docker:network"
-      - "kubernetes-master:network"
-    - - "flannel-docker:docker-host"
-      - "docker:juju-info"
-    - - "flannel-docker:docker-host"
-      - "kubernetes-master:juju-info"
-    - - "flannel-docker:db"
-      - "etcd:client"
-    - - "kubernetes:docker-host"
-      - "docker:juju-info"
-    - - "etcd:client"
-      - "kubernetes:etcd"
-    - - "etcd:client"
-      - "kubernetes-master:etcd"
-    - - "kubernetes-master:minions-api"
-      - "kubernetes:api"
-  series: trusty
+services:
+  kubernetes:
+    charm: local:trusty/kubernetes
+    annotations:
+      "gui-x": "600"
+      "gui-y": "0"
+    expose: true
+    num_units: 2
+  etcd:
+    charm: cs:~lazypower/trusty/etcd
+    annotations:
+      "gui-x": "300"
+      "gui-y": "0"
+    num_units: 1
+relations:
+  - - "kubernetes:etcd"
+    - "etcd:db"
+series: trusty
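
This bundle is what `cluster/kube-up.sh` now feeds directly to `juju deploy`
(see the util.sh changes at the end of this diff). A minimal sketch of driving
the same deployment from Python, assuming a repository-relative bundle path:

```
from subprocess import check_call

# Deploy the local development bundle, as cluster/juju/util.sh now does.
check_call(['juju', 'deploy', 'cluster/juju/bundles/local.yaml'])
```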


@@ -0,0 +1,15 @@
#!/usr/bin/python
from subprocess import check_output

import yaml

out = check_output(['juju', 'status', 'kubernetes', '--format=yaml'])
try:
    parsed_output = yaml.safe_load(out)
    model = parsed_output['services']['kubernetes']['units']
    for unit in model:
        if 'workload-status' in model[unit].keys():
            if 'leader' in model[unit]['workload-status']['message']:
                print(unit)
except (KeyError, TypeError, yaml.YAMLError):
    # No leader has been identified yet; print nothing so callers can retry.
    pass
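
For reference, the script walks `juju status --format=yaml` output shaped like
the document below, printing every unit whose workload-status message contains
the word "leader". The unit name here is illustrative; the message text matches
what the charm sets via `status_set`:

```
import yaml

# A trimmed `juju status kubernetes --format=yaml` document.
sample = '''
services:
  kubernetes:
    units:
      kubernetes/0:
        workload-status:
          current: active
          message: Kubernetes leader running
'''

units = yaml.safe_load(sample)['services']['kubernetes']['units']
leaders = [unit for unit in units
           if 'leader' in units[unit]['workload-status']['message']]
assert leaders == ['kubernetes/0']
```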


@@ -0,0 +1,74 @@
# kubernetes

[Kubernetes](https://github.com/kubernetes/kubernetes) is an open
source system for managing application containers across multiple hosts.
This version of Kubernetes uses [Docker](http://www.docker.io/) to package,
instantiate and run containerized applications.

This charm is an encapsulation of the
[Running Kubernetes locally via
Docker](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md)
document. The released hyperkube image (`gcr.io/google_containers/hyperkube`)
is currently pulled from a [Google owned container
repository](https://cloud.google.com/container-registry/). For this charm to
work it will need access to the repository to `docker pull` the images.

This charm was built from other charm layers using the reactive framework. The
`layer:docker` is the base layer. For more information please read [Getting
Started Developing charms](https://jujucharms.com/docs/devel/developer-getting-started)

# Deployment

The kubernetes charms require a relation to a distributed key value store
(etcd) which Kubernetes uses for persistent storage of all of its REST API
objects.

```
juju deploy trusty/etcd
juju deploy local:trusty/kubernetes
juju add-relation kubernetes etcd
```

# Configuration

For your convenience this charm supports some configuration options to set up
a Kubernetes cluster that works in your environment:

**version**: Set the version of the Kubernetes containers to deploy. The
default value is "v1.1.7". Changing the version causes all the Kubernetes
containers to be restarted.

**cidr**: Set the IP range for the Kubernetes cluster. eg: 10.1.0.0/16

## State Events

While this charm is meant to be a top layer, it can be used to build other
solutions. This charm sets or removes states from the reactive framework that
other layers can react to. The states that other layers would be interested
in are as follows:

**kubelet.available** - The hyperkube container has been run with the kubelet
service and configuration that started the apiserver, controller-manager and
scheduler containers.

**proxy.available** - The hyperkube container has been run with the proxy
service and configuration that handles Kubernetes networking.

**kubectl.package.created** - Indicates the availability of the `kubectl`
application along with the configuration needed to contact the cluster
securely. You will need to download the `/home/ubuntu/kubectl_package.tar.gz`
file from the kubernetes leader unit to your machine so you can control the
cluster.

**skydns.available** - Indicates when the Domain Name System (DNS) for the
cluster is operational.
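
For example, a layer built on top of this charm can react to these states with
the reactive framework's decorators. A minimal sketch of a consumer layer (the
file name, handler names, and log messages are illustrative):

```
# reactive/consumer.py in a hypothetical layer built on this charm.
from charms.reactive import when
from charmhelpers.core import hookenv


@when('kubelet.available', 'proxy.available')
def kubernetes_ready():
    # Both hyperkube services are up; it is safe to talk to the cluster.
    hookenv.log('Kubernetes services are running.')


@when('skydns.available')
def dns_ready():
    # Cluster DNS is answering; services are addressable by name.
    hookenv.log('Cluster DNS is operational.')
```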
# Kubernetes information

- [Kubernetes github project](https://github.com/kubernetes/kubernetes)
- [Kubernetes issue tracker](https://github.com/kubernetes/kubernetes/issues)
- [Kubernetes Documentation](https://github.com/kubernetes/kubernetes/tree/master/docs)
- [Kubernetes releases](https://github.com/kubernetes/kubernetes/releases)

# Contact

* Charm Author: Matthew Bruzek <Matthew.Bruzek@canonical.com>
* Charm Contributor: Charles Butler <Charles.Butler@canonical.com>


@@ -0,0 +1,2 @@
guestbook-example:
  description: Launch the guestbook example in your k8s cluster


@@ -0,0 +1,21 @@
#!/bin/bash

# Launch the Guestbook example in Kubernetes. This will use the pod and service
# definitions from `files/guestbook-example/*.yaml` to launch a leader/follower
# redis cluster, with a web front end to collect user data and store it in
# redis. This example app can easily scale across multiple nodes, and exercises
# the networking, pod creation/scale, service definition, and replica
# controller of kubernetes.
#
# Lifted from github.com/kubernetes/kubernetes/examples/guestbook-example

kubectl create -f files/guestbook-example/redis-master-service.yaml
kubectl create -f files/guestbook-example/frontend-service.yaml
kubectl create -f files/guestbook-example/frontend-controller.yaml
kubectl create -f files/guestbook-example/redis-master-controller.yaml
kubectl create -f files/guestbook-example/redis-slave-service.yaml
kubectl create -f files/guestbook-example/redis-slave-controller.yaml
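
Once the action has run, the pods take a short while to pull images and start.
A small sketch for polling the frontend replicas until they report Running
(the label selector matches the guestbook manifests below; the timing values
are arbitrary):

```
import time
from subprocess import check_output

# Poll kubectl until all guestbook frontend pods report Running.
for _ in range(60):  # up to roughly five minutes
    out = check_output(['kubectl', 'get', 'pods', '-l', 'name=frontend',
                        '--no-headers']).decode('utf-8')
    rows = [line for line in out.splitlines() if line.strip()]
    if rows and all('Running' in row for row in rows):
        print('guestbook frontend is up')
        break
    time.sleep(5)
```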


@@ -0,0 +1,14 @@
options:
  version:
    type: string
    default: "v1.1.7"
    description: |
      The version of Kubernetes to use in this charm. The version is
      inserted in the configuration files that specify the hyperkube
      container to use when starting a Kubernetes cluster. Changing this
      value will restart the Kubernetes cluster.
  cidr:
    type: string
    default: 10.1.0.0/16
    description: |
      Network CIDR to assign to K8s service groups
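
Inside the charm these options surface through `hookenv.config()`, as the
`config-changed` handler in the reactive layer below shows. A minimal sketch
of a handler consuming them (the function name is illustrative):

```
from charmhelpers.core import hookenv


def log_cluster_options():
    # Read the charm's current configuration values.
    config = hookenv.config()
    hookenv.log('hyperkube tag: {0}'.format(config['version']))
    hookenv.log('service CIDR: {0}'.format(config['cidr']))
```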


@@ -0,0 +1,28 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  replicas: 3
  selector:
    name: frontend
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google_samples/gb-frontend:v3
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below.
          # value: env
        ports:
        - containerPort: 80


@@ -0,0 +1,15 @@
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  # the port that this service should serve on
  - port: 80
  selector:
    name: frontend


@@ -0,0 +1,20 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: redis
        ports:
        - containerPort: 6379


@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
  # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master


@@ -0,0 +1,28 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  replicas: 2
  selector:
    name: redis-slave
  template:
    metadata:
      labels:
        name: redis-slave
    spec:
      containers:
      - name: worker
        image: gcr.io/google_samples/gb-redisslave:v1
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379


@@ -0,0 +1,12 @@
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    name: redis-slave
spec:
  ports:
  # the port that this service should serve on
  - port: 6379
  selector:
    name: redis-slave

File diff suppressed because one or more lines are too long

(Binary image file added: 76 KiB)


@@ -0,0 +1 @@
includes: ['layer:docker', 'layer:flannel', 'layer:tls', 'interface:etcd']


@@ -0,0 +1,17 @@
name: kubernetes
summary: Kubernetes is an application container orchestration platform.
maintainers:
  - Matthew Bruzek <matthew.bruzek@canonical.com>
  - Charles Butler <charles.butler@canonical.com>
description: |
  Kubernetes is an open-source platform for deploying, scaling, and operating
  application containers across a cluster of hosts. Kubernetes is portable in
  that it works with public, private, and hybrid clouds. It is extensible
  through a pluggable infrastructure, and self-healing in that it will
  automatically restart and place containers on healthy nodes if a node ever
  goes away.
tags:
  - infrastructure
subordinate: false
requires:
  etcd:
    interface: etcd


@@ -0,0 +1,321 @@
import os

from shlex import split
from shutil import copy2
from subprocess import check_call

from charms.docker.compose import Compose
from charms.reactive import hook
from charms.reactive import remove_state
from charms.reactive import set_state
from charms.reactive import when
from charms.reactive import when_not

from charmhelpers.core import hookenv
from charmhelpers.core.hookenv import is_leader
from charmhelpers.core.hookenv import status_set
from charmhelpers.core.templating import render
from charmhelpers.core import unitdata
from charmhelpers.core.host import chdir


@hook('config-changed')
def config_changed():
    '''If the configuration values change, remove the available states.'''
    config = hookenv.config()
    if any(config.changed(key) for key in config.keys()):
        hookenv.log('Configuration options have changed.')
        # Use the Compose class that encapsulates the docker-compose commands.
        compose = Compose('files/kubernetes')
        hookenv.log('Removing kubelet container and kubelet.available state.')
        # Stop and remove the Kubernetes kubelet container.
        compose.kill('kubelet')
        compose.rm('kubelet')
        # Remove the state so the code can react to restarting kubelet.
        remove_state('kubelet.available')
        hookenv.log('Removing proxy container and proxy.available state.')
        # Stop and remove the Kubernetes proxy container.
        compose.kill('proxy')
        compose.rm('proxy')
        # Remove the state so the code can react to restarting proxy.
        remove_state('proxy.available')

    if config.changed('version'):
        hookenv.log('Removing kubectl.downloaded state so the new version'
                    ' of kubectl will be downloaded.')
        remove_state('kubectl.downloaded')


@when('tls.server.certificate available')
@when_not('k8s.server.certificate available')
def server_cert():
    '''When the server certificate is available, get the server certificate
    from the charm unitdata and write it to the proper directory. '''
    destination_directory = '/srv/kubernetes'
    # Save the server certificate from unitdata to /srv/kubernetes/server.crt
    save_certificate(destination_directory, 'server')
    # Copy the unitname.key to /srv/kubernetes/server.key
    copy_key(destination_directory, 'server')
    set_state('k8s.server.certificate available')


@when('tls.client.certificate available')
@when_not('k8s.client.certificate available')
def client_cert():
    '''When the client certificate is available, get the client certificate
    from the charm unitdata and write it to the proper directory. '''
    destination_directory = '/srv/kubernetes'
    if not os.path.isdir(destination_directory):
        os.makedirs(destination_directory)
        os.chmod(destination_directory, 0o770)
    # The client certificate is also available on charm unitdata.
    client_cert_path = 'easy-rsa/easyrsa3/pki/issued/client.crt'
    kube_cert_path = os.path.join(destination_directory, 'client.crt')
    if os.path.isfile(client_cert_path):
        # Copy the client.crt to /srv/kubernetes/client.crt
        copy2(client_cert_path, kube_cert_path)
    # The client key is only available on the leader.
    client_key_path = 'easy-rsa/easyrsa3/pki/private/client.key'
    kube_key_path = os.path.join(destination_directory, 'client.key')
    if os.path.isfile(client_key_path):
        # Copy the client.key to /srv/kubernetes/client.key
        copy2(client_key_path, kube_key_path)


@when('tls.certificate.authority available')
@when_not('k8s.certificate.authority available')
def ca():
    '''When the Certificate Authority is available, copy the CA from the
    /usr/local/share/ca-certificates/k8s.crt to the proper directory. '''
    # Ensure the /srv/kubernetes directory exists.
    directory = '/srv/kubernetes'
    if not os.path.isdir(directory):
        os.makedirs(directory)
        os.chmod(directory, 0o770)
    # Normally the CA is just on the leader, but the tls layer installs the
    # CA on all systems in the /usr/local/share/ca-certificates directory.
    ca_path = '/usr/local/share/ca-certificates/{0}.crt'.format(
        hookenv.service_name())
    # The CA should be copied to the destination directory and named 'ca.crt'.
    destination_ca_path = os.path.join(directory, 'ca.crt')
    if os.path.isfile(ca_path):
        copy2(ca_path, destination_ca_path)
        set_state('k8s.certificate.authority available')


@when('kubelet.available', 'proxy.available', 'cadvisor.available')
def final_messaging():
    '''Lower layers emit messages, and if we do not clear the status messaging
    queue, we are left with whatever the last method call sets status to. '''
    # It's good UX to have consistent messaging that the cluster is online.
    if is_leader():
        status_set('active', 'Kubernetes leader running')
    else:
        status_set('active', 'Kubernetes follower running')


@when('kubelet.available', 'proxy.available', 'cadvisor.available')
@when_not('skydns.available')
def launch_skydns():
    '''Create a kubernetes service and resource controller for the skydns
    service. '''
    # Only launch and track this state on the leader.
    # Launching a duplicate SkyDNS rc would raise an error.
    if not is_leader():
        return
    cmd = "kubectl create -f files/manifests/skydns-rc.yml"
    check_call(split(cmd))
    cmd = "kubectl create -f files/manifests/skydns-svc.yml"
    check_call(split(cmd))
    set_state('skydns.available')


@when('docker.available')
@when_not('etcd.available')
def relation_message():
    '''Take over messaging to let the user know they are pending a relation
    to the etcd cluster before going any further. '''
    status_set('waiting', 'Waiting for relation to ETCD')


@when('etcd.available', 'tls.server.certificate available')
@when_not('kubelet.available', 'proxy.available')
def master(etcd):
    '''Install and run the hyperkube container that starts kubernetes-master.
    This actually runs the kubelet, which in turn runs a pod that contains the
    other master components. '''
    render_files(etcd)
    # Use the Compose class that encapsulates the docker-compose commands.
    compose = Compose('files/kubernetes')
    status_set('maintenance', 'Starting the Kubernetes kubelet container.')
    # Start the Kubernetes kubelet container using docker-compose.
    compose.up('kubelet')
    set_state('kubelet.available')
    # Open the secure port for the api-server.
    hookenv.open_port(6443)
    status_set('maintenance', 'Starting the Kubernetes proxy container')
    # Start the Kubernetes proxy container using docker-compose.
    compose.up('proxy')
    set_state('proxy.available')
    status_set('active', 'Kubernetes started')


@when('proxy.available')
@when_not('kubectl.downloaded')
def download_kubectl():
    '''Download the kubectl binary to test and interact with the cluster.'''
    status_set('maintenance', 'Downloading the kubectl binary')
    version = hookenv.config()['version']
    cmd = 'wget -nv -O /usr/local/bin/kubectl https://storage.googleapis.com/' \
          'kubernetes-release/release/{0}/bin/linux/amd64/kubectl'
    cmd = cmd.format(version)
    hookenv.log('Downloading kubectl: {0}'.format(cmd))
    check_call(split(cmd))
    cmd = 'chmod +x /usr/local/bin/kubectl'
    check_call(split(cmd))
    set_state('kubectl.downloaded')
    status_set('active', 'Kubernetes installed')


@when('kubectl.downloaded')
@when_not('kubectl.package.created')
def package_kubectl():
    '''Package the kubectl binary and configuration to a tar file for users
    to consume and interact directly with Kubernetes.'''
    if not is_leader():
        return
    context = 'default-context'
    cluster_name = 'kubernetes'
    public_address = hookenv.unit_public_ip()
    directory = '/srv/kubernetes'
    key = 'client.key'
    ca = 'ca.crt'
    cert = 'client.crt'
    user = 'ubuntu'
    port = '6443'
    with chdir(directory):
        # Create the config file with the external address for this server.
        cmd = 'kubectl config set-cluster --kubeconfig={0}/config {1} ' \
              '--server=https://{2}:{3} --certificate-authority={4}'
        check_call(split(cmd.format(directory, cluster_name, public_address,
                                    port, ca)))
        # Create the credentials.
        cmd = 'kubectl config set-credentials --kubeconfig={0}/config {1} ' \
              '--client-key={2} --client-certificate={3}'
        check_call(split(cmd.format(directory, user, key, cert)))
        # Create a default context with the cluster.
        cmd = 'kubectl config set-context --kubeconfig={0}/config {1}' \
              ' --cluster={2} --user={3}'
        check_call(split(cmd.format(directory, context, cluster_name, user)))
        # Now make the config use this new context.
        cmd = 'kubectl config use-context --kubeconfig={0}/config {1}'
        check_call(split(cmd.format(directory, context)))
        # Copy the kubectl binary to this directory.
        cmd = 'cp -v /usr/local/bin/kubectl {0}'.format(directory)
        check_call(split(cmd))
        # Create an archive with all the necessary files.
        cmd = 'tar -cvzf /home/ubuntu/kubectl_package.tar.gz ca.crt client.crt client.key config kubectl'  # noqa
        check_call(split(cmd))
        set_state('kubectl.package.created')


@when('proxy.available')
@when_not('cadvisor.available')
def start_cadvisor():
    '''Start the cAdvisor container that gives metrics about the other
    application containers on this system. '''
    compose = Compose('files/kubernetes')
    compose.up('cadvisor')
    set_state('cadvisor.available')
    status_set('active', 'cadvisor running on port 8088')
    hookenv.open_port(8088)


@when('sdn.available')
def gather_sdn_data():
    '''Get the Software Defined Network (SDN) information and return it as a
    dictionary.'''
    # SDN providers pass data via the unitdata.kv module.
    db = unitdata.kv()
    # Generate an IP address for the DNS provider from the SDN subnet.
    subnet = db.get('sdn_subnet')
    if subnet:
        ip = subnet.split('/')[0]
        dns_server = '.'.join(ip.split('.')[0:-1]) + '.10'
        addedcontext = {}
        addedcontext['dns_server'] = dns_server
        return addedcontext
    return {}


def copy_key(directory, prefix):
    '''Copy the key from the easy-rsa/easyrsa3/pki/private directory to the
    specified directory. '''
    if not os.path.isdir(directory):
        os.makedirs(directory)
        os.chmod(directory, 0o770)
    # Must remove the path characters from the local unit name.
    path_name = hookenv.local_unit().replace('/', '_')
    # The key is not in unitdata; it is in the local easy-rsa directory.
    local_key_path = 'easy-rsa/easyrsa3/pki/private/{0}.key'.format(path_name)
    key_name = '{0}.key'.format(prefix)
    # The key should be copied to this directory.
    destination_key_path = os.path.join(directory, key_name)
    # Copy the key file from the local directory to the destination.
    copy2(local_key_path, destination_key_path)


def render_files(reldata=None):
    '''Use jinja templating to render the docker-compose.yml and master.json
    file to contain the dynamic data for the configuration files.'''
    context = {}
    # Load the context with sdn and config data.
    context.update(gather_sdn_data())
    context.update(hookenv.config())
    if reldata:
        context.update({'connection_string': reldata.connection_string()})
    charm_dir = hookenv.charm_dir()
    rendered_kube_dir = os.path.join(charm_dir, 'files/kubernetes')
    if not os.path.exists(rendered_kube_dir):
        os.makedirs(rendered_kube_dir)
    rendered_manifest_dir = os.path.join(charm_dir, 'files/manifests')
    if not os.path.exists(rendered_manifest_dir):
        os.makedirs(rendered_manifest_dir)
    # Add the manifest directory so the docker-compose file can reference it.
    context.update({'manifest_directory': rendered_manifest_dir,
                    'private_address': hookenv.unit_get('private-address')})
    # Render the files/kubernetes/docker-compose.yml file that contains the
    # definition for kubelet and proxy.
    target = os.path.join(rendered_kube_dir, 'docker-compose.yml')
    render('docker-compose.yml', target, context)
    # Render the files/manifests/master.json that contains parameters for the
    # apiserver, controller-manager, and scheduler.
    target = os.path.join(rendered_manifest_dir, 'master.json')
    render('master.json', target, context)
    # Render files/manifests/skydns-svc.yml for the SkyDNS service.
    target = os.path.join(rendered_manifest_dir, 'skydns-svc.yml')
    render('skydns-svc.yml', target, context)
    # Render files/manifests/skydns-rc.yml for the SkyDNS pods.
    target = os.path.join(rendered_manifest_dir, 'skydns-rc.yml')
    render('skydns-rc.yml', target, context)


def save_certificate(directory, prefix):
    '''Get the certificate from the charm unitdata, and write it to the proper
    directory. The parameters are: destination directory, and prefix to use
    for the key and certificate name.'''
    if not os.path.isdir(directory):
        os.makedirs(directory)
        os.chmod(directory, 0o770)
    # Grab the unitdata key-value store.
    store = unitdata.kv()
    certificate_data = store.get('tls.{0}.certificate'.format(prefix))
    certificate_name = '{0}.crt'.format(prefix)
    # The certificate should be saved to this directory.
    certificate_path = os.path.join(directory, certificate_name)
    # Write the certificate out to the correct location.
    with open(certificate_path, 'w') as fp:
        fp.write(certificate_data)
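
The DNS address that `gather_sdn_data` hands to the templates is derived by
taking the SDN subnet's network address and using `.10` as the final octet.
The same arithmetic as a standalone sketch:

```
def dns_ip_from_subnet(subnet):
    '''Take the network address of a CIDR and use .10 as the final octet.'''
    ip = subnet.split('/')[0]          # '10.1.0.0/16' -> '10.1.0.0'
    return '.'.join(ip.split('.')[0:-1]) + '.10'


assert dns_ip_from_subnet('10.1.0.0/16') == '10.1.0.10'
```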


@@ -0,0 +1,78 @@
# https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md
# docker run \
#   --volume=/:/rootfs:ro \
#   --volume=/sys:/sys:ro \
#   --volume=/dev:/dev \
#   --volume=/var/lib/docker/:/var/lib/docker:rw \
#   --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
#   --volume=/var/run:/var/run:rw \
#   --volume=/var/lib/juju/agents/unit-k8s-0/charm/files/manifests:/etc/kubernetes/manifests:rw \
#   --volume=/srv/kubernetes:/srv/kubernetes \
#   --net=host \
#   --pid=host \
#   --privileged=true \
#   -ti \
#   gcr.io/google_containers/hyperkube:v1.0.6 \
#   /hyperkube kubelet --containerized --hostname-override="127.0.0.1" \
#   --address="0.0.0.0" --api-servers=http://localhost:8080 \
#   --config=/etc/kubernetes/manifests
kubelet:
  image: gcr.io/google_containers/hyperkube:{{version}}
  net: host
  pid: host
  privileged: true
  restart: always
  volumes:
    - /:/rootfs:ro
    - /sys:/sys:ro
    - /dev:/dev
    - /var/lib/docker/:/var/lib/docker:rw
    - /var/lib/kubelet/:/var/lib/kubelet:rw
    - /var/run:/var/run:rw
    - {{manifest_directory}}:/etc/kubernetes/manifests:rw
    - /srv/kubernetes:/srv/kubernetes
  command: |
    /hyperkube kubelet --containerized --hostname-override="{{private_address}}"
    --address="0.0.0.0" --api-servers=http://localhost:8080
    --config=/etc/kubernetes/manifests {% if dns_server %}
    --cluster-dns={{dns_server}} --cluster-domain=cluster.local {% endif %}
    --tls-cert-file="/srv/kubernetes/server.crt"
    --tls-private-key-file="/srv/kubernetes/server.key"
# docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 \
#   /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 \
#   --data-dir=/var/etcd/data
etcd:
  net: host
  image: gcr.io/google_containers/etcd:2.0.12
  command: |
    /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001
    --data-dir=/var/etcd/data
# docker run \
#   -d \
#   --net=host \
#   --privileged \
#   gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
#   /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
proxy:
  net: host
  privileged: true
  restart: always
  image: gcr.io/google_containers/hyperkube:{{version}}
  command: /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
# cAdvisor (Container Advisor) provides container users an understanding of
# the resource usage and performance characteristics of their running
# containers.
cadvisor:
  image: google/cadvisor:latest
  volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker:/var/lib/docker:ro
  ports:
    - 8088:8080
  restart: always
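
The `{{version}}`, `{{manifest_directory}}`, `{{private_address}}` and
optional `{{dns_server}}` placeholders above are filled in by `render_files()`
in the charm. A standalone sketch of the same rendering with raw jinja2 (the
context values are illustrative):

```
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader('templates'))
context = {
    'version': 'v1.1.7',
    'manifest_directory': '/var/lib/juju/agents/unit-k8s-0/charm/files/manifests',
    'private_address': '10.0.3.15',
    'dns_server': '10.1.0.10',
}

# Write the rendered compose file where the charm expects it.
rendered = env.get_template('docker-compose.yml').render(context)
with open('files/kubernetes/docker-compose.yml', 'w') as fp:
    fp.write(rendered)
```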


@@ -0,0 +1,61 @@
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "k8s-master"},
  "spec": {
    "hostNetwork": true,
    "containers": [
      {
        "name": "controller-manager",
        "image": "gcr.io/google_containers/hyperkube:{{version}}",
        "command": [
          "/hyperkube",
          "controller-manager",
          "--master=127.0.0.1:8080",
          "--v=2"
        ]
      },
      {
        "name": "apiserver",
        "image": "gcr.io/google_containers/hyperkube:{{version}}",
        "command": [
          "/hyperkube",
          "apiserver",
          "--address=0.0.0.0",
          "--client_ca_file=/srv/kubernetes/ca.crt",
          "--cluster-name=kubernetes",
          "--etcd-servers={{connection_string}}",
          "--service-cluster-ip-range={{cidr}}",
          "--tls-cert-file=/srv/kubernetes/server.crt",
          "--tls-private-key-file=/srv/kubernetes/server.key",
          "--v=2"
        ],
        "volumeMounts": [
          {
            "mountPath": "/srv/kubernetes",
            "name": "certs-kubernetes",
            "readOnly": true
          }
        ]
      },
      {
        "name": "scheduler",
        "image": "gcr.io/google_containers/hyperkube:{{version}}",
        "command": [
          "/hyperkube",
          "scheduler",
          "--master=127.0.0.1:8080",
          "--v=2"
        ]
      }
    ],
    "volumes": [
      {
        "hostPath": {
          "path": "/srv/kubernetes"
        },
        "name": "certs-kubernetes"
      }
    ]
  }
}


@@ -0,0 +1,92 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v8
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v8
    kubernetes.io/cluster-service: "true"
spec:
  {% if dns_replicas -%} replicas: {{ dns_replicas }} {% else %} replicas: 1 {% endif %}
  selector:
    k8s-app: kube-dns
    version: v8
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v8
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd:2.0.9
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.11
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/kube2sky"
        {% if dns_domain -%}- -domain={{ dns_domain }} {% else %} - -domain=cluster.local {% endif %}
        - -kube_master_url=http://{{private_address}}:8080
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-03-11-001
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://localhost:4001
        - -addr=0.0.0.0:53
        {% if dns_domain -%}- -domain={{ dns_domain }}. {% else %} - -domain=cluster.local. {% endif %}
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
        args:
        {% if dns_domain -%}- -cmd=nslookup kubernetes.default.svc.{{ dns_domain }} localhost >/dev/null {% else %} - -cmd=nslookup kubernetes.default.svc.cluster.local localhost >/dev/null {% endif %}
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.


@@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: {{ dns_server }}
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP


@@ -43,6 +43,6 @@ function gather_installation_reqs() {
     sudo apt-get update
   fi

-  package_status 'juju-quickstart'
-  package_status 'juju-deployer'
+  package_status 'juju'
+  package_status 'charm-tools'
 }


@@ -25,38 +25,28 @@ JUJU_PATH=$(dirname ${UTIL_SCRIPT})
 KUBE_ROOT=$(readlink -m ${JUJU_PATH}/../../)

 # Use the config file specified in $KUBE_CONFIG_FILE, or config-default.sh.
 source "${JUJU_PATH}/${KUBE_CONFIG_FILE-config-default.sh}"

 # This attempts installation of Juju - This really needs to support multiple
 # providers/distros - but I'm super familiar with ubuntu so assume that for now.
 source ${JUJU_PATH}/prereqs/ubuntu-juju.sh
+export JUJU_REPOSITORY=${JUJU_PATH}/charms
-#KUBE_BUNDLE_URL='https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml'
 KUBE_BUNDLE_PATH=${JUJU_PATH}/bundles/local.yaml

-# Build the binaries on the local system and copy the binaries to the Juju charm.
 function build-local() {
-  local targets=(
-    cmd/kube-proxy \
-    cmd/kube-apiserver \
-    cmd/kube-controller-manager \
-    cmd/kubelet \
-    plugin/cmd/kube-scheduler \
-    cmd/kubectl \
-    test/e2e/e2e.test \
-  )
-  # Make a clean environment to avoid compiler errors.
-  make clean
-  # Build the binaries locally that are used in the charms.
-  make all WHAT="${targets[*]}"
-  local OUTPUT_DIR=_output/local/bin/linux/amd64
-  mkdir -p cluster/juju/charms/trusty/kubernetes-master/files/output
-  # Copy the binaries from the output directory to the charm directory.
-  cp -v $OUTPUT_DIR/* cluster/juju/charms/trusty/kubernetes-master/files/output
+  # This used to build the kubernetes project. Now it rebuilds the charm(s)
+  # living in `cluster/juju/layers`
+  charm build -o $JUJU_REPOSITORY -s trusty ${JUJU_PATH}/layers/kubernetes
 }

 function detect-master() {
   local kubestatus
   # Capturing a newline, and my awk-fu was weak - pipe through tr -d
-  kubestatus=$(juju status --format=oneline kubernetes-master | grep kubernetes-master/0 | awk '{print $3}' | tr -d "\n")
+  kubestatus=$(juju status --format=oneline kubernetes | grep ${KUBE_MASTER_NAME} | awk '{print $3}' | tr -d "\n")
   export KUBE_MASTER_IP=${kubestatus}
-  export KUBE_SERVER=http://${KUBE_MASTER_IP}:8080
+  export KUBE_SERVER=https://${KUBE_MASTER_IP}:6443
 }

@@ -74,25 +64,14 @@ function detect-nodes() {
   export NUM_NODES=${#KUBE_NODE_IP_ADDRESSES[@]}
 }

 function get-password() {
   export KUBE_USER=admin
-  # Get the password from the basic-auth.csv file on kubernetes-master.
-  export KUBE_PASSWORD=$(juju run --unit kubernetes-master/0 "cat /srv/kubernetes/basic-auth.csv" | grep ${KUBE_USER} | cut -d, -f1)
 }

 function kube-up() {
   build-local
-  if [[ -d "~/.juju/current-env" ]]; then
-    juju quickstart -i --no-browser
-  else
-    juju quickstart --no-browser
-  fi
-  # The juju-deployer command will deploy the bundle and can be run
-  # multiple times to continue deploying the parts that fail.
-  juju deployer -c ${KUBE_BUNDLE_PATH}
+  juju deploy ${KUBE_BUNDLE_PATH}

   source "${KUBE_ROOT}/cluster/common.sh"
   get-password

   # Sleep due to juju bug http://pad.lv/1432759
   sleep-status

@@ -100,31 +79,22 @@ function kube-up() {
   detect-nodes

   local prefix=$RANDOM
-  export KUBE_CERT="/tmp/${prefix}-kubecfg.crt"
-  export KUBE_KEY="/tmp/${prefix}-kubecfg.key"
-  export CA_CERT="/tmp/${prefix}-kubecfg.ca"
-  export CONTEXT="juju"
+  export KUBECONFIG=/tmp/${prefix}/config
   # Copy the cert and key to this machine.
   (
     umask 077
-    juju scp kubernetes-master/0:/srv/kubernetes/apiserver.crt ${KUBE_CERT}
-    juju run --unit kubernetes-master/0 'chmod 644 /srv/kubernetes/apiserver.key'
-    juju scp kubernetes-master/0:/srv/kubernetes/apiserver.key ${KUBE_KEY}
-    juju run --unit kubernetes-master/0 'chmod 600 /srv/kubernetes/apiserver.key'
-    cp ${KUBE_CERT} ${CA_CERT}
-    create-kubeconfig
+    mkdir -p /tmp/${prefix}
+    juju scp ${KUBE_MASTER_NAME}:kubectl_package.tar.gz /tmp/${prefix}/
+    ls -al /tmp/${prefix}/
+    tar xfz /tmp/${prefix}/kubectl_package.tar.gz -C /tmp/${prefix}
   )
 }

 function kube-down() {
   local force="${1-}"
-  # Remove the binary files from the charm directory.
-  rm -rf cluster/juju/charms/trusty/kubernetes-master/files/output/
   local jujuenv
-  jujuenv=$(cat ~/.juju/current-environment)
-  juju destroy-environment ${jujuenv} ${force} || true
+  jujuenv=$(juju switch)
+  juju destroy-model ${jujuenv} ${force} || true
 }

 function prepare-e2e() {

@@ -140,23 +110,13 @@ function sleep-status() {
   jujustatus=''
   echo "Waiting up to 15 minutes to allow the cluster to come online... wait for it..." 1>&2

-  jujustatus=$(juju status kubernetes-master --format=oneline)
-  if [[ $jujustatus == *"started"* ]];
-  then
-    return
-  fi
-
-  while [[ $i < $maxtime && $jujustatus != *"started"* ]]; do
-    sleep 15
-    i+=15
-    jujustatus=$(juju status kubernetes-master --format=oneline)
+  while [[ $i < $maxtime && -z $jujustatus ]]; do
+    sleep 15
+    i+=15
+    jujustatus=$(${JUJU_PATH}/identify-leaders.py)
+    export KUBE_MASTER_NAME=${jujustatus}
   done

-  # sleep because we cannot get the status back of where the minions are in the deploy phase
-  # thanks to a generic "started" state and our service not actually coming online until the
-  # minions have received the binary from the master distribution hub during relations
-  echo "Sleeping an additional minute to allow the cluster to settle" 1>&2
-  sleep 60
 }

 # Execute prior to running tests to build a release if required for environment.

@@ -116,7 +116,7 @@ kubectl="${KUBECTL_PATH:-${kubectl}}"
 if [[ "$KUBERNETES_PROVIDER" == "gke" ]]; then
   detect-project &> /dev/null
-elif [[ "$KUBERNETES_PROVIDER" == "ubuntu" || "$KUBERNETES_PROVIDER" == "juju" ]]; then
+elif [[ "$KUBERNETES_PROVIDER" == "ubuntu" ]]; then
   detect-master > /dev/null
   config=(
     "--server=http://${KUBE_MASTER_IP}:8080"