mirror of https://github.com/k3s-io/k3s
Merge pull request #26446 from mbruzek/juju-master-worker

Automatic merge from submit-queue

Implementing a proper master/worker split in the juju cluster code.

```release-note-none```

General updates to the cluster/juju Kubernetes provider, to bring it up to date:

- Updating the skydns templates to version 11.
- Updating the etcd container definition to include arch.
- Updating the master template to include arch and version for the hyperkube container.
- Adding dns_domain configuration options.
- Adding storage layer options.

commit 203e1e9663
@@ -32,12 +32,45 @@ juju add-relation kubernetes etcd
 For your convenience this charm supports some configuration options to set up
 a Kubernetes cluster that works in your environment:

-**version**: Set the version of the Kubernetes containers to deploy.
-The default value is "v1.0.6". Changing the version causes the all the
-Kubernetes containers to be restarted.
+**version**: Set the version of the Kubernetes containers to deploy. The
+version string must be in the following format "v#.#.#" where the numbers
+match with the
+[kubernetes release labels](https://github.com/kubernetes/kubernetes/releases)
+of the [kubernetes github project](https://github.com/kubernetes/kubernetes).
+Changing the version causes all the Kubernetes containers to be restarted.

 **cidr**: Set the IP range for the Kubernetes cluster. eg: 10.1.0.0/16

+# Storage
+The kubernetes charm is built to handle multiple storage devices if the cloud
+provider works with
+[Juju storage](https://jujucharms.com/docs/devel/charms-storage).
+
+The 16.04 (xenial) release introduced [ZFS](https://en.wikipedia.org/wiki/ZFS)
+to Ubuntu. The xenial charm can use ZFS with a raidz pool. A raidz pool
+distributes parity along with the data (similar to a raid5 pool) and can suffer
+the loss of one drive while still retaining data. The raidz pool requires a
+minimum of 3 disks, but will accept more if they are provided.
+
+You can add storage to the kubernetes charm in increments of 3 or greater:
+
+```
+juju add-storage kubernetes/0 disk-pool=ebs,3,1G
+```
+
+**Note**: Due to a limitation of raidz you can not add individual disks to an
+existing pool. Should you need to expand the storage of the raidz pool, the
+additional add-storage commands must be the same number of disks as the original
+command. At this point the charm will have two raidz pools added together, both
+of which could handle the loss of one disk each.
+
+The storage code handles the addition of devices to the charm and when it
+receives three disks creates a raidz pool that is mounted at the /srv/kubernetes
+directory by default. If you need the storage in another location you must
+change the `mount-point` value in layer.yaml before the charm is deployed.
+
+To avoid data loss you must attach the storage before making the connection to
+the etcd cluster.

 ## State Events
 While this charm is meant to be a top layer, it can be used to build other
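To make the raidz note above concrete: expanding the pool means repeating the original three-disk request, not adding one disk. A hypothetical session, assuming the EBS pool from the example above:

```
# Initial request: three 1G EBS volumes become one raidz pool.
juju add-storage kubernetes/0 disk-pool=ebs,3,1G

# Later expansion: request three more disks in a single command; the charm
# creates a second raidz pool next to the first. A one-disk request would
# not be accepted into the existing pool.
juju add-storage kubernetes/0 disk-pool=ebs,3,1G
```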
@@ -1,14 +1,21 @@
 options:
   version:
     type: string
-    default: "v1.1.7"
+    default: "v1.2.3"
     description: |
-      The version of Kubernetes to use in this charm. The version is
-      inserted in the configuration files that specify the hyperkube
-      container to use when starting a Kubernetes cluster. Changing this
-      value will restart the Kubernetes cluster.
+      The version of Kubernetes to use in this charm. The version is inserted
+      in the configuration files that specify the hyperkube container to use
+      when starting a Kubernetes cluster. Changing this value will restart the
+      Kubernetes cluster.
   cidr:
     type: string
     default: 10.1.0.0/16
     description: |
-      Network CIDR to assign to K8s service groups
+      Network CIDR to assign to Kubernetes service groups. This must not
+      overlap with any IP ranges assigned to nodes for pods.
+  dns_domain:
+    type: string
+    default: cluster.local
+    description: |
+      The domain name to use for the Kubernetes cluster by the
+      skydns service.
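For reference, these options can be changed on a deployed application from the Juju CLI; a minimal sketch, assuming an application named kubernetes (the subcommand is `juju set` on Juju 1.x and `juju config` on 2.x):

```
# Pin the hyperkube version and DNS domain; changing the version
# restarts the Kubernetes containers, per the description above.
juju set kubernetes version="v1.2.3" dns_domain="cluster.local"
```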
@@ -1 +1,6 @@
-includes: ['layer:docker', 'layer:flannel', 'layer:tls', 'interface:etcd']
+includes: ['layer:leadership', 'layer:docker', 'layer:flannel', 'layer:storage', 'layer:tls', 'interface:etcd']
+repo: https://github.com/mbruzek/layer-k8s.git
+options:
+  storage:
+    storage-driver: zfs
+    mount-point: '/srv/kubernetes'
@@ -15,3 +15,5 @@ subordinate: false
 requires:
   etcd:
     interface: etcd
+series:
+  - 'trusty'
@@ -17,23 +17,42 @@
 import os

 from shlex import split
-from shutil import copy2
+from subprocess import call
 from subprocess import check_call
 from subprocess import check_output

 from charms.docker.compose import Compose
 from charms.reactive import hook
 from charms.reactive import remove_state
 from charms.reactive import set_state
 from charms.reactive import when
+from charms.reactive import when_any
 from charms.reactive import when_not

 from charmhelpers.core import hookenv
 from charmhelpers.core.hookenv import is_leader
-from charmhelpers.core.hookenv import status_set
+from charmhelpers.core.hookenv import leader_set
+from charmhelpers.core.hookenv import leader_get
 from charmhelpers.core.templating import render
 from charmhelpers.core import unitdata
+from charmhelpers.core.host import chdir
 from contextlib import contextmanager

+import tlslib
+
+
+@when('leadership.is_leader')
+def i_am_leader():
+    '''The leader is the Kubernetes master node. '''
+    leader_set({'master-address': hookenv.unit_private_ip()})
+
+
 @when_not('tls.client.authorization.required')
 def configure_easrsa():
     '''Require the tls layer to generate certificates with "clientAuth". '''
     # By default easyrsa generates the server certificates without clientAuth
     # Setting this state before easyrsa is configured ensures the tls layer is
     # configured to generate certificates with client authentication.
     set_state('tls.client.authorization.required')


 @hook('config-changed')
@@ -41,38 +60,48 @@ def config_changed():
     '''If the configuration values change, remove the available states.'''
     config = hookenv.config()
     if any(config.changed(key) for key in config.keys()):
-        hookenv.log('Configuration options have changed.')
+        hookenv.log('The configuration options have changed.')
         # Use the Compose class that encapsulates the docker-compose commands.
         compose = Compose('files/kubernetes')
-        hookenv.log('Removing kubelet container and kubelet.available state.')
-        # Stop and remove the Kubernetes kubelet container..
-        compose.kill('kubelet')
-        compose.rm('kubelet')
-        # Remove the state so the code can react to restarting kubelet.
-        remove_state('kubelet.available')
-        hookenv.log('Removing proxy container and proxy.available state.')
-        # Stop and remove the Kubernetes proxy container.
-        compose.kill('proxy')
-        compose.rm('proxy')
-        # Remove the state so the code can react to restarting proxy.
-        remove_state('proxy.available')
+        if is_leader():
+            hookenv.log('Removing master container and kubelet.available state.')  # noqa
+            # Stop and remove the Kubernetes master container.
+            compose.kill('master')
+            compose.rm('master')
+            # Remove the state so the code can react to restarting kubelet.
+            remove_state('kubelet.available')
+        else:
+            hookenv.log('Removing kubelet container and kubelet.available state.')  # noqa
+            # Stop and remove the Kubernetes kubelet container.
+            compose.kill('kubelet')
+            compose.rm('kubelet')
+            # Remove the state so the code can react to restarting kubelet.
+            remove_state('kubelet.available')
+            hookenv.log('Removing proxy container and proxy.available state.')
+            # Stop and remove the Kubernetes proxy container.
+            compose.kill('proxy')
+            compose.rm('proxy')
+            # Remove the state so the code can react to restarting proxy.
+            remove_state('proxy.available')

     if config.changed('version'):
-        hookenv.log('Removing kubectl.downloaded state so the new version'
-                    ' of kubectl will be downloaded.')
+        hookenv.log('The version changed removing the states so the new '
+                    'version of kubectl will be downloaded.')
         remove_state('kubectl.downloaded')
+        remove_state('kubeconfig.created')


 @when('tls.server.certificate available')
 @when_not('k8s.server.certificate available')
 def server_cert():
-    '''When the server certificate is available, get the server certificate from
-    the charm unit data and write it to the proper directory. '''
-    destination_directory = '/srv/kubernetes'
-    # Save the server certificate from unitdata to /srv/kubernetes/server.crt
-    save_certificate(destination_directory, 'server')
-    # Copy the unitname.key to /srv/kubernetes/server.key
-    copy_key(destination_directory, 'server')
+    '''When the server certificate is available, get the server certificate
+    from the charm unitdata and write it to the kubernetes directory. '''
+    server_cert = '/srv/kubernetes/server.crt'
+    server_key = '/srv/kubernetes/server.key'
+    # Save the server certificate from unit data to the destination.
+    tlslib.server_cert(None, server_cert, user='ubuntu', group='ubuntu')
+    # Copy the server key from the default location to the destination.
+    tlslib.server_key(None, server_key, user='ubuntu', group='ubuntu')
     set_state('k8s.server.certificate available')
@@ -80,70 +109,57 @@ def server_cert():
 @when_not('k8s.client.certficate available')
 def client_cert():
     '''When the client certificate is available, get the client certificate
-    from the charm unitdata and write it to the proper directory. '''
-    destination_directory = '/srv/kubernetes'
-    if not os.path.isdir(destination_directory):
-        os.makedirs(destination_directory)
-        os.chmod(destination_directory, 0o770)
-    # The client certificate is also available on charm unitdata.
-    client_cert_path = 'easy-rsa/easyrsa3/pki/issued/client.crt'
-    kube_cert_path = os.path.join(destination_directory, 'client.crt')
-    if os.path.isfile(client_cert_path):
-        # Copy the client.crt to /srv/kubernetes/client.crt
-        copy2(client_cert_path, kube_cert_path)
-    # The client key is only available on the leader.
-    client_key_path = 'easy-rsa/easyrsa3/pki/private/client.key'
-    kube_key_path = os.path.join(destination_directory, 'client.key')
-    if os.path.isfile(client_key_path):
-        # Copy the client.key to /srv/kubernetes/client.key
-        copy2(client_key_path, kube_key_path)
+    from the charm unitdata and write it to the kubernetes directory. '''
+    client_cert = '/srv/kubernetes/client.crt'
+    client_key = '/srv/kubernetes/client.key'
+    # Save the client certificate from the default location to the destination.
+    tlslib.client_cert(None, client_cert, user='ubuntu', group='ubuntu')
+    # Copy the client key from the default location to the destination.
+    tlslib.client_key(None, client_key, user='ubuntu', group='ubuntu')
     set_state('k8s.client.certficate available')


 @when('tls.certificate.authority available')
 @when_not('k8s.certificate.authority available')
 def ca():
     '''When the Certificate Authority is available, copy the CA from the
-    /usr/local/share/ca-certificates/k8s.crt to the proper directory. '''
-    # Ensure the /srv/kubernetes directory exists.
-    directory = '/srv/kubernetes'
-    if not os.path.isdir(directory):
-        os.makedirs(directory)
-        os.chmod(directory, 0o770)
-    # Normally the CA is just on the leader, but the tls layer installs the
-    # CA on all systems in the /usr/local/share/ca-certificates directory.
-    ca_path = '/usr/local/share/ca-certificates/{0}.crt'.format(
-        hookenv.service_name())
-    # The CA should be copied to the destination directory and named 'ca.crt'.
-    destination_ca_path = os.path.join(directory, 'ca.crt')
-    if os.path.isfile(ca_path):
-        copy2(ca_path, destination_ca_path)
-    set_state('k8s.certificate.authority available')
+    default location to the /srv/kubernetes directory. '''
+    ca_crt = '/srv/kubernetes/ca.crt'
+    # Copy the Certificate Authority to the destination directory.
+    tlslib.ca(None, ca_crt, user='ubuntu', group='ubuntu')
+    set_state('k8s.certificate.authority available')


-@when('kubelet.available', 'proxy.available', 'cadvisor.available')
-def final_messaging():
-    '''Lower layers emit messages, and if we do not clear the status messaging
-    queue, we are left with whatever the last method call sets status to. '''
-    # It's good UX to have consistent messaging that the cluster is online
-    if is_leader():
-        status_set('active', 'Kubernetes leader running')
-    else:
-        status_set('active', 'Kubernetes follower running')
-
-
-@when('kubelet.available', 'proxy.available', 'cadvisor.available')
+@when('kubelet.available', 'leadership.is_leader')
 @when_not('skydns.available')
 def launch_skydns():
-    '''Create a kubernetes service and resource controller for the skydns
-    service. '''
-    # Only launch and track this state on the leader.
-    # Launching duplicate SkyDNS rc will raise an error
-    if not is_leader():
-        return
-    cmd = "kubectl create -f files/manifests/skydns-rc.yml"
-    check_call(split(cmd))
-    cmd = "kubectl create -f files/manifests/skydns-svc.yml"
-    check_call(split(cmd))
+    '''Create the "kube-system" namespace, the skydns resource controller, and
+    the skydns service. '''
+    hookenv.log('Creating kubernetes skydns on the master node.')
+    # Run a command to check if the apiserver is responding.
+    return_code = call(split('kubectl cluster-info'))
+    if return_code != 0:
+        hookenv.log('kubectl command failed, waiting for apiserver to start.')
+        remove_state('skydns.available')
+        # Return without setting skydns.available so this method will retry.
+        return
+    # Check for the "kube-system" namespace.
+    return_code = call(split('kubectl get namespace kube-system'))
+    if return_code != 0:
+        # Create the kube-system namespace that is used by the skydns files.
+        check_call(split('kubectl create namespace kube-system'))
+    # Check for the skydns replication controller.
+    return_code = call(split('kubectl get -f files/manifests/skydns-rc.yml'))
+    if return_code != 0:
+        # Create the skydns replication controller from the rendered file.
+        check_call(split('kubectl create -f files/manifests/skydns-rc.yml'))
+    # Check for the skydns service.
+    return_code = call(split('kubectl get -f files/manifests/skydns-svc.yml'))
+    if return_code != 0:
+        # Create the skydns service from the rendered file.
+        check_call(split('kubectl create -f files/manifests/skydns-svc.yml'))
     set_state('skydns.available')
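The get-then-create sequence in launch_skydns is the usual idempotent kubectl idiom: `kubectl get` exits non-zero when the object is missing, so the create only ever runs once. The namespace step is roughly equivalent to this shell one-liner (illustration only):

```
kubectl get namespace kube-system || kubectl create namespace kube-system
```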
@@ -155,86 +171,94 @@ def relation_message():
     status_set('waiting', 'Waiting for relation to ETCD')


-@when('etcd.available', 'tls.server.certificate available')
+@when('etcd.available', 'kubeconfig.created')
 @when_not('kubelet.available', 'proxy.available')
-def master(etcd):
-    '''Install and run the hyperkube container that starts kubernetes-master.
-    This actually runs the kubelet, which in turn runs a pod that contains the
-    other master components. '''
+def start_kubelet(etcd):
+    '''Run the hyperkube container that starts the kubernetes services.
+    When the leader, run the master services (apiserver, controller, scheduler)
+    using the master.json from the rendered manifest directory.
+    When a follower, start the node services (kubelet, and proxy). '''
     render_files(etcd)
     # Use the Compose class that encapsulates the docker-compose commands.
     compose = Compose('files/kubernetes')
-    status_set('maintenance', 'Starting the Kubernetes kubelet container.')
-    # Start the Kubernetes kubelet container using docker-compose.
-    compose.up('kubelet')
-    set_state('kubelet.available')
-    # Open the secure port for api-server.
-    hookenv.open_port(6443)
-    status_set('maintenance', 'Starting the Kubernetes proxy container')
-    # Start the Kubernetes proxy container using docker-compose.
-    compose.up('proxy')
-    set_state('proxy.available')
-    status_set('active', 'Kubernetes started')
+    status_set('maintenance', 'Starting the Kubernetes services.')
+    if is_leader():
+        compose.up('master')
+        set_state('kubelet.available')
+        # Open the secure port for api-server.
+        hookenv.open_port(6443)
+    else:
+        # Start the Kubernetes kubelet container using docker-compose.
+        compose.up('kubelet')
+        set_state('kubelet.available')
+        # Start the Kubernetes proxy container using docker-compose.
+        compose.up('proxy')
+        set_state('proxy.available')
+    status_set('active', 'Kubernetes services started')


-@when('proxy.available')
+@when('docker.available')
 @when_not('kubectl.downloaded')
 def download_kubectl():
     '''Download the kubectl binary to test and interact with the cluster.'''
     status_set('maintenance', 'Downloading the kubectl binary')
     version = hookenv.config()['version']
-    cmd = 'wget -nv -O /usr/local/bin/kubectl https://storage.googleapis.com/' \
-          'kubernetes-release/release/{0}/bin/linux/amd64/kubectl'
-    cmd = cmd.format(version)
+    cmd = 'wget -nv -O /usr/local/bin/kubectl https://storage.googleapis.com' \
+          '/kubernetes-release/release/{0}/bin/linux/{1}/kubectl'
+    cmd = cmd.format(version, arch())
     hookenv.log('Downloading kubectl: {0}'.format(cmd))
     check_call(split(cmd))
     cmd = 'chmod +x /usr/local/bin/kubectl'
     check_call(split(cmd))
     set_state('kubectl.downloaded')
     status_set('active', 'Kubernetes installed')


-@when('kubectl.downloaded')
-@when_not('kubectl.package.created')
-def package_kubectl():
-    '''Package the kubectl binary and configuration to a tar file for users
-    to consume and interact directly with Kubernetes.'''
-    if not is_leader():
-        return
-    context = 'default-context'
-    cluster_name = 'kubernetes'
-    public_address = hookenv.unit_public_ip()
+@when('kubectl.downloaded', 'leadership.is_leader', 'k8s.certificate.authority available', 'k8s.client.certficate available')  # noqa
+@when_not('kubeconfig.created')
+def master_kubeconfig():
+    '''Create the kubernetes configuration for the master unit. The master
+    should create a package with the client credentials so the user can
+    interact securely with the apiserver.'''
+    hookenv.log('Creating Kubernetes configuration for master node.')
     directory = '/srv/kubernetes'
-    key = 'client.key'
-    ca = 'ca.crt'
-    cert = 'client.crt'
-    user = 'ubuntu'
-    port = '6443'
-    # Create the config file with the external address for this server.
-    cmd = 'kubectl config set-cluster --kubeconfig={0}/config {1} ' \
-          '--server=https://{2}:{3} --certificate-authority={4}'
-    check_call(split(cmd.format(directory, cluster_name, public_address,
-                                port, ca)))
-    # Create the credentials.
-    cmd = 'kubectl config set-credentials --kubeconfig={0}/config {1} ' \
-          '--client-key={2} --client-certificate={3}'
-    check_call(split(cmd.format(directory, user, key, cert)))
-    # Create a default context with the cluster.
-    cmd = 'kubectl config set-context --kubeconfig={0}/config {1}' \
-          ' --cluster={2} --user={3}'
-    check_call(split(cmd.format(directory, context, cluster_name, user)))
-    # Now make the config use this new context.
-    cmd = 'kubectl config use-context --kubeconfig={0}/config {1}'
-    check_call(split(cmd.format(directory, context)))
-    # Copy the kubectl binary to this directory
-    cmd = 'cp -v /usr/local/bin/kubectl {0}'.format(directory)
-    # Create an archive with all the necessary files.
-    cmd = 'tar -cvzf /home/ubuntu/kubectl_package.tar.gz ca.crt client.crt client.key config kubectl'  # noqa
-    check_call(split(cmd))
-    set_state('kubectl.package.created')
+    ca = '/srv/kubernetes/ca.crt'
+    key = '/srv/kubernetes/client.key'
+    cert = '/srv/kubernetes/client.crt'
+    # Get the public address of the apiserver so users can access the master.
+    server = 'https://{0}:{1}'.format(hookenv.unit_public_ip(), '6443')
+    # Create the client kubeconfig so users can access the master node.
+    create_kubeconfig(directory, server, ca, key, cert)
+    # Copy the kubectl binary to this directory.
+    cmd = 'cp -v /usr/local/bin/kubectl {0}'.format(directory)
+    check_call(split(cmd))
+    # Use a context manager to run the tar command in a specific directory.
+    with chdir(directory):
+        # Create a package with kubectl and the files to use it externally.
+        cmd = 'tar -cvzf /home/ubuntu/kubectl_package.tar.gz ca.crt client.crt client.key kubeconfig kubectl'  # noqa
+        check_call(split(cmd))
+    set_state('kubeconfig.created')


+@when('kubectl.downloaded', 'k8s.certificate.authority available', 'k8s.server.certificate available')  # noqa
+@when_not('kubeconfig.created', 'leadership.is_leader')
+def node_kubeconfig():
+    '''Create the kubernetes configuration (kubeconfig) for this unit.
+    The nodes will create a kubeconfig with the server credentials so
+    the services can interact securely with the apiserver.'''
+    hookenv.log('Creating Kubernetes configuration for worker node.')
+    directory = '/var/lib/kubelet'
+    ca = '/srv/kubernetes/ca.crt'
+    cert = '/srv/kubernetes/server.crt'
+    key = '/srv/kubernetes/server.key'
+    # Get the private address of the apiserver for communication between units.
+    server = 'https://{0}:{1}'.format(leader_get('master-address'), '6443')
+    # Create the kubeconfig for the other services.
+    kubeconfig = create_kubeconfig(directory, server, ca, key, cert)
+    # Install the kubeconfig in the root user's home directory.
+    install_kubeconfig(kubeconfig, '/root/.kube', 'root')
+    # Install the kubeconfig in the ubuntu user's home directory.
+    install_kubeconfig(kubeconfig, '/home/ubuntu/.kube', 'ubuntu')
+    set_state('kubeconfig.created')


 @when('proxy.available')
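The archive produced by master_kubeconfig is meant to be copied off the master and used from a workstation; a hypothetical session, assuming the unit is kubernetes/0 and the paths from the code above:

```
# Fetch the package the leader wrote to /home/ubuntu/kubectl_package.tar.gz.
juju scp kubernetes/0:/home/ubuntu/kubectl_package.tar.gz .
tar -xvzf kubectl_package.tar.gz
# The bundled kubeconfig points at https://<public-address>:6443 and uses
# the client certificate credentials packaged alongside it.
./kubectl --kubeconfig=kubeconfig get nodes
```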
@@ -244,53 +268,110 @@ def start_cadvisor():
     application containers on this system. '''
     compose = Compose('files/kubernetes')
     compose.up('cadvisor')
-    set_state('cadvisor.available')
-    status_set('active', 'cadvisor running on port 8088')
     hookenv.open_port(8088)
+    status_set('active', 'cadvisor running on port 8088')
+    set_state('cadvisor.available')


+@when('kubelet.available', 'kubeconfig.created')
+@when_any('proxy.available', 'cadvisor.available', 'skydns.available')
+def final_message():
+    '''Issue some final messages when the services are started. '''
+    # TODO: Run a simple/quick health check before issuing this message.
+    status_set('active', 'Kubernetes running.')


 @when('sdn.available')
 def gather_sdn_data():
     '''Get the Software Defined Network (SDN) information and return it as a
-    dictionary.'''
+    dictionary. '''
+    sdn_data = {}
+    # The dictionary named 'pillar' is a construct of the k8s template files.
+    pillar = {}
     # SDN Providers pass data via the unitdata.kv module
     db = unitdata.kv()
-    # Generate an IP address for the DNS provider
+    # Ideally the DNS address should come from the sdn cidr.
     subnet = db.get('sdn_subnet')
     if subnet:
-        ip = subnet.split('/')[0]
-        dns_server = '.'.join(ip.split('.')[0:-1]) + '.10'
-        addedcontext = {}
-        addedcontext['dns_server'] = dns_server
-        return addedcontext
-    return {}
+        # Generate the DNS ip address on the SDN cidr (this is desired).
+        pillar['dns_server'] = get_dns_ip(subnet)
+    else:
+        # There is no SDN cidr, fall back to the kubernetes config cidr option.
+        pillar['dns_server'] = get_dns_ip(hookenv.config().get('cidr'))
+    # The pillar['dns_server'] value is used in the skydns-svc.yml file.
+    pillar['dns_replicas'] = 1
+    # The pillar['dns_domain'] value is used in the skydns-rc.yml
+    pillar['dns_domain'] = hookenv.config().get('dns_domain')
+    # Use a 'pillar' dictionary so we can reuse the upstream skydns templates.
+    sdn_data['pillar'] = pillar
+    return sdn_data


-def copy_key(directory, prefix):
-    '''Copy the key from the easy-rsa/easyrsa3/pki/private directory to the
-    specified directory. '''
+def install_kubeconfig(kubeconfig, directory, user):
+    '''Copy a file from the target to a new directory creating directories
+    if necessary. '''
+    # The file and directory must be owned by the correct user.
+    chown = 'chown {0}:{0} {1}'
     if not os.path.isdir(directory):
         os.makedirs(directory)
         os.chmod(directory, 0o770)
-    # Must remove the path characters from the local unit name.
-    path_name = hookenv.local_unit().replace('/', '_')
-    # The key is not in unitdata it is in the local easy-rsa directory.
-    local_key_path = 'easy-rsa/easyrsa3/pki/private/{0}.key'.format(path_name)
-    key_name = '{0}.key'.format(prefix)
-    # The key should be copied to this directory.
-    destination_key_path = os.path.join(directory, key_name)
-    # Copy the key file from the local directory to the destination.
-    copy2(local_key_path, destination_key_path)
+        # Change the ownership of the directory to the right user.
+        check_call(split(chown.format(user, directory)))
+    # kubectl looks for a file named "config" in the ~/.kube directory.
+    config = os.path.join(directory, 'config')
+    # Copy the kubeconfig file to the directory renaming it to "config".
+    cmd = 'cp -v {0} {1}'.format(kubeconfig, config)
+    check_call(split(cmd))
+    # Change the ownership of the config file to the right user.
+    check_call(split(chown.format(user, config)))


+def create_kubeconfig(directory, server, ca, key, cert, user='ubuntu'):
+    '''Create a configuration for kubernetes in a specific directory using
+    the supplied arguments, return the path to the file.'''
+    context = 'default-context'
+    cluster_name = 'kubernetes'
+    # Ensure the destination directory exists.
+    if not os.path.isdir(directory):
+        os.makedirs(directory)
+    # The configuration file should be in this directory named kubeconfig.
+    kubeconfig = os.path.join(directory, 'kubeconfig')
+    # Create the config file with the address of the master server.
+    cmd = 'kubectl config set-cluster --kubeconfig={0} {1} ' \
+          '--server={2} --certificate-authority={3}'
+    check_call(split(cmd.format(kubeconfig, cluster_name, server, ca)))
+    # Create the credentials using the client flags.
+    cmd = 'kubectl config set-credentials --kubeconfig={0} {1} ' \
+          '--client-key={2} --client-certificate={3}'
+    check_call(split(cmd.format(kubeconfig, user, key, cert)))
+    # Create a default context with the cluster.
+    cmd = 'kubectl config set-context --kubeconfig={0} {1} ' \
+          '--cluster={2} --user={3}'
+    check_call(split(cmd.format(kubeconfig, context, cluster_name, user)))
+    # Make the config use this new context.
+    cmd = 'kubectl config use-context --kubeconfig={0} {1}'
+    check_call(split(cmd.format(kubeconfig, context)))
+
+    hookenv.log('kubectl configuration created at {0}.'.format(kubeconfig))
+    return kubeconfig


+def get_dns_ip(cidr):
+    '''Get an IP address for the DNS server on the provided cidr.'''
+    # Remove the range from the cidr.
+    ip = cidr.split('/')[0]
+    # Take the last octet off the IP address and replace it with 10.
+    return '.'.join(ip.split('.')[0:-1]) + '.10'


 def render_files(reldata=None):
     '''Use jinja templating to render the docker-compose.yml and master.json
     file to contain the dynamic data for the configuration files.'''
     context = {}
-    # Load the context manager with sdn and config data.
+    # Load the context data with SDN data.
     context.update(gather_sdn_data())
+    # Add the charm configuration data to the context.
     context.update(hookenv.config())
     if reldata:
+        # Add the etcd relation data to the context.
         context.update({'connection_string': reldata.connection_string()})
     charm_dir = hookenv.charm_dir()
     rendered_kube_dir = os.path.join(charm_dir, 'files/kubernetes')
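As a quick sanity check of the get_dns_ip helper above: it drops the range from the cidr and swaps the last octet of the network address for 10. A standalone sketch of the same logic:

```python
def get_dns_ip(cidr):
    # Same logic as the charm helper: drop the /range, replace the last octet.
    ip = cidr.split('/')[0]
    return '.'.join(ip.split('.')[0:-1]) + '.10'

# With the config.yaml default cidr of 10.1.0.0/16:
print(get_dns_ip('10.1.0.0/16'))  # -> 10.1.0.10
```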
@@ -299,39 +380,53 @@ def render_files(reldata=None):
     rendered_manifest_dir = os.path.join(charm_dir, 'files/manifests')
     if not os.path.exists(rendered_manifest_dir):
         os.makedirs(rendered_manifest_dir)
-    # Add the manifest directory so the docker-compose file can have.
-    context.update({'manifest_directory': rendered_manifest_dir,
+
+    # Update the context with extra values, arch, manifest dir, and private IP.
+    context.update({'arch': arch(),
+                    'master_address': leader_get('master-address'),
+                    'manifest_directory': rendered_manifest_dir,
+                    'public_address': hookenv.unit_get('public-address'),
                     'private_address': hookenv.unit_get('private-address')})

-    # Adapted from: http://kubernetes.io/docs/getting-started-guides/docker/
-    target = os.path.join(rendered_kube_dir, 'docker-compose.yml')
+    # Render the files/kubernetes/docker-compose.yml file that contains the
+    # definition for kubelet and proxy.
+    target = os.path.join(rendered_kube_dir, 'docker-compose.yml')
     render('docker-compose.yml', target, context)
-    # Render the files/manifests/master.json that contains parameters for the
-    # apiserver, controller, and controller-manager
-    target = os.path.join(rendered_manifest_dir, 'master.json')
-    render('master.json', target, context)
-    # Render files/kubernetes/skydns-svc.yaml for SkyDNS service
-    target = os.path.join(rendered_manifest_dir, 'skydns-svc.yml')
-    render('skydns-svc.yml', target, context)
-    # Render files/kubernetes/skydns-rc.yaml for SkyDNS pods
-    target = os.path.join(rendered_manifest_dir, 'skydns-rc.yml')
-    render('skydns-rc.yml', target, context)
+
+    if is_leader():
+        # Source: https://github.com/kubernetes/...master/cluster/images/hyperkube  # noqa
+        target = os.path.join(rendered_manifest_dir, 'master.json')
+        # Render the files/manifests/master.json that contains parameters for
+        # the apiserver, controller, and controller-manager
+        render('master.json', target, context)
+        # Source: ...master/cluster/addons/dns/skydns-svc.yaml.in
+        target = os.path.join(rendered_manifest_dir, 'skydns-svc.yml')
+        # Render files/kubernetes/skydns-svc.yaml for SkyDNS service.
+        render('skydns-svc.yml', target, context)
+        # Source: ...master/cluster/addons/dns/skydns-rc.yaml.in
+        target = os.path.join(rendered_manifest_dir, 'skydns-rc.yml')
+        # Render files/kubernetes/skydns-rc.yaml for SkyDNS pod.
+        render('skydns-rc.yml', target, context)


-def save_certificate(directory, prefix):
-    '''Get the certificate from the charm unitdata, and write it to the proper
-    directory. The parameters are: destination directory, and prefix to use
-    for the key and certificate name.'''
-    if not os.path.isdir(directory):
-        os.makedirs(directory)
-        os.chmod(directory, 0o770)
-    # Grab the unitdata key value store.
-    store = unitdata.kv()
-    certificate_data = store.get('tls.{0}.certificate'.format(prefix))
-    certificate_name = '{0}.crt'.format(prefix)
-    # The certificate should be saved to this directory.
-    certificate_path = os.path.join(directory, certificate_name)
-    # write the server certificate out to the correct location
-    with open(certificate_path, 'w') as fp:
-        fp.write(certificate_data)
+def status_set(level, message):
+    '''Output status message with leadership information.'''
+    if is_leader():
+        message = '(master) {0}'.format(message)
+    hookenv.status_set(level, message)


+def arch():
+    '''Return the package architecture as a string. Raise an exception if the
+    architecture is not supported by kubernetes.'''
+    # Get the package architecture for this system.
+    architecture = check_output(['dpkg', '--print-architecture']).rstrip()
+    # Convert the binary result into a string.
+    architecture = architecture.decode('utf-8')
+    # Validate the architecture is supported by kubernetes.
+    if architecture not in ['amd64', 'arm', 'arm64', 'ppc64le']:
+        message = 'Unsupported machine architecture: {0}'.format(architecture)
+        status_set('blocked', message)
+        raise Exception(message)
+    return architecture
@@ -1,25 +1,31 @@
-# https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md
+# http://kubernetes.io/docs/getting-started-guides/docker/

 # Start kubelet and then start master components as pods
 # docker run \
-#   --volume=/:/rootfs:ro \
-#   --volume=/sys:/sys:ro \
-#   --volume=/dev:/dev \
-#   --volume=/var/lib/docker/:/var/lib/docker:rw \
-#   --volume=/var/lib/kubelet/:/var/lib/kubelet:rw \
-#   --volume=/var/run:/var/run:rw \
-#   --volume=/var/lib/juju/agents/unit-k8s-0/charm/files/manifests:/etc/kubernetes/manifests:rw \
-#   --volume=/srv/kubernetes:/srv/kubernetes \
 #   --net=host \
 #   --pid=host \
-#   --privileged=true \
-#   -ti \
-#   gcr.io/google_containers/hyperkube:v1.0.6 \
-#   /hyperkube kubelet --containerized --hostname-override="127.0.0.1" \
-#   --address="0.0.0.0" --api-servers=http://localhost:8080 \
-#   --config=/etc/kubernetes/manifests
+#   --privileged \
+#   --restart=on-failure \
+#   -d \
+#   -v /sys:/sys:ro \
+#   -v /var/run:/var/run:rw \
+#   -v /:/rootfs:ro \
+#   -v /var/lib/docker/:/var/lib/docker:rw \
+#   -v /var/lib/kubelet/:/var/lib/kubelet:rw \
+#   gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
+#   /hyperkube kubelet \
+#     --address=0.0.0.0 \
+#     --allow-privileged=true \
+#     --enable-server \
+#     --api-servers=http://localhost:8080 \
+#     --config=/etc/kubernetes/manifests-multi \
+#     --cluster-dns=10.0.0.10 \
+#     --cluster-domain=cluster.local \
+#     --containerized \
+#     --v=2

-kubelet:
-  image: gcr.io/google_containers/hyperkube:{{version}}
+master:
+  image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
   net: host
   pid: host
   privileged: true
@@ -27,42 +33,92 @@ kubelet:
   volumes:
     - /:/rootfs:ro
     - /sys:/sys:ro
     - /dev:/dev
     - /var/lib/docker/:/var/lib/docker:rw
     - /var/lib/kubelet/:/var/lib/kubelet:rw
     - /var/run:/var/run:rw
-    - {{manifest_directory}}:/etc/kubernetes/manifests:rw
+    - {{ manifest_directory }}:/etc/kubernetes/manifests:rw
     - /srv/kubernetes:/srv/kubernetes
   command: |
-    /hyperkube kubelet --containerized --hostname-override="{{private_address}}"
-    --address="0.0.0.0" --api-servers=http://localhost:8080
-    --config=/etc/kubernetes/manifests {% if dns_server %}
-    --cluster-dns={{dns_server}} --cluster-domain=cluster.local {% endif %}
+    /hyperkube kubelet
+    --address="0.0.0.0"
+    --allow-privileged=true
+    --api-servers=http://localhost:8080
+    --cluster-dns={{ pillar['dns_server'] }}
+    --cluster-domain={{ pillar['dns_domain'] }}
+    --config=/etc/kubernetes/manifests
+    --containerized
+    --hostname-override="{{ private_address }}"
+    --tls-cert-file="/srv/kubernetes/server.crt"
+    --tls-private-key-file="/srv/kubernetes/server.key"
+    --v=2

-# docker run --net=host -d gcr.io/google_containers/etcd:2.0.12 \
-#   /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 \
-#   --data-dir=/var/etcd/data
-etcd:
-  image: gcr.io/google_containers/etcd:2.0.12
+# Start kubelet without the config option and only kubelet starts.
+# kubelet gets the tls credentials from /var/lib/kubelet/kubeconfig
+# docker run \
+#   --net=host \
+#   --pid=host \
+#   --privileged \
+#   --restart=on-failure \
+#   -d \
+#   -v /sys:/sys:ro \
+#   -v /var/run:/var/run:rw \
+#   -v /:/rootfs:ro \
+#   -v /var/lib/docker/:/var/lib/docker:rw \
+#   -v /var/lib/kubelet/:/var/lib/kubelet:rw \
+#   gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
+#   /hyperkube kubelet \
+#     --allow-privileged=true \
+#     --api-servers=http://${MASTER_IP}:8080 \
+#     --address=0.0.0.0 \
+#     --enable-server \
+#     --cluster-dns=10.0.0.10 \
+#     --cluster-domain=cluster.local \
+#     --containerized \
+#     --v=2
+kubelet:
+  image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
   net: host
   pid: host
   privileged: true
   restart: always
   volumes:
     - /:/rootfs:ro
     - /sys:/sys:ro
     - /var/lib/docker/:/var/lib/docker:rw
     - /var/lib/kubelet/:/var/lib/kubelet:rw
     - /var/run:/var/run:rw
     - /srv/kubernetes:/srv/kubernetes
   command: |
-    /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001
-    --data-dir=/var/etcd/data
+    /hyperkube kubelet
+    --address="0.0.0.0"
+    --allow-privileged=true
+    --api-servers=https://{{ master_address }}:6443
+    --cluster-dns={{ pillar['dns_server'] }}
+    --cluster-domain={{ pillar['dns_domain'] }}
+    --containerized
+    --hostname-override="{{ private_address }}"
+    --v=2

 # docker run \
 #   -d \
 #   --net=host \
 #   --privileged \
-#   gcr.io/google_containers/hyperkube:v${K8S_VERSION} \
-#   /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
+#   --restart=on-failure \
+#   gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
+#   /hyperkube proxy \
+#     --master=http://${MASTER_IP}:8080 \
+#     --v=2
 proxy:
   net: host
   privileged: true
   restart: always
-  image: gcr.io/google_containers/hyperkube:{{version}}
-  command: /hyperkube proxy --master=http://127.0.0.1:8080 --v=2
+  image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
+  command: |
+    /hyperkube proxy
+    --master=http://{{ master_address }}:8080
+    --v=2

 # cAdvisor (Container Advisor) provides container users an understanding of
 # the resource usage and performance characteristics of their running containers.
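The Compose class in the reactive code wraps the docker-compose CLI around this rendered file, so compose.up('master') on the leader corresponds roughly to the following (a sketch, assuming the rendered file sits in the charm's files/kubernetes directory):

```
cd files/kubernetes
# Start only the master service defined in the rendered docker-compose.yml.
docker-compose up -d master
```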
@@ -7,55 +7,81 @@
     "containers":[
       {
        "name": "controller-manager",
-        "image": "gcr.io/google_containers/hyperkube:{{version}}",
+        "image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
        "command": [
          "/hyperkube",
          "controller-manager",
          "--master=127.0.0.1:8080",
+          "--service-account-private-key-file=/srv/kubernetes/server.key",
+          "--root-ca-file=/srv/kubernetes/ca.crt",
          "--min-resync-period=3m",
          "--v=2"
-        ]
+        ],
+        "volumeMounts": [
+          {
+            "name": "data",
+            "mountPath": "/srv/kubernetes"
+          }
+        ]
       },
       {
        "name": "apiserver",
-        "image": "gcr.io/google_containers/hyperkube:{{version}}",
+        "image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
        "command": [
          "/hyperkube",
          "apiserver",
          "--address=0.0.0.0",
-          "--client_ca_file=/srv/kubernetes/ca.crt",
-          "--cluster-name=kubernetes",
-          "--etcd-servers={{connection_string}}",
-          "--service-cluster-ip-range={{cidr}}",
+          "--service-cluster-ip-range={{ cidr }}",
+          "--insecure-bind-address=0.0.0.0",
+          "--etcd-servers={{ connection_string }}",
+          "--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota",
+          "--client-ca-file=/srv/kubernetes/ca.crt",
+          "--basic-auth-file=/srv/kubernetes/basic_auth.csv",
+          "--min-request-timeout=300",
          "--tls-cert-file=/srv/kubernetes/server.crt",
          "--tls-private-key-file=/srv/kubernetes/server.key",
-          "--v=2"
-        ],
+          "--token-auth-file=/srv/kubernetes/known_tokens.csv",
+          "--allow-privileged=true",
+          "--v=4"
+        ],
        "volumeMounts": [
-          {
-            "mountPath": "/srv/kubernetes",
-            "name": "certs-kubernetes",
-            "readOnly": true
-          }
-        ]
+          {
+            "name": "data",
+            "mountPath": "/srv/kubernetes"
+          }
+        ]
       },
       {
        "name": "scheduler",
-        "image": "gcr.io/google_containers/hyperkube:{{version}}",
+        "image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
        "command": [
          "/hyperkube",
          "scheduler",
          "--master=127.0.0.1:8080",
          "--v=2"
        ]
+      },
+      {
+        "name": "setup",
+        "image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
+        "command": [
+          "/setup-files.sh",
+          "IP:{{ private_address }},IP:{{ public_address }},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"
+        ],
+        "volumeMounts": [
+          {
+            "name": "data",
+            "mountPath": "/data"
+          }
+        ]
       }
     ],
     "volumes": [
-      {
-        "hostPath": {
-          "path": "/srv/kubernetes"
-        },
-        "name": "certs-kubernetes"
-      }
-    ]
+      {
+        "hostPath": {
+          "path": "/srv/kubernetes"
+        },
+        "name": "data"
+      }
+    ]
   }
 }
@@ -1,29 +1,36 @@
 apiVersion: v1
 kind: ReplicationController
 metadata:
-  name: kube-dns-v8
+  name: kube-dns-v11
   namespace: kube-system
   labels:
     k8s-app: kube-dns
-    version: v8
+    version: v11
     kubernetes.io/cluster-service: "true"
 spec:
-  {% if dns_replicas -%} replicas: {{ dns_replicas }} {% else %} replicas: 1 {% endif %}
+  replicas: {{ pillar['dns_replicas'] }}
   selector:
     k8s-app: kube-dns
-    version: v8
+    version: v11
   template:
     metadata:
       labels:
         k8s-app: kube-dns
-        version: v8
+        version: v11
         kubernetes.io/cluster-service: "true"
     spec:
       containers:
       - name: etcd
-        image: gcr.io/google_containers/etcd:2.0.9
+        image: gcr.io/google_containers/etcd-{{ arch }}:2.2.1
+        resources:
+          # TODO: Set memory limits when we've profiled the container for large
+          # clusters, then set request = limit to keep this container in
+          # guaranteed class. Currently, this container falls into the
+          # "burstable" category so the kubelet doesn't backoff from restarting it.
+          limits:
+            cpu: 100m
+            memory: 500Mi
+          requests:
+            cpu: 100m
+            memory: 50Mi
         command:
@@ -40,26 +47,60 @@ spec:
         - name: etcd-storage
           mountPath: /var/etcd/data
       - name: kube2sky
-        image: gcr.io/google_containers/kube2sky:1.11
+        image: gcr.io/google_containers/kube2sky:1.14
+        resources:
+          # TODO: Set memory limits when we've profiled the container for large
+          # clusters, then set request = limit to keep this container in
+          # guaranteed class. Currently, this container falls into the
+          # "burstable" category so the kubelet doesn't backoff from restarting it.
+          limits:
+            cpu: 100m
+            # Kube2sky watches all pods.
+            memory: 200Mi
+          requests:
+            cpu: 100m
+            memory: 50Mi
+        livenessProbe:
+          httpGet:
+            path: /healthz
+            port: 8080
+            scheme: HTTP
+          initialDelaySeconds: 60
+          timeoutSeconds: 5
+          successThreshold: 1
+          failureThreshold: 5
+        readinessProbe:
+          httpGet:
+            path: /readiness
+            port: 8081
+            scheme: HTTP
+          # we poll on pod startup for the Kubernetes master service and
+          # only setup the /readiness HTTP server once that's available.
+          initialDelaySeconds: 30
+          timeoutSeconds: 5
         args:
         # command = "/kube2sky"
-        {% if dns_domain -%}- -domain={{ dns_domain }} {% else %} - -domain=cluster.local {% endif %}
-        - -kube_master_url=http://{{ private_address }}:8080
+        - --domain={{ pillar['dns_domain'] }}
+        - --kube-master-url=http://{{ private_address }}:8080
       - name: skydns
-        image: gcr.io/google_containers/skydns:2015-03-11-001
+        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
+        resources:
+          # TODO: Set memory limits when we've profiled the container for large
+          # clusters, then set request = limit to keep this container in
+          # guaranteed class. Currently, this container falls into the
+          # "burstable" category so the kubelet doesn't backoff from restarting it.
+          limits:
+            cpu: 100m
+            memory: 200Mi
+          requests:
+            cpu: 100m
+            memory: 50Mi
         args:
         # command = "/skydns"
-        - -machines=http://localhost:4001
+        - -machines=http://127.0.0.1:4001
         - -addr=0.0.0.0:53
-        {% if dns_domain -%}- -domain={{ dns_domain }}. {% else %} - -domain=cluster.local. {% endif %}
         - -ns-rotate=false
+        - -domain={{ pillar['dns_domain'] }}.
         ports:
         - containerPort: 53
           name: dns
@@ -67,21 +108,18 @@ spec:
         - containerPort: 53
           name: dns-tcp
           protocol: TCP
-        livenessProbe:
-          httpGet:
-            path: /healthz
-            port: 8080
-            scheme: HTTP
-          initialDelaySeconds: 30
-          timeoutSeconds: 5
       - name: healthz
         image: gcr.io/google_containers/exechealthz:1.0
         resources:
           # keep request = limit to keep this container in guaranteed class
           limits:
             cpu: 10m
             memory: 20Mi
           requests:
             cpu: 10m
             memory: 20Mi
         args:
-        {% if dns_domain -%}- -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} localhost >/dev/null {% else %} - -cmd=nslookup kubernetes.default.svc.kubernetes.local localhost >/dev/null {% endif %}
+        - -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null
         - -port=8080
         ports:
         - containerPort: 8080
@@ -10,7 +10,7 @@ metadata:
 spec:
   selector:
     k8s-app: kube-dns
-  clusterIP: {{ dns_server }}
+  clusterIP: {{ pillar['dns_server'] }}
   ports:
   - name: dns
     port: 53
@@ -20,11 +20,11 @@ cluster/gce/gci/configure-helper.sh: local reconcile_cidr="true"
 cluster/gce/gci/configure-helper.sh: sed -i -e "s@{{pillar\['allow_privileged'\]}}@true@g" "${src_file}"
 cluster/gce/trusty/configure-helper.sh: sed -i -e "s@{{pillar\['allow_privileged'\]}}@true@g" "${src_file}"
 cluster/gce/util.sh: local node_ip=$(gcloud compute instances describe --project "${PROJECT}" --zone "${ZONE}" \
-cluster/juju/layers/kubernetes/reactive/k8s.py: check_call(split(cmd.format(directory, cluster_name, public_address,
-cluster/juju/layers/kubernetes/reactive/k8s.py: check_call(split(cmd.format(directory, context, cluster_name, user)))
 cluster/juju/layers/kubernetes/reactive/k8s.py: cluster_name = 'kubernetes'
-cluster/juju/layers/kubernetes/templates/master.json: "--client_ca_file=/srv/kubernetes/ca.crt",
-cluster/juju/layers/kubernetes/templates/skydns-rc.yml: - -kube_master_url=http://{{ private_address }}:8080
+cluster/juju/layers/kubernetes/reactive/k8s.py: check_call(split(cmd.format(kubeconfig, cluster_name, server, ca)))
+cluster/juju/layers/kubernetes/reactive/k8s.py: check_call(split(cmd.format(kubeconfig, context, cluster_name, user)))
+cluster/juju/layers/kubernetes/reactive/k8s.py: client_key = '/srv/kubernetes/client.key'
+cluster/juju/layers/kubernetes/reactive/k8s.py: tlslib.client_key(None, client_key, user='ubuntu', group='ubuntu')
 cluster/lib/logging.sh: local source_file=${BASH_SOURCE[$frame_no]}
 cluster/lib/logging.sh: local source_file=${BASH_SOURCE[$stack_skip]}
 cluster/log-dump.sh: for node_name in "${NODE_NAMES[@]}"; do