Automatic merge from submit-queue
Add namespace-{list, create, delete} actions to the kubernetes-master layer
**What this PR does / why we need it**:
This PR adds namespace-{list,create,delete} actions to the juju kubernetes-master layer.
**Which issue this PR fixes**: fixes #43712
**Special notes for your reviewer**:
Original PR https://github.com/juju-solutions/kubernetes/pull/109
**Release note**:
```
Add namespace-{list,create,delete} actions to the juju kubernetes-master layer
```
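For reference, a hedged sketch of how these actions might be invoked once the layer is deployed; the action names come from this PR, but the parameter name and unit number below are assumptions, and flags reflect Juju 2.x usage:
```sh
# Illustrative only: the "name" parameter is an assumption, not taken from the charm.
juju run-action kubernetes-master/0 namespace-list --wait
juju run-action kubernetes-master/0 namespace-create name=ci-namespace --wait
juju run-action kubernetes-master/0 namespace-delete name=ci-namespace --wait
```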
Automatic merge from submit-queue (batch tested with PRs 42486, 44780)
Hostname patch for vsphere provider limitations with juju
**What this PR does / why we need it**:
The Juju vSphere provider doesn't set a unique hostname, which causes issues when scaling worker pools: every unit ends up with the hostname `ubuntuguest`. Instead we set the hostname to the JUJU_UNIT_NAME, preventing the collision and allowing the master to recognize that there are multiple units rather than a single node repeatedly re-registering.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/237
**Special notes for your reviewer**:
The charm-pre-exec script runs before the charm software is installed, so the validation can happen quickly. Check the `hostname` output, as well as `kubectl get no` after deployment.
```release-note
Resolves juju vsphere hostname bug showing only a single node in a scaled node-pool.
```
This patch sets the hostname to a unique identifier (the Juju unit name)
during pre-deployment of the charm. This may not be a resolvable FQDN,
but it prevents hostname collisions.
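A minimal sketch of the idea, assuming `JUJU_UNIT_NAME` is exported in the pre-exec environment (e.g. `kubernetes-worker/3`); the actual patch may differ in detail:
```sh
# Derive a unique, collision-free hostname from the Juju unit name.
UNIT_HOSTNAME=$(echo "$JUJU_UNIT_NAME" | tr '/' '-')
hostnamectl set-hostname "$UNIT_HOSTNAME"
# Keep local tools happy even though the name may not resolve via DNS.
grep -q "$UNIT_HOSTNAME" /etc/hosts || echo "127.0.1.1 $UNIT_HOSTNAME" >> /etc/hosts
```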
Automatic merge from submit-queue (batch tested with PRs 44722, 44704, 44681, 44494, 39732)
prevent installation of docker from upstream
**What this PR does / why we need it**: Disallows installation of upstream docker from PPA in the Juju kubernetes-worker charm.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Disallows installation of upstream docker from PPA in the Juju kubernetes-worker charm.
```
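One common way to enforce this on a node is apt pinning; this is only an illustrative sketch (repository origin is an assumption), not necessarily how the charm implements it:
```sh
# Pin the upstream Docker repository so apt will never install packages from it.
cat > /etc/apt/preferences.d/no-upstream-docker <<'EOF'
Package: *
Pin: origin download.docker.com
Pin-Priority: -1
EOF
apt-get update
```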
Automatic merge from submit-queue (batch tested with PRs 42177, 42176, 44721)
Removed fluentd-gcp manifest pod
```release-note
Fluentd manifest pod is no longer created on non-registered master when creating clusters using kube-up.sh.
```
Automatic merge from submit-queue
Remove the old docker-multinode files that were built into the hyperkube image
**What this PR does / why we need it**:
ref: https://goo.gl/VxSaKx
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
The hyperkube image has been slimmed down and no longer includes addon manifests and other various scripts. These were introduced for the now removed docker-multinode setup system.
```
cc @jbeda @brendandburns @bgrant0607 @justinsb @mikedanese
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)
Retry in get-kube.sh to avoid download flakes.
GCS has up to 2% 5xx rates, so retrying is critical.
This is currently failing about 8 times per day [according to the dashboard](https://storage.googleapis.com/k8s-gubernator/triage/index.html?test=Extract#be2f33fb1e6dd2389d12). It could be backported to reduce the flake rate.
Release note:
```release-note
NONE
```
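A minimal retry sketch; the actual change in get-kube.sh may differ, and the URL and retry count below are placeholders:
```sh
# Retry transient GCS failures (5xx) a few times before giving up.
url="https://dl.k8s.io/ci/latest.txt"
for attempt in 1 2 3 4 5; do
  curl -fsSL "$url" -o latest.txt && break
  echo "download failed (attempt $attempt), retrying..." >&2
  sleep 3
done
```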
Automatic merge from submit-queue
select one api endpoint at random when deploying kubernetes-core charm
**What this PR does / why we need it**: Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/255
**Release note**:
```release-note
Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
```
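The charm code itself is Python, but the idea is simply picking one endpoint from the list; a shell sketch with made-up addresses:
```sh
# Choose a single API endpoint at random for kube-proxy (addresses are examples).
api_servers="10.10.0.11:6443 10.10.0.12:6443 10.10.0.13:6443"
endpoint=$(echo "$api_servers" | tr ' ' '\n' | shuf -n 1)
echo "configuring kube-proxy with --master=https://$endpoint"
```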
Automatic merge from submit-queue
Update gcr.io/google_containers/porter image to 4524579c0e
**What this PR does / why we need it**: updates the porter image to one built at 4524579c0e using go1.8.1.
This incorporates #44638, which has a new dummy certificate that is compliant with go1.8+.
Image has already been pushed.
**Release note**:
```release-note
NONE
```
/assign @liggitt
/cc @luxas @lavalamp
Automatic merge from submit-queue
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
**What this PR does / why we need it**:
This fixes the type of the ceph-secret secret that's created by the kubernetes-master charm.
Without the `kubernetes.io/rbd` type, automatic provisioning of PVCs doesn't work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
```
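For comparison, creating a secret of the right type by hand looks roughly like this (the secret name and key source are placeholders; the charm generates its own):
```sh
# kubernetes.io/rbd is the secret type the RBD provisioner expects.
kubectl create secret generic ceph-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="$(ceph auth get-key client.admin)" \
  --namespace=default
```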
Automatic merge from submit-queue
issue_43986: fix docs with non-functional proxy
The documentation defines a replication controller and a service to
provision a docker-registry somewhere on the cluster and make it
available via the DNS A record
kube-registry.default.svc.<clustername>.
On each node, HTTP proxies are deployed as a DaemonSet with the
kube-registry DNS name as their upstream, so that the registry is
reachable on every host at localhost:5000.
Because the documentation uses the same selector labels for the
"upstream" registry and the proxies, the proxies register under
the service intended for the upstream and end up with themselves as
upstream on a different port, where connection attempts result in
"connection refused".
Making the selectors unique, as in this patch, fixes the problem.
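A quick way to see the collision and the fix at work; the label values below are assumptions for illustration, not necessarily the exact ones in the addon manifests:
```sh
# With unique selectors, the kube-registry Service should select only the
# registry pod, not the per-node proxy pods.
kubectl get endpoints kube-registry
kubectl get pods -l k8s-app=kube-registry          # upstream registry
kubectl get pods -l k8s-app=kube-registry-proxy    # per-node proxies
```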
**What this PR does / why we need it**:
The patch fixes the erroneous documentation described above.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #43986
**Special notes for your reviewer**:
Thank you for your consideration.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 43000, 44500, 44457, 44553, 44267)
Add Kubernetes 1.6 support to Juju charms
**What this PR does / why we need it**:
This adds Kubernetes 1.6 support to Juju charms.
This includes some large architectural changes in order to support multiple versions of Kubernetes with a single release of the charms. There are a few bug fixes in here as well, for issues that we discovered during testing.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Thanks to @marcoceppi, @ktsakalozos, @jacekn, @mbruzek, @tvansteenburgh for their work in this feature branch as well!
**Release note**:
```release-note
Add Kubernetes 1.6 support to Juju charms
Add metric collection to charms for autoscaling
Update kubernetes-e2e charm to fail when test suite fails
Update Juju charms to use snaps
Add registry action to the kubernetes-worker charm
Add support for kube-proxy cluster-cidr option to kubernetes-worker charm
Fix kubernetes-master charm starting services before TLS certs are saved
Fix kubernetes-worker charm failures in LXD
Fix stop hook failure on kubernetes-worker charm
Fix handling of juju kubernetes-worker.restart-needed state
Fix nagios checks in charms
```
Automatic merge from submit-queue (batch tested with PRs 40055, 42085, 44509, 44568, 43956)
Change the default CLUSTER_IP_RANGE used by e2e
The existing choice intersects with the range reserved for auto
subnets and cannot be used with some GCP features.
```release-note
NONE
```
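For context, GCP auto-mode subnets are carved out of 10.128.0.0/9, so the e2e cluster CIDR needs to sit outside that block. An illustrative override when bringing up a test cluster (the value is an example, not necessarily the new default):
```sh
# Pick a cluster CIDR that does not overlap 10.128.0.0/9 (auto-mode subnets).
CLUSTER_IP_RANGE=10.100.0.0/14 bash cluster/kube-up.sh
```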
Automatic merge from submit-queue (batch tested with PRs 44406, 41543, 44071, 44374, 44299)
Enable service account token lookup by default
Fixes #24167
```release-note
kube-apiserver: --service-account-lookup now defaults to true, requiring the Secret API object containing the token to exist in order for a service account token to be valid. This enables service account tokens to be revoked by deleting the Secret object containing the token.
```
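Operators who rely on the old behavior can still opt out explicitly (other required apiserver flags omitted):
```sh
# Restores pre-change behavior: tokens stay valid even if the Secret is deleted.
kube-apiserver --service-account-lookup=false
```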
Automatic merge from submit-queue
Make get-kube.sh work properly with the "ci/latest" pointer
**What this PR does / why we need it**: this is a (late) followup from #36419, fixing a bug discovered in https://github.com/kubernetes/kubernetes/pull/36419#issuecomment-265679578.
Basically, `get-kube-binaries.sh` looks at `$KUBERNETES_RELEASE_URL`, but we weren't properly overriding it in `get-kube.sh` when downloading binaries from the CI release bucket. With this change, we set the variable correctly, and everything works:
```console
$ KUBERNETES_RELEASE=ci/latest ~/code/kubernetes/src/k8s.io/kubernetes/cluster/get-kube.sh
Downloading kubernetes release v1.7.0-alpha.0.2068+3a3dc827e45426
from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426/kubernetes.tar.gz
to /tmp/foo/kubernetes.tar.gz
Is this ok? [Y]/n
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 1004 0 --:--:-- --:--:-- --:--:-- 1006
100 6023k 100 6023k 0 0 10.9M 0 --:--:-- --:--:-- --:--:-- 10.9M
Unpacking kubernetes release v1.7.0-alpha.0.2068+3a3dc827e45426
Kubernetes release: v1.7.0-alpha.0.2068+3a3dc827e45426
Server: linux/amd64 (to override, set KUBERNETES_SERVER_ARCH)
Client: linux/amd64 (autodetected)
Will download kubernetes-server-linux-amd64.tar.gz from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426
Will download and extract kubernetes-client-linux-amd64.tar.gz from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426
Is this ok? [Y]/n
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 991 0 --:--:-- --:--:-- --:--:-- 987
100 348M 100 348M 0 0 39.1M 0 0:00:08 0:00:08 --:--:-- 34.2M
md5sum(kubernetes-server-linux-amd64.tar.gz)=e71c9b48f6551797a74de2b83b501c44
sha1sum(kubernetes-server-linux-amd64.tar.gz)=688dcf567b60e27e3d9bf97436154543432768cf
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 1019 0 --:--:-- --:--:-- --:--:-- 1025
100 29.0M 100 29.0M 0 0 32.2M 0 --:--:-- --:--:-- --:--:-- 95.4M
md5sum(kubernetes-client-linux-amd64.tar.gz)=8e6a90298411ae5a0e943b1c0e182b1d
sha1sum(kubernetes-client-linux-amd64.tar.gz)=187a2d2c1c6ae1ead32ec4c1fa51f695223edaae
Extracting /tmp/foo/kubernetes/client/kubernetes-client-linux-amd64.tar.gz into /tmp/foo/kubernetes/platforms/linux/amd64
Add '/tmp/foo/kubernetes/client/bin' to your PATH to use newly-installed binaries.
Creating a kubernetes on gce...
...
```
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Ensure only 1 Swift URL is used in cluster operations
**What this PR does / why we need it**:
Extracts only 1 Swift URL if multiple are returned from Keystone.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
https://github.com/kubernetes/kubernetes/issues/34930
**Special notes for your reviewer**:
**Release note**:
```release-note
Heat cluster operations now support environments that have multiple Swift URLs
```
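The gist of the change, as a hedged sketch; variable names are placeholders, and the real change lives in the Heat provider scripts:
```sh
# If the Keystone catalog returns several Swift URLs, keep only the first one.
SWIFT_URL=$(echo "$SWIFT_URLS" | tr ',' '\n' | head -n 1)
```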
Automatic merge from submit-queue
Use auto mode networks instead of legacy networks in GCP
Use of the --range flag creates legacy networks in GCP.
Legacy networks will not support new GCP features.
```release-note
NONE
```
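Roughly the difference, expressed as gcloud commands; the flags shown reflect current gcloud usage and may not match the script's exact invocation:
```sh
# Legacy network, the form this change moves away from:
#   gcloud compute networks create k8s-net --subnet-mode=legacy --range=10.240.0.0/16
# Auto-mode network, which supports newer GCP features such as IP aliases:
gcloud compute networks create k8s-net --subnet-mode=auto
```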
Automatic merge from submit-queue
Add support for IP aliases for pod IPs (GCP alpha feature)
```release-note
Adds support for allocation of pod IPs via IP aliases.
# Adds KUBE_GCE_ENABLE_IP_ALIASES flag to the cluster up scripts (`kube-{up,down}.sh`).
KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes. This feature is currently
only available on GCE.
## Usage
$ CLUSTER_IP_RANGE=10.100.0.0/16 KUBE_GCE_ENABLE_IP_ALIASES=true bash -x cluster/kube-up.sh
# Adds CloudAllocator to the node CIDR allocator (kube-controller-manager).
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then CIDR allocation
is instead done by the external cloud provider, and the node controller is
only responsible for reflecting the allocation into the node spec.
- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
```
KUBE_GCE_ENABLE_IP_ALIASES=true enables allocation of PodCIDR IPs using the
IP alias mechanism rather than routes.
NODE_IP_RANGE controls the node instance IP CIDR.
KUBE_GCE_IP_ALIAS_SIZE controls the size of each node's podCIDR.
IP_ALIAS_SUBNETWORK controls the name of the subnet created for the cluster.
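Putting the knobs above together, an illustrative invocation; all values are examples, and the format of KUBE_GCE_IP_ALIAS_SIZE is assumed:
```sh
# Bring up a GCE cluster that allocates pod IPs via IP aliases instead of routes.
KUBE_GCE_ENABLE_IP_ALIASES=true \
CLUSTER_IP_RANGE=10.100.0.0/16 \
NODE_IP_RANGE=10.40.0.0/22 \
KUBE_GCE_IP_ALIAS_SIZE=/24 \
IP_ALIAS_SUBNETWORK=k8s-pods-subnet \
  bash -x cluster/kube-up.sh
```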