Automatic merge from submit-queue (batch tested with PRs 43884, 44712, 45124, 43883)
Increase Dashboard memory limits
**What this PR does / why we need it**: Increases memory requests and limits for Dashboard.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/dashboard/issues/1431
**Special notes for your reviewer**: Dashboard crashes on large clusters, this change should fix that problem.
**Release note**:
```release-note
Increase Dashboard's memory requests and limits
```
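For context, a hedged sketch of how such a bump looks when applied with kubectl; the deployment name is the standard dashboard addon name, and the values are illustrative rather than the exact numbers from this PR:
```
kubectl -n kube-system set resources deployment kubernetes-dashboard \
  --requests=cpu=100m,memory=300Mi \
  --limits=cpu=100m,memory=300Mi
```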
Automatic merge from submit-queue
Support arbitrary alphanumeric strings as prerelease identifiers
**What this PR does / why we need it**: this is basically an extension of #43642, but supports more general prerelease identifiers, per the spec at http://semver.org/#spec-item-9.
These regular expressions are still a bit more restrictive than the SemVer spec allows (we disallow hyphens, and we require the format `-foo.N` instead of arbitrary `-foo.X.bar.Y.bazZ`), but this should support our needs without changing too much more logic or breaking other assumptions.
**Release note**:
```release-note
NONE
```
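To illustrate the shape of the accepted identifiers (a sketch only, not the exact expression added by this PR): a bash regex restricted to `-foo.N` style prereleases accepts the first three versions below and rejects the multi-dotted SemVer form.
```
# Sketch: prerelease limited to a single alphanumeric identifier plus a number.
version_regex='^v([0-9]+)\.([0-9]+)\.([0-9]+)(-[A-Za-z0-9]+\.(0|[1-9][0-9]*))?$'
for v in v1.7.0 v1.7.0-alpha.1 v1.7.0-rc.2 v1.7.0-foo.2.bar.3; do
  [[ "${v}" =~ ${version_regex} ]] && echo "accepted: ${v}" || echo "rejected: ${v}"
done
```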
Automatic merge from submit-queue
README.md: Update outdated links
**What this PR does / why we need it**:
the PR aims to update some links.
Some links containing "#" did not jump to the correct section of the page.
Other links without "#" still work, but they are outdated, so I updated them as well.
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Retry calls so we report config changes quickly.
**What this PR does / why we need it**: In Juju deployments of Kubernetes, the status of the charms is updated when a status-update is triggered periodically. As a result, changes in config variables may take up to 10 minutes to be reflected in the charm status. See the bug below.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/263
**Special notes for your reviewer**:
**Release note**:
```
Kubernetes clusters deployed with Juju pick up config changes faster.
```
Automatic merge from submit-queue (batch tested with PRs 41583, 45117, 45123)
Adds the cifs-common package
**What this PR does / why we need it**: Enables mounting of CIFS volumes. Required for Azure.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/227
**Release note**:
```release-note
Added CIFS PV support for Juju Charms
```
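For context, this is the kind of mount the kubelet performs for a CIFS-backed volume; without the CIFS userspace tools installed by this change it fails with "wrong fs type" or "bad option". Server, share, and credentials below are placeholders:
```
sudo mount -t cifs //fileserver.example.com/share /mnt/data \
  -o username=azureuser,password='<redacted>',vers=3.0
```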
Automatic merge from submit-queue
Remove the Rackspace provider
**What this PR does / why we need it**:
To aid the effort of moving providers out of the cluster dir, I'm
removing Rackspace and leaving behind a README.md simply as a
placeholder until the entire dir is deleted.
**Which issue this PR fixes**
Fixes #6962
**Release note**:
```release-note
Deployment of Kubernetes clusters on Rackspace using the in-tree bash deployment (i.e. cluster/kube-up.sh or get-kube.sh) is obsolete and support has been removed.
```
Currently, StatefulSets fail when you use local-up-cluster without
setting a cloud provider. In this PR, we set the
kubernetes.io/host-path provisioner as the default provisioner when
CLOUD_PROVIDER is not specified. This enables e2e tests
(specifically StatefulSetBasic) to work.
Automatic merge from submit-queue (batch tested with PRs 41530, 44814, 43620, 41985)
Fixes juju kubernetes master: 1. Get certs from a dead leader. 2. Append tokens.
**What this PR does / why we need it**:
Fixes two issues with the Juju kubernetes master.
1. Grab certificates from a leader that is already removed.
2. Append (not truncate) auth tokens
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #43563, fixes #43519
**Special notes for your reviewer**:
**Release note**:
```
Recover certificates from leadership context in case all masters die in a Juju deployment
```
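A minimal sketch of the second fix, assuming the conventional known_tokens.csv layout (token,user,uid); the charm's actual file path and code differ:
```
token="$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"   # illustrative token
echo "${token},admin,admin" >> /srv/kubernetes/known_tokens.csv  # '>>' appends; '>' would wipe existing tokens
```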
Automatic merge from submit-queue
Add metrics exporter to the fluentd-gcp deployment
Metrics exporter container reads metrics from the `/metrics` endpoint in fluentd and exports them directly to the Stackdriver. It assumes that Stackdriver Monitoring API is enabled.
/cc @fgrzadkowski
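Roughly, the exporter container scrapes fluentd's Prometheus-format endpoint and forwards the samples to Stackdriver. The port below is the conventional fluent-plugin-prometheus default and is an assumption, not necessarily what this manifest uses:
```
curl -s http://localhost:24231/metrics | head
```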
Automatic merge from submit-queue (batch tested with PRs 42432, 44628, 45101, 44921)
Use correct option name in the kubernetes-worker layer registry action
**What this PR does / why we need it**: It fixes #44920
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44920
**Special notes for your reviewer**:
**Release note**:
```
Ensure kubernetes-worker juju layer registry action uses correct ingress controller option name
```
Automatic merge from submit-queue
Bump GLBC version to 0.9.3
**What this PR does / why we need it**:
Bumps version of GLBC shipped with K8s
https://github.com/kubernetes/ingress/releases/tag/0.9.3
```
Major Changelog:
Bug fix: adding backends to existing backend-services #652
Bug fix: handling of secret-based SSL Certs #639
Add second LB healthcheck/proxy traffic source CIDR #574 #479
Support backside re-encryption (HTTPS) #519
```
The two noted bugs are common occurrences for GKE users
**Release note**:
```release-note
Bump GLBC version to 0.9.3
```
Automatic merge from submit-queue (batch tested with PRs 42740, 44980, 45039, 41627, 45044)
Update kubernetes-e2e charm to use snaps
**What this PR does / why we need it**:
This updates the kubernetes-e2e charm to use snaps instead of Juju resources for payload delivery.
The main advantage of this is that it decouples the charm from the e2e payload, allowing us to support multiple versions of Kubernetes with a single release of the charm.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Update kubernetes-e2e charm to use snaps
```
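For illustration, payload delivery via a snap looks roughly like the command below; the snap name and channel are assumptions based on this description, not taken from the charm code:
```
sudo snap install kubernetes-test --classic --channel=1.6/stable
```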
Automatic merge from submit-queue (batch tested with PRs 44591, 44549)
Update repo-infra bazel dependency and use new gcs_upload rule
This PR provides similar functionality to push-build.sh entirely within Bazel rules (though it relies on gsutil).
It's an alternative to #44306.
Depends on https://github.com/kubernetes/repo-infra/pull/13.
**Release note**:
```release-note
NONE
```
The master-components started state triggers a daemon recycle. The guard
was there to prevent the daemons from being cycled too often and interrupting
normal workflow. This additional state check guards against the etcd
connection string changing: it preserves the current behavior, but
triggers a re-configuration and recycle of the API control plane when etcd
units are scaling up and down.
Automatic merge from submit-queue
Support running Ubuntu image on GCE
**What this PR does / why we need it**:
This PR (on top of #44629) contains the script changes for running Ubuntu image on GCE.
**Special notes for your reviewer**:
We made change in `gci/node.yaml` and `gci/master.yaml` to ensure that Kubernetes jobs can start automatically after reboot. This is not needed for GCI but required by Ubuntu. See https://github.com/kubernetes/kubernetes/pull/44744#discussion_r113105970 for details. With this change, Ubuntu could use the same provisioning scripts as GCI's.
Ran e2e tests using the following command and all tests passed.
```
KUBE_GCE_NODE_IMAGE=ubuntu-gke-1604-xenial-v20170420-1 KUBE_GCE_NODE_PROJECT=ubuntu-os-gke-cloud KUBE_NODE_OS_DISTRIBUTION=ubuntu GINKGO_PARALLEL=y GINKGO_PARALLEL_NODES=30 go run hack/e2e.go -- -v --build --up --test --test_args="--ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]" --down
```
Also tested manually for both GCI and Ubuntu images.
**Release note**:
`Support Ubuntu 16.04 image on GCE`
Automatic merge from submit-queue
Send dns details only after cdk-addons are configured
**What this PR does / why we need it**: This is a bugfix on the deployment of Kubernetes via Juju. See issue below.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #40386 and https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/262
**Special notes for your reviewer**:
**Release note**:
```
Fix KubeDNS issue in Juju deployments.
```
Automatic merge from submit-queue
Allow disabling log dump for nodes (in preparation for using logexporter)
This is, in part, a change required for allowing usage of [logexporter](https://github.com/kubernetes/test-infra/tree/master/logexporter) for dumping node logs to GCS directly, instead of doing it through log-dump.sh.
cc @kubernetes/test-infra-maintainers @wojtek-t @gmarek @fejta
Automatic merge from submit-queue (batch tested with PRs 44931, 44808)
Closes #44392
**What this PR does / why we need it**:
Fix the pause action with regard to the new behavior where
--delete-local-data=false by default. Historically --force was all that
was required; the flag has changed to be more descriptive of the
actions it takes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44392
**Release note**:
```release-note
Added support to the pause action in the kubernetes-worker charm for new flag --delete-local-data
```
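A sketch of the drain call the pause action needs now that kubectl no longer deletes local data under --force alone (flags as of Kubernetes 1.6; the charm's exact invocation may differ):
```
# NODE_NAME is the worker node being paused.
kubectl drain "${NODE_NAME}" --force --ignore-daemonsets --delete-local-data
```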
Automatic merge from submit-queue
Add namespace-{list, create, delete} actions to the kubernetes-master layer
**What this PR does / why we need it**:
This PR adds namespace-{list,create,delete} actions to the juju kubernetes-master layer.
**Which issue this PR fixes**: fixes #43712
**Special notes for your reviewer**:
Original PR https://github.com/juju-solutions/kubernetes/pull/109
**Release note**:
```
Add namespace-{list,create,delete} actions to the juju kubernetes-master layer
```
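Example usage once the actions are available; the `name=` parameter is an assumption about the action schema:
```
juju run-action kubernetes-master/0 namespace-list
juju run-action kubernetes-master/0 namespace-create name=staging
juju run-action kubernetes-master/0 namespace-delete name=staging
juju show-action-output <action-id>   # fetch the result of a queued action
```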
Automatic merge from submit-queue (batch tested with PRs 42486, 44780)
Hostname patch for vsphere provider limitations with juju
**What this PR does / why we need it**:
The Juju VSphere provider doesn't set a unique hostname, which causes issues when scaling worker pools because every unit ends up with the hostname `ubuntuguest`. Instead we set the hostname to JUJU_UNIT_NAME, which prevents the collision and allows the master to recognize that there are multiple units rather than a single unit attempting re-registration.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/237
**Special notes for your reviewer**:
The charm-pre-exec runs before the charm software is installed, so the validation can happen quickly. Check the hostname output, as well as `kubectl get no` post-deployment.
```release-note
Resolves juju vsphere hostname bug showing only a single node in a scaled node-pool.
```
This patch sets the hostname to a unique identifier (the juju unit name)
during pre-deployment of the charm. This may not be an FQDN-resolvable
hostname, but it will prevent hostname collisions.
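A minimal sketch of the idea, assuming systemd's hostnamectl is available on the unit; the unit name contains a '/', which is invalid in a hostname, hence the substitution (the charm's actual code may differ):
```
# JUJU_UNIT_NAME is set by Juju in the hook environment, e.g. "kubernetes-worker/0".
hostnamectl set-hostname "${JUJU_UNIT_NAME//\//-}"
```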
Using Ubuntu on GCE to run cluster e2e tests requires slightly different
node.yaml and master.yaml files than GCI, because Ubuntu uses systemd as
PID 1, whereas GCI uses upstart with a systemd delegate. The e2e tests
therefore fail with the GCI files, since the kubernetes services are not
brought back up after a node/master reboot.
Automatic merge from submit-queue (batch tested with PRs 44722, 44704, 44681, 44494, 39732)
prevent installation of docker from upstream
**What this PR does / why we need it**: Disallows installation of upstream docker from PPA in the Juju kubernetes-worker charm.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Disallows installation of upstream docker from PPA in the Juju kubernetes-worker charm.
```
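One way to express "never install Docker from the upstream repository" is with an apt pin; this is a sketch of the idea, not necessarily how the charm implements it:
```
# Pin upstream Docker packages to a negative priority so apt will never select them.
cat <<'EOF' | sudo tee /etc/apt/preferences.d/no-upstream-docker
Package: docker-ce docker-engine
Pin: origin download.docker.com
Pin-Priority: -1
EOF
```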
Automatic merge from submit-queue (batch tested with PRs 42177, 42176, 44721)
Removed fluentd-gcp manifest pod
```release-note
Fluentd manifest pod is no longer created on non-registered master when creating clusters using kube-up.sh.
```
Automatic merge from submit-queue
Remove the old docker-multinode files that were built into the hyperkube image
**What this PR does / why we need it**:
ref: https://goo.gl/VxSaKx
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
The hyperkube image has been slimmed down and no longer includes addon manifests and other various scripts. These were introduced for the now removed docker-multinode setup system.
```
cc @jbeda @brendandburns @bgrant0607 @justinsb @mikedanese
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)
Retry in get-kube.sh to avoid download flakes.
GCS has up to 2% 5xx rates, so retrying is critical.
This is currently failing about 8 times per day [according to the dashboard](https://storage.googleapis.com/k8s-gubernator/triage/index.html?test=Extract#be2f33fb1e6dd2389d12). It could be backported to reduce the flake rate.
Release note:
```release-note
NONE
```
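The shape of the fix in curl terms: `--retry` treats HTTP 5xx responses as transient, so a few retries ride out GCS's error rate. The URL is shown schematically and the retry counts are illustrative:
```
curl -fL --retry 5 --retry-delay 3 \
  "${KUBERNETES_RELEASE_URL}/kubernetes.tar.gz" -o kubernetes.tar.gz
```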
Automatic merge from submit-queue
select one api endpoint at random when deploying kubernetes-core charm
**What this PR does / why we need it**: Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/255
**Release note**:
```release-note
Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
```
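A sketch of the selection logic in shell; the variable names and values are illustrative, and the charm itself is Python, so the real code differs:
```
# Pick exactly one API endpoint at random for kube-proxy's --master flag.
api_servers=(10.0.0.10 10.0.0.11 10.0.0.12)            # illustrative endpoints
api_server=$(printf '%s\n' "${api_servers[@]}" | shuf -n1)
kube_proxy_opts="--master=https://${api_server}:6443"  # port is an assumption
```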
Automatic merge from submit-queue
Update gcr.io/google_containers/porter image to 4524579c0e
**What this PR does / why we need it**: updates the porter image to one built at 4524579c0e using go1.8.1.
This incorporates #44638, which has a new dummy certificate that is compliant with go1.8+.
Image has already been pushed.
**Release note**:
```release-note
NONE
```
/assign @liggitt
/cc @luxas @lavalamp
Automatic merge from submit-queue
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
**What this PR does / why we need it**:
This fixes the type of the ceph-secret secret that's created by the kubernetes-master charm.
Without the `kubernetes.io/rbd` type, automatic provisioning of PVCs doesn't work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
```
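The kubectl equivalent of what the charm now creates, shown for illustration (the charm templates the secret itself, and the key value is a placeholder):
```
kubectl create secret generic ceph-secret \
  --type=kubernetes.io/rbd \
  --from-literal=key="${CEPH_ADMIN_KEY}"
```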
Automatic merge from submit-queue
issue_43986: fix docs with non-functional proxy
The documentation defines a replication controller and service
to provision a docker-registry somewhere on the cluster and make it
available under the DNS A record
kube-registry.default.svc.<clustername>.
On each node, HTTP proxies are placed as a daemon set with the
kube-registry DNS name set as upstream, so that the registry is
available on each host at the endpoint localhost:5000.
Because the documentation uses the same selector identifiers for the
"upstream" registry and the proxies, the proxies themselves register under
the service intended for the upstream and end up with themselves as
upstream under a different port, where connection attempts result in
"connection refused".
Adapting the selectors to be unique, as in this patch, fixes the problem.
**What this PR does / why we need it**:
Patch fixes (cf. above) erroneous documentation.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #43986
**Special notes for your reviewer**:
Thank you for your consideration.
**Release note**:
```release-note
```
Automatic merge from submit-queue (batch tested with PRs 43000, 44500, 44457, 44553, 44267)
Add Kubernetes 1.6 support to Juju charms
**What this PR does / why we need it**:
This adds Kubernetes 1.6 support to Juju charms.
This includes some large architectural changes in order to support multiple versions of Kubernetes with a single release of the charms. There are a few bug fixes in here as well, for issues that we discovered during testing.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
Thanks to @marcoceppi, @ktsakalozos, @jacekn, @mbruzek, @tvansteenburgh for their work in this feature branch as well!
**Release note**:
```release-note
Add Kubernetes 1.6 support to Juju charms
Add metric collection to charms for autoscaling
Update kubernetes-e2e charm to fail when test suite fails
Update Juju charms to use snaps
Add registry action to the kubernetes-worker charm
Add support for kube-proxy cluster-cidr option to kubernetes-worker charm
Fix kubernetes-master charm starting services before TLS certs are saved
Fix kubernetes-worker charm failures in LXD
Fix stop hook failure on kubernetes-worker charm
Fix handling of juju kubernetes-worker.restart-needed state
Fix nagios checks in charms
```
Automatic merge from submit-queue (batch tested with PRs 40055, 42085, 44509, 44568, 43956)
Change the default CLUSTER_IP_RANGE used by e2e
The existing choice intersects with the range reserved for auto
subnets and cannot be used with some GCP features.
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44406, 41543, 44071, 44374, 44299)
Enable service account token lookup by default
Fixes #24167
```release-note
kube-apiserver: --service-account-lookup now defaults to true, requiring the Secret API object containing the token to exist in order for a service account token to be valid. This enables service account tokens to be revoked by deleting the Secret object containing the token.
```
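In practice this means a service account token can be revoked by deleting the Secret that backs it, and the previous behavior can be restored with an explicit flag; the secret name below is illustrative:
```
# Revoke a token by deleting its backing Secret (name is illustrative).
kubectl -n default delete secret default-token-abc12
# To opt back into the old behavior, pass --service-account-lookup=false to kube-apiserver.
```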
Automatic merge from submit-queue
Make get-kube.sh work properly with the "ci/latest" pointer
**What this PR does / why we need it**: this is a (late) followup from #36419, fixing a bug discovered in https://github.com/kubernetes/kubernetes/pull/36419#issuecomment-265679578.
Basically, `get-kube-binaries.sh` looks at `$KUBERNETES_RELEASE_URL`, but we weren't properly overriding it in `get-kube.sh` when downloading binaries from the CI release bucket. With this change, we set the variable correctly, and everything works:
```console
$ KUBERNETES_RELEASE=ci/latest ~/code/kubernetes/src/k8s.io/kubernetes/cluster/get-kube.sh
Downloading kubernetes release v1.7.0-alpha.0.2068+3a3dc827e45426
from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426/kubernetes.tar.gz
to /tmp/foo/kubernetes.tar.gz
Is this ok? [Y]/n
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 1004 0 --:--:-- --:--:-- --:--:-- 1006
100 6023k 100 6023k 0 0 10.9M 0 --:--:-- --:--:-- --:--:-- 10.9M
Unpacking kubernetes release v1.7.0-alpha.0.2068+3a3dc827e45426
Kubernetes release: v1.7.0-alpha.0.2068+3a3dc827e45426
Server: linux/amd64 (to override, set KUBERNETES_SERVER_ARCH)
Client: linux/amd64 (autodetected)
Will download kubernetes-server-linux-amd64.tar.gz from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426
Will download and extract kubernetes-client-linux-amd64.tar.gz from https://dl.k8s.io/ci/v1.7.0-alpha.0.2068+3a3dc827e45426
Is this ok? [Y]/n
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 991 0 --:--:-- --:--:-- --:--:-- 987
100 348M 100 348M 0 0 39.1M 0 0:00:08 0:00:08 --:--:-- 34.2M
md5sum(kubernetes-server-linux-amd64.tar.gz)=e71c9b48f6551797a74de2b83b501c44
sha1sum(kubernetes-server-linux-amd64.tar.gz)=688dcf567b60e27e3d9bf97436154543432768cf
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 161 100 161 0 0 1019 0 --:--:-- --:--:-- --:--:-- 1025
100 29.0M 100 29.0M 0 0 32.2M 0 --:--:-- --:--:-- --:--:-- 95.4M
md5sum(kubernetes-client-linux-amd64.tar.gz)=8e6a90298411ae5a0e943b1c0e182b1d
sha1sum(kubernetes-client-linux-amd64.tar.gz)=187a2d2c1c6ae1ead32ec4c1fa51f695223edaae
Extracting /tmp/foo/kubernetes/client/kubernetes-client-linux-amd64.tar.gz into /tmp/foo/kubernetes/platforms/linux/amd64
Add '/tmp/foo/kubernetes/client/bin' to your PATH to use newly-installed binaries.
Creating a kubernetes on gce...
...
```
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Ensure only 1 Swift URL is used in cluster operations
**What this PR does / why we need it**:
Extracts only 1 Swift URL if multiple are returned from Keystone.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
https://github.com/kubernetes/kubernetes/issues/34930
**Special notes for your reviewer**:
**Release note**:
```release-note
Heat cluster operations now support environments that have multiple Swift URLs
```
Automatic merge from submit-queue
Use auto mode networks instead of legacy networks in GCP
Use of the --range flag creates legacy networks in GCP.
Legacy networks will not support new GCP features.
```release-note
NONE
```
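For comparison, the gcloud forms of the two network types (kube-up drives this internally; names are illustrative):
```
gcloud compute networks create k8s-legacy --range 10.240.0.0/16   # legacy network; no new GCP features
gcloud compute networks create k8s-auto   --mode auto             # auto-mode network
```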
Automatic merge from submit-queue
Add support for IP aliases for pod IPs (GCP alpha feature)
```release-note
Adds support for allocation of pod IPs via IP aliases.
# Adds KUBE_GCE_ENABLE_IP_ALIASES flag to the cluster up scripts (`kube-{up,down}.sh`).
KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes. This feature is currently
only available on GCE.
## Usage
$ CLUSTER_IP_RANGE=10.100.0.0/16 KUBE_GCE_ENABLE_IP_ALIASES=true bash -x cluster/kube-up.sh
# Adds CloudAllocator to the node CIDR allocator (kubernetes controller manager).
If CIDRAllocatorType is set to `CloudCIDRAllocator`, then CIDR allocation
is instead done by the external cloud provider, and
the node controller is only responsible for reflecting the allocation
into the node spec.
- Splits off the rangeAllocator from the cidr_allocator.go file.
- Adds cloudCIDRAllocator, which is used when the cloud provider allocates
the CIDR ranges externally. (GCE support only)
- Updates RBAC permission for node controller to include PATCH
```
KUBE_GCE_ENABLE_IP_ALIASES=true will enable allocation of PodCIDR ips
using the ip alias mechanism rather than using routes.
NODE_IP_RANGE will control the node instance IP cidr
KUBE_GCE_IP_ALIAS_SIZE controls the size of each podCIDR
IP_ALIAS_SUBNETWORK controls the name of the subnet created for the cluster
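Putting the knobs above together in a single invocation; the values are illustrative:
```
CLUSTER_IP_RANGE=10.100.0.0/16 \
KUBE_GCE_ENABLE_IP_ALIASES=true \
KUBE_GCE_IP_ALIAS_SIZE=/24 \
IP_ALIAS_SUBNETWORK=my-cluster-aliases \
NODE_IP_RANGE=10.40.0.0/22 \
bash -x cluster/kube-up.sh
```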