Automatic merge from submit-queue
Remove duplicated nginx image. Use nginx-slim instead
This PR removes the image `gcr.io/google_containers/nginx:1.7.9` and uses `gcr.io/google_containers/nginx-slim:0.7`.
Besides removing the duplication, `1.7.9` is 16 months old.
Automatic merge from submit-queue
Enable setting up Kubernetes cluster in Ubuntu on Azure
Implement basic cloud provider functionality to deploy Kubernetes on
Azure. SaltStack is used to deploy Kubernetes on top of Ubuntu
virtual machines. OpenVPN provides network connectivity. For
kubelet authentication, we use basic authentication (username and
password). The scripts use the legacy Azure Service Management APIs.
We have set up a nightly test job in our Jenkins server for federated
testing to run the e2e test suite on Azure. With the cloud provider
scripts in this commit, 14 e2e test cases pass in this environment.
We plan to implement additional Azure functionality to support more
test cases.
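For reference, a minimal bring-up with these scripts looks roughly like the sketch below; the provider switch is the standard kube-up `KUBERNETES_PROVIDER` variable, and the Azure account settings are assumed to be configured as these scripts document.
```sh
# Sketch only: select the Azure scripts and bring a cluster up/down.
# Azure-specific settings (subscription, location, ...) are assumed to be
# exported per these scripts' documentation.
export KUBERNETES_PROVIDER=azure
cluster/kube-up.sh     # provisions Ubuntu VMs via the Service Management APIs and SaltStack
cluster/kube-down.sh   # tears the cluster down again
```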
Automatic merge from submit-queue
Add Calico as policy provider in GCE
Adds Calico as policy provider to GCE, enforcing the extensions/v1beta1 NetworkPolicy API.
Still to do:
- [x] Enable NetworkPolicy API when POLICY_PROVIDER is provided.
- [x] Fix CNI plugin, policy controller versions.
CC @thockin - does this general approach look good?
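For reference, a rough sketch of what enabling the provider at bring-up looks like; the `POLICY_PROVIDER` variable name is taken from this PR's description and may differ in later releases.
```sh
# Sketch only: enable the Calico policy provider (and hence the
# extensions/v1beta1 NetworkPolicy API) on a GCE cluster at kube-up time.
export KUBERNETES_PROVIDER=gce
export POLICY_PROVIDER=calico
cluster/kube-up.sh
```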
Automatic merge from submit-queue
Tracked addition of federation, sed support in kube DNS
The kube DNS app recently gained support for federation (whatever that
is), including a new Salt parameter. This broke the deployAddons.sh script for cluster/ubuntu. The DNS app also gained alternate
templates, intended to be friendly to `sed`. Fortunately, those do
not demand a federation parameter.
This PR fixes up the `cluster/ubuntu/deployAddons.sh` script to track those changes by switching to the `sed`-friendly templates.
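A rough sketch of what the `sed`-friendly templates enable; the placeholder tokens shown here are illustrative, and the templates define the exact spelling.
```sh
# Sketch only: render a sed-friendly DNS template by substituting
# placeholders directly, with no Salt pillar values required.
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
sed -e "s/\\\$DNS_REPLICAS/${DNS_REPLICAS}/g" \
    -e "s/\\\$DNS_DOMAIN/${DNS_DOMAIN}/g" \
    skydns-rc.yaml.sed > skydns-rc.yaml
```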
Automatic merge from submit-queue
mount instanceid file from config drive when using openstack cloud provider
Fixes https://github.com/kubernetes/kubernetes/issues/23191. The instanceid file is read, but we do not mount it as a volume, which causes the cloud provider to contact the metadata server. In some cases the metadata server is not able to serve requests, and then the cloud provider fails to initialize; we should avoid that.
Automatic merge from submit-queue
federation: Updating KubeDNS to try finding a local service first for federation query
Ref https://github.com/kubernetes/kubernetes/issues/26762
Updating KubeDNS to try to find a local service first for federation query.
Without this change, KubeDNS always returns the DNS hostname, even if a local service exists.
I have updated the code to first remove the federation name from the path if it exists, so that the default search for a local service happens. If we don't find a local service, then we try to find the DNS hostname.
Will appreciate a strong review since this is my first change to KubeDNS.
https://github.com/kubernetes/kubernetes/pull/25727 was the original PR that added federation support to KubeDNS.
cc @kubernetes/sig-cluster-federation @quinton-hoole @madhusudancs @bprashanth @mml
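To illustrate the lookup order (service, namespace, and federation names below are hypothetical):
```sh
# A client resolves a federated name such as:
#   myservice.mynamespace.myfederation.svc.cluster.local
# With this change, KubeDNS first strips the federation label and looks for
# the local service:
#   myservice.mynamespace.svc.cluster.local
# and only falls back to returning the federated DNS hostname (pointing at
# another cluster) when no local service exists.
nslookup myservice.mynamespace.myfederation.svc.cluster.local
```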
Automatic merge from submit-queue
Support journal logs in fluentd-gcp on GCI
This maintains a single common image rather than having to fork out separate images, relying on different commands in the YAML manifests to differentiate the behavior. This is treading on top of @adityakali's #27906, but I wasn't able to get in touch with him until very recently this afternoon. He's handling making sure that the new YAML manifests are used when running on GCI.
Only run the systemd-journal plugin when on a platform that requests it.
The plugin crashes the fluentd process if the journal isn't present, so
it can't just be run blindly in all configurations.
Following from #27830, this copies the source onto the instance and
displays the location of it prominently (keeping the download link for
anyone that just wants to curl it).
Example output (this tag doesn't exist yet):
---
Welcome to Kubernetes v1.4.0!
You can find documentation for Kubernetes at:
http://docs.kubernetes.io/
The source for this release can be found at:
/usr/local/share/doc/kubernetes/kubernetes-src.tar.gz
Or you can download it at:
https://storage.googleapis.com/kubernetes-release/release/v1.4.0/kubernetes-src.tar.gz
It is based on the Kubernetes source at:
https://github.com/kubernetes/kubernetes/tree/v1.4.0
For Kubernetes copyright and licensing information, see:
/usr/local/share/doc/kubernetes/LICENSES
---
Automatic merge from submit-queue
Pushing a new KubeDNS image and updating the YAML files
Updating KubeDNS image to include https://github.com/kubernetes/kubernetes/pull/27845
@kubernetes/sig-cluster-federation @girishkalele @mml
Automatic merge from submit-queue
increase addon check interval
Do static pods have a crash-loop backoff? If so, this test would be much faster if we restarted the kubelet to clear that.
Fixes #26770
Automatic merge from submit-queue
AWS kube-up: move to Docker 1.11.2
This is to mirror GCE
We also remove support for Vivid, as Docker no longer packages for it, and remove some of the unreachable distro code in the AWS kube-up.
We also bump the AMI to a 1.3 version (with Docker 1.11.2 preinstalled).
Fixes https://github.com/kubernetes/kubernetes/issues/27654
Automatic merge from submit-queue
Update to dnsmasq:1.3 and make hyperkube always use the latest addons
This bumps dnsmasq to a version that works on all architectures: https://github.com/kubernetes/contrib/pull/1192 (which has to be pushed first, of course).
I also removed the manifests in the hyperkube addons in favor of machine-generated ones, which avoids mistakes.
This one is required for `v1.3`, so it has to be cherry-picked, I think...
It makes the docker and docker-multinode addons work again...
(Yes, we'll probably get rid of docker in favor of minikube, but we'll have to have it in this release at least)
@girishkalele @thockin @ArtfulCoder @david-mcmahon @bgrant0607 @mikedanese
This works around a linux kernel bug with overly aggressive caching of
ARP entries, which was causing problems when we reused IP addresses in
VPCs, for example with an ASG in a relatively small subnet.
See #23395 for more explanation.
Fixes #23395
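For context, the workaround is a sysctl-level change; a knob commonly used for this class of problem (not necessarily the exact one this change sets) is sketched below.
```sh
# Sketch only: let the kernel garbage-collect stale ARP entries instead of
# keeping a minimum number of them forever. gc_thresh1 is the floor below
# which entries are never pruned; 0 makes every entry a pruning candidate.
sysctl -w net.ipv4.neigh.default.gc_thresh1=0
```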
Vivid is EOL, and Docker is no longer packaged for it.
Remove support for it in 1.3 (in 1.2 we had warned users it was EOL).
Also remove unused wheezy, trusty, and coreos code, and do general cleanup.
This first reverts commit 8e8437dad8.
It also resolves conflicts with the docs in f334fc41,
and with https://github.com/kubernetes/kubernetes/pull/22231/commits,
which lets people switch between the two different setup methods by
setting environment variables.
Conflicts:
cluster/get-kube.sh
cluster/saltbase/salt/README.md
cluster/saltbase/salt/kube-proxy/default
cluster/saltbase/salt/top.sls
- Improve reliability of network address detection by using the MAC
  address. VMware has a MAC OUI that reliably distinguishes the VM's
  NICs from the other NICs (like the CBR), instead of relying on the
  unreliable reporting of the portgroup. (See the sketch after this list.)
- Persist route changes. We configure routes on the master and nodes,
but previously we didn't persist them so they didn't last across
reboots. This persists them in /etc/network/interfaces
- Fix regression that didn't configure auth for kube-apiserver with
Photon Controller.
- Reliably run apt-get update: Not doing this can cause apt to fail.
- Remove unused nginx config in salt
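A rough sketch of the MAC-OUI-based detection mentioned in the first item; the OUI list and sysfs paths are illustrative, not the exact code.
```sh
# Sketch only: pick the NICs whose MAC addresses carry a VMware OUI
# (e.g. 00:50:56 or 00:0c:29), which distinguishes the VM's NICs from
# bridge devices such as the CBR.
for dev in /sys/class/net/*; do
  [[ -f "${dev}/address" ]] || continue
  mac=$(cat "${dev}/address")
  case "${mac}" in
    00:50:56:*|00:0c:29:*|00:05:69:*)
      echo "VM NIC: $(basename "${dev}") (${mac})"
      ;;
  esac
done
```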
Automatic merge from submit-queue
Exit image puller subshell
Exit the subshell with 0 so that even if the last docker pull fails, the pod doesn't end up in the error state.
Automatic merge from submit-queue
Enable support for memory eviction configuration via salt
Added evictions based on memory by default whenever the available memory is < 100Mi.
Updated GCE and GCI.
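Concretely, the 100Mi default maps onto the kubelet's eviction flag, roughly as follows (the exact variable the salt templates use is not shown here).
```sh
# Sketch only: the hard eviction threshold wired into the kubelet flags.
KUBELET_ARGS=""
KUBELET_ARGS="${KUBELET_ARGS} --eviction-hard=memory.available<100Mi"
```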
Automatic merge from submit-queue
Bump cluster autoscaler version and enable scale down by default
Follow up of https://github.com/kubernetes/contrib/pull/1148.
cc: @piosz @fgrzadkowski @jszczepkowski
Automatic merge from submit-queue
Add collection of the new glbc and cluster-autoscaler logs
I've incremented the version numbers by 2 to avoid conflicting with #26652. I'll make sure the potential conflict between the images gets resolved reasonably.
cc @piosz @bprashanth @aledbf
Automatic merge from submit-queue
Switch DNS addons from skydns to kubedns
Change GCI and trusty cluster-helper scripts to use kubedns instead of skydns.
Unified the skydns templates using a simple underscore-based template and
added transform sed scripts to generate the salt and sed YAML
templates.
Moved all content out of cluster/addons/dns into build/kube-dns and
saltbase/salt/kube-dns.
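The single-template-plus-transforms approach looks roughly like this; the placeholder token and file names are illustrative.
```sh
# Sketch only: one base template carries underscore placeholders such as
# __PILLAR__DNS__DOMAIN__, and small sed transforms generate the two
# consumable variants from it.
sed -e "s/__PILLAR__DNS__DOMAIN__/{{ pillar['dns_domain'] }}/g" \
    kube-dns.yaml.base > kube-dns.yaml.in     # salt template
sed -e "s/__PILLAR__DNS__DOMAIN__/\$DNS_DOMAIN/g" \
    kube-dns.yaml.base > kube-dns.yaml.sed    # sed-friendly template
```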
Automatic merge from submit-queue
Add node problem detector as an addon pod.
```release-note
Introduce a new add-on pod NodeProblemDetector.
NodeProblemDetector is a DaemonSet that runs on each node, monitoring node health and reporting
node problems as NodeConditions and Events. It currently supports kernel log monitoring, and
will support more kinds of problem detection in the future. It is enabled by default on GCE now.
```
This PR enables NodeProblemDetector as an add-on pod.
/cc @mikedanese @kubernetes/sig-node
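Once the add-on is running, its output is visible through the normal node API; for example (node name hypothetical, and `KernelDeadlock` is one of the conditions the kernel-log monitor can report):
```sh
# The detector runs as a DaemonSet in kube-system:
kubectl --namespace=kube-system get pods -o wide | grep node-problem-detector
# Problems it finds surface as node conditions and events:
kubectl describe node my-node-1
```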
Automatic merge from submit-queue
Configuration for GCP webhook authentication and authorization
This PR adds configuration for GCP webhook authentication and authorization in ContainerVM and GCI. The changes to configure-vm.sh and kube-apiserver.manifest are directly copied from @cjcullen's PRs #25380 and #25296. The change in the GCI script configure-helper.sh includes support for webhook authentication and authorization, and also some code refactoring to improve readability.
@cjcullen @roberthbailey @zmerlynn please review it. The original PRs are P1, please mark this as P1.
cc/ @fabioy @kubernetes/goog-image FYI.
I verified it by running e2e tests on a GCI cluster. Without the GCI-side change, cluster creation fails, as captured by the GKE Jenkins tests. I didn't test with the two env vars GCP_AUTHN_URL and GCP_AUTHZ_URL set, because they are only set in GKE. After this PR is merged, @cjcullen will test in GKE.
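For context, the webhook configuration ultimately feeds kube-apiserver flags of this shape (the config file paths below are placeholders):
```sh
# Sketch only: webhook authn/authz flags on the apiserver command line.
kube-apiserver \
  --authentication-token-webhook-config-file=/etc/gcp_authn.config \
  --authorization-mode=Webhook \
  --authorization-webhook-config-file=/etc/gcp_authz.config
```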
Automatic merge from submit-queue
Salt configuration for the new Cluster Autoscaler for GCE
Adds support for the cloud autoscaler from contrib/cloud-autoscaler to the GCE kube-up.sh script.
cc: @fgrzadkowski @piosz
Automatic merge from submit-queue
Openstack provider
Our pull request delivers a solution for creating a Kubernetes cluster on top of OpenStack. The Heat OpenStack Orchestration engine describes the infrastructure for the Kubernetes cluster. CentOS images are used for the Kubernetes host machines.
We tested our solution with DevStack and the Citycloud provider.
We believe that our solution will fill a gap in the market.
Automatic merge from submit-queue
Add an entry to the salt config to allow Debian jessie on GCE.
```release-note
Add an entry to the salt config to allow Debian jessie on GCE.
As with the existing Wheezy image on GCE, docker is expected
to already be installed in the image.
```
CentOS 7 Core nodes running on OpenStack with an SSL-enabled API
endpoint result in the following error without this patch:
F0425 19:00:58.124520 5 server.go:100] Cloud provider could not be initialized: could not init cloud provider "openstack": Post https://my.openstack.cloud:5000/v2.0/tokens: x509: failed to load system roots and no roots provided
The root cause is that the ca-bundle.crt file is actually a symlink
which points to a directory which wasn't previously exposed.
[root@kubernetesstack-master ~]# ls -l /etc/ssl/certs/ca-bundle.crt
lrwxrwxrwx. 1 root root 49 18 nov 11:02 /etc/ssl/certs/ca-bundle.crt -> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
[root@kubernetesstack-master ~]#
Assuming that the person running kube-up has their
OpenStack environment set up, those same variables are passed
into Heat, and then into openstack.conf.
The Salt codebase was modified to add OpenStack as well.
Automatic merge from submit-queue
Switch to ABAC authorization from AllowAll
Switch from AllowAll to ABAC. All existing identities (that are created by deployment scripts) are given full permissions through ABAC. Manually created identities will need policies added to the `policy.jsonl` file on the master.
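For anyone adding identities by hand, an ABAC policy line looks roughly like this (the user name and file path are illustrative):
```sh
# Sketch only: grant a manually created user full access via ABAC.
cat >> /srv/kubernetes/policy.jsonl <<'EOF'
{"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user": "alice", "namespace": "*", "resource": "*", "apiGroup": "*"}}
EOF
```
The apiserver reads the policy file at startup, so it needs a restart to pick up changes.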
Automatic merge from submit-queue
don't source the kube-env in addon-manager
This was added in 2feb658ed7, became unused after #23603, but wasn't removed.
Automatic merge from submit-queue
Initial kube-up support for VMware's Photon Controller
This is for: https://github.com/kubernetes/kubernetes/issues/24121
Photon Controller is an open-source cloud management platform. More
information is available at:
http://vmware.github.io/photon-controller/
This commit provides initial support for Photon Controller. The
following features are tested and working:
- kube-up and kube-down
- Basic pod and service management
- Networking within the Kubernetes cluster
- UI and DNS addons
It has been tested with a Kubernetes cluster of up to 10
nodes. Further work on scaling is planned for the near future.
Internally we have implemented continuous integration testing and will
run it multiple times per day against the Kubernetes master branch
once this is integrated so we can quickly react to problems.
A few things have not yet been implemented, but are planned:
- Support for kube-push
- Support for test-build-release, test-setup, test-teardown
Assuming this is accepted for inclusion, we will write documentation
for the kubernetes.io site.
We have included a script to help users configure Photon Controller
for use with Kubernetes. While not required, it will help some
users get started more quickly. It will be documented.
We are aware of the kube-deploy efforts and will track them and
support them as appropriate.
Automatic merge from submit-queue
add HOME env variable for kube-addons service
Fix https://github.com/kubernetes/kubernetes/issues/23973.
Briefly, the systemd service does not know the `HOME` environment variable, which causes kubectl to write the schema file into `/.kube` while it is expected to be in `/root/.kube`.
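One way to express this kind of fix is a systemd drop-in (unit and file names below are illustrative, not necessarily what this PR does):
```sh
# Sketch only: give the kube-addons service an explicit HOME so kubectl
# writes its schema cache under /root/.kube instead of /.kube.
mkdir -p /etc/systemd/system/kube-addons.service.d
cat > /etc/systemd/system/kube-addons.service.d/10-home.conf <<'EOF'
[Service]
Environment="HOME=/root"
EOF
systemctl daemon-reload
systemctl restart kube-addons
```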
Automatic merge from submit-queue
add labels to kube component static pods
```
$ k --namespace=kube-system get po -l 'tier in (control-plane)'
NAME READY STATUS RESTARTS AGE
kube-apiserver-k-7-master 1/1 Running 2 1m
kube-controller-manager-k-7-master 1/1 Running 1 1m
kube-scheduler-k-7-master 1/1 Running 0 54s
$ k --namespace=kube-system get po -l 'tier in (node)'
NAME READY STATUS RESTARTS AGE
kube-proxy-k-7-minion-eheu 1/1 Running 0 1m
kube-proxy-k-7-minion-mwo9 1/1 Running 0 1m
kube-proxy-k-7-minion-xw6m 1/1 Running 0 1m
```
cc @bgrant0607 @thockin @gmarek
Fixes #21267
Automatic merge from submit-queue
don't ship kube-registry-proxy and pause images in tars.
Pause is built into containervm; if it's not on the machine, we should just pull
it. Nobody that I'm aware of uses kube-registry-proxy, and it makes build/deployment
more complicated and slower.
Files are taken from cluster/network-plugins/{bin,conf} to be consumed within a Vagrant kube-up.sh environment.
The paths used for configuration files and the 'cni' name of the network provider are all from the Kubernetes documentation.
Use of NETWORK_PROVIDER=cni is documented as usable (as are its effects on the runtime args of kubelet),
however the actual implementation in the Salt automation doesn't seem to exist.
This change attempts to fix that for the Vagrant use case.
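For reference, the documented usage this targets looks roughly like:
```sh
# Sketch only: bring up the Vagrant cluster with the CNI network provider,
# as the documentation describes.
export KUBERNETES_PROVIDER=vagrant
export NETWORK_PROVIDER=cni
cluster/kube-up.sh
```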
Automatic merge from submit-queue
use apply instead of create to setup namespaces and tokens in addon manager
When the addon manager restarts, it takes ~15 minutes (1000 seconds) to start the sync loop because it retries creation of the namespace and tokens 100 times. Create fails if they already exist. Just use apply.
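The difference in a nutshell (file name hypothetical):
```sh
# create is not idempotent: it errors once the object already exists, which
# is what forces the addon manager's long retry loop after a restart.
kubectl create -f kube-system-namespace.yaml
# apply is idempotent: it creates the object if missing, otherwise updates it.
kubectl apply -f kube-system-namespace.yaml
```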
Automatic merge from submit-queue
Create a new Deployment in kube-system for every version.
It appears that version numbers have already been properly added to these files. Small change to delete an old deployment entirely, so we can make a new one per version (like replication controllers).
We'll want to change this back once kube-addons supports deployments, in a later version.
This should allow the non_masquerade_cidr option to be configured
in /etc/salt/minion.d/grains.conf, allowing the flag to be used by kubelet
in /etc/sysconfig/kubelet. The default configuration is set in pillar.
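Roughly, the wiring looks like this (the CIDR value is only an example):
```sh
# Sketch only: set the grain in the minion's grains file ...
cat >> /etc/salt/minion.d/grains.conf <<'EOF'
grains:
  non_masquerade_cidr: 10.0.0.0/8
EOF
# ... which salt renders into the kubelet flags in /etc/sysconfig/kubelet:
#   --non-masquerade-cidr=10.0.0.0/8
```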
Allow the gcr.io/google_containers registry to be overridden
regionally by just blasting a new KUBE_ADDON_REGISTRY out. Instead of
adding every addon to Salt and asking all of the other consumers
(Trusty, Juju, Mesos, etc) to change, just script the sed ourselves.
This is probably the 9th grossest thing I've ever done, but it works
well, and it works quickly. I kind of wish it didn't.
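The sed in question is roughly of this shape (the addon manifest location is illustrative):
```sh
# Sketch only: rewrite the addon manifests to point at a regional registry.
KUBE_ADDON_REGISTRY="${KUBE_ADDON_REGISTRY:-gcr.io/google_containers}"
find /etc/kubernetes/addons -name '*.yaml' -print0 \
  | xargs -0 sed -i -e "s@gcr\.io/google_containers@${KUBE_ADDON_REGISTRY}@g"
```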
It includes some performance improvements for parsing JSON (which is
very important for us, since all Docker logs are JSON) as well as a
couple of new settings, like forcing a flush of multiline logs after a
time period rather than having to wait until a new log is seen before
feeling confident flushing the previous one.
- Remove CPU limits to enable CPU bursting once 1.2 begins enforcing CPU limits.
- Add a memory limit for fluentd-es to match fluentd-gcp.
- Explicitly set requests to match limits.
This change revises the way kube-system manifests are provided for clusters on Trusty. Originally, we maintained copies of some manifests under cluster/gce/trusty/kube-manifests, which does not scale and is hard to maintain. With this change, clusters on Trusty use the same source of manifests as ContainerVM. This change also fixes some minor problems, such as shell variables and comments, to better meet the style guidance.
Starting docker through Salt has always been problematic. Kubelet or
the babysitter process should start it. We've kept it around primarily
so we have a `service: docker` node for the Salt DAG.
Instead, we enable (but do not start) the Docker service in Salt. This
lets us keep the DAG node, but won't start it.
There's another bug in Salt, where watches will start the service even
on `service.enabled`. So we remove the watches, and move them to our
existing Salt bug-fix script.
The Docker 1.9.1 package on Debian is broken, and the service fails to
install when run unattended. This is treated as an installation failure
and causes everything to fail.
However, the service can be started by Salt once we're not installing
the package, and indeed we restart docker anyway.
So, on Debian, use a helper script to install the docker package. The
script sets up a policy-rc.d file to prevent the service starting, and
then cleanly removes it afterwards (this would be difficult to do in
Salt, I believe).
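The policy-rc.d mechanism such a helper script relies on looks roughly like this (the package name and paths are the conventional ones; treat this as a sketch rather than the script itself):
```sh
# Sketch only: temporarily deny all service starts while installing the
# package, then remove the policy file again.
cat > /usr/sbin/policy-rc.d <<'EOF'
#!/bin/sh
exit 101   # 101 = "action forbidden by policy"; invoke-rc.d will not start services
EOF
chmod +x /usr/sbin/policy-rc.d
apt-get install -y docker-engine   # installs without the broken service start failing the run
rm -f /usr/sbin/policy-rc.d        # clean up; docker gets (re)started later anyway
```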