Automatic merge from submit-queue
Enable setting up a Kubernetes cluster on Ubuntu on Azure
Implement basic cloud provider functionality to deploy Kubernetes on
Azure. SaltStack is used to deploy Kubernetes on top of Ubuntu
virtual machines. OpenVPN provides network connectivity. For
kubelet authentication, we use basic authentication (username and
password). The scripts use the legacy Azure Service Management APIs.
We have set up a nightly test job in our Jenkins server for federated
testing to run the e2e test suite on Azure. With the cloud provider
scripts in this commit, 14 e2e test cases pass in this environment.
We plan to implement additional Azure functionality to support more
test cases.
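For illustration, bringing such a cluster up would look roughly like the sketch below. KUBERNETES_PROVIDER=azure selects the cloud provider scripts under cluster/; the AZURE_SUBSCRIPTION_ID variable and placeholder value are assumptions for this sketch, not the exact names used by the scripts.

```sh
# Sketch only: launch the Azure cluster described above.
export KUBERNETES_PROVIDER=azure              # use the Azure cluster scripts
export AZURE_SUBSCRIPTION_ID=<subscription>   # placeholder; assumed variable name
cluster/kube-up.sh                            # deploys master + Ubuntu VMs via SaltStack
```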
Automatic merge from submit-queue
Add Calico as policy provider in GCE
Adds Calico as a policy provider on GCE, enforcing the extensions/v1beta1 NetworkPolicy API.
Still to do:
- [x] Enable NetworkPolicy API when POLICY_PROVIDER is provided.
- [x] Fix CNI plugin, policy controller versions.
CC @thockin - does this general approach look good?
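As a sketch of how this gets switched on (POLICY_PROVIDER is the variable named above; the "calico" value is an assumption about the final spelling):

```sh
# Sketch only: bring up a GCE cluster with Calico enforcing NetworkPolicy.
export KUBERNETES_PROVIDER=gce
export POLICY_PROVIDER=calico   # also enables the extensions/v1beta1 NetworkPolicy API
cluster/kube-up.sh
```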
This first reverts commit 8e8437dad8.
It also resolves conflicts with the docs in f334fc41, and with
https://github.com/kubernetes/kubernetes/pull/22231/commits, so that
people can switch between the two different setup methods by setting
env variables.
Conflicts:
cluster/get-kube.sh
cluster/saltbase/salt/README.md
cluster/saltbase/salt/kube-proxy/default
cluster/saltbase/salt/top.sls
Automatic merge from submit-queue
Salt configuration for the new Cluster Autoscaler for GCE
Adds support for the cloud autoscaler from contrib/cloud-autoscaler to the kube-up.sh GCE script.
cc: @fgrzadkowski @piosz
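A hypothetical sketch of turning the autoscaler on during kube-up; the ENABLE_CLUSTER_AUTOSCALER and AUTOSCALER_* variable names are illustrative assumptions, not names confirmed by this change:

```sh
# Sketch only: enable the cluster autoscaler on a GCE cluster.
export KUBERNETES_PROVIDER=gce
export ENABLE_CLUSTER_AUTOSCALER=true   # assumed flag consumed by the Salt config
export AUTOSCALER_MIN_NODES=1           # assumed lower bound for the node group
export AUTOSCALER_MAX_NODES=10          # assumed upper bound
cluster/kube-up.sh
```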
Assuming that the person running kube-up has their
OpenStack environment set up, those same variables are passed
into Heat, and then into openstack.conf.
The Salt codebase was modified to add OpenStack as well.
This is for: https://github.com/kubernetes/kubernetes/issues/24121
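For context, the environment referred to above is the standard OpenStack credential set; the openstack-heat provider name is an assumption in this sketch:

```sh
# Sketch only: kube-up forwards the usual OS_* credentials into the Heat
# stack and, from there, into openstack.conf.
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0   # example endpoint
export OS_USERNAME=demo
export OS_PASSWORD=secret
export OS_TENANT_NAME=demo
export KUBERNETES_PROVIDER=openstack-heat   # provider name assumed for this sketch
cluster/kube-up.sh
```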
Photon Controller is an open-source cloud management platform. More
information is available at:
http://vmware.github.io/photon-controller/
This commit provides initial support for Photon Controller. The
following features are tested and working:
- kube-up and kube-down
- Basic pod and service management
- Networking within the Kubernetes cluster
- UI and DNS addons
It has been tested with a Kubernetes cluster of up to 10
nodes. Further work on scaling is planned for the near future.
Internally, we have implemented continuous integration testing; once this is
integrated, we will run it multiple times per day against the Kubernetes
master branch so we can react quickly to problems.
A few things have not yet been implemented, but are planned:
- Support for kube-push
- Support for test-build-release, test-setup, test-teardown
Assuming this is accepted for inclusion, we will write documentation
for the kubernetes.io site.
We have included a script to help users configure Photon Controller
for use with Kubernetes. While not required, it will help some
users get started more quickly. It will be documented.
We are aware of the kube-deploy efforts and will track them and
support them as appropriate.
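A minimal sketch of the kube-up/kube-down flow listed above; the photon-controller provider string is an assumption about how the provider is selected:

```sh
# Sketch only: cluster lifecycle on Photon Controller.
export KUBERNETES_PROVIDER=photon-controller   # provider name assumed
cluster/kube-up.sh     # provisions the master and nodes
cluster/kube-down.sh   # tears the cluster back down
```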
Use of NETWORK_PROVIDER=cni is documented as usable (as well as its effects on the runtime args of kubelet);
however, the actual implementation in the Salt automation doesn't seem to exist.
This change attempts to fix that for the Vagrant use case.
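A sketch of exercising the fixed path, using exactly the variables named above:

```sh
# Sketch only: bring up a Vagrant cluster with the CNI network provider,
# which should now wire up the corresponding kubelet runtime args via Salt.
export KUBERNETES_PROVIDER=vagrant
export NETWORK_PROVIDER=cni
cluster/kube-up.sh
```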
When KUBE_E2E_STORAGE_TEST_ENVIRONMENT is set to 'true', the kube-up.sh script
will:
- Install the right packages for all storage volumes.
- Use devicemapper as the docker storage backend. 'aufs', the default one on
Debian, does not support the extended attributes required by Ceph RBD and Gluster
server containers.
Tested on GCE and Vagrant; the e2e tests for storage volumes pass without any
additional configuration.
OpenContrail is open-source networking software that provides virtualization support for the cloud.
This change-set adds the ability to install and provision OpenContrail software for networking in a Kubernetes-based cloud environment.
There are basically three components:
o kube-network-manager -- plugin between contrail components and Kubernetes components
o provision_master.sh -- OpenContrail software installer and provisioner on the master node
o provision_minion.sh -- OpenContrail software installer and provisioner on minion node(s)
These are driven via Salt configuration files.
One can provision OpenContrail by just setting "export NETWORK_PROVIDER=opencontrail".
Optionally, OPENCONTRAIL_TAG and OPENCONTRAIL_KUBERNETES_TAG can be used to
specify the opencontrail and contrail-kubernetes software versions to install and provision.
The public-IP subnet provided by contrail can be configured via the OPENCONTRAIL_PUBLIC_SUBNET
environment variable.
At this moment, the plan is to add support for AWS-, GCE-, and Vagrant-based platforms.
For more information on contrail-kubernetes, please visit https://github.com/juniper/contrail-kubernetes
For more information on opencontrail, please visit http://www.opencontrail.org
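All of the knobs above combine into a single sketch (the tag and subnet values are examples only):

```sh
# Sketch only: provision OpenContrail as the network provider, pinning
# component versions and the public-IP subnet.
export NETWORK_PROVIDER=opencontrail
export OPENCONTRAIL_TAG=R2.20                   # example contrail version
export OPENCONTRAIL_KUBERNETES_TAG=master       # example contrail-kubernetes version
export OPENCONTRAIL_PUBLIC_SUBNET=10.1.0.0/16   # example public-IP subnet
cluster/kube-up.sh
```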
This registry can be accessed through proxies that run on each node
listening on port 5000. We send the proxy images to the nodes directly
to avoid requests that hit the network during cluster launch. For now,
we continue to pull the registry itself over the network, especially
given its large size (we should be able to dramatically shrink the
image). On GCE we create a PD and use that for storage, otherwise we
use an emptyDir. The registry is not enabled outside of GCE. All
communication is currently plain HTTP. In order to use SSL, we will
need to be able to request a certificate/key from the apiserver signed
by the apiserver's CA cert.
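A sketch of how the node-local proxy on port 5000 is consumed; the image name is an example:

```sh
# Sketch only: push through the registry proxy on a node (plain HTTP, as
# noted above), then reference the image by the same localhost:5000 name.
docker tag myapp:v1 localhost:5000/myapp:v1
docker push localhost:5000/myapp:v1
# Pod specs can then use image: localhost:5000/myapp:v1 on any node.
```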
The AWS API requires a signature on method calls, including the
timestamp to prevent replay attacks. A time drift of up to 5 minutes
between client and server is tolerated.
However, if the client clock drifts by >5 minutes, the server will start
to reject API calls (with the cryptic "AWS was not able to validate the
provided access credentials").
To prevent this from happening, we install ntp on all nodes.
Fixes #11371
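A sketch of what the node setup amounts to, assuming a Debian/Ubuntu image (the Salt specifics may differ):

```sh
# Sketch only: install and verify ntp so the node clock stays within the
# ~5 minute drift the AWS API tolerates.
apt-get update && apt-get install -y ntp
service ntp start   # assumed init-style service management
ntpq -p             # list the peers ntp is synchronizing against
```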