Remove a comment that disabled the redirection of output destined for
`/etc/salt/minion.d/grains.conf`. It was most likely a comment added to
debug the generation of that line by viewing it on `STDOUT`.
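Purely as a hedged illustration of the kind of change involved (the actual generation line in the salt scripts differs; the variable name below is a placeholder):

    # before: redirect commented out while debugging, so the line went to STDOUT
    echo "${grains_content}" # > /etc/salt/minion.d/grains.conf
    # after: the comment is removed and the redirect is restored
    echo "${grains_content}" > /etc/salt/minion.d/grains.conf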
Addresses #15968
This patch removes KUBE_ENABLE_EXPERIMENTAL_API and similar calls in
favor of specifying desired features in KUBE_RUNTIME_CONFIG. Changes
have also been made to the e2e scripts to re-enable the use of
KUBE_RUNTIME_CONFIG rather than the EXPERIMENTAL_API env vars.
This also introduces KUBE_ENABLE_DAEMONSETS and KUBE_ENABLE_DEPLOYMENTS.
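A hedged example of the intended usage (the runtime-config key is an assumption about the group/version syntax, not taken verbatim from this change):

    # new per-feature toggles
    export KUBE_ENABLE_DAEMONSETS=true
    export KUBE_ENABLE_DEPLOYMENTS=true
    # or specify the desired features directly via the runtime config
    export KUBE_RUNTIME_CONFIG="extensions/v1beta1=true"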
Signed-off-by: Christian Stewart <christian@paral.in>
When KUBE_E2E_STORAGE_TEST_ENVIRONMENT is set to 'true', the kube-up.sh script
will:
- Install the right packages for all storage volumes.
- Use devicemapper as the docker storage backend. 'aufs', the default on
Debian, does not support the extended attributes required by the Ceph RBD and Gluster
server containers.
Tested on GCE and Vagrant; the e2e tests for storage volumes pass without any
additional configuration.
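A hedged usage sketch (the provider value is illustrative):

    # bring up a cluster prepared for the storage e2e tests
    export KUBE_E2E_STORAGE_TEST_ENVIRONMENT=true
    KUBERNETES_PROVIDER=gce ./cluster/kube-up.sh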
Fixes the AWS Ubuntu deployment, which was broken by the choice between the
extra-$(uname) and extra-virtual packages to install. See issue #14162
Signed-off-by: Christian Stewart <christian@paral.in>
We need this for some tests; not all of the options are fully plumbed in,
but this should enable experimental/v1alpha1, as needed for the jobs tests.
In particular, ENABLE_NODE_AUTOSCALER is not yet actually implemented.
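A hedged sketch of the relevant setting (the exact runtime-config key is an assumption):

    # enable the experimental v1alpha1 group needed by the jobs tests
    export KUBE_RUNTIME_CONFIG="experimental/v1alpha1=true"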
OpenContrail is open-source networking software that provides virtualization support for the cloud.
This change set adds the ability to install and provision OpenContrail for networking in a Kubernetes-based cloud environment.
There are basically three components:
o kube-network-manager -- the plugin between contrail components and Kubernetes components
o provision_master.sh -- OpenContrail software installer and provisioner on the master node
o provision_minion.sh -- OpenContrail software installer and provisioner on the minion node(s)
These are driven via salt configuration files
One can provision opencontrail by just setting "export NETWORK_PROVIDER=opencontrail"
Optionally, OPENCONTRAIL_TAG and OPENCONTRAIL_KUBERNETES_TAG can be used to
specify the opencontrail and contrail-kubernetes software versions to install and provision.
The public-IP subnet provided by contrail can be configured via the
OPENCONTRAIL_PUBLIC_SUBNET environment variable.
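For example (a hedged sketch; the tag and subnet values are placeholders rather than defaults taken from this change):

    export NETWORK_PROVIDER=opencontrail
    # optional version and subnet overrides
    export OPENCONTRAIL_TAG=<contrail-release-tag>
    export OPENCONTRAIL_KUBERNETES_TAG=<contrail-kubernetes-release-tag>
    export OPENCONTRAIL_PUBLIC_SUBNET=10.1.0.0/16
    ./cluster/kube-up.sh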
At this moment, the plan is to add support for AWS, GCE, and Vagrant-based platforms.
For more information on contrail-kubernetes, please visit https://github.com/juniper/contrail-kubernetes
For more information on opencontrail, please visit http://www.opencontrail.org
We were splitting the aufs storage into docker & kubernetes areas, but
the kubernetes area was filling up very quickly because empty volumes
went on there, and I had originally not sized it big enough for that.
Instead, create one volume for both so they can share space freely. We
can't do this for devicemapper, but that configuration seems to be
deprecated by Docker anyway.
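A hedged sketch of the resulting layout (the device name, mount point, and daemon flags are illustrative assumptions, not taken from this change):

    # one volume shared by docker and kubernetes state
    mkfs.ext4 /dev/xvdb
    mkdir -p /mnt/ephemeral
    mount /dev/xvdb /mnt/ephemeral
    mkdir -p /mnt/ephemeral/docker /mnt/ephemeral/kubernetes
    # docker and the kubelet are then pointed at these directories,
    # e.g. docker -g /mnt/ephemeral/docker and kubelet --root-dir=/mnt/ephemeral/kubernetes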
GCE does this in its per-provider scripts; this change does the same for AWS and lets
other providers follow suit. I believe kube2sky requires 10.0.0.1 as a SAN.
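Purely as an illustration of adding an IP SAN when generating the apiserver certificate (the openssl invocation below is an assumption; the scripts' actual cert generation differs):

    # include the service IP as a subjectAltName on the apiserver CSR (bash-only: process substitution)
    openssl req -new -key server.key -subj "/CN=kube-apiserver" -reqexts SAN \
      -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=IP:10.0.0.1,DNS:kubernetes")) \
      -out server.csr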
If we're going to have a persistent disk on /mnt/master-pd, it sometimes seems
risky to have /mnt itself be a mounted volume.
A new consistent approach: we mount volumes under /mnt/<name>.
For parity with GCE, we really want to support aufs.
But we previously supported btrfs, so we want to expose that.
Most of the work here is required for aufs, and we let advanced users choose
devicemapper/btrfs if they have a setup that works for those configurations.
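A hedged sketch of the convention and the backend choice (the device names and the DOCKER_STORAGE variable are assumptions for illustration, not confirmed by this change):

    # volumes now land under /mnt/<name> instead of directly on /mnt
    mount /dev/xvdf /mnt/master-pd     # persistent disk
    mount /dev/xvdb /mnt/ephemeral     # local ephemeral storage
    # advanced users can still pick a different docker storage backend
    export DOCKER_STORAGE=btrfs        # aufs is the default; devicemapper also possible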
The latest salt version breaks the container_bridge.py _state function.
We can lock to the same version as GCE. This is not a full fix,
because we can't update to the latest salt without breaking GCE,
but this at least unbreaks AWS and syncs it with GCE.
This isn't a straight copy from GCE, because we still use
the salt master on AWS (for now)
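As a hedged illustration of one way to pin the salt packages (the mechanism and version pattern are assumptions; the scripts may pin differently):

    # hold salt at a known-good release via apt preferences
    printf 'Package: salt-*\nPin: version 2014.1.*\nPin-Priority: 1001\n' \
      > /etc/apt/preferences.d/salt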
Fixes #8114
Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
It will probably work on other up-to-date providers,
but beware: the symptom would be that service proxying
stops working.
1. Generates a token for kube-proxy in the AWS, GCE, and Vagrant setup scripts.
2. Distributes the token via the salt-overlay, and via salt to /var/lib/kube-proxy/kubeconfig.
3. Changes the kube-proxy args (see the sketch after this list):
- use the --kubeconfig argument
- change the --master argument from http://MASTER:7080 to https://MASTER
  - http -> https
  - explicit port 7080 -> implied 443
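A rough before/after of the kube-proxy invocation, as implied by the list above (only the flags named there are taken from this change; everything else is illustrative):

    # before
    kube-proxy --master=http://MASTER:7080
    # after
    kube-proxy --master=https://MASTER --kubeconfig=/var/lib/kube-proxy/kubeconfig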
Possible ways this might break other distros:
Mitigation: there is a default empty kubeconfig file.
If the distro does not populate the salt-overlay, then
it should get the empty file, which parses to an empty
object, which, combined with the --master argument,
should still work.
Mitigation:
- azure: Special case to use 7080 in
- rackspace: way out of date, so don't care.
- vsphere: way out of date, so don't care.
- other distros: not using salt.
This is a stop-gap fix; we'd really like to use EC2 instance ids, but that is
blocked on #7092, or on changing that health check so it does not assume that the node name
is resolvable.
This stop-gap essentially reverts #7072 for AWS
Generates the new token on AWS, GCE, Vagrant.
Renames instance metadata from "kube-token" to "kubelet-token".
(Is this okay for GKE?)
Having separate tokens for kubelet and kube-proxy permits
applying the principle of least privilege, makes it easy to
rate-limit the clients separately, and allows annotating
apiserver logs with the client identity at a finer grain
than just source IP.
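A hedged sketch of how a per-component token might be generated and recorded (the pipeline, path, and CSV format follow the known_tokens.csv convention used by these scripts, but are assumptions here):

    # generate a random token for kube-proxy and register it with the apiserver's token file
    KUBE_PROXY_TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    echo "${KUBE_PROXY_TOKEN},kube_proxy,kube_proxy" >> /srv/salt-overlay/salt/kube-apiserver/known_tokens.csv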