Commit Graph

85 Commits (224aebd2be20d658bda6aea8cb2cfa2370b68f5b)

Author SHA1 Message Date
Mike Danese e2c5c898fb move vagrant to masterless salt 2015-12-01 15:53:50 -08:00
Brad Erickson fc04b55088 Minion->Node rename: NODE_NAMES, NODE_NAME, NODE_PORT 2015-11-25 00:45:09 -08:00
Brad Erickson 6fe68a737e Minion->Node rename: NODE_IP_BASE, NODE_IP_RANGES, NODE_IP_RANGE, NODE_IPS, NODE_IP, NODE_MEMORY_MB
2015-11-25 00:45:09 -08:00
Jan Safranek fe0741bffe Configure cluster for e2e tests.
When KUBE_E2E_STORAGE_TEST_ENVIRONMENT is set to 'true', the kube-up.sh script
will:

- Install the right packages for all storage volumes.
- Use devicemapper as the docker storage backend. 'aufs', the default one on
Debian, does not support the extended attributes required by the Ceph RBD and
Gluster server containers.

Tested on GCE and Vagrant; e2e tests for storage volumes pass without any
additional configuration.
2015-10-29 11:03:34 +01:00
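A minimal sketch of enabling this (assuming the standard cluster/kube-up.sh entry point; only the environment variable comes from the commit message):

    # Configure the cluster for storage e2e tests before bringing it up.
    export KUBE_E2E_STORAGE_TEST_ENVIRONMENT=true
    cluster/kube-up.sh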
Ananth Suryanarayana d50d7763da Add OpenContrail networking provisioning support to Kubernetes salt-based provisioning
OpenContrail is open-source networking software that provides virtualization support for the cloud.

This change-set adds the ability to install and provision OpenContrail software for networking in a Kubernetes-based cloud environment.

There are basically three components:

o kube-network-manager -- the plugin between Contrail components and Kubernetes components
o provision_master.sh -- OpenContrail software installer and provisioner for the master node
o provision_minion.sh -- OpenContrail software installer and provisioner for the minion node(s)

These are driven via salt configuration files.

One can provision OpenContrail by just setting "export NETWORK_PROVIDER=opencontrail".
Optionally, OPENCONTRAIL_TAG and OPENCONTRAIL_KUBERNETES_TAG can be used to
specify the opencontrail and contrail-kubernetes software versions to install and provision.

The public-IP subnet provided by Contrail can be configured via the
OPENCONTRAIL_PUBLIC_SUBNET environment variable.

At this moment, the plan is to add support for AWS, GCE, and Vagrant-based platforms.

For more information on contrail-kubernetes, please visit https://github.com/juniper/contrail-kubernetes
For more information on OpenContrail, please visit http://www.opencontrail.org
2015-10-03 08:03:02 -07:00
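A hedged sketch of an OpenContrail bring-up using only the variables named above (the tag and subnet values are placeholders, not tested defaults):

    # Select OpenContrail as the network provider.
    export NETWORK_PROVIDER=opencontrail
    # Optional: pin the opencontrail and contrail-kubernetes versions (placeholder tags).
    export OPENCONTRAIL_TAG=R2.20
    export OPENCONTRAIL_KUBERNETES_TAG=master
    # Optional: public-IP subnet served by Contrail (placeholder CIDR).
    export OPENCONTRAIL_PUBLIC_SUBNET=10.1.0.0/16
    cluster/kube-up.sh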
derekwaynecarr c1b2f62299 Vagrant salt-minion should have low oom_score_adj and restart policy 2015-09-22 16:02:30 -04:00
k8s-merge-robot 37c4e2eba3 Merge pull request #13808 from derekwaynecarr/add_cockpit
Auto commit by PR queue bot
2015-09-17 08:36:34 -07:00
derekwaynecarr b59441f8c4 Add Fedora Cockpit to vagrant setup to administer/introspect kubernetes 2015-09-15 21:28:41 -04:00
derekwaynecarr 360e7620d3 Move vagrant to flannel 2015-09-15 15:42:38 -04:00
derekwaynecarr aff9ee5a40 Enable CFS quota in vagrant setup 2015-09-03 13:44:28 -04:00
Fred Jean 1305f54645 Booting a Kubernetes cluster on Vagrant
* Using Fedora 21 as the base box
* Discover the active network interfaces in the box to avoid hardcoding
  them in configuration.
* Use the master IP for the certificate.
2015-08-27 21:43:36 -06:00
Maru Newby 4711eff229 Vagrant: Make F21 fixup conditional
The default Fedora 21 image requires some manual networking fixup that
breaks Fedora 22.  This change ensures that the fixup in question is run
only for Fedora 21.
2015-08-13 13:38:54 -07:00
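The patch itself is not shown in this log; a plausible sketch of such a guard (the release-file check and the helper name are assumptions):

    # Apply the networking fixup only on Fedora 21, leaving Fedora 22 untouched.
    if grep -qs "Fedora release 21" /etc/fedora-release; then
      fixup_f21_networking   # hypothetical helper wrapping the manual fixup
    fi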
derekwaynecarr df0ca1c54c Fix vagrant kube-up 2015-08-11 01:10:34 -04:00
Piotr Szczesniak f48543aba5 Made enabling Kube UI configurable 2015-07-27 08:23:04 +02:00
Mike Danese d397d88499 Merge pull request #11390 from jfchevrette/fix-vagrant-eth1
Vagrant: virtualbox host-only network (eth1) not working after network restart
2015-07-24 13:12:24 -07:00
Wojciech Tyczynski a407051075 Merge pull request #11064 from derekwaynecarr/add_cert_ip_back
Some users of vagrant were getting different ip addresses in cert
2015-07-23 08:18:57 +02:00
Jean-Francois Chevrette 04d377eff8 properly make sure that eth1 is not managed by NetworkManager 2015-07-16 18:05:08 -04:00
Jean-Francois Chevrette e9bfe17f58 restart network twice to work around a bug 2015-07-16 14:57:23 -04:00
Jason Riddle b1fcb33c56 Change suggestion to use make quick-release 2015-07-13 13:36:00 -04:00
Jason Riddle 312d54c014 Add KUBE_RELEASE_RUN_TESTS=n to suggestion
Without KUBE_RELEASE_RUN_TESTS=n, it can take quite a while to build all of the necessary binaries since the tests have to run.
2015-07-11 19:33:11 -04:00
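Combining the two suggestions above, the faster build boils down to (a sketch based only on these commit messages):

    # Build release binaries without running the full test suite first.
    KUBE_RELEASE_RUN_TESTS=n make quick-release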
derekwaynecarr 4898b014ec Some users of vagrant were getting different ip addresses in cert 2015-07-10 12:01:47 -04:00
derekwaynecarr e2ddd2dd7b Missing ca crt in vagrant controllers 2015-07-08 10:59:10 -04:00
Eric Paris 58df58f3d7 Remove unused enable_node_monitoring option
Back in 1a7f7245e7 we dropped the one
place this was used, but left all of the variables, definitions, and
garbage around cluster/
2015-06-25 20:57:56 -04:00
derekwaynecarr db202d4904 Remove nginx from vagrant 2015-06-23 13:07:50 -04:00
BenTheElder 4437312993 Fix vagrant client authorization. 2015-06-11 23:46:01 -04:00
derekwaynecarr 2168cee414 Upgrade to Fedora 21, Docker 1.6, clean-up SDN 2015-06-04 10:59:23 -04:00
Tim Hockin ac3cc3c518 Rename PORTAL_NET all over 2015-05-28 16:10:44 -07:00
derekwaynecarr 2f1dd9228f Fix Vagrant node registration and kube-push 2015-05-27 10:50:57 -04:00
invenfantasy 9ff8f7ec7d remove duplicate configuration 2015-05-24 23:20:03 +08:00
Eric Paris 6b3a6e6b98 Make copyright ownership statement generic
Instead of saying "Google Inc." (which is not always correct) say "The
Kubernetes Authors", which is generic.
2015-05-01 17:49:56 -04:00
Jan Safranek 6e810492fb Fixed name of kube-proxy path in deployment scripts. 2015-04-28 10:10:37 +02:00
Eric Tune 9044177bb6 Generate a token for kube-proxy.
Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
It will probably work on other up-to-date providers,
but beware: the symptom would be that service proxying
stops working.

 1. Generates a token for kube-proxy in the AWS, GCE, and Vagrant setup scripts.
 1. Distributes the token via salt-overlay, and salt to /var/lib/kube-proxy/kubeconfig
 1. Changes kube-proxy args:
   - use the --kubeconfig argument
   - changes --master argument from http://MASTER:7080 to https://MASTER
     - http -> https
     - explicit port 7080 -> implied 443

Possible ways this might break other distros:

Mitigation: there is a default empty kubeconfig file.
If the distro does not populate the salt-overlay, then
it should get the empty file, which parses to an empty
object and which, combined with the --master argument,
should still work.

Mitigation:
  - azure: Special case to use 7080 in
  - rackspace: way out of date, so don't care.
  - vsphere: way out of date, so don't care.
  - other distros: not using salt.
2015-04-27 08:59:57 -07:00
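A sketch of the argument change from item 3 (MASTER stands in for the real master address):

    # Before: plain HTTP against the explicit 7080 port.
    kube-proxy --master=http://MASTER:7080
    # After: HTTPS against the implied 443 port, with credentials read from the kubeconfig.
    kube-proxy --master=https://MASTER --kubeconfig=/var/lib/kube-proxy/kubeconfig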
Jan Safranek 1c8f888477 Fix vagrant setup broken by commit 7475efbcfb.
- 'local' can be used only inside bash functions
- s/KNOWN_TOKENS_FILE/known_tokens_file
2015-04-23 11:00:10 +02:00
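A minimal bash illustration of the first point (the path is illustrative, not taken from the patch):

    # Broken: at the top level of a script, bash rejects this with
    # "local: can only be used in a function":
    #   local known_tokens_file=...
    #
    # Fixed: use a plain assignment at the top level...
    known_tokens_file="/srv/salt-overlay/salt/kube-apiserver/known_tokens.csv"

    # ...or keep 'local' by moving the code into a function.
    read_tokens() {
      local known_tokens_file="/srv/salt-overlay/salt/kube-apiserver/known_tokens.csv"
      cat "${known_tokens_file}"
    }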
Zach Loafman 86468cd29d Revert "Added kube-proxy token." 2015-04-22 10:55:08 -07:00
Zach Loafman b98f93bb4b Merge pull request #7112 from erictune/kubeconfig-secrets
Extend PR#5470 for AWS and Vagrant
2015-04-22 09:25:53 -07:00
Eric Tune 2ca8a9d15d Added kube-proxy token.
Generates the new token on AWS, GCE, Vagrant.
Renames instance metadata from "kube-token" to "kubelet-token".
(Is this okay for GKE?)

Having separate tokens for kubelet and kube-proxy permits
using the principle of least privilege, makes it easy to
rate limit the clients separately, allows annotation
of apiserver logs with the client identity at a finer grain
than just source-ip.
2015-04-21 09:21:31 -07:00
Eric Tune 7475efbcfb Extend PR#5470 for AWS and Vagrant 2015-04-21 08:22:31 -07:00
yaoguo e597b41d93 Remove duplicate localhost setting 2015-04-10 00:10:47 +08:00
Abhishek Shah fb665ede4c Run etcd on localhost for all providers. 2015-04-03 14:00:44 -07:00
Derek Carr 2af9b54147 Merge pull request #6259 from zmerlynn/fix_cloud_provider
Eliminate grains.cloud_provider (in preference to grains.cloud) from SaltStack
2015-04-01 17:04:05 -04:00
Zach Loafman b581320bf7 Eliminate grains.cloud_provider (in preference to grains.cloud) from SaltStack
This variable can be entirely derived from grains.cloud, and it
simplifies the configuration somewhat. (Or someone convince me I'm
wrong. I'm happy to be wrong here.)
2015-04-01 08:32:32 -07:00
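For reference, the surviving grain can be inspected per minion with a standard Salt CLI call (shown only as a sketch):

    # Query the 'cloud' grain that now drives provider-specific configuration.
    salt '*' grains.get cloud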
Zach Loafman 0806e3bde0 rm Salt grains.master_ip
This appears in the Salt documentation, is set by Vagrant, but has no
consumers. Remove vestigial references.
2015-03-31 17:31:47 -07:00
jayunit100 9b67949085 Fix vagrant so that ssh commands work OOTB (squashed); move verify to vagrant/util.sh, remove run_provider_test, cleanup. 2015-03-18 15:02:12 -04:00
derekwaynecarr 468bf1da75 Enable common set of admission controllers across salt providers 2015-03-11 11:06:00 -04:00
derekwaynecarr 2ed8eed004 Make admission control plug-ins work from indexes 2015-03-06 09:36:57 -05:00
derekwaynecarr 5fdf6b131c Fix error provisioning kube-apiserver on vagrant 2015-02-27 10:17:46 -08:00
derekwaynecarr 87a41b0934 Improve vagrant reliability, fix race condition with openvswitch and docker 2015-02-21 13:31:50 -05:00
derekwaynecarr 0bd0e12bbc Add support for Namespace as Kind
Add example for using namespaces
2015-02-10 09:50:50 -05:00
derekwaynecarr 4dd50a18c3 Fix vagrant regression, add flag to easily enable v1beta3 2015-01-30 12:16:24 -05:00
Zach Loafman a305269e18 Deferred creation of SkyDNS, monitoring and logging objects
This implements phase 1 of the proposal in #3579, moving the creation
of the pods, RCs, and services to the master after the apiserver is
available.

This is such a wide commit because our existing initial config story
is special:

* Add kube-addons service and associated salt configuration:
** We configure /etc/kubernetes/addons to be a directory of objects
that are appropriately configured for the current cluster.
** "/etc/init.d/kube-addons start" slurps up everything in that dir.
(Most of the difficulty is the business logic in salt around getting
that directory built at all.)
** We cheat and overlay cluster/addons into saltbase/salt/kube-addons
as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates
* Rename {setup,teardown}-{monitoring,logging} to
{setup,teardown}-{monitoring,logging}-firewall to properly reflect
their real purpose now (the purpose of these functions is now ONLY to
bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both
functions were improperly configuring global rules, yet used
lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the
rule. The logging rule needed a $NETWORK specifier. The monitoring
rule tried gcloud describe first, but given the instancing, this feels
like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING,
ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master,
since these are needed there now.

(Desperately want just a yaml or json file we can share between
providers that has all this crap. Maybe #3525 is an answer?)

Huge caveats: I've done pretty firm testing on GCE, including
twiddling the env variables and making sure the objects I expect to
come up, come up. I've tested that it doesn't break GKE bringup
somehow. But I haven't had a chance to test the other providers.
2015-01-21 12:25:50 -08:00
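A hedged sketch of the "slurp up everything" step (the kubectl loop is an assumption; the real logic lives in the salt-managed init script):

    # Roughly what "/etc/init.d/kube-addons start" does: once the apiserver
    # is reachable, create every object staged in the addons directory.
    for obj in /etc/kubernetes/addons/*; do
      kubectl create -f "${obj}"
    done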