Ansible: Vagrant: kubernetes openstack deployer

vbox deployer separated out and deprecated; to be re-added later
pull/6/head
jayunit100 2015-07-24 09:57:43 -04:00 committed by Eric Paris
parent fd1024baa2
commit a008fe24bb
7 changed files with 181 additions and 41 deletions

@@ -9,6 +9,7 @@ can be real hardware, VMs, things in a public cloud, etc.
* Record the IP address/hostname of the machine you want to be your master (only a single master is supported)
* Record the IP address/hostname of the machine you want to be your etcd server (often the same as the master; only one is supported)
* Record the IP addresses/hostnames of the machines you want to be your nodes (the master can also be a node)
* Make sure the machine you run Ansible from has Ansible 1.9 and python-netaddr installed.
### Configure the inventory file
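For reference, a minimal inventory using the group names from this repo's playbooks ([masters], [etcd], [nodes]) might look like the sketch below; the addresses are placeholders for the machines you recorded above:

```ini
[masters]
192.168.1.100

[etcd]
192.168.1.100

[nodes]
192.168.1.11
192.168.1.12
```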

@@ -0,0 +1,39 @@
## Vagrant deployer for Kubernetes Ansible
This deployer sets up a Vagrant cluster and installs Kubernetes with flannel on it.
The URIs in the Vagrantfile may need to be changed depending on the exact version of OpenStack you have.
## Before you start!
If you are running the OpenStack provider, you will of course need to modify the key pair, credentials, and so on to match your particular OpenStack account.
At the time of this writing (July 2, 2015) no other providers are supported, but this recipe is fairly easy to port to VirtualBox, KVM, and so on if you want.
## USAGE
To use it, first modify the Vagrantfile to reflect the machines you want.
This is easy: you just change the number of nodes.
Then, update the Kubernetes Ansible group data structure to include the additional nodes if you want them, as sketched below.
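For instance, under the current Vagrantfile layout, going from three VMs to four means bumping `Total` and adding the new VM name to the `groups` hash. A rough sketch of the relevant excerpts (not a complete Vagrantfile):

```ruby
# Sketch only: the two spots in the Vagrantfile that change when adding a node.
Total = 4   # was 3; one extra VM will be created by the (1..Total) loop

groups = {
  "etcd"                => ["kubernetes-vm-1"],
  "masters"             => ["kubernetes-vm-2"],
  "nodes"               => ["kubernetes-vm-3", "kubernetes-vm-4"],  # extra node registered here
  "all_groups:children" => ["etcd", "masters", "nodes"]
}
```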
## Provider
Now make sure to install the OpenStack provider for Vagrant.
`vagrant plugin install vagrant-openstack-provider`
NOTE: This is a more up-to-date provider than the similarly named `vagrant-openstack-plugin`.
## Now, vagrant up!
Now let's run it. Again, check your OpenStack dashboard for the URLs, security groups, and tokens you will need. In general, you want a reasonably open security group (i.e. port 8080 and so on reachable) and a named SSH key pair that you can use to ssh into all of the machines, and you need to set those in the Vagrantfile correctly. In particular, make sure the tenant name is right; authentication can fail if it isn't.
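The Vagrantfile reads the OpenStack username and password from the environment (`ENV['OS_USERNAME']` / `ENV['OS_PASSWORD']`), so export those before running vagrant; the values below are placeholders:

```sh
# Placeholder credentials -- substitute your own OpenStack account details.
export OS_USERNAME=your-openstack-username
export OS_PASSWORD=your-openstack-password
```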
`VAGRANT_LOG=info vagrant up --provision-with=shell ; vagrant provision --provision-with=ansible`
This runs a first provisioning pass, which sets up the raw machines, followed by a second pass,
which sets up kubernetes, etcd, and so on.
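Once both passes finish, a quick sanity check might be the command below; this assumes the playbooks leave kubectl available on the master VM (kubernetes-vm-2 in the default groups), so adjust it to your setup:

```sh
# Hypothetical check; assumes kubectl is installed on the master VM by the playbooks.
vagrant ssh kubernetes-vm-2 -c "kubectl get nodes"
```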

@@ -1,46 +1,91 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
require 'date'

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.

### This is a new provider, different than cloudbau's.
### RUN: "vagrant plugin uninstall vagrant-openstack-plugin"
### Then RUN: "vagrant plugin install vagrant-openstack-provider"
require 'vagrant-openstack-provider'

Total = 3

VAGRANTFILE_API_VERSION = "2"

# Openstack + Hostmanager providers are best used with latest versions.
Vagrant.require_version ">= 1.7"

### If you want to change variables in all.yml, use this snippet.
### Just add a new line below as necessary...
### Commented out since it's not really required for now.
# text = File.read('../group_vars/all.yml')
# new_contents = text.gsub("dns_setup: true", "dns_setup: false")
# File.open('../group_vars/all.yml', "w") {|file| file.puts new_contents }

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  prefix = DateTime.now.strftime('%s')

  ### vagrant up --provision-with=shell works, but the name doesn't :(
  config.vm.provision "bootstrap", type: "shell" do |s|
    s.path = "provision.sh"
  end

  (1..Total).each do |i|
    # multi vm config
    name = "kubernetes-vm-#{i}"
    config.hostmanager.enabled = true
    config.hostmanager.include_offline = false
    config.vm.define "#{name}" do |n|
      # common config
      n.vm.box = "dummy"
      n.vm.box_url = "https://github.com/cloudbau/vagrant-openstack-plugin/raw/master/dummy.box"
      # Make sure the private key from the key pair is provided
      n.ssh.username = "fedora"
      n.ssh.private_key_path = "~/.ssh/id_rsa"
      n.vm.provider :openstack do |os|
        ### The below parameters need to be modified per your openstack instance.
        os.username = ENV['OS_USERNAME']
        os.password = ENV['OS_PASSWORD']
        os.flavor = "m1.small"
        os.image = "Fedora 22 Cloud Base x86_64 (final)"
        os.openstack_auth_url = "http://os1-public.osop.rhcloud.com:5000/v2.0/tokens/"
        os.security_groups = ['default', 'newgroup']
        os.openstack_compute_url = "http://os1-public.osop.rhcloud.com:8774/v2/4f8086dadf9b4e929b7d9f88aa5d548d"
        os.server_name = name
        config.ssh.username = "fedora" # login for the VM
        config.vm.boot_timeout = 60*10
        ### Don't screw this up. AUTH can fail if you don't have the tenant correct ~
        os.tenant_name = "ENG Emerging Tech"
        os.keypair_name = "JPeerindex"
        os.region = "OS1Public"
        os.floating_ip_pool = 'os1_public'
        ### Floating IP AUTO may or may not be a viable option for your openstack instance.
        #os.floating_ip = "auto"
      end
      n.vm.provision "bootstrap", type: "shell", path: "provision.sh"
    end
  end

  # This is how we create the ansible inventory, see it in .vagrant
  # if you want to debug, run 'VAGRANT_LOG=info vagrant up'
  # and you'll see exactly how the cluster comes up via ansible inv.
  groups = {
    "etcd" => ["kubernetes-vm-1"],
    "masters" => ["kubernetes-vm-2"],
    "nodes" => ["kubernetes-vm-3"],
    "all_groups:children" => ["etcd", "masters", "nodes"]
  }

  # This sets up both flannel and kube.
  config.vm.provision "ansible" do |ansible|
    ansible.groups = groups
    ansible.playbook = "../cluster.yml"
    ansible.limit = "all" # otherwise the metadata won't be there for ipv4?
    ansible.extra_vars = {}
  end
end

@@ -0,0 +1,48 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# TODO: This is deprecated, as we will be merging vbox support to the main Vagrantfile soon enough!
#
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  config.vm.box = "chef/centos-7.0"
  # config.vm.network "public_network"

  config.vm.define "master", primary: true do |master|
    master.vm.hostname = "master.vms.local"
    master.vm.network "private_network", ip: "192.168.1.100"
  end

  (1..1).each do |i|
    config.vm.define "node-#{i}" do |node|
      node.vm.hostname = "node-#{i}.vms.local"
      node.vm.network "private_network", ip: "192.168.1.1#{i}"
      node.vm.provision :ansible do |ansible|
        ansible.host_key_checking = false
        ansible.extra_vars = {
          ansible_ssh_user: 'vagrant',
          ansible_ssh_pass: 'vagrant',
          user: 'vagrant'
        }
        #ansible.verbose = 'vvv'
        ansible.playbook = "../cluster.yml"
        ansible.inventory_path = "vinventory"
        ansible.limit = 'all'
      end
    end
  end

  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    vb.gui = false
    # Customize the amount of memory on the VM:
    vb.memory = "2048"
    # vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
  end
end

@@ -0,0 +1,12 @@
os_username: eparis
os_password: redhat
os_tenant: "RH US Business Group"
os_auth_url: "http://os1-public.osop.rhcloud.com:5000/v2.0"
os_region_name: "OS1Public"
os_ssh_key_name: "eparis"
os_flavor: "m1.small"
os_image: "Fedora 22 Cloud Base x86_64 (final)"
os_security_groups:
- "default"
#- some_other_group
os_floating_ip_pool: "os1_public"

@@ -0,0 +1,3 @@
echo "hello, here is a sample provisioning script that demonstrates everything works"
ls /vagrant
echo "As you can see ^ ... the shared folders even work . yay "

@@ -1,8 +0,0 @@
[masters]
192.168.1.100
[etcd]
192.168.1.100
[nodes]
192.168.1.11