mirror of https://github.com/k3s-io/k3s
Ansible: Vagrant: allow passing ansible tags to vagrant provision
Creating a cluster from scratch takes about 7 minutes, but if you have just rebuilt the binaries and only want to push those to the machines, you don't want to rerun the entire thing. There is an ansible tag 'binary-update' which does exactly that. One can now run

```
ANSIBLE_TAGS=binary-update vagrant provision
```

and it will push the new binaries.
parent 8ba4d85fa9
commit a25b34e1a4
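In outline, the change reads ANSIBLE_TAGS from the environment in the Vagrantfile and hands it to the Ansible provisioner, skipping the host-setup playbook whenever tags are given. Below is a minimal, self-contained sketch of that wiring; the node name and surrounding structure are simplified stand-ins, not the actual Vagrantfile (see the full diff that follows).

```
# Sketch only: node name and layout are hypothetical simplifications.
ansible_tags = ENV['ANSIBLE_TAGS']

Vagrant.configure("2") do |config|
  config.vm.define "kube-node-1" do |n|   # hypothetical node name
    # The /etc/hosts bootstrap playbook only makes sense on a full run,
    # so it is skipped when specific tags were requested.
    if ansible_tags.nil?
      n.vm.provision :ansible do |ansible|
        ansible.playbook = "./vagrant-ansible.yml"
        ansible.limit = "all"
      end
    end

    # The main cluster playbook always runs; ansible.tags restricts it,
    # e.g. ANSIBLE_TAGS=binary-update only pushes the rebuilt binaries.
    n.vm.provision :ansible do |ansible|
      ansible.playbook = "../cluster.yml"
      ansible.limit = "all"
      ansible.tags = ansible_tags
    end
  end
end
```

When ANSIBLE_TAGS is unset, `ansible.tags` is simply nil and the playbooks run in full, as before.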
@@ -56,4 +56,10 @@ vagrant provision
 ### VirtualBox
 
 Nothing special with VirtualBox. Hopefully `vagrant up` just works.
 
+## Random Information
+
+If you just want to update the binaries on your systems (either pkgManager or localBuild) you can do so using the ansible binary-update tag. To do so with vagrant provision you would need to run
+
+```
+ANSIBLE_TAGS="binary-update" vagrant provision
+```
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/contrib/ansible/vagrant/README.md?pixel)]()
@@ -8,6 +8,7 @@ require "yaml"
 require 'vagrant-openstack-provider'
 
 $num_nodes = (ENV['NUM_NODES'] || 2).to_i
+ansible_tags = ENV['ANSIBLE_TAGS']
 
 VAGRANTFILE_API_VERSION = "2"
@@ -119,13 +120,15 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
     n.vm.hostname = name
     set_provider(n)
 
+    if ansible_tags.nil?
     # This set up the vagrant hosts before we run the main playbook
     # Today this just creates /etc/hosts so machines can talk via their
     # 'internal' IPs instead of the openstack public ip.
     n.vm.provision :ansible do |ansible|
       ansible.groups = groups
       ansible.playbook = "./vagrant-ansible.yml"
       ansible.limit = "all" #otherwise the metadata wont be there for ipv4?
     end
+    end
 
     # This sets up both flannel and kube.
@@ -133,6 +136,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
       ansible.groups = groups
       ansible.playbook = "../cluster.yml"
       ansible.limit = "all" #otherwise the metadata wont be there for ipv4?
+      ansible.tags = ansible_tags
     end
   end
 end