mirror of https://github.com/k3s-io/k3s

Merge pull request #6093 from brendandburns/docs

Revert the revert of the update to docs and scripts for GCE.

commit 1bf0cbeac6
@@ -24,16 +24,32 @@ source "${KUBE_ROOT}/cluster/common.sh"
 
 NODE_INSTANCE_PREFIX="${INSTANCE_PREFIX}-minion"
 
+KUBE_PROMPT_FOR_UPDATE=y
+
 # Verify prereqs
 function verify-prereqs {
   local cmd
   for cmd in gcloud gsutil; do
-    which "${cmd}" >/dev/null || {
-      echo "Can't find ${cmd} in PATH, please fix and retry. The Google Cloud "
-      echo "SDK can be downloaded from https://cloud.google.com/sdk/."
-      exit 1
-    }
+    if ! which "${cmd}" >/dev/null; then
+      echo "Can't find ${cmd} in PATH. Do you wish to install the Google Cloud SDK? [Y/n]"
+      local resp
+      read resp
+      if [[ "${resp}" != "n" && "${resp}" != "N" ]]; then
+        curl https://sdk.cloud.google.com | bash
+      fi
+      if ! which "${cmd}" >/dev/null; then
+        echo "Can't find ${cmd} in PATH, please fix and retry. The Google Cloud "
+        echo "SDK can be downloaded from https://cloud.google.com/sdk/."
+        exit 1
+      fi
+    fi
   done
+  # update and install components as needed
+  if [[ "${KUBE_PROMPT_FOR_UPDATE}" != "y" ]]; then
+    gcloud_prompt="-q"
+  fi
+  gcloud ${gcloud_prompt:-} components update preview || true
+  gcloud ${gcloud_prompt:-} components update || true
 }
 
 # Create a temp dir that'll be deleted at the end of this bash session.
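For readers skimming the hunk above: the rewritten `verify-prereqs` boils down to an "offer to install, then re-check" pattern for each required command. Below is a minimal standalone sketch of that pattern, not the exact script; the `ensure-command` helper and its arguments are illustrative only.

```bash
#!/usr/bin/env bash
# Sketch of the prompt-install-recheck flow used by verify-prereqs above.
# ensure-command is a hypothetical helper, not part of the actual script.
ensure-command() {
  local cmd="$1" installer="$2" resp
  if ! which "${cmd}" >/dev/null; then
    echo "Can't find ${cmd} in PATH. Do you wish to install it? [Y/n]"
    read resp
    if [[ "${resp}" != "n" && "${resp}" != "N" ]]; then
      curl "${installer}" | bash   # run the vendor's installer script
    fi
    # Re-check after the attempted install; fail only if it is still missing.
    if ! which "${cmd}" >/dev/null; then
      echo "Can't find ${cmd} in PATH, please fix and retry."
      exit 1
    fi
  fi
}

ensure-command gcloud https://sdk.cloud.google.com
ensure-command gsutil https://sdk.cloud.google.com
```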
@@ -1,6 +1,6 @@
 IaaS Provider  | Config. Mgmt | OS     | Networking  | Docs                                                 | Support Level                | Notes
 -------------- | ------------ | ------ | ----------- | ---------------------------------------------------- | ---------------------------- | -----
-GCE            | Saltstack    | Debian | GCE         | [docs](../../docs/getting-started-guides/gce.md)     | Project                      | Tested with 0.9.2 by @satnam6502
+GCE            | Saltstack    | Debian | GCE         | [docs](../../docs/getting-started-guides/gce.md)     | Project                      | Tested with 0.13.2 by @brendandburns
 Vagrant        | Saltstack    | Fedora | OVS         | [docs](../../docs/getting-started-guides/vagrant.md) | Project                      |
 Bare-metal     | custom       | Fedora | _none_      | [docs](../../docs/getting-started-guides/fedora/fedora_manual_config.md) | Project | Uses K8s v0.13.2
 Bare-metal     | Ansible      | Fedora | flannel     | [docs](../../docs/getting-started-guides/fedora/fedora_ansible_config.md) | Project | Uses K8s v0.13.2
@@ -2,59 +2,35 @@
 
 The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
 
-### Getting VMs
-
-1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details.
-2. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/quickstart#create_an_instance) part of the GCE Quickstart.
-3. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/quickstart#ssh) part of the GCE Quickstart.
-   * Your GCE SSH key must either have no passcode or you need to be using `ssh-agent`.
-   * Ensure the GCE firewall isn't blocking port 22 to your VMs. By default, this should work but if you have edited firewall rules or created a new non-default network, you'll need to expose it: `gcloud compute firewall-rules create --network=<network-name> --description "SSH allowed from anywhere" --allow tcp:22 default-ssh`
-4. You need to have the Google Cloud Storage API, and the Google Cloud Storage JSON API enabled. It is activated by default for new projects. Otherwise, it can be done in the Google Cloud Console. See the [Google Cloud Storage JSON API Overview](https://cloud.google.com/storage/docs/json_api/) for more details.
-
-### Prerequisites for your workstation
-
-1. You must be running Linux or Mac OS X on your workstation.
-2. You must have the [Google Cloud SDK](https://developers.google.com/cloud/sdk/) installed. This will get you `gcloud` and `gsutil`.
-3. Install `gcloud preview`: run `gcloud components update preview` to make sure it is.
-4. Ensure that your other `gcloud` components are up-to-date by running `gcloud components update`.
-5. If you want to build your own release, you need to have [Docker installed](https://docs.docker.com/installation/). On Mac OS X you can use [boot2docker](http://boot2docker.io/). (see also: https://docs.docker.com/installation/mac/)
-6. Get or build a [binary release](binary_release.md) of Kubernetes.
-
 ### Starting a Cluster
 
-Change into the `kubernetes` directory in which you have the binary release, and then do
+You can install a cluster with one of two one-liners:
 
 ```bash
-cluster/kube-up.sh
+curl -sS https://get.k8s.io | bash
 ```
 
-(If it fails, do `cluster/kube-down.sh` to clean up before trying again; otherwise, you'll get errors about resources that already exist.)
+or
 
-The script above relies on Google Storage to stage the Kubernetes release. It
-then will start (by default) a single master VM along with 4 worker VMs. You
-can tweak some of these parameters by editing `cluster/gce/config-default.sh`
-You can view a transcript of a successful cluster creation
-[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
+```bash
+wget -q -O - https://get.k8s.io | bash
+```
 
-The instances must be able to connect to each other using their private IP. The
-script uses the "default" network which should have a firewall rule called
-"default-allow-internal" which allows traffic on any port on the private IPs.
-If this rule is missing from the default network or if you change the network
-being used in `cluster/config-default.sh` create a new rule with the following
-field values:
+This will leave you with a ```kubernetes``` directory and a running cluster. Feel free to move the ```kubernetes``` directory to the appropriate directory (e.g. ```/opt/kubernetes```) then cd into that directory.
 
-* Source Ranges: `10.0.0.0/8`
-* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
+```bash
+mv kubernetes ${SOME_DIR}/kubernetes
+cd ${SOME_DIR}/kubernetes
+```
+
+If you run into trouble please see the section on [troubleshooting](https://github.com/brendandburns/kubernetes/blob/docs/docs/getting-started-guides/gce.md#troubleshooting), or come ask questions on IRC at #google-containers on freenode.
 
 ### Running a container (simple version)
 
-Once you have your instances up and running, use cluster/kubectl.sh to access
+Once you have your cluster created you can use ```${SOME_DIR}/kubernetes/cluster/kubectl.sh``` to access
 the kubernetes api.
 
-Note: if you built the release from source you will need to run `hack/build-go.sh` to
-build the go components, which include the `kubectl` commandline client. If you are
-using a prebuilt release, the built client binaries are already included.
-
 The `kubectl.sh` line below spins up two containers running
 [Nginx](http://nginx.org/en/) running on port 80:
 
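A note on the new one-liner install above: if piping a remote script straight into `bash` is a concern, the same installer can be fetched, inspected, and then run explicitly. A small sketch using only the URL shown in the hunk (the local file name is arbitrary):

```bash
# Download the installer instead of piping it directly into bash,
# review it, then run it once you are satisfied with its contents.
curl -sS -o get-k8s.sh https://get.k8s.io
less get-k8s.sh
bash get-k8s.sh
```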
@@ -139,3 +115,33 @@ Look in `examples/` for more examples
 cd kubernetes
 cluster/kube-down.sh
 ```
+
+### Customizing
+The script above relies on Google Storage to stage the Kubernetes release. It
+then will start (by default) a single master VM along with 4 worker VMs. You
+can tweak some of these parameters by editing `kubernetes/cluster/gce/config-default.sh`
+You can view a transcript of a successful cluster creation
+[here](https://gist.github.com/satnam6502/fc689d1b46db9772adea).
+
+### Troubleshooting
+#### Creating VMs
+
+1. You need a Google Cloud Platform account with billing enabled. Visit the [Google Developers Console](http://cloud.google.com/console) for more details.
+2. Make sure you can start up a GCE VM from the command line. At least make sure you can do the [Create an instance](https://cloud.google.com/compute/docs/quickstart#create_an_instance) part of the GCE Quickstart.
+3. Make sure you can ssh into the VM without interactive prompts. See the [Log in to the instance](https://cloud.google.com/compute/docs/quickstart#ssh) part of the GCE Quickstart.
+   * Your GCE SSH key must either have no passcode or you need to be using `ssh-agent`.
+   * Ensure the GCE firewall isn't blocking port 22 to your VMs. By default, this should work but if you have edited firewall rules or created a new non-default network, you'll need to expose it: `gcloud compute firewall-rules create --network=<network-name> --description "SSH allowed from anywhere" --allow tcp:22 default-ssh`
+4. You need to have the Google Cloud Storage API, and the Google Cloud Storage JSON API enabled. It is activated by default for new projects. Otherwise, it can be done in the Google Cloud Console. See the [Google Cloud Storage JSON API Overview](https://cloud.google.com/storage/docs/json_api/) for more details.
+
+
+#### Networking
+The instances must be able to connect to each other using their private IP. The
+script uses the "default" network which should have a firewall rule called
+"default-allow-internal" which allows traffic on any port on the private IPs.
+If this rule is missing from the default network or if you change the network
+being used in `cluster/config-default.sh` create a new rule with the following
+field values:
+
+* Source Ranges: `10.0.0.0/8`
+* Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
+
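To illustrate the new Customizing section: the worker count and similar defaults live in `kubernetes/cluster/gce/config-default.sh`. The snippet below assumes the file exposes a worker-count variable named `NUM_MINIONS` (the name used in releases of this era, but confirm it in your copy before editing):

```bash
# Hypothetical tweak: shrink the cluster from the default 4 workers to 2.
# NUM_MINIONS is an assumed variable name; check config-default.sh first.
sed -i.bak 's/^NUM_MINIONS=.*/NUM_MINIONS=${NUM_MINIONS:-2}/' \
  kubernetes/cluster/gce/config-default.sh
```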
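For the Networking troubleshooting item, the field values listed above map onto a single `gcloud` invocation. One possible form is sketched below; the rule name and network are placeholders, so adjust them to your setup:

```bash
# Recreate the internal-traffic rule described above on the "default" network.
gcloud compute firewall-rules create default-allow-internal \
  --network=default \
  --source-ranges=10.0.0.0/8 \
  --allow=tcp:1-65535,udp:1-65535,icmp
```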