Getting started on Google Compute Engine
The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
Getting VMs
- You need a Google Cloud Platform account with billing enabled. Visit the Google Developers Console for more details.
- Make sure you can start up a GCE VM from the command line. At least make sure you can do the Create an instance part of the GCE Quickstart.
- Make sure you can ssh into the VM without interactive prompts. See the Log in to the instance part of the GCE Quickstart. (A quick sanity check is sketched after this list.)
- Your GCE SSH key must either have no passcode or you need to be using ssh-agent.
- Ensure the GCE firewall isn't blocking port 22 to your VMs. By default this should work, but if you have edited firewall rules or created a new non-default network, you'll need to expose it:
gcloud compute firewall-rules create default-ssh --network=<network-name> --description "SSH allowed from anywhere" --allow tcp:22
- You need to have the Google Cloud Storage API and the Google Cloud Storage JSON API enabled. They are activated by default for new projects; otherwise, they can be enabled in the Google Cloud Console. See the Google Cloud Storage JSON API Overview for more details.
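If you want to sanity-check the VM and SSH prerequisites in one go, something like the following should work (the instance name and zone are illustrative):
# Create a throwaway VM, SSH into it without prompts, then delete it.
gcloud compute instances create prereq-test --zone us-central1-b
gcloud compute ssh prereq-test --zone us-central1-b --command "echo ssh works"
gcloud compute instances delete prereq-test --zone us-central1-b --quiet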
Prerequisites for your workstation
- You must be running Linux or Mac OS X on your workstation.
- You must have the Google Cloud SDK installed. This will get you gcloud and gsutil.
- Install gcloud preview: run gcloud components update preview to make sure it is installed.
- Ensure that your other gcloud components are up-to-date by running gcloud components update. (A quick check of these tools is sketched after this list.)
- If you want to build your own release, you need to have Docker installed. On Mac OS X you can use boot2docker (see also: https://docs.docker.com/installation/mac/).
- Get or build a binary release of Kubernetes.
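A quick way to confirm the workstation tools are in place (output will vary by SDK version):
# Install or refresh the preview component, then the rest of the SDK.
gcloud components update preview
gcloud components update
# Confirm both tools are on your PATH.
gcloud version
gsutil version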
Starting a Cluster
Change into the kubernetes directory in which you have the binary release, and then run:
cluster/kube-up.sh
(If it fails, run cluster/kube-down.sh to clean up before trying again; otherwise, you'll get errors about resources that already exist.)
The script above relies on Google Storage to stage the Kubernetes release. It will then start (by default) a single master VM along with 4 worker VMs. You can tweak some of these parameters by editing cluster/gce/config-default.sh.
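For example, you might lower the worker count or pick a different zone. The variable names below are assumptions based on this release's config file; check the file itself for the exact names:
# In cluster/gce/config-default.sh:
NUM_MINIONS=2        # number of worker VMs to start
ZONE=us-central1-b   # GCE zone to run the cluster in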
You can view a transcript of a successful cluster creation
here.
The instances must be able to connect to each other using their private IP. The
script uses the "default" network which should have a firewall rule called
"default-allow-internal" which allows traffic on any port on the private IPs.
If this rule is missing from the default network, or if you change the network being used in cluster/gce/config-default.sh, create a new rule with the following field values (a gcloud sketch follows the list):
- Source Ranges: 10.0.0.0/8
- Allowed Protocols and Ports: tcp:1-65535;udp:1-65535;icmp
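For example, such a rule could be created with gcloud (the rule name is illustrative):
gcloud compute firewall-rules create default-allow-internal --network=<network-name> --source-ranges 10.0.0.0/8 --allow tcp:1-65535,udp:1-65535,icmp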
Running a container (simple version)
Once you have your instances up and running, use cluster/kubectl.sh to access the Kubernetes API.
Note: if you built the release from source, you will need to run hack/build-go.sh to build the Go components, which include the kubectl command-line client. If you are using a prebuilt release, the built client binaries are already included.
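As a quick smoke test that the API is reachable, you can list the worker nodes (exposed as minions in this release's API) and any running pods:
cluster/kubectl.sh get minions
cluster/kubectl.sh get pods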
The kubectl.sh command below spins up two containers running Nginx on port 80:
cluster/kubectl.sh run-container my-nginx --image=dockerfile/nginx --replicas=2 --port=80
To stop the containers:
cluster/kubectl.sh stop rc my-nginx
To delete the containers:
cluster/kubectl.sh delete rc my-nginx
Running a container (more complete version)
cd kubernetes
cluster/kubectl.sh create -f docs/getting-started-guides/pod.json
Where pod.json contains something like:
{
  "id": "php",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "php",
      "containers": [{
        "name": "nginx",
        "image": "dockerfile/nginx",
        "ports": [{
          "containerPort": 80,
          "hostPort": 8081
        }],
        "livenessProbe": {
          "enabled": true,
          "type": "http",
          "initialDelaySeconds": 30,
          "httpGet": {
            "path": "/index.html",
            "port": 8081
          }
        }
      }]
    }
  },
  "labels": {
    "name": "foo"
  }
}
You can see your cluster's pods:
cluster/kubectl.sh get pods
and delete the pod you just created:
cluster/kubectl.sh delete pods php
Since this pod is scheduled on a minion running in GCE, you will have to enable incoming tcp traffic via the port specified in the pod manifest before you can see the nginx welcome page. After doing so, it should be visible at http://<external-ip-of-minion>:<host-port> (port 8081 in the manifest above).
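For example, assuming the default network and the manifest above, a rule like this would open the port (the rule name is illustrative):
gcloud compute firewall-rules create allow-nginx-8081 --network=default --allow tcp:8081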
Look in examples/ for more examples.
Tearing down the cluster
cd kubernetes
cluster/kube-down.sh