Replaced (or defined first instance of) GKE/GCE with Google Container Engine/Google Compute Engine

Fixes #10354
pull/6/head
RichieEscarez 2015-06-26 12:13:43 -07:00
parent ffb846a284
commit 899145da10
21 changed files with 34 additions and 34 deletions

View File

@ -38,7 +38,7 @@ these configurations the secure port is typically set to 6443.
A firewall rule is typically configured to allow external HTTPS access to port 443.
The above are defaults and reflect how Kubernetes is deployed to GCE using
The above are defaults and reflect how Kubernetes is deployed to Google Compute Engine using
kube-up.sh. Other cloud providers may vary.
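As a sketch of such a rule (the rule name and target tags here are assumptions, not kube-up.sh's actual values):

```shell
# Allow external HTTPS access to the master; names/tags are illustrative.
gcloud compute firewall-rules create kubernetes-master-https \
  --allow=tcp:443 \
  --target-tags=kubernetes-master
```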
## Use Cases vs IP:Ports

View File

@ -56,7 +56,7 @@ If you want more control over the upgrading process, you may use the following w
`kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'`.
If you deleted the node's VM instance and created a new one, then a new schedulable node resource will
be created automatically when you create a new VM instance (if you're using a cloud provider that supports
node discovery; currently this is only GCE, not including CoreOS on GCE using kube-register). See [Node](node.md).
node discovery; currently this is only Google Compute Engine, not including CoreOS on Google Compute Engine using kube-register). See [Node](node.md).
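For reference, a minimal sketch of the full cycle using the same patch mechanism (assuming `$NODENAME` is set to the node's name):

```shell
# Mark the node unschedulable before maintenance.
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": true}}'
# ...perform the upgrade or repair...
# Then mark it schedulable again.
kubectl update nodes $NODENAME --patch='{"apiVersion": "v1", "spec": {"unschedulable": false}}'
```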
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/cluster_management.md?pixel)]()

View File

@ -4,7 +4,7 @@ There are multiple guides on running Kubernetes with [CoreOS](http://coreos.com)
* [Single Node Cluster](coreos/coreos_single_node_cluster.md)
* [Multi-node Cluster](coreos/coreos_multinode_cluster.md)
* [Setup Multi-node Cluster on GCE in an easy way](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
* [Setup Multi-node Cluster on Google Compute Engine in an easy way](https://github.com/rimusz/coreos-multi-node-k8s-gce/blob/master/README.md)
* [Multi-node cluster using cloud-config and Weave on Vagrant](https://github.com/errordeveloper/weave-demos/blob/master/poseidon/README.md)
* [Multi-node cluster using cloud-config and Vagrant](https://github.com/pires/kubernetes-vagrant-coreos-cluster/blob/master/README.md)
* [Yet another multi-node cluster using cloud-config and Vagrant](https://github.com/AntonioMeireles/kubernetes-vagrant-coreos-cluster/blob/master/README.md) (similar to the one above but with an increased, more *aggressive* focus on features and flexibility)

View File

@ -59,9 +59,9 @@ aws ec2 run-instances \
--user-data file://node.yaml
```
### GCE
### Google Compute Engine (GCE)
*Attention:* Replace ```<gce_image_id>``` below for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
*Attention:* Replace ```<gce_image_id>``` below with a [suitable version of the CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
#### Provision the Master
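A sketch of the corresponding `gcloud` invocation, mirroring the AWS example above (the machine type, zone, and cloud-config filename are assumptions):

```shell
gcloud compute instances create master \
  --image <gce_image_id> \
  --machine-type n1-standard-1 \
  --zone us-central1-a \
  --metadata-from-file user-data=master.yaml
```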

View File

@ -28,9 +28,9 @@ aws ec2 run-instances \
--user-data file://standalone.yaml
```
### GCE
### Google Compute Engine (GCE)
*Attention:* Replace ```<gce_image_id>``` bellow for a [suitable version of CoreOS image for GCE](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
*Attention:* Replace ```<gce_image_id>``` below with a [suitable version of the CoreOS image for Google Compute Engine](https://coreos.com/docs/running-coreos/cloud-providers/google-compute-engine/).
```
gcloud compute instances create standalone \

View File

@ -23,7 +23,7 @@ The example below creates a Kubernetes cluster with 4 worker node Virtual Machin
### Before you start
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) for hosted cluster installation and management.
If you want a simplified getting started experience and GUI for managing clusters, please consider trying [Google Container Engine](https://cloud.google.com/container-engine/) (GKE) for hosted cluster installation and management.
If you want to use custom binaries or pure open source Kubernetes, please continue with the instructions below.

View File

@ -1,6 +1,6 @@
# Cluster Level Logging with Elasticsearch and Kibana
On the GCE platform the default cluster level logging support targets
On the Google Compute Engine (GCE) platform the default cluster level logging support targets
[Google Cloud Logging](https://cloud.google.com/logging/docs/) as described at the [Logging](logging.md) getting
started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them using Kibana as an
alternative to Google Cloud Logging.

View File

@ -19,7 +19,7 @@ Here is the same information in a picture which shows how the pods might be plac
![Cluster](/examples/blog-logging/diagrams/cloud-logging.png)
This diagram shows four nodes created on a GCE cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown on gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pods execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
This diagram shows four nodes created on a Google Compute Engine cluster with the name of each VM node on a purple background. The internal and public IPs of each node are shown in gray boxes and the pods running in each node are shown in green boxes. Each pod box shows the name of the pod and the namespace it runs in, the IP address of the pod and the images which are run as part of the pod's execution. Here we see that every node is running a fluentd-cloud-logging pod which is collecting the log output of the containers running on the same node and sending them to Google Cloud Logging. A pod which provides the
[cluster DNS service](/docs/dns.md) runs on one of the nodes and a pod which provides monitoring support runs on another node.
To help explain how cluster-level logging works, let's start off with a synthetic log generator pod specification [counter-pod.yaml](/examples/blog-logging/counter-pod.yaml):
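A minimal approximation of that specification (the image and loop body here are illustrative, not necessarily the linked file's exact contents):

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: ubuntu:14.04
    # Emit one numbered log line per second.
    args: [bash, -c, 'for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done']
EOF
```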
@ -167,7 +167,7 @@ Here is some sample output:
![BigQuery](bigquery-logging.png)
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a GCE project called `myproject`. Only logs for the date 2015-06-11 are fetched.
We could also fetch the logs from Google Cloud Storage buckets to our desktop or laptop and then search them locally. The following command fetches logs for the counter pod running in a cluster which is itself in a Compute Engine project called `myproject`. Only logs for the date 2015-06-11 are fetched.
```
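# Illustrative only -- the bucket name and object layout below are assumptions:
gsutil -m cp -r gs://myproject-logs/kubernetes.counter_default_count/2015/06/11 .
```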

View File

@ -15,7 +15,7 @@ Getting started on Rackspace
* Supported Version: v0.18.1
In general, the dev-build-and-up.sh workflow for Rackspace is the similar to GCE. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files and the overall network design.
In general, the dev-build-and-up.sh workflow for Rackspace is similar to that of Google Compute Engine. The specific implementation is different due to the use of CoreOS, Rackspace Cloud Files and the overall network design.
These scripts should be used to deploy development environments for Kubernetes. If your account leverages RackConnect or non-standard networking, these scripts will most likely not work without modification.

View File

@ -34,7 +34,7 @@ $ export CONTAINER_RUNTIME=rkt
$ hack/local-up-cluster.sh
```
### CoreOS cluster on GCE
### CoreOS cluster on Google Compute Engine (GCE)
To use rkt as the container runtime for your CoreOS cluster on GCE, you need to specify the OS distribution, project, and image:
```shell
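# Illustrative kube-up.sh settings; treat the exact variable names and image
# value as assumptions to verify against your Kubernetes release.
export KUBE_OS_DISTRIBUTION=coreos
export KUBE_GCE_MINION_PROJECT=coreos-cloud
export KUBE_GCE_MINION_IMAGE=<coreos_image_id>
export KUBE_CONTAINER_RUNTIME=rkt
```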

View File

@ -17,7 +17,7 @@ Private registries may require keys to read images from them.
Credentials can be provided in several ways:
- Using Google Container Registry
- Per-cluster
- automatically configured on GCE/GKE
- automatically configured on Google Compute Engine or Google Container Engine
- all pods can read the project's private registry
- Configuring Nodes to Authenticate to a Private Registry
- all pods can read any configured private registries

View File

@ -85,7 +85,7 @@ as an introduction to various technologies and serves as a jumping-off point.
If some techniques become vastly preferable to others, we might detail them more
here.
### Google Compute Engine
### Google Compute Engine (GCE)
For the Google Compute Engine cluster configuration scripts, we use [advanced
routing](https://developers.google.com/compute/docs/networking#routing) to

View File

@ -69,7 +69,7 @@ Kubernetes from the node.
## Node Management
Unlike [Pods](pods.md) and [Services](services.md), a Node is not inherently
created by Kubernetes: it is either created from cloud providers like GCE,
created by Kubernetes: it is either created from cloud providers like Google Compute Engine,
or from your physical or virtual machines. What this means is that when
Kubernetes creates a node, it only creates a representation for the node.
After creation, Kubernetes will check whether the node is valid or not.

View File

@ -385,7 +385,7 @@ This makes some kinds of firewalling impossible.
LoadBalancers only support TCP, not UDP.
The `Type` field is designed as nested functionality - each level adds to the
previous. This is not strictly required on all cloud providers (e.g. GCE does
previous. This is not strictly required on all cloud providers (e.g. Google Compute Engine does
not need to allocate a `NodePort` to make `LoadBalancer` work, but AWS does)
but the current API requires it.
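A minimal sketch of a service that exercises this nesting (the names and ports are illustrative):

```shell
# Creating a LoadBalancer service; on providers that require the nested
# behavior, a NodePort is allocated automatically as part of this.
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 8080
EOF
```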

View File

@ -159,7 +159,7 @@ music-server name=music-db name=music-db 10.0.138.61 9200/TCP
NAME TYPE DATA
apiserver-secret Opaque 2
```
This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for GCE) we can make queries via the service which will be fielded by the matching Elasticsearch pods.
This shows 4 instances of Elasticsearch running. After making sure that port 9200 is accessible for this cluster (e.g. using a firewall rule for Google Compute Engine), we can make queries via the service, which will be fielded by the matching Elasticsearch pods.
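A firewall rule along those lines might look like this (the rule name and target tags are assumptions):

```shell
gcloud compute firewall-rules create elasticsearch-9200 \
  --allow=tcp:9200 \
  --target-tags=kubernetes-minion
```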
```
$ curl 104.197.12.157:9200
{

View File

@ -18,7 +18,7 @@ This example shows how to build a simple, multi-tier web application using Kuber
- [Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)](#using-type-loadbalancer-for-the-frontend-service-cloud-provider-specific)
- [Create the Frontend Service](#create-the-frontend-service)
- [Accessing the guestbook site externally](#accessing-the-guestbook-site-externally)
- [GCE External Load Balancer Specifics](#gce-external-load-balancer-specifics)
- [Google Compute Engine External Load Balancer Specifics](#gce-external-load-balancer-specifics)
- [Step Seven: Cleanup](#step-seven-cleanup)
- [Troubleshooting](#troubleshooting)
@ -33,7 +33,7 @@ The web front end interacts with the redis master via javascript redis API calls
### Step Zero: Prerequisites
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides) for how to get started. As noted above, if you have a GKE cluster set up, go [here](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead.
This example requires a running Kubernetes cluster. See the [Getting Started guides](../../docs/getting-started-guides) for how to get started. As noted above, if you have a Google Container Engine cluster set up, go [here](https://cloud.google.com/container-engine/docs/tutorials/guestbook) instead.
### Step One: Start up the redis master
@ -136,7 +136,7 @@ $ kubectl logs <pod_name>
These logs will usually give you enough information to troubleshoot.
However, if you should want to ssh to the listed host machine, you can inspect various logs there directly as well. For example, with GCE, using `gcloud`, you can ssh like this:
However, if you should want to SSH to the listed host machine, you can inspect various logs there directly as well. For example, with Google Compute Engine, using `gcloud`, you can SSH like this:
```shell
me@workstation$ gcloud compute ssh kubernetes-minion-krxw
@ -442,7 +442,7 @@ spec:
#### Using 'type: LoadBalancer' for the frontend service (cloud-provider-specific)
For supported cloud providers, such as GCE/GKE, you can specify to use an external load balancer
For supported cloud providers, such as Google Compute Engine or Google Container Engine, you can request an external load balancer
in the service `spec`, to expose the service onto an external load balancer IP.
To do this, uncomment the `type: LoadBalancer` line in the `frontend-service.yaml` file before you start the service.
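For example (assuming the file ships with the line commented out exactly as `# type: LoadBalancer`; GNU `sed` shown):

```shell
sed -i 's/# type: LoadBalancer/type: LoadBalancer/' frontend-service.yaml
```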
@ -495,9 +495,9 @@ You should see a web page that looks something like this (without the messages).
If you are more advanced in the ops arena, you can also get the service IP manually by looking at the output of `kubectl get pods,services`, and modify your firewall using standard tools and services (firewalld, iptables, selinux) with which you are already familiar.
##### GCE External Load Balancer Specifics
##### Google Compute Engine External Load Balancer Specifics
In GCE, `kubectl` automatically creates forwarding rule for services with `LoadBalancer`.
In Google Compute Engine, `kubectl` automatically creates a forwarding rule for services with `LoadBalancer`.
You can list the forwarding rules like this. The forwarding rule also indicates the external IP.
@ -507,13 +507,13 @@ NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
frontend us-central1 130.211.188.51 TCP us-central1/targetPools/frontend
```
In GCE, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion` (replace with your tags as appropriate):
In Google Compute Engine, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion` (replace with your tags as appropriate):
```shell
$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-minion kubernetes-minion-80
```
For GCE details about limiting traffic to specific sources, see the [GCE firewall documentation][gce-firewall-docs].
For Google Compute Engine details about limiting traffic to specific sources, see the [Google Compute Engine firewall documentation][gce-firewall-docs].
[cloud-console]: https://console.developer.google.com
[gce-firewall-docs]: https://cloud.google.com/compute/docs/networking#firewalls

View File

@ -10,7 +10,7 @@ then edit */etc/iscsi/initiatorname.iscsi* and */etc/iscsi/iscsid.conf* to match
I mostly followed these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi) to setup iSCSI target. and these [instructions](http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2) to setup iSCSI initiator.
**Setup B.** On Unbuntu 12.04 and Debian 7 nodes on GCE
**Setup B.** On Ubuntu 12.04 and Debian 7 nodes on Google Compute Engine (GCE)
GCE does not provide a preconfigured Fedora 21 image, so I set up the iSCSI target on a preconfigured Ubuntu 12.04 image, mostly following these [instructions](http://www.server-world.info/en/note?os=Ubuntu_12.04&p=iscsi). My Kubernetes cluster on GCE was running Debian 7 images, so I followed these [instructions](http://www.server-world.info/en/note?os=Debian_7.0&p=iscsi&f=2) to set up the iSCSI initiator.

View File

@ -34,8 +34,8 @@ Next, start up a Kubernetes cluster:
wget -q -O - https://get.k8s.io | bash
```
Please see the [GCE getting started
guide](http://docs.k8s.io/getting-started-guides/gce.md) for full
Please see the [Google Compute Engine getting started
guide](../../docs/getting-started-guides/gce.md) for full
details and other options for starting a cluster.
Build a container for your Meteor app
@ -139,7 +139,7 @@ kubectl get services/meteor --template="{{range .status.loadBalancer.ingress}} {
```
You will have to open up port 80 if it's not open yet in your
environment. On GCE, you may run the below command.
environment. On Google Compute Engine, you may run the command below.
```
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
```

View File

@ -8,7 +8,7 @@ We'll create two Kubernetes [pods](http://docs.k8s.io/pods.md) to run mysql and
This example demonstrates several useful things, including: how to set up and use persistent disks with Kubernetes pods; how to define Kubernetes services to leverage docker-links-compatible service environment variables; and use of an external load balancer to expose the wordpress service externally and make it transparent to the user if the wordpress pod moves to a different cluster node.
## Get started on Google Compute Engine
## Get started on Google Compute Engine (GCE)
Because we're using the `GCEPersistentDisk` type of volume for persistent storage, this example is only applicable to [Google Compute Engine](https://cloud.google.com/compute/). Take a look at the [volumes documentation](/docs/volumes.md) for other options.
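The disk itself must exist before a pod can mount it; as a sketch (the disk name, size, and zone are illustrative):

```shell
gcloud compute disks create mysql-disk \
  --size 20GB \
  --zone us-central1-a
```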

View File

@ -7,8 +7,8 @@ This guide assumes knowledge of Kubernetes fundamentals and that you have a clus
## Provisioning
A PersistentVolume in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
must first create storage (create their GCE disks, export their NFS shares, etc.) in order for Kubernetes to mount it.
A Persistent Volume (PV) in Kubernetes represents a real piece of underlying storage capacity in the infrastructure. Cluster administrators
must first create storage (create their Google Compute Engine (GCE) disks, export their NFS shares, etc.) in order for Kubernetes to mount it.
PVs are intended for "network volumes" like GCE Persistent Disks, NFS shares, and AWS ElasticBlockStore volumes. ```HostPath``` was included
for ease of development and testing. You'll create a local ```HostPath``` for this example.
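A minimal ```HostPath``` volume along those lines (the name, capacity, and path are illustrative) might look like:

```shell
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data01
EOF
```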

View File

@ -105,7 +105,7 @@ type: LoadBalancer
The external load balancer allows us to access the service from outside via an external IP, which is 104.197.19.120 in this case.
Note that you may need to create a firewall rule to allow the traffic, assuming you are using GCE:
Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine:
```
$ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080
```