Merge pull request #19793 from MikeSpreitzer/issue/19221

Auto commit by PR queue bot
k8s-merge-robot 2016-01-20 18:52:56 -08:00
commit 4e04a289d8
2 changed files with 5 additions and 5 deletions

@@ -53,10 +53,10 @@ grep -q "^${ETCD_VERSION}\$" binaries/.etcd 2>/dev/null || {
 }
 # k8s
-KUBE_VERSION=${KUBE_VERSION:-"1.1.2"}
+KUBE_VERSION=${KUBE_VERSION:-"1.1.4"}
 echo "Prepare kubernetes ${KUBE_VERSION} release ..."
 grep -q "^${KUBE_VERSION}\$" binaries/.kubernetes 2>/dev/null || {
-curl -L https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v${KUBE_VERSION}/kubernetes.tar.gz -o kubernetes.tar.gz
+curl -L https://github.com/kubernetes/kubernetes/releases/download/v${KUBE_VERSION}/kubernetes.tar.gz -o kubernetes.tar.gz
 tar xzf kubernetes.tar.gz
 pushd kubernetes/server
 tar xzf kubernetes-server-linux-amd64.tar.gz
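
As an illustrative aside (not part of the diff), the script above uses a marker-file guard so binaries are downloaded only once per version. A minimal sketch of that pattern, assuming the marker file is written after a successful unpack (that write sits outside the displayed hunk):

```sh
# Sketch of the download guard shown in the hunk above; the marker write is an assumption.
mkdir -p binaries
KUBE_VERSION=${KUBE_VERSION:-"1.1.4"}

# Skip the download when binaries/.kubernetes already records this exact version.
grep -q "^${KUBE_VERSION}\$" binaries/.kubernetes 2>/dev/null || {
  curl -L "https://github.com/kubernetes/kubernetes/releases/download/v${KUBE_VERSION}/kubernetes.tar.gz" -o kubernetes.tar.gz
  tar xzf kubernetes.tar.gz
  # Assumed step: record the version so the next run can skip the download.
  echo "${KUBE_VERSION}" > binaries/.kubernetes
}
```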

@@ -61,7 +61,7 @@ work, which has been merge into this document.
 Internet to download the necessary files, while worker nodes do not.
 3. These guide is tested OK on Ubuntu 14.04 LTS 64bit server, but it can not work with
 Ubuntu 15 which uses systemd instead of upstart.
-4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.2, may work with higher versions.
+4. Dependencies of this guide: etcd-2.2.1, flannel-0.5.5, k8s-1.1.4, may work with higher versions.
 5. All the remote servers can be ssh logged in without a password by using key authentication.
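
As a hedged aside (not part of the diff), the passwordless login required by item 5 is typically arranged with SSH key authentication; the user and host below are placeholders:

```console
$ ssh-keygen -t rsa
$ ssh-copy-id <user>@<node-ip>
```
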
@@ -78,7 +78,7 @@ $ git clone https://github.com/kubernetes/kubernetes.git
 #### Configure and start the Kubernetes cluster
 The startup process will first download all the required binaries automatically.
-By default etcd version is 2.2.1, flannel version is 0.5.5 and k8s version is 1.1.2.
+By default etcd version is 2.2.1, flannel version is 0.5.5 and k8s version is 1.1.4.
 You can customize your etcd version, flannel version, k8s version by changing corresponding variables
 `ETCD_VERSION` , `FLANNEL_VERSION` and `KUBE_VERSION` like following.
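
For illustration only (not part of the diff), overriding all three versions before running the startup script could look like the following; the values shown are simply the defaults named above:

```console
$ export ETCD_VERSION=2.2.1
$ export FLANNEL_VERSION=0.5.5
$ export KUBE_VERSION=1.1.4
```
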
@@ -93,7 +93,7 @@ $ export ETCD_VERSION=2.2.0
 For users who want to bring up a cluster with k8s version v1.1.1, `controller manager` may fail to start
 due to [a known issue](https://github.com/kubernetes/kubernetes/issues/17109). You could raise it
 up manually by using following command on the remote master server. Note that
-you should do this only after `api-server` is up. Moreover this issue is fixed in v1.1.2.
+you should do this only after `api-server` is up. Moreover this issue is fixed in v1.1.2 and later.
 ```console
 $ sudo service kube-controller-manager start