Improve console output in sharing-clusters.md doc

pull/6/head
Satnam Singh 2015-07-19 01:46:32 +01:00
parent 99d2ea4acc
commit fddcc30683
1 changed file with 24 additions and 24 deletions


@@ -40,38 +40,38 @@ by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.
 **1. Create a cluster**
-```bash
-cluster/kube-up.sh
+```console
+$ cluster/kube-up.sh
 ```
 **2. Copy `kubeconfig` to new host**
-```bash
-scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
+```console
+$ scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
 ```
 **3. On new host, make copied `config` available to `kubectl`**
 * Option A: copy to default location
-```bash
-mv /path/to/.kube/config $HOME/.kube/config
+```console
+$ mv /path/to/.kube/config $HOME/.kube/config
 ```
 * Option B: copy to working directory (from which kubectl is run)
-```bash
-mv /path/to/.kube/config $PWD
+```console
+$ mv /path/to/.kube/config $PWD
 ```
 * Option C: manually pass `kubeconfig` location to `kubectl`
-```bash
+```console
 # via environment variable
-export KUBECONFIG=/path/to/.kube/config
+$ export KUBECONFIG=/path/to/.kube/config
 # via commandline flag
-kubectl ... --kubeconfig=/path/to/.kube/config
+$ kubectl ... --kubeconfig=/path/to/.kube/config
 ```
 ## Manually Generating `kubeconfig`
@@ -79,18 +79,18 @@ kubectl ... --kubeconfig=/path/to/.kube/config
 `kubeconfig` is generated by `kube-up` but you can generate your own
 using (any desired subset of) the following commands.
-```bash
+```console
 # create kubeconfig entry
-kubectl config set-cluster $CLUSTER_NICK
+$ kubectl config set-cluster $CLUSTER_NICK \
     --server=https://1.1.1.1 \
     --certificate-authority=/path/to/apiserver/ca_file \
     --embed-certs=true \
 # Or if tls not needed, replace --certificate-authority and --embed-certs with
-    --insecure-skip-tls-verify=true
+    --insecure-skip-tls-verify=true \
     --kubeconfig=/path/to/standalone/.kube/config
 # create user entry
-kubectl config set-credentials $USER_NICK
+$ kubectl config set-credentials $USER_NICK \
 # bearer token credentials, generated on kube master
     --token=$token \
 # use either username|password or token, not both
@@ -98,11 +98,11 @@ kubectl config set-credentials $USER_NICK
     --password=$password \
     --client-certificate=/path/to/crt_file \
     --client-key=/path/to/key_file \
-    --embed-certs=true
+    --embed-certs=true \
     --kubeconfig=/path/to/standalone/.kubeconfig
 # create context entry
-kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICKNAME --user=$USER_NICK
+$ kubectl config set-context $CONTEXT_NAME --cluster=$CLUSTER_NICKNAME --user=$USER_NICK
 ```
 Notes:
@@ -112,8 +112,8 @@ Notes:
 save config too. In the above commands the `--kubeconfig` file could be
 omitted if you first run
-```bash
-export KUBECONFIG=/path/to/standalone/.kube/config
+```console
+$ export KUBECONFIG=/path/to/standalone/.kube/config
 ```
 * The ca_file, key_file, and cert_file referenced above are generated on the
@@ -135,16 +135,16 @@ and/or run `kubectl config -h`.
 If you create clusters A, B on host1, and clusters C, D on host2, you can
 make all four clusters available on both hosts by running
-```bash
+```console
 # on host2, copy host1's default kubeconfig, and merge it from env
-scp host1:/path/to/home1/.kube/config path/to/other/.kube/config
+$ scp host1:/path/to/home1/.kube/config path/to/other/.kube/config
-export $KUBECONFIG=path/to/other/.kube/config
+$ export KUBECONFIG=path/to/other/.kube/config
 # on host1, copy host2's default kubeconfig and merge it from env
-scp host2:/path/to/home2/.kube/config path/to/other/.kube/config
+$ scp host2:/path/to/home2/.kube/config path/to/other/.kube/config
-export $KUBECONFIG=path/to/other/.kube/config
+$ export KUBECONFIG=path/to/other/.kube/config
 ```
 Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](kubeconfig-file.md).
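The merging described in the final hunk works because kubectl treats `KUBECONFIG` as a colon-separated list of files and merges every file named in it. A minimal sketch of that mechanism (the `/tmp/kube-merge` paths and cluster names are illustrative, and `kubectl` itself is not invoked here):

```shell
# Two standalone kubeconfig files, standing in for the local default
# config and the copy fetched from the other host.
mkdir -p /tmp/kube-merge
cat > /tmp/kube-merge/config-host1 <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: cluster-a
  cluster:
    server: https://1.1.1.1
EOF
cat > /tmp/kube-merge/config-host2 <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: cluster-c
  cluster:
    server: https://2.2.2.2
EOF

# kubectl merges every file listed in KUBECONFIG (colon-separated),
# so entries from both files become visible to a single kubectl.
export KUBECONFIG=/tmp/kube-merge/config-host1:/tmp/kube-merge/config-host2
echo "$KUBECONFIG"
```

With `KUBECONFIG` set this way, `kubectl config view` would show cluster entries from both files, which is what makes all four clusters addressable from either host.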