# Sharing Cluster Access

Client access to a running Kubernetes cluster can be shared by copying the `kubectl` client config bundle ([kubeconfig](kubeconfig-file.md)). This config bundle lives in `$HOME/.kube/config`, and is generated by `cluster/kube-up.sh`. Sample steps for sharing `kubeconfig` below.

**1. Create a cluster**

```bash
cluster/kube-up.sh
```

**2. Copy `kubeconfig` to new host**

```bash
scp $HOME/.kube/config user@remotehost:/path/to/.kube/config
```

**3. On new host, make copied `config` available to `kubectl`**

* Option A: copy to default location

  ```bash
  mv /path/to/.kube/config $HOME/.kube/config
  ```

* Option B: copy to working directory (from which `kubectl` is run)

  ```bash
  mv /path/to/.kube/config $PWD
  ```

* Option C: manually pass `kubeconfig` location to `kubectl`

  ```bash
  # via environment variable
  export KUBECONFIG=/path/to/.kube/config

  # via commandline flag
  kubectl ... --kubeconfig=/path/to/.kube/config
  ```

## Manually Generating `kubeconfig`

`kubeconfig` is generated by `kube-up`, but you can generate your own using (any desired subset of) the following commands.
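The copy-and-install flow in steps 2 and 3 can be sketched end to end with plain file operations. The `/tmp/shared` path and the placeholder file content below are hypothetical stand-ins for a real config bundle copied over with `scp`:

```shell
# Hypothetical stand-in for the bundle copied from the original host.
SHARED=/tmp/shared/.kube/config
mkdir -p "$(dirname "$SHARED")"
printf 'apiVersion: v1\nkind: Config\n' > "$SHARED"  # placeholder content

# Option A: install at the default location kubectl checks.
mkdir -p "$HOME/.kube"
cp "$SHARED" "$HOME/.kube/config"

# Option C: or point kubectl at the copy directly via the environment.
export KUBECONFIG="$SHARED"
echo "kubectl will load config from: $KUBECONFIG"
```

Either option on its own is sufficient; Option C is handy when you want to keep the shared config separate from a config you already have.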
```bash
# create kubeconfig entry
kubectl config set-cluster $CLUSTER_NICK \
    --server=https://1.1.1.1 \
    --certificate-authority=/path/to/apiserver/ca_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# Or, if tls is not needed, replace --certificate-authority and --embed-certs
# with --insecure-skip-tls-verify=true.

# create user entry; use either token or username/password, not both
kubectl config set-credentials $USER_NICK \
    --token=$token \
    --client-certificate=/path/to/crt_file \
    --client-key=/path/to/key_file \
    --embed-certs=true \
    --kubeconfig=/path/to/standalone/.kube/config
# The bearer token is generated on the kube master. For basic auth instead
# of a token, pass --username=$username --password=$password.

# create context entry
kubectl config set-context $CONTEXT_NAME \
    --cluster=$CLUSTER_NICK \
    --user=$USER_NICK \
    --kubeconfig=/path/to/standalone/.kube/config
```

Notes:

* The `--embed-certs` flag is needed to generate a standalone `kubeconfig` that will work as-is on another host.
* `--kubeconfig` is both the preferred file to load config from and the file to save config to. In the above commands the `--kubeconfig` flag could be omitted if you first run

  ```bash
  export KUBECONFIG=/path/to/standalone/.kube/config
  ```

* The `ca_file`, `key_file`, and `cert_file` referenced above are generated on the kube master at cluster turnup. They can be found on the master under `/srv/kubernetes`. Bearer token and basic auth credentials are also generated on the kube master.

For more details on `kubeconfig` see [kubeconfig-file.md](kubeconfig-file.md), and/or run `kubectl config -h`.

## Merging `kubeconfig` Example

`kubectl` loads and merges config from the following locations (in order):

1. `--kubeconfig=path/to/.kube/config` commandline flag
2. `KUBECONFIG=path/to/.kube/config` env variable
3. `$PWD/.kubeconfig`
4.
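The three `kubectl config` commands above all write into the same file. The resulting standalone `kubeconfig` looks roughly like the following sketch, where the entry names stand in for the `$CLUSTER_NICK`, `$USER_NICK`, and `$CONTEXT_NAME` placeholders (`current-context` is only set once you run `kubectl config use-context`):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: cluster_nick                    # from set-cluster ($CLUSTER_NICK)
  cluster:
    server: https://1.1.1.1
    certificate-authority-data: <base64 CA cert>   # embedded by --embed-certs
users:
- name: user_nick                       # from set-credentials ($USER_NICK)
  user:
    token: <bearer token>
    client-certificate-data: <base64 client cert>  # embedded by --embed-certs
    client-key-data: <base64 client key>
contexts:
- name: context_name                    # from set-context ($CONTEXT_NAME)
  context:
    cluster: cluster_nick
    user: user_nick
current-context: context_name           # set by `kubectl config use-context`
```

Because the certificates are embedded as base64 data rather than file paths, this file is self-contained and portable to another host.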
   `$HOME/.kube/config`

If you create clusters A, B on host1, and clusters C, D on host2, you can make all four clusters available on both hosts by running

```bash
# on host2, copy host1's default kubeconfig, and merge it from env
scp host1:/path/to/home1/.kube/config path/to/other/.kube/config
export KUBECONFIG=path/to/other/.kube/config

# on host1, copy host2's default kubeconfig and merge it from env
scp host2:/path/to/home2/.kube/config path/to/other/.kube/config
export KUBECONFIG=path/to/other/.kube/config
```

Detailed examples and explanation of `kubeconfig` loading/merging rules can be found in [kubeconfig-file.md](http://docs.k8s.io/kubeconfig-file.md).
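As a toy illustration of the precedence in the list above (not `kubectl`'s actual implementation), here is a shell sketch of which single location would win when a setting appears in several of them; the function name and paths are invented for the example:

```shell
# Toy sketch of the lookup precedence: flag, then env variable, then
# $PWD/.kubeconfig, then the home-directory default. Invented names.
resolve_kubeconfig() {
    local flag_path="$1"              # value of --kubeconfig, may be empty
    if [ -n "$flag_path" ]; then echo "$flag_path"; return; fi
    if [ -n "$KUBECONFIG" ]; then echo "$KUBECONFIG"; return; fi
    if [ -f "$PWD/.kubeconfig" ]; then echo "$PWD/.kubeconfig"; return; fi
    echo "$HOME/.kube/config"
}

cd "$(mktemp -d)"                     # empty dir: no $PWD/.kubeconfig here
unset KUBECONFIG

resolve_kubeconfig ""                                # default location wins
KUBECONFIG=/tmp/other/config resolve_kubeconfig ""   # env variable wins
resolve_kubeconfig /explicit/path                    # flag beats everything
```

Real `kubectl` merges the config from all of these locations rather than picking just one; the precedence only decides which source wins for conflicting values.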