From 429e9bda5ee3f7eb2f0b2416d1a2975e9151d90b Mon Sep 17 00:00:00 2001
From: Paul Morie
Date: Thu, 30 Jul 2015 21:33:00 -0400
Subject: [PATCH] Add information about protections/risks to secrets user guide

---
 docs/user-guide/secrets.md | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/docs/user-guide/secrets.md b/docs/user-guide/secrets.md
index 5cf9c004f4..1262d1664e 100644
--- a/docs/user-guide/secrets.md
+++ b/docs/user-guide/secrets.md
@@ -504,6 +504,9 @@ On most Kubernetes-project-maintained distributions, communication between user
 to the apiserver, and from apiserver to the kubelets, is protected by SSL/TLS.
 Secrets are protected when transmitted over these channels.
 
+Secret data on nodes is stored in tmpfs volumes and thus does not come to rest
+on the node.
+
 There may be secrets for several pods on the same node.  However, only the
 secrets that a pod requests are potentially visible within its containers.
 Therefore, one Pod does not have access to the secrets of another pod.
@@ -515,12 +518,16 @@ Pod level](#use-case-two-containers).
 
 ### Risks
 
+ - In the API server secret data is stored as plaintext in etcd; therefore:
+   - Administrators should limit access to etcd to admin users
+   - Secret data in the API server is at rest on the disk that etcd uses; admins may want to wipe/shred disks
+     used by etcd when no longer in use
  - Applications still need to protect the value of secret after reading it from the volume,
   such as not accidentally logging it or transmitting it to an untrusted party.
  - A user who can create a pod that uses a secret can also see the value of that secret.  Even
   if apiserver policy does not allow that user to read the secret object, the user could
   run a pod which exposes the secret.
- - If multiple replicas of etcd are run, then the secrets will be shared between them.
+ - If multiple replicas of etcd are run, then the secrets will be shared between them.  By default, etcd does not secure peer-to-peer communication with SSL/TLS, though this can be configured.
  - It is not possible currently to control which users of a Kubernetes cluster can access
   a secret.  Support for this is planned.
 
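
For illustration of the tmpfs behavior the first hunk documents, here is a minimal sketch of a pod that consumes a secret through a volume; the pod, container, and secret names (`mypod`, `mycontainer`, `mysecret`) are hypothetical and not taken from the patch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: mycontainer
      image: redis
      volumeMounts:
        # The kubelet backs this secret volume with a tmpfs (RAM-backed)
        # filesystem, so the decoded secret data is not written to node disk.
        - name: foo
          mountPath: /etc/foo
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: mysecret
```

Inside the running container, `df /etc/foo` should report a `tmpfs` filesystem, which is the property the added paragraph relies on.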
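
For the last risk bullet, a sketch of what "this can be configured" might look like for etcd peer-to-peer TLS; the flag names below are those used by etcd 2.x, and the member name, URLs, and certificate paths are placeholders:

```sh
# Hypothetical example: start an etcd member with TLS on the peer interface so
# that replica-to-replica traffic (which carries secret data) is encrypted.
etcd --name infra0 \
  --peer-cert-file=/etc/etcd/peer.crt \
  --peer-key-file=/etc/etcd/peer.key \
  --peer-ca-file=/etc/etcd/ca.crt \
  --initial-advertise-peer-urls=https://10.0.0.10:2380 \
  --listen-peer-urls=https://10.0.0.10:2380
```

Without such flags, data replicated between etcd members travels unencrypted, which is why the patch calls it out as a risk.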