Flocker [1] is an open-source container data volume manager for
Dockerized applications.
This PR adds a volume plugin for Flocker.
The plugin talks to the Flocker Control Service REST API [2] to
attach the volume to the pod.
Each kubelet host should run Flocker agents (Container Agent and Dataset
Agent).
The kubelet will also require environment variables that contain the
host and port of the Flocker Control Service (see the Flocker
architecture [3] for more):
- `FLOCKER_CONTROL_SERVICE_HOST`
- `FLOCKER_CONTROL_SERVICE_PORT`
The contribution introduces a new 'flocker' volume type to the API with
fields:
- `datasetName`: the name of the dataset in Flocker, as recorded in its
metadata;
- `size`: a human-readable size that indicates the maximum size of the
requested dataset.
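As a rough sketch, the volume source could be modeled like this; the Go
struct name, JSON tags, and field placement are assumptions based on the
description above:

```go
package api // illustrative placement

// FlockerVolumeSource sketches the new volume type; the struct name
// and JSON tags are assumptions based on the description above.
type FlockerVolumeSource struct {
	// DatasetName is the name of the dataset, recorded in Flocker
	// metadata, used to look the dataset up via the Control Service.
	DatasetName string `json:"datasetName"`
	// Size is a human-readable maximum size for the requested dataset.
	Size string `json:"size,omitempty"`
}
```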
Full documentation can be found in docs/user-guide/volumes.md, and
examples can be found in the examples/ folder.
[1] https://clusterhq.com/flocker/introduction/
[2] https://docs.clusterhq.com/en/1.3.1/reference/api.html
[3] https://docs.clusterhq.com/en/1.3.1/concepts/architecture.html
Add an experimental network plugin implementation named "cni" that
uses the Container Networking Interface (CNI) specification for
configuring networking for pods.
https://github.com/appc/cni/blob/master/SPEC.md
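For illustration, per the CNI spec a plugin binary receives its
parameters in CNI_* environment variables and the network configuration
JSON on stdin; everything concrete below (binary path, netns path,
config contents) is an assumption:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Per the CNI spec, the runtime passes parameters via CNI_*
	// environment variables and writes the network configuration
	// JSON to stdin. All concrete values here are assumptions.
	cmd := exec.Command("/opt/cni/bin/bridge")
	cmd.Env = append(os.Environ(),
		"CNI_COMMAND=ADD",
		"CNI_CONTAINERID=example-pod-infra",
		"CNI_NETNS=/var/run/netns/example",
		"CNI_IFNAME=eth0",
		"CNI_PATH=/opt/cni/bin",
	)
	cmd.Stdin = strings.NewReader(
		`{"cniVersion": "0.1.0", "name": "example-net", "type": "bridge"}`)
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "CNI ADD failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("CNI result: %s\n", out) // JSON result with assigned IPs
}
```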
Add a HostIPC field to the Pod Spec to create containers sharing
the host's IPC namespace.
This feature must be explicitly enabled in the apiserver using the
host-ipc-sources option.
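A minimal sketch of the API change; the field placement and JSON tag
are assumptions:

```go
package api // illustrative placement

// PodSpec sketch: only the new field is shown; everything else elided.
type PodSpec struct {
	// HostIPC requests the host's IPC namespace for all containers
	// in the pod. Optional; defaults to false.
	HostIPC bool `json:"hostIPC,omitempty"`
}
```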
Signed-off-by: Federico Simoncelli <fsimonce@redhat.com>
1. Add EventRecordQPS and EventBurst parameters to the kubelet.
2. If EventRecordQPS and EventBurst are set, rate limit events in the
kubelet with an independent rate limiter configured to those values.
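A minimal sketch of the intended behavior, using golang.org/x/time/rate
as a stand-in for the kubelet's own rate-limiting utility; the wiring
and values are assumptions:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Values that would come from the new kubelet parameters; the
	// concrete numbers here are illustrative assumptions.
	eventRecordQPS := 5.0 // sustained events per second
	eventBurst := 10      // burst size

	// An independent limiter used only for event recording.
	limiter := rate.NewLimiter(rate.Limit(eventRecordQPS), eventBurst)

	for i := 0; i < 20; i++ {
		if limiter.Allow() {
			fmt.Printf("event %d recorded\n", i)
		} else {
			fmt.Printf("event %d dropped by rate limiter\n", i)
		}
	}
}
```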
Allow the user to specify the resolver configuration file that is used
to determine the default DNS parameters. This defaults to the system's
/etc/resolv.conf.
Currently, whenever there is any update, kubelet would force all pod workers to
sync again, causing resource contention and hence performance degradation.
This commit flips kubelet to use incremental updates (as opposed to snapshots).
This allows us to know what pods have changed and send updates to those pod
workers only. The `SyncPods` function has been replaced with individual
handlers, each handling an operation (ADD, REMOVE, UPDATE). Pod workers are
still triggered periodically, and kubelet performs periodic cleanup as well.
This commit also spawns a new goroutine solely responsible for killing
pods. This is necessary because pod killing could hold up the sync loop
for an indefinitely long time, now that users can define the graceful
termination period in the container spec.
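A rough sketch of the resulting dispatch structure; the handler names
and channel layout are illustrative, not the actual kubelet code:

```go
package main

import (
	"fmt"
	"time"
)

type PodUpdateOp int

const (
	ADD PodUpdateOp = iota
	UPDATE
	REMOVE
)

type PodUpdate struct {
	Op  PodUpdateOp
	Pod string // pod identifier; a full object in the real kubelet
}

func main() {
	updates := make(chan PodUpdate)
	podsToKill := make(chan string)

	// Dedicated goroutine for killing pods, so that graceful
	// termination (which may take arbitrarily long) cannot block
	// the main sync loop.
	go func() {
		for pod := range podsToKill {
			fmt.Println("killing pod:", pod)
			time.Sleep(10 * time.Millisecond) // stands in for graceful termination
		}
	}()

	go func() {
		updates <- PodUpdate{Op: ADD, Pod: "a"}
		updates <- PodUpdate{Op: UPDATE, Pod: "a"}
		updates <- PodUpdate{Op: REMOVE, Pod: "a"}
		close(updates)
	}()

	// Main loop: dispatch each incremental update to its handler
	// instead of re-syncing every pod worker on every change.
	for u := range updates {
		switch u.Op {
		case ADD:
			fmt.Println("HandlePodAdditions:", u.Pod)
		case UPDATE:
			fmt.Println("HandlePodUpdates:", u.Pod)
		case REMOVE:
			podsToKill <- u.Pod
		}
	}
	time.Sleep(50 * time.Millisecond) // let the killer goroutine finish
}
```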
pflag can handle IP addresses, so use the pflag code instead of doing it
ourselves. This means our code just uses net.IP and we don't have all of
the useless casting back and forth!
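For example (the flag name below is illustrative):

```go
package main

import (
	"fmt"
	"net"

	"github.com/spf13/pflag"
)

func main() {
	// pflag parses and validates the address itself, so the rest of
	// the code works with a net.IP and no string casting is needed.
	var bindAddress net.IP
	pflag.IPVar(&bindAddress, "bind-address", net.ParseIP("0.0.0.0"),
		"The IP address to bind to")
	pflag.Parse()
	fmt.Println("binding to:", bindAddress)
}
```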
Split Kubelet startup into two phases: the first initializes a
KubeletConfig that starts no background processes; the second runs
that config. Provide a legacy path that won't impact older callers
while making it easier to customize the interfaces passed to the
Kubelet. This is used by OpenShift to inject custom interfaces into
the Kubelet for config management.
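A hypothetical sketch of the two-phase shape; all names below are made
up for illustration and do not match the real startup code:

```go
package main

import "fmt"

// KubeletConfig is a hypothetical stand-in for the real config type.
type KubeletConfig struct {
	DockerClient string // the real config holds interfaces, not strings
}

// NewKubeletConfig builds the config without starting any background
// processes (phase 1). Hypothetical name.
func NewKubeletConfig() *KubeletConfig {
	return &KubeletConfig{DockerClient: "default"}
}

// RunKubelet starts background processes from a finished config
// (phase 2). Hypothetical name.
func RunKubelet(cfg *KubeletConfig) {
	fmt.Println("running kubelet with docker client:", cfg.DockerClient)
}

func main() {
	cfg := NewKubeletConfig()
	// Between the phases a caller (e.g. OpenShift) can customize the
	// interfaces handed to the Kubelet.
	cfg.DockerClient = "custom"
	RunKubelet(cfg)
}
```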
PR #10643 started adding the DNS names for the Kubernetes master to the
self-signed certs it created. The kubelet uses this same code, and thus
the kubelet cert started claiming it was valid for these names as well.
While harmless, the kubelet cert shouldn't claim to be these things, so
make the caller explicitly list both their IP and DNS subject alt names.
A cert from GCE shows:
- IP Address:23.236.49.122
- IP Address:10.0.0.1
- DNS:kubernetes
- DNS:kubernetes.default
- DNS:kubernetes.default.svc
- DNS:kubernetes.default.svc.cluster.local
- DNS:e2e-test-zml-master
A similarly configured self signed cert shows:
- IP Address:23.236.49.122
- IP Address:10.0.0.1
- DNS:kubernetes
- DNS:kubernetes.default
- DNS:kubernetes.default.svc
So we are missing the FQDN kubernetes.default.svc.cluster.local. The
apiserver does not even know the FQDN; it's defined entirely by the
kubelet! We also do not have the cluster name in the certificate. This
may be the --cluster-name= argument to the apiserver, but it will take
a bit more research.
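For illustration, here is a self-contained sketch of generating a
self-signed cert where the caller explicitly supplies the IP and DNS
subject alt names; the values mirror the example above, and this is not
the actual cert-generation code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// The caller lists subject alt names explicitly instead of the
	// code guessing them; the values mirror the GCE example above.
	template := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "kubernetes"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage: x509.KeyUsageKeyEncipherment |
			x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		IPAddresses: []net.IP{net.ParseIP("10.0.0.1")},
		DNSNames: []string{
			"kubernetes",
			"kubernetes.default",
			"kubernetes.default.svc",
			"kubernetes.default.svc.cluster.local",
		},
		IsCA:                  true,
		BasicConstraintsValid: true,
	}

	der, err := x509.CreateCertificate(rand.Reader, &template, &template,
		&key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```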
Refactor GetNodeHostIP into pkg/util/node (instead of pkg/util, to
break an import cycle).
Include internalIP in GCE NodeAddresses. Remove NodeLegacyHostIP.
Add support for pluggable Docker exec handlers. The default handler is
now Docker's native exec API call. The previous default, nsenter, can be
selected by passing --docker-exec-handler=nsenter when starting the
kubelet.
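As a sketch, the pluggable handler reduces to an interface along these
lines; the signature is a simplified assumption (the real method also
receives the Docker client and container object):

```go
package dockertools // illustrative package name

import "io"

// ExecHandler abstracts how a command is executed inside a running
// container. This signature is a simplified assumption.
type ExecHandler interface {
	ExecInContainer(containerID string, cmd []string, stdin io.Reader,
		stdout, stderr io.WriteCloser, tty bool) error
}

// Two implementations would satisfy it: one calling Docker's native
// exec API (the new default) and one shelling out to nsenter
// (selected with --docker-exec-handler=nsenter).
```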