Automatic merge from submit-queue

**Add client side event spam filtering**

**What this PR does / why we need it**:

Add client-side event spam filtering to stop excessive traffic to the API server from internal cluster components. This PR defines a per source+object event budget of 25 burst with a refill of 1 token every 5 minutes.

I tested this PR on the following scenarios:

**Scenario 1: Node with 50 crash-looping pods**

```
# create 50 crash-looping pods on a single node
$ kubectl run bad --image=busybox --replicas=50 --command -- derekisbad
```

Before:

* POST events with a peak of 1.7 per second, long-tail: 0.2 per second
* PATCH events with a peak of 5 per second, long-tail: 5 per second

After:

* POST events with a peak of 1.7 per second, long-tail: 0.2 per second
* PATCH events with a peak of 3.6 per second, long-tail: 0.2 per second

Observation:

* https://github.com/kubernetes/kubernetes/pull/47462 capped the total number of events in the long-tail as expected, but did nothing to reduce the overall event spam sent to the master.

**Scenario 2: Replication controller limited by quota**

```
$ kubectl create quota my-quota --hard=pods=1
$ kubectl run nginx --image=nginx --replicas=50
```

Before:

* POST events not relevant, as aggregation worked well here.
* PATCH events with a peak and long-tail of 13.6 per second

After:

* POST events not relevant, as aggregation worked well here.
* PATCH events with a peak of 0.35 per second and a long-tail of 0

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubernetes/issues/47366

**Special notes for your reviewer**:

This was a significant problem in a Kubernetes 1.5 cluster we are running where events were co-located in a single etcd. It was normal for this cluster to have large numbers of unhealthy pods as well as denials by quota.

**Release note**:

```release-note
add support for client-side spam filtering of events
```
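The budget described above behaves like a per-key token bucket: each source+object pair may emit a burst of 25 events, after which capacity refills at one event per 5 minutes (at most 12 events per hour at steady state). The sketch below illustrates that idea in Go using `golang.org/x/time/rate`; the `spamFilter` type, the key format, and the wiring in `main` are illustrative assumptions, not the actual implementation added by this PR.

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"golang.org/x/time/rate"
)

// spamFilter keeps one token bucket per event source+object key.
// Each bucket allows a burst of 25 events and refills one token
// every 5 minutes, mirroring the budget described in the PR.
// (Sketch only; not the client-go code from this PR.)
type spamFilter struct {
	mu      sync.Mutex
	buckets map[string]*rate.Limiter
	burst   int
	refill  time.Duration
}

func newSpamFilter() *spamFilter {
	return &spamFilter{
		buckets: make(map[string]*rate.Limiter),
		burst:   25,
		refill:  5 * time.Minute,
	}
}

// Allow reports whether an event for the given source+object key fits
// within the budget; events over budget are dropped client-side instead
// of being sent to the API server.
func (f *spamFilter) Allow(key string) bool {
	f.mu.Lock()
	defer f.mu.Unlock()
	lim, ok := f.buckets[key]
	if !ok {
		lim = rate.NewLimiter(rate.Every(f.refill), f.burst)
		f.buckets[key] = lim
	}
	return lim.Allow()
}

func main() {
	f := newSpamFilter()
	key := "kubelet/node-1:pod/default/bad-1234" // hypothetical source+object key
	sent, dropped := 0, 0
	for i := 0; i < 100; i++ { // e.g. a crash-looping pod emitting 100 BackOff events
		if f.Allow(key) {
			sent++
		} else {
			dropped++
		}
	}
	fmt.Printf("sent=%d dropped=%d\n", sent, dropped) // first 25 pass, the rest are filtered
}
```

Because the limit is keyed by source+object, a single crash-looping pod can exhaust its own budget without starving events emitted for other objects on the same node.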
# Kubernetes
![](https://github.com/kubernetes/kubernetes/raw/master/logo/logo.png)
Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.
Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.
## To start using Kubernetes
See our documentation on kubernetes.io.
Try our interactive tutorial.
Take a free course on Scalable Microservices with Kubernetes.
## To start developing Kubernetes
The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.
If you want to build Kubernetes right away there are two options:
**You have a working Go environment.**

```
$ go get -d k8s.io/kubernetes
$ cd $GOPATH/src/k8s.io/kubernetes
$ make
```
**You have a working Docker environment.**

```
$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ make quick-release
```
If you are less impatient, head over to the developer's documentation.
## Support
If you need support, start with the troubleshooting guide and work your way through the process that we've outlined.
That said, if you have questions, reach out to us one way or another.