To prepare for implementing ControllerRef across all controllers,
this pushes the common adopt/orphan logic into ControllerRefManager
so each controller doesn't have to duplicate it.
This also shares the adopt/orphan logic between Pods and ReplicaSets,
so it lives in only one place.
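A minimal sketch of the shared claim flow, with simplified, hypothetical types (the real ControllerRefManager works with metav1.OwnerReferences and issues API calls to adopt or orphan):

```go
package main

import "fmt"

type object struct {
	name     string
	ownerUID string // UID of the controlling owner; "" if orphaned
}

type refManager struct {
	controllerUID string
	matches       func(*object) bool // stand-in for the controller's label selector
}

// claim adopts a matching orphan, orphans a no-longer-matching dependent,
// and reports whether the object ends up owned by this controller.
func (m *refManager) claim(obj *object) bool {
	if obj.ownerUID != "" {
		if obj.ownerUID != m.controllerUID {
			return false // owned by another controller; leave it alone
		}
		if m.matches(obj) {
			return true // already ours and still matching
		}
		obj.ownerUID = "" // ours but no longer matching: orphan it
		return false
	}
	if !m.matches(obj) {
		return false // unowned, but the selector doesn't match
	}
	obj.ownerUID = m.controllerUID // matching orphan: adopt it
	return true
}

func main() {
	m := &refManager{
		controllerUID: "rs-1234",
		matches:       func(o *object) bool { return o.name != "" },
	}
	pod := &object{name: "pod-a"}
	fmt.Println(m.claim(pod), pod.ownerUID) // true rs-1234 (adopted)
}
```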
Adds functionality to search each component of the GOPATH for the
gazel executable, rather than treating the GOPATH as if it were a
single directory.
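Roughly, the lookup iterates over `filepath.SplitList(GOPATH)`; a sketch with illustrative names:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findInGopath checks each GOPATH component for a relative path and
// returns the first match, instead of assuming a single directory.
func findInGopath(rel string) (string, bool) {
	for _, gp := range filepath.SplitList(os.Getenv("GOPATH")) {
		candidate := filepath.Join(gp, rel)
		if info, err := os.Stat(candidate); err == nil && !info.IsDir() {
			return candidate, true
		}
	}
	return "", false
}

func main() {
	if path, ok := findInGopath(filepath.Join("bin", "gazel")); ok {
		fmt.Println("found gazel at", path)
	}
}
```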
Automatic merge from submit-queue (batch tested with PRs 41505, 41484, 41544, 41514, 41022)
Proxy defer on update events
This PR is a series of discrete movements refactoring some of kube-proxy's twistier code in preparation for making it more async. It should be reviewed one commit at a time. Each commit is a smallish movement, which should be easier to examine. I added significant tests along the way, which, unsurprisingly, found some bugs.
Automatic merge from submit-queue (batch tested with PRs 41505, 41484, 41544, 41514, 41022)
several issues hit while trying to make it easy to register APIs
I was trying to create a script that would register all API versions on a given server and ended up hitting several problems. These are the fixes.
@sttts I suspect that I won't be able to continue down the host-network approach, since that means I won't be able to use in-cluster DNS without some finagling. It *could* be set up (and we could make it work as an example), but the simple enablement approach will be hosted on the cluster infrastructure. I'll go back to that.
Automatic merge from submit-queue (batch tested with PRs 41505, 41484, 41544, 41514, 41022)
pkg/api/install: use apimachinery/announce+registered
Make core group a little bit less special.
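For reference, the announce+register pattern in an install package looks roughly like this; the names below are from the 1.6-era `announced` package and should be treated as approximate, not authoritative:

```go
// Inside an install package's init(), assuming the usual imports for
// announced, the group's internal package (api), and its v1 package.
func init() {
	if err := announced.NewGroupMetaFactory(
		&announced.GroupMetaFactoryArgs{
			GroupName:                  api.GroupName,
			VersionPreferenceOrder:     []string{v1.SchemeGroupVersion.Version},
			AddInternalObjectsToScheme: api.AddToScheme,
		},
		announced.VersionToSchemeFunc{
			v1.SchemeGroupVersion.Version: v1.AddToScheme,
		},
	).Announce(groupFactoryRegistry).RegisterAndEnable(registry, scheme); err != nil {
		panic(err)
	}
}
```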
Automatic merge from submit-queue (batch tested with PRs 41505, 41484, 41544, 41514, 41022)
add front proxy to kubeadm created kube-apiservers
The front proxy authenticator configuration has been available for a release or two. It allows a front proxy (secured by mutual TLS auth) to provide user information for a request. The kube-aggregator uses this to securely terminate authentication (it has to terminate TLS and thus client certs) and communicate user info to the backing API servers.
Since the kube-apiserver always verifies the front proxy via a client certificate, this isn't open to abuse unless you already have access to either the signing key or the client cert that kubeadm creates locally. If you got that far, you already owned the box. Therefore, this adds the authenticator unconditionally.
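For context, the front-proxy authenticator is driven by the kube-apiserver's `--requestheader-*` flags; the CA path and allowed name below are illustrative of what kubeadm generates, not prescriptive:

```
kube-apiserver \
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt \
  --requestheader-allowed-names=front-proxy-client \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-extra-headers-prefix=X-Remote-Extra-
```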
@luxas Are there e2e tests for `kubeadm`?
@liggitt @kubernetes/sig-auth-misc
Automatic merge from submit-queue (batch tested with PRs 41505, 41484, 41544, 41514, 41022)
[Federation] Fixes the newly-added test for replicaset deletion upon namespace deletion.
See #41278 for the original test.
**Release note**:
```release-note
NONE
```
- Factor out token constants to kubeadmconstants.
- Move cmd/kubeadm/app/util/{,token/}tokens.go
- Use the token-id, token-secret, etc constants provided by the bootstrapapi package
- Move cmd/kubeadm/app/master/tokens.go to cmd/kubeadm/app/phases/token/csv.go
This refactor makes it possible to hook kubeadm up to the BootstrapSigner controller later on.
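As a sketch of what the bootstrapapi constants buy (the variable names here are hypothetical, and the surrounding imports for `v1`, `metav1`, and `bootstrapapi` are assumed), a token Secret can be built without hardcoded string keys:

```go
// Construct a bootstrap token Secret using the bootstrapapi constants
// rather than literal "token-id"/"token-secret" keys.
secret := &v1.Secret{
	ObjectMeta: metav1.ObjectMeta{
		Name:      bootstrapapi.BootstrapTokenSecretPrefix + tokenID,
		Namespace: metav1.NamespaceSystem,
	},
	Type: bootstrapapi.SecretTypeBootstrapToken,
	Data: map[string][]byte{
		bootstrapapi.BootstrapTokenIDKey:     []byte(tokenID),
		bootstrapapi.BootstrapTokenSecretKey: []byte(tokenSecret),
	},
}
```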
`/var/run` is not world-writable on my OS X 10.11.x setup, so tests that
stand up a secure apiserver fail with the default cert dir. Use a
tempdir instead.
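Something along these lines (helper name and prefix are arbitrary):

```go
import (
	"io/ioutil"
	"os"
	"testing"
)

// newCertDir returns a writable scratch directory for the apiserver's
// serving certs plus a cleanup func, avoiding the /var/run default.
func newCertDir(t *testing.T) (string, func()) {
	dir, err := ioutil.TempDir("", "apiserver-certs")
	if err != nil {
		t.Fatal(err)
	}
	return dir, func() { os.RemoveAll(dir) }
}
```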
This commit converts the HPA controller over to using the new version of
the HorizontalPodAutoscaler object found in autoscaling/v2alpha1. Note
that while the autoscaler will accept requests for object metrics, the
metrics client will return an error on attempts to fetch object metrics
(since that requires the new custom metrics API, which is not yet
implemented).
This also enables the HPA object in v2alpha1 as a retrievable API
version by default.
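For illustration, a v2alpha1 autoscaler targeting average CPU utilization looks roughly like this (import paths per the 1.6-era tree; the values and names are arbitrary):

```go
// Assumed imports:
//   autoscaling "k8s.io/kubernetes/pkg/apis/autoscaling/v2alpha1"
//   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
//   "k8s.io/kubernetes/pkg/api/v1"
minReplicas, targetCPU := int32(2), int32(80)
hpa := autoscaling.HorizontalPodAutoscaler{
	ObjectMeta: metav1.ObjectMeta{Name: "frontend"},
	Spec: autoscaling.HorizontalPodAutoscalerSpec{
		ScaleTargetRef: autoscaling.CrossVersionObjectReference{
			Kind:       "Deployment",
			Name:       "frontend",
			APIVersion: "extensions/v1beta1",
		},
		MinReplicas: &minReplicas,
		MaxReplicas: 10,
		Metrics: []autoscaling.MetricSpec{{
			Type: autoscaling.ResourceMetricSourceType,
			Resource: &autoscaling.ResourceMetricSource{
				Name:                     v1.ResourceCPU,
				TargetAverageUtilization: &targetCPU,
			},
		}},
	},
}
```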
Dead infra containers may still have network resources allocated to
them and may not be GC-ed for a long time. So tear down the old infra
container's networking before SyncPod() restarts it; this prevents
network plugins from carrying the old network details (e.g. IPAM
allocations) over to the new infra container.
The docker runtime doesn't tear down networking when GC-ing pods.
rkt already does, so make docker do it too. To ensure this happens,
networking is always torn down for the container, even if the
container itself is not deleted.
This prevents IPAM from leaking when the pod gets killed for
some reason outside kubelet (like docker restart) or when pods
are killed while kubelet isn't running.
Fixes: https://github.com/kubernetes/kubernetes/issues/14940
Related: https://github.com/kubernetes/kubernetes/pull/35572
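A minimal sketch of the intended ordering, with hypothetical interface and function names (the real code lives in the dockertools GC path):

```go
type networkTeardowner interface {
	TearDownPod(namespace, name, containerID string) error
}

type containerRemover interface {
	RemoveContainer(containerID string) error
}

// evictDeadInfraContainer always releases network resources (e.g. IPAM
// leases) first, and only then optionally removes the container itself.
func evictDeadInfraContainer(net networkTeardowner, runtime containerRemover,
	namespace, name, containerID string, removeContainer bool) error {
	if err := net.TearDownPod(namespace, name, containerID); err != nil {
		return err
	}
	if !removeContainer {
		return nil // networking is torn down even when the container is kept
	}
	return runtime.RemoveContainer(containerID)
}
```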
We need to tear down networking when garbage collecting containers too,
and GC is run from a different goroutine in kubelet. We don't want
container network operations running for the same pod concurrently.
The PluginManager almost duplicates the network plugin interface, but
not quite since the Init() function should be called by whatever
actually finds and creates the network plugin instance. Only then
does it get passed off to the PluginManager.
The Manager synchronizes pod-specific network operations like setup,
teardown, and pod network status. It passes through all other
operations so that runtimes don't have to cache the network plugin
directly, but can use the PluginManager as a wrapper.
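A condensed sketch of the synchronization idea, with simplified types (the real manager wraps the full plugin interface and keys locks by pod full name):

```go
package network

import "sync"

// NetworkPlugin is a trimmed stand-in; Init() is deliberately absent
// because it is called by whoever discovers and constructs the plugin.
type NetworkPlugin interface {
	SetUpPod(namespace, name, containerID string) error
	TearDownPod(namespace, name, containerID string) error
}

// pluginManager serializes operations per pod (e.g. SyncPod vs. GC)
// while letting operations on different pods run in parallel.
type pluginManager struct {
	plugin NetworkPlugin

	podsLock sync.Mutex
	pods     map[string]*podLock
}

type podLock struct {
	refcount uint
	mu       sync.Mutex
}

func (pm *pluginManager) podLockFor(fullPodName string) *sync.Mutex {
	pm.podsLock.Lock()
	defer pm.podsLock.Unlock()
	lock, ok := pm.pods[fullPodName]
	if !ok {
		lock = &podLock{}
		pm.pods[fullPodName] = lock
	}
	lock.refcount++
	return &lock.mu
}

func (pm *pluginManager) podUnlock(fullPodName string) {
	pm.podsLock.Lock()
	defer pm.podsLock.Unlock()
	lock, ok := pm.pods[fullPodName]
	if !ok {
		return
	}
	lock.refcount--
	lock.mu.Unlock()
	if lock.refcount == 0 {
		delete(pm.pods, fullPodName) // drop the lock once nobody holds or waits on it
	}
}

// TearDownPod shows the pass-through pattern; SetUpPod and the pod
// network status call are wrapped the same way.
func (pm *pluginManager) TearDownPod(namespace, name, containerID string) error {
	fullPodName := namespace + "/" + name
	pm.podLockFor(fullPodName).Lock()
	defer pm.podUnlock(fullPodName)
	return pm.plugin.TearDownPod(namespace, name, containerID)
}
```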
Automatic merge from submit-queue (batch tested with PRs 41466, 41456, 41550, 41238, 41416)
Don't use json.Marshal when printing error bodies
Internal types panic when json.Marshal is called on them, to prevent accidental use.
Fixes #40491
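A tiny sketch of the safe alternative (hypothetical helper; assumes `fmt` is imported): format the object instead of re-encoding it.

```go
// describeBody renders an arbitrary response object for an error message.
// fmt's %#v verb does not go through the JSON marshalling path that
// internal API types deliberately panic in.
func describeBody(obj interface{}) string {
	return fmt.Sprintf("%#v", obj)
}
```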
Automatic merge from submit-queue (batch tested with PRs 41466, 41456, 41550, 41238, 41416)
add check to authorization config
Prompt the user to create the config file when using the ABAC or webhook authorization modes.
Automatic merge from submit-queue (batch tested with PRs 41466, 41456, 41550, 41238, 41416)
Revert "Modify load test to not create too may services/endpoints"
Reverts kubernetes/kubernetes#40798
We seem to be ready to go for it now.
Automatic merge from submit-queue (batch tested with PRs 41466, 41456, 41550, 41238, 41416)
Delay Deletion of a Pod until volumes are cleaned up
#41436 fixed the bug that caused #41095 and #40239 to have to be reverted. Now that the bug is fixed, this shouldn't cause problems.
@vishh @derekwaynecarr @sjenning @jingxu97 @kubernetes/sig-storage-misc
Automatic merge from submit-queue
Bump fluentd-gcp google_cloud plugin version
Bump the version of `fluent-plugin-google-cloud` in the fluentd-gcp image, because version `0.5.2` is broken.
Recently, the `google-api-client` gem was updated to version `0.10.0`. The new version broke `fluent-plugin-google-cloud`, which doesn't specify an upper bound on the `google-api-client` version. I'm bumping the plugin version used in our image so that future changes in this release can be run and tested.
This PR doesn't bump the image version itself, since no effective changes have happened; that is left for the next PR.
CC @igorpeshansky