Add ELB proxy protocol support via the annotation
"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol". This
allows servers like Nginx and HAProxy to retrieve the real IP address of
a remote client.
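For reference, a minimal sketch (using current client-go types rather than the in-tree API packages of that era) of a LoadBalancer Service carrying the annotation; the Service name, selector, and port here are illustrative, and `"*"` is intended to enable the PROXY protocol for all backend ports:
```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A LoadBalancer Service annotated so the AWS provider turns on the
	// PROXY protocol for its ELB backends.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name: "nginx",
			Annotations: map[string]string{
				"service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "*",
			},
		},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "nginx"},
			Ports: []corev1.ServicePort{
				{Port: 80, TargetPort: intstr.FromInt(80)},
			},
		},
	}
	fmt.Println(svc.Name, svc.Annotations)
}
```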
With the quotes, the service fails to start on systemd 219 with an error
saying the netns path cannot be found.
This PR fixes the bug by removing the quotes surrounding the netns path.
Remove test "5-1", it's flaky as it depends on order of execution of
goroutines. When the controller starts, existing claim is enqueued as
"initial sync event" and a new volume is enqueued to separate goroutine.
It is not deterministic which goroutine processes its events first and
there is no way how to tell that the claim event was processed.
Also, force resync of the controllers after the test to make sure all
events are processed.
Automatic merge from submit-queue
daemonset handle DeletedFinalStateUnknown
During an e2e run in OpenShift we ran into the DS controller panic when handling `DeletedFinalStateUnknown`. This PR checks for `DeletedFinalStateUnknown` and queues the embedded object if it is a `DaemonSet`.
@mikedanese - would you mind taking a look?
@deads2k
```
panic: interface conversion: interface is cache.DeletedFinalStateUnknown, not *extensions.DaemonSet
goroutine 4369 [running]:
k8s.io/kubernetes/pkg/controller/daemon.func·005(0x2f8a0c0, 0xc20b559680)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/daemon/controller.go:160 +0x50
k8s.io/kubernetes/pkg/controller/framework.ResourceEventHandlerFuncs.OnDelete(0xc20a0ae090, 0xc20a0ae0a0, 0xc20a0ae0b0, 0x2f8a0c0, 0xc20b559680)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:178 +0x41
k8s.io/kubernetes/pkg/controller/framework.(*ResourceEventHandlerFuncs).OnDelete(0xc20b8ebf20, 0x2f8a0c0, 0xc20b559680)
<autogenerated>:25 +0xb5
k8s.io/kubernetes/pkg/controller/framework.func·001(0x2f8a280, 0xc20b5522e0, 0x0, 0x0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:248 +0x4be
k8s.io/kubernetes/pkg/controller/framework.(*Controller).processLoop(0xc20bb727e0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:122 +0x6f
k8s.io/kubernetes/pkg/controller/framework.*Controller.(k8s.io/kubernetes/pkg/controller/framework.processLoop)·fm()
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:97 +0x27
k8s.io/kubernetes/pkg/util/wait.func·001()
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:66 +0x61
k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc209f8cfb8, 0x3b9aca00, 0x0, 0xc2080543c0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:67 +0x8f
k8s.io/kubernetes/pkg/util/wait.Until(0xc209f8cfb8, 0x3b9aca00, 0xc2080543c0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/util/wait/wait.go:47 +0x4a
k8s.io/kubernetes/pkg/controller/framework.(*Controller).Run(0xc20bb727e0, 0xc2080543c0)
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/framework/controller.go:97 +0x1fb
created by k8s.io/kubernetes/pkg/controller/daemon.(*DaemonSetsController).Run
/data/src/github.com/openshift/origin/Godeps/_workspace/src/k8s.io/kubernetes/pkg/controller/daemon/controller.go:212 +0xae
```
https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin_check/1002/artifact/origin/artifacts/test-cmd/logs/openshift.log
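A self-contained sketch of the kind of guard described above; the types below are stand-ins for the real cache/extensions packages, not the merged code:
```go
package main

import "fmt"

type DaemonSet struct{ Name string }

// DeletedFinalStateUnknown mirrors the tombstone the informer hands to delete
// handlers when the final object state was missed.
type DeletedFinalStateUnknown struct {
	Key string
	Obj interface{}
}

type controller struct{ queue []string }

func (c *controller) enqueue(ds *DaemonSet) { c.queue = append(c.queue, ds.Name) }

// deleteDaemonSet unwraps the tombstone instead of blindly asserting
// *DaemonSet, which is what caused the panic above.
func (c *controller) deleteDaemonSet(obj interface{}) {
	ds, ok := obj.(*DaemonSet)
	if !ok {
		tombstone, ok := obj.(DeletedFinalStateUnknown)
		if !ok {
			fmt.Printf("couldn't get object from tombstone %+v\n", obj)
			return
		}
		ds, ok = tombstone.Obj.(*DaemonSet)
		if !ok {
			fmt.Printf("tombstone contained object that is not a DaemonSet %+v\n", obj)
			return
		}
	}
	c.enqueue(ds)
}

func main() {
	c := &controller{}
	c.deleteDaemonSet(DeletedFinalStateUnknown{Key: "ns/ds1", Obj: &DaemonSet{Name: "ds1"}})
	fmt.Println(c.queue)
}
```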
Automatic merge from submit-queue
ScheduledJob validation
@erictune while playing with this earlier today I noticed `suspend` isn't a pointer, which requires it to be set. Additionally, the validation for job selectors is too strict in that it requires the selector to match produced pods, which doesn't make sense for ScheduledJob; I've changed setting it to be forbidden entirely.
Automatic merge from submit-queue
Reduce volume controller sync period
Fixes #24236 and most probably also fixes #25294.
Needs #25881! With the cache, binder is not affected by sync period. Without the cache, binding of 1000 PVCs takes more than 5 minutes (instead of ~70 seconds).
15 seconds were chosen by fair 2d10 roll :-)
The controller needs to fill its caches before it starts binding/recycling/
deleting or provisioning volumes and claims. This was previously done by
blocking the initial 'xxx added' events from going through syncClaim/syncVolume.
However, once the caches were full, the controller waited for the next sync
period to do the actual binding/recycling etc.
In this patch, the controller fills its caches directly from etcd and then
processes initial 'xxx added' events to reconcile the world and bind/recycle/
delete/provision stuff, resulting in faster binding after startup.
Fixes #25967 (properly)
Automatic merge from submit-queue
volume controller: Add cache with the latest version of PVs and PVCs
When the controller binds a PV to PVC, it saves both objects to etcd. However, there is still an old version of these objects in the controller Informer cache. So, when a new PVC comes, the PV is still seen as available and may get bound to the new PVC. This will be blocked by etcd, still, it creates unnecessary traffic that slows everything down.
To make everything worse, when periodic sync with the old PVC is performed, this PVC is seen by the controller as Pending (while it's already Bound on etcd) and will be bound to a different PV. Writing to this PV won't be blocked by etcd, only subsequent write of the PVC fails. So, the controller will need to roll back the PV in another transaction(s). The controller can keep itself pretty busy this way.
Also, we save bound PVs (and PVCs) as two transactions - we save say PV.Spec first and then .Status. The controller gets "PV.Spec updated" event from etcd and tries to fix the Status, as it seems to the controller it's outdated. This write again fails - there already is a correct version in etcd.
As we can't influence the Informer cache (it is read-only to the controller), this patch introduces a second cache in the controller, which holds the latest and greatest version of PVs and PVCs to prevent these useless writes to etcd. It gets updated with events from etcd *and* after etcd confirms a successful save of a PV/PVC modified by the controller.
The cache stores only *pointers* to PVs/PVCs, so in ideal case it shares the actual object data with the informer cache. They will diverge only for a short time when the controller modifies something and the informer cache did not get update events yet.
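A minimal sketch of the idea behind such a cache, assuming a simple key/resourceVersion interface rather than the real PV/PVC types:
```go
package volumecache

import (
	"fmt"
	"strconv"
	"sync"
)

// versioned is the minimal interface the cache needs: a key and the
// resource version assigned by etcd.
type versioned interface {
	Key() string
	ResourceVersion() string
}

// latestCache keeps only the newest known copy of each object. Both informer
// events and the controller's own confirmed writes are fed through Update, so
// reads never return a stale "available" PV after it has already been bound.
type latestCache struct {
	mu    sync.Mutex
	items map[string]versioned
}

func newLatestCache() *latestCache {
	return &latestCache{items: make(map[string]versioned)}
}

// Update stores obj unless the cache already holds the same or a newer
// resource version. It reports whether the stored entry changed.
func (c *latestCache) Update(obj versioned) (bool, error) {
	newRV, err := strconv.ParseInt(obj.ResourceVersion(), 10, 64)
	if err != nil {
		return false, fmt.Errorf("bad resource version %q: %v", obj.ResourceVersion(), err)
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if old, ok := c.items[obj.Key()]; ok {
		oldRV, err := strconv.ParseInt(old.ResourceVersion(), 10, 64)
		if err == nil && newRV <= oldRV {
			return false, nil // already have this version or newer
		}
	}
	c.items[obj.Key()] = obj // store the pointer; data is shared with the informer
	return true, nil
}

func (c *latestCache) Get(key string) (versioned, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	obj, ok := c.items[key]
	return obj, ok
}
```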
@kubernetes/sig-storage
Automatic merge from submit-queue
Move shell completion generation into 'kubectl completion' command
Remove static shell completion scripts from the repo and add `completion` command to kubectl:
```bash
$ source <(kubectl completion bash)
```
or
```bash
$ source <(kubectl completion zsh)
```
This makes maintenance easier because static scripts no longer need to be generated and committed to the repo.
Moreover, kubectl is self-contained again for the user, including the latest completion code. I am thinking about the use-case of updating kubectl via gcloud (or some package manager). The completion code is always in-sync, without the need to download a `contrib/completion/bash/kubectl` file from GitHub.
Opinions are welcome /cc @eparis @nak3
Fixes https://github.com/openshift/origin/issues/5290
Automatic merge from submit-queue
Kubelet: Cache image history to eliminate the performance regression
Fix https://github.com/kubernetes/kubernetes/issues/25057.
The image history operation takes almost 50% of CPU usage in the kubelet performance test. We should cache image history instead of getting it from the runtime every time.
This PR caches image history in imageStatsProvider and adds a unit test.
@yujuhong @vishh
/cc @kubernetes/sig-node
Mark v1.3 because this is a relatively significant performance regression.
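A minimal sketch of the caching idea (not the actual kubelet code), with a stubbed runtime query standing in for the real image history call:
```go
package imagestats

import (
	"sync"
	"time"
)

// historyEntry records when an image was first observed; the real provider
// would cache the runtime's image history response instead.
type historyEntry struct {
	firstSeen time.Time
}

type imageStatsProvider struct {
	mu      sync.Mutex
	history map[string]historyEntry // keyed by image ID
}

func newImageStatsProvider() *imageStatsProvider {
	return &imageStatsProvider{history: make(map[string]historyEntry)}
}

// firstSeen returns the cached timestamp for imageID, querying the (stubbed)
// runtime only on a cache miss, so the expensive call runs once per image.
func (p *imageStatsProvider) firstSeen(imageID string, queryRuntime func(string) time.Time) time.Time {
	p.mu.Lock()
	defer p.mu.Unlock()
	if e, ok := p.history[imageID]; ok {
		return e.firstSeen
	}
	t := queryRuntime(imageID)
	p.history[imageID] = historyEntry{firstSeen: t}
	return t
}
```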
Automatic merge from submit-queue
kubectl: cast scale errors to actual errors when deleting
Fixes some of the deployment reaper timeouts in e2e
@kubernetes/deployment @soltysh
Automatic merge from submit-queue
Do not call NewFlannelServer() unless flannel overlay is enabled
Ref: #26093
This makes it so the kubelet does not warn the user that iptables isn't in PATH when the user hasn't enabled the flannel overlay.
@vishh @freehan @bprashanth
Automatic merge from submit-queue
Add assert.NotNil for test case
I hardcoded the `DefaultInterfaceName` from `eth0` to `eth-k8sdefault` in release 1.2.0 in order to test my CNI plugins. When running the test, it panics and prints wrongly formatted messages, as shown below.
In the test case `TestBuildSummary`, `containerInfoV2ToNetworkStats` returns `nil` if `DefaultInterfaceName` is not `eth0`, so maybe we should add `assert.NotNil` to the test case.
```
ok k8s.io/kubernetes/pkg/kubelet/server 0.591s
W0523 03:25:28.257074 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=node:FooNode)
W0523 03:25:28.257322 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test0_pod1)
W0523 03:25:28.257361 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test0_pod0)
W0523 03:25:28.257419 2257 summary.go:311] Missing default interface "eth-k8sdefault" for s%!(EXTRA string=pod:test2_pod0)
--- FAIL: TestBuildSummary (0.00s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x471817]
goroutine 16 [running]:
testing.func·006()
/usr/src/go/src/testing/testing.go:441 +0x181
k8s.io/kubernetes/pkg/kubelet/server/stats.checkNetworkStats(0xc20806d3b0, 0x140bbc0, 0x4, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/stats/summary_test.go:296 +0xc07
k8s.io/kubernetes/pkg/kubelet/server/stats.TestBuildSummary(0xc20806d3b0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/stats/summary_test.go:124 +0x11d2
testing.tRunner(0xc20806d3b0, 0x1e43180)
/usr/src/go/src/testing/testing.go:447 +0xbf
created by testing.RunTests
/usr/src/go/src/testing/testing.go:555 +0xa8b
```
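A minimal illustration of the suggested guard, assuming testify's `assert` package and a stand-in for `containerInfoV2ToNetworkStats`:
```go
package stats

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

type networkStats struct {
	RxBytes uint64
}

// buildNetworkStats stands in for containerInfoV2ToNetworkStats, which
// returns nil when the expected default interface is missing.
func buildNetworkStats(iface string) *networkStats {
	if iface != "eth0" {
		return nil
	}
	return &networkStats{RxBytes: 100}
}

func TestBuildNetworkStats(t *testing.T) {
	stats := buildNetworkStats("eth0")
	// Fail fast with a clear message instead of panicking on a nil dereference.
	if !assert.NotNil(t, stats) {
		return
	}
	assert.Equal(t, uint64(100), stats.RxBytes)
}
```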
Automatic merge from submit-queue
Setting TLS1.2 minimum because TLS1.0 and TLS1.1 are vulnerable
TLS 1.0 is known to be vulnerable because it can be downgraded to SSL:
https://blog.varonis.com/ssl-and-tls-1-0-no-longer-acceptable-for-pci-compliance/
TLS 1.1 can be vulnerable if the RC4-SHA cipher is used, and in Kubernetes it is; you can check with
`openssl s_client -cipher RC4-SHA -connect apiserver.k8s.example.com:443`
https://www.globalsign.com/en/blog/poodle-vulnerability-expands-beyond-sslv3-to-tls/
Test suites like Qualys report this Kubernetes issue as a level 3 vulnerability and recommend upgrading to TLS 1.2, which is not affected. Quoting Qualys:
> RC4 should not be used where possible. One reason that RC4 was still being used was BEAST and Lucky13 attacks against CBC mode ciphers in SSL and TLS. However, TLS 1.2 or later addresses these issues.
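For illustration, a minimal Go sketch (not the kube-apiserver code) of enforcing TLS 1.2 as the minimum accepted version; the address and certificate paths are placeholders:
```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12, // reject TLS 1.0/1.1 handshakes
		},
	}
	// Placeholder certificate and key paths, for illustration only.
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```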
Automatic merge from submit-queue
Fix panic when the namespace flag is not present
We don't set the namespace in OpenShift, so we need to check if the namespace flag is present.
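A sketch of such a guard, assuming spf13/cobra flags; the helper name `getNamespace` is hypothetical, not the kubectl code:
```go
package cmdutil

import "github.com/spf13/cobra"

// getNamespace only reads the namespace flag if the command actually defines
// it, avoiding the nil-pointer panic when a caller (e.g. OpenShift) does not
// register the flag at all.
func getNamespace(cmd *cobra.Command) string {
	if cmd.Flags().Lookup("namespace") == nil {
		return ""
	}
	ns, _ := cmd.Flags().GetString("namespace")
	return ns
}
```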
Automatic merge from submit-queue
Various kubenet fixes (panics and bugs and cidrs, oh my)
This PR fixes the following issues:
1. Corrects an inverse error-check that prevented `shaper.Reset` from ever being called with a correct ip address
2. Fix an issue where `parseCIDR` would fail after a kubelet restart due to an IP being stored instead of a CIDR being stored in the cache.
3. Fix an issue where kubenet could panic in TearDownPod if it was called before SetUpPod (e.g. after a kubelet restart). Because of bug 1, this didn't happen except in rare situations (see 2 for why such a rare situation might happen)
This adds a test, but more would definitely be useful.
The commits are also granular enough I could split this up more if desired.
I'm also not super-familiar with this code, so review and feedback would be welcome.
Testing done:
```
$ cat examples/egress/egress.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: egress
  name: egress-output
  annotations: {"kubernetes.io/ingress-bandwidth": "300k"}
spec:
  restartPolicy: Never
  containers:
  - name: egress
    image: busybox
    command: ["sh", "-c", "sleep 60"]
$ cat kubelet.log
...
Running: tc filter add dev cbr0 protocol ip parent 1:0 prio 1 u32 match ip dst 10.0.0.5/32 flowid 1:1
# setup
...
Running: tc filter del dev cbr0 parent 1:proto ip prio 1 handle 800::800 u32
# teardown
```
I also did various other bits of manual testing and logging to hunt down the panic and other issues, but don't have anything to paste for that.
cc @dcbw @kubernetes/sig-network
Automatic merge from submit-queue
Add a NodeCondition "NetworkUnavaiable" to prevent scheduling onto a node until the routes have been created
This is a new version of #26267 (based on top of that one).
The new workflow is:
- we have a "NetworkNotReady" condition
- when the Kubelet creates a node, it sets the condition to "true"
- the RouteController sets it to "false" once the route is created
- the Scheduler schedules only onto nodes that don't have the "NetworkNotReady == true" condition
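A sketch of the workflow using current client-go types; the helper functions and reasons below are illustrative, not the merged implementation:
```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newNetworkCondition sketches the condition attached when a node registers
// before its routes exist (status "true"), and how the route controller would
// flip it to "false" afterwards.
func newNetworkCondition(routesCreated bool) corev1.NodeCondition {
	status := corev1.ConditionTrue // network still unavailable
	reason := "NoRouteCreated"
	if routesCreated {
		status = corev1.ConditionFalse
		reason = "RouteCreated"
	}
	return corev1.NodeCondition{
		Type:               corev1.NodeNetworkUnavailable,
		Status:             status,
		Reason:             reason,
		LastTransitionTime: metav1.NewTime(time.Now()),
	}
}

// schedulable mirrors the scheduler-side check: skip nodes that still report
// the network as unavailable.
func schedulable(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeNetworkUnavailable && c.Status == corev1.ConditionTrue {
			return false
		}
	}
	return true
}

func main() {
	node := &corev1.Node{}
	node.Status.Conditions = append(node.Status.Conditions, newNetworkCondition(false))
	fmt.Println("schedulable:", schedulable(node))
}
```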
@gmarek @bgrant0607 @zmerlynn @cjcullen @derekwaynecarr @danwinship @dcbw @lavalamp @vishh
Automatic merge from submit-queue
rkt: Fix panic in setting ReadOnlyRootFS
What the title says. I wish this method were broken out in a reasonably unit-testable way; fixing this panic is more important for the moment though, and testing will come in a later commit.
I observed the panic in a `./hack/local-up-cluster.sh` run with rkt as the container runtime.
This is also the panic that's failing our Jenkins against master (see the kubelet.log output from a [recent run](https://console.cloud.google.com/m/cloudstorage/b/rktnetes-jenkins/o/logs/kubernetes-e2e-gce/1946/artifacts/jenkins-e2e-minion-group-qjh3/kubelet.log)).
cc @tmrts @yifan-gu
Automatic merge from submit-queue
Sort revisions in rollout history as integers
Previously keys were sorted as strings, so it was possible to see an order such as 1, 10, 2, 3, 4, 5.
fixes: #25788
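A minimal sketch of sorting revision keys numerically instead of lexicographically (illustrative code, not the kubectl implementation):
```go
package main

import (
	"fmt"
	"sort"
	"strconv"
)

// sortedRevisions converts string revision keys to integers and sorts them,
// so "10" no longer sorts before "2".
func sortedRevisions(history map[string]string) []int64 {
	revisions := make([]int64, 0, len(history))
	for k := range history {
		if r, err := strconv.ParseInt(k, 10, 64); err == nil {
			revisions = append(revisions, r)
		}
	}
	sort.Slice(revisions, func(i, j int) bool { return revisions[i] < revisions[j] })
	return revisions
}

func main() {
	history := map[string]string{"1": "a", "2": "b", "10": "c"}
	fmt.Println(sortedRevisions(history)) // [1 2 10]
}
```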
Automatic merge from submit-queue
rkt: Pass through podIP
This is needed for the /etc/hosts mount and the downward API to work.
Furthermore, this is required for the reported `PodStatus` to be
correct.
The `Status` bit mostly worked prior to #25062, and this restores that
functionality in addition to the new functionality.
In retrospect, the regression in status is large enough that the prior PR should have included at least some of this; my bad for not realizing the full implications there.
#25902 is needed for the downward API stuff, but either merge order is fine as neither will break badly by itself.
cc @yifan-gu @dcbw
Automatic merge from submit-queue
Delay flush if the watch queue has pending items
Simple deferral of flush can reduce syscalls when watch queues build up.
Simpler version of #24768. Fixes #24729.
@xiang90 @wojtek-t
Automatic merge from submit-queue
Stabilize map order in kubectl describe
Refs #25251.
Add `SortedResourceNames()` methods to map type aliases in order to achieve stable output order for `kubectl` descriptors.
This affects QoS classes, resource limits, and resource requests.
A few remarks:
1. I couldn't find map usages for described fields other than the ones mentioned above. Then again, I failed to identify those programmatically/systematically. Given pointers, I'd be happy to cover any gaps within this PR or along additional ones.
1. It's somewhat difficult to deterministically test a function that brings reliable ordering to Go maps due to their randomized iteration order. None of the possibilities I came up with (relying on "probabilistic testing" against repeatedly created maps, adding complexity through additional interfaces) seemed very appealing to me, so I went with testing my `sort.Interface` implementation and the changed logic in `kubectl.describeContainers()`.
1. It's apparently not possible to implement a single function that sorts any map's keys generically in Go without producing lots of boilerplate: a `map[<key type>]interface{}` is different from any other map type and thus requires explicit iteration on the caller site to convert back and forth. Unfortunately, this makes it hard to completely avoid code/test duplication.
Please let me know what you think.
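A minimal sketch of the idea behind `SortedResourceNames()`, using a hypothetical map alias rather than the real resource types:
```go
package main

import (
	"fmt"
	"sort"
)

// resourceList is a hypothetical stand-in for the described map type aliases.
type resourceList map[string]string

// SortedResourceNames returns the map's keys in a stable, sorted order so the
// describer output no longer depends on Go's randomized map iteration.
func (rl resourceList) SortedResourceNames() []string {
	names := make([]string, 0, len(rl))
	for name := range rl {
		names = append(names, name)
	}
	sort.Strings(names)
	return names
}

func main() {
	limits := resourceList{"memory": "128Mi", "cpu": "100m"}
	for _, name := range limits.SortedResourceNames() {
		fmt.Printf("%s: %s\n", name, limits[name])
	}
}
```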
Automatic merge from submit-queue
Round should avoid clearing s, save a string
Instead of saving bytes, save a string, which makes String() faster
and does not unduly penalize marshal. During parse, save the string
if it is in canonical form.
@wojtek-t @lavalamp this makes quantity.String() faster for a few cases
where it matters. We were also not clearing s properly before on Round()
Automatic merge from submit-queue
Make UnsafeConversion fast by inlining copies
Not ready yet (need to add a copy to "safe" conversion and add mutation tests to roundtrip api/serialization_test).
Cuts another 10% off decode and encode.
Automatic merge from submit-queue
Fix system container detection in kubelet on systemd
```release-note
Fix system container detection in kubelet on systemd.
This fixes environments where CPU and memory accounting were not enabled on the unit
that launched the kubelet or docker, which previously caused the root cgroup to be
reported when monitoring usage stats for those components.
```
Fixes https://github.com/kubernetes/kubernetes/issues/25909
/cc @kubernetes/sig-node @kubernetes/rh-cluster-infra @vishh @dchen1107
Automatic merge from submit-queue
rkt: Use volumes from RunContainerOptions
This replaces the previous creation of mounts from the `volumeGetter`
with mounts provided via RunContainerOptions.
This is motivated by the fact that the latter has a more complete set of
mounts (e.g. the `/etc/hosts` one created in kubelet.go when an IP is available).
This does not induce further e2e failures as far as I can tell.
cc @yifan-gu
Automatic merge from submit-queue
prevent namespace cleanup hotloop
Found while chasing a Sentry report. Looks like we hot-loop on namespace deletion failures.
@derekwaynecarr ptal
Automatic merge from submit-queue
kube-controller-manager: Add configure-cloud-routes option
This allows kube-controller-manager to allocate CIDRs to nodes (with
allocate-node-cidrs=true), but will not try to configure them on the
cloud provider, even if the cloud provider supports Routes.
The default is configure-cloud-routes=true, and it will only try to
configure routes if allocate-node-cidrs is also configured, so the
default behaviour is unchanged.
This is useful because on AWS the cloud provider configures routes by
setting up VPC routing table entries, but there is a limit of 50
entries. So setting configure-cloud-routes=false on AWS would allow us to
continue to allocate node CIDRs as today, but replace the VPC
route-table mechanism with something not limited to 50 nodes.
We can't just turn off the cloud-provider entirely because it also
controls other things - node discovery, load balancer creation etc.
Fix #25602
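A sketch of the intended flag semantics, using the standard `flag` package purely for illustration (the real flags live in kube-controller-manager's option handling):
```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	allocateNodeCIDRs := flag.Bool("allocate-node-cidrs", false, "allocate per-node CIDRs")
	configureCloudRoutes := flag.Bool("configure-cloud-routes", true, "program allocated CIDRs into cloud provider routes")
	flag.Parse()

	// Routes are only configured when both options are enabled, so the
	// default behaviour (both on) is unchanged.
	if *allocateNodeCIDRs && *configureCloudRoutes {
		fmt.Println("configuring cloud routes for allocated node CIDRs")
	} else {
		fmt.Println("skipping cloud route configuration")
	}
}
```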
Automatic merge from submit-queue
Expose GET and PATCH for status subresource
We can do this for other status subresources. I only updated node/status in this PR to unblock https://github.com/kubernetes/node-problem-detector/issues/9.
cc @Random-Liu @lavalamp
The length of an IP can be 4 or 16 bytes, and even at 16 it can be a valid
IPv4 address. This check is the more correct way to handle this, and it
also provides more granular error messages.
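A small sketch of that style of check, using `net.IP.To4` rather than the length of the slice; the helper name is illustrative:
```go
package iputil

import (
	"fmt"
	"net"
)

// ensureIPv4 accepts an address in either 4-byte or 16-byte form and returns
// its 4-byte representation, with a specific error for each failure mode.
func ensureIPv4(ip net.IP) (net.IP, error) {
	if ip == nil {
		return nil, fmt.Errorf("nil IP address")
	}
	if ip4 := ip.To4(); ip4 != nil {
		return ip4, nil
	}
	return nil, fmt.Errorf("%v is not an IPv4 address", ip)
}
```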
This replaces the previous creation of mounts from the `volumeGetter`
with mounts provided via RunContainerOptions.
This is motivated by the fact that the latter has a more complete set of
mounts (e.g. the `/etc/hosts` one created in kubelet.go).
Teardown can run before Setup when the kubelet is restarted... in that
case, the shaper was nil and thus calling the shaper resulted in a panic.
This fixes that by ensuring the shaper is always set... +1 level of
indirection and all that.
Before this change, the podCIDRs map contained both CIDRs and IPs,
depending on which code path entered a container into it.
Specifically, SetUpPod would enter a CIDR while GetPodNetworkStatus
would enter an IP.
This normalizes both of them to always enter just IP addresses.
This also removes the now-redundant CIDR parsing that was used to get
the IP before.
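A small sketch of that normalization, parsing the IP out of a CIDR when that is what a code path has; the function name is illustrative:
```go
package main

import (
	"fmt"
	"net"
)

// normalizeToIP returns just the IP address whether the input is a CIDR
// (as entered by SetUpPod) or a bare IP (as entered by GetPodNetworkStatus),
// so the map always holds one canonical form.
func normalizeToIP(value string) (string, error) {
	if ip, _, err := net.ParseCIDR(value); err == nil {
		return ip.String(), nil
	}
	if ip := net.ParseIP(value); ip != nil {
		return ip.String(), nil
	}
	return "", fmt.Errorf("%q is neither a CIDR nor an IP", value)
}

func main() {
	for _, v := range []string{"10.0.0.5/24", "10.0.0.5"} {
		ip, err := normalizeToIP(v)
		fmt.Println(ip, err)
	}
}
```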