Automatic merge from submit-queue
controller-manager support number of garbage collector workers to be configurable
The number of garbage collector workers in controller-manager is currently fixed at 5; making it configurable would be more appropriate.
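A minimal sketch of what such a knob could look like using the standard `flag` package; the flag name `concurrent-gc-syncs` and the wiring are assumptions for illustration, not necessarily the PR's actual implementation:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flag; the real controller-manager threads this value
	// through its options struct down to the garbage collector.
	workers := flag.Int("concurrent-gc-syncs", 5,
		"The number of garbage collector workers that are allowed to sync concurrently.")
	flag.Parse()
	fmt.Printf("starting %d garbage collector workers\n", *workers)
}
```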
Automatic merge from submit-queue
Unify logging in generators and avoid annoying logs.
@thockin regarding our discussion this morning
@lavalamp - FYI
This mostly takes the previously checked-in files and removes them, and moves
the generation to be on-demand instead of manual. Manually verified no change
in generated output.
Automatic merge from submit-queue
Deepcopy: avoid struct copies and reflection Call
- make signature of generated deepcopy methods symmetric with `in *type, out *type`, avoiding copies of big structs on the stack
- switch to `in interface{}, out interface{}`, which allows us to call them without `reflect.Call`
The first change reduces runtime of BenchmarkPodCopy-4 from `> 3500ns` to around `2300ns`.
The second change reduces runtime to around `1900ns`.
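A rough sketch of the shape of both changes, with hypothetical names standing in for the generated code:

```go
package main

import "fmt"

// Pod stands in for a large API struct.
type Pod struct {
	Name   string
	Labels map[string]string
}

// Symmetric pointer signature: the struct is never copied by value
// onto the stack.
func deepCopy_Pod(in *Pod, out *Pod) error {
	out.Name = in.Name
	if in.Labels != nil {
		out.Labels = make(map[string]string, len(in.Labels))
		for k, v := range in.Labels {
			out.Labels[k] = v
		}
	} else {
		out.Labels = nil
	}
	return nil
}

// interface{} wrapper: callable through a plain type assertion rather
// than reflect.Call.
var deepCopyFuncs = map[string]func(in, out interface{}) error{
	"Pod": func(in, out interface{}) error {
		return deepCopy_Pod(in.(*Pod), out.(*Pod))
	},
}

func main() {
	src := &Pod{Name: "web", Labels: map[string]string{"app": "web"}}
	dst := &Pod{}
	if err := deepCopyFuncs["Pod"](src, dst); err != nil {
		panic(err)
	}
	fmt.Println(dst.Name, dst.Labels)
}
```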
Automatic merge from submit-queue
Prevent kube-proxy from panicking when sysfs is mounted as read-only.
Fixes https://github.com/kubernetes/kubernetes/issues/25543.
This PR:
* Checks the permission of sysfs before setting the conntrack hashsize, and returns a `readOnlySysFSError` error if sysfs is read-only (see the sketch after this list). As far as I know, this is the only place we need write permission to sysfs; correct me if I'm wrong.
* Updates a new node condition `RuntimeUnhealthy` with a specific reason, a message, and a hint to the administrator about the remediation.
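A minimal sketch of such a check, reusing the `readOnlySysFSError` name from above; this is Linux-specific and not necessarily how the PR implements it:

```go
package main

import (
	"errors"
	"fmt"
	"syscall"
)

var readOnlySysFSError = errors.New("readOnlySysFSError")

// checkSysFSWritable returns readOnlySysFSError when /sys is mounted
// read-only. On Linux the read-only statfs flag has the same value as
// MS_RDONLY.
func checkSysFSWritable() error {
	var fs syscall.Statfs_t
	if err := syscall.Statfs("/sys", &fs); err != nil {
		return err
	}
	if fs.Flags&syscall.MS_RDONLY != 0 {
		return readOnlySysFSError
	}
	return nil
}

func main() {
	if err := checkSysFSWritable(); err != nil {
		fmt.Println("sysfs check failed:", err)
	}
}
```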
I think this should be an acceptable fix for now.
Node problem detector is designed to integrate with different problem daemons, but **the main logic is in the problem detection phase**. After a problem is detected, all node problem detector does is update a node condition.
If we let kube-proxy pass the problem to node problem detector and let node problem detector update the node condition, that looks like an unnecessary hop. The logic in kube-proxy would be no different from this PR, but node problem detector would have to open an unsafe door to other pods because of the lack of an authentication mechanism.
It is a bit hard to test this PR, because we don't really have a misbehaving docker at hand. I can only test it manually:
* If I manually change the code to make it return `readOnlySysFSError`, the node condition is updated:
```
NetworkUnavailable False Mon, 01 Jan 0001 00:00:00 +0000 Fri, 08 Jul 2016 01:36:41 -0700 RouteCreated RouteController created a route
OutOfDisk False Fri, 08 Jul 2016 01:37:36 -0700 Fri, 08 Jul 2016 01:34:49 -0700 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Fri, 08 Jul 2016 01:37:36 -0700 Fri, 08 Jul 2016 01:34:49 -0700 KubeletHasSufficientMemory kubelet has sufficient memory available
Ready True Fri, 08 Jul 2016 01:37:36 -0700 Fri, 08 Jul 2016 01:35:26 -0700 KubeletReady kubelet is posting ready status. WARNING: CPU hardcapping unsupported
RuntimeUnhealthy True Fri, 08 Jul 2016 01:35:31 -0700 Fri, 08 Jul 2016 01:35:31 -0700 ReadOnlySysFS Docker unexpectedly mounts sysfs as read-only for privileged container (docker issue #24000). This causes the critical system components of Kubernetes not properly working. To remedy this please restart the docker daemon.
KernelDeadlock False Fri, 08 Jul 2016 01:37:39 -0700 Fri, 08 Jul 2016 01:35:34 -0700 KernelHasNoDeadlock kernel has no deadlock
Addresses: 10.240.0.3,104.155.176.101
```
* If not, the node condition `RuntimeUnhealthy` won't appear.
* If I run the permission-checking code in an unprivileged container, it does return `readOnlySysFSError`.
I'm not sure whether we want to mark the node as `Unschedulable` when this happens; that would only need a few lines of change, and I can do it if we think we should.
I'll add some unit tests if we think this fix is acceptable.
/cc @bprashanth @dchen1107 @matchstick @thockin @alex-mohr
Mark P1 to match the original issue.
Automatic merge from submit-queue
Enable memory eviction by default
```release-note
Enable memory based pod evictions by default on the kubelet.
Trigger pod eviction when available memory falls below 100Mi.
```
See: https://github.com/kubernetes/kubernetes/issues/28552
/cc @kubernetes/rh-cluster-infra @kubernetes/sig-node
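For reference, the 100Mi threshold above corresponds to the kubelet's hard-eviction setting; configuring it explicitly would look roughly like the following (check the eviction docs for the exact flag spelling in your release):

```
kubelet --eviction-hard=memory.available<100Mi
```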
This fixes PodSpec to generate cleanly. No other types half-generate any more (so we now Fatalf when one does), though several still fail to generate at all (only Errorf for now).
There are ample opportunities to optimize and streamline here. For example,
there's no reason to have a function to convert IntStr to IntStr. Removing the
function does generate the right assignment, but it is unclear whether the
registered function is needed or not. I opted to leave it alone for now.
Another example is Convert_Slice_byte_To_Slice_byte, which just seems silly.
This drives conversion generation from file tags like:
// +conversion-gen=k8s.io/my/internal/version
.. rather than hardcoded lists of packages.
The only net change in generated code can be explained as correct. Previously
it didn't know that conversion was available.
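A minimal sketch of such a file tag in a package's doc.go; the package paths here are hypothetical:

```go
// +conversion-gen=k8s.io/my/internal/version

// Package v1 contains the external API types; the tag above tells the
// conversion generator which internal package these types convert to
// and from.
package v1
```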
This is used subsequently to simplify the conversion generation, so each
package can declare what peer-packages it uses, and have those imported
dynamically, rather than having one mega list of packages to import and not
really being clear why, for any given list item.
Automatic merge from submit-queue
Prep for not checking in generated, part 1/2
This PR is extracted from #25978 - it is just the deep-copy related parts. All the Makefile and conversion stuff is excluded.
@wojtek-t this is literally branched, a bunch of commits deleted, and a very small number of manual fixups applied. If you think this is easier to review (and if it passes CI) you can feel free to go over it again. I will follow this with a conversion-related PR to build on this.
Or if you prefer, just close this and let the mega-PR ride.
@lavalamp
This is the last piece of Clayton's #26179 to be implemented with file tags.
All diffs are accounted for. Followup will use this to streamline some
packages.
Also add some V(5) debugging - it was helpful in diagnosing various issues, and it may be helpful again.
This drives most of the logic of deep-copy generation from tags like:
// +deepcopy-gen=package
..rather than hardcoded lists of packages. This will make it possible to
subsequently generate code ONLY for packages that need it *right now*, rather
than all of them always.
Also remove pkgs that really do not need deep-copies (no symbols used
anywhere).
Previously we just tracked comments on the 'package' declaration. Treat all
file comments as one comment-block, for simplicity. Can be revisited if
needed.
This means that tags like:
// +foo=bar
// +foo=bat
..will produce []string{"bar", "bat"}. This is needed for later commits which
will want to use this to make code generation more self contained.
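A toy illustration of these accumulation semantics, using a hypothetical extraction helper (the real logic lives in the go2idl tag machinery):

```go
package main

import (
	"fmt"
	"strings"
)

// extractTags collects every `// +key=value` comment line into a map
// from key to all values seen, in order, so repeated keys accumulate.
func extractTags(comments []string) map[string][]string {
	tags := map[string][]string{}
	for _, c := range comments {
		c = strings.TrimSpace(strings.TrimPrefix(c, "//"))
		if !strings.HasPrefix(c, "+") {
			continue
		}
		kv := strings.SplitN(strings.TrimPrefix(c, "+"), "=", 2)
		if len(kv) == 2 {
			tags[kv[0]] = append(tags[kv[0]], kv[1])
		}
	}
	return tags
}

func main() {
	tags := extractTags([]string{"// +foo=bar", "// +foo=bat"})
	fmt.Printf("%q\n", tags["foo"]) // ["bar" "bat"]
}
```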
In bringing back Clayton's PR piece-by-piece this was almost as easy to
implement as his version, and is much more like what I think we should be
doing.
Specifically, any type which defines a .DeepCopy() method will have that method
called preferentially. Otherwise we generate our own functions for
deep-copying. This affected exactly one type - resource.Quantity. In applying
this heuristic, several places in the generated code were simplified.
To achieve this I had to convert types.Type.Methods from a slice to a map,
which seems correct anyway (to do by-name lookups).
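A hedged sketch of that heuristic with simplified stand-ins; the real check runs at generation time over go2idl's richer type descriptors:

```go
package main

import "fmt"

// Method is a minimal stand-in for go2idl's method descriptor.
type Method struct{ Name string }

// Type mirrors the change described above: Methods is a map keyed by
// method name, so by-name lookups are direct.
type Type struct {
	Name    string
	Methods map[string]*Method
}

// hasDeepCopyMethod implements the preference: if a type already
// defines DeepCopy(), call it instead of generating a function.
func hasDeepCopyMethod(t *Type) bool {
	_, ok := t.Methods["DeepCopy"]
	return ok
}

func main() {
	quantity := &Type{
		Name:    "resource.Quantity",
		Methods: map[string]*Method{"DeepCopy": {Name: "DeepCopy"}},
	}
	pod := &Type{Name: "api.Pod", Methods: map[string]*Method{}}
	fmt.Println(hasDeepCopyMethod(quantity)) // true: call the method
	fmt.Println(hasDeepCopyMethod(pod))      // false: generate one
}
```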
This re-institutes some of the rolled-back logic from previous commits. It
bounds the scope of what the deepcopy generator is willing to do with regards
to generating and calling generated functions.
His PR came during the middle of this development cycle, and it was easier to
burn it down and recreate it than try to patch it into an existing series and
re-test every assumption. This behavior will be re-introduced in subsequent
commits.
Automatic merge from submit-queue
Cleanup third party (pt 2)
Move forked-and-hacked golang code to the forked/ directory. Remove ast/build/parse code that is now in stdlib. Remove unused shell2junit
Automatic merge from submit-queue
WIP - Handle map[]struct{} in DeepCopy
Deep copy was not properly handling the empty struct case we use for Sets.
@lavalamp I need your expertise when you have some time - the go2idl parser is turning sets.String into the following tree:
```
type: sets.String kind: Alias
underlying: map[string]sets.Empty kind: Map
key: string kind: Builtin
elem: sets.Empty kind: Struct
                       ^ should be Alias
```
Looking at tc.Named, I'm not sure what the expected outcome would be and why you flatten there.
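For context, a small sketch of the empty-struct set idiom at issue, with simplified stand-ins for sets.String and sets.Empty; copying needs no per-value work because the values are empty structs:

```go
package main

import "fmt"

// Empty mirrors sets.Empty: a zero-size value type.
type Empty struct{}

// String mirrors sets.String: a set implemented as map keys.
type String map[string]Empty

// deepCopySet copies the set; only the keys matter, since the Empty
// values carry no data. This is the case the generator mishandled.
func deepCopySet(in String) String {
	if in == nil {
		return nil
	}
	out := make(String, len(in))
	for k := range in {
		out[k] = Empty{}
	}
	return out
}

func main() {
	s := String{"a": {}, "b": {}}
	c := deepCopySet(s)
	delete(s, "a")
	fmt.Println(len(s), len(c)) // 1 2: the copy is independent
}
```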
While testing this fix in OpenShift it was discovered that the
PackageConstraint was overly aggressive: for types that declare a public
copy function it should always return true. PackageConstraint is intended
to limit packages where we might generate deep copy function, rather
than to prevent external packages from being consumed.
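A hedged sketch of the intended predicate, with hypothetical names; the real constraint operates on go2idl type descriptors:

```go
package main

import "fmt"

// Type is a minimal stand-in for the generator's type descriptor.
type Type struct {
	Package       string
	HasPublicCopy bool // declares an exported copy function
}

// packageConstraint limits where deep-copy functions are *generated*;
// it must not stop us from consuming external types that already
// provide a public copy function.
func packageConstraint(allowed map[string]bool, t *Type) bool {
	if t.HasPublicCopy {
		return true // always usable, regardless of package
	}
	return allowed[t.Package]
}

func main() {
	allowed := map[string]bool{"k8s.io/kubernetes/pkg/api": true}
	external := &Type{Package: "external.io/lib", HasPublicCopy: true}
	fmt.Println(packageConstraint(allowed, external)) // true
}
```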
Downstream generators that want to reuse the upstream generated types
need to be able to define a different ignore tag (so that they can see
the already generated types).