Automatic merge from submit-queue
optimize deleteFromIndices method of thread_safe_store
All methods of thread_safe_store are thread-safe, so I think deleteFromIndices need not run the loop below when the index is nil.
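A minimal sketch of the skip, assuming a simplified threadSafeMap; the types here are stand-ins for the ones in thread_safe_store.go:

```go
package cache

// IndexFunc computes the set of index values for an object.
type IndexFunc func(obj interface{}) ([]string, error)

// Index maps an indexed value to the set of keys stored under it.
type Index map[string]map[string]struct{}

type threadSafeMap struct {
	indexers map[string]IndexFunc // index name -> index function
	indices  map[string]Index     // index name -> populated index
}

// deleteFromIndices removes key from every index. The caller already
// holds the lock; if an index was never populated it is nil, and we can
// skip both the index function call and the inner loop entirely.
func (c *threadSafeMap) deleteFromIndices(obj interface{}, key string) {
	for name, indexFunc := range c.indexers {
		index := c.indices[name]
		if index == nil {
			continue // nothing was ever stored under this index
		}
		indexValues, err := indexFunc(obj)
		if err != nil {
			panic(err)
		}
		for _, v := range indexValues {
			if set := index[v]; set != nil {
				delete(set, key)
			}
		}
	}
}
```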
When kubelet starts a pod that refers to non-existing PV, PVC or Node, it
should clearly show that the requested element does not exist.
Previous "PersistentVolumeClaim 'default/ceph-claim-wm' is not in cache"
looks like random kubelet hiccup, while "PersistentVolumeClaim
'default/ceph-claim-wm' not found" suggests that the object may not exist at
all and it might be a user error.
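For illustration, a hedged sketch of the rewording; the helper below is hypothetical, and only the message format reflects the change:

```go
package main

import "fmt"

// errNotFound is a hypothetical helper; only the message wording
// reflects the change described above.
func errNotFound(kind, namespace, name string) error {
	// Before: "PersistentVolumeClaim 'default/ceph-claim-wm' is not in cache"
	return fmt.Errorf("%s '%s/%s' not found", kind, namespace, name)
}

func main() {
	fmt.Println(errNotFound("PersistentVolumeClaim", "default", "ceph-claim-wm"))
	// Output: PersistentVolumeClaim 'default/ceph-claim-wm' not found
}
```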
Fixes #27523
Refactor storePodsNamespacer.List() and
storeReplicationControllersNamespacer.List(). They are the same
function, just with different signatures.
This fixes a bug where, when we fell back on a brute force approach, we
were still returning an error.
Also change to explicit return without named return values.
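A sketch of the shared shape, under simplified stand-in types; Indexer and Selector here are assumptions, not the real interfaces in listers.go:

```go
package cache

// Indexer and Selector are simplified stand-ins for the real interfaces.
type Indexer interface {
	Index(indexName string, obj interface{}) ([]interface{}, error)
	List() []interface{}
}

type Selector interface {
	Matches(obj interface{}) bool
}

// listInNamespace is the single body both storePodsNamespacer.List and
// storeReplicationControllersNamespacer.List can delegate to. When the
// namespace index lookup fails we fall back to brute force and, crucially,
// return nil instead of the index error. Explicit returns are used rather
// than named return values. (Namespace filtering inside the fallback is
// elided for brevity.)
func listInNamespace(indexer Indexer, namespaceKey interface{}, sel Selector) ([]interface{}, error) {
	items, err := indexer.Index("namespace", namespaceKey)
	if err != nil {
		var ret []interface{}
		for _, item := range indexer.List() {
			if sel.Matches(item) {
				ret = append(ret, item)
			}
		}
		return ret, nil // fallback succeeded; do not surface err
	}
	var ret []interface{}
	for _, item := range items {
		if sel.Matches(item) {
			ret = append(ret, item)
		}
	}
	return ret, nil
}
```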
A recent change tries to separate resync and relist. The motivation
was to avoid triggering relist when a resync is required.
However, the change is not effective since it stops the watcher. As hongchao
mentioned in the original comment, today's storage interface does not deliver
any progress notification to the watch channel. So any watcher that has not
received events for the last few seconds will not be able to catch up from the
previous index after a hard close, since the index of the last received event
falls outside the cache window inside etcd2.
This pull request fixes the issue by not stopping the watcher when a resync is
required.
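A minimal sketch of the resulting loop shape, with simplified stand-ins for the reflector's store and watch types; this is not the exact implementation:

```go
package reflector

import "time"

// Store and Event are simplified stand-ins for the real cache.Store and
// watch.Event types.
type Store interface {
	Resync() error
	Add(obj interface{}) error
}

type Event struct{ Object interface{} }

// watchHandler drains events and periodically resyncs the store in place.
// The key point of the fix: a resync no longer closes the watch, so the
// reflector keeps receiving events from its current position and never has
// to re-list because its index fell out of etcd2's cache window.
func watchHandler(store Store, events <-chan Event, resyncPeriod time.Duration, stopCh <-chan struct{}) error {
	ticker := time.NewTicker(resyncPeriod)
	defer ticker.Stop()
	for {
		select {
		case <-stopCh:
			return nil
		case <-ticker.C:
			if err := store.Resync(); err != nil {
				return err
			}
			// The watch stays open; continue from the last received event.
		case ev, ok := <-events:
			if !ok {
				return nil // watch closed by the server; caller re-lists
			}
			if err := store.Add(ev.Object); err != nil {
				return err
			}
		}
	}
}
```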
Automatic merge from submit-queue
add namespace index for cache
@wojtek-t
Implementing it this way keeps the change to lister.go small, but we would have to replace every `NewInformer()` with `NewIndexerInformer()`, even where namespace filtering is not wanted (e.g. gc_controller and the scheduler). Any suggestions?
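For reference, a minimal sketch of what a namespace index function looks like, with a simplified metadata accessor standing in for the real one:

```go
package cache

import "fmt"

// namespaced is a simplified stand-in for the metadata accessor.
type namespaced interface {
	GetNamespace() string
}

// metaNamespaceIndexFunc indexes objects by namespace, so a lister can
// read a single index bucket instead of scanning every object in the
// store. It would be registered under an index name such as "namespace".
func metaNamespaceIndexFunc(obj interface{}) ([]string, error) {
	m, ok := obj.(namespaced)
	if !ok {
		return nil, fmt.Errorf("object has no namespace: %#v", obj)
	}
	return []string{m.GetNamespace()}, nil
}
```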
Reflectors started from goroutines are broken because Go doesn't allow
runtime.Callers to see the spawning goroutine. Do a best-effort parse of
the call stack for now.
Add logging so that we can easily see which reflectors each process launches,
and measure in logs the frequency of sync intervals.
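A best-effort heuristic along these lines might look like the following sketch; the function name and the internal-package filter are assumptions, not the actual implementation:

```go
package reflector

import (
	"fmt"
	"runtime"
	"strings"
)

// nameFromCallsite walks the current call stack looking for the first
// frame outside the given packages, which is usually the code that
// constructed the reflector. When the reflector is started from a fresh
// goroutine, runtime.Callers cannot see the spawning goroutine's stack,
// so this can only ever be a best-effort heuristic.
func nameFromCallsite(internalPkgs ...string) string {
	pc := make([]uintptr, 16)
	n := runtime.Callers(2, pc) // skip runtime.Callers and ourselves
	frames := runtime.CallersFrames(pc[:n])
	for {
		frame, more := frames.Next()
		internal := false
		for _, pkg := range internalPkgs {
			if strings.Contains(frame.File, pkg) {
				internal = true
				break
			}
		}
		if !internal && frame.File != "" {
			return fmt.Sprintf("%s:%d", frame.File, frame.Line)
		}
		if !more {
			return "unknown (started from a goroutine?)"
		}
	}
}
```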
The message as it is framed right now does not make any sense for the
end users of our system. It might even lead to confusion. So this is an
attempt to make the error message less confusing.
Update the Deployments' API types, defaulting code, conversions, helpers
and validation to use ReplicaSets instead of ReplicationControllers and
LabelSelector instead of map[string]string for selectors.
Also update the Deployment controller, registry, kubectl subcommands,
client listers package and e2e tests to use ReplicaSets and
LabelSelector for Deployments.
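A trimmed sketch of the shape of the API change; fields are reduced to the ones that changed, and the real types carry JSON tags, docs, and more fields:

```go
package extensions

// LabelSelectorRequirement expresses one selector term.
type LabelSelectorRequirement struct {
	Key      string
	Operator string // e.g. In, NotIn, Exists, DoesNotExist
	Values   []string
}

// LabelSelector replaces the plain label map as the selector type.
type LabelSelector struct {
	MatchLabels      map[string]string
	MatchExpressions []LabelSelectorRequirement
}

type DeploymentSpec struct {
	Replicas int32
	// Selector was map[string]string; it is now a full LabelSelector, and
	// the controller manages ReplicaSets instead of ReplicationControllers.
	Selector *LabelSelector
	// Template, Strategy, etc. unchanged.
}
```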
Move type LabelSelector and type LabelSelectorRequirement from pkg/apis/extensions to pkg/api/unversioned.
This avoids an import loop when Job (and later DaemonSet, Deployment, ReplicaSet)
are moved out of extensions to new api groups.
Also move the LabelSelectorAsSelector utility from pkg/apis/extensions/ to pkg/api/unversioned/,
along with its test and the LabelSelectorOp* constants.
The pkg/apis/extensions/validation functions ValidateLabelSelectorRequirement and
ValidateLabelSelector also move to pkg/api/unversioned.
The related type in pkg/apis/extensions/v1beta1/ is staying there. I might move
it in another PR if necessary.
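For illustration, a hedged sketch of what the moved LabelSelectorAsSelector utility does, with simplified stand-ins for the API type and for labels.Selector; MatchExpressions handling is elided:

```go
package unversioned

// LabelSelector is trimmed to MatchLabels for this sketch.
type LabelSelector struct {
	MatchLabels map[string]string
}

// Selector is a simplified stand-in for labels.Selector.
type Selector interface {
	Matches(lbls map[string]string) bool
}

type matchLabels map[string]string

func (m matchLabels) Matches(lbls map[string]string) bool {
	for k, v := range m {
		if lbls[k] != v {
			return false
		}
	}
	return true
}

// labelSelectorAsSelector turns the API LabelSelector into something that
// can match label sets, so callers in any API group can use it without
// importing pkg/apis/extensions (avoiding the import loop above).
func labelSelectorAsSelector(ps *LabelSelector) (Selector, error) {
	if ps == nil {
		return nil, nil // the real utility returns a match-nothing selector here
	}
	return matchLabels(ps.MatchLabels), nil
}
```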