Commit Graph

7491 Commits (7bc6da0b77ee0c79efc4926b604f41f03bee4510)

Author SHA1 Message Date
Chao Xu 262799f91f serve the api in kube-apiserver 2017-05-25 23:55:15 -07:00
Kubernetes Submit Queue 7d37a2685c Merge pull request #45867 from kow3ns/controller-history
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)

Controller history

**What this PR does / why we need it**:
Implements the ControllerRevision API object and clientset to allow for the implementation of StatefulSet update and DaemonSet history

```release-note
ControllerRevision type added for StatefulSet and DaemonSet history.
```
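For context, a minimal Go sketch of the new object's shape; it mirrors how the type later stabilized in the apps API group, so the field details here are illustrative rather than the verbatim definition:

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

// ControllerRevision captures an immutable snapshot of a controller's
// target state (e.g. a StatefulSet's pod template) at a given revision.
type ControllerRevision struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Data holds the serialized state being versioned.
	Data runtime.RawExtension `json:"data,omitempty"`

	// Revision is a monotonically increasing sequence number; controllers
	// roll back by re-applying the Data of an older revision.
	Revision int64 `json:"revision"`
}
```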
2017-05-25 22:42:08 -07:00
Kubernetes Submit Queue 54a47a6f1d Merge pull request #46308 from dashpole/summary_container_restart
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)

Summary Test looks at pods that have containers that restart.

Occasionally, the node reports, through the summary API, extra containers that had been restarted.
This change tests a pod that restarts, which should let us reproduce and debug this behavior.

/assign @dchen1107 

/release-note-none
2017-05-25 22:42:04 -07:00
Kubernetes Submit Queue 59ee250ced Merge pull request #46429 from wojtek-t/bump_go_to_183
Automatic merge from submit-queue (batch tested with PRs 46429, 46308, 46395, 45867, 45492)

Bump Go version to 1.8.3

This PR also removes the patched build of Go 1.8.1 that we had been using to work around a performance problem in Go 1.8.1.

Fix https://github.com/kubernetes/kubernetes/issues/45216
Ref #46391

@timothysc @bradfitz
2017-05-25 22:42:01 -07:00
Kubernetes Submit Queue c60bc53921 Merge pull request #46434 from shyamjvs/kubemark-config-upload
Automatic merge from submit-queue (batch tested with PRs 46124, 46434, 46089, 45589, 46045)

Copy kubeconfig to kubemark master

This should save the effort of digging through the Jenkins agent and its container to get the kubeconfig.
Ideally we would have kubectl working directly on the kubemark master, but I'm hitting issues because an older version of kubectl is present by default on the node.

cc @wojtek-t @gmarek
2017-05-25 21:39:59 -07:00
Kubernetes Submit Queue b8dc4915f7 Merge pull request #46423 from gmarek/fix_perf
Automatic merge from submit-queue (batch tested with PRs 45949, 46009, 46320, 46423, 46437)

Fix performance test issues

Fix #46198
2017-05-25 19:41:04 -07:00
Kubernetes Submit Queue b9416c2c91 Merge pull request #46320 from vmware/e2evSphereStoragePolicySupport
Automatic merge from submit-queue (batch tested with PRs 45949, 46009, 46320, 46423, 46437)

e2e tests for storage policy support in Kubernetes

This PR covers e2e test cases for vSphere storage policy support in Kubernetes - #46176.

The following test scenarios have been implemented (a sketch of the kind of storage class under test follows the release note below):
- Specify only the SPBM storage policy name.
  - Verify the disk is provisioned on a compatible datastore with the most free space.
- Specify a storage policy name that is not defined on vCenter.
  - Verify PVC creation errors out because no pbm profile with this policy is found.
- Specify both an SPBM storage policy name and VSAN capabilities together.
  - Verify PVC creation errors out because an SPBM policy name cannot be combined with VSAN capabilities; only one may be specified.
- Specify an SPBM storage policy name with a user-specified datastore that is non-compatible.
  - Verify PVC creation errors out because it can't provision a disk on a non-compatible datastore.

@jeffvance @divyenpatel

**Release note**:

```release-note
None
```
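As referenced above, a minimal Go sketch (written against the current storage/v1 import path, which postdates this PR) of the kind of StorageClass these scenarios exercise; the `storagePolicyName` parameter selects the SPBM policy, and the policy and datastore names are hypothetical:

```go
package sketch

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleStorageClass provisions vSphere volumes against an SPBM policy.
func exampleStorageClass() *storagev1.StorageClass {
	return &storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "vsphere-spbm"},
		Provisioner: "kubernetes.io/vsphere-volume",
		Parameters: map[string]string{
			"storagePolicyName": "gold", // SPBM policy name defined on vCenter (hypothetical)
			// "datastore": "sharedVmfs-0", // optional user-specified datastore (hypothetical)
		},
	}
}
```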
2017-05-25 19:41:02 -07:00
Kubernetes Submit Queue 470a6a45d5 Merge pull request #45949 from NickrenREN/kubelet-metric
Automatic merge from submit-queue (batch tested with PRs 45949, 46009, 46320, 46423, 46437)

Unregister some metrics

Delete some registered metrics, since they are never observed.


**Release note**:
```release-note
NONE
```
2017-05-25 19:40:58 -07:00
Kenneth Owens ba128e6e41 Implements ControllerRevision API Object without codec and code
generation
2017-05-25 11:38:57 -07:00
Wojciech Tyczynski 3e8c27af34 Bump Go version to 1.8.3 2017-05-25 20:05:34 +02:00
Kubernetes Submit Queue 4a58809d88 Merge pull request #46219 from aleksandra-malinowska/stackdriver-performance-test-2
Automatic merge from submit-queue (batch tested with PRs 45269, 46219, 45966)

Add overriding Stackdriver API endpoint

Allow using a Stackdriver test endpoint.
2017-05-25 07:21:01 -07:00
Kubernetes Submit Queue 26d7ee0447 Merge pull request #44774 from kargakis/uniquifier
Automatic merge from submit-queue

Switch Deployments to new hashing algo w/ collision avoidance mechanism

Implements https://github.com/kubernetes/community/pull/477

@kubernetes/sig-apps-api-reviews @kubernetes/sig-apps-pr-reviews 

Fixes https://github.com/kubernetes/kubernetes/issues/29735
Fixes https://github.com/kubernetes/kubernetes/issues/43948

```release-note
Deployments are updated to use (1) a more stable hashing algorithm (fnv) than the previous one (adler) and (2) a hashing collision avoidance mechanism that will ensure new rollouts will not block on hashing collisions anymore.
```
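A rough Go sketch of the mechanism (simplified: the real implementation deep-hashes the pod template spec and encodes the result for use in resource names):

```go
package sketch

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"

	v1 "k8s.io/api/core/v1"
)

// computeHash sketches the new scheme: an fnv-1a hash of the pod template,
// salted with the Deployment's collision count, so that bumping the count
// on a collision yields a fresh, non-colliding hash.
func computeHash(template *v1.PodTemplateSpec, collisionCount *int32) string {
	hasher := fnv.New32a()
	fmt.Fprintf(hasher, "%#v", template) // the real code deep-hashes the spec
	if collisionCount != nil {
		salt := make([]byte, 4)
		binary.LittleEndian.PutUint32(salt, uint32(*collisionCount))
		hasher.Write(salt)
	}
	return fmt.Sprintf("%x", hasher.Sum32())
}
```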
2017-05-25 06:09:58 -07:00
Shyam Jeedigunta 8f2b4c3b33 Copy kubeconfig to kubemark master 2017-05-25 14:55:28 +02:00
Michail Kargakis 9190a47c37
Generated changes for collision count
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 12:23:17 +02:00
Kubernetes Submit Queue 9c1480bb61 Merge pull request #46366 from nicksardo/gce-subnetwork-url
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

GCE - Retrieve subnetwork name/url from gce.conf 

**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork when the network is in manual (custom) subnet mode.

**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not provided that way. Users must specify the subnetwork name/URL through gce.conf.

Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnet url for creating LBs. 


**Release note**:
```release-note
NONE
```
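A Go sketch of the gce.conf wiring this implies; the gcfg field tags follow the existing `network-name` convention and are assumptions, not verbatim source:

```go
package sketch

// ConfigGlobal sketches the [global] section of gce.conf as the cloud
// provider might parse it with gcfg; tag names here are illustrative.
type ConfigGlobal struct {
	NetworkName    string `gcfg:"network-name"`
	SubnetworkName string `gcfg:"subnetwork-name"` // used when creating LBs on manual-mode networks
}
```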
2017-05-25 03:14:05 -07:00
Kubernetes Submit Queue 23348ceedc Merge pull request #46354 from smarterclayton/metrics_subresource
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

Subresources are not included in apiserver prometheus metrics

Subresources are very often completely different code paths and errors
generated on those code paths are important to distinguish.

@kubernetes/sig-api-machinery-pr-reviews

```release-note
The Prometheus metrics for the kube-apiserver for tracking incoming API requests and latencies now return the `subresource` label for correctly attributing the type of API call.
```
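A hedged Go sketch of a request counter carrying the new `subresource` label; the metric name and label set here are illustrative, not the apiserver's exact definitions:

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// requestCounter distinguishes pods/status from plain pods requests,
// which previously collapsed into one series.
var requestCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "apiserver_request_count",
		Help: "Counter of apiserver requests by verb, resource, subresource, and code.",
	},
	[]string{"verb", "resource", "subresource", "code"},
)

func recordExamples() {
	// Subresource calls take different code paths; now they get their own series.
	requestCounter.WithLabelValues("PUT", "pods", "status", "200").Inc()
	requestCounter.WithLabelValues("PUT", "pods", "", "200").Inc()
}
```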
2017-05-25 03:13:59 -07:00
gmarek 2437cf4d59 fix typo in start-kubemark 2017-05-25 11:48:01 +02:00
gmarek 02951f182e Correctly handle nil resource usage in performance e2e tests 2017-05-25 11:44:03 +02:00
gmarek ded8e03fc3 Reduce service creation/deletion parallelism in the load test 2017-05-25 11:44:03 +02:00
Michail Kargakis 4a2c5eae92
Implement hash collision avoidance mechanism
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:45 +02:00
Kubernetes Submit Queue d84f3f4b7e Merge pull request #46363 from MrHohn/fix-CheckPodsCondition
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Fix CheckPodsCondition to print out the correct podName

From a couple CIs (https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/1114, https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gci-qa-serial-master/2246, https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-pre-release/2187), all indicate we print out the wrong pod name in CheckPodsCondition for _"Pod XXX failed to be running and ready, or succeeded."_:
```
I0524 02:09:50.173] May 24 02:09:50.173: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m55.033881993s elapsed)
I0524 02:09:52.178] May 24 02:09:52.178: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m57.03848264s elapsed)
I0524 02:09:54.183] May 24 02:09:54.182: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m59.043463323s elapsed)
I0524 02:09:56.183] May 24 02:09:56.183: INFO: Pod fluentd-gcp-v2.0-6wf67 failed to be running and ready, or succeeded.
I0524 02:09:56.184] May 24 02:09:56.183: INFO: Wanted all 23 pods to be running and ready, or succeeded. Result: false. Pods: [heapster-v1.3.0-3806988011-kzkg6 kube-proxy-bootstrap-e2e-minion-group-bbwn rescheduler-v0.3.0-bootstrap-e2e-master monitoring-influxdb-grafana-v4-1q59k l7-default-backend-1044750973-zgxsc etcd-server-events-bootstrap-e2e-master kube-apiserver-bootstrap-e2e-master kube-proxy-bootstrap-e2e-minion-group-6nqb kube-proxy-bootstrap-e2e-minion-group-mzbz fluentd-gcp-v2.0-chd2x kube-dns-806549836-f8p46 fluentd-gcp-v2.0-44x97 kube-dns-autoscaler-2528518105-vlg8t fluentd-gcp-v2.0-p1h4b kube-controller-manager-bootstrap-e2e-master l7-lb-controller-v0.9.3-bootstrap-e2e-master kubernetes-dashboard-2917854236-tn3nx kube-dns-806549836-fq2fp kube-scheduler-bootstrap-e2e-master etcd-empty-dir-cleanup-bootstrap-e2e-master kube-addon-manager-bootstrap-e2e-master etcd-server-bootstrap-e2e-master fluentd-gcp-v2.0-6wf67]
I0524 02:09:56.184] May 24 02:09:56.183: INFO: At least one pod wasn't running and ready or succeeded at test start.
I0524 02:09:56.184] [AfterEach] [k8s.io] Restart [Disruptive]
```

Checked the code and found that we always print the last pod name, which is random. Pass the pod name through the channel to fix this.

**Release note**:

```release-note
NONE
```
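A Go sketch of the shape of the fix (names are hypothetical): instead of reporting failures against whatever the loop variable happens to hold when a result arrives, each pod's name travels with its result through the channel:

```go
package sketch

import "fmt"

// podResult pairs a result with the pod it belongs to, so the reporting
// loop cannot attribute a failure to the wrong (e.g. last-iterated) pod.
type podResult struct {
	name string
	ok   bool
}

func reportPodConditions(podNames []string, results <-chan podResult) {
	for range podNames {
		if r := <-results; !r.ok {
			fmt.Printf("Pod %s failed to be running and ready, or succeeded.\n", r.name)
		}
	}
}
```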
2017-05-25 00:11:05 -07:00
Kubernetes Submit Queue fe5b303365 Merge pull request #45913 from enj/enj/t/etcd_cohabitating_resources
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Detect cohabitating resources in etcd storage test

**What this PR does / why we need it**:

This change updates the etcd storage path test to detect cohabitating resources by looking at their expected location in etcd.  This was not detected in the past because the GVK check did not span across groups.

To limit noise from failures caused by multiple objects at the same location in etcd, the test now fails when different GVRs share the same expected path.  Thus every object is expected to have a unique path.

@liggitt PTAL

Signed-off-by: Monis Khan <mkhan@redhat.com>

**Release note**:

```
NONE
```
2017-05-25 00:10:59 -07:00
Kubernetes Submit Queue ed8843406e Merge pull request #46303 from Random-Liu/fix-cos-image-project
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Fix cos image project to cos-cloud.

Addressed https://github.com/kubernetes/kubernetes/pull/45136#discussion_r118092211.

@vishh @yujuhong @dchen1107
2017-05-24 23:19:09 -07:00
Kubernetes Submit Queue 8d88c55231 Merge pull request #46311 from dashpole/disable_ubuntu_gpu_test
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Don't attach a GPU to Ubuntu test machines for node e2e serial tests

This should fix flakes in the e2e_node serial suite.

@vishh I think this is what you were asking for...

/assign @vishh
2017-05-24 23:19:07 -07:00
Kubernetes Submit Queue b71ca6691b Merge pull request #46309 from Random-Liu/move-docker-validation-to-separate-project
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Move docker validation test to separate project.

Docker validation test is leaking VMs because the new docker version `DOCKER_VERSION=17.05.0-c` completely breaks the new GCI image `GCE_IMAGES=gci-test-60-9579-0-0` when the `gci-docker-version` metadata is specified.

The test successfully created the instance, but timed out when checking VM aliveness, and leaked the VM.

I've cleaned up all leaked VMs. This PR moves the docker validation node e2e test into a separate project so it does not influence other node e2e tests.

@kewu1992 We should fix the docker automated validation test.

/cc @dchen1107 @yujuhong @abgworrall
2017-05-24 23:19:05 -07:00
System Administrator 9c8e92b8ff e2e tests for storage policy support in Kubernetes 2017-05-24 16:39:00 -07:00
David Ashpole 1a6572fc6c summary test now tests a pod that has containers that have restarted 2017-05-24 13:27:57 -07:00
Clayton Coleman ad431c454c
Subresources are not included in apiserver prometheus metrics
Subresources are very often completely different code paths and errors
generated on those code paths are important to distinguish.
2017-05-24 16:23:50 -04:00
Nick Sardo e7ee3913d7 Add subnetworkUrl param to e2e 2017-05-24 10:54:51 -07:00
Zihong Zheng 03d08623e8 Fix CheckPodsCondition to print out the correct podName 2017-05-24 10:20:57 -07:00
Kubernetes Submit Queue d4ff0f2a0e Merge pull request #46312 from dashpole/remove_memcg_jenkins_properties
Automatic merge from submit-queue (batch tested with PRs 42042, 46139, 46126, 46258, 46312)

Remove unused test properties

Issue:  #42676
A separate serial memcg suite was created for the initial stages of re-enabling memcg notifications.  Now that all e2e tests have memcg notifications enabled, this suite is no longer needed.
2017-05-23 19:43:07 -07:00
Kubernetes Submit Queue dae6955555 Merge pull request #46293 from nicksardo/chaosmonkey-defer-stop
Automatic merge from submit-queue (batch tested with PRs 46149, 45897, 46293, 46296, 46194)

Chaosmonkey - Signal stop to tests and wait for done when disruption fails

**What this PR does / why we need it**:
Prevents tests from leaking resources when disruption fails by ensuring their Teardown is still called.

**Which issue this PR fixes**
First problem of #45842 

**Release note**:
```release-note
NONE
```
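A Go sketch of the defer-based cleanup this describes; type and method names are illustrative, not the test framework's actual API:

```go
package sketch

import "sync"

// chaosmonkey signals tests to stop and waits for their teardown even if
// the disruption fails or panics, so resources are not leaked.
type chaosmonkey struct {
	stop chan struct{}
	wg   sync.WaitGroup
}

func (cm *chaosmonkey) Do(disrupt func()) {
	defer func() {
		close(cm.stop) // signal every registered test to stop
		cm.wg.Wait()   // wait for each test's Teardown to run
	}()
	disrupt()
}
```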
2017-05-23 15:48:59 -07:00
Kubernetes Submit Queue 45b275d52c Merge pull request #45897 from ncdc/gc-require-list-watch
Automatic merge from submit-queue (batch tested with PRs 46149, 45897, 46293, 46296, 46194)

GC: update required verbs for deletable resources, allow list of ignored resources to be customized

The garbage collector controller currently needs to list, watch, get,
patch, update, and delete resources. Update the criteria for
deletable resources to reflect this.

Also allow the list of resources the garbage collector controller should
ignore to be customizable, so downstream integrators can add their own
resources to the list, if necessary.

cc @caesarxuchao @deads2k @smarterclayton @mfojtik @liggitt @sttts @kubernetes/sig-api-machinery-pr-reviews
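The updated criteria can be expressed with client-go's discovery filters; a sketch under the assumption that the GC filters the discovered resource lists directly:

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/discovery"
)

// deletableResources keeps only resources that support every verb the
// garbage collector actually uses.
func deletableResources(all []*metav1.APIResourceList) []*metav1.APIResourceList {
	return discovery.FilteredBy(
		discovery.SupportsAllVerbs{
			Verbs: []string{"get", "list", "watch", "patch", "update", "delete"},
		},
		all,
	)
}
```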
2017-05-23 15:48:57 -07:00
Random-Liu 82f588b483 Fix cos image project to cos-cloud. 2017-05-23 15:12:03 -07:00
David Ashpole 8341d544f3 remove unused test properties 2017-05-23 14:39:18 -07:00
David Ashpole 20eb016597 dont attach a GPU to ubuntu machines 2017-05-23 14:34:18 -07:00
Random-Liu dc023144a3 Move docker validation test to separate project. 2017-05-23 14:07:15 -07:00
Kubernetes Submit Queue 1e2105808b Merge pull request #45136 from vishh/cos-nvidia-driver-install
Automatic merge from submit-queue

Enable "kick the tires" support for Nvidia GPUs in COS

This PR provides an installation daemonset that installs Nvidia CUDA drivers on Google Container-Optimized OS (COS).
User space libraries and debug utilities from the Nvidia driver installation are made available in a special directory on the host:
* `/home/kubernetes/bin/nvidia/lib` for libraries
* `/home/kubernetes/bin/nvidia/bin` for debug utilities

Containers that run CUDA applications on COS are expected to consume the libraries and debug utilities (if necessary) from these host directories using `HostPath` volumes (see the sketch after the TODO list below).

Note: This solution requires updating Pod Spec across distros. This is a known issue and will be addressed in the future. Until then CUDA workloads will not be portable.

This PR updates the COS base image version to m59. That update is coupled with this PR for the following reasons:
1. Driver installation requires disabling a kernel feature in COS. 
2. The kernel API for disabling this interface changed across COS versions
3. If the COS image update is not handled in this PR, then a subsequent COS image update will break GPU integration and will require an update to the installation scripts in this PR.
4. Instead of having to post `3` PRs, one each for adding the basic installer, updating COS to m59, and then updating the installer again, this PR combines all the changes to reduce review overhead, latency, and the additional noise that would be created when GPU tests break.

**Try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1" cluster/kube-up.sh`
4. `kubectl create -f cluster/gce/gci/nvidia-gpus/cos-installer-daemonset.yaml`
5. Run your CUDA app in a pod.

**Another option is to run a e2e manually to try out this PR**
1. Get Quota for GPUs in any region
2. `export KUBE_GCE_ZONE=<zone-with-gpus> KUBE_NODE_OS_DISTRIBUTION=gci`
3. `NODE_ACCELERATORS="type=nvidia-tesla-k80,count=1"`
4. `go run hack/e2e.go -- --up` 
5. `hack/ginkgo-e2e.sh --ginkgo.focus="\[Feature:GPU\]"`
The e2e will install the drivers automatically using the daemonset and then run test workloads to validate driver integration.

TODO:
- [x] Update COS image version to m59 release.
- [x] Remove sleep from the install script and add it to the daemonset
- [x] Add an e2e that will run the daemonset and run a sample CUDA app on COS clusters.
- [x] Setup a test project with necessary quota to run GPU tests against HEAD to start with https://github.com/kubernetes/test-infra/pull/2759
- [x] Update node e2e serial configs to install nvidia drivers on COS by default
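As referenced above, a Go sketch of how a CUDA pod consumes the host-installed artifacts via `HostPath` volumes; the image name and in-container mount path are hypothetical:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// cudaPodSpec mounts the host directory populated by the installer
// daemonset into the container and points the loader at it.
func cudaPodSpec() v1.PodSpec {
	return v1.PodSpec{
		Containers: []v1.Container{{
			Name:  "cuda-app",
			Image: "example.com/cuda-app:latest", // hypothetical image
			Env: []v1.EnvVar{{
				Name:  "LD_LIBRARY_PATH",
				Value: "/usr/local/nvidia/lib", // hypothetical mount path
			}},
			VolumeMounts: []v1.VolumeMount{{
				Name:      "nvidia-libs",
				MountPath: "/usr/local/nvidia/lib",
			}},
		}},
		Volumes: []v1.Volume{{
			Name: "nvidia-libs",
			VolumeSource: v1.VolumeSource{
				HostPath: &v1.HostPathVolumeSource{Path: "/home/kubernetes/bin/nvidia/lib"},
			},
		}},
	}
}
```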
2017-05-23 10:46:10 -07:00
Kubernetes Submit Queue 1602e2a338 Merge pull request #45587 from foxish/pdb-maxunavailab
Automatic merge from submit-queue (batch tested with PRs 45587, 46286)

PDB Max Unavailable Field

Completes https://github.com/kubernetes/features/issues/285

```release-note
Adds a MaxUnavailable field to PodDisruptionBudget
```


Individual commits are self-contained; the last commit can be ignored because it is autogenerated code.
cc @kubernetes/sig-apps-api-reviews @kubernetes/sig-apps-pr-reviews
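A minimal Go sketch of the new field against the policy/v1beta1 types; the object name and selector are made up: at most one matching pod may be unavailable due to voluntary disruptions.

```go
package sketch

import (
	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// examplePDB tolerates at most one unavailable pod among those matching
// the selector during voluntary disruptions (evictions, drains).
func examplePDB() *policyv1beta1.PodDisruptionBudget {
	maxUnavailable := intstr.FromInt(1)
	return &policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "web-pdb"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			MaxUnavailable: &maxUnavailable,
			Selector:       &metav1.LabelSelector{MatchLabels: map[string]string{"app": "web"}},
		},
	}
}
```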
2017-05-23 10:29:56 -07:00
Nick Sardo f40f45abc1 Defer test stop & cleanup 2017-05-23 10:11:46 -07:00
Andy Goldstein d1a0384678 GC: allow ignored resources to be customized
Allow the list of resources the garbage collector controller should
ignore to be customizable, so downstream integrators can add their own
resources to the list, if necessary.
2017-05-23 12:05:09 -04:00
Kubernetes Submit Queue 8e07e61a43 Merge pull request #46223 from smarterclayton/scheduler_max
Automatic merge from submit-queue (batch tested with PRs 45766, 46223)

Scheduler should use a shared informer, and fix broken watch behavior for cached watches

Can be used either from a true shared informer or a local shared
informer created just for the scheduler.

Fixes a bug in the cache watcher where we were returning the "current" object from a watch event rather than the historic event. This means we broke behavior when introducing the watch cache. It may have API implications for filtering watch consumers; on the other hand, it prevents filtering clients from incorrectly seeing objects outside of their watch, which can lead to other subtle bugs.

```release-note
The behavior of some watch calls to the server when filtering on fields was incorrect.  If watching objects with a filter, when an update was made that no longer matched the filter a DELETE event was correctly sent.  However, the object that was returned by that delete was not the (correct) version before the update, but instead, the newer version.  That meant the new object was not matched by the filter.  This was a regression from behavior between cached watches on the server side and uncached watches, and thus broke downstream API clients.
```
2017-05-23 07:42:00 -07:00
Anirudh 63e51dc66e PDB MaxUnavailable: e2e tests 2017-05-23 07:18:44 -07:00
Kubernetes Submit Queue cc6e51c6e8 Merge pull request #45427 from ncdc/gc-shared-informers
Automatic merge from submit-queue (batch tested with PRs 46201, 45952, 45427, 46247, 46062)

Use shared informers in gc controller if possible

Modify the garbage collector controller to try to use shared informers for resources, if possible, to reduce the number of unique reflectors listing and watching the same thing.

cc @kubernetes/sig-api-machinery-pr-reviews @caesarxuchao @deads2k @liggitt @sttts @smarterclayton @timothysc @soltysh @kargakis @kubernetes/rh-cluster-infra @derekwaynecarr @wojtek-t @gmarek
2017-05-22 20:58:03 -07:00
Kubernetes Submit Queue bb56937b92 Merge pull request #46055 from deads2k/crd-01-embed
Automatic merge from submit-queue (batch tested with PRs 46022, 46055, 45308, 46209, 43590)

embed kube-apiextensions inside of kube-apiserver

To reduce operational complexity, we decided to include the kube-apiextensions-server inside of kube-apiserver (https://github.com/kubernetes/community/blob/master/sig-api-machinery/api-extensions-position-statement.md#q-should-kube-aggregator-be-a-separate-binaryprocess-than-kube-apiserver). With the API reasonably well established and a finalizer about to merge, I think it's time to add ourselves.

This pull wires kube-apiextensions-server ahead of the TPRs so that one will replace the other if both are added by accident (CRDs should have priority) and wires a controller for automatic aggregation.

WIP because I still need tests: a unit test for the controller and a test-cmd test to mirror the TPR test.


```release-note
Adds the `CustomResourceDefinition` (crd) types to the `kube-apiserver`.  These are the successors to `ThirdPartyResource`.  See https://github.com/kubernetes/community/blob/master/contributors/design-proposals/thirdpartyresources.md for more details.
```
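A minimal Go sketch of a CustomResourceDefinition against the apiextensions v1beta1 types of that era; the group and kind are made up for illustration:

```go
package sketch

import (
	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleCRD registers a namespaced "Foo" resource, the CRD successor to
// what a ThirdPartyResource would have declared.
func exampleCRD() *apiextensionsv1beta1.CustomResourceDefinition {
	return &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
			},
		},
	}
}
```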
2017-05-22 19:59:57 -07:00
Kubernetes Submit Queue c2c5051adf Merge pull request #44899 from smarterclayton/burst
Automatic merge from submit-queue (batch tested with PRs 38990, 45781, 46225, 44899, 43663)

Support parallel scaling on StatefulSets

Fixes #41255

```release-note
StatefulSets now include an alpha scaling feature accessible by setting the `spec.podManagementPolicy` field to `Parallel`.  The controller will not wait for pods to be ready before adding the other pods, and will replace deleted pods as needed.  Since parallel scaling creates pods out of order, you cannot depend on predictable membership changes within your set.
```
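A short Go sketch of opting in, against the apps/v1beta1 types of that era: with `Parallel` pod management the controller creates and deletes pods without waiting for ordinal predecessors to be ready.

```go
package sketch

import appsv1beta1 "k8s.io/api/apps/v1beta1"

// enableParallelScaling opts a StatefulSet into the alpha parallel
// scaling behavior described in the release note.
func enableParallelScaling(sts *appsv1beta1.StatefulSet) {
	sts.Spec.PodManagementPolicy = appsv1beta1.ParallelPodManagement
}
```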
2017-05-22 19:07:09 -07:00
Kubernetes Submit Queue a572f10387 Merge pull request #46205 from billy2180/bump-network-tester-json-image-version-to-1.9
Automatic merge from submit-queue (batch tested with PRs 46133, 46211, 46224, 46205, 45910)

test/images/network-tester:bump rc/pod image version to 1.9

The current image version is 1.9; update the image version in the associated JSON files to 1.9.
```release-note
NONE
```
2017-05-22 15:50:05 -07:00
Kubernetes Submit Queue 03ba1324cf Merge pull request #46224 from gmarek/kubemark_heapster
Automatic merge from submit-queue (batch tested with PRs 46133, 46211, 46224, 46205, 45910)

Make CPU request for heapster in kubemark scale with the number of Nodes
2017-05-22 15:50:03 -07:00
Kubernetes Submit Queue 0329e3fdaf Merge pull request #46211 from gmarek/panic
Automatic merge from submit-queue (batch tested with PRs 46133, 46211, 46224, 46205, 45910)

Add more logs to kubelet_stats

Ref. #46198
2017-05-22 15:50:00 -07:00
Clayton Coleman 8cd95c78c4
Scheduler should use a shared informer
Can be used either from a true shared informer or a local shared
informer created just for the scheduler.
2017-05-22 13:50:14 -04:00