- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we register its flags where necessary
- Update the other vendored repositories that made the same change from
  glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo
* k8s.io/kube-openapi
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by explicitly calling InitFlags() in their init() functions
  (see the sketch below)
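A minimal sketch of the registration pattern (the surrounding flag wiring
is an assumption for illustration, not code from this change):

    package main

    import (
        "flag"

        "k8s.io/klog"
    )

    func init() {
        // Register klog's flags (-v, -logtostderr, ...) on the default
        // flag set; pass a concrete *flag.FlagSet instead of nil to
        // register them elsewhere.
        klog.InitFlags(nil)
    }

    func main() {
        flag.Parse()
        klog.Info("logging via klog")
        klog.Flush()
    }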
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
Loop over priorityConfigs separately. The node loop can only safely
modify result[i][index]. Before this change it sometimes modified
result[i] concurrently from multiple goroutines.
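A self-contained toy showing the shape of the fix (types and sizes here
are made up; only the allocate-serially, write-in-parallel pattern
mirrors the real code):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        const priorities, nodes = 3, 8

        // Allocate every result[i] up front in a serial loop. The
        // parallel node loop below then only writes result[i][index],
        // and each index is owned by exactly one goroutine, so the
        // outer slice headers are never written concurrently.
        result := make([][]int, priorities)
        for i := range result {
            result[i] = make([]int, nodes)
        }

        workqueue.ParallelizeUntil(context.Background(), 4, nodes, func(index int) {
            for i := range result {
                result[i][index] = i * index // safe: per-index write only
            }
        })

        fmt.Println(result)
    }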
Fixes: 7164967662
==================== Test output for //pkg/scheduler/core:go_default_test:
==================
WARNING: DATA RACE
Read at 0x00c0005e8ed0 by goroutine 22:
k8s.io/kubernetes/pkg/scheduler/core.PrioritizeNodes.func2()
pkg/scheduler/core/generic_scheduler.go:667 +0x2ea
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil.func1()
staging/src/k8s.io/client-go/util/workqueue/parallelizer.go:65 +0x9e
Previous write at 0x00c0005e8ed0 by goroutine 21:
k8s.io/kubernetes/pkg/scheduler/core.PrioritizeNodes.func2()
pkg/scheduler/core/generic_scheduler.go:668 +0x450
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil.func1()
staging/src/k8s.io/client-go/util/workqueue/parallelizer.go:65 +0x9e
Goroutine 22 (running) created at:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil()
staging/src/k8s.io/client-go/util/workqueue/parallelizer.go:57 +0x1a3
k8s.io/kubernetes/pkg/scheduler/core.PrioritizeNodes()
pkg/scheduler/core/generic_scheduler.go:682 +0x592
k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).Schedule()
pkg/scheduler/core/generic_scheduler.go:186 +0x77d
k8s.io/kubernetes/pkg/scheduler/core.TestGenericScheduler.func1()
pkg/scheduler/core/generic_scheduler_test.go:464 +0x91f
testing.tRunner()
GOROOT/src/testing/testing.go:827 +0x162
Goroutine 21 (running) created at:
k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.ParallelizeUntil()
staging/src/k8s.io/client-go/util/workqueue/parallelizer.go:57 +0x1a3
k8s.io/kubernetes/pkg/scheduler/core.PrioritizeNodes()
pkg/scheduler/core/generic_scheduler.go:682 +0x592
k8s.io/kubernetes/pkg/scheduler/core.(*genericScheduler).Schedule()
pkg/scheduler/core/generic_scheduler.go:186 +0x77d
k8s.io/kubernetes/pkg/scheduler/core.TestGenericScheduler.func1()
pkg/scheduler/core/generic_scheduler_test.go:464 +0x91f
testing.tRunner()
GOROOT/src/testing/testing.go:827 +0x162
==================
--- FAIL: TestGenericScheduler (0.01s)
--- FAIL: TestGenericScheduler/test_6 (0.00s)
testing.go:771: race detected during execution of test
testing.go:771: race detected during execution of test
FAIL
This previously caused a panic when moving lastKnownGood between two
non-nil values, because we were comparing the interface wrappers
instead of the underlying NodeConfigSources. The case of moving from one non-nil
lastKnownGood config to another doesn't appear to be tested by the e2e
node tests. I added a unit test and an e2e node test to help catch bugs
with this case in the future.
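For context, a self-contained toy of the Go pitfall involved (the type
below is a hypothetical stand-in, not the kubelet's actual checkpoint
type; a slice field is enough to make it uncomparable):

    package main

    import (
        "fmt"
        "reflect"
    )

    type configSource struct {
        Keys []string // slice field: the struct is not comparable
    }

    func main() {
        var a, b interface{} = configSource{[]string{"kubelet"}},
            configSource{[]string{"kubelet"}}

        // == on interface values panics at runtime when the dynamic
        // type is uncomparable.
        func() {
            defer func() { fmt.Println("recovered:", recover()) }()
            fmt.Println(a == b)
        }()

        // Comparing the underlying values structurally is safe:
        fmt.Println(reflect.DeepEqual(a, b)) // true
    }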
Up until now, UnifiedControlPlaneImage existed as a string value in the
ClusterConfiguration. It allowed overriding the images of the Kubernetes
core components with a single custom image, and was mostly used to
replace the control plane images with the hyperkube image, saving both
bandwidth and disk space on the control plane nodes.
Unfortunately, the option specified an entire image string (complete
with its registry prefix, image name and tag), so the pinned tag could
not follow the target Kubernetes version, breaking upgrades of setups
that use hyperkube.
Therefore, to enable upgrades of hyperkube setups and to make the
configuration more convenient, the UnifiedControlPlaneImage option is
replaced with a boolean option called UseHyperKubeImage. When set to
true, it replaces the image name of every Kubernetes core component
with hyperkube, allowing upgrades while respecting the image repository
and version specified in the ClusterConfiguration.
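A minimal sketch of the resulting image-name logic (the helper and its
parameters are illustrative, not kubeadm's actual functions):

    package main

    import "fmt"

    // controlPlaneImage swaps only the component name for "hyperkube",
    // so the repository and tag from ClusterConfiguration still apply
    // and upgrades, which bump the tag, keep working.
    func controlPlaneImage(component, repo, tag string, useHyperKube bool) string {
        if useHyperKube {
            component = "hyperkube"
        }
        return fmt.Sprintf("%s/%s:%s", repo, component, tag)
    }

    func main() {
        fmt.Println(controlPlaneImage("kube-apiserver", "k8s.gcr.io", "v1.13.0", true))
        // Output: k8s.gcr.io/hyperkube:v1.13.0
    }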
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>