When we get an "unsupported provider" message, it often isn't clear which
method actually failed, so add more information to the error message.
Issue #70280
Setting command-line arguments via env variables that the binaries do not
need is just unnecessary complexity. The driver-renaming
code in the E2E manifest PR would have to be made more complex to deal
with such a deployment. It is easier for that code, and for humans reading
the .yaml file, to remove the indirection.
I increased the request from 500 mCPU to 1 CPU in
https://github.com/kubernetes/kubernetes/pull/70125, but this interacts
poorly with other e2e tests (higher requests mean pods fail to schedule),
so revert that change.
Force ELB to ensureHealthCheck when the target pool exists.
Add an e2e test to ensure that the HC interval is reconciled when
kube-controller-manager restarts.
Health checks with bigger thresholds and larger intervals will not be reconciled.
Add unit tests for ILB and ELB to ensure the HC reconciles and is configurable.
With CRI-O we've been hitting a lot of flakes with the following test:
[sig-apps] CronJob should remove from active list jobs that have been deleted
The events shown in the test failures in both kube and openshift were the following:
STEP: Found 13 events.
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid: {cronjob-controller } SuccessfulCreate: Created job forbid-1540412040
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid-1540412040: {job-controller } SuccessfulCreate: Created pod: forbid-1540412040-z7n7t
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:02 +0000 UTC - event for forbid-1540412040-z7n7t: {default-scheduler } Scheduled: Successfully assigned e2e-tests-cronjob-rjr2m/forbid-1540412040-z7n7t to 127.0.0.1
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Created: Created container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:03 +0000 UTC - event for forbid-1540412040-z7n7t: {kubelet 127.0.0.1} Started: Started container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:14:12 +0000 UTC - event for forbid: {cronjob-controller } MissingJob: Active job went missing: forbid-1540412040
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid: {cronjob-controller } SuccessfulCreate: Created job forbid-1540412100
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid-1540412100: {job-controller } SuccessfulCreate: Created pod: forbid-1540412100-rq89l
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:02 +0000 UTC - event for forbid-1540412100-rq89l: {default-scheduler } Scheduled: Successfully assigned e2e-tests-cronjob-rjr2m/forbid-1540412100-rq89l to 127.0.0.1
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Started: Started container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Created: Created container
Oct 24 20:20:05.541: INFO: At 2018-10-24 20:15:06 +0000 UTC - event for forbid-1540412100-rq89l: {kubelet 127.0.0.1} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
The code in the test is racy because the Forbid policy can still let the controller create
a new pod for the cronjob. CRI-O is fast at re-creating the pod, and by the time
the test code reaches the check, it fails. The events are as follows:
[It] should remove from active list jobs that have been deleted
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:192
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Deleting the job
STEP: deleting Job.batch forbid-1540412040 in namespace e2e-tests-cronjob-rjr2m, will wait for the garbage collector to delete the pods
Oct 24 20:14:02.533: INFO: Deleting Job.batch forbid-1540412040 took: 2.699182ms
Oct 24 20:14:02.634: INFO: Terminating Job.batch forbid-1540412040 pods took: 100.223228ms
STEP: Ensuring job was deleted
STEP: Ensuring there are no active jobs in the cronjob
[AfterEach] [sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
It seems clear that by the time we ensure that there are no more active jobs, a
new job may _already_ have been spun up, making the test flaky.
This PR fixes the above by making sure that the _deleted_ job is no longer in the Active
list; another pod may already be running with a different UID, which is
fine for the purpose of the test.
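A minimal sketch of the kind of check this adds (helper name is hypothetical, not the test's actual code): the deleted job is looked up in `.status.active` by name, so a freshly created job with a different name/UID does not fail the test.
```go
package cronjobsketch

import batchv1beta1 "k8s.io/api/batch/v1beta1"

// deletedJobStillActive reports whether the *deleted* job is still referenced
// in the CronJob's active list; newer jobs in the list are ignored.
func deletedJobStillActive(cj *batchv1beta1.CronJob, deletedJobName string) bool {
	for _, ref := range cj.Status.Active {
		if ref.Name == deletedJobName {
			return true
		}
	}
	return false
}
```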
Signed-off-by: Antonio Murdaca <runcom@linux.com>
The E2E refactoring tightened the sanity checking of the --provider
parameter so that it only allowed known providers. That seemed to
make sense because it catches typos, but it turned out that various
callers depended on the "accept arbitrary provider value" behavior,
so that behavior is restored.
The provisioning test for the retain policy requires each driver's backend volume
deletion logic. Without it, volumes leak. Move this test back
to volume_provisioning.go and run it only for gce, until generic
backend volume deletion code becomes available for each driver.
Fixes: #70191
The resource consumer might use slightly more CPU than requested. That
resulted in HPA sometimes increasing the size of deployments during e2e
tests. Deflake the tests by:
- Scaling up CPU requests in those tests. The resource consumer might go a fixed
number of milli-CPU-seconds above the target, so higher requests make
the test less sensitive.
- On scale down, consuming CPU at the midpoint between what would generate a
recommendation of the expected size and one pod fewer (instead of right on the
edge between expected and expected+1); see the illustrative arithmetic below.
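For illustration only (the numbers here are hypothetical, not taken from the tests): the HPA recommendation is roughly ceil(total usage / per-pod target), so the usage band that yields a given replica count has some width, and consuming the midpoint of that band keeps a small overshoot by the resource consumer from tipping the recommendation upward.
```go
package hpasketch

import "math"

// hpaRecommendation approximates ceil(pods * avgUsage/target), which reduces
// to ceil(totalUsageMilli / targetPerPodMilli).
func hpaRecommendation(totalUsageMilli, targetPerPodMilli float64) int {
	return int(math.Ceil(totalUsageMilli / targetPerPodMilli))
}

// Example: with a 500 mCPU per-pod target, any total usage in (1000, 1500]
// recommends 3 pods. Consuming the midpoint (1250 mCPU) instead of ~1490 mCPU
// leaves headroom for the consumer to overshoot without recommending 4 pods.
```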
Some variables were int32 but always cast to int before use. Make them
int.
The service account authenticator isn't the only authenticator that
should respect API audience. The authentication config structure should
reflect that.
If dig fails to reach the DNS server, it outputs errors to stdout, which
can make the `test -n` check falsely pass:
ubuntu@ubuntu:~$ dig +notcp +noall +answer +search kubernetes.default A
;; connection timed out; no servers could be reached
command terminated with exit code 9
ubuntu@ubuntu:~$ test -n "$(dig +notcp +noall +answer +search kubernetes.default A)" && echo OK
OK
This patch solves this issue by making sure the dig command actually succeeds
before checking its output.
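A minimal sketch of the idea (illustrative Go, not the probe script itself): check dig's exit status first and only then test for non-empty output, so error text printed on failure cannot satisfy the non-empty check.
```go
package digsketch

import (
	"os/exec"
	"strings"
)

// lookupOK returns true only if dig exits successfully *and* prints an answer.
func lookupOK(name string) bool {
	out, err := exec.Command("dig", "+notcp", "+noall", "+answer", "+search", name, "A").Output()
	if err != nil {
		// dig failed (e.g. "no servers could be reached"); ignore its output.
		return false
	}
	return strings.TrimSpace(string(out)) != ""
}
```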
The test "should run with the expected status" passes with and without
the set SELinuxOptions, but removing it will ensure that the test will
be able to run and pass on Windows nodes as well.
This change updates the ETCD storage test so that its data is
exported. Thus it can be used by other tests. The dry run test was
updated to consume this data instead of having a duplicate copy.
The code to start a master that can be used for "one of every
resource" style tests was also factored out. It is reused in the
dry run test as well.
This prevents these tests from drifting in the future and reduces
the long term maintenance burden.
Signed-off-by: Monis Khan <mkhan@redhat.com>
In some storage tests, kubelet is stopped first and the test then checks that the node
reaches the NotReady state. However, if the node fails to reach this state, kubelet cannot
be restarted, because the defer function is registered after the stop-kubelet
command and the check. This PR fixes that issue.
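A minimal sketch of the kind of fix (helper names are hypothetical stand-ins for the framework's SSH-based commands): register the defer that restarts kubelet immediately after stopping it, before any check that can fail the test.
```go
package kubeletsketch

import (
	"fmt"
	"time"
)

// Hypothetical helpers standing in for the real framework commands.
func stopKubelet() error                              { return nil }
func startKubelet()                                   {}
func waitForNodeNotReady(timeout time.Duration) bool  { return true }

func runTest() error {
	if err := stopKubelet(); err != nil {
		return err
	}
	// Register the restart *before* the NotReady check, so kubelet is
	// restarted even if the check below fails.
	defer startKubelet()

	if !waitForNodeNotReady(2 * time.Minute) {
		return fmt.Errorf("node did not become NotReady")
	}
	// ... remainder of the test ...
	return nil
}
```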
Make CreatePrivilegedPSPBinding reentrant so tests using it (e.g. DNS) can be
executed more than once against a cluster. Without this change, such tests will
fail because the PSP already exists, short circuiting test setup.
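A minimal sketch of what "reentrant" means here (not the helper's actual code): a create that fails only because the object already exists is treated as success.
```go
package pspsketch

import apierrors "k8s.io/apimachinery/pkg/api/errors"

// createIfNotExists runs create() and ignores "already exists" errors so the
// setup helper can safely be called more than once against the same cluster.
func createIfNotExists(create func() error) error {
	if err := create(); err != nil && !apierrors.IsAlreadyExists(err) {
		return err
	}
	return nil
}
```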
Not all users of the E2E framework want to run cloud-provider-specific
tests. By splitting out the code it becomes possible to decide in
an E2E test suite which providers are supported.
This is achieved in two ways:
- the framework calls certain functions through a provider
interface instead of calling specific cloud provider functions
directly
- tests that are cloud-provider specific directly import the
new provider packages
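A rough sketch of the shape of this split (the interface, method names, and registry here are illustrative, not the framework's actual API): the framework only knows an interface, and cloud-specific packages register an implementation for their --provider name.
```go
package providersketch

import "fmt"

// ProviderInterface is what the framework calls instead of cloud-specific code.
type ProviderInterface interface {
	CreatePD(zone string) (string, error)
	DeletePD(name string) error
}

// factories maps a --provider value to a constructor; cloud-specific test
// packages add themselves here from an init() function.
var factories = map[string]func() (ProviderInterface, error){}

func RegisterProvider(name string, factory func() (ProviderInterface, error)) {
	factories[name] = factory
}

func SetupProvider(name string) (ProviderInterface, error) {
	if factory, ok := factories[name]; ok {
		return factory()
	}
	return nil, fmt.Errorf("no provider registered for %q", name)
}
```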
The ingress test utilities are only needed by a few tests. Splitting
them out into a separate package makes the framework simpler for test
suites not using those tests.
Fixes: #66649
When an e2e test fails because the PV/PVC protection
finalizer is missing, the error message is just:
Expected
<bool>: false
to be true
which does not tell us what actually happened during the test.
This adds more debugging info to the tests.
Some commands used in tests are Linux specific and do not exist
or do not behave the same on Windows nodes. This can cause those
tests to fail on Windows nodes.
Replaces the mentioned commands with ones that behave the same on
both Linux and Windows.
The test "Should be able to deny attaching pod" can randomly fail
because it never waits for the pod to enter a "Running" state. Because
of this, the "kubectl attach" command can potentially fail, if the pod is
not in the correct state.
Additionally, if the "kubectl attach" actually manages to attach, then the
test will hang. The command should be executed with a timeout.
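A minimal sketch of running the attach with a timeout (illustrative, not the test's helper): exec.CommandContext kills the process when the context deadline expires, so a successful attach cannot hang the test.
```go
package attachsketch

import (
	"context"
	"os/exec"
	"time"
)

// attachWithTimeout runs "kubectl attach" but never longer than timeout.
func attachWithTimeout(namespace, pod string, timeout time.Duration) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	cmd := exec.CommandContext(ctx, "kubectl", "attach", "-n", namespace, pod, "-i")
	return cmd.CombinedOutput()
}
```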
After the call to ioutil.TempDir, the directory has already been
created, and MkdirAll therefore can't do anything. The mode argument
in particular is misleading.
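For illustration, what the cleaned-up code boils down to: ioutil.TempDir both creates and returns the directory, so a following MkdirAll is a no-op and its mode argument never takes effect on the existing directory.
```go
package tmpdirsketch

import (
	"io/ioutil"
	"log"
)

func example() string {
	// TempDir both creates the directory (mode 0700) and returns its path.
	dir, err := ioutil.TempDir("", "e2e")
	if err != nil {
		log.Fatal(err)
	}
	// A subsequent os.MkdirAll(dir, 0755) is a no-op: the directory already
	// exists, and MkdirAll never changes the mode of an existing directory.
	return dir
}
```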
Test settings should be defined in the test source code itself
because conceptually the framework is a separate entity that not all
test authors can modify.
For the sake of backwards compatibility the name of the command line
flags are not changed.
Test settings should be defined in the test source code itself
because conceptually the framework is a separate entity that not all
test authors can modify.
Using the new framework/config code also has several advantages:
- defaults can be set with less code
- no confusion around what's a duration
- the options can also be set via command line flags
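Illustrative only (plain flag package, not the actual framework/config API): the idea is that a test declares a struct with its defaults and registers its own flags, including proper time.Duration values, instead of adding fields to the framework's TestContext. The flag names below are hypothetical examples following the "instrumentation.logging" group.
```go
package soaksketch

import (
	"flag"
	"time"
)

type loggingSoakConfig struct {
	Scale            int
	TimeBetweenWaves time.Duration
}

// Defaults live next to the test that uses them.
var soak = loggingSoakConfig{Scale: 1, TimeBetweenWaves: 5 * time.Second}

func init() {
	flag.IntVar(&soak.Scale, "instrumentation.logging.soak.scale", soak.Scale,
		"number of waves of pods started by the logging soak test")
	flag.DurationVar(&soak.TimeBetweenWaves, "instrumentation.logging.soak.time-between-waves",
		soak.TimeBetweenWaves, "duration to wait between pod waves")
}
```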
While at it, a minor bug gets fixed:
- readConfig() returns only defaults when called while
registering Ginkgo tests because Viperize() gets called later,
so the scale in the logging soak test couldn't really be configured;
now the value is read when the test runs and thus can be changed
The options get moved into the "instrumentation.logging"
resp. "instrumentation.monitoring" group to make it more obvious where
they are used. This is a breaking change, but that was already
necessary to improve the duration setting from plain integer to a
proper time duration.
Tests shouldn't have to use the central context for their settings,
because conceptually tests and framework get developed independently.
This does not yet use the new framework/config utility code because
that code still needs to be reviewed.
Besides moving the flags, they also get renamed from the top-level
"--csiImage{Version|Registry}" to
"--storage.csi.image.{version|registry}". These flags were introduced
fairly recently and shouldn't be in use much, so now is a good time to
introduce a hierarchical naming for storage flags, in particular
because more flags will be added soon.
Storing settings in the framework's TestContext is not something that
out-of-tree test authors can do because for them the framework is a
read-only upstream component. Conceptually the same is true for
in-tree tests, so the recommended approach is to define configuration
settings in the code that uses them.
How to do that is a bit uncertain. Viper has several
drawbacks (maintenance status uncertain, cannot list supported
options, cannot validate the configuration file). How to handle
configuration files is currently getting discussed for kubeadm, with
similar concerns about
Viper (https://github.com/kubernetes/kubeadm/issues/1040).
Instead of making a choice now for E2E, the recommendation is that
test authors continue to define command line flags as before, except
that they should do it in their own code and with better flag names.
But the ability to read options also from a file is useful, so
several enhancements get added:
- all settings defined via flags can also be read from a
configuration file, without extra work for test authors
- framework/config makes it possible to populate a struct directly
and define flags with a single function call
- a path and file suffix can be given to --viper-config (as in
"--viper-config /tmp/e2e.json") instead of expecting the file in
the current directory; as before, just plain "--viper-config e2e"
still works
- if "--viper-config" is set, the file must exist; otherwise the
"e2e" config is optional (as before)
- errors from Viper are no longer silently ignored, so syntax errors
are detected early
- Viper support is optional: test suite authors who don't want
it are not forced to use it by the e2e/framework
Individual implementations are not yet being moved.
Fixed all dependencies which call the interface.
Fixed golint exceptions to reflect the move.
Added project info as per @dims and
https://github.com/kubernetes/kubernetes-template-project.
Added dims to the security contacts.
Fixed minor issues.
Added missing template files.
Copied ControllerClientBuilder interface to cp.
This allows us to break the only dependency on K8s/K8s.
Added TODO to ControllerClientBuilder.
Fixed GoDeps.
Factored in feedback from JustinSB.
Some e2e tests are skipped by depending on Linux distribution of
master and node, and the options can be one of "debian", "ubuntu",
"gci" or "custom". This updates the help message of the options.
The words "error" and "fail" are magic in test output, and are
highlighted in the build logs. Change some test strings to avoid
hitting the highlighting in normal operation.
The new test/e2e/framework/testfiles package makes it possible to
write tests that do not depend on a specific way of providing
additional test files at runtime. Such tests and the framework are
then more easily reused in other test suites.
In the test/e2e suite file access is enabled based on the existing
"repo-root" command line parameter and the built-in bindata. Tests
using the new API will first check for files under "repo-root" and
then fall back to the builtin data. This way, users of a test binary
can modify those files without having to rebuild the binary.
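A minimal sketch of the lookup order described above (function names hypothetical, not the testfiles API): look on disk under "repo-root" first, then fall back to the data compiled into the binary.
```go
package testfilesketch

import (
	"io/ioutil"
	"path/filepath"
)

// readTestFile prefers a file under repoRoot so users can edit test data
// without rebuilding; bindata provides the built-in copy as a fallback.
func readTestFile(repoRoot, relPath string, bindata func(string) ([]byte, error)) ([]byte, error) {
	if data, err := ioutil.ReadFile(filepath.Join(repoRoot, relPath)); err == nil {
		return data, nil
	}
	return bindata(relPath)
}
```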
"repo-root" is still needed because at least some tests check for
additional files (secret.yaml, via ingress_utils.go) that are not part
of the upstream source code and thus may or may not be built into a
test binary.
Tests using bindata or repo-root directly get modified to use the new
API, or removed when they are obsolete: test/e2e/examples.go depended
on files that were removed in
https://github.com/kubernetes/kubernetes/pull/61246 and thus can no
longer be run in Kubernetes. Moving the tests to kubernetes/examples
is tracked in https://github.com/kubernetes/examples/issues/214.
The file removal did not break the automated E2E testing probably
because the tests are under the Feature:Example tag and thus not
enabled during normal CI runs.
Removing also the obsolete tests makes it simpler to rework the
"repo-root" setting because less code uses it.
Related-to: #66649 and #23987
New command is now `kubectl diff` rather than `kubectl alpha diff` since
it's moving out of alpha soon, and will be using dry-run apply to
produce the diff rather than the custom merge logic.
This change makes the schema structs consistently use non-pointer
receivers. This makes it easier to call these methods since these
structs are used as values instead of pointers.
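A tiny illustration of the difference (the type here is made up, not one of the schema structs): with a value receiver the method can be called directly on a value such as a composite literal or a map element, which is not possible with a pointer receiver.
```go
package receiversketch

import "fmt"

type GroupResource struct{ Group, Resource string }

// Value receiver: callable on any GroupResource value, addressable or not.
func (g GroupResource) String() string { return g.Resource + "." + g.Group }

func Example() {
	// With a pointer receiver this call would not compile, because the
	// composite literal is not addressable.
	fmt.Println(GroupResource{Group: "apps", Resource: "deployments"}.String())
}
```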
Signed-off-by: Monis Khan <mkhan@redhat.com>
Bootstrap initializes the necessary vSphere objects before the tests are
run. A call to Bootstrap was missing in persistent_volumes-vsphere.go's
BeforeEach. This results in a panic while running e2e tests for the 'vsphere'
provider, with a stack trace like this:
/usr/local/go/src/runtime/panic.go:502 +0x229
github.com/docker/kube-e2e-image/vendor/k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.1()
/go/src/github.com/docker/kube-e2e-image/vendor/k8s.io/kubernetes/test/e2e/storage/vsphere/persistent_volumes-vsphere.go:77
+0xa21
github.com/docker/kube-e2e-image/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc4217c9b60,
0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/go/src/github.com/docker/kube-e2e-image/e2e_test.go:88 +0x2c8
testing.tRunner(0xc4206e01e0, 0x4212900)
/usr/local/go/src/testing/testing.go:777 +0xd0
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:824 +0x2e0
This change fixes the panic by calling Bootstrap.
Testing:
After this change, tests with FOCUS set to "PersistentVolumes:vsphere"
don't panic; they pass as expected.
Signed-off-by: Anusha Ragunathan <anusha.ragunathan@docker.com>
Normally the pod would get created via a DaemonSet controller, but
during testing it is easier to create it directly. We just need to
ignore errors (like 'No API token found for service account
"csi-service-account"') and retry for a while. If the error persists,
it will still eventually abort the test and get reported.
This problem also occurs elsewhere, so a utility function in the
framework for it seems justified.
Fixes: #68776
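A minimal sketch of such a utility (illustrative, not the framework's final signature), using wait.PollImmediate to retry until the transient error clears or the timeout is reached, at which point the last error is reported.
```go
package retrysketch

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// createWithRetry keeps calling create() until it succeeds or timeout expires;
// transient failures (e.g. a missing service-account token) are retried.
func createWithRetry(create func() error, timeout time.Duration) error {
	var lastErr error
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		if lastErr = create(); lastErr != nil {
			return false, nil // not done yet, retry
		}
		return true, nil
	})
	if err != nil {
		return fmt.Errorf("create did not succeed within %v: %v", timeout, lastErr)
	}
	return nil
}
```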
This fixes the Redis StatefulSet e2e test by adding the missing `publishNotReadyAddresses: true` field, which was accidentally left out in #63742.
Without this fix, the redis e2e test will fail because the pod is unable
to lookup the service:
```
2018/09/14 13:57:10 lookup redis on 172.31.9.247:53: no such host
```
Signed-off-by: Mikkel Oscar Lyderik Larsen <mikkel.larsen@zalando.de>
The current interface is kind of clunky and not super easy to use, since
you have to specify parameters to specify which versions to diff. Also
the default isn't the most useful setting.
Change the interface by removing all the parameters and force only one
useful use-case, that is: diffing what's currently live against
what would be live if applied.
The EtcdMain function in integration tests is designed to launch an etcd
instance only when running in a bazel test. For non-bazel, we rely on
hack/make-rules/test-integration.sh to bring up the etcd instance.
This patch fixes the following in EtcdMain:
1. If etcd is not found in ${RUNFILES_DIR} then look in ${PATH}.
2. Try to connect to the etcd started by `make test-integration`; if it
is up, then don't start etcd.
3. Gracefully shut down etcd after tests.
4. Get a port from the OS instead of deriving it from argv[0] (see the sketch below).
5. Don't use sync.Once.
The benefit of this change is that integration tests work with `go test`
as well as `make test-integration` without users needing to do anything
special. That makes it much easier to pass go testing flags to tests and
integrate with IDEs.
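A minimal sketch of point 4 (standard library only): ask the kernel for a free port by listening on port 0 and reading back the assigned address.
```go
package portsketch

import "net"

// freePort asks the OS for an unused TCP port on the loopback interface.
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}
```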
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Added unschedulable and network-unavailable toleration.
Signed-off-by: Da K. Ma <klaus1982.cn@gmail.com>
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
part of #61312
fixes: https://github.com/kubernetes/kubernetes/issues/67606
**Release note**:
```release-note
If `TaintNodesByCondition` is enabled, add `node.kubernetes.io/unschedulable` and
`node.kubernetes.io/network-unavailable` automatically to DaemonSet pods.
```
Automatic merge from submit-queue (batch tested with PRs 65250, 68241). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Initial node performance testing framework.
This PR adds a framework for node performance testing.
Partially fixes: https://github.com/kubernetes/kubernetes/issues/65249.
Use the following command to run this test:
```sh
make test-e2e-node FOCUS="Node Performance Testing" SKIP="" PARALLELISM=1
```
It has been tested in the following environment:
- n1-standard-16
- Ubuntu 16.04
- docker 17.03.2
Note to reviewers:
This PR won't pass node e2e since the docker images in https://github.com/kubernetes/kubernetes/pull/65251 are required for this to function. The node e2e will fail when trying to pull the required images for testing.
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
CSI Node info registration in kubelet
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #67683
**Special notes for your reviewer**:
Feature issue: https://github.com/kubernetes/features/issues/557
Design doc: https://github.com/kubernetes/community/pull/2034
Missing pieces:
* CSI client retry and exponential backoff logic.
* CSINodeInfo object validation
* e2e test with all the CSI machinery.
An RBAC rule is also added to support external-provisioner topology updates.
**Release note**:
```release-note
Registers volume topology information reported by a node-level Container Storage Interface (CSI) driver. This enables Kubernetes support of CSI topology mechanisms.
```
Automatic merge from submit-queue (batch tested with PRs 68341, 68385). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fixed error message reporting for Gluster driver tests
Quick fix for error message for Gluster driver tests. This doesn't solve the problem but will make it easier to pinpoint the issue if it flakes again on CI.
Related to: #68373
/sig storage
/kind bug
/assign @msau42
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67950, 68195). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Remove e2e-image-puller
**What this PR does / why we need it**:
A long time ago, we added image prepulling as a workaround for
the overwhelming amount of flakiness caused by image pulls during the tests.
This functionality has been broken for a while now, since we switched to a
COS image where mounting the `docker` binary into `busybox` stopped working.
So we just have dead code that we should clean up.
Change-Id: I538171a5c1d9361eee7f9e0a99655b88b1721e3e
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #63355
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Update default etcd server to 3.2.24 for kubernetes 1.12
**What this PR does / why we need it**:
Update default etcd server to 3.2.24 for kubernetes 1.12
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
xref #68147
**Special notes for your reviewer**:
NONE
**Release note**:
```
Update default etcd server to 3.2.24 for kubernetes 1.12
```
/assign @wojtek-t @jpbetz @dims
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews @gyuho
What this PR does / why we need it:
Simple code and typo fixes in the NFS tests. The NFS tests are useful as an example of how to configure an NFS server, and this typo was hurting code comprehension.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
none
Special notes for your reviewer:
none
Release note:
none
Automatic merge from submit-queue (batch tested with PRs 68087, 68256, 64621, 68299, 68296). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Bump addon-manager to v8.7
**What this PR does / why we need it**:
Major changes:
- Support extra `--prune-whitelist` resources in kube-addon-manager.
- Update kubectl to v1.10.7.
Basically picking up https://github.com/kubernetes/kubernetes/pull/67743.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #NONE
**Special notes for your reviewer**:
/assign @Random-Liu @mikedanese
**Release note**:
```release-note
Bump addon-manager to v8.7
- Support extra `--prune-whitelist` resources in kube-addon-manager.
- Update kubectl to v1.10.7.
```
This changes the custom metrics client logic over to support multiple versions
of the custom metrics API by checking discovery to find the appropriate versions.
Fixes #68011
Co-authored-by: Solly Ross <sross@redhat.com>
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix gce localssd pv tests
**What this PR does / why we need it**:
When running local PV tests against GCE local SSD, it directly uses the disk so doesn't need to create a tmp dir like the other test formats. Fsgroup tests do not create test-file so don't error on cleanup if the file doesn't exist.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #68308
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 68171, 67945, 68233). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Move the CloudControllerManagerConfiguration to an API group in `cmd/`
**What this PR does / why we need it**:
This PR is the last piece of https://github.com/kubernetes/kubernetes/issues/67233.
It moves the `CloudControllerManagerConfiguration` to its own `cloudcontrollermanager.config.k8s.io` config API group, but unlike the other components this API group is "private" (only available in `k8s.io/kubernetes`, which limits consumer base), as it's located entirely in `cmd/` vs a staging repo.
This decision was made for now as we're not sure what the story for the ccm loading ComponentConfig files is, and probably a "real" file-loading ccm will never exist in core, only helper libraries. Eventually the ccm will only be a library in any case, and implementors will/can use the base types the ccm library API group provides. It's probably good to note that there is no practical implication of this change as the ccm **cannot** read ComponentConfig files. Hence the code move isn't user-facing.
With this change, we're able to remove `pkg/apis/componentconfig`, as this was the last consumer. That is hence done in this PR as well (so the move is easily visible in git, vs first one "big add" then a "big remove"). The only piece of code that was used was the flag helper structs, so I moved them to `pkg/util/flag` that I think makes sense for now.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
ref: kubernetes/community#2354
**Special notes for your reviewer**:
This PR builds on top of (first two commits, marked as `Co-authored by: @stewart-yu`) https://github.com/kubernetes/kubernetes/pull/67689
**Release note**:
```release-note
NONE
```
/assign @liggitt @sttts @thockin @stewart-yu
Automatic merge from submit-queue (batch tested with PRs 68171, 67945, 68233). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
[e2e] verifying LimitRange update is effective before creating new pod
**What this PR does / why we need it**:
Refer to the flaky test mentioned in #68170, LimitRange updating should be verified before creating new pod.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #68170
**Special notes for your reviewer**:
/cc bsalamat k82cn
/sig scheduling
**Release note**:
```release-note
[e2e] verifying LimitRange update is effective before creating new pod
```
In the e2e test for "apply set/view last-applied", the
failure message is `Missing "replicas": 2 in kubectl
view-last-applied`, even though the `replicas` key is contained.
This changes `Missing` to `Presenting`.
Automatic merge from submit-queue (batch tested with PRs 68161, 68023, 67909, 67955, 67731). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
CSI: skip attach for non-attachable drivers
**What this PR does / why we need it**:
This is implementation of https://github.com/kubernetes/community/pull/2523. CSI volumes that don't need attach/detach now don't need external attacher running.
WIP:
* contains #67803 to get CSIDriver API. Ignore the first commit.
* ~~missing e2e test~~
/sig storage
cc: @saad-ali @vladimirvivien @verult @msau42 @gnufied @davidz627
**Release note**:
```release-note
CSI volume plugin does not need external attacher for non-attachable CSI volumes.
```
Automatic merge from submit-queue (batch tested with PRs 68161, 68023, 67909, 67955, 67731). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Replace git volume with configmap in emptydir wrapper conflict test
**What this PR does / why we need it**: GitRepoVolumeSource is deprecated, use a ConfigMap instead. (This test is part of the conformance suite, so it would be good to allow downstreams to disable/not support gitRepo)
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67691, 68147). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Bump versions of components with latest security patches.
**What this PR does / why we need it**:
Upgrade versions of monitoring components used on GCP, to include latest security patches.
**Release note**:
```release-note
[fluentd-gcp-scaler addon] Bump fluentd-gcp-scaler to 0.4 to pick up security fixes.
[prometheus-to-sd addon] Bump prometheus-to-sd to 0.3.1 to pick up security fixes, bug fixes and new features.
[event-exporter addon] Bump event-exporter to 0.2.3 to pick up security fixes.
```
Automatic merge from submit-queue (batch tested with PRs 67709, 67556). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Fix volume scheduling issue with pod affinity and anti-affinity
**What this PR does / why we need it**:
The previous design of the volume scheduler had volume assume + bind done before pod assume + bind. This causes issues when trying to evaluate future pods with pod affinity/anti-affinity because the pod has not been assumed while the volumes have been decided.
This PR changes the design so that volume and pod are assumed first, followed by volume and pod binding. Volume binding waits (asynchronously) for the operations to complete or error. This eliminates the subsequent passes through the scheduler to wait for volume binding to complete (although pod events or resyncs may still cause the pod to run through scheduling while binding is still in progress). This design also aligns better with the scheduler framework design, so will make it easier to migrate in the future.
Many changes had to be made in the volume scheduler to handle this new design, mostly around:
* How we cache pending binding operations. Now, any delayed binding PVC that is not fully bound must have a cached binding operation. This also means bind API updates may be repeated.
* Waiting for the bind operation to fully complete, and detecting failure conditions to abort the bind and retry scheduling.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #65131
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixes issue where pod scheduling may fail when using local PVs and pod affinity and anti-affinity without the default StatefulSet OrderedReady pod management policy
```
Automatic merge from submit-queue (batch tested with PRs 66840, 68159). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
TTL for cleaning up Jobs after they finish
**What this PR does / why we need it**: https://github.com/kubernetes/features/issues/592
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #64470
For https://github.com/kubernetes/features/issues/592
**Special notes for your reviewer**: @kubernetes/sig-apps-pr-reviews
**Release note**:
```release-note
Add a TTL mechanism to clean up Jobs after they finish.
```
Automatic merge from submit-queue (batch tested with PRs 67736, 68123, 68138). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Update external provisioner test to use latest nfs-provisioner
**What this PR does / why we need it**: latest nfs-provisioner will work with cri-containerd, so let's update it
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**: I want to move this test to use nfs-client-provisioner soon anyway since a lot of our e2e tests already use a containerized nfs server and it would be good to be consistent. So this can be treated as something of a stopgap but it would be nice to have ASAP to unblock https://github.com/kubernetes-incubator/external-storage/issues/432#issuecomment-417511065
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 67736, 68123, 68138). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Port security context NodeConformance e2e_node tests to e2e
**What this PR does / why we need it**:
Port all [NodeConformance] SecurityContext e2e_node tests to e2e/common.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #67032
**Special notes for your reviewer**:
- This PR is a continuing effort to close #67032.
- Removed ContainerRuntime constraint [as discussed](https://github.com/kubernetes/kubernetes/pull/67032#discussion_r214201870).
- Porting all [NodeConformance] tests to e2e/common which do not have node dependencies.
- Does it make sense to port [privileged test](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/security_context_test.go#L558) to e2e/common and remove [NodeFeature:HostAccess] label from test name?
**Release note**:
```release-note
NONE
```
/area conformance
@kubernetes/sig-node-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 63011, 68089, 67944, 68132). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Support both directory and block device for local volume plugin FileSystem VolumeMode
Support both directory and block device for local volume plugin FileSystem VolumeMode
xref: [local storage dynamic provisioning design #1914](https://github.com/kubernetes/community/pull/1914)
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Support both directory and block device for local volume plugin FileSystem VolumeMode
```
A long time ago, we added image prepulling as a workaround for
the overwhelming amount of flakiness caused by image pulls during the tests.
This functionality has been broken for a while now, since we switched to a
COS image where mounting the `docker` binary into `busybox` stopped working.
So we just have dead code that we should clean up.
Change-Id: I538171a5c1d9361eee7f9e0a99655b88b1721e3e
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.
Add --server-dry-run flag to `kubectl apply`
- Adds the flag
- changes the helper so that we can pass options for patch,
- Adds a test to make sure it doesn't change the object
**What this PR does / why we need it**:
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Add new `--server-dry-run` flag to `kubectl apply` so that the request will be sent to the server with the dry-run flag (alpha), which means that changes won't be persisted.
```