This patch introduces a new glusterfsPersistentVolumeSource type alongside
glusterfsVolumeSource. All fields remain the same as in glusterfsVolumeSource,
with the addition of a new field called `EndpointsNamespace` that defines the
namespace of the endpoint in the spec.
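For reference, a minimal sketch of the new type (JSON tags and doc comments are abbreviated; the API definition in the patch is authoritative):

```go
package v1

// GlusterfsPersistentVolumeSource mirrors GlusterfsVolumeSource and adds
// EndpointsNamespace (sketch only; tags and comments abbreviated).
type GlusterfsPersistentVolumeSource struct {
	// EndpointsName is the endpoint name that details Glusterfs topology.
	EndpointsName string `json:"endpoints"`
	// Path is the Glusterfs volume path.
	Path string `json:"path"`
	// ReadOnly forces the Glusterfs volume to be mounted read-only.
	ReadOnly bool `json:"readOnly,omitempty"`
	// EndpointsNamespace is the namespace that contains the Glusterfs endpoint.
	// This is the only field not present in GlusterfsVolumeSource.
	EndpointsNamespace *string `json:"endpointsNamespace,omitempty"`
}
```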
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Creating secrets is useful for CSI drivers like ceph-csi, which have to
be configured via secrets.
While at it, the UniqueName method gets replaced with
MetaNamespaceKeyFunc, which does the same thing (at least as long as
non-namespaced items don't have a redundant namespace set), and the
factory types are no longer exported (exporting them was unnecessary).
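For context, MetaNamespaceKeyFunc from client-go builds keys of the form `<namespace>/<name>`, falling back to just `<name>` when the namespace is empty, which is why it can stand in for the old UniqueName method. A small illustration (the object names are made up):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Namespaced item: the key is "<namespace>/<name>".
	secret := &corev1.Secret{ObjectMeta: metav1.ObjectMeta{Name: "csi-secret", Namespace: "default"}}
	key, _ := cache.MetaNamespaceKeyFunc(secret)
	fmt.Println(key) // default/csi-secret

	// Non-namespaced item: the key is just "<name>", as long as no
	// redundant namespace is set on the object.
	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "node-1"}}
	key, _ = cache.MetaNamespaceKeyFunc(node)
	fmt.Println(key) // node-1
}
```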
This way we can be sure that the kubelet can't communicate with the
master, even if it falls back to the internal/external IP (which seems
to be the case with DNS).
Issue #56787
As discussed in #68905,
some tests in test/e2e/common/host_path.go are already covered by
test/e2e/storage/testsuites/subpath.go,
so we don't need to keep maintaining them in
test/e2e/common/host_path.go.
The striped cache used by the token cache is slightly more sophisticated;
however, the simple cache provides essentially the same behavior. I used
the striped cache rather than the simple cache because:
* It has been used without issue as the primary token cache.
* It performs better under load.
* It is already exposed in the public API of the token cache package.
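For readers unfamiliar with the term: a striped cache splits the key space across several independently locked shards, so concurrent lookups of different keys rarely contend on the same mutex. A simplified sketch of the idea (not the actual implementation in the token cache package, which also handles entry expiry):

```go
package stripedcache

import (
	"hash/fnv"
	"sync"
)

// stripedCache spreads keys across shards, each with its own lock.
type stripedCache struct {
	shards []*shard
}

type shard struct {
	mu    sync.RWMutex
	items map[string]interface{}
}

func newStripedCache(n int) *stripedCache {
	c := &stripedCache{shards: make([]*shard, n)}
	for i := range c.shards {
		c.shards[i] = &shard{items: map[string]interface{}{}}
	}
	return c
}

// shardFor hashes the key and picks the shard responsible for it.
func (c *stripedCache) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return c.shards[h.Sum32()%uint32(len(c.shards))]
}

func (c *stripedCache) get(key string) (interface{}, bool) {
	s := c.shardFor(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.items[key]
	return v, ok
}

func (c *stripedCache) set(key string, value interface{}) {
	s := c.shardFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[key] = value
}
```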
The command executed in the ValidateController function uses the
image name of the running container. This is a problem with multiarch
images, since that image name is specific to the architecture,
while the image passed as a parameter is the multiarch
one (as the tests are architecture agnostic), causing the test to fail.
This patch fixes it by making the logic use a command that gets the
multiarch name given in the container spec.
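The idea, sketched with a hypothetical helper (the actual test builds a kubectl template command, but the principle is the same): read the image from the container spec, which still holds the multiarch name as it was passed in, rather than from the running container, which may report the architecture-specific image.

```go
package e2e

import corev1 "k8s.io/api/core/v1"

// containerImageFromSpec is a hypothetical helper illustrating the fix:
// take the image name from the pod spec (the multiarch name the test was
// given) instead of the image reported for the running container, which
// can be the architecture-specific name.
func containerImageFromSpec(pod *corev1.Pod, containerName string) string {
	for _, c := range pod.Spec.Containers {
		if c.Name == containerName {
			return c.Image
		}
	}
	return ""
}
```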
Signed-off-by: Miguel Herranz <miguel@midokura.com>
This change updates the etcd storage path test to exercise custom
resource storage by creating custom resource definitions before
running the test.
Duplicated custom resource definition test logic was consolidated.
Signed-off-by: Monis Khan <mkhan@redhat.com>
Tests for scaling down based on an external metric are flaky. I think this
is because they:
- Start with 2 replicas,
- Export metric value == 1/2 target,
- Expect scale down to 1.
Since the expected recommendation is exactly 1, it might flake (and with
scale-down stabilization any recommendation higher than 1 will
persist).
Change the expected value of the metric so the recommended size will be
lower than 1. This should make those tests less flaky.
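For reference, the HPA recommendation follows the documented rule desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue); the quick calculation below shows why the old values sat exactly on the boundary (the metric numbers are illustrative, not the exact values used by the test):

```go
package main

import (
	"fmt"
	"math"
)

// desired applies the documented HPA rule:
// desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue)
func desired(currentReplicas int, current, target float64) int {
	return int(math.Ceil(float64(currentReplicas) * current / target))
}

func main() {
	// Old setup: 2 replicas and metric == 1/2 target give a product of
	// exactly 1.0; any noise pushing the metric slightly above target/2
	// makes the ceiling 2 again, hence the flakes.
	fmt.Println(desired(2, 50, 100)) // 1, but right on the boundary

	// New setup: a lower metric keeps the product well below 1.0 even
	// with some noise, so the recommendation stays at 1.
	fmt.Println(desired(2, 30, 100)) // 1, with headroom
}
```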
The detailed dumps of the original and patched item content were useful
while developing the feature, but they are less relevant now and too
verbose. They might become relevant again, so they are left in the code
as comments.
What gets logged now is just a single-line "creating" or "deleting"
message with the type of the item and its unique name.
This also cleans up some other aspects of the original logging:
- the namespace is included for item types that are namespaced
- the "deleting" message no longer gets replicated in each factory
method
Fixes: #70448
- Scale down based on a custom metric was flaking. Increase the target
value of the metric.
- Scale down based on CPU was flaking during stabilization. Increase the
tolerance during stabilization (the flakiness was caused by the resource
consumer using more CPU than requested).
Debugging the CSI driver tests depends a lot on the output of the CSI
sidecar containers and the CSI driver, but that information was not
captured automatically and thus was unavailable after a test run. This is
particularly bad when running in a remote CI system, but even manually
watching the cluster during a test was cumbersome.
Now pod events and log messages get copied to the test's output at the
time that they happen (when running without report directory) or get
written to individual log files (when running with report directory in
the CI).
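A rough sketch of the mechanism, assuming a client-go clientset (the function names here are illustrative, not the framework's actual API); the real code runs this per container and writes either to the test output or to per-container files under the report directory:

```go
package podlogs

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// copyPodLogs streams the logs of one container to out as they are produced.
func copyPodLogs(ctx context.Context, cs kubernetes.Interface, ns, pod, container string, out io.Writer) error {
	req := cs.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: container,
		Follow:    true, // keep streaming while the test runs
	})
	stream, err := req.Stream(ctx)
	if err != nil {
		return err
	}
	defer stream.Close()
	_, err = io.Copy(out, stream) // copy log lines as they arrive
	return err
}

// copyPodEvents writes events from the namespace to out as they happen.
func copyPodEvents(ctx context.Context, cs kubernetes.Interface, ns string, out io.Writer) error {
	w, err := cs.CoreV1().Events(ns).Watch(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	defer w.Stop()
	for {
		select {
		case <-ctx.Done():
			return nil
		case ev, ok := <-w.ResultChan():
			if !ok {
				return nil
			}
			if e, ok := ev.Object.(*corev1.Event); ok {
				fmt.Fprintf(out, "%s %s/%s: %s\n", e.Type, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
			}
		}
	}
}
```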
Ensuring that CSI drivers get deployed for testing exactly as intended
was problematic because the original .yaml files had to be converted
into code. e2e/manifest helped a bit, but not enough:
- could not load all entities
- didn't handle loading .yaml files with multiple entities
- actually creating and deleting entities still had to be done in tests
The new framework utility code handles all of that, including the
tricky cleanup operation that tests got wrong (AfterEach does not get
called after test failures!).
In addition, it ensures that each test gets its own instance of the
entities.
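A sketch of the loading part, assuming "---" separated documents decoded into unstructured objects (the real utility additionally patches names/namespaces, creates the objects, and registers cleanup callbacks so they get deleted even when a test fails):

```go
package utils

import (
	"io"
	"os"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/util/yaml"
)

// loadItems reads a .yaml file that may contain multiple entities and
// returns one unstructured object per document.
func loadItems(path string) ([]*unstructured.Unstructured, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var items []*unstructured.Unstructured
	decoder := yaml.NewYAMLOrJSONDecoder(f, 4096)
	for {
		item := &unstructured.Unstructured{}
		if err := decoder.Decode(item); err != nil {
			if err == io.EOF {
				return items, nil
			}
			return nil, err
		}
		// Skip empty documents, e.g. a trailing "---".
		if len(item.Object) == 0 {
			continue
		}
		items = append(items, item)
	}
}
```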
The PSP role binding for hostpath is now necessary because we switched
from creating a pod directly to creating it via the StatefulSet
controller, which runs with fewer privileges.
Without this, the hostpath test runs into these errors in the
kubernetes-e2e-gce job:
Oct 19 16:30:09.225: INFO: At 2018-10-19 16:25:07 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } FailedCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher failed error: pods "csi-hostpath-attacher-0" is forbidden: unable to validate against any pod security policy: []
Oct 19 16:30:09.225: INFO: At 2018-10-19 16:25:07 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } FailedCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner failed error: pods "csi-hostpath-provisioner-0" is forbidden: unable to validate against any pod security policy: []
Oct 19 16:30:09.225: INFO: At 2018-10-19 16:25:07 +0000 UTC - event for csi-hostpathplugin: {daemonset-controller } FailedCreate: Error creating: pods "csi-hostpathplugin-" is forbidden: unable to validate against any pod security policy: []
The extra role binding is silently ignored on clusters which don't
have this particular role.
When we get an unsupported provider message, it often isn't clear what
method actually failed - add more information to the error message.
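One possible shape for this, as a hypothetical helper (not the exact change): include the calling function in the message so the failing method is obvious from the error alone.

```go
package framework

import (
	"fmt"
	"runtime"
)

// unsupportedProviderError is a hypothetical helper: it records which
// method gave up on the provider, so the message points at the failing
// call site instead of a generic "unsupported provider".
func unsupportedProviderError(provider string) error {
	method := "unknown method"
	if pc, _, _, ok := runtime.Caller(1); ok {
		method = runtime.FuncForPC(pc).Name()
	}
	return fmt.Errorf("%s: unsupported provider %q", method, provider)
}
```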
Issue #70280
Setting command line arguments via env variables that are not needed
by the binaries is unnecessarily complex. The driver renaming
code in the E2E manifest PR would have to be made more complex to deal
with such a deployment. Removing the indirection is easier both for that
code and for humans who look at the .yaml file.