mirror of https://github.com/k3s-io/k3s
Merge pull request #62761 from Random-Liu/lower-usage-nano-cores-in-summary
Automatic merge from submit-queue (batch tested with PRs 62761, 62715). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Lower UsageNanoCores boundary in summary api test.

We recently switched from `bridge` to `p2p` in containerd (https://github.com/containerd/cri/pull/742). After that switch, `UsageNanoCores` became lower and the test fails constantly. An example failure:
* https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/pr-logs/pull/containerd_cri/740/pull-cri-containerd-node-e2e/690/

This is probably because:
1) The test container used in the summary test only does `ping`. https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/summary_test.go#L352
2) `p2p` is simpler than `bridge`; "Maybe cycles are saved from waiving MAC learning" - @jingax10.

This PR lowers the boundary by one order of magnitude.

Signed-off-by: Lantao Liu <lantaol@google.com>

**Release note**:
```release-note
none
```
commit 1ddb0e05e5
```diff
--- a/test/e2e_node/summary_test.go
+++ b/test/e2e_node/summary_test.go
@@ -175,7 +175,7 @@ var _ = framework.KubeDescribe("Summary API", func() {
 				"StartTime": recent(maxStartAge),
 				"CPU": ptrMatchAllFields(gstruct.Fields{
 					"Time":                 recent(maxStatsAge),
-					"UsageNanoCores":       bounded(100000, 1E9),
+					"UsageNanoCores":       bounded(10000, 1E9),
 					"UsageCoreNanoSeconds": bounded(10000000, 1E11),
 				}),
 				"Memory": ptrMatchAllFields(gstruct.Fields{
@@ -222,7 +222,7 @@ var _ = framework.KubeDescribe("Summary API", func() {
 				}),
 				"CPU": ptrMatchAllFields(gstruct.Fields{
 					"Time":                 recent(maxStatsAge),
-					"UsageNanoCores":       bounded(100000, 1E9),
+					"UsageNanoCores":       bounded(10000, 1E9),
 					"UsageCoreNanoSeconds": bounded(10000000, 1E11),
 				}),
 				"Memory": ptrMatchAllFields(gstruct.Fields{
```
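For context, the `bounded` and `ptrMatchAllFields` helpers referenced in the diff are Gomega/gstruct matchers defined elsewhere in `summary_test.go`. Below is a minimal sketch of how such helpers could look; it is an illustrative assumption, not code copied from the test file, so the real definitions may differ.

```go
package summary

import (
	"github.com/onsi/gomega"
	"github.com/onsi/gomega/gstruct"
	"github.com/onsi/gomega/types"
)

// bounded asserts that a *numeric stats field is set and falls within
// [lower, upper]; lowering `lower` loosens the minimum the test accepts.
// (Sketch only; the real helper in summary_test.go may differ.)
func bounded(lower, upper interface{}) types.GomegaMatcher {
	return gstruct.PointTo(gomega.And(
		gomega.BeNumerically(">=", lower),
		gomega.BeNumerically("<=", upper)))
}

// ptrMatchAllFields dereferences a struct pointer and requires every
// listed field to satisfy its matcher.
func ptrMatchAllFields(fields gstruct.Fields) types.GomegaMatcher {
	return gstruct.PointTo(gstruct.MatchAllFields(fields))
}
```

With the lowered bound, a container reporting as little as 10,000 nanocores (0.00001 of a core) still satisfies the matcher, while the 1E9 upper bound (one full core) is unchanged.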