If the test HTTP servers are not closed, running `go test -count=n` makes them pile up until the process hits its open-file limit:
```
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:39768: accept4: too many open files; retrying in 5ms
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:46606: accept4: too many open files; retrying in 5ms
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:46606: accept4: too many open files; retrying in 10ms
2015/12/05 12:43:56 http: Accept error: accept tcp 127.0.0.1:46606: accept4: too many open files; retrying in 20ms
```
Steps to reproduce (no longer fails):
```
godep go test -short -run '^$' -o test .
./test -test.run '^TestDoRequestNewWayFile$' -test.count 100
```
Note that this might not fail if your `ulimit -n` is sufficiently high.
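For illustration, a minimal sketch of the pattern the fix applies; the test name and body here are made up, not the actual test from the repro steps. Each httptest server holds an open listener (one file descriptor per server), so repeated runs leak descriptors unless each server is closed:
```go
package client_test

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// TestServerClosed is illustrative: without the deferred Close, every run
// of this test leaks the server's listening socket.
func TestServerClosed(t *testing.T) {
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer ts.Close() // the fix: release the listener when the test returns

	resp, err := http.Get(ts.URL)
	if err != nil {
		t.Fatal(err)
	}
	resp.Body.Close()
}
```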
There has been a recent regression causing the kubelet to assume that no containers are running for a pod it has not seen before. As a result, all of a pod's containers would be restarted after the kubelet itself restarted. This change fixes the bug.
Implement a flag that defines how frequently a node's out-of-disk condition can change its status. Use this flag to suspend further out-of-disk status changes for the specified period after the status changes once. Set the flag to 0 in e2e tests so that the out-of-disk node condition can be tested predictably.
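A minimal sketch of how such suppression could look; the names (`outOfDiskTransitionLimiter`, `set`) are hypothetical, and the injected clock interface anticipates the testability point made next:
```go
package kubelet

import "time"

// Clock abstracts time.Now so tests can substitute a fake; it mirrors the
// role util.Clock plays, per the paragraph below.
type Clock interface {
	Now() time.Time
}

type realClock struct{}

func (realClock) Now() time.Time { return time.Now() }

// outOfDiskTransitionLimiter suppresses out-of-disk condition flips for
// transitionFrequency after each accepted change; a frequency of 0 (the
// e2e setting) disables suppression entirely.
type outOfDiskTransitionLimiter struct {
	clock               Clock
	transitionFrequency time.Duration
	lastTransition      time.Time
	outOfDisk           bool
}

// set applies a new out-of-disk status and reports whether it was accepted.
func (l *outOfDiskTransitionLimiter) set(outOfDisk bool) bool {
	if outOfDisk == l.outOfDisk {
		return true // no transition, nothing to suppress
	}
	if l.transitionFrequency > 0 && l.clock.Now().Sub(l.lastTransition) < l.transitionFrequency {
		return false // status changed too recently; hold the old value
	}
	l.outOfDisk = outOfDisk
	l.lastTransition = l.clock.Now()
	return true
}
```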
Also, use the util.Clock interface for all time-related functionality in the kubelet. Calling time functions from the unversioned package or the time package directly, such as unversioned.Now() or time.Now(), makes such code very hard to test. It also makes the tests flaky and sometimes unnecessarily slow, due to the time.Sleep() calls used to simulate elapsed time. Use the util.Clock interface instead, which can be faked in tests.
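Building on the sketch above, a test can then drive the limiter with a fake clock (standing in for util.Clock's fake implementation) rather than sleeping:
```go
package kubelet

import (
	"testing"
	"time"
)

// fakeClock lets the test advance time explicitly instead of calling
// time.Sleep, so the test runs instantly and deterministically.
type fakeClock struct{ t time.Time }

func (f *fakeClock) Now() time.Time       { return f.t }
func (f *fakeClock) Step(d time.Duration) { f.t = f.t.Add(d) }

func TestOutOfDiskSuppression(t *testing.T) {
	clock := &fakeClock{t: time.Unix(0, 0)}
	l := &outOfDiskTransitionLimiter{clock: clock, transitionFrequency: 5 * time.Minute}

	if !l.set(true) {
		t.Fatal("first transition should be accepted")
	}
	if l.set(false) {
		t.Fatal("a transition inside the suppression window should be held back")
	}
	clock.Step(5 * time.Minute) // no time.Sleep required
	if !l.set(false) {
		t.Fatal("a transition after the window should be accepted")
	}
}
```
With Step(), the suppression window elapses instantly, which is exactly what removing the time.Sleep() calls buys the tests.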
If a route already exists but is invalid (e.g. from a crash), we
automatically delete it before trying to create a route that would
otherwise conflict.
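A sketch of the delete-before-create pattern this describes, with simplified stand-in types rather than the controller's real cloud-provider interfaces:
```go
package routes

import "fmt"

// Route and RouteManager are illustrative stand-ins; the shape of the
// check is what matters here.
type Route struct {
	Name, DestinationCIDR, TargetNode string
}

type RouteManager interface {
	Get(name string) (*Route, error)
	Delete(name string) error
	Create(r *Route) error
}

// ensureRoute creates want, first deleting any same-named route whose
// target or CIDR no longer matches (e.g. one left behind by a crash).
func ensureRoute(m RouteManager, want *Route) error {
	existing, err := m.Get(want.Name)
	if err == nil && existing != nil {
		if existing.DestinationCIDR == want.DestinationCIDR && existing.TargetNode == want.TargetNode {
			return nil // route already correct; nothing to do
		}
		// A conflicting, stale route exists: delete it before recreating.
		if err := m.Delete(existing.Name); err != nil {
			return fmt.Errorf("deleting stale route %q: %v", existing.Name, err)
		}
	}
	return m.Create(want)
}
```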
For AWS EBS, a volume can only be attached to a node in the same AZ.
The scheduler must therefore detect if a volume is being attached to a
pod, and ensure that the pod is scheduled on a node in the same AZ as
the volume.
So that the scheduler need not query the cloud provider every time, and to support decoupled operation (e.g. bare metal), we tag the volume with our placement labels. On AWS this is done automatically, by means of an admission controller, when a PersistentVolume backed by an EBS volume is created.
Support for tagging GCE PVs will follow.
Pods that specify a volume directly (i.e. without using a
PersistentVolumeClaim) will not currently be scheduled correctly (i.e.
they will be scheduled without zone-awareness).
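A sketch of the zone-matching check this implies, using the conventional failure-domain label key; the types are simplified stand-ins, not the scheduler's actual predicate code:
```go
package scheduler

// zoneLabel is the conventional failure-domain label key; treat the exact
// key and these simplified types as assumptions for illustration.
const zoneLabel = "failure-domain.beta.kubernetes.io/zone"

type Node struct{ Labels map[string]string }
type PersistentVolume struct{ Labels map[string]string }

// volumeZoneFits reports whether node satisfies the zone constraint of
// every labeled volume the pod uses. Untagged volumes impose no constraint,
// which is why pods specifying unlabeled volumes directly are scheduled
// without zone-awareness.
func volumeZoneFits(node Node, volumes []PersistentVolume) bool {
	nodeZone, nodeHasZone := node.Labels[zoneLabel]
	for _, pv := range volumes {
		pvZone, ok := pv.Labels[zoneLabel]
		if !ok {
			continue
		}
		if !nodeHasZone || pvZone != nodeZone {
			return false
		}
	}
	return true
}
```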
Add flags to control the maximum number of connections (set to 256k, vs. the 64k default) and the TCP established-connection timeout (set to 1 day, vs. the 5-day default). Either flag can be set to 0 to mean "don't change it".
This is only set at startup, and not wrapped in a rectifier loop.
Tested manually.
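A hypothetical sketch of the startup-only wiring, assuming the values map onto the standard Linux conntrack sysctls; the flag names here are assumptions for illustration, not necessarily the proxy's exact flags:
```go
package main

import (
	"flag"
	"fmt"
	"os"
	"time"
)

var (
	conntrackMax = flag.Int("conntrack-max", 262144,
		"Maximum number of connections to track; 0 means don't change it.")
	tcpEstablishedTimeout = flag.Duration("conntrack-tcp-timeout-established", 24*time.Hour,
		"Idle timeout for established TCP connections; 0 means don't change it.")
)

// setSysctl writes a value into /proc/sys, the usual way to adjust kernel
// connection-tracking parameters at runtime.
func setSysctl(path, value string) error {
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	flag.Parse()
	// Applied once at startup; nothing re-checks these values afterwards.
	if *conntrackMax > 0 {
		if err := setSysctl("/proc/sys/net/netfilter/nf_conntrack_max",
			fmt.Sprint(*conntrackMax)); err != nil {
			fmt.Fprintln(os.Stderr, "setting nf_conntrack_max:", err)
		}
	}
	if *tcpEstablishedTimeout > 0 {
		if err := setSysctl("/proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established",
			fmt.Sprint(int(tcpEstablishedTimeout.Seconds()))); err != nil {
			fmt.Fprintln(os.Stderr, "setting tcp_timeout_established:", err)
		}
	}
}
```
Because this runs only once in main, a later external change to the sysctls would go unnoticed, which matches the caveat above that the setting is not wrapped in a rectifier loop.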