because it causes a runtime panic if a binary that has its own implementation
of the "-version" flag tries to reuse a library package which indirectly
depends on "pkg/version".
e.g. If such a user-defined binary tries to link "pkg/api" or "pkg/client",
the binary fails with a runtime panic: "flag redefined: version".
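A minimal sketch of how the collision arises, assuming a hypothetical library
package that registers the flag in an init function (the names below are
illustrative, not the actual pkg/version code):

    package main

    import "flag"

    // Hypothetical library behavior: registering a "version" flag at init
    // time. Any binary importing such a package inherits this registration.
    func init() {
        flag.Bool("version", false, "print version and exit")
    }

    func main() {
        // The binary defines its own "version" flag on the same default
        // FlagSet; the flag package panics with "flag redefined: version".
        flag.Bool("version", false, "print my own version")
        flag.Parse()
    }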
To make sure the etcd watcher works, I changed the replication
controller to use watch.Interface. I made apiserver support watches on
controllers, so replicationController can be run only off of the
apiserver. I made sure all the etcd watch behavior that used to be
tested in replicationController is now tested against the new etcd
watcher in pkg/tools/.
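A minimal sketch of the consumer side, assuming a watch interface shaped like
the one described (a Stop method plus a result channel); the types below are
defined locally for illustration rather than taken from pkg/watch:

    package main

    import "fmt"

    // Illustrative stand-ins for a watch-style API.
    type Event struct {
        Type   string      // e.g. "ADDED", "MODIFIED", "DELETED"
        Object interface{} // the object that changed
    }

    type Interface interface {
        Stop()
        ResultChan() <-chan Event
    }

    // fakeWatcher delivers a canned stream of events for the example.
    type fakeWatcher struct{ ch chan Event }

    func (f *fakeWatcher) Stop()                    {}
    func (f *fakeWatcher) ResultChan() <-chan Event { return f.ch }

    // runController reacts to each change pushed by the watcher instead of
    // polling etcd on a timer.
    func runController(w Interface) {
        defer w.Stop()
        for event := range w.ResultChan() {
            fmt.Printf("observed %s on %v\n", event.Type, event.Object)
            // reconcile desired vs. actual state here
        }
    }

    func main() {
        ch := make(chan Event, 2)
        ch <- Event{Type: "ADDED", Object: "controller-a"}
        ch <- Event{Type: "MODIFIED", Object: "controller-a"}
        close(ch)
        runController(&fakeWatcher{ch: ch})
    }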
Detect whether the tree is dirty and append a "-dirty" indication to the
git commit (a common practice in other repos, e.g. kernel, docker).
Properly handle the case where a git tree is not found (e.g. building
from an archive).
In the sed expression, look for the variable to be updated
(commitFromGit) instead of hardcoding a line number.
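A minimal sketch of the variable the sed expression targets, with the name
(commitFromGit) taken from this commit and the surrounding code assumed for
illustration:

    package version

    import "fmt"

    // commitFromGit is rewritten in place by hack/build-go.sh: the script
    // matches this variable (rather than a fixed line number) and substitutes
    // the current git commit, appending "-dirty" when the tree has local
    // changes and falling back to "(none)" when no git tree is found.
    var commitFromGit = "(none)"

    // String is a hypothetical helper showing how the value surfaces in the
    // "-version" output quoted below.
    func String() string {
        return fmt.Sprintf("Kubernetes version 0.1, build %s", commitFromGit)
    }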
Tested:
- Built from a dirty tree:
$ output/go/bin/kubelet -version
Kubernetes version 0.1, build 2d784c684c75-dirty
- Built from a clean tree:
$ output/go/bin/kubelet -version
Kubernetes version 0.1, build 505f23a31172
- Built from an archive:
$ hack/build-go.sh
WARNING: unable to find git commit, falling back to commitFromGit = `(none)`
$ output/go/bin/kubelet -version
Kubernetes version 0.1, build (none)
Signed-off-by: Filipe Brandenburger <filbranden@google.com>
Tested: Passed -version argument to kubelet (and all other binaries):
$ output/go/bin/kubecfg -version
Kubernetes version 0.1, build 6454a541fd56
Signed-off-by: Filipe Brandenburger <filbranden@google.com>
Currently, every write will result in a 202 (etcd adding a few
ms of latency to each request). This forces clients to go into
a poll loop and pick a reasonable server poll frequency, which
results in 1 + N queries to the server for a single operation
and adds unavoidable latency to each request, which affects
clients' perception of the service.
Add a very slight (25ms by default) delay to wait for requests
to finish. For clients doing normal writes this reduces the
requests made against the server to 1. For clients on long requests
this has no effect. The downside is that http connections are held
open for a longer period under high write loads. The decrease in
perceived latency from kubecfg is significant.
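A minimal sketch of the idea, assuming a hypothetical operation type with a
done channel; the 25ms figure comes from this change, everything else is
illustrative:

    package main

    import (
        "fmt"
        "time"
    )

    // operation stands in for an asynchronous apiserver write; done is
    // closed when the underlying etcd request completes.
    type operation struct {
        done   chan struct{}
        result string
    }

    // awaitBriefly waits up to timeout (25ms by default in this change) for
    // the operation to finish. If it completes in time the caller can return
    // the result directly; otherwise it falls back to the 202/poll path.
    func awaitBriefly(op *operation, timeout time.Duration) (string, bool) {
        select {
        case <-op.done:
            return op.result, true // finished: respond with the result
        case <-time.After(timeout):
            return "", false // still running: return 202 Accepted as before
        }
    }

    func main() {
        op := &operation{done: make(chan struct{})}
        go func() {
            time.Sleep(5 * time.Millisecond) // a typical fast write
            op.result = "created"
            close(op.done)
        }()

        if result, ok := awaitBriefly(op, 25*time.Millisecond); ok {
            fmt.Println("completed synchronously:", result)
        } else {
            fmt.Println("202 Accepted: client polls for completion")
        }
    }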