* Add a test/e2e/shell.go that slurps up everything in hack/e2e-suite
and runs it as a bash test, borrowing all the code from hack/e2e.go
(a rough sketch follows this list).
* Rip out all the crap in hack/e2e.go that deals with multiple tests.
* Move hack/e2e-suite/goe2e.sh to hack/ginkgo-e2e.sh so that it
doesn't get slurped up.
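
A rough sketch of what that shell.go runner could look like (file discovery and invocation only; the test name and relative paths are illustrative, not the actual code):

```go
package e2e

import (
	"os/exec"
	"path/filepath"
	"testing"
)

// TestE2ESuiteScripts runs every script in hack/e2e-suite as a bash test.
func TestE2ESuiteScripts(t *testing.T) {
	scripts, err := filepath.Glob("../../hack/e2e-suite/*.sh")
	if err != nil {
		t.Fatalf("globbing hack/e2e-suite: %v", err)
	}
	for _, script := range scripts {
		out, err := exec.Command("bash", script).CombinedOutput()
		if err != nil {
			t.Errorf("%s failed: %v\n%s", script, err, out)
		}
	}
}
```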
Simply incorporate some of the boilerplate from hack/e2e.go into the
scripts in hack/e2e-suite.
Use environment variables with default values to allow overriding the
kubectl command line and to use a versioned package root.
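
The defaulting pattern, expressed in Go for illustration (the scripts themselves do this with bash `${VAR:-default}` expansions; the helper and default values here are hypothetical):

```go
package main

import (
	"fmt"
	"os"
)

// getenvOrDefault returns the environment value for key, or def when
// the variable is unset or empty.
func getenvOrDefault(key, def string) string {
	if v := os.Getenv(key); v != "" {
		return v
	}
	return def
}

func main() {
	// Allow the kubectl command and the package root to be overridden.
	kubectl := getenvOrDefault("KUBECTL", "cluster/kubectl.sh")
	kubeRoot := getenvOrDefault("KUBE_ROOT", ".")
	fmt.Println(kubectl, kubeRoot)
}
```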
Tested:
- Ran `go run hack/e2e.go -test` on a test cluster.
- Ran the test cases individually.
- Ran hack/e2e-suite/goe2e.sh -t Pods to confirm it takes arguments.
- Also fixed cluster/test-network.sh (which should become increasingly irrelevant).
Use the E2E_REPORT_DIR global environment variable to define the
location where the JUnit XML reports should be saved.
Modify the Jenkins e2e.sh script to export that variable pointing to the
top of the Jenkins build tree.
Tested by running `E2E_REPORT_DIR=${PWD}/.. hack/e2e-test.sh` and
confirmed ../junit.xml was generated and looked good.
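
For illustration, the suite entry point might wire E2E_REPORT_DIR into Ginkgo's JUnit reporter roughly like this (assumes Ginkgo v1's reporters package; the suite name and fallback directory are guesses, not quoted from the repo):

```go
package e2e

import (
	"os"
	"path/filepath"
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/ginkgo/reporters"
	"github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	// Write the JUnit XML report into E2E_REPORT_DIR when it is set,
	// falling back to the working directory otherwise.
	reportDir := os.Getenv("E2E_REPORT_DIR")
	if reportDir == "" {
		reportDir = "."
	}
	junit := reporters.NewJUnitReporter(filepath.Join(reportDir, "junit.xml"))
	ginkgo.RunSpecsWithDefaultAndCustomReporters(t, "Kubernetes e2e suite",
		[]ginkgo.Reporter{junit})
}
```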
Removed auth for Grafana to facilitate usage via the service proxy on the apiserver.
Added a Grafana service.
Removed the Elasticsearch dependency for monitoring, for faster startup times.
This was staring at me yesterday, and I even commented "huh, there's
got to be something wrong with the firewall rules", but then
job/kubernetes-e2e-gce/1002/tapResults/ made it obvious: if you have
two e2e jobs running at the same time in the same project (hint:
Jenkins does), they'll race with each other, since resource names are
project-scoped.
What I really want is
https://github.com/GoogleCloudPlatform/kubernetes/issues/2953, but I
haven't had a chance to code that yet. Maybe it's time. (Then I'd
remove the provider-specific test and just ask "is it > 0.7.2, or does
it claim to be capable of something from the future?" The latter
covers the HEAD server case... though just bumping the server version
immediately after release might accomplish that, too.)
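
A sketch of the kind of version gate described above (hand-rolled comparison with hypothetical helper names; it deliberately ignores pre-release tags and the "capable of something from the future" half):

```go
package e2e

import (
	"fmt"
	"strconv"
	"strings"
)

// newerThan reports whether server ("major.minor.patch") is strictly
// newer than min, i.e. the "is it > 0.7.2" check.
func newerThan(server, min string) (bool, error) {
	s, err := parseVersion(server)
	if err != nil {
		return false, err
	}
	m, err := parseVersion(min)
	if err != nil {
		return false, err
	}
	for i := 0; i < 3; i++ {
		if s[i] != m[i] {
			return s[i] > m[i], nil
		}
	}
	return false, nil
}

func parseVersion(v string) ([3]int, error) {
	var out [3]int
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) != 3 {
		return out, fmt.Errorf("unexpected version %q", v)
	}
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return out, err
		}
		out[i] = n
	}
	return out, nil
}
```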
After this, DNS is resolvable from the host if the DNS server is targeted
explicitly. This does NOT add the cluster DNS to the host's resolv.conf; that
is a larger problem, with distro-specific tie-ins and circular deps.
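
For example, a client can target the cluster DNS server explicitly instead of relying on resolv.conf; a minimal Go sketch (the server address and lookup name below are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	const clusterDNS = "10.0.0.10:53" // placeholder cluster DNS address
	r := &net.Resolver{
		PreferGo: true,
		// Route every DNS query to the cluster DNS server directly.
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, clusterDNS)
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs)
}
```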
Add test artifacts to the build. This lets you do:
tar -xzf kubernetes.tar.gz
tar -xzf kubernetes-test.tar.gz
cd kubernetes
go run ./hack/e2e.go -up -test -down
without having a git checkout.
This change refactors the way the Kubelet's DockerPuller handles docker config credentials to use a new credentialprovider library.
The credentialprovider library is based on several of the files from the Kubelet's dockertools directory, but supports a new pluggable model for retrieving a .dockercfg-compatible JSON blob with credentials.
With this change, the Kubelet will lazily ask for the docker config from a set of DockerConfigProvider extensions each time it needs a credential.
This change provides common implementations of DockerConfigProvider for:
- "Default": load .dockercfg from disk
- "Caching": wraps another provider in a cache that expires after a pre-specified lifetime.
GCP-only:
- "google-dockercfg": reads a .dockercfg from a GCE instance's metadata
- "google-dockercfg-url": reads a .dockercfg from a URL specified in a GCE instance's metadata.
- "google-container-registry": reads an access token from GCE metadata into a password field.
Also fix up cert generation. It was failing during the first salt highstate when trying to chown the certs, because the apiserver user didn't exist yet. Fix this by creating a 'kube-cert' group and chgrp'ing the files to it, then making the apiserver user a member of that group.
Fixes #2365. Fixes #2368.
apiserver -> kube-apiserver
controller-manager -> kube-controller-manager
scheduler -> kube-scheduler
proxy -> kube-proxy
The only thing I promise is that, right now, hack/build-go.sh and
build/release.sh exit with 0. That's it. Who knows if any of this
actually works...
* Rewrite a bunch of the hack/ directory with modular reusable bash libraries.
* Have 'build/*' build on 'hack/*'. The stuff in build now just runs hack/* in a docker container.
* Use a docker data container to enable faster incremental builds.
* Standardize output to _output/{local,dockerized}/bin/OS/ARCH/*. This regularized placement makes cross-compilation work (see the sketch at the end of this section).
* Move Travis-specific scripts under hack/travis.
With the new dockerized incremental builds, I can do a no-op `make quick-release` in ~30s. This is a significant improvement.
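
As a rough illustration of the standardized output layout (the helper is hypothetical):

```go
package main

import (
	"fmt"
	"path/filepath"
	"runtime"
)

// outputBinDir returns the per-platform binary directory, e.g.
// _output/local/bin/linux/amd64. mode is "local" or "dockerized".
func outputBinDir(mode, goos, goarch string) string {
	return filepath.Join("_output", mode, "bin", goos, goarch)
}

func main() {
	fmt.Println(outputBinDir("local", runtime.GOOS, runtime.GOARCH))
}
```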