Automatic merge from submit-queue
Add eviction-pressure-transition-period flag to kubelet
This PR does the following:
* add the new flag to control how quickly a node may transition out of memory pressure or disk pressure conditions; see: https://github.com/kubernetes/kubernetes/pull/25282
* pass an `eviction.Config` into the `kubelet` so we can group eviction-related config (a rough sketch follows below)
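For reference, a minimal sketch (not the actual kubelet code; field names are assumptions) of the kind of grouped eviction configuration this introduces:

```go
// Package eviction: a hedged sketch of a grouped eviction config; the real
// struct in pkg/kubelet/eviction may differ in layout and naming.
package eviction

import "time"

// Config captures the eviction-related settings the kubelet is started with.
type Config struct {
	// PressureTransitionPeriod is the minimum time the node must wait before
	// transitioning out of a memory-pressure or disk-pressure condition,
	// as set by --eviction-pressure-transition-period.
	PressureTransitionPeriod time.Duration

	// Thresholds are the eviction thresholds parsed from the eviction flags
	// (see the later kubelet eviction flags PR).
	Thresholds []Threshold
}

// Threshold is a single signal/quantity pair, e.g. "memory.available<100Mi".
type Threshold struct {
	Signal   string
	Operator string
	Value    string
}
```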
/cc @vishh
Automatic merge from submit-queue
[e2e] kubectl stdin
Problem: Currently kubectl e2e testing relies heavily on files which have to be (for lack of a better word :)) "written" to the file system. This hinders adoption of something like go-bindata by forcing an intermediary generated-assets directory.
Solution: Let's migrate `kubectl.go` testing over to using standard input streams.
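Outside the e2e framework, the idea looks roughly like this (a hedged sketch; the real helpers in `kubectl.go` differ, but the manifest comes from an in-memory stream rather than a file on disk):

```go
// Drive kubectl from an in-memory manifest via stdin instead of a file.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const podManifest = `
apiVersion: v1
kind: Pod
metadata:
  name: stdin-example
spec:
  containers:
  - name: pause
    image: gcr.io/google_containers/pause:2.0
`

func main() {
	// Equivalent to: cat manifest.yaml | kubectl create -f -
	cmd := exec.Command("kubectl", "create", "-f", "-")
	cmd.Stdin = strings.NewReader(podManifest)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}
```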
cc @kubernetes/sig-testing @timothysc
It's hard for an individual to remember that we need codecgen and
ginkgo in Godeps but don't actually have those dependencies listed in a way
that godep can automatically find.
Automatic merge from submit-queue
WIP v0 NVIDIA GPU support
```release-note
* Alpha support for scheduling pods on machines with NVIDIA GPUs whose kubelets use the `--experimental-nvidia-gpus` flag, using the alpha.kubernetes.io/nvidia-gpu resource
```
Implements part of #24071 for #23587
I am not familiar enough with the scheduler to know what to do with the scores. Mostly punting for now.
Missing items from the implementation plan: limitranger, rkt support, kubectl
support, and docs.
cc @erictune @davidopp @dchen1107 @vishh @Hui-Zhi @gopinatht
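For illustration, a hedged sketch of how a pod's container might request the new alpha GPU resource. It is written against current client-go import paths, which postdate this PR; only the resource name comes from the release note above.

```go
// Construct a container spec requesting one NVIDIA GPU via the alpha resource.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := corev1.Container{
		Name:  "cuda-job",
		Image: "nvidia/cuda",
		Resources: corev1.ResourceRequirements{
			Limits: corev1.ResourceList{
				// One GPU, counted by a kubelet started with
				// --experimental-nvidia-gpus.
				corev1.ResourceName("alpha.kubernetes.io/nvidia-gpu"): resource.MustParse("1"),
			},
		},
	}
	fmt.Printf("%+v\n", container.Resources.Limits)
}
```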
Automatic merge from submit-queue
pkg/apis/rbac: Add Openshift authorization API types
This PR updates #23396 by adding the OpenShift RBAC types to a new API group.
Changes from OpenShift:
* Omission of [ResourceGroups](4589987883/pkg/authorization/api/types.go (L32-L104)) as most of these were OpenShift-specific. We would like to add the concept back in for a later release of the API.
* Omission of IsPersonalSubjectAccessReview, as its implementation relied on an OpenShift-specific capability.
* Omission of SubjectAccessReview and ResourceAccessReview types. These are defined in `authorization.k8s.io`.
~~The API group is named `rbac.authorization.openshift.com` as we omitted the AccessReview stuff and that seemed to be the least controversial choice based on conversations in #23396. Would be happy to change it if there's a dislike for the name.~~ Edit: the API group is named `rbac`; sorry, I misread the original thread.
As discussed in #18762, creating a new API group is kind of difficult right now and the documentation is very out of date. Got a little help from @soltysh, but I'm sure I'm missing some things. Also still need to add validation and a RESTStorage registry interface. Hence "WIP".
Any initial comments welcome.
cc @erictune @deads2k @sym3tri @philips
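For orientation, a rough, self-contained sketch of the kinds of types a `rbac` group introduces; the field names here are illustrative rather than the final API:

```go
// Package rbac: illustrative type shapes for roles and role bindings.
package rbac

// PolicyRule describes a single rule: which verbs apply to which resources.
type PolicyRule struct {
	Verbs         []string // e.g. "get", "list", "create"
	APIGroups     []string // API groups the rule applies to
	Resources     []string // e.g. "pods", "deployments"
	ResourceNames []string // optional: restrict to specific named objects
}

// Role is a namespaced collection of policy rules.
type Role struct {
	Name      string
	Namespace string
	Rules     []PolicyRule
}

// Subject identifies a user, group, or service account.
type Subject struct {
	Kind string // "User", "Group", or "ServiceAccount"
	Name string
}

// RoleBinding grants the permissions in a Role to a set of subjects.
type RoleBinding struct {
	Name      string
	Namespace string
	Subjects  []Subject
	RoleRef   string // name of the Role (or cluster-scoped role) being granted
}
```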
Automatic merge from submit-queue
Updating hack/update-codegen to keep federation_clientset updated
Right now, there is no check for this and hence federation_clientset becomes stale over time.
Updating hack/update-codegen to keep federation_clientset updated.
hack/verify-codegen.sh ensures that it is updated.
cc @caesarxuchao @lavalamp @jianhuiz @kubernetes/sig-cluster-federation
Automatic merge from submit-queue
Webhook Token Authenticator
Add a webhook token authenticator plugin to allow a remote service to make authentication decisions.
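As a rough illustration of the contract (not the plugin code itself): the API server POSTs a TokenReview-style object containing the bearer token to the remote service, which returns the same object with the status filled in. The handler below is a hedged sketch; the hard-coded token map and port are purely for demonstration.

```go
// A toy remote authentication service for the webhook token authenticator.
package main

import (
	"encoding/json"
	"net/http"
)

type tokenReview struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Spec       struct {
		Token string `json:"token"`
	} `json:"spec"`
	Status struct {
		Authenticated bool `json:"authenticated"`
		User          struct {
			Username string   `json:"username,omitempty"`
			Groups   []string `json:"groups,omitempty"`
		} `json:"user"`
	} `json:"status"`
}

// Illustrative only: map of valid tokens to usernames.
var knownTokens = map[string]string{"s3cr3t": "jane"}

func authenticate(w http.ResponseWriter, r *http.Request) {
	var review tokenReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if user, ok := knownTokens[review.Spec.Token]; ok {
		review.Status.Authenticated = true
		review.Status.User.Username = user
	}
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/authenticate", authenticate)
	http.ListenAndServe(":8443", nil) // a real deployment should serve TLS
}
```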
Automatic merge from submit-queue
Hack update all remove dollar symbol
When not running ./hack/update in silent mode, the script fails due to an undefined ``$Updating`` variable.
Automatic merge from submit-queue
Move godeps to vendor/
This is a first step towards glide support. Maybe we don't want or need to take this, but it was easy to try.
This fails to compile, not sure why:
```
# k8s.io/kubernetes/pkg/apis/extensions/v1beta1
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2703: undefined: extensions.ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2703: undefined: ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2719: undefined: extensions.ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2719: undefined: ClusterAutoscaler
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2723: undefined: extensions.ClusterAutoscalerList
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2723: undefined: ClusterAutoscalerList
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:3468: Convert_extensions_JobSpec_To_v1beta1_JobSpec redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:328
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:3845: Convert_extensions_ScaleStatus_To_v1beta1_ScaleStatus redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:98
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:4737: Convert_v1beta1_JobSpec_To_extensions_JobSpec redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:380
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:5186: Convert_v1beta1_ScaleStatus_To_extensions_ScaleStatus redeclared in this block
previous declaration at _output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion.go:120
_output/local/go/src/k8s.io/kubernetes/pkg/apis/extensions/v1beta1/conversion_generated.go:2723: too many errors
!!! Error in /home/thockin/tmp/godep-vendor/src/k8s.io/kubernetes/hack/lib/golang.sh:417
```
Automatic merge from submit-queue
cluster/images/hyperkube: create symlink for each server
Add a kubelet symlink so that the hyperkube image can appear as a kubelet image. https://github.com/kubernetes/kubernetes/issues/24510
a) it doesn't need it
b) changing CWD to a path with symlinks breaks deep within ginkgo, where it
crafts a relative path to ../../../../../../platforms/amd64/whatever, which then
traverses the physical path rather than the symlinked one, and breaks.
Our `realpath` and `readlink -f` functions (required only because of MacOS,
thanks Steve) were poor substitutes at best. Mostly they were downright
broken. This thoroughly overhauls them and adds a test (in comments, since we
don't seem to have shell tests). For all the interesting cases I could think
of, the fakes act just like the real thing.
Then use those and canonicalize KUBE_ROOT. To keep recursive calls of
our shell tools from additively growing `pwd`, we essentially have to make the
sourcing of init.sh idempotent.
Automatic merge from submit-queue
Introduce events flag for describers
Printing events for a given object is not always needed, so this introduces ``--show-events=false`` for ``kubectl describe`` to skip printing events.
Fixes: #24239
Automatic merge from submit-queue
Reimplement 'pause' in C - smaller footprint all around
It statically links against musl; the size of the amd64 binary is 3560 bytes.
I couldn't test the arm binary since I have no hardware to test it on, though I assume we want it to work on a Raspberry Pi.
This PR also adds the gcc5/musl cross compiling image used to build the binaries.
@thockin
Automatic merge from submit-queue
Add subPath to mount a child dir or file of a volumeMount
Allow users to specify a subPath in Container.volumeMounts so they can use a single volume for many mounts instead of creating many volumes. For instance, a user can now use a single PersistentVolume to store both the MySQL database and the document root of an Apache server in a LAMP-stack pod by mapping them to different subPaths of this single volume.
Also solves https://github.com/kubernetes/kubernetes/issues/20466.
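A hedged sketch of the LAMP example from the description, using current client-go import paths (which postdate this PR) purely for illustration: one volume, two containers, each mounting a different subPath of it.

```go
// Build a pod spec where two containers share one PVC via different subPaths.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func lampPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Volumes: []corev1.Volume{{
			Name: "site-data",
			VolumeSource: corev1.VolumeSource{
				PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
					ClaimName: "my-lamp-site-data", // hypothetical claim name
				},
			},
		}},
		Containers: []corev1.Container{
			{
				Name:  "mysql",
				Image: "mysql",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "site-data",
					MountPath: "/var/lib/mysql",
					SubPath:   "mysql", // stored under <volume>/mysql
				}},
			},
			{
				Name:  "php",
				Image: "php:apache",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "site-data",
					MountPath: "/var/www/html",
					SubPath:   "html", // stored under <volume>/html
				}},
			},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", lampPodSpec())
}
```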
Automatic merge from submit-queue
Update e2e-runner.sh so it can fetch multiple types of Trusty images
Trusty beta images now work with k8s 1.2.
@spxtr, @andyzheng0831 Can you review this?
Automatic merge from submit-queue
Kubelet: Cleanup with new engine api
Finish step 2 of #23563
This PR:
1) Clean up go-dockerclient references in the code.
2) Bump up the engine-api version.
3) Clean up the code with the new engine-api.
Fixes #24076.
Fixes #23809.
/cc @yujuhong
Introduce DescriberSettings for Describer display options
Introduce --show-events flag and DescriberSettings in Describer methods
Introduce unit-tests
Regenerated kubectl describe docs
Add events flag tests to test-cmd.sh
Signed-off-by: dhodovsk@redhat.com
Signed-off-by: jchaloup@redhat.com
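A rough sketch of the shape described above; the names follow the commit message, and the real interface in pkg/kubectl may differ in detail:

```go
// Package kubectl: illustrative shape of the describer display options.
package kubectl

// DescriberSettings holds display options for each object describer,
// currently just whether to print events (--show-events).
type DescriberSettings struct {
	ShowEvents bool
}

// Describer produces the human-readable output for `kubectl describe`.
type Describer interface {
	Describe(namespace, name string, settings DescriberSettings) (string, error)
}
```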
Automatic merge from submit-queue
Jenkins: Clean up even if we can't bring cluster up
We're gathering all the cluster logs anyway, so go ahead and clean up the
resources.
Automatic merge from submit-queue
Allow etcd to store protobuf objects
Split storage serialization from client negotiation, and allow API server to take flag controlling serialization.
TODO:
* [x] API server still doesn't start - range allocation object doesn't seem to round trip correctly to etcd
* [ ] Verify that third party resources are ignoring protobuf (add a test)
* [ ] Add integration tests that verify storage is correctly protobuf
* [ ] Add a global default for which storage format to prefer?
Automatic merge from submit-queue
Map secret files into dockerized e2e
Rather than copying the GCE and AWS private keys/credentials to each Jenkins VM, we can put them in credentials and map them through.
This is one half of the change; if the relevant environment variables are set, we'll mount the files in.
cc @fejta @rmmh @apelisse
The codec factory should support two distinct interfaces: negotiating
for a serializer with a client, vs. reading or writing data to a storage
form (etcd, disk, etc.). Make the EncodeForVersion and DecodeToVersion
methods take only an Encoder and a Decoder, with slight refactoring elsewhere.
In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
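A self-contained sketch of that split, with illustrative names rather than the actual runtime package API: client-facing negotiation picks a codec by media type, while the storage codec always writes the preferred format but reads via a universal decoder, so older JSON objects in etcd stay readable even when new writes are protobuf.

```go
// Package codec: illustrative split of client negotiation vs. storage codecs.
package codec

type Encoder interface {
	Encode(obj interface{}) ([]byte, error)
}

type Decoder interface {
	Decode(data []byte) (interface{}, error)
}

// Codec pairs an encoder (write path) with a decoder (read path).
type Codec struct {
	Encoder
	Decoder
}

// NegotiateClientCodec picks a serializer based on the client's media type,
// e.g. "application/json" or "application/vnd.kubernetes.protobuf".
func NegotiateClientCodec(mediaType string, byType map[string]Codec) (Codec, bool) {
	c, ok := byType[mediaType]
	return c, ok
}

// NewStorageCodec writes only the preferred storage format but reads anything
// the universal decoder understands (JSON written by older servers, protobuf
// written by newer ones).
func NewStorageCodec(write Encoder, universal Decoder) Codec {
	return Codec{Encoder: write, Decoder: universal}
}
```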
Automatic merge from submit-queue
kubectl rolling-update support for same image
Fixes #23497.
Enables `kubectl rolling-update --image` to update to the same image, adding a `--image-pull-policy` flag to remove ambiguity. This allows rolling-update to behave as an "update and/or restart" (https://github.com/kubernetes/kubernetes/issues/23497#issuecomment-212349730), or as a forced update when the same tag can mean multiple versions (e.g. `:latest`). cc @janetkuo @nikhiljindal
Automatic merge from submit-queue
Add several arguments to boilerplate.py
This commit makes the root directory and boilerplate content directory configurable.
The defaults have remained the same, so no behavior changes should be expected.
cc @eparis
ref https://github.com/kubernetes/minikube/pull/37
Automatic merge from submit-queue
Use HOSTNAME in Docker build image tag hash
Fixes #24661 by including `$HOSTNAME` when generating the build image tag hash.
When running the verification checks under Docker, the `$KUBE_ROOT` will be identical across builds, so tags will collide unless we add additional uniqueness. By default, the hostname inside a Docker container is its ID, which should be unique enough for us.
I also deleted a misleading error message from the same check.
@kubernetes/sig-testing
This commit makes the root directory, boilerplate content directory and
the directories to skip configurable.
The defaults have remained the same, so no behavior changes should be expected.
Automatic merge from submit-queue
Run Kubemark builds inside Docker
Since Docker-in-Docker is tricky to get right (esp. wrt volume mounts), I'm only enabling it when necessary for a build (e.g. for kubemark).
cc @spxtr @fejta @wojtek-t
Automatic merge from submit-queue
allow kubectl subcmds to process multiple resources
~~autoscale, expose & patch~~ Many kubectl subcommands were limited to processing one resource at a time.
This PR allows those subcommands to process multiple resources.
This PR is in reference to https://github.com/kubernetes/kubernetes/pull/23116#issuecomment-202360784 by @deads2k
Automatic merge from submit-queue
Up to go 1.6.2 for build and test.
~~1.6.1 contains some security fixes. 1.6.2 should be out soon.~~ 1.6.2 is out :D
Images aren't pushed yet.
Automatic merge from submit-queue
Framework support for node e2e.
This should let us port existing e2e tests to the node e2e suite, if the tests are node specific.
Automatic merge from submit-queue
Provide flags to use etcd3 backed storage
ref: #24405
What's in this PR?
- Add a new flag "storage-backend" to choose "etcd2" or "etcd3". By default (i.e. empty), it's "etcd2".
- Move the etcd config code into a standalone package and let it create an etcd2 or etcd3 storage backend based on user input.
Automatic merge from submit-queue
Add kubelet flags for eviction threshold configuration
This PR just adds the flags for kubelet eviction and the associated generated code.
I am happy to tweak the text, but at this point in the release we can also do that later.
Since this change triggers code generation, I wanted to stage it first.
/cc @vishh @kubernetes/sig-node
Automatic merge from submit-queue
Initial kube-up support for VMware's Photon Controller
This is for: https://github.com/kubernetes/kubernetes/issues/24121
Photon Controller is an open-source cloud management platform. More
information is available at:
http://vmware.github.io/photon-controller/
This commit provides initial support for Photon Controller. The
following features are tested and working:
- kube-up and kube-down
- Basic pod and service management
- Networking within the Kubernetes cluster
- UI and DNS addons
It has been tested with a Kubernetes cluster of up to 10
nodes. Further work on scaling is planned for the near future.
Internally we have implemented continuous integration testing and will
run it multiple times per day against the Kubernetes master branch
once this is integrated, so we can quickly react to problems.
A few things have not yet been implemented, but are planned:
- Support for kube-push
- Support for test-build-release, test-setup, test-teardown
Assuming this is accepted for inclusion, we will write documentation
for the kubernetes.io site.
We have included a script to help users configure Photon Controller
for use with Kubernetes. While not required, it will help some
users get started more quickly. It will be documented.
We are aware of the kube-deploy efforts and will track them and
support them as appropriate.
Automatic merge from submit-queue
Federation apiobject cluster
add federation api group
add cluster api object and registry
~~generate cluster client~~ moved to #24117
update scripts to generate files for /federation
#19313 #23653 #23554
@nikhiljindal @quinton-hoole, @deepak-vij, @XiaoningDing, @alfred-huangjian @mfanjie @huangyuqi @colhom
Automatic merge from submit-queue
update codegen before update codecgen
Currently, if I remove an API field, update-codecgen will complain about generated deepcopy functions referring to invalid fields. Running update-codegen before update-codecgen solves the problem.
Automatic merge from submit-queue
kubectl: Allow []byte config fields to be set by the cli
Allows []byte config fields such as 'certificate-authority-data' to be set using `kubectl config set` commands.
Automatic merge from submit-queue
Script to cache metadata requests on the jenkins master
Fixes https://github.com/kubernetes/kubernetes/issues/23545
Create an HTTP server which caches most requests to the metadata server. Use special logic to cache access tokens so that the expires_on JSON field stays correct. Add a script to simplify enabling/disabling the cache by editing `/etc/hosts`.
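A minimal, self-contained sketch of the caching idea (paths and TTLs are assumptions, not the actual script's behavior): a local HTTP server proxies the metadata server, caching most responses for a long time and access tokens only briefly, so the reported expiry stays roughly correct.

```go
// A toy caching proxy for the GCE metadata server.
package main

import (
	"io"
	"net/http"
	"sync"
	"time"
)

const metadataUpstream = "http://169.254.169.254" // the real metadata server

type entry struct {
	body    []byte
	expires time.Time
}

var (
	mu    sync.Mutex
	cache = map[string]entry{}
)

func handler(w http.ResponseWriter, r *http.Request) {
	key := r.URL.String()

	mu.Lock()
	e, ok := cache[key]
	mu.Unlock()
	if ok && time.Now().Before(e.expires) {
		w.Write(e.body)
		return
	}

	// Cache miss or expired entry: forward to the real metadata server.
	req, _ := http.NewRequest(r.Method, metadataUpstream+key, nil)
	req.Header.Set("Metadata-Flavor", "Google")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)

	// Most metadata is effectively static; tokens are cached only briefly so
	// their reported lifetime stays roughly accurate. Path is illustrative.
	ttl := time.Hour
	if r.URL.Path == "/computeMetadata/v1/instance/service-accounts/default/token" {
		ttl = 5 * time.Minute
	}
	mu.Lock()
	cache[key] = entry{body: body, expires: time.Now().Add(ttl)}
	mu.Unlock()

	w.Write(body)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe("127.0.0.1:80", nil) // binding :80 requires privileges
}
```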