Commit Graph

49480 Commits (d152e20f41396d73f225f0bc96ecdcb8d34c2be3)

Author SHA1 Message Date
Yu-Ju Hong d152e20f41 Address the comments 2017-06-05 19:51:55 -07:00
Yu-Ju Hong 07a67c252c kuberuntime: check the value of RunAsNonRoot when verifying
The verification function is fixed to check the value of RunAsNonRoot,
not just whether it is set. Also adds unit tests to verify the correct
behavior.
2017-06-05 18:03:32 -07:00
Kubernetes Submit Queue f893cddfba Merge pull request #46460 from sakshamsharma/location_transformer
Automatic merge from submit-queue (batch tested with PRs 46550, 46663, 46816, 46820, 46460)

Add configuration for encryption providers

## Additions

Allows providing a configuration file (using the flag `--experimental-encryption-provider-config`) to use the existing AEAD transformer (with multiple keys) by composing a mutable transformer, a prefix transformer (for parsing the provider ID), another prefix transformer (for parsing the key ID), and AES-GCM transformers (one for each key). Multiple providers can be configured using the configuration file.

Example configuration:
```
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
    - namespaces
    providers:
    - aes:
        keys:
        - name: key1
          secret: c2VjcmV0IGlzIHNlY3VyZQ==
        - name: key2
          secret: dGhpcyBpcyBwYXNzd29yZA==
    - identity: {}
```

Need for configuration discussed in:
#41939
[Encryption](3418b4e4c6/contributors/design-proposals/encryption.md)

**Pathway of a read/write request** (see the layout sketch after this list):
1. MutableTransformer delegates to the currently configured transformer chain.
2. PrefixTransformer reads the provider ID and passes the request on if it matches.
3. PrefixTransformer reads the key ID and passes the request on if it matches.
4. GCMTransformer decrypts and authenticates the ciphertext on reads, and encrypts on writes.
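
With nested prefix transformers, the stored value carries the provider and key identifiers in front of the ciphertext. A hypothetical layout, using names from the example config above (the exact `k8s:enc` prefix is not yet formally reserved; see the TODO below):

```
# hypothetical stored value; names taken from the example config
# provider prefix -> k8s:enc:aes:v1:    (stripped by the first PrefixTransformer)
# key prefix      -> key1:              (stripped by the second PrefixTransformer)
# payload         -> AES-GCM ciphertext (handled by GCMTransformer)
k8s:enc:aes:v1:key1:<ciphertext>
```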

## Caveats
1. To keep the command line parameter parsing independent of the individual transformer's configuration, we need to convert the configuration to an `interface{}` and manually parse it in the transformer. Suggestions on better ways to do this are welcome.

2. Flags `--encryption-provider` and `--encrypt-resource` (both mentioned in [this document](3418b4e4c6/contributors/design-proposals/encryption.md)) are not supported in this PR because they do not allow more than one provider, and the current format for the configuration file possibly supersedes their functionality.

3. Currently, it can be tested by adding `--experimental-encryption-provider-config=config.yml` to `hack/local-up-cluster.sh` on line 511, and placing the above configuration in `config.yml` in the root project directory.

Previous discussion on these changes:
https://github.com/sakshamsharma/kubernetes/pull/1

@jcbsmpsn @destijl @smarterclayton

## TODO
1. Investigate if we need to store keys on disk (per [encryption.md](3418b4e4c6/contributors/design-proposals/encryption.md (option-1-simple-list-of-keys-on-disk)))
2. Look at [alpha flag conventions](https://github.com/kubernetes/kubernetes/blob/master/pkg/features/kube_features.go)
3. Need to reserve `k8s:enc` prefix formally for encrypted data. Else find a better way to detect transformed data.
2017-06-05 16:43:48 -07:00
Kubernetes Submit Queue 7fb75873ea Merge pull request #46820 from ixdy/bazel-kubeproxy-debian-iptables
Automatic merge from submit-queue (batch tested with PRs 46550, 46663, 46816, 46820, 46460)

bazel: base kube-proxy image on debian-iptables instead of busybox + iptables

**What this PR does / why we need it**: the bazel-built kube-proxy image currently uses a custom base image made up of scratch + busybox + iptables + a few dependencies, while the official kube-proxy image is based on the debian-iptables image.

This difference seems to cause some weird issues such as #46103, since the container layout doesn't look the same.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46103, probably?

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

/assign @mikedanese @spxtr @pipejakob 
/cc @Q-Lee @thockin @cblecker
2017-06-05 16:43:46 -07:00
Kubernetes Submit Queue 39d548f40c Merge pull request #46816 from dashpole/update_godep
Automatic merge from submit-queue (batch tested with PRs 46550, 46663, 46816, 46820, 46460)

Update cAdvisor version to v0.26.0

issue: #46658

I have requested a 1 day exception for code freeze.

/assign @dchen1107 

```release-note
Fix disk partition discovery for btrfs
Add ZFS support
Add overlay2 storage driver support
```
2017-06-05 16:43:43 -07:00
Kubernetes Submit Queue 4faf7f1f4c Merge pull request #46663 from nicksardo/gce-internallb
Automatic merge from submit-queue (batch tested with PRs 46550, 46663, 46816, 46820, 46460)

[GCE] Support internal load balancers

**What this PR does / why we need it**:
Allows users to expose Kubernetes services outside the cluster but within their GCP network (a sketch of such a Service manifest follows the notes below).

Fixes #33483

**Important User Notes:**
- This is a beta feature. ILB could be enabled differently in the future. 
- Requires nodes having version 1.7.0+ (ILB requires health checking, and a health-check endpoint on kube-proxy has only just been exposed)
- This cannot be used for intra-cluster communication. Do not call the load balancer IP from a K8s node/pod.  
- There is no reservation system for private IPs. You can specify a RFC 1918 address in `loadBalancerIP` field, but it could be lost to another VM or LB if service settings are modified.
- If you're running an ingress, your existing loadbalancer backend service must be using BalancingMode type `RATE` - not `UTILIZATION`. 
  - Option 1: With a 1.5.8+ or 1.6.4+ version master, delete all your ingresses, and re-create them.
  - Option 2: Migrate to a new cluster running 1.7.0. Considering ILB requires nodes with 1.7.0, this isn't a bad idea.
  - Option 3: Possible migration opportunity, but use at your own risk. More to come later.
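
A minimal sketch of a Service requesting an internal load balancer. The annotation key and value are an assumption based on this feature's beta status (verify against the docs for your release); the name, selector, and IP are hypothetical:

```
apiVersion: v1
kind: Service
metadata:
  name: my-ilb-service              # hypothetical name
  annotations:
    # assumed beta annotation key/value; check the GCE docs for your version
    cloud.google.com/load-balancer-type: "internal"
spec:
  type: LoadBalancer
  # optionally request a specific RFC 1918 address (no reservation system; see above)
  # loadBalancerIP: 10.128.0.50
  selector:
    app: example
  ports:
  - port: 80
```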


**Reviewer Notes**:
Several files were renamed, so github thinks ~2k lines have changed. Review commits one-by-one to see the actual changes.

**Release note**:
```release-note
Support creation of GCP Internal Load Balancers from Service objects
```
2017-06-05 16:43:41 -07:00
Kubernetes Submit Queue 7bbc615b97 Merge pull request #46550 from DirectXMan12/feature/hpa-status-conditions
Automatic merge from submit-queue

HPA Status Conditions

This PR introduces conditions to the status of the HorizontalPodAutoscaler (in autoscaling/v2alpha1).
The conditions indicate whether or not the autoscaler is actively scaling, and why. This gives greater visibility
into the *current* status of the autoscaler, similarly to how conditions work for pods, nodes, etc.

`kubectl describe` has been updated to display the conditions affecting a given HPA (an illustrative status fragment appears below).

Implements kubernetes/features#264 (alpha in 1.7)
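
A hedged sketch of what the new field might look like on an HPA object; the condition types and reasons shown here are assumptions based on the description above, not quoted from the PR:

```
# illustrative status fragment on an autoscaling/v2alpha1 HPA
status:
  conditions:
  - type: AbleToScale          # can the autoscaler fetch and update the scale?
    status: "True"
    reason: ReadyForNewScale
  - type: ScalingActive        # is a replica count actively being computed?
    status: "True"
    reason: ValidMetricFound
  - type: ScalingLimited       # is scaling capped by min/max replicas?
    status: "False"
    reason: DesiredWithinRange
```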

**Release note**:
```release-note
Introduces status conditions to the HorizontalPodAutoscaler in autoscaling/v2alpha1, indicating the current status of a given HorizontalPodAutoscaler, and why it is or is not scaling.
```
2017-06-05 15:42:58 -07:00
Anirudh Ramanathan cc294cfb7e Merge pull request #46985 from deads2k/controller-09-agg-health
make the health check wait for ready apiservices
2017-06-05 14:33:23 -07:00
deads2k 0ad98c29f0 make the health check wait for ready apiservices 2017-06-05 15:05:33 -04:00
Solly Ross c8fdeb022f Update generated autoscaling files
This commit updates the generated autoscaling files to be up-to-date
with the HPA status condition changes.
2017-06-05 11:21:31 -04:00
Solly Ross 53dccdbb43 Update kubectl to display HPA status conditions
This commit updates `kubectl describe` to display the new HPA
status conditions.  This should make it easier for users to discern
the current state of the HPA.
2017-06-05 11:21:31 -04:00
Solly Ross 1334b81d20 Make HPA controller set HPA status conditions
This commit causes the HPA controller to set a variety of status
conditions using the new `Status.Conditions` field of
autoscaling/v2alpha1.  These provide insight into the current state
of the HPA, and generally correspond to similar events being emitted.
2017-06-05 11:21:30 -04:00
Solly Ross 26ef38fe89 Add HPA status conditions to API types
This commit adds the new API status conditions to the API types.
The field exists as a field in autoscaling/v2alpha1, and is
round-tripped through an annotation in autoscaling/v1.
2017-06-05 10:50:34 -04:00
Kubernetes Submit Queue 0cff839317 Merge pull request #46771 from n-marton/46770-permission-for-volume-binder
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)

Added node to persistent-volume-binder clusterrole

**What this PR does / why we need it**: Added missing permission to volume-binder clusterrole

**Which issue this PR fixes**: fixes #46770

**Special notes for your reviewer**: None

**Release note**: None
2017-06-05 06:51:32 -07:00
Kubernetes Submit Queue 0cfef01a44 Merge pull request #46259 from Q-Lee/kube-proxy
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)

Add iptables lock-file mount to kube-proxy manifest

**What this PR does / why we need it**: kube-proxy is broken in make bazel-release. The new iptables binary uses a lockfile in "/run", but the directory doesn't exist. This causes iptables-restore to fail. We need to share the same lock-file amongst all containers, so mount the host /run dir.

This is similar to #46132 but expediency matters, since builds are broken.
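
A minimal sketch of the pod-spec change this implies: mount the host's /run into the container so every container contends on the same iptables lock file. The fragment below uses standard pod-spec fields, but the exact manifest layout is illustrative:

```
# illustrative fragment of the kube-proxy manifest
containers:
- name: kube-proxy
  volumeMounts:
  - name: run
    mountPath: /run            # share the host's iptables lock file
volumes:
- name: run
  hostPath:
    path: /run
```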

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #46103

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-05 06:51:29 -07:00
Kubernetes Submit Queue af64e0b8c9 Merge pull request #46759 from zjj2wry/kubelet
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)

Improve code coverage for pkg/kubelet/images/image_gc_manager

**What this PR does / why we need it**:
#39559 #40780

Code coverage improved from 74.5% to 77.4%.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-06-05 06:51:25 -07:00
Kubernetes Submit Queue 6fef1a1deb Merge pull request #46810 from vishh/gpu-cos-image-validation
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)

Update the COS kernel sha for node e2e gpu installer

cc @mindprince

Relevant COS image - https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/jenkins/image-config-serial.yaml#L19
2017-06-05 06:51:23 -07:00
Kubernetes Submit Queue a72967454d Merge pull request #46734 from mbohlool/aggr
Automatic merge from submit-queue (batch tested with PRs 46734, 46810, 46759, 46259, 46771)

OpenAPI aggregation for kube-aggregator

This PR implements the OpenAPI aggregation layer for kube-aggregator. On each API registration, it tries to download the swagger spec of the user API server. On failure it will try again next time (either on another add, or on a get of /swagger.* on the aggregator server), up to five times. To merge specs, it first removes all unrelated paths from the downloaded spec (anything other than the group/version of the API service) and then removes all unused definitions. Adding paths is straightforward, as they won't have any conflicts, but definitions will most probably have conflicts. To resolve that, we reuse any definition that has not changed (documentation changes are fine) and rename the definition otherwise.

To use this PR, kube-aggregator should be granted access to the nonResourceURLs (for the get verb) of the user API server.

```release-note
Support OpenAPI spec aggregation for kube-aggregator
```

fixes: #43717
2017-06-05 06:51:20 -07:00
Kubernetes Submit Queue d3146080b4 Merge pull request #46804 from verult/gce-pdflake
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)

PD e2e test: Ready node check now uses the most up-to-date node count.

Follow-up to PR #46746 

2017-06-05 03:06:29 -07:00
Kubernetes Submit Queue bdf9dc1620 Merge pull request #46144 from janetkuo/kubectl-rollout-ds
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)

Implement kubectl rollout undo and history for DaemonSet

~Depends on #45924, only the 2nd commit needs review~ (merged)

Ref https://github.com/kubernetes/community/pull/527/

TODOs:
- [x] kubectl rollout history
  - [x] sort controller history, print overview (with revision number and change cause)
  - [x] print detail view (content of a history) 
    - [x] print template 
    - [x] ~(do we need to?) print labels and annotations~
- [x] kubectl rollout undo: 
  - [x] list controller history, figure out which revision to rollback to
    - if toRevision == 0, rollback to the latest revision, otherwise choose the history with matching revision
  - [x] update the ds using the history to rollback to 
    - [x] replace the ds template with history's
    - [x] ~(do we need to?) replace the ds labels and annotations with history's~
- [x] test-cmd.sh 

@kubernetes/sig-apps-pr-reviews @erictune @kow3ns @lukaszo @kargakis @kubernetes/sig-cli-maintainers 

--- 

**Release note**:

```release-note
```
2017-06-05 03:06:26 -07:00
Kubernetes Submit Queue 2fcadae143 Merge pull request #46729 from shyamjvs/kubemark-admission-plugin
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)

Enable some pod-related admission plugins for kubemark

Ref https://github.com/kubernetes/kubernetes/issues/44701

This should help reduce the discrepancy in "list pods" latency with respect to a real cluster. Let's see.

/cc @wojtek-t @gmarek
2017-06-05 03:06:24 -07:00
Kubernetes Submit Queue 6236522738 Merge pull request #46498 from zjj2wry/adherence
Automatic merge from submit-queue (batch tested with PRs 45871, 46498, 46729, 46144, 46804)

Fix some comments in dnsprovider

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-06-05 03:06:22 -07:00
Kubernetes Submit Queue 04acd91a0d Merge pull request #45871 from YuPengZTE/devTestAddKnownTypesIdemPotent
Automatic merge from submit-queue

delete the useless "gv" in Errorf

Signed-off-by: yupengzte <yu.peng36@zte.com.cn>



**What this PR does / why we need it**:
Fix "no formatting directive in Errorf call"
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-05 02:54:14 -07:00
Kubernetes Submit Queue 45b7f5a4b0 Merge pull request #44255 from zlabjp/bump-mapstructure
Automatic merge from submit-queue (batch tested with PRs 43852, 44255)

Bump github.com/mitchellh/mapstructure

**What this PR does / why we need it**:

This PR bumps the revision of github.com/mitchellh/mapstructure.
The library is required by Gophercloud, which passes its tests with the newer revision.
Since Gophercloud has been updated, please also update this library.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-06-05 01:56:24 -07:00
Kubernetes Submit Queue 974606544d Merge pull request #43852 from ailusazh/AddSuccessfulMountVolumeMsgToEvent
Automatic merge from submit-queue

Add SuccessfulMountVolume message to the events of pod

**What this PR does / why we need it:**
When creating a pod with a volume, the volume mount may fail at first but eventually succeed after several retries. `kubectl describe pod` can only show the failure messages, so I think it is better to add the SuccessfulMountVolume message to the pod events too.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #42867
2017-06-05 01:46:36 -07:00
Phillip Wittrock 2510dc0ddd Merge pull request #46943 from madhusudancs/fed-ns-delete-all-clusters
Delete federation system namespace from all the federated clusters.
2017-06-04 22:08:59 -07:00
mbohlool 63e3e84e7e Update proto 2017-06-04 21:54:11 -07:00
mbohlool c2f2a33dc5 Update Bazel 2017-06-04 21:54:11 -07:00
mbohlool af445855c1 Update OpenAPI spec 2017-06-04 21:54:11 -07:00
mbohlool 1a1d9a0394 Aggregate OpenAPI specs 2017-06-04 21:54:11 -07:00
mbohlool fccff9adb6 Enable OpenAPI definition generation for apiregistration 2017-06-04 21:54:10 -07:00
mbohlool 0a886ffaf8 Separate Build and Serving parts of OpenAPI spec handler 2017-06-04 21:54:10 -07:00
mbohlool ef8ee84cd0 Remove unused servePath from GetOperationIDAndTags and GetDefinitionName 2017-06-04 21:54:10 -07:00
Madhusudan.C.S c3d5113365 Delete cluster roles and their bindings from federated clusters.
This is part of the namespace deletion big hammer. `kubefed join` creates
not just the federation-system namespace, but also cluster roles and
cluster role bindings in the joining clusters. Sometimes unjoin fails
to delete them. So we use a big hammer here to delete them.

This smells like a real problem in kubefed and needs investigation.
This is a short term fix to unblock the submit queue.
2017-06-04 21:26:44 -07:00
Madhusudan.C.S c30afde32e Delete federation system namespace from all the federated clusters.
This is a big hammer. `kubefed join` creates federation-system namespace
in the joining clusters if they don't already exist. This namespace
usually exists in the host cluster and hence cannot be deleted while
unjoining. So in order to be safe, we don't delete the federation-system
namespace from any federated cluster while unjoining them. This causes
a problem in our test environment if certain resources are left in the
namespace. Therefore we delete the federation-system namespace from
all the clusters.
2017-06-04 21:26:42 -07:00
David Ashpole 56f53b9207 update prometheus dependency for staging 2017-06-04 15:00:23 -07:00
David Ashpole 066d61ce0a update cadvisor godeps 2017-06-04 15:00:23 -07:00
Madhusudan.C.S 60d10e9e27 Do not delete PVs with --all, instead delete them selectively.
PV is a non-namespaced resource. Running `kubectl delete pv --all`, even
with `--namespace`, is going to delete all the PVs in the cluster. This
is a dangerous operation and should not be performed this way.

Instead we now retrieve the PVs bound to the PVCs in the namespace we
are deleting and delete only those PVs.

Fixes issue #46380.
2017-06-04 14:57:43 -07:00
Shyam Jeedigunta b655953e21 Enable DefaultTolerationSeconds and PodPreset admission plugins for kubemark 2017-06-04 19:52:57 +02:00
Nick Sardo 025f178b7e Use new kubelet apis pkg for labels 2017-06-04 10:26:33 -07:00
Nick Sardo 7248c61ea5 Update test utilities & build file 2017-06-04 10:25:05 -07:00
Nick Sardo 05aaef3edc Hook external & internal lb together 2017-06-04 10:25:05 -07:00
Nick Sardo 660452dee1 Add internal LB logic 2017-06-04 10:25:05 -07:00
Nick Sardo 1283d65538 Modify external LB logic 2017-06-04 10:25:05 -07:00
Nick Sardo 2cdaf1f32b Refactor compute API calls 2017-06-04 10:25:05 -07:00
Nick Sardo b631061f05 Rename gce_staticip.go to gce_addresses.go 2017-06-04 10:25:05 -07:00
Nick Sardo 66773fea4b Rename gce_loadbalancer.go to gce_loadbalancer_external.go 2017-06-04 10:25:05 -07:00
Kubernetes Submit Queue 3837d95191 Merge pull request #45748 from mml/reliable-node-upgrade
Automatic merge from submit-queue

Respect PDBs during node upgrades and add test coverage to the ServiceTest upgrade test.

This is still a WIP... needs to be squashed at least, and I don't think it's currently passing until I increase the scale of the RC, but please have a look at the general outline.  Thanks!

Fixes #38336 

@kow3ns @bdbauer @krousey @erictune @maisem @davidopp 

```
On GCE, node upgrades will now respect PodDisruptionBudgets, if present.
```
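
For reference, the PodDisruptionBudgets being respected during upgrades are ordinary policy objects. A minimal hypothetical example (policy/v1beta1 being the era-appropriate API version; name and selector are illustrative):

```
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-pdb            # hypothetical name
spec:
  minAvailable: 2              # node drains must keep at least 2 matching pods running
  selector:
    matchLabels:
      app: example
```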
2017-06-04 06:11:59 -07:00
Quintin Lee 6a380e8831 Add iptables lock-file mount to kube-proxy manifest 2017-06-03 23:53:04 -07:00
Kubernetes Submit Queue 3fdf6c3d14 Merge pull request #45896 from dashpole/disk_pressure_reclaim
Automatic merge from submit-queue

Delete all dead containers and sandboxes when under disk pressure.

This PR modifies the eviction manager to add dead container and sandbox garbage collection as a resource-reclaim function for disk. It also modifies the container GC logic to allow removal of containers belonging to pods that are terminated but not deleted.

It still does not delete containers that are younger than minGcAge. This should prevent nodes from entering a permanently bad state if the entire disk is occupied by pods that are terminated (in the Failed or Succeeded state) but not deleted.

There are two improvements we should consider making in the future:

- Track the disk space and inodes reclaimed by deleting containers.  We currently do not track this, and it prevents us from determining if deleting containers resolves disk pressure.  So we may still evict a pod even if we are able to free disk space by deleting dead containers.
- Once we can track disk space and inodes reclaimed, we should consider only deleting the containers we need to in order to relieve disk pressure.  This should help avoid a scenario where we try and delete a massive number of containers all at once, and overwhelm the runtime.

/assign @vishh 
cc @derekwaynecarr 

```release-note
Disk Pressure triggers the deletion of terminated containers on the node.
```
2017-06-03 23:43:46 -07:00