Commit Graph

1428 Commits (4e571eafaba3349acf7731f8cd0afc3ac47e17c4)

Author SHA1 Message Date
Kubernetes Submit Queue a8577f9816 Merge pull request #30800 from mml/db.controller.followup
Automatic merge from submit-queue

Followup fixes for disruption controller.

Part of #12611.
- Record an event when a pod does not have exactly 1 controller.
- Add TODO comment suggesting we simplify the two cases: integer and percentage.
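
A minimal sketch of the event-recording check described in the first bullet, using stand-in types rather than the real controller code (the event reason and helper names are illustrative):

```go
package sketch

// EventRecorder stands in for the Kubernetes event recorder interface;
// Pod is trimmed to the one field the sketch needs.
type EventRecorder interface {
	Eventf(obj interface{}, eventType, reason, format string, args ...interface{})
}

type Pod struct{ Name string }

// warnOnControllerMismatch records a warning event unless the pod is
// managed by exactly one controller.
func warnOnControllerMismatch(r EventRecorder, pod *Pod, controllers []string) {
	if n := len(controllers); n != 1 {
		r.Eventf(pod, "Warning", "ControllerMismatch",
			"pod %q has %d controllers, expected exactly 1", pod.Name, n)
	}
}
```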
2016-08-20 21:26:32 -07:00
Kubernetes Submit Queue c39f0eec4a Merge pull request #30993 from mksalawa/bump_heapster_version
Automatic merge from submit-queue

Bump heapster version

Bump heapster version to v1.2.0-beta.1.
Migrate metrics tests and HPA to use List objects introduced in the new version.
2016-08-19 19:36:53 -07:00
Kubernetes Submit Queue 3d7a105d9b Merge pull request #30903 from jingxu97/cherrypick-8-19
Automatic merge from submit-queue

Avoid failure messages flooding the log when a node no longer exists

When a node is deleted, the attach-detach controller cache may contain
stale information about it, and updating the node's status then fails in
the reconciler loop. These messages can easily flood the log file. This
PR is just a quick fix for the issue; a more complete fix, keeping the
controller cache up to date, will be addressed in another PR.
2016-08-19 15:45:58 -07:00
Kubernetes Submit Queue 9ebaf29295 Merge pull request #30624 from derekwaynecarr/node-controller-fix
Automatic merge from submit-queue

Node controller deletePod returns true if there are pods pending deletion

Fixes https://github.com/kubernetes/kubernetes/issues/30536

If a node had a single pod in terminating state, and that node no longer reported healthy, the pod was never deleted by the node controller because it believed there were no pods remaining.

@smarterclayton @ncdc
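
A rough sketch of the corrected behavior under stand-in types (names here are illustrative, not the node controller's actual code): a terminating pod, i.e. one with a DeletionTimestamp, must still count as "remaining" so the controller keeps retrying.

```go
package sketch

import "time"

// Pod is trimmed to the field that matters for this fix.
type Pod struct {
	Name              string
	DeletionTimestamp *time.Time // non-nil while the pod is terminating
}

// deletePods reports remaining=true both when it issues new deletions
// and when any pod is already pending deletion, instead of only the
// former (the bug described above).
func deletePods(pods []Pod, deleteFn func(name string) error) (remaining bool, err error) {
	for _, p := range pods {
		if p.DeletionTimestamp != nil {
			remaining = true // still terminating: don't report the node as empty
			continue
		}
		if err := deleteFn(p.Name); err != nil {
			return false, err
		}
		remaining = true
	}
	return remaining, nil
}
```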
2016-08-19 14:33:54 -07:00
Kubernetes Submit Queue 9223591cda Merge pull request #30626 from deads2k/prevent-dc-hotloop
Automatic merge from submit-queue

prevent RC hotloop on denied pods

If a pod is rejected during creation, the RC controller hot-loops. This happens most frequently due to insufficient quota.
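
For illustration, the standard remedy for this class of hot loop is to retry with exponential backoff rather than immediately; this stdlib-only sketch shows the pattern and is not the PR's actual fix:

```go
package sketch

import "time"

// retryWithBackoff retries a failing create (e.g. one denied by quota)
// with exponentially increasing, capped delay, so the controller cannot
// spin at full speed against a persistent rejection.
func retryWithBackoff(create func() error, maxDelay time.Duration, attempts int) error {
	delay := 10 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = create(); err == nil {
			return nil
		}
		time.Sleep(delay) // back off before the next attempt
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return err
}
```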
2016-08-19 11:51:41 -07:00
Kubernetes Submit Queue 6ce405c6ee Merge pull request #27778 from screeley44/k8-vol-executor
Automatic merge from submit-queue

Add Events for operation_executor to show the status of mounts (failed/successful) in describe events

Fixes #27590 
@saad-ali @pmorie @erinboyd

After talking with @pmorie last week about the above issue, I decided to poke around and see if I could remedy it. The refactoring broke my previously merged UXP PRs that correctly showed failed mount errors in the describe events. I'm not sure I implemented this correctly, but it tested out and seems to be working; let me know what I missed or if this is not the correct approach.

```
Events:
  FirstSeen	LastSeen	Count	From			SubobjectPath	Type		Reason		Message
  ---------	--------	-----	----			-------------	--------	------		-------
  2m		2m		1	{default-scheduler }			Normal		Scheduled	Successfully assigned nfs-bb-pod1 to 127.0.0.1
  44s		44s		1	{kubelet 127.0.0.1}			Warning		FailedMount	Unable to mount volumes for pod "nfs-bb-pod1_default(a94f64f1-37c9-11e6-9aa5-52540073d346)": timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
  44s		44s		1	{kubelet 127.0.0.1}			Warning		FailedSync	Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "nfs-bb-pod1"/"default". list of unattached/unmounted volumes=[nfsvol]
  38s		38s		1	{kubelet }				Warning		FailedMount	Unable to mount volumes for pod "a94f64f1-37c9-11e6-9aa5-52540073d346": Mount failed: exit status 32
Mounting arguments: nfs1.rhs:/opt/data99 /var/lib/kubelet/pods/a94f64f1-37c9-11e6-9aa5-52540073d346/volumes/kubernetes.io~nfs/nfsvol nfs []
Output: mount.nfs: Connection timed out

Resolution hint: Check and make sure the NFS Server exists (ensure that correct IPAddress/Hostname was given) and is available/reachable.
Also make sure firewall ports are open on both client and NFS Server (2049 v4 and 2049, 20048 and 111 for v3).
Use commands telnet <nfs server> <port> and showmount <nfs server> to help test connectivity.
```
2016-08-19 08:27:48 -07:00
mksalawa 2833119a15 Use List objects for metrics in kubectl top and HPA 2016-08-19 17:26:50 +02:00
Kubernetes Submit Queue 6b20896fea Merge pull request #30686 from gmarek/metrics
Automatic merge from submit-queue

Add cluster health metrics to NodeController

Follow up of #28832

This adds metrics to monitor cluster/zone status.

cc @alex-mohr @fabioy @wojtek-t @Q-Lee
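
As a hedged illustration of what "cluster/zone status" metrics can look like (the metric name and labels below are made up for the sketch, not the ones this PR registers):

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// zoneHealth is a per-zone gauge the node controller could update from
// its monitor loop; the name and help text are illustrative.
var zoneHealth = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "node_controller_zone_health_percent",
		Help: "Percentage of healthy nodes in a zone.",
	},
	[]string{"zone"},
)

func init() { prometheus.MustRegister(zoneHealth) }

func recordZoneHealth(zone string, healthy, total int) {
	if total == 0 {
		return // no nodes, nothing meaningful to report
	}
	zoneHealth.WithLabelValues(zone).Set(100 * float64(healthy) / float64(total))
}
```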
2016-08-19 03:40:51 -07:00
Kubernetes Submit Queue 30b180e4a5 Merge pull request #30943 from caesarxuchao/fix-gc-memory-leak
Automatic merge from submit-queue

Fix memory leak in gc

ref #30759

GC had a memory leak: work queue items were never deleted.

I'm still fighting with my kubemark cluster to get statistics after this fix.

@wojtek-t @lavalamp
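
The usual shape of this bug and its fix, sketched with a locally defined queue interface (illustration only, not the GC code from this PR): every item taken off a controller work queue must be marked Done, or the queue's internal bookkeeping grows forever.

```go
package sketch

// Queue is a minimal stand-in for the controller work-queue API.
type Queue interface {
	Get() (item interface{}, shutdown bool)
	Done(item interface{})
	Add(item interface{})
}

// processNext drains one item. The essential line is the deferred
// Done: forgetting it leaks every item the queue has ever seen.
func processNext(q Queue, work func(item interface{}) error) bool {
	item, shutdown := q.Get()
	if shutdown {
		return false
	}
	defer q.Done(item) // without this, items accumulate and memory leaks
	if err := work(item); err != nil {
		q.Add(item) // requeue for another try
	}
	return true
}
```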
2016-08-19 02:08:56 -07:00
Chao Xu 65d1dbe8d9 fix memory leak in gc 2016-08-18 21:54:44 -07:00
Jing Xu 70deeb0ae4 node that no longer exists during node status update should not block others
When a node is deleted, the attach-detach controller cache may contain
stale information about it, and updating the node's status fails in the
reconciler loop. But one node's update failure should not block updates
to the other nodes. The warning message also easily floods the log file.
This PR is just a quick fix for the issue; a more complete fix, making
sure the controller cache stays up to date, will be addressed in
another PR.
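
The "don't block the others" part reduces to a familiar loop shape, sketched here with stand-in names: log the per-node failure and continue, rather than returning on the first error.

```go
package sketch

import "log"

// updateNodeStatuses keeps going past individual failures so one stale
// or deleted node cannot starve status updates for the rest.
func updateNodeStatuses(nodes []string, update func(node string) error) {
	for _, node := range nodes {
		if err := update(node); err != nil {
			// A deleted node can leave stale cache entries; note it
			// quietly and move on instead of aborting the loop.
			log.Printf("skipping status update for node %q: %v", node, err)
			continue
		}
	}
}
```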
2016-08-18 13:51:30 -07:00
Clayton Coleman 5f8366aac3
Convert() should accept the new conversion Context value
Allows Convert() to reuse the same conversions as ConvertToVersion
without being overly coupled to the version.
2016-08-18 14:45:20 -04:00
Kubernetes Submit Queue 9d2a5fe5e8 Merge pull request #29006 from jsafrane/dynprov2
Automatic merge from submit-queue

Implement dynamic provisioning (beta) of PersistentVolumes via StorageClass

Implemented according to PR #26908. There are several patches in this PR with one huge code regen inside.

* Please review the API changes (the first patch) carefully; sometimes I don't know what the code is doing...

* `PV.Spec.Class` and `PVC.Spec.Class` are not implemented; use the annotation `volume.alpha.kubernetes.io/storage-class` instead

* See e2e test and integration test changes - Kubernetes won't provision a thing without explicit configuration of at least one `StorageClass` instance!

* Multiple provisioning volume plugins can coexist together, e.g. HostPath and AWS EBS. This is important for Gluster and RBD provisioners in #25026

* Contradicting the proposal, `claim.Selector` and `volume.alpha.kubernetes.io/storage-class` annotation are **not** mutually exclusive. They're both used for matching existing PVs. However, only `volume.alpha.kubernetes.io/storage-class` is used for provisioning, configuration of provisioning with `Selector` is left for (near) future.

* Documentation is missing. Can someone please write some while I am out?

For now, the AWS volume plugin accepts classes with these parameters:

```
apiVersion: extensions/v1beta1
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/aws-ebs
provisionerParameters:
  type: io1
  zone: us-east-1d
  iopsPerGB: 10
```

* parameters are case-insensitive
* `type`: `io1`, `gp2`, `sc1`, or `st1`. See the AWS docs for details.
* `iopsPerGB`: only for `io1` volumes; I/O operations per second per GiB. The AWS volume plugin multiplies this by the size of the requested volume to compute the volume's IOPS and caps it at 20,000 (the maximum supported by AWS; see the AWS docs). A sketch of this computation follows after this list.
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (`gp2` in the same zone as in 1.3).
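
A quick sketch of the iopsPerGB arithmetic described above (the cap comes from the PR text; the function name is made up):

```go
package sketch

// awsIOPS computes requested IOPS as iopsPerGB × volume size in GiB,
// capped at AWS's documented 20,000 maximum for io1 volumes.
func awsIOPS(iopsPerGB, sizeGiB int64) int64 {
	const maxIOPS = 20000
	if iops := iopsPerGB * sizeGiB; iops <= maxIOPS {
		return iops
	}
	return maxIOPS
}

// Example: iopsPerGB=10 on a 100 GiB volume yields 1,000 IOPS; on a
// 4,000 GiB volume it would compute 40,000 and be capped at 20,000.
```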

GCE:

```
apiVersion: extensions/v1beta1
kind: StorageClass
metadata:
  name: slow
provisionerType: kubernetes.io/gce-pd
provisionerParameters:
  type: pd-standard
  zone: us-central1-a
```

* `type`: `pd-standard` or `pd-ssd`
* `zone`: GCE zone
* of course, the plugin will use some defaults when a parameter is omitted in a `StorageClass` instance (SSD in the same zone as in 1.3?).


No OpenStack/Cinder yet

@kubernetes/sig-storage
2016-08-18 09:56:16 -07:00
gmarek 5d8cb17efa Add cluster health metrics to NodeController 2016-08-18 15:11:10 +02:00
deads2k 7cd51b4610 prevent RC hotloop on denied pods 2016-08-18 08:06:09 -04:00
Kubernetes Submit Queue 9696a27aa0 Merge pull request #30737 from saad-ali/fix29358Round2
Automatic merge from submit-queue

Skip safe to detach check if node API object no longer exists

Fixes #29358
2016-08-18 04:00:05 -07:00
Kubernetes Submit Queue e2f39fca86 Merge pull request #30807 from caesarxuchao/change_pod_lister_api
Automatic merge from submit-queue

Continue on #30774: Change podNamespacer API

Continues #30774 (credit to @wojtek-t). Ref #30759.

I just fixed a test and converted IsActivePod to operate on *Pod.
2016-08-18 02:08:23 -07:00
Jan Safranek bb5d562f37 Restore alpha behavior 2016-08-18 10:36:50 +02:00
Jan Safranek d8a95a3785 Update matching logic with storage class
- no default StorageClass
- PVC.Spec.Class == nil means the same as PVC.Spec.Class == ""
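
In other words (a hedged sketch; the helper name is invented): with no default StorageClass, the class is read from the alpha annotation quoted in the PR above, and an absent value and an empty string behave the same.

```go
package sketch

// getClaimClass returns the requested storage class of a claim. A
// missing annotation and an empty one are deliberately equivalent,
// matching the "nil means the same as empty" rule above.
func getClaimClass(annotations map[string]string) string {
	return annotations["volume.alpha.kubernetes.io/storage-class"] // "" when absent
}
```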
2016-08-18 10:36:50 +02:00
Jan Safranek 6e4d95f646 Dynamic provisioning V2 controller, provisioners, docs and tests. 2016-08-18 10:36:49 +02:00
Kubernetes Submit Queue f9190ed61a Merge pull request #30138 from gmarek/flags
Automatic merge from submit-queue

Expose flags for new NodeEviction logic in NodeController

Fix #28832
Last PR from the NodeController NodeEviction logic series. 

cc @davidopp @lavalamp @mml
2016-08-18 00:41:28 -07:00
Kubernetes Submit Queue 7ceb23c719 Merge pull request #30828 from bprashanth/nc_podready
Automatic merge from submit-queue

Nodecontroller doesn't flip readiness on pods if kubeletVersion < 1.2.0

Older versions of the kubelet didn't know how to reconcile pod.Status, so the nodecontroller would mark pods NotReady on netsplit; if the partition recovered in < 5m, the pods would never get marked Ready, resulting in NotReady endpoints indefinitely (until kubelet restart, pod recreation, etc.).
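
The gate itself is just a version comparison; this self-contained sketch hand-rolls it (the real controller parses node.Status.NodeInfo.KubeletVersion with a semver library; the 1.2.0 cutoff is from the PR title):

```go
package sketch

// version is a stand-in for a parsed semantic version.
type version struct{ major, minor, patch int }

func (v version) atLeast(o version) bool {
	if v.major != o.major {
		return v.major > o.major
	}
	if v.minor != o.minor {
		return v.minor > o.minor
	}
	return v.patch >= o.patch
}

// shouldMarkPodsNotReady: only kubelets >= 1.2.0 reconcile pod.Status,
// so readiness is never flipped for older ones, avoiding pods stuck
// NotReady after a short partition heals.
func shouldMarkPodsNotReady(kubeletVersion version) bool {
	return kubeletVersion.atLeast(version{1, 2, 0})
}
```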
2016-08-17 19:02:12 -07:00
bprashanth 15c9826061 Nodecontroller doesn't flip readiness on pods if kubeletVersion < 1.2.0 2016-08-17 15:33:35 -07:00
Chao Xu 594234d61c fix tests; convert IsPodActive to operate on *Pod 2016-08-17 13:05:37 -07:00
Matt Liggett 441bfb0614 Record an event when a pod does not have exactly 1 controller. 2016-08-17 12:14:06 -07:00
Matt Liggett 17ddb19ada Add TODO comment. 2016-08-17 12:14:06 -07:00
Kubernetes Submit Queue f3f818a190 Merge pull request #29639 from aveshagarwal/master-default-resources-limits-fix
Automatic merge from submit-queue

Fix default resource limits (node allocatable) for downward api volumes and env vars

@kubernetes/rh-cluster-infra  @pmorie @derekwaynecarr
2016-08-17 11:37:41 -07:00
Wojciech Tyczynski 331083727f Change podNamespacer API 2016-08-17 16:55:01 +02:00
Scott Creeley 782d7d9815 Add Events for operation_executor to show status of mounts, failed or successful 2016-08-17 09:53:47 -04:00
gmarek 4cf698ef04 Expose flags for new NodeEviction logic in NodeController 2016-08-17 10:43:24 +02:00
saadali 0c72568247 Skip safe to detach if node api obj doesn't exist 2016-08-16 21:30:51 -07:00
Matt Liggett d60ba3c6e2 Implement DisruptionController.
Part of #12611
2016-08-16 15:20:41 -07:00
Kubernetes Submit Queue 1b0bc9421f Merge pull request #30301 from girishkalele/endpoint_hostnames
Automatic merge from submit-queue

Add NodeName to EndpointAddress object

Adding a new string field `nodeName` to api.EndpointAddress.
We could also use an *ObjectReference to the api.Node object instead, which would be more precise for the future.

```
type ObjectReference struct {
    Kind            string    `json:"kind,omitempty"`
    Namespace       string    `json:"namespace,omitempty"`
    Name            string    `json:"name,omitempty"`
    UID             types.UID `json:"uid,omitempty"`
    APIVersion      string    `json:"apiVersion,omitempty"`
    ResourceVersion string    `json:"resourceVersion,omitempty"`

    // Optional. If referring to a piece of an object instead of an entire object, this string
    // should contain information to identify the sub-object. For example, if the object
    // reference is to a container within a pod, this would take on a value like:
    // "spec.containers{name}" (where "name" refers to the name of the container that triggered
    // the event) or if no container name is specified "spec.containers[2]" (container with
    // index 2 in this pod). This syntax is chosen only to have some well-defined way of
    // referencing a part of an object.
    // TODO: this design is not final and this field is subject to change in the future.
    FieldPath string `json:"fieldPath,omitempty"`
}
```
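
For contrast with the ObjectReference above, a trimmed sketch of what EndpointAddress looks like with the new field (field set reduced for illustration; the merged API may differ in detail):

```go
package sketch

// Sketch of api.EndpointAddress after this PR; TargetRef and other
// fields are omitted for brevity.
type EndpointAddress struct {
	IP       string `json:"ip"`
	Hostname string `json:"hostname,omitempty"`
	// NodeName records the node hosting this endpoint, e.g. so that
	// consumers can prefer node-local endpoints.
	NodeName *string `json:"nodeName,omitempty"`
}
```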
2016-08-16 13:11:10 -07:00
Avesh Agarwal 52a60fe3be Fix default resource limits (node capacities) for downward api volumes 2016-08-16 14:41:17 -04:00
derekwaynecarr 14a2b261a8 Node controller deletePod returns true if there are pods pending deletion 2016-08-16 13:05:38 -04:00
Kubernetes Submit Queue 9c769c5dbe Merge pull request #29437 from AdoHe/event_node_uid
Automatic merge from submit-queue

fix node controller event uid issue

Fix #29289. @smarterclayton ptal. This is not a very elegant fix; if we can use nodeName in the log, maybe we can set timedValue.Value to node.UID.
2016-08-15 21:13:43 -07:00
Girish Kalele e105525b33 Fix endpoints_controller unit tests 2016-08-15 21:01:21 -07:00
Girish Kalele 36180a930b Generated code 2016-08-15 17:24:01 -07:00
Girish Kalele 95111c457e endpoints controller: Write pod NodeName to endpointAddress in endpoint subsets 2016-08-15 15:12:15 -07:00
Girish Kalele c60ba61fe7 Add NodeName to EndpointAddress object 2016-08-15 15:11:51 -07:00
Kubernetes Submit Queue 79ed7064ca Merge pull request #27970 from jingxu97/restartKubelet-6-22
Automatic merge from submit-queue

Add volume reconstruct/cleanup logic in kubelet volume manager

Currently, kubelet volume management works on the concept of desired
and actual worlds of state. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When the kubelet restarts, the caches of those two worlds
are gone. Although the desired world can be recovered through the
apiserver, the actual world cannot, which may leave some volumes
impossible to clean up if their information has been deleted from the
apiserver. This change adds reconstruction of the actual world by
reading the pod directories from disk. The reconstructed volume
information is added to both the desired world and the actual world if
it cannot be found in either. The rest of the logic is the same as
before: the desired world populator may clean up a volume entry if it is
no longer in the apiserver, and the volume manager should then invoke
unmount to clean it up.

Fixes https://github.com/kubernetes/kubernetes/issues/27653
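
A hedged sketch of the reconstruction scan under the pod-directory layout visible in the mount output earlier (/var/lib/kubelet/pods/<uid>/volumes/<plugin>/<volume>); the function and return shape are illustrative, not the PR's implementation:

```go
package sketch

import (
	"os"
	"path/filepath"
)

// reconstructVolumes walks <kubeletDir>/pods/<uid>/volumes/<plugin>/<name>
// after a restart to rediscover volumes whose in-memory state was lost.
func reconstructVolumes(kubeletDir string) (map[string][]string, error) {
	byPod := map[string][]string{}
	podsDir := filepath.Join(kubeletDir, "pods")
	pods, err := os.ReadDir(podsDir)
	if err != nil {
		return nil, err
	}
	for _, pod := range pods {
		plugins, err := os.ReadDir(filepath.Join(podsDir, pod.Name(), "volumes"))
		if err != nil {
			continue // no volumes directory; nothing to recover for this pod
		}
		for _, plugin := range plugins {
			vols, err := os.ReadDir(filepath.Join(podsDir, pod.Name(), "volumes", plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				byPod[pod.Name()] = append(byPod[pod.Name()], plugin.Name()+"/"+v.Name())
			}
		}
	}
	return byPod, nil
}
```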
2016-08-15 13:48:43 -07:00
Kubernetes Submit Queue 69419a145a Merge pull request #29802 from jfrazelle/fix-go-vet-errors
Automatic merge from submit-queue

fix go vet errors

```release-note
```

This fixes the `go vet` errors brought about by go 1.7 (re: #28742).

They are all pretty trivial and mostly related to composite literals.

also related to #16086
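
For a concrete picture of the class of fix involved (types here are invented for the example):

```go
package sketch

type probe struct {
	Path string
	Port int
}

// go vet flags unkeyed fields in composite literals of types from other
// packages ("composite literal uses unkeyed fields"); the mechanical
// fix is to name the fields:
//
//   before: probe{"/healthz", 8080}
//   after:  probe{Path: "/healthz", Port: 8080}
var healthz = probe{Path: "/healthz", Port: 8080}
```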
2016-08-15 13:10:08 -07:00
Jing Xu f19a1148db This change supports robust kubelet volume cleanup
Currently, kubelet volume management works on the concept of desired
and actual worlds of state. The volume manager periodically compares the
two worlds and performs volume mount/unmount and/or attach/detach
operations. When the kubelet restarts, the caches of those two worlds
are gone. Although the desired world can be recovered through the
apiserver, the actual world cannot, which may leave some volumes
impossible to clean up if their information has been deleted from the
apiserver. This change adds reconstruction of the actual world by
reading the pod directories from disk. The reconstructed volume
information is added to both the desired world and the actual world if
it cannot be found in either. The rest of the logic is the same as
before: the desired world populator may clean up a volume entry if it is
no longer in the apiserver, and the volume manager should then invoke
unmount to clean it up.
2016-08-15 11:29:15 -07:00
Maciej Szulik d446930699 Remove pods along with jobs when Replace ConcurrentPolicy is set 2016-08-14 11:59:06 +02:00
AdoHe b2ab4c6d9b fix node controller event uid issue 2016-08-14 09:41:20 +08:00
Kubernetes Submit Queue e39d7f71e6 Merge pull request #30251 from hongchaodeng/r2
Automatic merge from submit-queue

Move new etcd storage (low level storage) into cacher

As part of the effort for #29888, we are pushing this forward:

What?
- It changes creating an etcd storage.Interface impl into creating a config.
- When creating the cacher storage (StorageWithCacher), it passes in the config created above and builds the new etcd storage inside.

Why?
- We want to expose the (etcd) kv client information to the cacher. The cacher storage uses this information to talk to the remote storage.
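
A loose sketch of the wiring change, with all type and function names invented for illustration: callers hand the cacher a config instead of a prebuilt storage.Interface, and the cacher constructs the etcd storage itself, so it also sees the kv-client details.

```go
package sketch

// Config is what callers now create instead of a ready-made
// storage.Interface; it carries the kv-client details the cacher needs.
type Config struct {
	Endpoints []string
	Prefix    string
}

// Interface stands in for storage.Interface.
type Interface interface{}

// newEtcdStorage builds the low-level storage from the config (elided).
func newEtcdStorage(c Config) Interface { return nil }

// StorageWithCacher receives the config, builds the raw etcd storage
// inside, and wraps it with the watch cache (wrapping elided).
func StorageWithCacher(c Config, cacheCapacity int) Interface {
	raw := newEtcdStorage(c)
	return raw // the real version returns a cacher wrapping raw
}
```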
2016-08-13 10:09:49 -07:00
Kubernetes Submit Queue f98de24061 Merge pull request #30510 from derekwaynecarr/logging-fix
Automatic merge from submit-queue

Endpoint controller logs errors during normal cluster behavior

The endpoint controller logs an error when it is forbidden from creating new endpoints during namespace termination. This is normal cluster behavior and therefore should not be logged; it confuses operators administering the cluster.

Updated to log at a lower level in response to a forbidden message when performing a create operation. In case of an error on the API server side of the house, I continue to requeue the key; it should be ignored in a future syncService call once the service is deleted as part of namespace termination.

See https://bugzilla.redhat.com/show_bug.cgi?id=1347425

/cc @kubernetes/rh-cluster-infra
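
As a sketch of the behavior described (isForbidden and requeue are stand-ins for the real errors.IsForbidden check and the controller's queue; plain log levels stand in for leveled logging):

```go
package sketch

import "log"

// handleCreateError treats a forbidden create as routine (the namespace
// is terminating) while still requeueing the key; other apiserver
// errors stay loud.
func handleCreateError(key string, err error, isForbidden func(error) bool, requeue func(string)) {
	if err == nil {
		return
	}
	if isForbidden(err) {
		// Expected during namespace termination; keep it out of the error log.
		log.Printf("create for %q forbidden (namespace terminating?): %v", key, err)
	} else {
		log.Printf("ERROR: creating endpoints for %q: %v", key, err)
	}
	requeue(key) // dropped on a later syncService once the service is gone
}
```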
2016-08-13 02:59:26 -07:00
Hongchao Deng d0938094d9 move new etcd storage into cacher 2016-08-12 18:40:20 -07:00
Janet Kuo e4269d490f Use unversioned client in scheduledjobs and set group version to batch/v2alpha1 2016-08-12 16:46:09 -07:00
Girish Kalele f64c052858 Revert "Scheduledjob e2e" 2016-08-12 16:12:19 -07:00