Commit Graph

1608 Commits (8e98f1dfec9d1f3a100fe9af9588bcbedc0ab801)

Author SHA1 Message Date
Kubernetes Submit Queue 6376ad134d Merge pull request #39606 from NickrenREN/kubelet-pod
Automatic merge from submit-queue (batch tested with PRs 38101, 41431, 39606, 41569, 41509)

optimize killPod() and syncPod() functions

make sure that one of the two arguments, runningPod or status, is non-nil, just as the function note says
and check the return value in the syncPod() function before setting podKilled
2017-02-16 15:49:17 -08:00
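The check described in this commit is a simple argument-validation guard. A minimal sketch of that pattern in Go, with hypothetical types and signature (only the argument names come from the commit message):

```go
// Sketch only: validate that at least one of two optional arguments is
// non-nil before proceeding, as the commit message describes.
package main

import (
	"errors"
	"fmt"
)

type RunningPod struct{ ID string }
type PodStatus struct{ ID string }

// killPod is a hypothetical stand-in for the real function; the guard is the point.
func killPod(runningPod *RunningPod, status *PodStatus) error {
	// One of the two arguments must be non-nil, just as the function note says.
	if runningPod == nil && status == nil {
		return errors.New("one of the two arguments must be non-nil: runningPod, status")
	}
	// ... kill the pod using whichever argument was provided ...
	return nil
}

func main() {
	if err := killPod(nil, nil); err != nil {
		fmt.Println("error:", err)
	}
}
```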
Kubernetes Submit Queue 3c606cdd20 Merge pull request #41456 from dashpole/pod_volume_cleanup
Automatic merge from submit-queue (batch tested with PRs 41466, 41456, 41550, 41238, 41416)

Delay Deletion of a Pod until volumes are cleaned up

#41436 fixed the bug that caused #41095 and #40239 to have to be reverted.  Now that the bug is fixed, this shouldn't cause problems.

 @vishh @derekwaynecarr @sjenning @jingxu97 @kubernetes/sig-storage-misc
2017-02-16 10:14:05 -08:00
Yu-Ju Hong 5bb43a3a24 Report node not ready on failed PLEG health check 2017-02-16 09:00:22 -08:00
NickrenREN b40e575076 optimize killPod() and syncPod() functions
make sure that one of the two arguments, runningPod or status, is non-nil, just as the function note says
and check the return value in the syncPod() function before setting podKilled
2017-02-16 09:13:23 +08:00
Kubernetes Submit Queue 3bc575c91f Merge pull request #33550 from rtreffer/kubelet-allow-multiple-dns-server
Automatic merge from submit-queue

Allow multiple DNS servers as a comma-separated argument for kubelet --dns

This PR explores how kubelet's "--dns" could be extended to specify multiple DNS servers for in-cluster pods. Testing on the local libvirt-coreos cluster shows that multiple DNS servers are injected without issues.

Specifying multiple DNS servers increases resilience against
- Packet drops
- Single server failure

I am debugging services that do 50+ DNS requests for a single incoming interactive request, which greatly increases the chance of a slowdown (+5s) due to a single packet drop. Switching to two DNS servers reduces the impact of the issue (roughly +1s on glibc, 0s on musl, and the error rate goes down to error-rate^2).

Note that there is no need to change any runtime-related code as far as I know. In the case of "default" DNS, /etc/resolv.conf is parsed and multiple DNS servers are sent to the backend anyway. This only adds the same capability for the clusterFirst case.

I've heard from @thockin that multiple DNS entries have been considered somehow, though I have no idea what exactly was considered. This is what I would like to see for our production use.

```release-note
NONE
```
2017-02-15 12:45:32 -08:00
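As the PR description notes, the flag value is just a comma-separated list of resolver IPs, each of which ends up as its own nameserver entry in the pod's resolv.conf. A rough sketch of that parsing, with a hypothetical helper name:

```go
// Sketch: split a comma-separated --dns value into individual resolver
// addresses. parseDNSFlag is an illustrative helper, not the actual kubelet code.
package main

import (
	"fmt"
	"net"
	"strings"
)

func parseDNSFlag(value string) ([]net.IP, error) {
	var servers []net.IP
	for _, s := range strings.Split(value, ",") {
		s = strings.TrimSpace(s)
		if s == "" {
			continue
		}
		ip := net.ParseIP(s)
		if ip == nil {
			return nil, fmt.Errorf("invalid DNS server address %q", s)
		}
		servers = append(servers, ip)
	}
	return servers, nil
}

func main() {
	servers, err := parseDNSFlag("10.0.0.10,10.0.0.11")
	if err != nil {
		panic(err)
	}
	// Each server becomes its own "nameserver" line in the pod's resolv.conf.
	for _, ip := range servers {
		fmt.Println("nameserver", ip)
	}
}
```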
David Ashpole 1d38818326 Revert "Merge pull request #41202 from dashpole/revert-41095-deletion_pod_lifecycle"
This reverts commit ff87d13b2c, reversing
changes made to 46becf2c81.
2017-02-15 08:44:03 -08:00
Kubernetes Submit Queue dd696683b7 Merge pull request #40647 from NickrenREN/secretManager
Automatic merge from submit-queue (batch tested with PRs 41360, 41423, 41430, 40647, 41352)

optimize NewSimpleSecretManager and cleanupOrphanedPodCgroups
2017-02-15 05:06:11 -08:00
Yu-Ju Hong fb94f441ce Set EnableCRI to true by default
This change makes kubelet use the CRI implementation by default,
unless users explicitly opt out by setting --enable-cri=false.
For the rkt integration, the --enable-cri flag will have no effect
since rktnetes does not use CRI.

Also, mark the original --experimental-cri flag hidden and deprecated,
so that we can remove it in the next release.
2017-02-14 16:15:51 -08:00
NickrenREN 31bfefca3c optimize NewSimpleSecretManager and cleanupOrphanedPodCgroups
remove NewSimpleSecretManager's second return value and cleanupOrphanedPodCgroups's return value, since they never return an error
2017-02-14 09:47:05 +08:00
Kubernetes Submit Queue e9de1b0221 Merge pull request #40992 from k82cn/rm_empty_line
Automatic merge from submit-queue (batch tested with PRs 41236, 40992)

Removed unnecessary empty line.
2017-02-10 05:38:42 -08:00
Kubernetes Submit Queue 8188c3cca4 Merge pull request #40796 from wojtek-t/use_node_ttl_in_secret_manager
Automatic merge from submit-queue (batch tested with PRs 40796, 40878, 36033, 40838, 41210)

Implement TTL controller and use the ttl annotation attached to node in secret manager

For every secret attached to a pod as a volume, Kubelet tries to refresh it every sync period. Currently Kubelet has a TTL cache of its pods' secrets, and the TTL is set to 1 minute. In the large clusters we are targeting (5k nodes, 30 pods/node), given that each pod has a secret associated with the ServiceAccount from its namespace, and with a large enough number of namespaces (where on each node (almost) every pod is from a different namespace), this results in ~30 GETs per node to refresh all secrets every minute, which adds up to ~2500 QPS of GET secret requests to the apiserver.

The apiserver cannot easily keep up with that.

The desired solution would be to watch for secret changes, but for security reasons we don't want a node watching all secrets, and it is not currently possible to watch only the secrets attached to pods on a given node.

So as a temporary solution, we are introducing an annotation that serves as a suggestion to kubelet for the TTL of secrets in its cache, and a very simple controller that sets this annotation based on the cluster size (the larger the cluster, the bigger the TTL).
This workaround means that only very local changes are needed in Kubelet; we are creating a well-separated, very simple controller, and once watching "my secrets" becomes possible it will be easy to remove the controller and switch to that. It will also allow us to reach our scalability goals.

@dchen1107 @thockin @liggitt
2017-02-10 00:04:44 -08:00
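The mechanism described above is essentially a read-through cache whose freshness window could be driven by the node's TTL annotation. A minimal sketch of that idea, assuming entirely hypothetical names:

```go
// Sketch of a TTL-based secret cache: entries older than the TTL are
// re-fetched from the apiserver; everything else is served from memory.
package main

import (
	"fmt"
	"sync"
	"time"
)

type cachedSecret struct {
	value     string
	fetchedAt time.Time
}

type secretCache struct {
	mu    sync.Mutex
	ttl   time.Duration // could be scaled by the cluster-size annotation
	items map[string]cachedSecret
	fetch func(name string) string // stand-in for a GET to the apiserver
}

func (c *secretCache) Get(name string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if item, ok := c.items[name]; ok && time.Since(item.fetchedAt) < c.ttl {
		return item.value // fresh enough, no apiserver call
	}
	value := c.fetch(name)
	c.items[name] = cachedSecret{value: value, fetchedAt: time.Now()}
	return value
}

func main() {
	c := &secretCache{
		ttl:   time.Minute,
		items: map[string]cachedSecret{},
		fetch: func(name string) string { return "data-for-" + name },
	}
	fmt.Println(c.Get("default-token"))
	fmt.Println(c.Get("default-token")) // served from the cache within the TTL
}
```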
David Ashpole b224f83c37 Revert "[Kubelet] Delay deletion of pod from the API server until volumes are deleted" 2017-02-09 08:45:18 -08:00
Wojciech Tyczynski 6c0535a939 Use secret TTL annotation in secret manager 2017-02-09 13:53:32 +01:00
Kubernetes Submit Queue 42d8d4ca88 Merge pull request #40948 from freehan/cri-hostport
Automatic merge from submit-queue (batch tested with PRs 40873, 40948, 39580, 41065, 40815)

[CRI] Enable Hostport Feature for Dockershim

Commits:
1. Refactor common hostport util logics and add more tests

2. Add HostportManager which can ADD/DEL hostports instead of a complete sync.

3. Add an interface for retrieving the portMappings information of a pod in the Network Host interface.
Implement GetPodPortMappings interface in dockerService. 

4. Teach kubenet to use HostportManager
2017-02-08 14:14:43 -08:00
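The HostportManager listed above replaces a complete sync with per-pod ADD/DEL operations. A rough sketch of what such an interface could look like; the types and method signatures here are illustrative, not the actual kubelet API:

```go
// Illustrative Add/Remove-style hostport manager, as opposed to a full resync.
package main

import "fmt"

type PortMapping struct {
	HostPort      int32
	ContainerPort int32
	Protocol      string // "tcp" or "udp"
}

type HostportManager interface {
	// Add installs the host-port rules for one pod's mappings.
	Add(podID string, mappings []PortMapping) error
	// Remove tears down only the rules that Add created for that pod.
	Remove(podID string, mappings []PortMapping) error
}

// fakeManager just records calls, enough to show the intended usage.
type fakeManager struct{}

func (fakeManager) Add(podID string, mappings []PortMapping) error {
	fmt.Printf("add %d mapping(s) for pod %s\n", len(mappings), podID)
	return nil
}

func (fakeManager) Remove(podID string, mappings []PortMapping) error {
	fmt.Printf("remove %d mapping(s) for pod %s\n", len(mappings), podID)
	return nil
}

func main() {
	var m HostportManager = fakeManager{}
	mappings := []PortMapping{{HostPort: 8080, ContainerPort: 80, Protocol: "tcp"}}
	m.Add("nginx_default", mappings)
	m.Remove("nginx_default", mappings)
}
```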
Minhan Xia bd05e1af2b add portmapping getter into network host 2017-02-08 09:35:04 -08:00
David Ashpole 67cb2704c5 delete volumes before pod deletion 2017-02-08 07:34:49 -08:00
Kubernetes Submit Queue 843e6d1cc3 Merge pull request #40770 from apilloud/clientset_interface
Automatic merge from submit-queue (batch tested with PRs 41103, 41042, 41097, 40946, 40770)

Use Clientset interface in KubeletDeps

**What this PR does / why we need it**:
This replaces the Clientset struct with the equivalent interface for the KubeClient injected via KubeletDeps. This is useful for testing and for accessing the Node and Pod status event stream without an API server.

**Special notes for your reviewer**:
Follow up to #4907

**Release note**:

`NONE`
2017-02-07 22:12:39 -08:00
Klaus Ma cc26fe6ee9 Removed unnecessary empty line. 2017-02-06 11:10:34 +08:00
Kubernetes Submit Queue a777a8e3ba Merge pull request #39972 from derekwaynecarr/pod-cgroups-default
Automatic merge from submit-queue (batch tested with PRs 40289, 40877, 40879, 39972, 40942)

Rename experimental-cgroups-per-pod flag

**What this PR does / why we need it**:
1. Rename `experimental-cgroups-per-qos` to `cgroups-per-qos`
1. Update hack/local-up-cluster to match `CGROUP_DRIVER` with docker runtime if used.

**Special notes for your reviewer**:
We plan to roll this feature out in the upcoming release.  Previous node e2e runs were running with this feature on by default.  We will default this feature on for all e2es next week.

**Release note**:
```release-note
Rename --experimental-cgroups-per-qos to --cgroups-per-qos
```
2017-02-04 04:43:08 -08:00
Kubernetes Submit Queue f20b4fc67f Merge pull request #40655 from vishh/flag-gate-critical-pod-annotation
Automatic merge from submit-queue

Optionally avoid evicting critical pods in kubelet

For #40573

```release-note
When feature gate "ExperimentalCriticalPodAnnotation" is set, Kubelet will avoid evicting pods in the "kube-system" namespace that contain a special annotation - `scheduler.alpha.kubernetes.io/critical-pod`
This feature should be used in conjunction with the rescheduler to guarantee availability for critical system pods - https://kubernetes.io/docs/admin/rescheduler/
```
2017-02-03 16:22:26 -08:00
Derek Carr 04a909a257 Rename cgroups-per-qos flag to not be experimental 2017-02-03 17:10:53 -05:00
Andrew Pilloud 3f8505022c Use clientset.Interface for KubeClient 2017-02-03 07:36:16 -08:00
Vishnu Kannan 6ddb528446 Revert "Sort critical pods before admission"
This reverts commit b7409e0038.
2017-02-02 10:41:24 -08:00
Wojciech Tyczynski ec6a95a665 Use caching secret manager in kubelet 2017-02-02 15:32:07 +01:00
Rene Treffer 42ff859c27 Allow multiple DNS servers as a comma-separated argument for --dns
Depending on the exact cluster setup, multiple DNS servers may make sense.
Comma-separated lists of DNS servers are quite common, as DNS servers
are always plain IPs.
2017-02-01 22:38:40 +01:00
Michael Fraenkel beb53fb71a Port forward over websockets
- split out port forwarding into its own package

Allow multiple port forwarding ports
- Make it easy to determine which port is tied to which channel
- odd channels are for data
- even channels are for errors

- allow comma separated ports to specify multiple ports

Add portforwardtester 1.2 to whitelist
2017-02-01 06:32:04 -07:00
deads2k a106d9f848 switch kubelet to use external (client-go) object references for events 2017-01-31 19:15:33 -05:00
deads2k 8a12000402 move client/record 2017-01-31 19:14:13 -05:00
Dr. Stefan Schimanski bc6fdd925d pkg/api/resource: move to apimachinery 2017-01-29 21:41:44 +01:00
Aleksandra Malinowska 74e1d8078e Revert "Delay deletion of pod from the API server until volumes are deleted" 2017-01-27 13:31:02 +01:00
Yu-Ju Hong 202488995a docker-CRI: Remove legacy code for non-grpc integration 2017-01-26 17:23:20 -08:00
David Ashpole 9094b57570 cleanup volumes before deleting from the api server 2017-01-25 10:21:15 -08:00
deads2k b0b156b381 make tools/cache authoritative 2017-01-25 08:29:45 -05:00
deads2k c2ae6d5b40 remove api to util dependency hiding types 2017-01-25 08:28:28 -05:00
Dr. Stefan Schimanski 82826ec273 pkg/util/flag: move to k8s.io/apiserver 2017-01-24 20:56:03 +01:00
Dr. Stefan Schimanski a6b2ebb50c pkg/flag: make feature gate extensible and split between generic and kube 2017-01-24 20:56:03 +01:00
Dr. Stefan Schimanski 56d60cfae6 pkg/util: move flags from pkg/util/config to pkg/util/flags 2017-01-24 20:56:03 +01:00
deads2k 5a8f075197 move authoritative client-go utils out of pkg 2017-01-24 08:59:18 -05:00
Clayton Coleman 469df12038 refactor: move ListOptions references to metav1 2017-01-23 17:52:46 -05:00
Wojciech Tyczynski bf7138652f SecretVolume using secret manager 2017-01-23 16:10:01 +01:00
Kubernetes Submit Queue 470e732d7f Merge pull request #40235 from deads2k/generic-26-listers
Automatic merge from submit-queue (batch tested with PRs 40232, 40235, 40237, 40240)

move listers out of cache to reduce import tree

Moving the listers from `pkg/client/cache` snips links to all the different API groups from `pkg/storage`, but the dreaded `ListOptions` remains.

@sttts
2017-01-20 14:22:51 -08:00
Kubernetes Submit Queue dcf14add92 Merge pull request #37228 from sjenning/teardown-terminated-volumes
Automatic merge from submit-queue (batch tested with PRs 37228, 40146, 40075, 38789, 40189)

kubelet: storage: teardown terminated pod volumes

This is a continuation of the work done in https://github.com/kubernetes/kubernetes/pull/36779

There really is no reason to keep volumes for terminated pods attached on the node.  This PR extends the removal of volumes on the node from memory-backed (the current policy) to all volumes.

@pmorie raised a concern about the impact on debugging volume-related issues if terminated pod volumes are removed. To address this issue, the PR adds a `--keep-terminated-pod-volumes` flag to the kubelet and sets it for `hack/local-up-cluster.sh`.

For consideration in 1.6.

Fixes #35406

@derekwaynecarr @vishh @dashpole

```release-note
kubelet tears down pod volumes on pod termination rather than pod deletion
```
2017-01-20 12:34:52 -08:00
deads2k 1ce0637b27 move listers out of cache to reduce import tree 2017-01-20 15:01:38 -05:00
Seth Jennings e2750a305a reclaim terminated pod volumes 2017-01-20 11:08:35 -06:00
Kubernetes Submit Queue 53b43d6f8f Merge pull request #40190 from yujuhong/nsenter_exec
Automatic merge from submit-queue (batch tested with PRs 40168, 40165, 39158, 39966, 40190)

dockershim: add support for the 'nsenter' exec handler

This change simply plumbs the kubelet configuration
(--docker-exec-handler) to DockerService.

This fixes #35747.
2017-01-20 08:28:53 -08:00
Yu-Ju Hong f9479ed84b dockershim: add support for the 'nsenter' exec handler
This change simply plumbs the kubelet configuration
(--docker-exec-handler) to DockerService.
2017-01-19 16:23:48 -08:00
Wojciech Tyczynski 09e4de385c Enable nontrivial secret manager 2017-01-19 19:47:33 +01:00
Wojciech Tyczynski ffd8daf488 SecretManager with caching 2017-01-19 19:47:32 +01:00
Wojciech Tyczynski 85ee9e570b Create SecretManager interface 2017-01-19 19:47:32 +01:00
deads2k 11e8068d3f move pkg/fields to apimachinery 2017-01-19 09:50:16 -05:00
deads2k c47717134b move utils used in restclient to client-go 2017-01-19 07:55:14 -05:00
vefimova d925439727 Fixed forming of the pod's search line in resolv.conf:
- exclude duplicates while merging the host's and DNS search lines to form the pod's one
 - truncate the pod's search line if it exceeds resolver limits: it is > 255 chars or contains > 6 searches
 - monitor the resolv.conf file used by kubelet (set through --resolv-conf="") and log and emit an event if its search line consists of more than 3 entries
   (or 6 if Cluster Domain is set) or its length is > 255 chars
 - log and emit an event when a pod's search line is > 255 chars or contains > 6 searches during forming
Fixes #29270
2017-01-17 13:18:26 +00:00
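The resolver limits mentioned in this commit (at most 6 search domains, at most 255 characters) lend themselves to a small merge-and-truncate helper. An illustrative sketch under those assumptions; the constants and helper name are not the actual kubelet code:

```go
// Merge and dedupe search domains, then truncate to the resolver limits.
package main

import (
	"fmt"
	"strings"
)

const (
	maxDNSSearchPaths = 6
	maxDNSSearchChars = 255
)

func mergeSearchPaths(clusterSearches, hostSearches []string) []string {
	seen := map[string]bool{}
	var merged []string
	for _, s := range append(clusterSearches, hostSearches...) {
		if s == "" || seen[s] {
			continue // drop duplicates while merging
		}
		seen[s] = true
		merged = append(merged, s)
	}
	// Truncate to the maximum number of search entries.
	if len(merged) > maxDNSSearchPaths {
		merged = merged[:maxDNSSearchPaths]
	}
	// Truncate further if the joined line still exceeds the character limit.
	for len(strings.Join(merged, " ")) > maxDNSSearchChars {
		merged = merged[:len(merged)-1]
	}
	return merged
}

func main() {
	cluster := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local", "example.com"}
	host := []string{"corp.example.com", "example.com"}
	fmt.Println("search", strings.Join(mergeSearchPaths(cluster, host), " "))
}
```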
Kubernetes Submit Queue 5b629d83a2 Merge pull request #39303 from NickrenREN/eviction-manager
Automatic merge from submit-queue (batch tested with PRs 37505, 39844, 39525, 39109, 39303)

remove NewManager() return err
2017-01-13 14:33:35 -08:00
Kubernetes Submit Queue 9a88687e24 Merge pull request #37865 from yujuhong/decouple_lifecycle
Automatic merge from submit-queue

kubelet: remove the pleg health check from healthz

This prevents kubelet from being killed when docker hangs.

Also, kubelet will report node not ready if PLEG hangs (`docker ps` + `docker inspect`).
2017-01-12 19:10:14 -08:00
NickrenREN a12dea14e0 fix redundant alias clientset 2017-01-12 10:21:05 +08:00
deads2k 6a4d5cd7cc start the apimachinery repo 2017-01-11 09:09:48 -05:00
Yu-Ju Hong 03106dd1cb kubelet: remove the pleg health check from healthz/
If docker hangs, we don't want kubelet to get killed as well.
2017-01-10 16:32:46 -08:00
deads2k 1df5b658f2 switch webhook to clientgo 2017-01-09 16:53:24 -05:00
NickrenREN 85e6076fab remove eviction-manager start return err
The Start() function will never return an error, so we do not need the return value
2017-01-06 09:32:16 +08:00
Kubernetes Submit Queue 9b726d6b8f Merge pull request #38687 from ivan4th/remove-dockerlegacyservice-comment-from-kubelet
Automatic merge from submit-queue

Remove DockerLegacyService comment from kubelet
2017-01-03 23:28:22 -08:00
NickrenREN 0f35ce1af3 drop NewManager() return err
NewManager will never return an error, so drop it
2017-01-03 11:24:12 +08:00
Kubernetes Submit Queue ab91500f15 Merge pull request #39068 from NickrenREN/imageManager-start
Automatic merge from submit-queue (batch tested with PRs 39076, 39068)

fix image manager Start() function return
2016-12-22 00:27:30 -08:00
Dawn Chen b03fca9783 Fixed an import cycle issue:
import cycle not allowed in test
package k8s.io/kubernetes/pkg/client/restclient (test)
	imports k8s.io/kubernetes/pkg/api/testapi
	imports k8s.io/kubernetes/pkg/apis/componentconfig/install
	imports k8s.io/kubernetes/pkg/apis/componentconfig/v1alpha1
	imports k8s.io/kubernetes/pkg/kubelet/qos
	imports k8s.io/kubernetes/pkg/kubelet/pod
	imports k8s.io/kubernetes/pkg/client/clientset_generated/clientset
	imports k8s.io/kubernetes/pkg/client/clientset_generated/clientset/typed/apps/v1beta1
	imports k8s.io/kubernetes/pkg/client/restclient
2016-12-21 16:34:24 -08:00
Kubernetes Submit Queue 60a34fda0a Merge pull request #38673 from resouer/pod-qos-shim
Automatic merge from submit-queue (batch tested with PRs 39079, 38991, 38673)

Support systemd based pod qos in CRI dockershim

This PR makes pod-level QoS work in the CRI dockershim for systemd-based cgroups. It will also fix #36807
- [x] Add cgroupDriver to dockerService and use docker info api to set value for it
- [x] Add a NOTE that detection only works for docker 1.11+, see [CHANGE LOG](https://github.com/docker/docker/blob/master/CHANGELOG.md#1110-2016-04-13)
- [x] Generate cgroupParent in syntax expected by cgroupDriver
- [x] Set cgroupParent to hostConfig for both sandbox and user container
- [x] Check if kubelet conflicts with cgroup driver of docker

cc @derekwaynecarr @vishh
2016-12-21 08:01:45 -08:00
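The checklist item about generating cgroupParent in the syntax expected by the cgroup driver boils down to emitting either a cgroupfs path or a systemd slice name. A toy sketch of the two syntaxes; the real conversion lives in libcontainer and handles nesting and escaping, so treat this purely as an illustration:

```go
// Toy illustration: a pod-level cgroup parent rendered for either driver.
package main

import (
	"fmt"
	"strings"
)

// cgroupParentFor joins the hierarchy segments in cgroupfs syntax
// ("/kubepods/burstable/...") or systemd slice syntax ("kubepods-burstable-....slice"),
// depending on the cgroup driver reported by `docker info`.
func cgroupParentFor(driver string, segments ...string) string {
	if driver == "systemd" {
		return strings.Join(segments, "-") + ".slice"
	}
	return "/" + strings.Join(segments, "/")
}

func main() {
	fmt.Println(cgroupParentFor("cgroupfs", "kubepods", "burstable", "pod1234"))
	fmt.Println(cgroupParentFor("systemd", "kubepods", "burstable", "pod1234"))
}
```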
NickrenREN bb5ccb978e fix image manager Start() function return
realImageGCManager's Start() function will always return nil, so we do not need the error return value; drop it.
2016-12-21 14:58:00 +08:00
bprashanth b7409e0038 Sort critical pods before admission 2016-12-15 18:58:13 -08:00
Harry Zhang b36c5cbbec Enable pod qos for systemd in cri
Check kubelet config with docker config
2016-12-16 10:48:36 +08:00
Kubernetes Submit Queue d8efc779ed Merge pull request #38154 from caesarxuchao/rename-release_1_5
Automatic merge from submit-queue (batch tested with PRs 38154, 38502)

Rename "release_1_5" clientset to just "clientset"

We used to keep multiple releases in the main repo. Now that [client-go](https://github.com/kubernetes/client-go) does the versioning, there is no need to keep releases in the main repo. This PR renames the "release_1_5" clientset to just "clientset"; clientset development will be done in this directory.

@kubernetes/sig-api-machinery @deads2k 

```release-note
The main repository does not keep multiple releases of clientsets anymore. Please find previous releases at https://github.com/kubernetes/client-go
```
2016-12-14 14:21:51 -08:00
Chao Xu 03d8820edc rename /release_1_5 to /clientset 2016-12-14 12:39:48 -08:00
Kubernetes Submit Queue 63cf217b92 Merge pull request #38347 from euank/remove-extra-hn-check
Automatic merge from submit-queue (batch tested with PRs 38727, 38726, 38347, 38348)

kubelet: remove redundant hostNetwork helper

Trivial cleanup.
2016-12-13 17:31:51 -08:00
Ivan Shvedunov b45a8f30c5 Remove DockerLegacyService comment from kubelet
The comment is obsolete as there's no more DockerLegacyService.
2016-12-13 13:46:09 +03:00
Derek Carr af6c8a2479 Reduce max container runtime wait time 2016-12-09 16:40:13 -05:00
Kubernetes Submit Queue 61242f7408 Merge pull request #35939 from xiangpengzhao/minor-cleanup
Automatic merge from submit-queue

Minor cleanup: fix typos

Fix some typos.
2016-12-08 07:41:08 -08:00
Euan Kemp 15fc470343 kubelet: remove redundant hostNetwork helper
It did the same thing as the helper in kubecontainer
2016-12-07 17:24:24 -08:00
Derek Carr 5b2d1c2c25 Enable kernel memcg notification via additional flag 2016-12-07 10:09:41 -05:00
Kubernetes Submit Queue be5d1724f5 Merge pull request #37420 from zdj6373/kubelet-log
Automatic merge from submit-queue (batch tested with PRs 37208, 37446, 37420)

Kubelet log modification

Keep in line with the other error logs in the function.
After return, the caller records the error log. Delete redundant logs.
2016-12-05 04:47:44 -08:00
Kubernetes Submit Queue 4ebc43c25d Merge pull request #37541 from zdj6373/note-error
Automatic merge from submit-queue

Function annotation modification

In the annotation of “return kl.pleg.Healthy()”, based on the function being returned, "healty" should be "healthy".
2016-12-02 01:01:00 -08:00
Kubernetes Submit Queue c4b33f3be3 Merge pull request #37661 from yujuhong/always_add_pods
Automatic merge from submit-queue

kubelet: don't reject pods without adding them to the pod manager

kubelet relies on the pod manager as a cache of the pods in the apiserver (and
other sources). The cache should be kept up-to-date even when rejecting pods.
Without this, kubelet may decide at any point to drop the status update
(request to the apiserver) for the rejected pod since it would think the pod no
longer exists in the apiserver.

This should fix #37658
2016-11-30 21:59:12 -08:00
Kubernetes Submit Queue 2ed490e15b Merge pull request #37255 from jingxu97/Nov/nfshung
Automatic merge from submit-queue

remove checking mount point in cleanupOrphanedPodDirs

To avoid nfs hung problem, remove the mountpoint checking code in
cleanupOrphanedPodDirs(). This removal should still be safe because it checks whether there are still directories under pod's volume and if so, do not delete the pod directory.

Note: After removing the mountpoint check code in cleanupOrphanedPodDirs(), the directories might not be cleaned up in the following situation:
1. The pod is deleted, and the kubelet reconciler successfully unmounts the volume directory.
2. Before the reconciler can delete the volume directory, kubelet gets restarted.
3. Since volume directories (not mounted) still exist under the pod directory, cleanupOrphanedPodDirs() will not clean them up.

Will work on a follow up PR to solve above issue.
2016-11-30 21:11:13 -08:00
Yu-Ju Hong 69caf533f0 kubelet: don't reject pods without adding them to the pod manager
kubelet relies on the pod manager as a cache of the pods in the apiserver (and
other sources). The cache should be kept up-to-date even when rejecting pods.
Without this, kubelet may decide at any point to drop the status update
(request to the apiserver) for the rejected pod since it would think the pod no
longer exists in the apiserver.

Also check if the pod to-be-admitted has terminated or not. In the case where
it has terminated, skip the admission process completely.
2016-11-30 18:05:17 -08:00
Jing Xu 041fa6477b remove checking mount point in cleanupOrphanedPodDirs
To avoid nfs hung problem, remove the mountpoint checking code in
cleanupOrphanedPodDirs(). This removal should still be safe.
2016-11-30 13:46:39 -08:00
Pengfei Ni f584ed4398 Fix package aliases to follow golang convention 2016-11-30 15:40:50 +08:00
zdj6373 d43dc73610 Function annotation modification 2016-11-28 15:34:13 +08:00
zdj6373 c36ca0341c Kubelet log modification 2016-11-24 09:59:10 +08:00
Chao Xu 5e1adf91df cmd/kubelet 2016-11-23 15:53:09 -08:00
Vishnu kannan 9066253491 [kubelet] rename --cgroups-per-qos to --experimental-cgroups-per-qos to reflect the true nature of that feature
Signed-off-by: Vishnu kannan <vishnuk@google.com>
2016-11-14 14:06:39 -08:00
pweil- d0d78f478c experimental host user ns defaulting 2016-11-14 10:16:03 -05:00
Kubernetes Submit Queue 44f672e5e2 Merge pull request #34877 from resouer/e2e-log-path
Automatic merge from submit-queue

Add e2e node test for log path

fixes #34661

A node e2e test to check if container logs files are properly created with right content.

Since the log files under `/var/log/containers` are actually symlinks to the docker container log files, we cannot use a pod to mount them in and do the check (symlinks are not supported by docker volumes).

cc @Random-Liu
2016-11-10 08:35:59 -08:00
Kubernetes Submit Queue 9bdff48d5e Merge pull request #36253 from timstclair/klet-stream-config-pr
Automatic merge from submit-queue

Use indirect streaming path for remote CRI shim

Last step for https://github.com/kubernetes/kubernetes/issues/29579

- Wire through the remote indirect streaming methods in the docker remote shim
- Add the docker streaming server as a handler at `<node>:10250/cri/{exec,attach,portforward}`
- Disable legacy streaming for dockershim

Note: This requires PR https://github.com/kubernetes/kubernetes/pull/34987 to work.

Tested manually on an E2E cluster.

/cc @euank @feiskyer @kubernetes/sig-node
2016-11-09 23:29:18 -08:00
Rajat Ramesh Koujalagi d81e216fc6 Better messaging for missing volume components on host to perform mount 2016-11-09 15:16:11 -08:00
Tim St. Clair 7badc1d226 Use indirect streaming path for dockershim & remote CRI runtime 2016-11-08 10:58:38 -08:00
Tim St. Clair 0f028ff660 Remove legacy dockershim streaming 2016-11-08 10:58:38 -08:00
Harry Zhang 64c8d3ad3d Add e2e node test for log path
Update to use pod to check log file
2016-11-08 13:01:25 -05:00
Yu-Ju Hong dcce768a3e Rename experimental-runtime-integration-type to experimental-cri 2016-11-07 11:29:24 -08:00
Kubernetes Submit Queue 182a09c3c7 Merge pull request #35526 from justinsb/fix_35521_b
Automatic merge from submit-queue

kubelet bootstrap: start hostNetwork pods before we have PodCIDR

Network readiness was checked in the pod admission phase, but pods that
fail admission are not retried.  Move the check to the pod start phase.

Issue #35409 
Issue #35521
2016-11-06 12:53:14 -08:00
Kubernetes Submit Queue 28733b0f8b Merge pull request #36201 from yujuhong/cri_inits
Automatic merge from submit-queue

CRI: rearrange kubelet runtime initialization

Consolidate the code used by docker+cri and remote+cri for consistency, and to
prevent changing one without the other. Enforce that
`--experimental-runtime-integration-type` has to be set in order for kubelet to
use the CRI interface, *even for out-of-process shims*. This simplifies the
temporary `if` logic in kubelet while CRI still co-exists with the older logic.
2016-11-06 10:23:52 -08:00
Kubernetes Submit Queue 8371a778f6 Merge pull request #35839 from Random-Liu/add-cri-runtime-status
Automatic merge from submit-queue

CRI: Add Status into CRI.

For https://github.com/kubernetes/kubernetes/issues/35701.
Fixes https://github.com/kubernetes/kubernetes/issues/35701.

This PR added a `Status` call in CRI, and the `RuntimeStatus` is defined as follows:

``` protobuf
message RuntimeCondition {
    // Type of runtime condition.
    optional string type = 1;
    // Status of the condition, one of true/false.
    optional bool status = 2;
    // Brief reason for the condition's last transition.
    optional string reason = 3;
    // Human readable message indicating details about last transition.
    optional string message = 4;
}

message RuntimeStatus {
    // Conditions is an array of current observed runtime conditions.
    repeated RuntimeCondition conditions = 1;
}
```

Currently, only `conditions` is included in `RuntimeStatus`, and the definition is almost the same as `NodeCondition` and `PodCondition` in the K8s API.

@yujuhong @feiskyer @bprashanth If this makes sense, I'll send a follow-up PR to let dockershim return `RuntimeStatus` and let kubelet make use of it.
@yifan-gu @euank Does this make sense to rkt?
/cc @kubernetes/sig-node
2016-11-06 04:16:29 -08:00
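A hypothetical sketch of how a consumer might evaluate readiness from the `conditions` array of the `RuntimeStatus` above; the condition names follow the discussion, but the helper itself is illustrative:

```go
// Treat any required runtime condition that is missing or false as "not ready".
package main

import "fmt"

type RuntimeCondition struct {
	Type    string
	Status  bool
	Reason  string
	Message string
}

type RuntimeStatus struct {
	Conditions []RuntimeCondition
}

// runtimeReady returns an error describing the first required condition that
// is not satisfied, or nil if the runtime looks healthy.
func runtimeReady(status RuntimeStatus, required ...string) error {
	byType := map[string]RuntimeCondition{}
	for _, c := range status.Conditions {
		byType[c.Type] = c
	}
	for _, t := range required {
		c, ok := byType[t]
		if !ok || !c.Status {
			return fmt.Errorf("runtime condition %q not satisfied: %s", t, c.Message)
		}
	}
	return nil
}

func main() {
	status := RuntimeStatus{Conditions: []RuntimeCondition{
		{Type: "RuntimeReady", Status: true},
		{Type: "NetworkReady", Status: false, Message: "network plugin not initialized"},
	}}
	fmt.Println(runtimeReady(status, "RuntimeReady", "NetworkReady"))
}
```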
Kubernetes Submit Queue 649c0ddd0e Merge pull request #35342 from timstclair/rejected
Automatic merge from submit-queue

[AppArmor] Hold bad AppArmor pods in pending rather than rejecting

Fixes https://github.com/kubernetes/kubernetes/issues/32837

Overview of the fix:

If the Kubelet needs to reject a Pod for a reason that the control plane doesn't understand (e.g. which AppArmor profiles are installed on the node), then it might continuously try to run the pod on the same rejecting node. This change adds a concept of "soft rejection", in which the Pod is admitted, but not allowed to run (and therefore held in a pending state). This prevents the pod from being retried on other nodes, but also prevents the high churn. This is consistent with how other missing local resources (e.g. volumes) are handled.

A side effect of the change is that Pods which are not initially runnable will be retried. This is desired behavior since it avoids a race condition when a new node is brought up but the AppArmor profiles have not yet been loaded on it.

``` release-note
Pods with invalid AppArmor configurations will be held in a Pending state, rather than rejected (failed). Check the pod status message to find out why it is not running.
```

@kubernetes/sig-node @timothysc @rrati @davidopp
2016-11-05 22:52:26 -07:00
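A minimal sketch of the "soft rejection" idea described in this PR, with illustrative types rather than the kubelet's actual admission API:

```go
// Distinguish between rejecting a pod outright and admitting it but holding it Pending.
package main

import "fmt"

type AdmitAction int

const (
	AdmitAndRun  AdmitAction = iota // pod is admitted and can start
	AdmitButHold                    // "soft rejection": admitted, kept Pending
	Reject                          // hard rejection: pod is failed
)

type admitResult struct {
	Action  AdmitAction
	Reason  string
	Message string
}

// admitAppArmorPod holds the pod in Pending when its profile is not loaded,
// instead of failing it, so it is not churned across nodes and can start once
// the profile appears.
func admitAppArmorPod(profileLoaded bool) admitResult {
	if !profileLoaded {
		return admitResult{
			Action:  AdmitButHold,
			Reason:  "AppArmor",
			Message: "required AppArmor profile is not loaded on this node yet",
		}
	}
	return admitResult{Action: AdmitAndRun}
}

func main() {
	fmt.Printf("%+v\n", admitAppArmorPod(false))
	fmt.Printf("%+v\n", admitAppArmorPod(true))
}
```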
Random-Liu 772bf8e14d Populate NetworkReady Status. 2016-11-05 00:02:05 -07:00
Random-Liu 4bd9dbf6ad Add RuntimeStatus in container/runtime.go 2016-11-05 00:02:05 -07:00