diff --git a/test/test_owners.csv b/test/test_owners.csv index 7ef5abc30e..4f82a186a9 100644 --- a/test/test_owners.csv +++ b/test/test_owners.csv @@ -9,7 +9,6 @@ AppArmor when running without AppArmor should reject a pod with an AppArmor prof Cadvisor should be healthy on every node.,vishh,0,node Cassandra should create and scale cassandra,fabioy,1,apps CassandraStatefulSet should create statefulset,wojtek-t,1,apps -ClusterDns should create pod that uses dns,sttts,0,network Cluster level logging using Elasticsearch should check that logs from containers are ingested into Elasticsearch,crassirostris,0,instrumentation Cluster level logging using GCL should check that logs from containers are ingested in GCL,crassirostris,0,instrumentation Cluster level logging using GCL should create a constant load with long-living pods and ensure logs delivery,crassirostris,0,instrumentation @@ -18,21 +17,22 @@ Cluster size autoscaling should add node to the particular mig,spxtr,1,autoscali Cluster size autoscaling should correctly scale down after a node is not needed,pmorie,1,autoscaling Cluster size autoscaling should correctly scale down after a node is not needed when there is non autoscaled pool,krousey,1,autoscaling Cluster size autoscaling should disable node pool autoscaling,Q-Lee,1,autoscaling -Cluster size autoscaling should increase cluster size if pending pods are small and there is another node pool that is not autoscaled,apelisse,1,autoscaling Cluster size autoscaling should increase cluster size if pending pods are small,childsb,1,autoscaling +Cluster size autoscaling should increase cluster size if pending pods are small and there is another node pool that is not autoscaled,apelisse,1,autoscaling Cluster size autoscaling should increase cluster size if pods are pending due to host port conflict,brendandburns,1,autoscaling -Cluster size autoscaling shouldn't increase cluster size if pending pod is too large,rrati,0,autoscaling Cluster size autoscaling should scale up correct target pool,mikedanese,1,autoscaling +Cluster size autoscaling shouldn't increase cluster size if pending pod is too large,rrati,0,autoscaling +ClusterDns should create pod that uses dns,sttts,0,network ConfigMap optional updates should be reflected in volume,timothysc,1,apps ConfigMap should be consumable from pods in volume,alex-mohr,1,apps ConfigMap should be consumable from pods in volume as non-root,rrati,0,apps -ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set,rrati,0,apps ConfigMap should be consumable from pods in volume as non-root with FSGroup,roberthbailey,1,apps +ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set,rrati,0,apps ConfigMap should be consumable from pods in volume with defaultMode set,Random-Liu,1,apps +ConfigMap should be consumable from pods in volume with mappings,rrati,0,apps ConfigMap should be consumable from pods in volume with mappings and Item mode set,eparis,1,apps ConfigMap should be consumable from pods in volume with mappings as non-root,apelisse,1,apps ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup,zmerlynn,1,apps -ConfigMap should be consumable from pods in volume with mappings,rrati,0,apps ConfigMap should be consumable in multiple volumes in the same pod,caesarxuchao,1,apps ConfigMap should be consumable via environment variable,ncdc,1,apps ConfigMap should be consumable via the environment,rkouj,0,apps @@ -41,28 +41,37 @@ Container Lifecycle 
Hook when create a pod with lifecycle hook when it is exec h Container Lifecycle Hook when create a pod with lifecycle hook when it is exec hook should execute prestop exec hook properly,rrati,0,node Container Lifecycle Hook when create a pod with lifecycle hook when it is http hook should execute poststart http hook properly,vishh,1,node Container Lifecycle Hook when create a pod with lifecycle hook when it is http hook should execute prestop http hook properly,freehan,1,node -ContainerLogPath Pod with a container printed log to stdout should print log to correct log path,resouer,0,node Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image *,Random-Liu,0,node Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits it should run with the expected status,luxas,1,node Container Runtime Conformance Test container runtime conformance blackbox test when starting a container that exits should report termination message *,alex-mohr,1,node +ContainerLogPath Pod with a container printed log to stdout should print log to correct log path,resouer,0,node CronJob should not emit unexpected warnings,soltysh,1,apps CronJob should not schedule jobs when suspended,soltysh,1,apps CronJob should not schedule new jobs when ForbidConcurrent,soltysh,1,apps CronJob should remove from active list jobs that have been deleted,soltysh,1,apps CronJob should replace jobs when ReplaceConcurrent,soltysh,1,apps CronJob should schedule multiple jobs concurrently,soltysh,1,apps -DaemonRestart Controller Manager should not create/delete replicas across restart,rrati,0,apps -DaemonRestart Kubelet should not restart containers across restart,madhusudancs,1,apps -DaemonRestart Scheduler should continue assigning pods to nodes across restart,lavalamp,1,apps +DNS config map should be able to change configuration,rkouj,0,network +DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios,MrHohn,0,network +DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods when cluster size changed,MrHohn,0,network +DNS should provide DNS for ExternalName services,rmmh,1,network +DNS should provide DNS for pods for Hostname and Subdomain Annotation,mtaufen,1,network +DNS should provide DNS for services,roberthbailey,1,network +DNS should provide DNS for the cluster,roberthbailey,1,network Daemon set should retry creating failed daemon pods,yifan-gu,1,apps Daemon set should run and stop complex daemon,jlowdermilk,1,apps Daemon set should run and stop complex daemon with node affinity,erictune,1,apps Daemon set should run and stop simple daemon,mtaufen,1,apps +DaemonRestart Controller Manager should not create/delete replicas across restart,rrati,0,apps +DaemonRestart Kubelet should not restart containers across restart,madhusudancs,1,apps +DaemonRestart Scheduler should continue assigning pods to nodes across restart,lavalamp,1,apps Density create a batch of pods latency/resource should be within limit when create * pods with * interval,apelisse,1,scalability Density create a batch of pods with higher API QPS latency/resource should be within limit when create * pods with * interval (QPS *),jlowdermilk,1,scalability Density create a sequence of pods latency/resource should be within limit when create * pods with * background pods,wojtek-t,1,scalability Density should allow running maximum capacity pods on nodes,smarterclayton,1,scalability Density 
should allow starting * pods per node using * with * secrets and * daemons,rkouj,0,scalability +Deployment RecreateDeployment should delete old pods and create new ones,pwittrock,0,apps +Deployment RollingUpdateDeployment should delete old pods and create new ones,pwittrock,0,apps Deployment deployment reaping should cascade to its replica sets and pods,wojtek-t,1,apps Deployment deployment should create new pods,pwittrock,0,apps Deployment deployment should delete old replica sets,pwittrock,0,apps @@ -75,19 +84,10 @@ Deployment lack of progress should be reported in the deployment status,kargakis Deployment overlapping deployment should not fight with each other,kargakis,1,apps Deployment paused deployment should be able to scale,kargakis,1,apps Deployment paused deployment should be ignored by the controller,kargakis,0,apps -Deployment RecreateDeployment should delete old pods and create new ones,pwittrock,0,apps -Deployment RollingUpdateDeployment should delete old pods and create new ones,pwittrock,0,apps Deployment scaled rollout deployment should not block on annotation check,kargakis,1,apps DisruptionController evictions: * => *,rkouj,0,scheduling DisruptionController should create a PodDisruptionBudget,rkouj,0,scheduling DisruptionController should update PodDisruptionBudget status,rkouj,0,scheduling -DNS config map should be able to change configuration,rkouj,0,network -DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios,MrHohn,0,network -DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods when cluster size changed,MrHohn,0,network -DNS should provide DNS for ExternalName services,rmmh,1,network -DNS should provide DNS for pods for Hostname and Subdomain Annotation,mtaufen,1,network -DNS should provide DNS for services,roberthbailey,1,network -DNS should provide DNS for the cluster,roberthbailey,1,network Docker Containers should be able to override the image's default arguments (docker cmd),maisem,0,node Docker Containers should be able to override the image's default command and arguments,maisem,0,node Docker Containers should be able to override the image's default commmand (docker entrypoint),maisem,0,node @@ -103,17 +103,24 @@ Downward API volume should provide container's memory limit,krousey,1,node Downward API volume should provide container's memory request,mikedanese,1,node Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set,lavalamp,1,node Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set,freehan,1,node -Downward API volume should provide podname as non-root with fsgroup and defaultMode,rrati,0,node Downward API volume should provide podname as non-root with fsgroup,rrati,0,node +Downward API volume should provide podname as non-root with fsgroup and defaultMode,rrati,0,node Downward API volume should provide podname only,mwielgus,1,node Downward API volume should set DefaultMode on files,davidopp,1,node Downward API volume should set mode on item file,mtaufen,1,node Downward API volume should update annotations on modification,eparis,1,node Downward API volume should update labels on modification,timothysc,1,node -DynamicKubeletConfiguration When a configmap called `kubelet-` is added to the `kube-system` namespace The Kubelet on that node should restart to take up the new config,mwielgus,1,storage Dynamic provisioning DynamicProvisioner Alpha should create and delete alpha 
persistent volumes,rrati,0,storage Dynamic provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes,jsafrane,0,storage Dynamic provisioning DynamicProvisioner should create and delete persistent volumes,jsafrane,0,storage +Dynamic provisioning DynamicProvisioner should not provision a volume in an unmanaged GCE zone.,jszczepkowski,1, +DynamicKubeletConfiguration When a configmap called `kubelet-` is added to the `kube-system` namespace The Kubelet on that node should restart to take up the new config,mwielgus,1,storage +ESIPP should handle updates to source ip annotation,bprashanth,1,network +ESIPP should only target nodes with endpoints,rrati,0,network +ESIPP should work for type=LoadBalancer,fgrzadkowski,1,network +ESIPP should work for type=NodePort,kargakis,1,network +ESIPP should work from pods,cjcullen,1,network +Empty starts a pod,childsb,1, "EmptyDir volumes should support (non-root,0644,default)",timstclair,1,node "EmptyDir volumes should support (non-root,0644,tmpfs)",spxtr,1,node "EmptyDir volumes should support (non-root,0666,default)",dchen1107,1,node @@ -136,33 +143,27 @@ EmptyDir volumes when FSGroup is specified volume on tmpfs should have the corre EmptyDir wrapper volumes should not cause race condition when used for configmaps,mtaufen,1,node EmptyDir wrapper volumes should not cause race condition when used for git_repo,brendandburns,1,node EmptyDir wrapper volumes should not conflict,deads2k,1,node -Empty does nothing,cjcullen,1,node -ESIPP should handle updates to source ip annotation,bprashanth,1,network -ESIPP should only target nodes with endpoints,rrati,0,network -ESIPP should work for type=LoadBalancer,fgrzadkowski,1,network -ESIPP should work for type=NodePort,kargakis,1,network -ESIPP should work from pods,cjcullen,1,network -Etcd failure should recover from network partition with master,justinsb,1,api-machinery Etcd failure should recover from SIGKILL,pmorie,1,api-machinery +Etcd failure should recover from network partition with master,justinsb,1,api-machinery Events should be sent by kubelets and the scheduler about pods scheduling and running,zmerlynn,1,node +Federated Services Without Clusters should succeed when a service is created,rmmh,1,federation +Federated Services with clusters DNS non-local federated service missing local service should never find DNS entries for a missing local service,mml,0,federation +Federated Services with clusters DNS non-local federated service should be able to discover a non-local federated service,jlowdermilk,1,federation +Federated Services with clusters DNS should be able to discover a federated service,derekwaynecarr,1,federation +Federated Services with clusters Federated Service should be deleted from underlying clusters when OrphanDependents is false,zmerlynn,1, +Federated Services with clusters Federated Service should create matching services in underlying clusters,thockin,1, +Federated Services with clusters Federated Service should not be deleted from underlying clusters when OrphanDependents is nil,yifan-gu,1, +Federated Services with clusters Federated Service should not be deleted from underlying clusters when OrphanDependents is true,davidopp,1, Federated ingresses Federated Ingresses Ingress connectivity and DNS should be able to connect to a federated ingress via its load balancer,rmmh,1,federation Federated ingresses Federated Ingresses should be created and deleted successfully,dchen1107,1,federation Federated ingresses Federated Ingresses 
should be deleted from underlying clusters when OrphanDependents is false,nikhiljindal,0,federation Federated ingresses Federated Ingresses should create and update matching ingresses in underlying clusters,rrati,0,federation Federated ingresses Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is nil,nikhiljindal,0,federation Federated ingresses Federated Ingresses should not be deleted from underlying clusters when OrphanDependents is true,nikhiljindal,0,federation -Federated Services with clusters DNS non-local federated service missing local service should never find DNS entries for a missing local service,mml,0,federation -Federated Services with clusters DNS non-local federated service should be able to discover a non-local federated service,jlowdermilk,1,federation -Federated Services with clusters DNS should be able to discover a federated service,derekwaynecarr,1,federation -Federated Services with clusters service creation should be deleted from underlying clusters when OrphanDependents is false,rkouj,0,federation -Federated Services with clusters service creation should create matching services in underlying clusters,jbeda,1,federation -Federated Services with clusters service creation should not be deleted from underlying clusters when OrphanDependents is nil,rkouj,0,federation -Federated Services with clusters service creation should not be deleted from underlying clusters when OrphanDependents is true,rkouj,0,federation -Federated Services Without Clusters should succeed when a service is created,rmmh,1,federation -Federation apiserver Admission control should not be able to create resources if namespace does not exist,alex-mohr,1,federation Federation API server authentication should accept cluster resources when the client has right authentication credentials,davidopp,1,federation Federation API server authentication should not accept cluster resources when the client has invalid authentication credentials,yujuhong,1,federation Federation API server authentication should not accept cluster resources when the client has no authentication credentials,nikhiljindal,1,federation +Federation apiserver Admission control should not be able to create resources if namespace does not exist,alex-mohr,1,federation Federation apiserver Cluster objects should be created and deleted successfully,rrati,0,federation Federation daemonsets DaemonSet objects should be created and deleted successfully,nikhiljindal,0,federation Federation daemonsets DaemonSet objects should be deleted from underlying clusters when OrphanDependents is false,nikhiljindal,0,federation @@ -191,18 +192,18 @@ Federation secrets Secret objects should not be deleted from underlying clusters Federation secrets Secret objects should not be deleted from underlying clusters when OrphanDependents is true,nikhiljindal,0,federation Firewall rule should create valid firewall rules for LoadBalancer type service,rkouj,0,network Firewall rule should have correct firewall rules for e2e cluster,rkouj,0,network -Garbage Collection Test: * Should eventually garbage collect containers when we exceed the number of dead containers per container,Random-Liu,0,cluster-lifecycle -Garbage collector should delete pods created by rc when not orphaning,justinsb,1,cluster-lifecycle -Garbage collector should delete RS created by deployment when not orphaning,rkouj,0,cluster-lifecycle -Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil,zmerlynn,1,cluster-lifecycle 
-Garbage collector should orphan pods created by rc if delete options say so,fabioy,1,cluster-lifecycle -Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true,rkouj,0,cluster-lifecycle GCP Volumes GlusterFS should be mountable,nikhiljindal,0,storage GCP Volumes NFSv4 should be mountable for NFSv4,nikhiljindal,0,storage -"Generated release_1_5 clientset should create pods, delete pods, watch pods",rrati,0,api-machinery -"Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs",soltysh,1,api-machinery GKE local SSD should write and read from node local SSD,fabioy,0,storage GKE node pools should create a cluster with multiple node pools,fabioy,1,cluster-lifecycle +Garbage Collection Test: * Should eventually garbage collect containers when we exceed the number of dead containers per container,Random-Liu,0,cluster-lifecycle +Garbage collector should delete RS created by deployment when not orphaning,rkouj,0,cluster-lifecycle +Garbage collector should delete pods created by rc when not orphaning,justinsb,1,cluster-lifecycle +Garbage collector should orphan RS created by deployment when deleteOptions.OrphanDependents is true,rkouj,0,cluster-lifecycle +Garbage collector should orphan pods created by rc if delete options say so,fabioy,1,cluster-lifecycle +Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil,zmerlynn,1,cluster-lifecycle +"Generated release_1_5 clientset should create pods, delete pods, watch pods",rrati,0,api-machinery +"Generated release_1_5 clientset should create v2alpha1 cronJobs, delete cronJobs, watch cronJobs",soltysh,1,api-machinery HA-master survive addition/removal replicas different zones,derekwaynecarr,0,api-machinery HA-master survive addition/removal replicas multizone workers,rkouj,0,api-machinery HA-master survive addition/removal replicas same zone,derekwaynecarr,0,api-machinery @@ -235,8 +236,8 @@ Kubectl client Kubectl api-versions should check if v1 is in available api versi Kubectl client Kubectl apply should apply a new configuration to an existing RC,pwittrock,0,cli Kubectl client Kubectl apply should reuse port when apply to an existing SVC,deads2k,0,cli Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info,pwittrock,0,cli -Kubectl client Kubectl create quota should create a quota without scopes,xiang90,1,cli Kubectl client Kubectl create quota should create a quota with scopes,rrati,0,cli +Kubectl client Kubectl create quota should create a quota without scopes,xiang90,1,cli Kubectl client Kubectl create quota should reject quota with invalid scopes,brendandburns,1,cli Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods,pwittrock,0,cli Kubectl client Kubectl expose should create services for rc,pwittrock,0,cli @@ -245,17 +246,17 @@ Kubectl client Kubectl logs should be able to retrieve and filter logs,jlowdermi Kubectl client Kubectl patch should add annotations for pods in rc,janetkuo,0,cli Kubectl client Kubectl replace should update a single-container pod's image,rrati,0,cli Kubectl client Kubectl rolling-update should support rolling-update to same image,janetkuo,0,cli +"Kubectl client Kubectl run --rm job should create a job from an image, then delete the job",soltysh,1,cli Kubectl client Kubectl run default should create an rc or deployment from an image,janetkuo,0,cli Kubectl client Kubectl run deployment should 
create a deployment from an image,janetkuo,0,cli Kubectl client Kubectl run job should create a job from an image when restart is OnFailure,soltysh,1,cli Kubectl client Kubectl run pod should create a pod from an image when restart is Never,janetkuo,0,cli Kubectl client Kubectl run rc should create an rc from an image,janetkuo,0,cli -"Kubectl client Kubectl run --rm job should create a job from an image, then delete the job",soltysh,1,cli Kubectl client Kubectl taint should remove all the taints with the same key off a node,erictune,1,cli Kubectl client Kubectl taint should update the taint on a node, pwittrock,0,cli Kubectl client Kubectl version should check is all data is printed,janetkuo,0,cli -Kubectl client Proxy server should support proxy with --port 0,ncdc,1,cli Kubectl client Proxy server should support --unix-socket=/path,zmerlynn,1,cli +Kubectl client Proxy server should support proxy with --port 0,ncdc,1,cli Kubectl client Simple pod should handle in-cluster config,rkouj,0,cli Kubectl client Simple pod should return command exit codes,yifan-gu,1,cli Kubectl client Simple pod should support exec,ncdc,0,cli @@ -269,37 +270,35 @@ Kubelet Cgroup Manager Pod containers On scheduling a BestEffort Pod Pod contain Kubelet Cgroup Manager Pod containers On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup,derekwaynecarr,0,node Kubelet Cgroup Manager Pod containers On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root,derekwaynecarr,0,node Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created,davidopp,1,node -kubelet Clean up pods on node kubelet should be able to delete * pods per node in *.,yujuhong,0,node +Kubelet Container Manager Validate OOM score adjustments once the node is setup Kubelet's oom-score-adj should be -999,kargakis,1,node "Kubelet Container Manager Validate OOM score adjustments once the node is setup burstable container's oom-score-adj should be between [2, 1000)",derekwaynecarr,1,node Kubelet Container Manager Validate OOM score adjustments once the node is setup docker daemon's oom-score-adj should be -999,thockin,1,node Kubelet Container Manager Validate OOM score adjustments once the node is setup guaranteed container's oom-score-adj should be -998,kargakis,1,node -Kubelet Container Manager Validate OOM score adjustments once the node is setup Kubelet's oom-score-adj should be -999,kargakis,1,node Kubelet Container Manager Validate OOM score adjustments once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000,timothysc,1,node Kubelet Eviction Manager hard eviction test pod using the most disk space gets evicted when the node disk usage is above the eviction hard threshold should evict the pod using the most disk space,rkouj,0,node -Kubelet experimental resource usage tracking resource tracking for * pods per node,yujuhong,0,node -kubelet host cleanup with volume mounts Host cleanup after disrupting NFS volume *,yujuhong,0,node -KubeletManagedEtcHosts should test kubelet managed /etc/hosts file,Random-Liu,1,node -Kubelet regular resource usage tracking resource tracking for * pods per node,yujuhong,0,node Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node,rkouj,0,node +Kubelet experimental resource usage tracking resource tracking for * pods per node,yujuhong,0,node +Kubelet regular 
resource usage tracking resource tracking for * pods per node,yujuhong,0,node Kubelet when scheduling a busybox command in a pod it should print the output to logs,ixdy,1,node Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete,smarterclayton,1,node Kubelet when scheduling a busybox command that always fails in a pod should have an error terminated reason,deads2k,1,node Kubelet when scheduling a read only busybox container it should not write to root filesystem,timothysc,1,node +KubeletManagedEtcHosts should test kubelet managed /etc/hosts file,Random-Liu,1,node Kubernetes Dashboard should check that the kubernetes-dashboard instance is alive,wonderfly,0,ui LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.,cjcullen,1,node Liveness liveness pods should be automatically restarted,derekwaynecarr,0,node -Loadbalancing: L7 GCE shoud create ingress with given static-ip,derekwaynecarr,0,network -Loadbalancing: L7 GCE should conform to Ingress spec,derekwaynecarr,0,network -Loadbalancing: L7 Nginx should conform to Ingress spec,ncdc,1,network Load capacity should be able to handle * pods per node * with * secrets and * daemons,rkouj,0,network +Loadbalancing: L7 GCE should conform to Ingress spec,derekwaynecarr,0,network +Loadbalancing: L7 GCE should create ingress with given static-ip,eparis,1, +Loadbalancing: L7 Nginx should conform to Ingress spec,ncdc,1,network "Logging soak should survive logging 1KB every * seconds, for a duration of *, scaling up to * pods per node",justinsb,1,node "MemoryEviction when there is memory pressure should evict pods in the correct order (besteffort first, then burstable, then guaranteed)",ixdy,1,node Mesos applies slave attributes as labels,justinsb,1,apps Mesos schedules pods annotated with roles on correct slaves,timstclair,1,apps Mesos starts static pods on every node in the mesos cluster,lavalamp,1,apps +MetricsGrabber should grab all metrics from API server.,gmarek,0,instrumentation MetricsGrabber should grab all metrics from a ControllerManager.,gmarek,0,instrumentation MetricsGrabber should grab all metrics from a Kubelet.,gmarek,0,instrumentation -MetricsGrabber should grab all metrics from API server.,gmarek,0,instrumentation MetricsGrabber should grab all metrics from a Scheduler.,gmarek,0,instrumentation MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted,roberthbailey,1,node MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted,justinsb,1,node @@ -311,6 +310,13 @@ Namespaces should always delete fast (ALL of 100 namespaces in 150 seconds),rmmh Namespaces should delete fast enough (90 percent of 100 namespaces in 150 seconds),kevin-wangzefeng,1,api-machinery Namespaces should ensure that all pods are removed when a namespace is deleted.,xiang90,1,api-machinery Namespaces should ensure that all services are removed when a namespace is deleted.,pmorie,1,api-machinery +Network Partition *,foxish,0,network +Network Partition Pods should return to running and ready state after network partition is healed *,foxish,0,network +Network Partition should come back up if node goes down,foxish,0,network +Network Partition should create new pods when node is partitioned,foxish,0,network +Network Partition should eagerly create replacement pod during network partition when termination grace is non-zero,foxish,0,network +Network Partition should not reschedule stateful pods if there is a network 
partition,brendandburns,0,network +Network should set TCP CLOSE_WAIT timeout,bowei,0,network Networking Granular Checks: Pods should function for intra-pod communication: http,stts,0,network Networking Granular Checks: Pods should function for intra-pod communication: udp,freehan,0,network Networking Granular Checks: Pods should function for node-pod communication: http,spxtr,1,network @@ -329,49 +335,43 @@ Networking IPerf should transfer ~ 1GB onto the service endpoint * servers (maxi Networking should check kube-proxy urls,lavalamp,1,network Networking should provide Internet connection for containers,sttts,0,network "Networking should provide unchanging, static URL paths for kubernetes api services",freehan,0,network -Network Partition *,foxish,0,network -Network Partition Pods should return to running and ready state after network partition is healed *,foxish,0,network -Network Partition should come back up if node goes down,foxish,0,network -Network Partition should create new pods when node is partitioned,foxish,0,network -Network Partition should eagerly create replacement pod during network partition when termination grace is non-zero,foxish,0,network -Network Partition should not reschedule stateful pods if there is a network partition,brendandburns,0,network -Network should set TCP CLOSE_WAIT timeout,bowei,0,network -NodeOutOfDisk runs out of disk space,vishh,0,node -NodeProblemDetector KernelMonitor should generate node condition and events for corresponding errors,Random-Liu,0,node -Nodes Resize should be able to add nodes,piosz,1,cluster-lifecycle -Nodes Resize should be able to delete nodes,zmerlynn,1,cluster-lifecycle NoExecuteTaintManager doesn't evict pod with tolerations from tainted nodes,freehan,0,scheduling NoExecuteTaintManager eventually evict pod with finite tolerations from tainted nodes,freehan,0,scheduling NoExecuteTaintManager evicts pods from tainted nodes,freehan,0,scheduling NoExecuteTaintManager removing taint cancels eviction,freehan,0,scheduling +NodeOutOfDisk runs out of disk space,vishh,0,node +NodeProblemDetector KernelMonitor should generate node condition and events for corresponding errors,Random-Liu,0,node +Nodes Resize should be able to add nodes,piosz,1,cluster-lifecycle +Nodes Resize should be able to delete nodes,zmerlynn,1,cluster-lifecycle Opaque resources should account opaque integer resources in pods with multiple containers.,ConnorDoyle,0,node Opaque resources should not break pods that do not consume opaque integer resources.,ConnorDoyle,0,node Opaque resources should not schedule pods that exceed the available amount of opaque integer resource.,ConnorDoyle,0,node Opaque resources should schedule pods that do consume opaque integer resources.,ConnorDoyle,0,node -PersistentVolumes persistentvolumereclaim:vsphere should delete persistent volume when reclaimPolicy set to delete and associated claim is deleted,copejon,0,storage -PersistentVolumes persistentvolumereclaim:vsphere should retain persistent volume when reclaimPolicy set to retain when associated claim is deleted,copejon,0,storage PersistentVolumes PersistentVolumes:GCEPD should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach,copejon,0,storage PersistentVolumes PersistentVolumes:GCEPD should test that deleting the Namespace of a PVC and Pod causes the successful detach of Persistent Disk,thockin,1,storage PersistentVolumes PersistentVolumes:GCEPD should test that deleting the PV before the pod does not cause pod deletion to fail on PD 
detach,copejon,0,storage -PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access,copejon,0,storage -PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access,copejon,0,storage -PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access,copejon,0,storage +PersistentVolumes PersistentVolumes:NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.,lavalamp,1, PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access,copejon,0,storage PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access,copejon,0,storage PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access,copejon,0,storage PersistentVolumes PersistentVolumes:NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access,copejon,0,storage +PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access,copejon,0,storage +PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access,copejon,0,storage +PersistentVolumes PersistentVolumes:NFS with multiple PVs and PVCs all in same ns should create 4 PVs and 2 PVCs: test write access,copejon,0,storage PersistentVolumes Selector-Label Volume Binding:vsphere should bind volume with claim for given label,copejon,0,storage +PersistentVolumes persistentvolumereclaim:vsphere should delete persistent volume when reclaimPolicy set to delete and associated claim is deleted,copejon,0,storage +PersistentVolumes persistentvolumereclaim:vsphere should retain persistent volume when reclaimPolicy set to retain when associated claim is deleted,copejon,0,storage +PersistentVolumes when kubelet restarts *,rkouj,0,storage PersistentVolumes:vsphere should test that deleting a PVC before the pod does not cause pod deletion to fail on PD detach,rkouj,0,storage PersistentVolumes:vsphere should test that deleting the PV before the pod does not cause pod deletion to fail on PD detach,rkouj,0,storage -PersistentVolumes when kubelet restarts *,rkouj,0,storage Pet Store should scale to persist a nominal number ( * ) of transactions in * seconds,xiang90,1,apps +"Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host",saad-ali,0,storage +"Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully.",saad-ali,0,storage Pod Disks should be able to detach from a node which was deleted,rkouj,0,storage Pod Disks should be able to detach from a node whose api object was deleted,rkouj,0,storage -"Pod Disks Should schedule a pod w/ a readonly PD on two hosts, then remove both gracefully.",saad-ali,0,storage -"Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully.",saad-ali,1,storage -"Pod Disks Should schedule a pod w/ a RW PD, gracefully remove it, then schedule it on another host",saad-ali,0,storage "Pod Disks should schedule a pod w/ a RW PD shared between multiple containers, write to PD, delete pod, verify contents, and repeat in rapid succession",saad-ali,0,storage "Pod Disks 
should schedule a pod w/ a RW PD, ungracefully remove it, then schedule it on another host",mml,1,storage +"Pod Disks should schedule a pod w/ a readonly PD on two hosts, then remove both ungracefully.",saad-ali,1,storage "Pod Disks should schedule a pod w/two RW PDs both mounted to one container, write to PD, verify contents, delete pod, recreate pod, verify contents, and repeat in rapid succession",saad-ali,0,storage Pod garbage collector should handle the creation of 1000 pods,wojtek-t,1,node Pods Extended Delete Grace Period should be submitted and removed,rkouj,0,node @@ -395,14 +395,43 @@ Port forwarding With a server listening on localhost should support forwarding o "Port forwarding With a server listening on localhost that expects no client request should support a client that connects, sends data, and disconnects",rkouj,0,node PreStop should call prestop when killing a pod,ncdc,1,node PrivilegedPod should enable privileged commands,derekwaynecarr,0,node +Probing container should *not* be restarted with a /healthz http liveness probe,Random-Liu,0,node +"Probing container should *not* be restarted with a exec ""cat /tmp/health"" liveness probe",Random-Liu,0,node +Probing container should be restarted with a /healthz http liveness probe,Random-Liu,0,node Probing container should be restarted with a docker exec liveness probe with timeout,timstclair,0,node "Probing container should be restarted with a exec ""cat /tmp/health"" liveness probe",Random-Liu,0,node -Probing container should be restarted with a /healthz http liveness probe,Random-Liu,0,node Probing container should have monotonically increasing restart count,Random-Liu,0,node -"Probing container should *not* be restarted with a exec ""cat /tmp/health"" liveness probe",Random-Liu,0,node -Probing container should *not* be restarted with a /healthz http liveness probe,Random-Liu,0,node Probing container with readiness probe should not be ready before initial delay and never restart,Random-Liu,0,node Probing container with readiness probe that fails should never be ready and never restart,Random-Liu,0,node +Projected optional updates should be reflected in volume,pmorie,1,storage +Projected should be able to mount in a volume regardless of a different secret existing with same name in different namespace,Q-Lee,1, +Projected should be consumable from pods in volume,yujuhong,1,storage +Projected should be consumable from pods in volume as non-root,fabioy,1,storage +Projected should be consumable from pods in volume as non-root with FSGroup,timothysc,1,storage +Projected should be consumable from pods in volume as non-root with defaultMode and fsGroup set,xiang90,1,storage +Projected should be consumable from pods in volume with defaultMode set,piosz,1,storage +Projected should be consumable from pods in volume with mappings,lavalamp,1,storage +Projected should be consumable from pods in volume with mappings and Item Mode set,dchen1107,1,storage +Projected should be consumable from pods in volume with mappings and Item mode set,kevin-wangzefeng,1,storage +Projected should be consumable from pods in volume with mappings as non-root,roberthbailey,1,storage +Projected should be consumable from pods in volume with mappings as non-root with FSGroup,ixdy,1,storage +Projected should be consumable in multiple volumes in a pod,ixdy,1,storage +Projected should be consumable in multiple volumes in the same pod,luxas,1,storage +Projected should project all components that make up the projection API,fabioy,1,storage +Projected should provide 
container's cpu limit,justinsb,1,storage +Projected should provide container's cpu request,smarterclayton,1,storage +Projected should provide container's memory limit,cjcullen,1,storage +Projected should provide container's memory request,spxtr,1,storage +Projected should provide node allocatable (cpu) as default cpu limit if the limit is not set,zmerlynn,1,storage +Projected should provide node allocatable (memory) as default memory limit if the limit is not set,mikedanese,1,storage +Projected should provide podname as non-root with fsgroup,fabioy,1,storage +Projected should provide podname as non-root with fsgroup and defaultMode,gmarek,1,storage +Projected should provide podname only,vishh,1,storage +Projected should set DefaultMode on files,timstclair,1,storage +Projected should set mode on item file,gmarek,1,storage +Projected should update annotations on modification,janetkuo,1,storage +Projected should update labels on modification,xiang90,1,storage +Projected updates should be reflected in volume,yujuhong,1,storage Proxy * should proxy logs on node,rrati,0,node Proxy * should proxy logs on node using proxy subresource,rrati,0,node Proxy * should proxy logs on node with explicit kubelet port,ixdy,1,node @@ -424,9 +453,10 @@ ReplicationController should serve a basic image on each replica with a private ReplicationController should serve a basic image on each replica with a public image,krousey,1,apps ReplicationController should surface a failure condition on a common issue like exceeded quota,kargakis,0,apps Rescheduler should ensure that critical pod is scheduled in case there is no resources available,mtaufen,1,apps +Resource-usage regular resource usage tracking resource tracking for * pods per node,janetkuo,1, ResourceQuota should create a ResourceQuota and capture the life of a configMap.,timstclair,1,api-machinery -ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.,bgrant0607,1,api-machinery ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class.,derekwaynecarr,0,api-machinery +ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim.,bgrant0607,1,api-machinery ResourceQuota should create a ResourceQuota and capture the life of a pod.,pmorie,1,api-machinery ResourceQuota should create a ResourceQuota and capture the life of a replication controller.,rrati,0,api-machinery ResourceQuota should create a ResourceQuota and capture the life of a secret.,ncdc,1,api-machinery @@ -434,23 +464,23 @@ ResourceQuota should create a ResourceQuota and capture the life of a service.,t ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.,krousey,1,api-machinery ResourceQuota should verify ResourceQuota with best effort scope.,mml,1,api-machinery ResourceQuota should verify ResourceQuota with terminating scopes.,ncdc,1,api-machinery -Resource-usage regular resource usage tracking resource tracking for * pods per node,janetkuo,1, Restart Docker Daemon Network should recover from ip leak,bprashanth,0,node Restart should restart all nodes and ensure all nodes and pods recover,rrati,0,node RethinkDB should create and stop rethinkdb servers,mwielgus,1,apps +SSH should SSH to all nodes and run commands,quinton-hoole,0, SchedulerPredicates validates MaxPods limit number of pods that are allowed to run,gmarek,0,scheduling SchedulerPredicates validates resource limits of pods that are allowed to run,gmarek,0,scheduling 
-SchedulerPredicates validates that a pod with an invalid NodeAffinity is rejected,deads2k,1,scheduling -SchedulerPredicates validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid,smarterclayton,1,scheduling -SchedulerPredicates validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work,rrati,0,scheduling +SchedulerPredicates validates that Inter-pod-Affinity is respected if not matching,rrati,0,scheduling SchedulerPredicates validates that InterPod Affinity and AntiAffinity is respected if matching,yifan-gu,1,scheduling SchedulerPredicates validates that InterPodAffinity is respected if matching,kevin-wangzefeng,1,scheduling SchedulerPredicates validates that InterPodAffinity is respected if matching with multiple Affinities,caesarxuchao,1,scheduling -SchedulerPredicates validates that Inter-pod-Affinity is respected if not matching,rrati,0,scheduling SchedulerPredicates validates that InterPodAntiAffinity is respected if matching 2,sttts,0,scheduling SchedulerPredicates validates that NodeAffinity is respected if not matching,fgrzadkowski,0,scheduling SchedulerPredicates validates that NodeSelector is respected if matching,gmarek,0,scheduling SchedulerPredicates validates that NodeSelector is respected if not matching,gmarek,0,scheduling +SchedulerPredicates validates that a pod with an invalid NodeAffinity is rejected,deads2k,1,scheduling +SchedulerPredicates validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid,smarterclayton,1,scheduling +SchedulerPredicates validates that embedding the JSON PodAffinity and PodAntiAffinity setting as a string in the annotation value work,rrati,0,scheduling SchedulerPredicates validates that required NodeAffinity setting is respected if matching,mml,1,scheduling SchedulerPredicates validates that taints-tolerations is respected if matching,jlowdermilk,1,scheduling SchedulerPredicates validates that taints-tolerations is respected if not matching,derekwaynecarr,1,scheduling @@ -458,11 +488,11 @@ Secret should create a pod that reads a secret,luxas,1,apps Secrets optional updates should be reflected in volume,justinsb,1,apps Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace,rkouj,0,apps Secrets should be consumable from pods in env vars,mml,1,apps -Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set,rrati,0,apps Secrets should be consumable from pods in volume,rrati,0,apps +Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set,rrati,0,apps Secrets should be consumable from pods in volume with defaultMode set,derekwaynecarr,1,apps -Secrets should be consumable from pods in volume with mappings and Item Mode set,quinton-hoole,1,apps Secrets should be consumable from pods in volume with mappings,jbeda,1,apps +Secrets should be consumable from pods in volume with mappings and Item Mode set,quinton-hoole,1,apps Secrets should be consumable in multiple volumes in a pod,alex-mohr,1,apps Secrets should be consumable via the environment,ixdy,1,apps Security Context should support container.SecurityContext.RunAsUser,alex-mohr,1,apps @@ -475,9 +505,10 @@ Security Context should support seccomp default which is unconfined,lavalamp,1,a Security Context should support volume SELinux relabeling,thockin,1,apps Security Context should support volume 
SELinux relabeling when using hostIPC,alex-mohr,1,apps Security Context should support volume SELinux relabeling when using hostPID,dchen1107,1,apps +Service endpoints latency should not be very high,cjcullen,1,network +ServiceAccounts should allow opting out of API token automount,bgrant0607,1, ServiceAccounts should ensure a single API token exists,liggitt,0,network ServiceAccounts should mount an API token into pods,liggitt,0,network -Service endpoints latency should not be very high,cjcullen,1,network ServiceLoadBalancer should support simple GET on Ingress ips,bprashanth,0,network Services should be able to change the type and ports of a service,bprashanth,0,network Services should be able to create a functioning NodePort service,bprashanth,0,network @@ -496,14 +527,13 @@ Services should work after restarting apiserver,bprashanth,0,network Services should work after restarting kube-proxy,bprashanth,0,network SimpleMount should be able to mount an emptydir on a container,rrati,0,node "Spark should start spark master, driver and workers",jszczepkowski,1,apps -SSH should SSH to all nodes and run commands,quinton-hoole,0, "Staging client repo client should create pods, delete pods, watch pods",jbeda,1,api-machinery StatefulSet Basic StatefulSet functionality Scaling down before scale up is finished should wait until current pod will be running and ready before it will be removed,derekwaynecarr,0,apps StatefulSet Basic StatefulSet functionality Scaling should happen in predictable order and halt if any stateful pod is unhealthy,derekwaynecarr,0,apps +StatefulSet Basic StatefulSet functionality Should recreate evicted statefulset,rrati,0,apps StatefulSet Basic StatefulSet functionality should allow template updates,rkouj,0,apps StatefulSet Basic StatefulSet functionality should not deadlock when a pod's predecessor fails,rkouj,0,apps StatefulSet Basic StatefulSet functionality should provide basic identity,bprashanth,1,apps -StatefulSet Basic StatefulSet functionality Should recreate evicted statefulset,rrati,0,apps StatefulSet Deploy clustered applications should creating a working CockroachDB cluster,rkouj,0,apps StatefulSet Deploy clustered applications should creating a working mysql cluster,yujuhong,1,apps StatefulSet Deploy clustered applications should creating a working redis cluster,yifan-gu,1,apps @@ -521,20 +551,27 @@ Upgrade node upgrade should maintain a functioning cluster,zmerlynn,1,cluster-li Variable Expansion should allow composing env vars into new env vars,derekwaynecarr,0,node Variable Expansion should allow substituting values in a container's args,dchen1107,1,node Variable Expansion should allow substituting values in a container's command,mml,1,node +Volume Disk Format verify disk format type - eagerzeroedthick is honored for dynamically provisioned pv using storageclass,piosz,1, +Volume Disk Format verify disk format type - thin is honored for dynamically provisioned pv using storageclass,alex-mohr,1, +Volume Disk Format verify disk format type - zeroedthick is honored for dynamically provisioned pv using storageclass,jlowdermilk,1, Volume Placement provision pod on node with matching labels should create and delete pod with the same volume source attach/detach to different worker nodes,mml,0,storage Volume Placement provision pod on node with matching labels should create and delete pod with the same volume source on the same worker node,mml,0,storage -Volumes CephFS should be mountable,Q-Lee,1,storage Volumes Ceph RBD should be mountable,fabioy,1,storage +Volumes 
CephFS should be mountable,Q-Lee,1,storage Volumes Cinder should be mountable,cjcullen,1,storage Volumes ConfigMap should be mountable,rkouj,0,storage Volumes GlusterFS should be mountable,eparis,1,storage -Volumes iSCSI should be mountable,jsafrane,1,storage Volumes NFS should be mountable,rrati,0,storage Volumes PD should be mountable,caesarxuchao,1,storage +Volumes iSCSI should be mountable,jsafrane,1,storage Volumes vsphere should be mountable,jsafrane,0,storage -"when we run containers that should cause * should eventually see *, and then evict all of the correct pods",Random-Liu,0,node k8s.io/kubernetes/cmd/genutils,rmmh,1, k8s.io/kubernetes/cmd/hyperkube,jbeda,0, +k8s.io/kubernetes/cmd/kube-apiserver/app/options,nikhiljindal,0, +k8s.io/kubernetes/cmd/kube-controller-manager/app,dchen1107,1, +k8s.io/kubernetes/cmd/kube-discovery/app,pmorie,1, +k8s.io/kubernetes/cmd/kube-proxy/app,luxas,1, +k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/install,ixdy,1, k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/validation,caesarxuchao,1, k8s.io/kubernetes/cmd/kubeadm/app/cmd,caesarxuchao,1, k8s.io/kubernetes/cmd/kubeadm/app/discovery,brendandburns,0, @@ -542,18 +579,15 @@ k8s.io/kubernetes/cmd/kubeadm/app/images,davidopp,1, k8s.io/kubernetes/cmd/kubeadm/app/master,apprenda,0, k8s.io/kubernetes/cmd/kubeadm/app/node,apprenda,0, k8s.io/kubernetes/cmd/kubeadm/app/phases/addons,rkouj,0, -k8s.io/kubernetes/cmd/kubeadm/app/phases/certs/pkiutil,ixdy,1, k8s.io/kubernetes/cmd/kubeadm/app/phases/certs,rkouj,0, -k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig,rkouj,0, +k8s.io/kubernetes/cmd/kubeadm/app/phases/certs/pkiutil,ixdy,1, +k8s.io/kubernetes/cmd/kubeadm/app/phases/token,pmorie,1, k8s.io/kubernetes/cmd/kubeadm/app/preflight,apprenda,0, k8s.io/kubernetes/cmd/kubeadm/app/util,krousey,1, +k8s.io/kubernetes/cmd/kubeadm/app/util/kubeconfig,apelisse,1, +k8s.io/kubernetes/cmd/kubeadm/app/util/token,sttts,1, k8s.io/kubernetes/cmd/kubeadm/test/cmd,krousey,0, -k8s.io/kubernetes/cmd/kube-aggregator/pkg/apiserver,brendandburns,0, -k8s.io/kubernetes/cmd/kube-apiserver/app/options,nikhiljindal,0, -k8s.io/kubernetes/cmd/kube-controller-manager/app,dchen1107,1, -k8s.io/kubernetes/cmd/kube-discovery/app,pmorie,1, k8s.io/kubernetes/cmd/kubelet/app,derekwaynecarr,0, -k8s.io/kubernetes/cmd/kube-proxy/app,luxas,1, k8s.io/kubernetes/cmd/libs/go2idl/client-gen/types,caesarxuchao,0, k8s.io/kubernetes/cmd/libs/go2idl/go-to-protobuf/protobuf,smarterclayton,0, k8s.io/kubernetes/cmd/libs/go2idl/openapi-gen/generators,davidopp,1, @@ -562,10 +596,10 @@ k8s.io/kubernetes/examples,Random-Liu,0, k8s.io/kubernetes/federation/apis/federation/install,nikhiljindal,0, k8s.io/kubernetes/federation/apis/federation/validation,nikhiljindal,0, k8s.io/kubernetes/federation/cmd/federation-controller-manager/app,kzwang,0, +k8s.io/kubernetes/federation/pkg/dnsprovider,sttts,1, k8s.io/kubernetes/federation/pkg/dnsprovider/providers/aws/route53,cjcullen,1, k8s.io/kubernetes/federation/pkg/dnsprovider/providers/coredns,brendandburns,0, k8s.io/kubernetes/federation/pkg/dnsprovider/providers/google/clouddns,madhusudancs,1, -k8s.io/kubernetes/federation/pkg/dnsprovider,sttts,1, k8s.io/kubernetes/federation/pkg/federation-controller/cluster,nikhiljindal,0, k8s.io/kubernetes/federation/pkg/federation-controller/configmap,mwielgus,0, k8s.io/kubernetes/federation/pkg/federation-controller/daemonset,childsb,1, @@ -579,17 +613,25 @@ k8s.io/kubernetes/federation/pkg/federation-controller/util,bgrant0607,1, 
k8s.io/kubernetes/federation/pkg/federation-controller/util/eventsink,luxas,1, k8s.io/kubernetes/federation/pkg/federation-controller/util/planner,Q-Lee,1, k8s.io/kubernetes/federation/pkg/federation-controller/util/podanalyzer,caesarxuchao,1, -k8s.io/kubernetes/federation/pkg/kubefed/init,madhusudancs,0, k8s.io/kubernetes/federation/pkg/kubefed,madhusudancs,0, -k8s.io/kubernetes/federation/registry/cluster/etcd,nikhiljindal,0, +k8s.io/kubernetes/federation/pkg/kubefed/init,madhusudancs,0, k8s.io/kubernetes/federation/registry/cluster,nikhiljindal,0, -k8s.io/kubernetes/hack/cmd/teststale,thockin,1, +k8s.io/kubernetes/federation/registry/cluster/etcd,nikhiljindal,0, k8s.io/kubernetes/hack,thockin,1, +k8s.io/kubernetes/hack/cmd/teststale,thockin,1, +k8s.io/kubernetes/pkg/api,Q-Lee,1, k8s.io/kubernetes/pkg/api/endpoints,cjcullen,1, k8s.io/kubernetes/pkg/api/events,jlowdermilk,1, k8s.io/kubernetes/pkg/api/install,timothysc,1, +k8s.io/kubernetes/pkg/api/service,spxtr,1, +k8s.io/kubernetes/pkg/api/testapi,caesarxuchao,1, +k8s.io/kubernetes/pkg/api/util,rkouj,0, +k8s.io/kubernetes/pkg/api/v1,rkouj,0, +k8s.io/kubernetes/pkg/api/v1/endpoints,rkouj,0, +k8s.io/kubernetes/pkg/api/v1/pod,rkouj,0, +k8s.io/kubernetes/pkg/api/v1/service,rkouj,0, +k8s.io/kubernetes/pkg/api/validation,smarterclayton,1, k8s.io/kubernetes/pkg/apimachinery/tests,rkouj,0, -k8s.io/kubernetes/pkg/api,Q-Lee,1, k8s.io/kubernetes/pkg/apis/abac/v0,liggitt,0, k8s.io/kubernetes/pkg/apis/abac/v1beta1,liggitt,0, k8s.io/kubernetes/pkg/apis/apps/validation,derekwaynecarr,1, @@ -600,7 +642,6 @@ k8s.io/kubernetes/pkg/apis/batch/v1,vishh,1, k8s.io/kubernetes/pkg/apis/batch/v2alpha1,jlowdermilk,1, k8s.io/kubernetes/pkg/apis/batch/validation,erictune,0, k8s.io/kubernetes/pkg/apis/componentconfig,jbeda,1, -k8s.io/kubernetes/pkg/api/service,spxtr,1, k8s.io/kubernetes/pkg/apis/extensions,bgrant0607,1, k8s.io/kubernetes/pkg/apis/extensions/v1beta1,madhusudancs,1, k8s.io/kubernetes/pkg/apis/extensions/validation,nikhiljindal,1, @@ -608,13 +649,6 @@ k8s.io/kubernetes/pkg/apis/policy/validation,deads2k,1, k8s.io/kubernetes/pkg/apis/rbac/v1alpha1,liggitt,0, k8s.io/kubernetes/pkg/apis/rbac/validation,erictune,0, k8s.io/kubernetes/pkg/apis/storage/validation,caesarxuchao,1, -k8s.io/kubernetes/pkg/api/testapi,caesarxuchao,1, -k8s.io/kubernetes/pkg/api/util,rkouj,0, -k8s.io/kubernetes/pkg/api/v1/endpoints,rkouj,0, -k8s.io/kubernetes/pkg/api/v1/pod,rkouj,0, -k8s.io/kubernetes/pkg/api/v1,rkouj,0, -k8s.io/kubernetes/pkg/api/v1/service,rkouj,0, -k8s.io/kubernetes/pkg/api/validation,smarterclayton,1, k8s.io/kubernetes/pkg/auth/authorizer/abac,liggitt,0, k8s.io/kubernetes/pkg/client/chaosclient,deads2k,1, k8s.io/kubernetes/pkg/client/leaderelection,xiang90,1, @@ -636,6 +670,7 @@ k8s.io/kubernetes/pkg/cloudprovider/providers/ovirt,dchen1107,1, k8s.io/kubernetes/pkg/cloudprovider/providers/photon,luomiao,0, k8s.io/kubernetes/pkg/cloudprovider/providers/rackspace,caesarxuchao,1, k8s.io/kubernetes/pkg/cloudprovider/providers/vsphere,apelisse,1, +k8s.io/kubernetes/pkg/controller,mikedanese,1, k8s.io/kubernetes/pkg/controller/bootstrap,mikedanese,0, k8s.io/kubernetes/pkg/controller/certificates,rkouj,0, k8s.io/kubernetes/pkg/controller/cloud,rkouj,0, @@ -645,45 +680,45 @@ k8s.io/kubernetes/pkg/controller/deployment,asalkeld,0, k8s.io/kubernetes/pkg/controller/deployment/util,saad-ali,1, k8s.io/kubernetes/pkg/controller/disruption,fabioy,1, k8s.io/kubernetes/pkg/controller/endpoint,mwielgus,1, -k8s.io/kubernetes/pkg/controller/garbagecollector/metaonly,cjcullen,1, 
k8s.io/kubernetes/pkg/controller/garbagecollector,rmmh,1, +k8s.io/kubernetes/pkg/controller/garbagecollector/metaonly,cjcullen,1, k8s.io/kubernetes/pkg/controller/job,soltysh,1, -k8s.io/kubernetes/pkg/controller,mikedanese,1, k8s.io/kubernetes/pkg/controller/namespace/deletion,nikhiljindal,1, k8s.io/kubernetes/pkg/controller/node,gmarek,0, -k8s.io/kubernetes/pkg/controller/podautoscaler/metrics,piosz,0, k8s.io/kubernetes/pkg/controller/podautoscaler,piosz,0, +k8s.io/kubernetes/pkg/controller/podautoscaler/metrics,piosz,0, k8s.io/kubernetes/pkg/controller/podgc,rrati,0, k8s.io/kubernetes/pkg/controller/replicaset,fgrzadkowski,0, k8s.io/kubernetes/pkg/controller/replication,fgrzadkowski,0, k8s.io/kubernetes/pkg/controller/resourcequota,rrati,0, k8s.io/kubernetes/pkg/controller/route,gmarek,0, -k8s.io/kubernetes/pkg/controller/serviceaccount,liggitt,0, k8s.io/kubernetes/pkg/controller/service,asalkeld,0, +k8s.io/kubernetes/pkg/controller/serviceaccount,liggitt,0, k8s.io/kubernetes/pkg/controller/statefulset,justinsb,1, k8s.io/kubernetes/pkg/controller/ttl,wojtek-t,1, -k8s.io/kubernetes/pkg/controller/volume/attachdetach/cache,rrati,0, k8s.io/kubernetes/pkg/controller/volume/attachdetach,luxas,1, +k8s.io/kubernetes/pkg/controller/volume/attachdetach/cache,rrati,0, k8s.io/kubernetes/pkg/controller/volume/attachdetach/reconciler,jsafrane,1, k8s.io/kubernetes/pkg/controller/volume/persistentvolume,jsafrane,0, -k8s.io/kubernetes/pkg/controller/volume/persistentvolume/testing,ixdy,1, +k8s.io/kubernetes/pkg/credentialprovider,justinsb,1, k8s.io/kubernetes/pkg/credentialprovider/aws,zmerlynn,1, k8s.io/kubernetes/pkg/credentialprovider/azure,brendandburns,0, k8s.io/kubernetes/pkg/credentialprovider/gcp,mml,1, -k8s.io/kubernetes/pkg/credentialprovider,justinsb,1, k8s.io/kubernetes/pkg/fieldpath,childsb,1, +k8s.io/kubernetes/pkg/kubeapiserver,piosz,1, k8s.io/kubernetes/pkg/kubeapiserver/admission,rkouj,0, k8s.io/kubernetes/pkg/kubeapiserver/authorizer,rkouj,0, k8s.io/kubernetes/pkg/kubeapiserver/options,thockin,1, -k8s.io/kubernetes/pkg/kubeapiserver,piosz,1, -k8s.io/kubernetes/pkg/kubectl/cmd/config,asalkeld,0, +k8s.io/kubernetes/pkg/kubectl,madhusudancs,1, k8s.io/kubernetes/pkg/kubectl/cmd,rmmh,1, +k8s.io/kubernetes/pkg/kubectl/cmd/config,asalkeld,0, k8s.io/kubernetes/pkg/kubectl/cmd/set,erictune,1, k8s.io/kubernetes/pkg/kubectl/cmd/util,asalkeld,0, k8s.io/kubernetes/pkg/kubectl/cmd/util/editor,rrati,0, -k8s.io/kubernetes/pkg/kubectl,madhusudancs,1, k8s.io/kubernetes/pkg/kubectl/resource,caesarxuchao,1, +k8s.io/kubernetes/pkg/kubelet,vishh,0, k8s.io/kubernetes/pkg/kubelet/cadvisor,sttts,1, +k8s.io/kubernetes/pkg/kubelet/certificate,mikedanese,1, k8s.io/kubernetes/pkg/kubelet/client,timstclair,1, k8s.io/kubernetes/pkg/kubelet/cm,vishh,0, k8s.io/kubernetes/pkg/kubelet/config,mikedanese,1, @@ -698,10 +733,10 @@ k8s.io/kubernetes/pkg/kubelet/images,caesarxuchao,1, k8s.io/kubernetes/pkg/kubelet/kuberuntime,yifan-gu,1, k8s.io/kubernetes/pkg/kubelet/lifecycle,yujuhong,1, k8s.io/kubernetes/pkg/kubelet/network/cni,freehan,0, -k8s.io/kubernetes/pkg/kubelet/network,freehan,0, k8s.io/kubernetes/pkg/kubelet/network/hairpin,freehan,0, k8s.io/kubernetes/pkg/kubelet/network/hostport,erictune,1, k8s.io/kubernetes/pkg/kubelet/network/kubenet,freehan,0, +k8s.io/kubernetes/pkg/kubelet/network/testing,spxtr,1, k8s.io/kubernetes/pkg/kubelet/pleg,yujuhong,0, k8s.io/kubernetes/pkg/kubelet/pod,alex-mohr,1, k8s.io/kubernetes/pkg/kubelet/prober,alex-mohr,1, @@ -710,20 +745,20 @@ k8s.io/kubernetes/pkg/kubelet/qos,vishh,0, 
 k8s.io/kubernetes/pkg/kubelet/rkt,apelisse,1,
 k8s.io/kubernetes/pkg/kubelet/rktshim,mml,1,
 k8s.io/kubernetes/pkg/kubelet/secret,kevin-wangzefeng,1,
+k8s.io/kubernetes/pkg/kubelet/server,timstclair,0,
 k8s.io/kubernetes/pkg/kubelet/server/portforward,rkouj,0,
 k8s.io/kubernetes/pkg/kubelet/server/stats,timstclair,0,
 k8s.io/kubernetes/pkg/kubelet/server/streaming,caesarxuchao,1,
-k8s.io/kubernetes/pkg/kubelet/server,timstclair,0,
 k8s.io/kubernetes/pkg/kubelet/status,mwielgus,1,
 k8s.io/kubernetes/pkg/kubelet/sysctl,piosz,1,
 k8s.io/kubernetes/pkg/kubelet/types,jlowdermilk,1,
 k8s.io/kubernetes/pkg/kubelet/util/cache,timothysc,1,
+k8s.io/kubernetes/pkg/kubelet/util/csr,apelisse,1,
 k8s.io/kubernetes/pkg/kubelet/util/format,ncdc,1,
 k8s.io/kubernetes/pkg/kubelet/util/queue,yujuhong,0,
-k8s.io/kubernetes/pkg/kubelet,vishh,0,
+k8s.io/kubernetes/pkg/kubelet/volumemanager,rrati,0,
 k8s.io/kubernetes/pkg/kubelet/volumemanager/cache,janetkuo,1,
 k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler,timstclair,1,
-k8s.io/kubernetes/pkg/kubelet/volumemanager,rrati,0,
 k8s.io/kubernetes/pkg/master,fabioy,1,
 k8s.io/kubernetes/pkg/master/tunneler,jsafrane,1,
 k8s.io/kubernetes/pkg/probe/exec,bgrant0607,1,
@@ -734,8 +769,8 @@ k8s.io/kubernetes/pkg/proxy/healthcheck,rrati,0,
 k8s.io/kubernetes/pkg/proxy/iptables,freehan,0,
 k8s.io/kubernetes/pkg/proxy/userspace,luxas,1,
 k8s.io/kubernetes/pkg/proxy/winuserspace,jbhurat,0,
-k8s.io/kubernetes/pkg/quota/evaluator/core,yifan-gu,1,
 k8s.io/kubernetes/pkg/quota,sttts,1,
+k8s.io/kubernetes/pkg/quota/evaluator/core,yifan-gu,1,
 k8s.io/kubernetes/pkg/registry/apps/petset,kevin-wangzefeng,1,
 k8s.io/kubernetes/pkg/registry/apps/petset/storage,jlowdermilk,1,
 k8s.io/kubernetes/pkg/registry/authorization/subjectaccessreview,liggitt,1,
@@ -754,21 +789,21 @@ k8s.io/kubernetes/pkg/registry/core/endpoint,bprashanth,1,
 k8s.io/kubernetes/pkg/registry/core/endpoint/storage,wojtek-t,1,
 k8s.io/kubernetes/pkg/registry/core/event,ixdy,1,
 k8s.io/kubernetes/pkg/registry/core/event/storage,thockin,1,
-k8s.io/kubernetes/pkg/registry/core/limitrange/storage,spxtr,1,
 k8s.io/kubernetes/pkg/registry/core/limitrange,yifan-gu,1,
+k8s.io/kubernetes/pkg/registry/core/limitrange/storage,spxtr,1,
 k8s.io/kubernetes/pkg/registry/core/namespace,quinton-hoole,1,
 k8s.io/kubernetes/pkg/registry/core/namespace/storage,jsafrane,1,
 k8s.io/kubernetes/pkg/registry/core/node,rmmh,1,
 k8s.io/kubernetes/pkg/registry/core/node/storage,spxtr,1,
-k8s.io/kubernetes/pkg/registry/core/persistentvolumeclaim,bgrant0607,1,
-k8s.io/kubernetes/pkg/registry/core/persistentvolumeclaim/storage,cjcullen,1,
 k8s.io/kubernetes/pkg/registry/core/persistentvolume,lavalamp,1,
 k8s.io/kubernetes/pkg/registry/core/persistentvolume/storage,alex-mohr,1,
+k8s.io/kubernetes/pkg/registry/core/persistentvolumeclaim,bgrant0607,1,
+k8s.io/kubernetes/pkg/registry/core/persistentvolumeclaim/storage,cjcullen,1,
 k8s.io/kubernetes/pkg/registry/core/pod,Random-Liu,1,
 k8s.io/kubernetes/pkg/registry/core/pod/rest,jsafrane,1,
 k8s.io/kubernetes/pkg/registry/core/pod/storage,wojtek-t,1,
-k8s.io/kubernetes/pkg/registry/core/podtemplate/storage,spxtr,1,
 k8s.io/kubernetes/pkg/registry/core/podtemplate,thockin,1,
+k8s.io/kubernetes/pkg/registry/core/podtemplate/storage,spxtr,1,
 k8s.io/kubernetes/pkg/registry/core/replicationcontroller,freehan,1,
 k8s.io/kubernetes/pkg/registry/core/replicationcontroller/storage,liggitt,1,
 k8s.io/kubernetes/pkg/registry/core/resourcequota,rrati,0,
@@ -776,17 +811,17 @@ k8s.io/kubernetes/pkg/registry/core/resourcequota/storage,childsb,1,
 k8s.io/kubernetes/pkg/registry/core/rest,deads2k,0,
 k8s.io/kubernetes/pkg/registry/core/secret,rrati,0,
 k8s.io/kubernetes/pkg/registry/core/secret/storage,childsb,1,
-k8s.io/kubernetes/pkg/registry/core/serviceaccount,caesarxuchao,1,
-k8s.io/kubernetes/pkg/registry/core/serviceaccount/storage,smarterclayton,1,
+k8s.io/kubernetes/pkg/registry/core/service,madhusudancs,1,
 k8s.io/kubernetes/pkg/registry/core/service/allocator,jbeda,1,
 k8s.io/kubernetes/pkg/registry/core/service/allocator/storage,spxtr,1,
-k8s.io/kubernetes/pkg/registry/core/service/ipallocator/controller,mtaufen,1,
 k8s.io/kubernetes/pkg/registry/core/service/ipallocator,eparis,1,
+k8s.io/kubernetes/pkg/registry/core/service/ipallocator/controller,mtaufen,1,
 k8s.io/kubernetes/pkg/registry/core/service/ipallocator/storage,xiang90,1,
-k8s.io/kubernetes/pkg/registry/core/service,madhusudancs,1,
-k8s.io/kubernetes/pkg/registry/core/service/portallocator/controller,rkouj,0,
 k8s.io/kubernetes/pkg/registry/core/service/portallocator,rrati,0,
+k8s.io/kubernetes/pkg/registry/core/service/portallocator/controller,rkouj,0,
 k8s.io/kubernetes/pkg/registry/core/service/storage,cjcullen,1,
+k8s.io/kubernetes/pkg/registry/core/serviceaccount,caesarxuchao,1,
+k8s.io/kubernetes/pkg/registry/core/serviceaccount/storage,smarterclayton,1,
 k8s.io/kubernetes/pkg/registry/extensions/controller/storage,jsafrane,1,
 k8s.io/kubernetes/pkg/registry/extensions/daemonset,nikhiljindal,1,
 k8s.io/kubernetes/pkg/registry/extensions/daemonset/storage,kevin-wangzefeng,1,
@@ -800,28 +835,30 @@ k8s.io/kubernetes/pkg/registry/extensions/podsecuritypolicy/storage,dchen1107,1,
 k8s.io/kubernetes/pkg/registry/extensions/replicaset,rrati,0,
 k8s.io/kubernetes/pkg/registry/extensions/replicaset/storage,wojtek-t,1,
 k8s.io/kubernetes/pkg/registry/extensions/rest,rrati,0,
-k8s.io/kubernetes/pkg/registry/extensions/thirdpartyresourcedata/storage,childsb,1,
-k8s.io/kubernetes/pkg/registry/extensions/thirdpartyresourcedata,sttts,1,
 k8s.io/kubernetes/pkg/registry/extensions/thirdpartyresource,mwielgus,1,
 k8s.io/kubernetes/pkg/registry/extensions/thirdpartyresource/storage,mikedanese,1,
+k8s.io/kubernetes/pkg/registry/extensions/thirdpartyresourcedata,sttts,1,
+k8s.io/kubernetes/pkg/registry/extensions/thirdpartyresourcedata/storage,childsb,1,
 k8s.io/kubernetes/pkg/registry/policy/poddisruptionbudget,Q-Lee,1,
 k8s.io/kubernetes/pkg/registry/policy/poddisruptionbudget/storage,dchen1107,1,
+k8s.io/kubernetes/pkg/registry/rbac/reconciliation,roberthbailey,1,
 k8s.io/kubernetes/pkg/registry/rbac/validation,rkouj,0,
 k8s.io/kubernetes/pkg/registry/storage/storageclass,brendandburns,1,
 k8s.io/kubernetes/pkg/registry/storage/storageclass/storage,wojtek-t,1,
 k8s.io/kubernetes/pkg/security/apparmor,bgrant0607,1,
-k8s.io/kubernetes/pkg/securitycontext,erictune,1,
+k8s.io/kubernetes/pkg/security/podsecuritypolicy,erictune,0,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/apparmor,rrati,0,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/capabilities,erictune,0,
-k8s.io/kubernetes/pkg/security/podsecuritypolicy,erictune,0,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/group,erictune,0,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/seccomp,rmmh,1,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/selinux,erictune,0,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/sysctl,rrati,0,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/user,erictune,0,
 k8s.io/kubernetes/pkg/security/podsecuritypolicy/util,erictune,0,
+k8s.io/kubernetes/pkg/securitycontext,erictune,1,
 k8s.io/kubernetes/pkg/serviceaccount,liggitt,0,
 k8s.io/kubernetes/pkg/ssh,jbeda,1,
+k8s.io/kubernetes/pkg/util,jbeda,1,
 k8s.io/kubernetes/pkg/util/async,spxtr,1,
 k8s.io/kubernetes/pkg/util/bandwidth,thockin,1,
 k8s.io/kubernetes/pkg/util/config,jszczepkowski,1,
@@ -834,7 +871,6 @@ k8s.io/kubernetes/pkg/util/hash,timothysc,1,
 k8s.io/kubernetes/pkg/util/i18n,brendandburns,0,
 k8s.io/kubernetes/pkg/util/io,mtaufen,1,
 k8s.io/kubernetes/pkg/util/iptables,rrati,0,
-k8s.io/kubernetes/pkg/util,jbeda,1,
 k8s.io/kubernetes/pkg/util/keymutex,saad-ali,0,
 k8s.io/kubernetes/pkg/util/labels,rmmh,1,
 k8s.io/kubernetes/pkg/util/limitwriter,deads2k,1,
@@ -852,6 +888,7 @@ k8s.io/kubernetes/pkg/util/taints,rrati,0,
 k8s.io/kubernetes/pkg/util/term,davidopp,1,
 k8s.io/kubernetes/pkg/util/threading,roberthbailey,1,
 k8s.io/kubernetes/pkg/util/version,danwinship,0,
+k8s.io/kubernetes/pkg/volume,saad-ali,0,
 k8s.io/kubernetes/pkg/volume/aws_ebs,caesarxuchao,1,
 k8s.io/kubernetes/pkg/volume/azure_dd,bgrant0607,1,
 k8s.io/kubernetes/pkg/volume/azure_file,maisem,1,
@@ -870,18 +907,19 @@ k8s.io/kubernetes/pkg/volume/host_path,jbeda,1,
 k8s.io/kubernetes/pkg/volume/iscsi,cjcullen,1,
 k8s.io/kubernetes/pkg/volume/nfs,justinsb,1,
 k8s.io/kubernetes/pkg/volume/photon_pd,luomiao,0,
+k8s.io/kubernetes/pkg/volume/projected,kevin-wangzefeng,1,
 k8s.io/kubernetes/pkg/volume/quobyte,yujuhong,1,
 k8s.io/kubernetes/pkg/volume/rbd,piosz,1,
-k8s.io/kubernetes/pkg/volume,saad-ali,0,
 k8s.io/kubernetes/pkg/volume/secret,rmmh,1,
+k8s.io/kubernetes/pkg/volume/util,saad-ali,0,
 k8s.io/kubernetes/pkg/volume/util/nestedpendingoperations,freehan,1,
 k8s.io/kubernetes/pkg/volume/util/operationexecutor,rkouj,0,
-k8s.io/kubernetes/pkg/volume/util,saad-ali,0,
 k8s.io/kubernetes/pkg/volume/vsphere_volume,deads2k,1,
 k8s.io/kubernetes/plugin/cmd/kube-scheduler/app,deads2k,1,
 k8s.io/kubernetes/plugin/pkg/admission/admit,piosz,1,
 k8s.io/kubernetes/plugin/pkg/admission/alwayspullimages,kargakis,1,
 k8s.io/kubernetes/plugin/pkg/admission/antiaffinity,timothysc,1,
+k8s.io/kubernetes/plugin/pkg/admission/defaulttolerationseconds,luxas,1,
 k8s.io/kubernetes/plugin/pkg/admission/deny,eparis,1,
 k8s.io/kubernetes/plugin/pkg/admission/exec,deads2k,1,
 k8s.io/kubernetes/plugin/pkg/admission/gc,kevin-wangzefeng,1,
@@ -894,23 +932,25 @@ k8s.io/kubernetes/plugin/pkg/admission/namespace/lifecycle,derekwaynecarr,0,
 k8s.io/kubernetes/plugin/pkg/admission/persistentvolume/label,rrati,0,
 k8s.io/kubernetes/plugin/pkg/admission/podnodeselector,ixdy,1,
 k8s.io/kubernetes/plugin/pkg/admission/resourcequota,fabioy,1,
-k8s.io/kubernetes/plugin/pkg/admission/securitycontext/scdeny,rrati,0,
+k8s.io/kubernetes/plugin/pkg/admission/resourcequota/apis/resourcequota/validation,cjcullen,1,
 k8s.io/kubernetes/plugin/pkg/admission/security/podsecuritypolicy,maisem,1,
+k8s.io/kubernetes/plugin/pkg/admission/securitycontext/scdeny,rrati,0,
 k8s.io/kubernetes/plugin/pkg/admission/serviceaccount,liggitt,0,
 k8s.io/kubernetes/plugin/pkg/admission/storageclass/default,pmorie,1,
-k8s.io/kubernetes/plugin/pkg/auth/authorizer/rbac/bootstrappolicy,mml,1,
 k8s.io/kubernetes/plugin/pkg/auth/authorizer/rbac,rrati,0,
+k8s.io/kubernetes/plugin/pkg/auth/authorizer/rbac/bootstrappolicy,mml,1,
+k8s.io/kubernetes/plugin/pkg/scheduler,fgrzadkowski,0,
 k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates,fgrzadkowski,0,
 k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/priorities,fgrzadkowski,0,
-k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider/defaults,fgrzadkowski,0,
 k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider,fgrzadkowski,0,
+k8s.io/kubernetes/plugin/pkg/scheduler/algorithmprovider/defaults,fgrzadkowski,0,
 k8s.io/kubernetes/plugin/pkg/scheduler/api/validation,fgrzadkowski,0,
+k8s.io/kubernetes/plugin/pkg/scheduler/core,madhusudancs,1,
 k8s.io/kubernetes/plugin/pkg/scheduler/factory,fgrzadkowski,0,
-k8s.io/kubernetes/plugin/pkg/scheduler,fgrzadkowski,0,
 k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache,fgrzadkowski,0,
 k8s.io/kubernetes/plugin/pkg/scheduler/util,wojtek-t,1,
-k8s.io/kubernetes/test/e2e/chaosmonkey,pmorie,1,
 k8s.io/kubernetes/test/e2e,kevin-wangzefeng,1,
+k8s.io/kubernetes/test/e2e/chaosmonkey,pmorie,1,
 k8s.io/kubernetes/test/e2e_node,mml,1,
 k8s.io/kubernetes/test/e2e_node/system,Random-Liu,0,
 k8s.io/kubernetes/test/integration/auth,jbeda,1,
@@ -939,3 +979,6 @@ k8s.io/kubernetes/test/integration/thirdparty,davidopp,1,
 k8s.io/kubernetes/test/integration/ttlcontroller,wojtek-t,1,
 k8s.io/kubernetes/test/integration/volume,rrati,0,
 k8s.io/kubernetes/test/list,maisem,1,
+kubelet Clean up pods on node kubelet should be able to delete * pods per node in *.,yujuhong,0,node
+kubelet host cleanup with volume mounts Host cleanup after pod using NFS mount is deleted *,bgrant0607,1,
+"when we run containers that should cause * should eventually see *, and then evict all of the correct pods",Random-Liu,0,node