Commit Graph

53722 Commits (6368c1fc826b1fbda9123e44ba0962d1aaef1d1e)

Author SHA1 Message Date
Kubernetes Submit Queue 6368c1fc82 Merge pull request #51348 from rmmh/coreos-no-password
Automatic merge from submit-queue

Make the CoreOS test images' sshd disallow password login.

This will prevent security scanners from triggering.

Configuration is verbatim from:
https://coreos.com/os/docs/latest/customizing-sshd.html

```release-note
NONE
```
2017-08-26 04:19:11 -07:00
Kubernetes Submit Queue d27da4133d Merge pull request #49439 from zhangxiaoyu-zidif/fix-err-message-for-pdb
Automatic merge from submit-queue

fix error message for pdb.go

**What this PR does / why we need it**:
fix error message for pdb.go

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
NONE

**Special notes for your reviewer**:
NONE

**Release note**:

```release-note
NONE
```
2017-08-26 03:24:31 -07:00
Kubernetes Submit Queue c241cbe44d Merge pull request #51173 from liggitt/role-printers
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Print multiple node roles, remove kubeadm-specific annotation from kubectl

related to #50010

Follow-up to https://github.com/kubernetes/kubernetes/pull/50438 that removes the kubeadm-specific label, makes kubectl role-agnostic, and outputs multiple roles if present.
2017-08-26 02:05:39 -07:00
Kubernetes Submit Queue 25a2177a95 Merge pull request #51296 from kokhang/kubeadm-flexvolume
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Add host mount path to controller-manager for flexvolume dir

The controller manager needs access to the Flexvolume plugin directory when using the attach-detach controller interface.

This PR adds the host mount path for the default directory of Flexvolume plugins.

Fixes https://github.com/kubernetes/kubeadm/issues/410
2017-08-26 02:05:36 -07:00
Kubernetes Submit Queue 932e07af53 Merge pull request #50031 from verult/ConnectedProbe
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Dynamic Flexvolume plugin discovery, probing with filesystem watch.

**What this PR does / why we need it**: Enables dynamic Flexvolume plugin discovery. This model uses a filesystem watch (fsnotify library), which notifies the system that a probe is necessary only if something changes in the Flexvolume plugin directory.

This PR uses the dependency injection model in https://github.com/kubernetes/kubernetes/pull/49668.

**Release Note**:
```release-note
Dynamic Flexvolume plugin discovery. Flexvolume plugins can now be discovered on the fly rather than only at system initialization time.
```

/sig-storage

/assign @jsafrane @saad-ali 
/cc @bassam @chakri-nelluri @kokhang @liggitt @thockin
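
For illustration, a minimal Go sketch of the watch-then-probe idea described above, using the fsnotify library. The plugin directory path and the probe wiring are assumptions for this sketch, not the actual kubelet code:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Default Flexvolume plugin directory (assumption for this sketch).
	const pluginDir = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add(pluginDir); err != nil {
		log.Fatal(err)
	}

	// Placeholder for the real probe: re-scan the directory and refresh the plugin list.
	probe := func() { log.Printf("re-probing Flexvolume plugins in %s", pluginDir) }

	for {
		select {
		case ev := <-watcher.Events:
			// Any change in the plugin directory marks a probe as necessary.
			log.Printf("change detected: %s (%s)", ev.Name, ev.Op)
			probe()
		case err := <-watcher.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
```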
2017-08-26 02:05:34 -07:00
Kubernetes Submit Queue d660a41f36 Merge pull request #51101 from zhangxiaoyu-zidif/refactor-kubelet-kuberuntime-test
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Refactor kuberuntime test case with sets.String

**What this PR does / why we need it**:
Change the test code so that `got` and `want` use sets.String instead, since that is both safer and more clearly shows the intent.

ref: https://github.com/kubernetes/kubernetes/pull/50554

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/51396

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
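
An illustrative sketch (not the PR's actual test) of the pattern: comparing `got` and `want` as sets makes the check order-independent and the intent explicit. The container names are made up:

```go
package kuberuntime_test

import (
	"testing"

	"k8s.io/apimachinery/pkg/util/sets"
)

func TestContainerNames(t *testing.T) {
	got := sets.NewString("init", "app", "sidecar")  // hypothetical actual result
	want := sets.NewString("app", "sidecar", "init") // hypothetical expectation

	if !got.Equal(want) {
		t.Errorf("unexpected containers: got %v, want %v", got.List(), want.List())
	}
}
```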
2017-08-26 02:05:29 -07:00
Kubernetes Submit Queue 83f2ddea99 Merge pull request #51054 from derekwaynecarr/local-up-swap
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

hack/local-up-cluster.sh defaults to allow swap

**What this PR does / why we need it**:
Developers on Linux typically have swap enabled while developing.
This defaults the local-up-cluster experience to not fail the kubelet if swap is enabled.

**Release note**:
```release-note
NONE
```
2017-08-26 02:05:26 -07:00
Kubernetes Submit Queue b65d665b99 Merge pull request #51264 from m1093782566/e2e-maxTries
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

Fix e2e network util wrong output message

**What this PR does / why we need it**:

See https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/networking_utils.go#L217

and 

https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/networking_utils.go#L273

I assume it should be `minTries` -> `MaxTries`

This PR fixes the wrong output message.

**Which issue this PR fixes**: fixes #51265

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-25 22:43:37 -07:00
Kubernetes Submit Queue 2616513381 Merge pull request #51297 from ixdy/bazel-fast-docker_pull
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

bazel: use fast docker_pull

**What this PR does / why we need it**: takes advantage of https://github.com/bazelbuild/rules_docker/pull/71.

Faster builds = yay.

**Release note**:

```release-note
NONE
```

/assign @Q-Lee @spxtr @mikedanese
2017-08-25 22:43:34 -07:00
Kubernetes Submit Queue 6650bbe0dd Merge pull request #50582 from dixudx/support_fieldSelector_spec.schedulerName
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

support fieldSelector spec.schedulerName

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49190

**Special notes for your reviewer**:
/assign @davidopp  @bsalamat
/cc @lavalamp

**Release note**:

```release-note
add fieldSelector spec.schedulerName
```
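
For illustration, a minimal client-go sketch of using the new field selector to list only pods handled by a given scheduler. The kubeconfig loading, the scheduler name, and the current (context-taking) client-go List signature are assumptions of this sketch:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from its default location (assumption for this sketch).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List only pods whose spec.schedulerName matches; the scheduler name is an example.
	pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.schedulerName=default-scheduler",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\n", p.Namespace, p.Name)
	}
}
```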
2017-08-25 22:43:32 -07:00
Kubernetes Submit Queue ea206bbe29 Merge pull request #51347 from Random-Liu/fix-no-new-privs
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

Fix NoNewPrivs and also allow remote runtime to provide the support.

Fixes https://github.com/kubernetes/kubernetes/issues/51319.

This PR:
1) Lets the kubelet admit `NoNewPrivs` pods when a remote runtime provides the support.
2) Fixes a `NoNewPrivs` bug which checked the wrong runtime type.

/cc @kubernetes/sig-node-bugs @jessfraz
2017-08-25 22:43:28 -07:00
Kubernetes Submit Queue 76c520cea3 Merge pull request #50889 from NickrenREN/local-storage-eviction
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

Change eviction manager to manage one single local storage resource

**What this PR does / why we need it**:
We decided to manage one single local storage resource, so the eviction policy should be modified accordingly.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:  part of #50818

**Special notes for your reviewer**:

**Release note**:
```release-note
Change eviction manager to manage one single local ephemeral storage resource
```

/assign @jingxu97
2017-08-25 22:43:26 -07:00
Derek Carr 17c3b1ff56 hack/local-up-cluster.sh defaults to allow swap 2017-08-26 01:04:08 -04:00
Kubernetes Submit Queue c112dbcab4 Merge pull request #51341 from mtaufen/fix-port-disable
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

fix ReadOnlyPort defaulting, CAdvisorPort documentation

The ReadOnlyPort defaulting prevented passing 0 to disable via
the KubeletConfiguration struct.

The HealthzPort defaulting prevented passing 0 to disable via the
KubeletConfiguration struct. The documentation also failed to mention
this, but the check is performed in code.

The CAdvisorPort documentation failed to mention that you can pass 0 to
disable.


fixes #51345
2017-08-25 20:43:40 -07:00
Kubernetes Submit Queue 21aa8cacc5 Merge pull request #50730 from andrewsykim/49836
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

Cloud Controller Manager now sets Node.Spec.ProviderID

**What this PR does / why we need it**:
Cloud Controller Manager now sets `Node.Spec.ProviderID`.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
https://github.com/kubernetes/kubernetes/issues/49836. 

**Special notes for your reviewer**:
* As part of an effort to move cloud controller manager into beta https://github.com/kubernetes/kubernetes/issues/48690.
2017-08-25 20:43:37 -07:00
Kubernetes Submit Queue 9e69d5b8f0 Merge pull request #50595 from k82cn/k8s_50594
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

NodeConditionPredicates should return NodeOutOfDisk error.

**What this PR does / why we need it**:
In https://github.com/kubernetes/kubernetes/pull/49932, I moved the node condition check into a predicate, but it returned an incorrect error :(.

We also need to add more cases to `TestNodeShouldRunDaemonPod`, which is a key function of DaemonSet.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50594 

**Release note**:

```release-note
None
```
2017-08-25 20:43:35 -07:00
Kubernetes Submit Queue 21ca7f7eec Merge pull request #47782 from php-coder/fix_reverse_in_tests
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

Fix benchmarks to really test reverse order of the keys

**What this PR does / why we need it**:
This PR modifies the code to do what the comment says -- reverse the order of the keys. It also fixes the logic that was wrong and didn't allow stale data.

**Special notes for your reviewer**:
This change resolves the following review comments:
- https://github.com/kubernetes/kubernetes/pull/41939#discussion_r117068104
- https://github.com/kubernetes/kubernetes/pull/46916#discussion_r122763350
- https://github.com/kubernetes/kubernetes/pull/46916#discussion_r122764000

**Release note**:
```release-note
NONE
```

PTAL @smarterclayton
2017-08-25 20:43:33 -07:00
Kubernetes Submit Queue b65f3cc8dd Merge pull request #49850 from m1093782566/service-session-timeout
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

Parameterize `stickyMaxAgeMinutes` for service in API

**What this PR does / why we need it**:

Currently, `stickyMaxAgeMinutes` for a session-affinity service is hard-coded to 180 minutes. There is a TODO comment, see

https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L205

I think the session sticky max time varies from service to service, and users may not be aware of it since it's hard-coded in all the proxier.go implementations - iptables, userspace, and winuserspace.

Once we parameterize it in the API, users can set/get the values for their different services.

Perhaps we can introduce a new field `api.ClientIPAffinityConfig` in `api.ServiceSpec`.

There is an initial discussion about it in sig-network group. See,

https://groups.google.com/forum/#!topic/kubernetes-sig-network/i-LkeHrjs80

**Which issue this PR fixes**: 

fixes #49831

**Special notes for your reviewer**:

**Release note**:

```release-note
Parameterize session affinity timeout seconds in the service API for Client IP based session affinity.
```
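
For illustration, a sketch of the parameterized timeout as it is expressed in the core/v1 Go types (k8s.io/api/core/v1); the timeout value is an example, not a recommended setting:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	timeout := int32(600) // seconds; example value instead of the hard-coded 180 minutes

	spec := corev1.ServiceSpec{
		SessionAffinity: corev1.ServiceAffinityClientIP,
		SessionAffinityConfig: &corev1.SessionAffinityConfig{
			ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
		},
	}

	fmt.Printf("affinity=%s timeout=%ds\n",
		spec.SessionAffinity, *spec.SessionAffinityConfig.ClientIP.TimeoutSeconds)
}
```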
2017-08-25 20:43:30 -07:00
Kubernetes Submit Queue 85f963310e Merge pull request #50504 from yastij/fcVolume-handleFailedMount
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

handle failed mounts for fc volumes

**What this PR does / why we need it**: handles failed mounts for fc

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50502

**Special notes for your reviewer**: 

**Release note**:

```release-note
None
```
2017-08-25 19:40:38 -07:00
Kubernetes Submit Queue c170f5bfa2 Merge pull request #50972 from FengyunPan/external-loadBalancerIP
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

Support for specifying external LoadBalancerIP on OpenStack

1. Support ServiceAnnotationLoadBalancerFloatingNetworkId for LB v1

2. Support for specifying external LoadBalancerIP on OpenStack.
    Add the ServiceAnnotationLoadBalancerInternal annotation to distinguish
    between an internal LoadBalancerIP and an external LoadBalancerIP.


**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fix #50851 

**Release note**:
```release-note
NONE
```
2017-08-25 19:40:36 -07:00
Kubernetes Submit Queue 9d7bdb6a5f Merge pull request #51274 from yastij/clean-cinder-detachLogError
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

Clean cinder detachlogerror

**What this PR does / why we need it**:

**Which issue this PR fixes** : fixes #50441

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-08-25 19:40:32 -07:00
Kubernetes Submit Queue e923f2ba5c Merge pull request #50819 from NickrenREN/remove-overlay-scheduler
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

Changing scheduling part to manage one single local storage resource

**What this PR does / why we need it**:
We finally decided to manage a single local storage resource.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:  part of #50818

**Special notes for your reviewer**:
Since we finally decided to manage a single local storage resource, this removes the overlay-related code in the scheduling part and renames "scratch" storage to ephemeral storage.

**Release note**:
```release-note
Changing scheduling part of the alpha feature 'LocalStorageCapacityIsolation' to manage one single local ephemeral storage resource
```

/assign @jingxu97 
cc @ddysher
2017-08-25 19:40:29 -07:00
Kubernetes Submit Queue 65da3ce246 Merge pull request #51235 from cheftako/aggregator
Automatic merge from submit-queue

Fixed GKE auth update wait condition.

Look up whoami on GKE using `gcloud auth list`.
Make sure we do not run the test on any cluster older than 1.7.

**What this PR does / why we need it**: Fixes issue with aggregator e2e test on GKE

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50945 

**Special notes for your reviewer**: There is a TODO, follow up will be provided when the immediate problem is resolved.

**Release note**: ```release-note
NONE
```
2017-08-25 18:52:46 -07:00
Lantao Liu b760fa95e5 Fix NoNewPrivs and also allow remote runtime to provide the support. 2017-08-25 21:32:33 +00:00
NickrenREN 9730e3d302 Change validation for local ephemeral storage 2017-08-26 05:15:16 +08:00
NickrenREN 27901ad5df Change eviction policy to manage one single local storage resource 2017-08-26 05:14:49 +08:00
Kubernetes Submit Queue a235ba4e49 Merge pull request #51327 from wasylkowski/ensure-ca-is-on
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

Made the tests ensure that Cluster Autoscaler is on before running.

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-25 14:01:36 -07:00
Kubernetes Submit Queue b5bb8099e7 Merge pull request #50971 from CaoShuFeng/audit_json
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

set --audit-log-format default to json

Updates: https://github.com/kubernetes/kubernetes/issues/48561

**Release note**:
```
set --audit-log-format default to json for kube-apiserver
```
2017-08-25 14:01:33 -07:00
Kubernetes Submit Queue ccae631ff9 Merge pull request #50562 from atlassian/call-cleanup-properly
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

Call the right cleanup function

**What this PR does / why we need it**:
`defer cleanup()` will always call the function that was returned by the first call to `r.resyncChan()`, but it should call the one returned by the last call.

**Special notes for your reviewer**:
This will print `c1`, not `c2`. See https://play.golang.org/p/FDjDbUxOvI
```go
package main

import "fmt"

func main() {
	var c func()
	c = c1
	// defer evaluates c here, so the deferred call is bound to c1,
	// not to whatever c holds when main returns.
	defer c()
	c = c2
}

func c1() {
	fmt.Println("c1")
}

func c2() {
	fmt.Println("c2")
}
```

**Release note**:
```release-note
NONE
```
/kind bug
/sig api-machinery
2017-08-25 14:01:30 -07:00
Kubernetes Submit Queue 39581ac9bf Merge pull request #51122 from luxas/kubeadm_impl_dryrun
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

kubeadm: Fully implement --dry-run

**What this PR does / why we need it**:

Finishes the work begun in #50631 
 - Implements dry-run functionality for phases certs/kubeconfig/controlplane/etcd as well by making the outDir configurable
 - Prints the controlplane manifests to stdout, but not the certs/kubeconfig files due to their sensitive nature. However, kubeadm outputs the directory to go and look in for those.
 - Fixes a small YAML marshal error where `apiVersion` and `kind` weren't printed earlier.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

fixes: https://github.com/kubernetes/kubeadm/issues/389

**Special notes for your reviewer**:

Full `kubeadm init --dry-run` output:

```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-apiserver
	    tier: control-plane
	  name: kube-apiserver
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-apiserver
	    - --allow-privileged=true
	    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
	    - --requestheader-extra-headers-prefix=X-Remote-Extra-
	    - --service-cluster-ip-range=10.96.0.0/12
	    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
	    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
	    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
	    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
	    - --experimental-bootstrap-token-auth=true
	    - --client-ca-file=/etc/kubernetes/pki/ca.crt
	    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
	    - --secure-port=6443
	    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
	    - --insecure-port=0
	    - --requestheader-username-headers=X-Remote-User
	    - --requestheader-group-headers=X-Remote-Group
	    - --requestheader-allowed-names=front-proxy-client
	    - --advertise-address=192.168.200.101
	    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
	    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
	    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
	    - --authorization-mode=Node,RBAC
	    - --etcd-servers=http://127.0.0.1:2379
	    image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 6443
	        scheme: HTTPS
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-apiserver
	    resources:
	      requests:
	        cpu: 250m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/pki
	      name: ca-certs-etc-pki
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/pki
	    name: k8s-certs
	  - hostPath:
	      path: /etc/ssl/certs
	    name: ca-certs
	  - hostPath:
	      path: /etc/pki
	    name: ca-certs-etc-pki
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-controller-manager
	    tier: control-plane
	  name: kube-controller-manager
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-controller-manager
	    - --address=127.0.0.1
	    - --kubeconfig=/etc/kubernetes/controller-manager.conf
	    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
	    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
	    - --leader-elect=true
	    - --use-service-account-credentials=true
	    - --controllers=*,bootstrapsigner,tokencleaner
	    - --root-ca-file=/etc/kubernetes/pki/ca.crt
	    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
	    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10252
	        scheme: HTTP
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-controller-manager
	    resources:
	      requests:
	        cpu: 200m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/kubernetes/controller-manager.conf
	      name: kubeconfig
	      readOnly: true
	    - mountPath: /etc/pki
	      name: ca-certs-etc-pki
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/pki
	    name: k8s-certs
	  - hostPath:
	      path: /etc/ssl/certs
	    name: ca-certs
	  - hostPath:
	      path: /etc/kubernetes/controller-manager.conf
	    name: kubeconfig
	  - hostPath:
	      path: /etc/pki
	    name: ca-certs-etc-pki
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-scheduler
	    tier: control-plane
	  name: kube-scheduler
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-scheduler
	    - --leader-elect=true
	    - --kubeconfig=/etc/kubernetes/scheduler.conf
	    - --address=127.0.0.1
	    image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10251
	        scheme: HTTP
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-scheduler
	    resources:
	      requests:
	        cpu: 100m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/scheduler.conf
	      name: kubeconfig
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/scheduler.conf
	    name: kubeconfig
	status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch:
	{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
	  expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
	  token-id: OTZlZmQ2
	  token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
	  usage-bootstrap-authentication: dHJ1ZQ==
	  usage-bootstrap-signing: dHJ1ZQ==
	kind: Secret
	metadata:
	  creationTimestamp: null
	  name: bootstrap-token-96efd6
	type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:kubelet-bootstrap
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:node-bootstrapper
	subjects:
	- kind: Group
	  name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRole
	metadata:
	  creationTimestamp: null
	  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
	rules:
	- apiGroups:
	  - certificates.k8s.io
	  resources:
	  - certificatesigningrequests/nodeclient
	  verbs:
	  - create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:node-autoapprove-bootstrap
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
	subjects:
	- kind: Group
	  name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  kubeconfig: |
	    apiVersion: v1
	    clusters:
	    - cluster:
	        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
	        server: https://192.168.200.101:6443
	      name: ""
	    contexts: []
	    current-context: ""
	    kind: Config
	    preferences: {}
	    users: []
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  name: cluster-info
	  namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: Role
	metadata:
	  creationTimestamp: null
	  name: kubeadm:bootstrap-signer-clusterinfo
	  namespace: kube-public
	rules:
	- apiGroups:
	  - ""
	  resourceNames:
	  - cluster-info
	  resources:
	  - configmaps
	  verbs:
	  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: RoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:bootstrap-signer-clusterinfo
	  namespace: kube-public
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: Role
	  name: kubeadm:bootstrap-signer-clusterinfo
	subjects:
	- kind: User
	  name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  MasterConfiguration: |
	    api:
	      advertiseAddress: 192.168.200.101
	      bindPort: 6443
	    apiServerCertSANs: []
	    apiServerExtraArgs: null
	    authorizationModes:
	    - Node
	    - RBAC
	    certificatesDir: /etc/kubernetes/pki
	    cloudProvider: ""
	    controllerManagerExtraArgs: null
	    etcd:
	      caFile: ""
	      certFile: ""
	      dataDir: /var/lib/etcd
	      endpoints: []
	      extraArgs: null
	      image: ""
	      keyFile: ""
	    featureFlags: null
	    imageRepository: gcr.io/google_containers
	    kubernetesVersion: v1.7.4
	    networking:
	      dnsDomain: cluster.local
	      podSubnet: ""
	      serviceSubnet: 10.96.0.0/12
	    nodeName: thegopher
	    schedulerExtraArgs: null
	    token: 96efd6.98bbb2f4603c026b
	    tokenTTL: 86400000000000
	    unifiedControlPlaneImage: ""
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  name: kubeadm-config
	  namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  creationTimestamp: null
	  name: kube-dns
	  namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
	apiVersion: extensions/v1beta1
	kind: Deployment
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-dns
	  name: kube-dns
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: kube-dns
	  strategy:
	    rollingUpdate:
	      maxSurge: 10%
	      maxUnavailable: 0
	  template:
	    metadata:
	      creationTimestamp: null
	      labels:
	        k8s-app: kube-dns
	    spec:
	      affinity:
	        nodeAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	            nodeSelectorTerms:
	            - matchExpressions:
	              - key: beta.kubernetes.io/arch
	                operator: In
	                values:
	                - amd64
	      containers:
	      - args:
	        - --domain=cluster.local.
	        - --dns-port=10053
	        - --config-dir=/kube-dns-config
	        - --v=2
	        env:
	        - name: PROMETHEUS_PORT
	          value: "10055"
	        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /healthcheck/kubedns
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: kubedns
	        ports:
	        - containerPort: 10053
	          name: dns-local
	          protocol: UDP
	        - containerPort: 10053
	          name: dns-tcp-local
	          protocol: TCP
	        - containerPort: 10055
	          name: metrics
	          protocol: TCP
	        readinessProbe:
	          httpGet:
	            path: /readiness
	            port: 8081
	            scheme: HTTP
	          initialDelaySeconds: 3
	          timeoutSeconds: 5
	        resources:
	          limits:
	            memory: 170Mi
	          requests:
	            cpu: 100m
	            memory: 70Mi
	        volumeMounts:
	        - mountPath: /kube-dns-config
	          name: kube-dns-config
	      - args:
	        - -v=2
	        - -logtostderr
	        - -configDir=/etc/k8s/dns/dnsmasq-nanny
	        - -restartDnsmasq=true
	        - --
	        - -k
	        - --cache-size=1000
	        - --log-facility=-
	        - --server=/cluster.local/127.0.0.1#10053
	        - --server=/in-addr.arpa/127.0.0.1#10053
	        - --server=/ip6.arpa/127.0.0.1#10053
	        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /healthcheck/dnsmasq
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: dnsmasq
	        ports:
	        - containerPort: 53
	          name: dns
	          protocol: UDP
	        - containerPort: 53
	          name: dns-tcp
	          protocol: TCP
	        resources:
	          requests:
	            cpu: 150m
	            memory: 20Mi
	        volumeMounts:
	        - mountPath: /etc/k8s/dns/dnsmasq-nanny
	          name: kube-dns-config
	      - args:
	        - --v=2
	        - --logtostderr
	        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
	        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
	        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /metrics
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: sidecar
	        ports:
	        - containerPort: 10054
	          name: metrics
	          protocol: TCP
	        resources:
	          requests:
	            cpu: 10m
	            memory: 20Mi
	      dnsPolicy: Default
	      serviceAccountName: kube-dns
	      tolerations:
	      - key: CriticalAddonsOnly
	        operator: Exists
	      - effect: NoSchedule
	        key: node-role.kubernetes.io/master
	      volumes:
	      - configMap:
	          name: kube-dns
	          optional: true
	        name: kube-dns-config
	status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: Service
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-dns
	    kubernetes.io/cluster-service: "true"
	    kubernetes.io/name: KubeDNS
	  name: kube-dns
	  namespace: kube-system
	  resourceVersion: "0"
	spec:
	  clusterIP: 10.96.0.10
	  ports:
	  - name: dns
	    port: 53
	    protocol: UDP
	    targetPort: 53
	  - name: dns-tcp
	    port: 53
	    protocol: TCP
	    targetPort: 53
	  selector:
	    k8s-app: kube-dns
	status:
	  loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  creationTimestamp: null
	  name: kube-proxy
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  kubeconfig.conf: |
	    apiVersion: v1
	    kind: Config
	    clusters:
	    - cluster:
	        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
	        server: https://192.168.200.101:6443
	      name: default
	    contexts:
	    - context:
	        cluster: default
	        namespace: default
	        user: default
	      name: default
	    current-context: default
	    users:
	    - name: default
	      user:
	        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  labels:
	    app: kube-proxy
	  name: kube-proxy
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
	apiVersion: extensions/v1beta1
	kind: DaemonSet
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-proxy
	  name: kube-proxy
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: kube-proxy
	  template:
	    metadata:
	      creationTimestamp: null
	      labels:
	        k8s-app: kube-proxy
	    spec:
	      containers:
	      - command:
	        - /usr/local/bin/kube-proxy
	        - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
	        image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
	        imagePullPolicy: IfNotPresent
	        name: kube-proxy
	        resources: {}
	        securityContext:
	          privileged: true
	        volumeMounts:
	        - mountPath: /var/lib/kube-proxy
	          name: kube-proxy
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	      hostNetwork: true
	      serviceAccountName: kube-proxy
	      tolerations:
	      - effect: NoSchedule
	        key: node-role.kubernetes.io/master
	      - effect: NoSchedule
	        key: node.cloudprovider.kubernetes.io/uninitialized
	        value: "true"
	      volumes:
	      - configMap:
	          name: kube-proxy
	        name: kube-proxy
	      - hostPath:
	          path: /run/xtables.lock
	        name: xtables-lock
	  updateStrategy:
	    type: RollingUpdate
	status:
	  currentNumberScheduled: 0
	  desiredNumberScheduled: 0
	  numberMisscheduled: 0
	  numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:node-proxier
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:node-proxier
	subjects:
	- kind: ServiceAccount
	  name: kube-proxy
	  namespace: kube-system
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1

```

**Release note**:

```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
2017-08-25 14:01:27 -07:00
Kubernetes Submit Queue cfbd97c4f9 Merge pull request #51134 from stevekuznetsov/skuznets/verbose-bazel-error
Automatic merge from submit-queue

Don't silence `go get` during verify scripts

When the verify scripts fail to install a component using `go get`
today, they silence output to `stderr` and cause the user to not get any
actionable feedback on failure.

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>

```release-note
NONE
```

/assign @ixdy
2017-08-25 13:59:51 -07:00
Steve Leon c682d7cd74 Add host mountpath for controller-manager for flexvolume dir
The controller manager needs access to the Flexvolume plugin directory when
using the attach-detach controller interface.

This PR adds the host mount path for the default directory of Flexvolume
plugins.

Fixes https://github.com/kubernetes/kubeadm/issues/410
2017-08-25 13:23:53 -07:00
Michael Taufen 6918ab1d70 fix ReadOnlyPort, HealthzPort, CAdvisorPort defaulting/documentation
The ReadOnlyPort defaulting prevented passing 0 to disable via
the KubeletConfiguration struct.

The HealthzPort defaulting prevented passing 0 to disable via the
KubeletConfiguration struct. The documentation also failed to mention
this, but the check is performed in code.

The CAdvisorPort documentation failed to mention that you can pass 0 to
disable.
2017-08-25 13:15:36 -07:00
Kubernetes Submit Queue acdf625e46 Merge pull request #51143 from oomichi/issue/43051
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)

Add signal handler for catching Ctrl-C on hack/e2e

**What this PR does / why we need it**:

When running an e2e test, the hack/e2e.go process creates a kubetest process.
To kill the kubetest process when stopping an e2e test with Ctrl-C, we need
to forward the signal to that process, because it in turn creates another
process that it needs to kill.
This PR adds a signal handler to hack/e2e.go so that the kubetest
process is killed.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

fixes #43051

**Special notes for your reviewer**:

https://github.com/kubernetes/test-infra/pull/4154 is the part of kubetest.

**Release note**:

`NONE`
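
For illustration, a minimal Go sketch of the signal-forwarding idea: catch Ctrl-C (SIGINT) in the parent and pass it on to the spawned child so the child can clean up its own subprocesses. The child invocation and error handling are placeholders, not the actual hack/e2e.go code:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	cmd := exec.Command("kubetest", "--help") // placeholder child invocation
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM)
	go func() {
		sig := <-sigs
		// Forward the signal so the child can terminate the processes it started.
		cmd.Process.Signal(sig)
	}()

	if err := cmd.Wait(); err != nil {
		log.Printf("child exited: %v", err)
	}
}
```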
2017-08-25 12:31:10 -07:00
Kubernetes Submit Queue 08c2071bec Merge pull request #47171 from xilabao/validate-nonResourceURL-in-create-clusterrole
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)

validate nonResourceURL in create clusterrole

**Release note**:

```release-note
NONE
```
2017-08-25 12:31:07 -07:00
Kubernetes Submit Queue cd908f3e59 Merge pull request #51257 from NickrenREN/validation-bugfix
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)

Fix validation return value

Errors returned by some validation functions may be wrong

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51256


**Release note**:
```release-note
NONE
```
2017-08-25 12:31:05 -07:00
Kubernetes Submit Queue 16a438b56e Merge pull request #50063 from dixudx/manifests_use_hostpath_type
Automatic merge from submit-queue (batch tested with PRs 51038, 50063, 51257, 47171, 51143)

update related manifest files to use hostpath type

**What this PR does / why we need it**:
Per [discussion in #46597](https://github.com/kubernetes/kubernetes/pull/46597#pullrequestreview-53568947)

Depends on #46597

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

Fixes: https://github.com/kubernetes/kubeadm/issues/298

**Special notes for your reviewer**:
/cc @euank @thockin @tallclair @Random-Liu 

**Release note**:

```release-note
None
```
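
For illustration, a sketch of the hostPath `type` field as expressed in the core/v1 Go types (k8s.io/api/core/v1); the path and volume name are examples, not the actual manifest changes in this PR:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	dirOrCreate := corev1.HostPathDirectoryOrCreate

	vol := corev1.Volume{
		Name: "k8s-certs", // example volume name
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/etc/kubernetes/pki",
				Type: &dirOrCreate, // create the directory if it does not exist
			},
		},
	}

	fmt.Printf("volume %s mounts %s (type %s)\n",
		vol.Name, vol.VolumeSource.HostPath.Path, *vol.VolumeSource.HostPath.Type)
}
```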
2017-08-25 12:31:02 -07:00
Kubernetes Submit Queue c02afa6e39 Merge pull request #51038 from nicksardo/gce-netproj
Automatic merge from submit-queue

GCE: Consume new config value for network project id

This PR allows users to specify the network's project ID in gce.conf. If it's not specified, it is filled with `ProjectID`. This makes `network-project-id` a required field for building a cluster on a shared VPC network; the field does not need to be specified for GKE clusters on non-shared networks.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #48515

**Special notes for your reviewer**:
/assign @bowei @freehan 

**Release note**:
```release-note
NONE
```
2017-08-25 12:25:50 -07:00
Ryan Hitchman a7e64aaa66 Make coreos test images sshd not allow password login.
Configuration is based on:
https://coreos.com/os/docs/latest/customizing-sshd.html

The specific SSHD config is:

    # Use most defaults for sshd configuration.
    UsePrivilegeSeparation sandbox
    Subsystem sftp internal-sftp
    ClientAliveInterval 180
    UseDNS no
    UsePAM yes
    PrintLastLog no # handled by PAM
    PrintMotd no # handled by PAM
    AuthenticationMethods publickey

This will prevent security scanners from triggering.
2017-08-25 11:49:34 -07:00
Cheng Xing 396c3c7c6f Adding dynamic Flexvolume plugin discovery capability, using filesystem watch. 2017-08-25 11:42:32 -07:00
Walter Fender 3b9485bba3 Fixed gke auth update wait condition.
Lookup whoami on gke using gcloud auth list.
Make sure we do not run the test on any cluster older than 1.7.
Fix for Mehdy
Fixes for LavaLamp
2017-08-25 11:11:59 -07:00
Kubernetes Submit Queue 5f805a5e66 Merge pull request #51207 from yguo0905/uc
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)

Update cos image to cos-stable-60-9592-84-0

cos-m60 has been stable for a long time. This image contains a docker upgrade, which has been validated in https://github.com/kubernetes/kubernetes/issues/42926.

**Release note**:

```
None
```

/assign @yujuhong 
/cc @dchen1107
2017-08-25 11:07:17 -07:00
Kubernetes Submit Queue c19785cfea Merge pull request #49674 from crimsonfaith91/rollout
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)

StatefulSet kubectl rollout command

**What this PR does / why we need it**: This PR implements StatefulSet kubectl rollout command, covering `history`, `status`, and `undo`.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49890 

**Special notes for your reviewer**:

**Release note**:

```release-note
kubectl rollout `history`, `status`, and `undo` subcommands now support StatefulSets.
```
2017-08-25 11:07:15 -07:00
Kubernetes Submit Queue fe0c519f49 Merge pull request #51132 from ConnorDoyle/cpuset-helpers
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)

Add cpuset helper library.

Blocker for CPU manager #49186 (1 of 6)

@sjenning @derekwaynecarr 

```release-note
NONE
```
2017-08-25 11:07:12 -07:00
Kubernetes Submit Queue dfd1ca7728 Merge pull request #49988 from sbezverk/fsgroup_check
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)

Adding fsGroup check before mounting a volume

The fsGroup check enforces that if a volume has already been
mounted by one pod, and another pod wants to mount it with a different
fsGroup value, the mount operation will not be allowed.

Closes #45053
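
For illustration, a conceptual Go sketch of the check (not the actual kubelet code): refuse the mount when the requested fsGroup differs from the fsGroup the volume was already mounted with. The helper function and values are made up:

```go
package main

import "fmt"

// canMount is a hypothetical helper illustrating the rule described above.
func canMount(existingFsGroup, requestedFsGroup *int64) error {
	if existingFsGroup == nil || requestedFsGroup == nil {
		return nil // no fsGroup constraint on one side; allow the mount
	}
	if *existingFsGroup != *requestedFsGroup {
		return fmt.Errorf("volume already mounted with fsGroup %d, cannot mount with fsGroup %d",
			*existingFsGroup, *requestedFsGroup)
	}
	return nil
}

func main() {
	a, b := int64(1000), int64(2000)
	fmt.Println(canMount(&a, &b)) // rejected: differing fsGroup values
	fmt.Println(canMount(&a, &a)) // allowed: same fsGroup
}
```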
2017-08-25 11:07:09 -07:00
Kubernetes Submit Queue c04e516373 Merge pull request #50033 from cmluciano/cml/addnpcidrselector
Automatic merge from submit-queue (batch tested with PRs 50033, 49988, 51132, 49674, 51207)

Add IPBlock to Network Policy

**What this PR does / why we need it**:
 Add ipBlockRule to NetworkPolicyPeer.

**Which issue this PR fixes**
fixes #49978

**Special notes for your reviewer**:
- I added this directly as a field on the existing API per guidance from API-Machinery/lazy SIG-Network consensus.

Todo:
- [ ] Documentation comments to mention this is beta, unless we want to go straight to GA
- [ ] e2e tests

**Release note**:
```
Support ipBlock in NetworkPolicy
```
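
For illustration, a sketch of an ipBlock-based peer using the networking/v1 Go types (k8s.io/api/networking/v1); the CIDRs and policy name are examples:

```go
package main

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	policy := networkingv1.NetworkPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "allow-from-corp-cidr"}, // example name
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{}, // applies to all pods in the namespace
			Ingress: []networkingv1.NetworkPolicyIngressRule{{
				From: []networkingv1.NetworkPolicyPeer{{
					IPBlock: &networkingv1.IPBlock{
						CIDR:   "172.17.0.0/16",           // allow this range...
						Except: []string{"172.17.1.0/24"}, // ...except this subnet
					},
				}},
			}},
		},
	}

	fmt.Printf("%+v\n", policy.Spec.Ingress[0].From[0].IPBlock)
}
```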
2017-08-25 11:07:07 -07:00
Lucas Käldström 2c71814344 kubeadm: Fully implement 'kubeadm init --dry-run' 2017-08-25 20:31:14 +03:00
Kubernetes Submit Queue cb6f32e8ba Merge pull request #50841 from zjj2wry/kubectl-set-image-ignoring
Automatic merge from submit-queue (batch tested with PRs 50872, 51103, 51220, 51285, 50841)

Fix issue (#49695): kubectl set image deployment is ignoring --selector

**What this PR does / why we need it**:
closes #49695

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-25 10:10:13 -07:00
Kubernetes Submit Queue 0a51ba116a Merge pull request #51285 from jordanlewis/patch-1
Automatic merge from submit-queue (batch tested with PRs 50872, 51103, 51220, 51285, 50841)

Update CockroachDB tag to v1.0.5

cc @a-robinson
2017-08-25 10:10:10 -07:00
Kubernetes Submit Queue 1206ce612e Merge pull request #51220 from grodrigues3/add-sig-leads-owners-aliases
Automatic merge from submit-queue (batch tested with PRs 50872, 51103, 51220, 51285, 50841)

add sig leads to owners-aliases

**What this PR does / why we need it**:
Adds the SIG leads listed in the community sigs.yaml file to the OWNERS aliases. This is useful for granting privileges to SIG leads, such as accepting issues into the milestone during burndown by adding the `approved-for-milestone` label.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```

@marun @spiffxp @dchen1107
2017-08-25 10:10:08 -07:00