Automatic merge from submit-queue
Remove the Rackspace provider
**What this PR does / why we need it**:
To aid the effort of moving providers out of the cluster dir, I'm
removing Rackspace and leaving behind a README.md simply as a
placeholder until the entire dir is deleted.
**Which issue this PR fixes**
Fixes #6962
**Release note**:
```release-note
Deployment of Kubernetes clusters on Rackspace using the in-tree bash deployment (i.e. cluster/kube-up.sh or get-kube.sh) is obsolete and support has been removed.
```
Automatic merge from submit-queue
Log warning when invalid dir passed to kubectl proxy --www
**Release note**:
```
Log warning when invalid directory is passed to `kubectl proxy --www`
```
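Not the actual kubectl code, but a minimal sketch of the behavior described above, assuming a hypothetical `wwwDir` flag value: stat the directory up front and log a warning instead of failing silently.
```go
package main

import (
	"log"
	"os"
)

// warnIfInvalidWWWDir logs a warning when the directory passed to --www
// does not exist or is not a directory. Hypothetical helper for illustration.
func warnIfInvalidWWWDir(wwwDir string) {
	if wwwDir == "" {
		return
	}
	info, err := os.Stat(wwwDir)
	if err != nil || !info.IsDir() {
		log.Printf("Warning: %q is not a valid directory for --www", wwwDir)
	}
}

func main() {
	warnIfInvalidWWWDir("/no/such/dir") // prints a warning instead of serving nothing silently
}
```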
Automatic merge from submit-queue (batch tested with PRs 45033, 44961, 45021, 45097, 44938)
Cleanup orphan logging that goes on in the sync loop.
**What this PR does / why we need it**:
Fixes #44937
**Before this PR**, the logs looked like this:
```
E0426 00:06:33.763347 21247 kubelet_volumes.go:114] Orphaned pod "35c4a858-2a12-11e7-910c-42010af00003" found, but volume paths are still present on disk.
E0426 00:06:33.763400 21247 kubelet_volumes.go:114] Orphaned pod "e7676365-1580-11e7-8c27-42010af00003" found, but volume paths are still present on disk.
```
The problem was that every orphaned pod's volumes were logged individually, spamming the log with no summary information.
**After this PR**, the logs look like this:
```
E0426 01:32:27.295568 22261 kubelet_volumes.go:129] Orphaned pod "408b060e-2a1d-11e7-90e8-42010af00003" found, but volume paths are still present on disk. : There were a total of 2 errors similar to this. Turn up verbosity to see them.
E0426 01:32:29.295515 22261 kubelet_volumes.go:129] Orphaned pod "408b060e-2a1d-11e7-90e8-42010af00003" found, but volume paths are still present on disk. : There were a total of 2 errors similar to this. Turn up verbosity to see them.
E0426 01:32:31.293180 22261 kubelet_volumes.go:129] Orphaned pod "408b060e-2a1d-11e7-90e8-42010af00003" found, but volume paths are still present on disk. : There were a total of 2 errors similar to this. Turn up verbosity to see them.
```
And with verbosity turned up, the individual errors are shown as info logs with the details:
```
E0426 01:34:21.933983 26010 kubelet_volumes.go:129] Orphaned pod "1c565800-2a20-11e7-bbc2-42010af00003" found, but volume paths are still present on disk. : There were a total of 3 errors similar to this. Turn up verbosity to see them.
I0426 01:34:21.934010 26010 kubelet_volumes.go:131] Orphan pod: Orphaned pod "1c565800-2a20-11e7-bbc2-42010af00003" found, but volume paths are still present on disk.
I0426 01:34:21.934015 26010 kubelet_volumes.go:131] Orphan pod: Orphaned pod "408b060e-2a1d-11e7-90e8-42010af00003" found, but volume paths are still present on disk.
I0426 01:34:21.934019 26010 kubelet_volumes.go:131] Orphan pod: Orphaned pod "e7676365-1580-11e7-8c27-42010af00003" found, but volume paths are still present on disk.
```
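Not the kubelet's actual implementation, but a minimal sketch of the roll-up pattern the new logs reflect: collect the individual errors, emit one error-level summary, and only dump the details when verbose logging is on (approximated here with a `verbose` flag).
```go
package main

import (
	"fmt"
	"log"
)

// logRolledUpErrors prints a single summary line for a group of similar
// errors and, only when verbose logging is enabled, the individual details.
// Illustrative sketch only, not the kubelet code itself.
func logRolledUpErrors(summary string, errs []error, verbose bool) {
	if len(errs) == 0 {
		return
	}
	log.Printf("E %s : There were a total of %d errors similar to this. Turn up verbosity to see them.",
		summary, len(errs))
	if verbose {
		for _, err := range errs {
			log.Printf("I Orphan pod: %v", err)
		}
	}
}

func main() {
	errs := []error{
		fmt.Errorf("Orphaned pod %q found, but volume paths are still present on disk.", "408b060e"),
		fmt.Errorf("Orphaned pod %q found, but volume paths are still present on disk.", "e7676365"),
	}
	logRolledUpErrors("Orphaned pods found, but volume paths are still present on disk.", errs, true)
}
```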
**Release note**
```release-note
Roll up volume error messages in the kubelet sync loop.
```
Automatic merge from submit-queue (batch tested with PRs 45033, 44961, 45021, 45097, 44938)
Disable the kubelet part of metrics collection in kubemark
Fixes https://github.com/kubernetes/kubernetes/issues/45038
This should fix it, since we are only interested in collecting the apiserver metrics from the kubemark master.
cc @wojtek-t @gmarek
Automatic merge from submit-queue (batch tested with PRs 45033, 44961, 45021, 45097, 44938)
Add request count to APICall metric
Ref https://github.com/kubernetes/kubernetes/issues/44701
In addition to the API call latencies, this adds the count of requests.
cc @wojtek-t @gmarek
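Not the kubemark code itself, but a minimal sketch of the idea, assuming latencies and counts are tracked per API call type: record a duration sample and bump a counter for the same key.
```go
package main

import (
	"fmt"
	"time"
)

// apiCallMetric tracks latency samples and a request count per API call.
// Illustrative only; the real metric is collected differently.
type apiCallMetric struct {
	latencies []time.Duration
	count     int
}

type metrics struct {
	calls map[string]*apiCallMetric
}

func (m *metrics) observe(call string, latency time.Duration) {
	c, ok := m.calls[call]
	if !ok {
		c = &apiCallMetric{}
		m.calls[call] = c
	}
	c.latencies = append(c.latencies, latency)
	c.count++ // the new piece: a request count alongside the latencies
}

func main() {
	m := &metrics{calls: map[string]*apiCallMetric{}}
	m.observe("LIST pods", 120*time.Millisecond)
	m.observe("LIST pods", 95*time.Millisecond)
	fmt.Printf("LIST pods: %d requests, %v latencies\n",
		m.calls["LIST pods"].count, m.calls["LIST pods"].latencies)
}
```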
Automatic merge from submit-queue
Prune examples and e2es per discussion on sig-testing
**What this PR does / why we need it**:
Prune k8petstore from examples and e2es per discussion on sig-testing
**Special notes for your reviewer**:
This can live elsewhere outside the main repository.
**Release note**:
```
NONE
```
/cc @jayunit100 @fejta @kubernetes/sig-testing-pr-reviews
Automatic merge from submit-queue
kubectl binary plugins
**What this PR does / why we need it**:
Introduces the ability to extend `kubectl` by adding third-party plugins that will be exposed through `kubectl`.
Plugins are executable commands written in any language. To be included as a plugin, a binary or script file has to
1. be located under one of the supported plugin path locations:
   - the `~/.kube/plugins` dir
   - one or more directories set in the `KUBECTL_PLUGINS_PATH` env var
   - the `kubectl/plugins` dir under one or more directories set in the `XDG_DATA_DIRS` env var, which defaults to `/usr/local/share:/usr/share`
2. in any of the plugin paths above, have a subfolder with the plugin file(s)
3. in that subfolder, contain at least a `plugin.yaml` file that describes the plugin
Example:
```
$ cat ~/.kube/plugins/myplugin/plugin.yaml
name: "myplugin"
shortDesc: "My plugin's short description"
command: "echo Hello plugins!"
$ kubectl myplugin
Hello plugins!
```
~~In case the plugin declares `tunnel: true`, the plugin engine will pass the `KUBECTL_PLUGIN_API_HOST` env var when calling the plugin binary. Plugins can then access the Kube REST API in "http://$KUBECTL_PLUGIN_API_HOST/api" using the same context currently in use by `kubectl`.~~
Test plugins are provided in `pkg/kubectl/plugins/examples`. Just copy (or symlink) the files to `~/.kube/plugins` to test.
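As an illustration of the lookup described above (not the actual kubectl implementation), here is a minimal sketch that walks the candidate plugin directories and treats every subdirectory containing a `plugin.yaml` as a plugin:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// discoverPlugins returns the directories under the given plugin paths that
// contain a plugin.yaml descriptor. Illustrative sketch only.
func discoverPlugins(pluginPaths []string) []string {
	var plugins []string
	for _, root := range pluginPaths {
		entries, err := os.ReadDir(root)
		if err != nil {
			continue // ignore missing plugin dirs
		}
		for _, e := range entries {
			if !e.IsDir() {
				continue
			}
			descriptor := filepath.Join(root, e.Name(), "plugin.yaml")
			if _, err := os.Stat(descriptor); err == nil {
				plugins = append(plugins, filepath.Join(root, e.Name()))
			}
		}
	}
	return plugins
}

func main() {
	home, _ := os.UserHomeDir()
	paths := []string{filepath.Join(home, ".kube", "plugins")}
	if extra := os.Getenv("KUBECTL_PLUGINS_PATH"); extra != "" {
		paths = append(paths, strings.Split(extra, string(os.PathListSeparator))...)
	}
	fmt.Println(discoverPlugins(paths))
}
```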
**Which issue this PR fixes**:
Related to the discussions in the proposal document: https://github.com/kubernetes/kubernetes/pull/30086 and https://github.com/kubernetes/community/pull/122.
**Release note**:
```release-note
Introduces the ability to extend kubectl by adding third-party plugins. Developer preview, please refer to the documentation for instructions about how to use it.
```
Automatic merge from submit-queue (batch tested with PRs 44868, 44350)
build external watch event so simple encoders can encode
`kube-apiserver` clients require a specific serialization of `watch.Event` to function properly. There is no reason to allow flexibility of serialization at this point since no client would be able to understand a different encoding.
I found this while trying to use a simple, unstructured JSON encoder; the clients kept choking on watches because events were serialized without the proper JSON tags.
@kubernetes/sig-api-machinery-pr-reviews
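A minimal sketch of the wire shape clients expect (a standalone struct used for illustration, not the actual apimachinery type): a fixed envelope with lowercase `type` and `object` keys, so even a plain JSON encoder produces something the watch decoder understands.
```go
package main

import (
	"encoding/json"
	"fmt"
)

// externalWatchEvent mirrors the fixed serialization clients expect from a
// watch stream. Illustrative stand-in for the real external watch event type.
type externalWatchEvent struct {
	Type   string          `json:"type"`   // e.g. ADDED, MODIFIED, DELETED
	Object json.RawMessage `json:"object"` // the embedded runtime object
}

func main() {
	pod := json.RawMessage(`{"kind":"Pod","apiVersion":"v1","metadata":{"name":"example"}}`)
	event := externalWatchEvent{Type: "ADDED", Object: pod}
	out, _ := json.Marshal(event)
	fmt.Println(string(out)) // {"type":"ADDED","object":{...}} -- the shape clients can decode
}
```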
Automatic merge from submit-queue (batch tested with PRs 41530, 44814, 43620, 41985)
kube-apiserver: improve bootstrap token authentication error messages
This was requested by @jbeda as a follow up to https://github.com/kubernetes/kubernetes/pull/41281.
cc @jbeda @luxas @kubernetes/sig-auth-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41530, 44814, 43620, 41985)
Fixes juju kubernetes master: 1. Get certs from a dead leader. 2. Append tokens.
**What this PR does / why we need it**:
Fixes two issues with the Juju kubernetes master.
1. Grab certificates from a leader that has already been removed.
2. Append to (rather than truncate) the auth tokens file (see the sketch below).
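The charm code is not Go, but the second fix boils down to opening the tokens file for appending rather than truncating it. A minimal sketch of that distinction, using a hypothetical file name:
```go
package main

import (
	"log"
	"os"
)

// appendToken adds a token line to the tokens file without discarding the
// existing entries. Using os.O_TRUNC here instead of os.O_APPEND is the bug
// pattern being fixed: it would wipe previously issued tokens.
func appendToken(path, line string) error {
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(line + "\n")
	return err
}

func main() {
	// "known_tokens.csv" is a hypothetical file name for illustration.
	if err := appendToken("known_tokens.csv", "abc123,admin,admin"); err != nil {
		log.Fatal(err)
	}
}
```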
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
fixes #43563, fixes #43519
**Special notes for your reviewer**:
**Release note**:
```
Recover certificates from leadership context in case all masters die in a Juju deployment
```
Automatic merge from submit-queue (batch tested with PRs 41530, 44814, 43620, 41985)
no need to check for nil, because it has already been checked before
**What this PR does / why we need it**:
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
rename variables to make sure that they conform to golang variable name conventions
**What this PR does / why we need it**:
There are lots of package-level unexported variables in package `cmd` that do not conform to Go variable naming conventions, such as `version_example`. In this PR I rename all of them so that they conform to the conventions.
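For example (illustrative names, not the actual variables in `cmd`), the rename pattern is simply underscore-style names to mixedCaps:
```go
package main

import "fmt"

// Before: underscore names like version_example do not follow Go conventions.
// var version_example = "Print the client and server versions"

// After: unexported package-level variables use mixedCaps.
var versionExample = "Print the client and server versions"

func main() {
	fmt.Println(versionExample)
}
```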
Several of these loops overlap, and when they are the reason a failure
is happening, it is difficult to sort them out. Slightly misalign these
loops to make their impact obvious.
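Not the actual loops being changed, but a minimal sketch of the idea: give overlapping periodic loops slightly different periods so that, when one of them is the cause of a failure, its tick pattern stands out in the logs.
```go
package main

import (
	"log"
	"time"
)

func main() {
	// Slightly misaligned periods: the loops no longer fire in lockstep,
	// so it is obvious in the logs which loop a failure follows.
	housekeeping := time.NewTicker(2 * time.Second)
	cleanup := time.NewTicker(2100 * time.Millisecond)
	defer housekeeping.Stop()
	defer cleanup.Stop()

	for i := 0; i < 5; i++ {
		select {
		case <-housekeeping.C:
			log.Println("housekeeping loop tick")
		case <-cleanup.C:
			log.Println("cleanup loop tick")
		}
	}
}
```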
Automatic merge from submit-queue
Add metrics exporter to the fluentd-gcp deployment
The metrics exporter container reads metrics from the `/metrics` endpoint in fluentd and exports them directly to Stackdriver. It assumes that the Stackdriver Monitoring API is enabled.
/cc @fgrzadkowski
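Not the actual exporter container, but a minimal sketch of the first half of what it does: scrape the fluentd `/metrics` endpoint over HTTP (the Stackdriver export step is omitted, and the port below is only an illustrative default).
```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// scrapeMetrics fetches the Prometheus-format text exposed by fluentd on
// /metrics. Exporting to Stackdriver is left out of this sketch.
func scrapeMetrics(url string) (string, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	metrics, err := scrapeMetrics("http://localhost:24231/metrics")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(metrics)
}
```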
Automatic merge from submit-queue (batch tested with PRs 42432, 44628, 45101, 44921)
Use correct option name in the kubernetes-worker layer registry action
**What this PR does / why we need it**: It fixes #44920
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44920
**Special notes for your reviewer**:
**Release note**:
```
Ensure kubernetes-worker juju layer registry action uses correct ingress controller option name
```
Automatic merge from submit-queue (batch tested with PRs 42432, 44628, 45101, 44921)
Mark PersistentVolumes as [Feature:Volumes]
**What this PR does / why we need it**:
Just so that we know that we need a cloud provider that
supports volumes to run this test. This is similar to
the change in 63bc42c872.
Ran into this when I was trying to run e2e tests with
local-up-cluster locally and figured out that this test would
not work, since we don't support local-storage persistent
volumes.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 42432, 44628, 45101, 44921)
kubeadm: join test cmds for new flags
**What this PR does / why we need it**: Adding test-cmds for new kubeadm join flags.
Adding tests is a WIP from #34136
This is a continuation from https://github.com/kubernetes/kubernetes/pull/42812 since it had to be closed.
**Special notes for your reviewer**: /cc @luxas
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Improved code coverage for plugin/pkg/scheduler/algorithm
**What this PR does / why we need it**:
Part of #39559; code coverage improved from 0% to 100%.
**Special notes for your reviewer**:
Improved coverage for scheduler/algorithm to 100%
Test cover output:
```
make test WHAT=./plugin/pkg/scheduler/algorithm KUBE_COVER=y
Running tests for APIVersion: v1,apps/v1beta1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2alpha1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1
+++ [0302 10:43:05] Saving coverage output in '/tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2alpha1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170302-104305'
skipped k8s.io/kubernetes/cmd/libs/go2idl/generator
skipped k8s.io/kubernetes/vendor/k8s.io/client-go/1.4/rest
ok k8s.io/kubernetes/plugin/pkg/scheduler/algorithm 0.061s coverage: 100.0% of statements
+++ [0302 10:43:07] Combined coverage report: /tmp/k8s_coverage/v1,apps/v1beta1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2alpha1,batch/v1,batch/v2alpha1,certificates.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,policy/v1beta1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,storage.k8s.io/v1beta1,federation/v1beta1/20170302-104305/combined-coverage.html
```
Automatic merge from submit-queue
prevent corrupted spdy stream after hijacking connection
This PR fixes a corner case in the SPDY stream code where some bytes would never arrive at the server.
Reading directly from a hijacked connection isn't safe because some data may have already been read by the server before `Hijack` was called. To ensure all data is received, it's safer to read from the returned `bufio.Reader`. This problem seems to happen more frequently when using Go 1.8.
This is described in https://golang.org/pkg/net/http/#Hijacker:
> // The returned bufio.Reader may contain unprocessed buffered
> // data from the client.
I came across this while debugging a flaky test that used code from the `k8s.io/apimachinery/pkg/util/httpstream/spdy` package. After filling the code with debug logs and long hours running the tests in loop in the hope of catching the error I finally caught something weird.
The first word of the first SPDY frame [read by the server here](b625085230/vendor/github.com/docker/spdystream/spdy/read.go (L148)) had the value `0x03000100`. The first frame to arrive at the server was supposed to be a control frame indicating the creation of a new stream, but all control frames need the high-order bit set to 1, which was not the case here, so the server mistakenly assumed this was a data frame and the stream was never created. The correct value for the first word of a SYN_STREAM frame is `0x80030001`, and this led me down the path of finding out what had consumed the first byte before the frame reader was called, and finally to the problem with the Hijack call.
I added a new test to try stressing this condition and ensuring that this bug doesn't happen anymore. However, it's quite ugly as it loops 1000 times creating streams on servers to increase the chances of this bug happening. So, I'm not sure whether it's worth it to keep this test or if I should remove it from the PR. Please let me know what you guys think and I'll be happy to update this.
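A minimal sketch of the safe pattern using the standard library `http.Hijacker` (not the actual apimachinery code): read from the returned `bufio.ReadWriter`, which still holds any bytes buffered before `Hijack` was called, instead of reading from the raw connection.
```go
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	hj, ok := w.(http.Hijacker)
	if !ok {
		http.Error(w, "hijacking not supported", http.StatusInternalServerError)
		return
	}
	conn, rw, err := hj.Hijack()
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer conn.Close()

	// Wrong: reading from conn directly can miss bytes the server already
	// buffered before Hijack was called.
	// buf := make([]byte, 4); conn.Read(buf)

	// Right: read from rw, which drains the buffered data first.
	buf := make([]byte, 4)
	if _, err := rw.Read(buf); err != nil {
		log.Printf("read error: %v", err)
		return
	}
	log.Printf("first word: %#x", buf)
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```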
Fixes #45093 #45089 #45078 #45075 #45072 #45066 #45023
Automatic merge from submit-queue
Start recording cloud provider metrics for AWS
**What this PR does / why we need it**:
This PR implements support for emitting metrics from AWS about storage operations.
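Not the actual AWS cloud provider code, but a minimal sketch of the pattern: time each storage operation and record the duration (and any error) under an operation label.
```go
package main

import (
	"fmt"
	"time"
)

// recordAWSOperation is an illustrative stand-in for a metrics hook that
// observes how long a cloud provider storage operation took and whether it failed.
func recordAWSOperation(operation string, start time.Time, err error) {
	duration := time.Since(start)
	if err != nil {
		fmt.Printf("aws_operation_errors{operation=%q} +1\n", operation)
	}
	fmt.Printf("aws_operation_duration_seconds{operation=%q} %.3f\n", operation, duration.Seconds())
}

func attachVolume(volumeID, nodeName string) error {
	start := time.Now()
	var err error // the real code would call the EC2 AttachVolume API here
	defer func() { recordAWSOperation("attach_volume", start, err) }()
	time.Sleep(50 * time.Millisecond) // simulate the API call
	return err
}

func main() {
	_ = attachVolume("vol-1234", "node-1")
}
```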
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/features/issues/182
**Release note**:
```
Add support for emitting metrics from AWS cloudprovider about storage operations.
```
Automatic merge from submit-queue
Log node name when error attaching volume
Helps with debugging to know immediately which node the volume failed to attach to. Went through all the plugins and added this to 3 of them. @gnufied
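A minimal illustrative sketch (not the plugin code itself) of the change: include the node name in the attach error message so the failing node is immediately visible.
```go
package main

import (
	"fmt"
	"log"
)

// attachError builds the error message for a failed attach, now including the
// node name. Hypothetical helper for illustration.
func attachError(volumeID, nodeName string, err error) error {
	return fmt.Errorf("error attaching volume %q to node %q: %v", volumeID, nodeName, err)
}

func main() {
	err := attachError("vol-1234", "node-1", fmt.Errorf("timeout"))
	log.Println(err)
}
```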
```release-note
NONE
```