Commit Graph

47944 Commits (7b0dee89f29c4c9e645f3d66c98aeb2250783e0a)

Author SHA1 Message Date
Madhusudan.C.S 20e558060c Address review comments. 2017-05-10 00:03:42 -07:00
Madhusudan.C.S e0ca8abba8 Update Google Cloud DNS List implementation to perform a paged walk of lists to aggregate all the DNS records.
The current `List()` implementation just lists the DNS resource records in
a given managed zone once and returns the list. It neither performs a paged
walk nor does it consider the `page_token` in the returned response.

This change walks all the pages and aggregates the records from each page
into a single list, which is returned. This is potentially dangerous, as it
can blow up memory if there is a huge number of records in the given
managed zone, but it is the best we can do without changing the provider
interface too much. The next step is to define a new paged-list interface
and implement it.
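
A minimal sketch of such a paged walk, assuming the google.golang.org/api/dns/v1 client (the helper name is illustrative, not the PR's code):

```go
package main

import dns "google.golang.org/api/dns/v1"

// listAllRecords follows NextPageToken until the service stops returning
// one, aggregating every page's record sets into a single slice.
func listAllRecords(svc *dns.Service, project, zone string) ([]*dns.ResourceRecordSet, error) {
	var all []*dns.ResourceRecordSet
	pageToken := ""
	for {
		call := svc.ResourceRecordSets.List(project, zone)
		if pageToken != "" {
			call = call.PageToken(pageToken)
		}
		resp, err := call.Do()
		if err != nil {
			return nil, err
		}
		// This aggregation is what can blow up memory for huge zones.
		all = append(all, resp.Rrsets...)
		if resp.NextPageToken == "" {
			return all, nil
		}
		pageToken = resp.NextPageToken
	}
}
```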
2017-05-10 00:03:42 -07:00
Madhusudan.C.S 704d13bfc8 Modify the DNS provider Rrset.Get(name) interface to return multiple records and update federated service controller.
There can be multiple DNS resource records for a given name. They can
vary by type, TTL, rrdata, and a number of other parameters. It is
incorrect to return a single resource record for a given name.

This change updates the Get interface to return multiple records for a given
name and uses this list in the federated service controller to perform
DNS operations.
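
An illustrative before/after of the interface shape (types simplified stand-ins for the federation dnsprovider package, not its actual definitions):

```go
package dnsprovider

// ResourceRecordSet stands in for the provider's record-set type.
type ResourceRecordSet interface {
	Name() string
	Ttl() int64
	Type() string
}

// ResourceRecordSets shows the shape of the change: Get used to return a
// single record set; now it returns every record set sharing the name, and
// callers iterate the slice when computing DNS operations.
type ResourceRecordSets interface {
	// Before: Get(name string) (ResourceRecordSet, error)
	Get(name string) ([]ResourceRecordSet, error)
}
```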
2017-05-10 00:03:41 -07:00
Kubernetes Submit Queue 3fbfafdd0a Merge pull request #45523 from colemickens/cmpr-cpfix3
Automatic merge from submit-queue

azure: load balancer: support UDP, fix multiple loadBalancerSourceRanges support, respect sessionAffinity

**What this PR does / why we need it**:

1. Adds support for UDP ports
2. Fixes support for multiple `loadBalancerSourceRanges`
3. Adds support for the Service spec's `sessionAffinity`
4. Removes dead code from the Instances file

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #43683

**Special notes for your reviewer**: n/a

**Release note**:

```release-note
azure: add support for UDP ports
azure: fix support for multiple `loadBalancerSourceRanges`
azure: support the Service spec's `sessionAffinity`
```
2017-05-09 22:07:55 -07:00
Chao Xu a5fd6b91e7 generated 2017-05-09 21:28:39 -07:00
Kubernetes Submit Queue 0f3403d351 Merge pull request #45469 from jianglingxia/jlx-0508
Automatic merge from submit-queue

modify the outdated link

modify the outdated link
2017-05-09 21:16:26 -07:00
Kubernetes Submit Queue 148b5da60b Merge pull request #44746 from xiangpengzhao/fix-podpreset
Automatic merge from submit-queue

Add support for PodPreset in `kubectl get` command

**What this PR does / why we need it**:
PR title

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44736

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-09 21:16:17 -07:00
Kubernetes Submit Queue f7dcf7d8a7 Merge pull request #44987 from perotinus/syncworker
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)

[Federation] Add a worker queue to the generic sync controller.

This is in preparation for converting the ReplicaSet controller to be a generic sync controller.

This doesn't include support for multiple workers yet: it's not immediately obvious how to support the command-line flags for ReplicaSet (or, I suppose, in general, how TypeAdapters support external configuration via whatever flag mechanism we're using).
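
For reference, a hedged sketch of the single-worker loop over a client-go workqueue, the pattern being adopted here (the controller's actual wiring differs):

```go
package main

import "k8s.io/client-go/util/workqueue"

// runWorker drains the queue one item at a time, retrying failed keys
// with rate-limited backoff.
func runWorker(q workqueue.RateLimitingInterface, syncFn func(key string) error) {
	for {
		item, quit := q.Get()
		if quit {
			return
		}
		key := item.(string)
		if err := syncFn(key); err != nil {
			q.AddRateLimited(key) // retry later with backoff
		} else {
			q.Forget(key) // clear any rate-limit history for this key
		}
		q.Done(item)
	}
}
```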

cc @marun

**Release note**:
```release-note
NONE
```
2017-05-09 20:23:45 -07:00
Kubernetes Submit Queue 51a3413371 Merge pull request #45307 from yujuhong/mv-docker-client
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)

Migrate the docker client code from dockertools to dockershim

Move docker client code from dockertools to dockershim/libdocker. This includes
DockerInterface (renamed to Interface), FakeDockerClient, etc.

This is part of #43234
2017-05-09 20:23:44 -07:00
Kubernetes Submit Queue 61593ba8b8 Merge pull request #45453 from k82cn/k8s_45220
Automatic merge from submit-queue (batch tested with PRs 45453, 45307, 44987)

Init cache with assigned non-terminated pods before scheduling

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45220

**Release note**:

```release-note
The fix makes the scheduling goroutine wait for the cache (e.g. Pod) to be synced.
```
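
A minimal sketch of such a gate, assuming client-go's cache.WaitForCacheSync (function and parameter names are illustrative):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/cache"
)

// waitThenSchedule blocks the scheduling loop until the pod informer has
// synced, so the scheduler cache starts from the assigned, non-terminated
// pods rather than an empty view.
func waitThenSchedule(stopCh <-chan struct{}, podsSynced cache.InformerSynced, run func()) error {
	if !cache.WaitForCacheSync(stopCh, podsSynced) {
		return fmt.Errorf("timed out waiting for pod cache to sync")
	}
	run()
	return nil
}
```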
2017-05-09 20:23:37 -07:00
Lee Verberne f83337a8ac Fix AssertCalls usage for kubelet fake runtimes
Despite its name, AssertCalls() does not assert anything. It returns an
error that must be checked. This was causing false negatives for
a handful of unit tests.
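
A self-contained sketch of the pattern the fix enforces (the fake runtime here is a stand-in, not kubelet's actual type):

```go
package kubelet

import (
	"fmt"
	"reflect"
	"testing"
)

// fakeRuntime is a minimal stand-in for the kubelet fake runtimes.
type fakeRuntime struct{ calls []string }

// AssertCalls, despite its name, asserts nothing: it only returns an error.
func (f *fakeRuntime) AssertCalls(expected []string) error {
	if !reflect.DeepEqual(expected, f.calls) {
		return fmt.Errorf("expected calls %v, got %v", expected, f.calls)
	}
	return nil
}

func TestCallsAreChecked(t *testing.T) {
	f := &fakeRuntime{calls: []string{"SyncPod"}}
	// The bug fixed here: tests that dropped this error never failed.
	if err := f.AssertCalls([]string{"SyncPod"}); err != nil {
		t.Errorf("unexpected runtime calls: %v", err)
	}
}
```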
2017-05-10 01:40:58 +00:00
Kubernetes Submit Queue 7c3f8c9bcf Merge pull request #45181 from vmware/NodeAddressesIPV6IssueNew
Automatic merge from submit-queue

Filter out IPV6 addresses from NodeAddresses() returned by vSphere

The vSphere CP returns both IPV6 and IPV4 addresses for a Node as part of NodeAddresses() implementation. However, Kubelet fails due to duplicate api.NodeAddress value when the node has an IPV6 address associated with it. This issue is tracked in #42690. The following are observed:

- when we enabled the logs and checked the addresses sent by vSphere CP to Kubelet, we don't see any duplicate addresses at all.
- Also, kubelet_node_status doesn’t receive any duplicate address from cloud provider.

However, when we filter out the IPV6 addresses and only return IPV4 addresses to the Kubelet, it works perfectly fine. 

Even though the Kubelet receives non-duplicate node addresses, it still errors out with duplicate node addresses. It might be an issue in how the kubelet propagates these addresses to the API server, or the API server may be unable to handle IPv6 addresses.
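
A simplified sketch of the filtering workaround (the real code operates on api.NodeAddress values, not strings):

```go
package main

import "net"

// filterIPv4 drops anything that does not parse as an IPv4 address before
// the node addresses are reported.
func filterIPv4(addrs []string) []string {
	var out []string
	for _, a := range addrs {
		// To4 returns nil for IPv6 addresses, so only IPv4 survives.
		if ip := net.ParseIP(a); ip != nil && ip.To4() != nil {
			out = append(out, a)
		}
	}
	return out
}
```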

@divyenpatel @abrarshivani @pdhamdhere @tusharnt

**Release note**:

```release-note
None
```
2017-05-09 18:16:03 -07:00
Chao Xu 2f41a515e9 let hack/update-codecgen.sh include k8s.io/metrics 2017-05-09 18:05:23 -07:00
Chao Xu dec78eb9ae make client-go/pkg/api invisible to k8s.io/metrics; except for the fake
client, which will be fixed soon
2017-05-09 18:05:23 -07:00
Chao Xu 0b3eb50b39 Remove invocation of registry from custom_metrics/client.go 2017-05-09 18:05:22 -07:00
Chao Xu b5a41e770a remove unnecessary call to metrics install package
remove init and reference to client-go/api from metrics install package
2017-05-09 18:05:22 -07:00
Chao Xu 074affca6b copy interal ObjectReference to k8s.io/metrics 2017-05-09 18:05:22 -07:00
xiangpengzhao baafbf406e Add support for PodPreset in kubectl get command 2017-05-10 08:59:22 +08:00
xilabao 697efd1baf De-duplication in printer 2017-05-10 08:45:20 +08:00
Jonathan MacMillan 0f851bfa2e [Federation] Improve the logging and user feedback in 'kubefed init'. 2017-05-09 16:06:37 -07:00
Jonathan MacMillan 6856dad472 [Federation] Add a worker queue to the generic sync controller. 2017-05-09 15:40:42 -07:00
Matthew Wong bbe82a2688 Ensure desired state of world populator runs before volume reconstructor 2017-05-09 18:25:59 -04:00
Kubernetes Submit Queue 76889118d7 Merge pull request #45280 from JulienBalestra/run-pod-inside-unique-netns
Automatic merge from submit-queue

rkt: Generate a new Network Namespace for each Pod

**What this PR does / why we need it**:

This PR concerns the Kubelet with the Container runtime rkt.
Currently, when a Pod stops and the kubelet restarts it, the Pod will use the **same network namespace**, derived from its PodID.

When garbage collection is triggered, it deletes all the old resources along with the current network namespace.

The Pod and all containers inside it lose the _eth0_ interface.
I explained in more detail in #45149 how to reproduce this behavior.

This PR generates a new, unique network namespace name for each new/restarted Pod.
Garbage collection then retrieves the correct network namespace and removes it safely.
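
A sketch of one way to generate such a unique name, assuming a random suffix (the PR's actual naming scheme may differ):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newNetnsName returns a fresh, unique namespace name for every pod start,
// so a restarted pod never reuses a netns that garbage collection is about
// to delete.
func newNetnsName(podID string) (string, error) {
	suffix := make([]byte, 8)
	if _, err := rand.Read(suffix); err != nil {
		return "", err
	}
	return fmt.Sprintf("%s_%s", podID, hex.EncodeToString(suffix)), nil
}
```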

**Which issue this PR fixes** : 

fix #45149 

**Special notes for your reviewer**:

Following @yifan-gu guidelines, so maybe expecting him for the final review.

**Release note**:

`NONE`
2017-05-09 15:07:34 -07:00
Ryan Hitchman dd4bb1213d Escape "<>&" in apiserver errors to avoid triggering vulnerability scanners.
Simple XSS scans might fetch /<script>alert('vulnerable')</script>, and
fail when the response body includes the script tag verbatim, despite
the headers directing the browser to interpret the response as text.

This isn't a real vulnerability, but it's easier to fix this here than
it is to fix the scanners.
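
A minimal sketch of the escaping, using only the standard library (the handler name is illustrative):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// escaper neutralizes the characters that trip naive XSS scanners, even
// though the response is already served as plain text.
var escaper = strings.NewReplacer("&", "&amp;", "<", "&lt;", ">", "&gt;")

// writeForbidden is an illustrative error writer; the real change touches
// apiserver's error-response paths.
func writeForbidden(w http.ResponseWriter, msg string) {
	w.Header().Set("Content-Type", "text/plain")
	w.WriteHeader(http.StatusForbidden)
	fmt.Fprint(w, escaper.Replace(msg))
}
```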
2017-05-09 14:46:44 -07:00
Kubernetes Submit Queue aee07e9464 Merge pull request #45446 from zdj6373/cni
Automatic merge from submit-queue

CNI log changes

Revise an error log message in the CNI plugin.
2017-05-09 14:23:32 -07:00
Dhawal Patel 0e57b912a6 Update comment on ServiceAnnotationLoadBalancerInternal 2017-05-09 13:41:15 -07:00
Kubernetes Submit Queue b60d322c27 Merge pull request #44991 from aaronlevy/cns
Automatic merge from submit-queue

Skip inspecting pod network if unknown namespace

**What this PR does / why we need it**:

If we fail to determine the network namespace of a container, we still try to inspect the state, even though there is no way for that to succeed. This leads to errors like:

> NetworkPlugin cni failed on the status hook for pod "X": Unexpected command output nsenter: cannot open : No such file or directory

Instead, if we cannot determine the network namespace, we should just exit with a (hopefully) clearer error message.
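
A sketch of that early exit (function names are illustrative stand-ins):

```go
package main

import "fmt"

// inspectNetns stands in for the real nsenter-based IP lookup.
func inspectNetns(path string) (string, error) { return "", nil }

// podIP shows the early exit: with no netns path there is nothing nsenter
// could open, so fail with a clear message instead.
func podIP(netnsPath string) (string, error) {
	if netnsPath == "" {
		return "", fmt.Errorf("cannot find the network namespace, skipping pod network status (pod is likely terminated)")
	}
	return inspectNetns(netnsPath)
}
```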

I left the wording as assuming a terminated pod, based on:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/dockershim/helpers.go#L208-L211

ref: 
https://github.com/kubernetes-incubator/bootkube/issues/475
https://github.com/coreos/coreos-kubernetes/issues/856
2017-05-09 13:36:56 -07:00
Kubernetes Submit Queue 52e8d6b95c Merge pull request #45529 from wanghaoran1988/fix_issue_44476
Automatic merge from submit-queue

oidc auth plugin: do not override the Auth header if it already exists

**What this PR does / why we need it**:
The oidc auth client plugin should not override the `Authorization` header if it already exists.
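
A sketch of the fixed round-tripper behaviour (simplified from the plugin's actual code):

```go
package main

import "net/http"

// roundTripper only injects a bearer token when the caller has not already
// set an Authorization header.
type roundTripper struct {
	token   string
	wrapped http.RoundTripper
}

func (r *roundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	if len(req.Header.Get("Authorization")) != 0 {
		return r.wrapped.RoundTrip(req) // respect the existing header
	}
	// Per RoundTripper rules, copy the request before mutating it.
	r2 := new(http.Request)
	*r2 = *req
	r2.Header = make(http.Header, len(req.Header))
	for k, v := range req.Header {
		r2.Header[k] = v
	}
	r2.Header.Set("Authorization", "Bearer "+r.token)
	return r.wrapped.RoundTrip(r2)
}
```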
**Which issue this PR fixes** : 
fixes the oidc auth plugin overriding the `Authorization` header
**Special notes for your reviewer**:

**Release note**:
2017-05-09 12:52:53 -07:00
Kubernetes Submit Queue 6f5337cff7 Merge pull request #45527 from gyliu513/statefulset
Automatic merge from submit-queue (batch tested with PRs 45304, 45006, 45527)

Fixed indentation of some StatefulSet specs in e2e tests.

**What this PR does / why we need it**:
Make sure the e2e tests pass for StatefulSet.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #45526

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-09 12:04:45 -07:00
Kubernetes Submit Queue caf47282d1 Merge pull request #45006 from feiskyer/hostipc
Automatic merge from submit-queue (batch tested with PRs 45304, 45006, 45527)

Add node e2e tests for hostIPC

**What this PR does / why we need it**:

Add node e2e tests for hostIPC.

**Which issue this PR fixes**

Part of #44118.

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```

/assign @Random-Liu @yujuhong
2017-05-09 12:04:43 -07:00
Kubernetes Submit Queue f8f9d7db93 Merge pull request #45304 from deads2k/controller-03-ns-discovery
Automatic merge from submit-queue (batch tested with PRs 45304, 45006, 45527)

increase the QPS for namespace controller

The namespace controller is really chatty, especially to discovery, since that involves two requests for every available API version. This bumps the QPS and burst on the namespace controller so it doesn't get stuck waiting.
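
A sketch of the idea, with assumed 10x factors (the PR's exact values may differ):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// namespaceControllerClient gives the namespace controller its own client
// with raised rate limits, leaving other controllers at the defaults.
func namespaceControllerClient(base *rest.Config) (*kubernetes.Clientset, error) {
	cfg := *base // shallow copy so the shared config is untouched
	cfg.QPS = base.QPS * 10
	cfg.Burst = base.Burst * 10
	return kubernetes.NewForConfig(&cfg)
}
```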
2017-05-09 12:04:41 -07:00
Klaus Ma 3278de723a generated client-go. 2017-05-10 01:50:38 +08:00
Klaus Ma 7bf698a2c8 generated codes. 2017-05-10 01:50:38 +08:00
Klaus Ma c78faec4ff Initialize scheduler cache with assigned non-terminated pods before scheduling. 2017-05-10 01:50:38 +08:00
Kubernetes Submit Queue 202a9f8445 Merge pull request #42317 from NickrenREN/attach-detach-error-info-print
Automatic merge from submit-queue

add and clear error messages around RemoveVolumeFromReportAsAttached()

**Release note**:

```release-note
NONE
```
2017-05-09 10:44:32 -07:00
Jacek N b61fd20cb2 Don't append :443 to registry domain in the kubernetes-worker layer registry action. Fixes #45547 2017-05-09 16:37:09 +01:00
Kubernetes Submit Queue 97889d4ff9 Merge pull request #45432 from deads2k/agg-30-status
Automatic merge from submit-queue (batch tested with PRs 44798, 45537, 45448, 45432)

use apiservice.status to break apart controller and handling concerns

Still needs tests.

This starts breaking apart the handler and controller aspects of the aggregator by making use of status and conditions instead of actually running a specific check on demand.
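
A sketch of the handler side under the new split, assuming the Available condition and apiregistration types this PR introduces:

```go
package main

import "k8s.io/kube-aggregator/pkg/apis/apiregistration"

// isAvailable consults the recorded Available condition instead of probing
// the backing service on demand; the controller keeps the condition fresh.
func isAvailable(s *apiregistration.APIService) bool {
	for _, c := range s.Status.Conditions {
		if c.Type == apiregistration.Available {
			return c.Status == apiregistration.ConditionTrue
		}
	}
	return false
}
```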

@kubernetes/sig-api-machinery-pr-reviews 
@luxas since you've been asking
2017-05-09 08:29:40 -07:00
Kubernetes Submit Queue fc28762671 Merge pull request #45448 from zhangxiaoyu-zidif/cleancode-nfs-return-err
Automatic merge from submit-queue (batch tested with PRs 44798, 45537, 45448, 45432)

nfs.go: clean up error returns

**What this PR does / why we need it**:
The modification makes the code cleaner, simpler, and easier to inspect.

**Release note**:

```release-note
NONE
```
2017-05-09 08:29:37 -07:00
Kubernetes Submit Queue 93adcbd02d Merge pull request #45537 from shyamjvs/yolo
Automatic merge from submit-queue (batch tested with PRs 44798, 45537, 45448, 45432)

Stream output of run-gcloud-compute-with-retries to stdout in realtime

Ref https://github.com/kubernetes/kubernetes/issues/40139#issuecomment-299894222 (3rd point)
This should help us get more info about timeouts during start-kubemark-master.sh.

cc @wojtek-t @gmarek
2017-05-09 08:29:35 -07:00
Kubernetes Submit Queue 49626c975b Merge pull request #44798 from zetaab/master
Automatic merge from submit-queue

Statefulsets for cinder: allow multi-AZ deployments, spread pods across zones

**What this PR does / why we need it**: Currently, if we do not specify an availability zone in the cinder storage class, the cinder volume is provisioned to a zone called nova. However, as mentioned in the issue, we have a situation where we want to spread a StatefulSet across 3 different zones, which is currently not possible with StatefulSets and the cinder storage class. With this new solution, if we leave the zone empty, the algorithm chooses the zone for the cinder drive in a similar style to the AWS and GCE storage class solutions.

**Which issue this PR fixes** fixes #44735

**Special notes for your reviewer**:

example:

```
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: all
provisioner: kubernetes.io/cinder
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: galera
  labels:
    app: mysql
spec:
  ports:
  - port: 3306
    name: mysql
  clusterIP: None
  selector:
    app: mysql
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "galera"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: adfinissygroup/k8s-mariadb-galera-centos:v002
        imagePullPolicy: Always
        ports:
        - containerPort: 3306
          name: mysql
        - containerPort: 4444
          name: sst
        - containerPort: 4567
          name: replication
        - containerPort: 4568
          name: ist
        volumeMounts:
        - name: storage
          mountPath: /data
        readinessProbe:
          exec:
            command:
            - /usr/share/container-scripts/mysql/readiness-probe.sh
          initialDelaySeconds: 15
          timeoutSeconds: 5
        env:
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
  volumeClaimTemplates:
  - metadata:
      name: storage
      annotations:
        volume.beta.kubernetes.io/storage-class: all
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 12Gi
```

If this example is deployed, it will automatically create one replica per AZ, which helps us a lot in building HA databases.
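
For reference, a simplified sketch of the AWS/GCE-style zone-choosing algorithm referred to above (the real helper also spreads StatefulSet ordinals like mysql-0, mysql-1 across consecutive zones):

```go
package main

import (
	"hash/fnv"
	"sort"
)

// chooseZone hashes the PVC name onto a stable, sorted zone list so that
// volumes (and the pods bound to them) spread deterministically.
func chooseZone(zones []string, pvcName string) string {
	sort.Strings(zones)
	h := fnv.New32a()
	h.Write([]byte(pvcName))
	return zones[int(h.Sum32())%len(zones)]
}
```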

The current storage class for cinder is not ideal for StatefulSets. Let's assume the cinder storage class is defined to be in the zone called nova; because labels are not added to the PV, pods can be started in any zone. The problem is that, at least in our OpenStack, it is not possible to use a cinder drive located in zone x from zone y. However, should we have the possibility to choose cross-zone cinder mounts or not? IMO it is not a good way of doing things to mount a volume from a zone other than where the pod is located (it means more network traffic between zones). What do you think? The new solution does not allow that anymore (should we allow it? that would mean removing the labels from the PV).

There might be some things that still need to be fixed in this release, and I need help with that. Some parts of the code are not perfect.

Issues I am thinking about (I need some help with these):
1) Can everybody see in OpenStack which AZ their servers are in? Could there be an access policy that hides that? If the AZ is not found in the server specs, I have no idea how the code behaves.
2) In the GetAllZones() function, is it really necessary to make a new service client using openstack.NewComputeV2, or could I somehow use the existing one?
3) This fetches all servers from an OpenStack tenant (project). However, in some cases Kubernetes may be deployed only to a specific zone. If the kube servers are located, for instance, in zone 1, and there are other servers in the same tenant in zone 2, there might be a use case where a cinder drive is provisioned to zone-2 but the pod cannot start, because Kubernetes has no nodes in zone-2. Could we have a better way to fetch the zones of Kubernetes nodes? Currently that information is not added to Kubernetes node labels automatically in OpenStack (which I think it should be); I have added those labels to the nodes manually. If that zone information is not added to the nodes, the new solution does not start stateful pods at all, because it cannot place the pods.


cc @rootfs @anguslees @jsafrane 

```release-note
Default behaviour in the cinder storage class is changed: if availability zone is not specified, the zone is chosen by an algorithm. This makes it possible to spread stateful pods across many zones.
```
2017-05-09 08:10:44 -07:00
Kubernetes Submit Queue 49e5435529 Merge pull request #45403 from sttts/sttts-tri-state-watch-capacity
Automatic merge from submit-queue

apiserver: injectable default watch cache size

This makes it possible to override the default watch capacity in the REST options getter. Before this PR, the default was written into the storage struct explicitly, so the REST options getter couldn't tell whether the value was the default. With this PR, the default is applied late and can be injected from the outside.
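
An illustrative shape of "apply the default late" (names are assumptions, not the actual apiserver types):

```go
package main

// storageConfig keeps the size as a tri-state pointer so the REST options
// getter can still tell whether anything was set explicitly.
type storageConfig struct {
	// nil means "not set"; an explicit 0 means "watch cache disabled".
	WatchCacheSize *int
}

// effectiveWatchCacheSize applies the injected default only when nothing
// was set, i.e. at the last possible moment.
func (c *storageConfig) effectiveWatchCacheSize(injectedDefault int) int {
	if c.WatchCacheSize == nil {
		return injectedDefault
	}
	return *c.WatchCacheSize
}
```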
2017-05-09 07:27:35 -07:00
Dr. Stefan Schimanski 7a06299f4a apitesting: external serialization roundtrip test 2017-05-09 16:10:08 +02:00
deads2k 272aa2434d start using apiservice status in controllers and serving 2017-05-09 09:52:51 -04:00
Kubernetes Submit Queue 110f410e55 Merge pull request #45463 from nilebox/nilebox-tpr-watcher-example
Automatic merge from submit-queue (batch tested with PRs 45481, 45463)

ThirdPartyResource example: added watcher example, code cleanup

**NOTE**: This is a cleaned and updated version of PR https://github.com/kubernetes/kubernetes/pull/43027

**What this PR does / why we need it**:
An example of using client-go to watch ThirdPartyResource events (create/update/delete).
2017-05-09 06:52:34 -07:00
deads2k b976881752 add apiservices/status REST handling 2017-05-09 09:44:27 -04:00
Kubernetes Submit Queue 02d75cb453 Merge pull request #45481 from CaoShuFeng/xtables/lock
Automatic merge from submit-queue

Remove leaked tmp file in unit tests

Some unit tests leave a temp file in the work space:
pkg/util/iptables/xtables.lock
This patch removes that file.
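
A sketch of the cleanup pattern (path and test name are illustrative):

```go
package iptables

import (
	"os"
	"testing"
)

// TestExample ensures the lock file taken during the test is removed when
// the test finishes, instead of leaking into the work space.
func TestExample(t *testing.T) {
	const lockfile = "xtables.lock"
	defer os.Remove(lockfile)
	// ... exercise the iptables code that creates the lock file ...
}
```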
@dcbw 
**Release note**:

```release-note
NONE
```
2017-05-09 06:40:31 -07:00
Dr. Stefan Schimanski b7146bca19 Add myself to client-go OWNERS 2017-05-09 13:45:02 +02:00
Shyam Jeedigunta 0759289dcf Stream output of run-gcloud-compute-with-retries to stdout in realtime 2017-05-09 13:44:48 +02:00
Nail Islamov a6c97715ed ThirdPartyResource client-go example: added TPR controller example, code cleanup and integration test 2017-05-09 21:31:39 +10:00
Kubernetes Submit Queue d602ea69dc Merge pull request #45295 from rootfs/vol-owner
Automatic merge from submit-queue

add rootfs gnufied and childsb to volume approver

**What this PR does / why we need it**:
add me, @gnufied, and @childsb as volume approvers
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-09 04:13:00 -07:00