Change minion to node

Continuation of #1111

I tried to keep this PR down to a simple search-and-replace to keep
things simple. I may have gone too far in some spots, but it's easy to
roll those back if needed.

I avoided renaming `contrib/mesos/pkg/minion` because there's already
a `contrib/mesos/pkg/node` dir and fixing that will require a bit of work
due to a circular import chain that pops up. So I'm saving that for a
follow-on PR.
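
For illustration only (hypothetical packages, not the actual contrib/mesos code), here is a minimal sketch of the kind of cycle described above; Go refuses to build such a pair of packages, which is why the rename needs untangling in its own PR:

```go
// a/a.go: hypothetical stand-in for a package like contrib/mesos/pkg/node.
package a

import "example.com/demo/b" // a depends on b

var DefaultName = "node"

func FromB() string { return b.Describe() }

// b/b.go: hypothetical stand-in for the package that would absorb the rename.
package b

import "example.com/demo/a" // b depends back on a, closing the cycle;
// `go build` fails here with "import cycle not allowed".

func Describe() string { return a.DefaultName }
```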

I rolled back some of this from a previous commit because it just got
too big/messy. I will follow up with additional PRs.

Signed-off-by: Doug Davis <dug@us.ibm.com>
Doug Davis 2016-05-05 13:41:49 -07:00
parent 5af1b25235
commit 9d5bac6330
40 changed files with 195 additions and 186 deletions


@ -16760,7 +16760,7 @@
},
"v1.Node": {
"id": "v1.Node",
"description": "Node is a worker node in Kubernetes, formerly known as minion. Each node will have a unique identifier in the cache (i.e. in etcd).",
"description": "Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd).",
"properties": {
"kind": {
"type": "string",


@ -481,7 +481,7 @@ span.icon > [class^="icon-"], span.icon > [class*=" icon-"] { cursor: default; }
<div class="sect2">
<h3 id="_v1_node">v1.Node</h3>
<div class="paragraph">
<p>Node is a worker node in Kubernetes, formerly known as minion. Each node will have a unique identifier in the cache (i.e. in etcd).</p>
<p>Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd).</p>
</div>
<table class="tableblock frame-all grid-all" style="width:100%; ">
<colgroup>


@ -176,7 +176,7 @@ Some important differences between v1beta1/2 and v1beta3:
* The resource `id` is now called `name`.
* `name`, `labels`, `annotations`, and other metadata are now nested in a map called `metadata`
* `desiredState` is now called `spec`, and `currentState` is now called `status`
* `/minions` has been moved to `/nodes`, and the resource has kind `Node`
* `/minions` has been renamed to `/nodes`, and the resource has kind `Node`
* The namespace is required (for all namespaced resources) and has moved from a URL parameter to the path: `/api/v1beta3/namespaces/{namespace}/{resource_collection}/{resource_name}`. If you were not using a namespace before, use `default` here.
* The names of all resource collections are now lower cased - instead of `replicationControllers`, use `replicationcontrollers`.
* To watch for changes to a resource, open an HTTP or Websocket connection to the collection query and provide the `?watch=true` query parameter along with the desired `resourceVersion` parameter to watch from.
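
A small Go sketch (editorial illustration, not part of this diff) of the path layout those bullets describe; the server address, namespace, and names below are placeholders:

```go
package v1beta3paths

import (
	"net/url"
	"path"
)

// resourceURL builds a v1beta3-style URL: the namespace sits in the path and
// collection names are lower case, e.g.
// http://localhost:8080/api/v1beta3/namespaces/default/replicationcontrollers/my-rc
func resourceURL(server, namespace, collection, name string) string {
	return server + path.Join("/api/v1beta3/namespaces", namespace, collection, name)
}

// watchURL adds the ?watch=true and resourceVersion query parameters used to
// watch a collection for changes from a known resource version.
func watchURL(server, namespace, collection, resourceVersion string) string {
	q := url.Values{}
	q.Set("watch", "true")
	q.Set("resourceVersion", resourceVersion)
	return server + path.Join("/api/v1beta3/namespaces", namespace, collection) + "?" + q.Encode()
}
```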


@ -64,7 +64,7 @@ you manually created or configured your cluster.
### Architecture overview
Kubernetes is a cluster of several machines that consists of a Kubernetes
master and a set number of nodes (previously known as 'minions') for which the
master and a set number of nodes for which the
master is responsible. See the [Architecture](architecture.md) topic for
more details.
@ -161,7 +161,7 @@ Note that we do not automatically open NodePort services in the AWS firewall
NodePort services are more of a building block for things like inter-cluster
services or for LoadBalancer. To consume a NodePort service externally, you
will likely have to open the port in the node security group
(`kubernetes-minion-<clusterid>`).
(`kubernetes-node-<clusterid>`).
For SSL support, starting with 1.3 two annotations can be added to a service:
@ -194,7 +194,7 @@ modifying the headers.
kube-up sets up two IAM roles, one for the master called
[kubernetes-master](../../cluster/aws/templates/iam/kubernetes-master-policy.json)
and one for the nodes called
[kubernetes-minion](../../cluster/aws/templates/iam/kubernetes-minion-policy.json).
[kubernetes-node](../../cluster/aws/templates/iam/kubernetes-minion-policy.json).
The master is responsible for creating ELBs and configuring them, as well as
setting up advanced VPC routing. Currently it has blanket permissions on EC2,
@ -242,7 +242,7 @@ HTTP URLs are passed to instances; this is how Kubernetes code gets onto the
machines.
* Creates two IAM profiles based on templates in [cluster/aws/templates/iam](../../cluster/aws/templates/iam/):
* `kubernetes-master` is used by the master.
* `kubernetes-minion` is used by nodes.
* `kubernetes-node` is used by nodes.
* Creates an AWS SSH key named `kubernetes-<fingerprint>`. Fingerprint here is
the OpenSSH key fingerprint, so that multiple users can run the script with
different keys and their keys will not collide (with near-certainty). It will
@ -265,7 +265,7 @@ The debate is open here, where cluster-per-AZ is discussed as more robust but
cross-AZ-clusters are more convenient.
* Associates the subnet to the route table
* Creates security groups for the master (`kubernetes-master-<clusterid>`)
and the nodes (`kubernetes-minion-<clusterid>`).
and the nodes (`kubernetes-node-<clusterid>`).
* Configures security groups so that masters and nodes can communicate. This
includes intercommunication between masters and nodes, opening SSH publicly
for both masters and nodes, and opening port 443 on the master for the HTTPS
@ -281,8 +281,8 @@ information that must be passed in this way.
routing rule for the internal network range (`MASTER_IP_RANGE`, defaults to
10.246.0.0/24).
* For auto-scaling, on each node it creates a launch configuration and group.
The name for both is <*KUBE_AWS_INSTANCE_PREFIX*>-minion-group. The default
name is kubernetes-minion-group. The auto-scaling group has a min and max size
The name for both is <*KUBE_AWS_INSTANCE_PREFIX*>-node-group. The default
name is kubernetes-node-group. The auto-scaling group has a min and max size
that are both set to NUM_NODES. You can change the size of the auto-scaling
group to add or remove the total number of nodes from within the AWS API or
Console. Each node self-configures, meaning that it comes up, runs Salt with


@ -170,10 +170,10 @@ Sample kubectl output:
```console
FIRSTSEEN LASTSEEN COUNT NAME KIND SUBOBJECT REASON SOURCE MESSAGE
Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-node-4.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-4.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-1.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-1.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-3.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-3.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-2.c.saad-dev-vms.internal Minion starting {kubelet kubernetes-node-2.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:02 +0000 Thu, 12 Feb 2015 01:13:02 +0000 1 kubernetes-node-4.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-4.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-1.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-1.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-3.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-3.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:09 +0000 Thu, 12 Feb 2015 01:13:09 +0000 1 kubernetes-node-2.c.saad-dev-vms.internal Node starting {kubelet kubernetes-node-2.c.saad-dev-vms.internal} Starting kubelet.
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-influx-grafana-controller-0133o Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 elasticsearch-logging-controller-fplln Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods
Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 kibana-logging-controller-gziey Pod failedScheduling {scheduler } Error scheduling: no nodes available to schedule pods


@ -1182,7 +1182,7 @@ than capitalization of the initial letter, the two should almost always match.
No underscores nor dashes in either.
* Field and resource names should be declarative, not imperative (DoSomething,
SomethingDoer, DoneBy, DoneAt).
* `Minion` has been deprecated in favor of `Node`. Use `Node` where referring to
* Use `Node` where referring to
the node resource in the context of the cluster. Use `Host` where referring to
properties of the individual physical/virtual system, such as `hostname`,
`hostPath`, `hostNetwork`, etc.
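
To make the Node-vs-Host split concrete, a hedged editorial sketch (not part of this diff), assuming the field layout of the repo's internal `pkg/api` types: `NodeName` refers to the Node resource in the cluster, while the `hostPath` volume describes a property of the underlying machine.

```go
package conventions

import "k8s.io/kubernetes/pkg/api"

// examplePod illustrates the naming convention: NodeName points at the Node
// resource (cluster scope), while the volume's HostPath is a property of the
// individual physical/virtual host.
var examplePod = api.Pod{
	ObjectMeta: api.ObjectMeta{Name: "log-reader"},
	Spec: api.PodSpec{
		NodeName: "kubernetes-node-abcd", // the Node resource this pod is bound to
		Volumes: []api.Volume{{
			Name: "host-logs",
			VolumeSource: api.VolumeSource{
				// host-level property, hence "host" rather than "node" in the name
				HostPath: &api.HostPathVolumeSource{Path: "/var/log"},
			},
		}},
	},
}
```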


@ -371,8 +371,8 @@ provisioned.
#### I have Vagrant up but the nodes won't validate!
Log on to one of the nodes (`vagrant ssh node-1`) and inspect the salt minion
log (`sudo cat /var/log/salt/minion`).
Log on to one of the nodes (`vagrant ssh node-1`) and inspect the Salt minion
log (`sudo cat /var/log/salt/minion`).
#### I want to change the number of nodes!


@ -92,9 +92,9 @@ Use the `examples/guestbook-go/redis-master-controller.json` file to create a [r
4. To verify what containers are running in the redis-master pod, you can SSH to that machine with `gcloud compute ssh --zone` *`zone_name`* *`host_name`* and then run `docker ps`:
```console
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-bz1p
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-bz1p
me@kubernetes-minion-3:~$ sudo docker ps
me@kubernetes-node-3:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
d5c458dabe50 redis "/entrypoint.sh redis" 5 minutes ago Up 5 minutes
```


@ -322,7 +322,7 @@ You can get information about a pod, including the machine that it is running on
```console
$ kubectl describe pods redis-master-2353460263-1ecey
Name: redis-master-2353460263-1ecey
Node: kubernetes-minion-m0k7/10.240.0.5
Node: kubernetes-node-m0k7/10.240.0.5
...
Labels: app=redis,pod-template-hash=2353460263,role=master,tier=backend
Status: Running
@ -337,7 +337,7 @@ Containers:
...
```
The `Node` is the name and IP of the machine, e.g. `kubernetes-minion-m0k7` in the example above. You can find more details about this node with `kubectl describe nodes kubernetes-minion-m0k7`.
The `Node` is the name and IP of the machine, e.g. `kubernetes-node-m0k7` in the example above. You can find more details about this node with `kubectl describe nodes kubernetes-node-m0k7`.
If you want to view the container logs for a given pod, you can run:
@ -356,7 +356,7 @@ me@workstation$ gcloud compute ssh <NODE-NAME>
Then, you can look at the Docker containers on the remote machine. You should see something like this (the specifics of the IDs will be different):
```console
me@kubernetes-minion-krxw:~$ sudo docker ps
me@kubernetes-node-krxw:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
0ffef9649265 redis:latest "/entrypoint.sh redi" About a minute ago Up About a minute k8s_master.869d22f3_redis-master-dz33o_default_1449a58a-5ead-11e5-a104-688f84ef8ef6_d74cb2b5
@ -718,10 +718,10 @@ NAME REGION IP_ADDRESS IP_PROTOCOL TARGET
frontend us-central1 130.211.188.51 TCP us-central1/targetPools/frontend
```
In Google Compute Engine, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion` (replace with your tags as appropriate):
In Google Compute Engine, you also may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-node` (replace with your tags as appropriate):
```console
$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-minion kubernetes-minion-80
$ gcloud compute firewall-rules create --allow=tcp:80 --target-tags=kubernetes-node kubernetes-node-80
```
For GCE Kubernetes startup details, see the [Getting started on Google Compute Engine](../../docs/getting-started-guides/gce.md)


@ -143,12 +143,12 @@ kubectl get -o template po wildfly-rc-w2kk5 --template={{.status.podIP}}
10.246.1.23
```
Log in to minion and access the application:
Log in to the node and access the application:
```sh
vagrant ssh minion-1
vagrant ssh node-1
Last login: Thu Jul 16 00:24:36 2015 from 10.0.2.2
[vagrant@kubernetes-minion-1 ~]$ curl http://10.246.1.23:8080/employees/resources/employees/
[vagrant@kubernetes-node-1 ~]$ curl http://10.246.1.23:8080/employees/resources/employees/
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
```


@ -180,7 +180,7 @@ You will have to open up port 80 if it's not open yet in your
environment. On Google Compute Engine, you may run the below command.
```
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-minion
gcloud compute firewall-rules create meteor-80 --allow=tcp:80 --target-tags kubernetes-node
```
What is going on?


@ -59,7 +59,7 @@ $ vi cluster/saltbase/pillar/privilege.sls
allow_privileged: true
```
Now spin up a cluster using your preferred KUBERNETES_PROVIDER. Remember that `kube-up.sh` may start other pods on your minion nodes, so ensure that you have enough resources to run the five pods for this example.
Now spin up a cluster using your preferred KUBERNETES_PROVIDER. Remember that `kube-up.sh` may start other pods on your nodes, so ensure that you have enough resources to run the five pods for this example.
```sh


@ -160,7 +160,7 @@ phabricator-controller-9vy68 1/1 Running 0 1m
If you ssh to that machine, you can run `docker ps` to see the actual pod:
```sh
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-2
me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-node-2
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
@ -230,10 +230,10 @@ and then visit port 80 of that IP address.
**Note**: Provisioning of the external IP address may take few minutes.
**Note**: You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`:
**Note**: You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-node`:
```sh
$ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion
$ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-node
```
### Step Six: Cleanup


@ -16,5 +16,5 @@
echo "Create Phabricator replication controller" && kubectl create -f phabricator-controller.json
echo "Create Phabricator service" && kubectl create -f phabricator-service.json
echo "Create firewall rule" && gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion
echo "Create firewall rule" && gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-node


@ -79,13 +79,13 @@ $ cluster/kubectl.sh run cpuhog \
-- md5sum /dev/urandom
```
This will create a single pod on your minion that requests 1/10 of a CPU, but it has no limit on how much CPU it may actually consume
This will create a single pod on your node that requests 1/10 of a CPU, but it has no limit on how much CPU it may actually consume
on the node.
To demonstrate this, if you SSH into your machine, you will see it is consuming as much CPU as possible on the node.
```
$ vagrant ssh minion-1
$ vagrant ssh node-1
$ sudo docker stats $(sudo docker ps -q)
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O
6b593b1a9658 0.00% 1.425 MB/1.042 GB 0.14% 1.038 kB/738 B
@ -150,7 +150,7 @@ $ cluster/kubectl.sh run cpuhog \
Let's SSH into the node, and look at usage stats.
```
$ vagrant ssh minion-1
$ vagrant ssh node-1
$ sudo su
$ docker stats $(docker ps -q)
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O


@ -88,18 +88,18 @@ And kubectl get nodes should agree:
```
$ kubectl get nodes
NAME LABELS STATUS
eu-minion-0n61 kubernetes.io/hostname=eu-minion-0n61 Ready
eu-minion-79ua kubernetes.io/hostname=eu-minion-79ua Ready
eu-minion-7wz7 kubernetes.io/hostname=eu-minion-7wz7 Ready
eu-minion-loh2 kubernetes.io/hostname=eu-minion-loh2 Ready
eu-node-0n61 kubernetes.io/hostname=eu-node-0n61 Ready
eu-node-79ua kubernetes.io/hostname=eu-node-79ua Ready
eu-node-7wz7 kubernetes.io/hostname=eu-node-7wz7 Ready
eu-node-loh2 kubernetes.io/hostname=eu-node-loh2 Ready
$ kubectl config use-context <clustername_us>
$ kubectl get nodes
NAME LABELS STATUS
kubernetes-minion-5jtd kubernetes.io/hostname=kubernetes-minion-5jtd Ready
kubernetes-minion-lqfc kubernetes.io/hostname=kubernetes-minion-lqfc Ready
kubernetes-minion-sjra kubernetes.io/hostname=kubernetes-minion-sjra Ready
kubernetes-minion-wul8 kubernetes.io/hostname=kubernetes-minion-wul8 Ready
kubernetes-node-5jtd kubernetes.io/hostname=kubernetes-node-5jtd Ready
kubernetes-node-lqfc kubernetes.io/hostname=kubernetes-node-lqfc Ready
kubernetes-node-sjra kubernetes.io/hostname=kubernetes-node-sjra Ready
kubernetes-node-wul8 kubernetes.io/hostname=kubernetes-node-wul8 Ready
```
## Testing reachability
@ -207,10 +207,10 @@ $ kubectl exec -it kubectl-tester bash
kubectl-tester $ kubectl get nodes
NAME LABELS STATUS
eu-minion-0n61 kubernetes.io/hostname=eu-minion-0n61 Ready
eu-minion-79ua kubernetes.io/hostname=eu-minion-79ua Ready
eu-minion-7wz7 kubernetes.io/hostname=eu-minion-7wz7 Ready
eu-minion-loh2 kubernetes.io/hostname=eu-minion-loh2 Ready
eu-node-0n61 kubernetes.io/hostname=eu-node-0n61 Ready
eu-node-79ua kubernetes.io/hostname=eu-node-79ua Ready
eu-node-7wz7 kubernetes.io/hostname=eu-node-7wz7 Ready
eu-node-loh2 kubernetes.io/hostname=eu-node-loh2 Ready
```
For a more advanced example of sharing clusters, see the [service-loadbalancer](https://github.com/kubernetes/contrib/tree/master/service-loadbalancer/README.md)


@ -70,7 +70,7 @@ gs://kubernetes-jenkins/logs/kubernetes-e2e-gce/
gcp-resources-{before, after}.txt
junit_{00, 01, ...}.xml
jenkins-e2e-master/{kube-apiserver.log, ...}
jenkins-e2e-minion-abcd/{kubelet.log, ...}
jenkins-e2e-node-abcd/{kubelet.log, ...}
12344/
...
```


@ -44,7 +44,6 @@ cluster/saltbase/salt/etcd/etcd.manifest: "value": "{{ storage_backend }}
cluster/saltbase/salt/etcd/etcd.manifest:{% set storage_backend = pillar.get('storage_backend', 'etcd2') -%}
cluster/saltbase/salt/kube-admission-controls/init.sls:{% if 'LimitRanger' in pillar.get('admission_control', '') %}
cluster/saltbase/salt/kube-apiserver/kube-apiserver.manifest:{% set params = address + " " + storage_backend + " " + etcd_servers + " " + etcd_servers_overrides + " " + cloud_provider + " " + cloud_config + " " + runtime_config + " " + feature_gates + " " + admission_control + " " + target_ram_mb + " " + service_cluster_ip_range + " " + client_ca_file + basic_auth_file + " " + min_request_timeout + " " + enable_garbage_collector -%}
cluster/saltbase/salt/kube-apiserver/kube-apiserver.manifest:{% set params = address + " " + storage_backend + " " + etcd_servers + " " + etcd_servers_overrides + " " + cloud_provider + " " + cloud_config + " " + runtime_config + " " + admission_control + " " + target_ram_mb + " " + service_cluster_ip_range + " " + client_ca_file + basic_auth_file + " " + min_request_timeout + " " + enable_garbage_collector -%}
cluster/saltbase/salt/kube-controller-manager/kube-controller-manager.manifest:{% set params = "--master=127.0.0.1:8080" + " " + cluster_name + " " + cluster_cidr + " " + allocate_node_cidrs + " " + service_cluster_ip_range + " " + terminated_pod_gc + " " + enable_garbage_collector + " " + cloud_provider + " " + cloud_config + " " + service_account_key + " " + log_level + " " + root_ca_file -%}
cluster/saltbase/salt/kube-controller-manager/kube-controller-manager.manifest:{% set params = params + " " + feature_gates -%}
cluster/saltbase/salt/kube-controller-manager/kube-controller-manager.manifest:{% if pillar.get('enable_hostpath_provisioner', '').lower() == 'true' -%}


@ -14,6 +14,7 @@ api-external-dns-names
api-burst
api-prefix
api-rate
apiserver-count
api-server-port
api-servers
api-token
@ -31,9 +32,12 @@ authentication-token-webhook-config-file
authorization-mode
authorization-policy-file
authorization-rbac-super-user
authorization-webhook-config-file
authorization-webhook-cache-authorized-ttl
authorization-webhook-cache-unauthorized-ttl
authorization-webhook-config-file
auth-path
auth-provider
auth-provider-arg
babysit-daemons
basic-auth-file
bench-pods
@ -50,8 +54,8 @@ build-tag
cadvisor-port
cert-dir
certificate-authority
cgroups-per-qos
cgroup-root
cgroups-per-qos
chaos-chance
clean-start
cleanup
@ -68,21 +72,21 @@ cluster-cidr
cluster-dns
cluster-domain
cluster-ip
cluster-name
cluster-tag
cluster-monitor-period
cluster-name
cluster-signing-cert-file
cluster-signing-key-file
cni-bin-dir
cni-conf-dir
cluster-tag
concurrent-deployment-syncs
concurrent-endpoint-syncs
concurrent-gc-syncs
concurrent-namespace-syncs
concurrent-replicaset-syncs
concurrent-service-syncs
concurrent-resource-quota-syncs
concurrent-serviceaccount-token-syncs
concurrent-gc-syncs
concurrent-service-syncs
config-sync-period
configure-cbr0
configure-cloud-routes
@ -93,10 +97,10 @@ conntrack-tcp-timeout-established
consumer-port
consumer-service-name
consumer-service-namespace
contain-pod-resources
container-port
container-runtime
container-runtime-endpoint
contain-pod-resources
controller-start-interval
cors-allowed-origins
cpu-cfs-quota
@ -124,13 +128,13 @@ disable-kubenet
dns-port
dns-provider
dns-provider-config
dockercfg-path
docker-email
docker-endpoint
docker-exec-handler
docker-password
docker-server
docker-username
dockercfg-path
driver-port
drop-embedded-fields
dry-run
@ -141,8 +145,9 @@ e2e-verify-service-account
enable-controller-attach-detach
enable-custom-metrics
enable-debugging-handlers
enable-garbage-collector
enable-dynamic-provisioning
enable-garbage-collector
enable-garbage-collector
enable-hostpath-provisioner
enable-server
enable-swagger-ui
@ -162,11 +167,11 @@ event-burst
event-qps
event-ttl
eviction-hard
eviction-soft
eviction-soft-grace-period
eviction-pressure-transition-period
eviction-max-pod-grace-period
eviction-minimum-reclaim
eviction-pressure-transition-period
eviction-soft
eviction-soft-grace-period
executor-bindall
executor-logv
executor-path
@ -195,8 +200,8 @@ federated-api-qps
federated-kube-context
federation-name
file-check-frequency
file-suffix
file_content_in_loop
file-suffix
flex-volume-plugin-dir
forward-services
framework-name
@ -219,16 +224,16 @@ google-json-key
grace-period
ha-domain
hairpin-mode
hard-pod-affinity-symmetric-weight
hard
hard-pod-affinity-symmetric-weight
healthz-bind-address
healthz-port
horizontal-pod-autoscaler-sync-period
host-ipc-sources
hostname-override
host-network-sources
host-pid-sources
host-port-endpoints
hostname-override
http-check-frequency
http-port
ignore-daemonsets
@ -241,6 +246,7 @@ image-pull-policy
image-service-endpoint
include-extended-apis
included-types-overrides
include-extended-apis
input-base
input-dirs
insecure-experimental-approve-all-kubelet-csrs-for-group
@ -273,10 +279,6 @@ kops-zones
kube-api-burst
kube-api-content-type
kube-api-qps
kube-master
kube-master
kube-master-url
kube-reserved
kubecfg-file
kubectl-path
kubelet-address
@ -298,6 +300,10 @@ kubelet-read-only-port
kubelet-root-dir
kubelet-sync-frequency
kubelet-timeout
kube-master
kube-master
kube-master-url
kube-reserved
kubernetes-service-node-port
label-columns
large-cluster-size-threshold
@ -324,6 +330,8 @@ master-os-distro
master-service-namespace
max-concurrency
max-connection-bytes-per-sec
maximum-dead-containers
maximum-dead-containers-per-container
max-log-age
max-log-backups
max-log-size
@ -332,8 +340,6 @@ max-outgoing-burst
max-outgoing-qps
max-pods
max-requests-inflight
maximum-dead-containers
maximum-dead-containers-per-container
mesos-authentication-principal
mesos-authentication-provider
mesos-authentication-secret-file
@ -347,15 +353,15 @@ mesos-launch-grace-period
mesos-master
mesos-sandbox-overlay
mesos-user
min-pr-number
min-request-timeout
min-resync-period
minimum-container-ttl-duration
minimum-image-ttl-duration
minion-max-log-age
minion-max-log-backups
minion-max-log-size
minion-path-override
min-pr-number
min-request-timeout
min-resync-period
namespace-sync-period
network-plugin
network-plugin-dir
@ -367,14 +373,20 @@ node-eviction-rate
node-instance-group
node-ip
node-labels
node-max-log-age
node-max-log-backups
node-max-log-size
node-monitor-grace-period
node-monitor-period
node-name
node-os-distro
node-path-override
node-startup-grace-period
node-status-update-frequency
node-sync-period
no-headers
non-masquerade-cidr
no-suggestions
num-nodes
oidc-ca-file
oidc-client-id
@ -383,7 +395,6 @@ oidc-issuer-url
oidc-username-claim
only-idl
oom-score-adj
out-version
outofdisk-transition-frequency
output-base
output-directory
@ -391,6 +402,7 @@ output-file-base
output-package
output-print-type
output-version
out-version
path-override
pod-cidr
pod-eviction-timeout
@ -413,6 +425,7 @@ proxy-logv
proxy-mode
proxy-port-range
public-address-override
pvclaimbinder-sync-period
pv-recycler-increment-timeout-nfs
pv-recycler-maximum-retry
pv-recycler-minimum-timeout-hostpath
@ -420,7 +433,6 @@ pv-recycler-minimum-timeout-nfs
pv-recycler-pod-template-filepath-hostpath
pv-recycler-pod-template-filepath-nfs
pv-recycler-timeout-increment-hostpath
pvclaimbinder-sync-period
read-only-port
really-crash-for-testing
reconcile-cidr
@ -524,9 +536,9 @@ test-timeout
tls-ca-file
tls-cert-file
tls-private-key-file
to-version
token-auth-file
ttl-keys-prefix
to-version
ttl-secs
type-src
udp-port


@ -31,11 +31,9 @@ import (
const (
StatusUnprocessableEntity = 422
StatusTooManyRequests = 429
// HTTP recommendations are for servers to define 5xx error codes
// for scenarios not covered by behavior. In this case, ServerTimeout
// is an indication that a transient server error has occurred and the
// client *should* retry, with an optional Retry-After header to specify
// the back off window.
// StatusServerTimeout is an indication that a transient server error has
// occurred and the client *should* retry, with an optional Retry-After
// header to specify the back off window.
StatusServerTimeout = 504
)
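
A minimal sketch (not from this diff) of the client behavior that comment describes, using only the standard library: retry on a 504 and honor an optional Retry-After header as the back off window.

```go
package clientretry

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetry retries a GET when the server answers 504 (StatusServerTimeout),
// waiting for the Retry-After window when the server provides one.
func getWithRetry(url string, attempts int) (*http.Response, error) {
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		if resp.StatusCode != http.StatusGatewayTimeout { // 504
			return resp, nil
		}
		// Transient server error: back off and try again.
		delay := 2 * time.Second
		if ra := resp.Header.Get("Retry-After"); ra != "" {
			if secs, convErr := strconv.Atoi(ra); convErr == nil {
				delay = time.Duration(secs) * time.Second
			}
		}
		resp.Body.Close()
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("%s still timing out after %d attempts", url, attempts)
}
```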


@ -2,8 +2,8 @@
"kind": "Node",
"apiVersion": "v1",
"metadata": {
"name": "e2e-test-wojtekt-minion-etd6",
"selfLink": "/api/v1/nodes/e2e-test-wojtekt-minion-etd6",
"name": "e2e-test-wojtekt-node-etd6",
"selfLink": "/api/v1/nodes/e2e-test-wojtekt-node-etd6",
"uid": "a7e89222-e8e5-11e4-8fde-42010af09327",
"resourceVersion": "379",
"creationTimestamp": "2015-04-22T11:49:39Z"


@ -331,7 +331,7 @@ func TestBadJSONRejection(t *testing.T) {
t.Errorf("Did not reject despite use of unknown type: %s", badJSONUnknownType)
}
/*badJSONKindMismatch := []byte(`{"kind": "Pod"}`)
if err2 := DecodeInto(badJSONKindMismatch, &Minion{}); err2 == nil {
if err2 := DecodeInto(badJSONKindMismatch, &Node{}); err2 == nil {
t.Errorf("Kind is set but doesn't match the object type: %s", badJSONKindMismatch)
}*/
}


@ -1205,7 +1205,7 @@ message NamespaceStatus {
optional string phase = 1;
}
// Node is a worker node in Kubernetes, formerly known as minion.
// Node is a worker node in Kubernetes.
// Each node will have a unique identifier in the cache (i.e. in etcd).
message Node {
// Standard object's metadata.


@ -2678,7 +2678,7 @@ type ResourceList map[ResourceName]resource.Quantity
// +genclient=true
// +nonNamespaced=true
// Node is a worker node in Kubernetes, formerly known as minion.
// Node is a worker node in Kubernetes.
// Each node will have a unique identifier in the cache (i.e. in etcd).
type Node struct {
unversioned.TypeMeta `json:",inline"`


@ -795,7 +795,7 @@ func (NamespaceStatus) SwaggerDoc() map[string]string {
}
var map_Node = map[string]string{
"": "Node is a worker node in Kubernetes, formerly known as minion. Each node will have a unique identifier in the cache (i.e. in etcd).",
"": "Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd).",
"metadata": "Standard object's metadata. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata",
"spec": "Spec defines the behavior of a node. http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status",
"status": "Most recently observed status of the node. Populated by the system. Read-only. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status",


@ -2885,7 +2885,7 @@ func (c *Cloud) updateInstanceSecurityGroupsForLoadBalancer(lb *elb.LoadBalancer
// Open the firewall from the load balancer to the instance
// We don't actually have a trivial way to know in advance which security group the instance is in
// (it is probably the minion security group, but we don't easily have that).
// (it is probably the node security group, but we don't easily have that).
// However, we _do_ have the list of security groups on the instance records.
// Map containing the changes we want to make; true to add, false to remove
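
A tiny hypothetical sketch (not the cloud provider's actual code) of the map shape that comment describes: a security group ID mapped to true for "add the permission" and false for "remove it".

```go
package sgchanges

import "fmt"

// applyChanges walks a map of security group ID -> desired access, calling the
// supplied add/remove functions, which stand in for the real EC2 API calls.
func applyChanges(changes map[string]bool, add, remove func(groupID string) error) error {
	for groupID, wantAccess := range changes {
		op := remove
		if wantAccess {
			op = add
		}
		if err := op(groupID); err != nil {
			return fmt.Errorf("updating security group %s: %v", groupID, err)
		}
	}
	return nil
}
```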


@ -2085,7 +2085,7 @@ func (gce *GCECloud) GetInstanceGroup(name string, zone string) (*compute.Instan
// Take a GCE instance 'hostname' and break it down to something that can be fed
// to the GCE API client library. Basically this means reducing 'kubernetes-
// minion-2.c.my-proj.internal' to 'kubernetes-minion-2' if necessary.
// node-2.c.my-proj.internal' to 'kubernetes-node-2' if necessary.
func canonicalizeInstanceName(name string) string {
ix := strings.Index(name, ".")
if ix != -1 {
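
The hunk is cut off by the diff; assuming nothing beyond the lines shown, a plausible completion of such a helper would simply truncate at the first dot:

```go
package gceutil

import "strings"

// canonicalizeInstanceNameSketch trims a fully qualified GCE hostname such as
// "kubernetes-node-2.c.my-proj.internal" down to "kubernetes-node-2".
func canonicalizeInstanceNameSketch(name string) string {
	if ix := strings.Index(name, "."); ix != -1 {
		return name[:ix]
	}
	return name
}
```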


@ -283,7 +283,7 @@ func (m *OVirtInstanceMap) ListSortedNames() []string {
return names
}
// List enumerates the set of minions instances known by the cloud provider
// List enumerates the set of node instances known by the cloud provider
func (v *OVirtCloud) List(filter string) ([]types.NodeName, error) {
instances, err := v.fetchAllInstances()
if err != nil {


@ -573,7 +573,7 @@ func Example_printPodWithWideFormat() {
NegotiatedSerializer: ns,
Client: nil,
}
nodeName := "kubernetes-minion-abcd"
nodeName := "kubernetes-node-abcd"
cmd := NewCmdRun(f, os.Stdin, os.Stdout, os.Stderr)
pod := &api.Pod{
ObjectMeta: api.ObjectMeta{
@ -600,7 +600,7 @@ func Example_printPodWithWideFormat() {
}
// Output:
// NAME READY STATUS RESTARTS AGE IP NODE
// test1 1/2 podPhase 6 10y 10.1.1.3 kubernetes-minion-abcd
// test1 1/2 podPhase 6 10y 10.1.1.3 kubernetes-node-abcd
}
func Example_printPodWithShowLabels() {
@ -613,7 +613,7 @@ func Example_printPodWithShowLabels() {
NegotiatedSerializer: ns,
Client: nil,
}
nodeName := "kubernetes-minion-abcd"
nodeName := "kubernetes-node-abcd"
cmd := NewCmdRun(f, os.Stdin, os.Stdout, os.Stderr)
pod := &api.Pod{
ObjectMeta: api.ObjectMeta{
@ -647,7 +647,7 @@ func Example_printPodWithShowLabels() {
}
func newAllPhasePodList() *api.PodList {
nodeName := "kubernetes-minion-abcd"
nodeName := "kubernetes-node-abcd"
return &api.PodList{
Items: []api.Pod{
{


@ -48,7 +48,7 @@ var (
describe_example = dedent.Dedent(`
# Describe a node
kubectl describe nodes kubernetes-minion-emt8.c.myproject.internal
kubectl describe nodes kubernetes-node-emt8.c.myproject.internal
# Describe a pod
kubectl describe pods/nginx


@ -15,5 +15,5 @@ limitations under the License.
*/
// Package registrytest provides tests for Registry implementations
// for storing Minions, Pods, Schedulers and Services.
// for storing Nodes, Pods, Schedulers and Services.
package registrytest // import "k8s.io/kubernetes/pkg/registry/registrytest"


@ -161,7 +161,7 @@ func TestBadJSONRejection(t *testing.T) {
t.Errorf("Did not reject despite use of unknown type: %s", badJSONUnknownType)
}
/*badJSONKindMismatch := []byte(`{"kind": "Pod"}`)
if err2 := DecodeInto(badJSONKindMismatch, &Minion{}); err2 == nil {
if err2 := DecodeInto(badJSONKindMismatch, &Node{}); err2 == nil {
t.Errorf("Kind is set but doesn't match the object type: %s", badJSONKindMismatch)
}*/
}


@ -38,20 +38,20 @@ func TestProxyTransport(t *testing.T) {
testTransport := &Transport{
Scheme: "http",
Host: "foo.com",
PathPrepend: "/proxy/minion/minion1:10250",
PathPrepend: "/proxy/node/node1:10250",
}
testTransport2 := &Transport{
Scheme: "https",
Host: "foo.com",
PathPrepend: "/proxy/minion/minion1:8080",
PathPrepend: "/proxy/node/node1:8080",
}
emptyHostTransport := &Transport{
Scheme: "https",
PathPrepend: "/proxy/minion/minion1:10250",
PathPrepend: "/proxy/node/node1:10250",
}
emptySchemeTransport := &Transport{
Host: "foo.com",
PathPrepend: "/proxy/minion/minion1:10250",
PathPrepend: "/proxy/node/node1:10250",
}
type Item struct {
input string
@ -67,120 +67,120 @@ func TestProxyTransport(t *testing.T) {
table := map[string]Item{
"normal": {
input: `<pre><a href="kubelet.log">kubelet.log</a><a href="/google.log">google.log</a></pre>`,
sourceURL: "http://myminion.com/logs/log.log",
sourceURL: "http://mynode.com/logs/log.log",
transport: testTransport,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/minion/minion1:10250/google.log">google.log</a></pre>`,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/node/node1:10250/google.log">google.log</a></pre>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
"full document": {
input: `<html><header></header><body><pre><a href="kubelet.log">kubelet.log</a><a href="/google.log">google.log</a></pre></body></html>`,
sourceURL: "http://myminion.com/logs/log.log",
sourceURL: "http://mynode.com/logs/log.log",
transport: testTransport,
output: `<html><header></header><body><pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/minion/minion1:10250/google.log">google.log</a></pre></body></html>`,
output: `<html><header></header><body><pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/node/node1:10250/google.log">google.log</a></pre></body></html>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
"trailing slash": {
input: `<pre><a href="kubelet.log">kubelet.log</a><a href="/google.log/">google.log</a></pre>`,
sourceURL: "http://myminion.com/logs/log.log",
sourceURL: "http://mynode.com/logs/log.log",
transport: testTransport,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/minion/minion1:10250/google.log/">google.log</a></pre>`,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/node/node1:10250/google.log/">google.log</a></pre>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
"content-type charset": {
input: `<pre><a href="kubelet.log">kubelet.log</a><a href="/google.log">google.log</a></pre>`,
sourceURL: "http://myminion.com/logs/log.log",
sourceURL: "http://mynode.com/logs/log.log",
transport: testTransport,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/minion/minion1:10250/google.log">google.log</a></pre>`,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/node/node1:10250/google.log">google.log</a></pre>`,
contentType: "text/html; charset=utf-8",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
"content-type passthrough": {
input: `<pre><a href="kubelet.log">kubelet.log</a><a href="/google.log">google.log</a></pre>`,
sourceURL: "http://myminion.com/logs/log.log",
sourceURL: "http://mynode.com/logs/log.log",
transport: testTransport,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="/google.log">google.log</a></pre>`,
contentType: "text/plain",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
"subdir": {
input: `<a href="kubelet.log">kubelet.log</a><a href="/google.log">google.log</a>`,
sourceURL: "http://myminion.com/whatever/apt/somelog.log",
sourceURL: "http://mynode.com/whatever/apt/somelog.log",
transport: testTransport2,
output: `<a href="kubelet.log">kubelet.log</a><a href="https://foo.com/proxy/minion/minion1:8080/google.log">google.log</a>`,
output: `<a href="kubelet.log">kubelet.log</a><a href="https://foo.com/proxy/node/node1:8080/google.log">google.log</a>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:8080/whatever/apt/somelog.log",
forwardedURI: "/proxy/node/node1:8080/whatever/apt/somelog.log",
},
"image": {
input: `<pre><img src="kubernetes.jpg"/><img src="/kubernetes_abs.jpg"/></pre>`,
sourceURL: "http://myminion.com/",
sourceURL: "http://mynode.com/",
transport: testTransport,
output: `<pre><img src="kubernetes.jpg"/><img src="http://foo.com/proxy/minion/minion1:10250/kubernetes_abs.jpg"/></pre>`,
output: `<pre><img src="kubernetes.jpg"/><img src="http://foo.com/proxy/node/node1:10250/kubernetes_abs.jpg"/></pre>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/",
forwardedURI: "/proxy/node/node1:10250/",
},
"abs": {
input: `<script src="http://google.com/kubernetes.js"/>`,
sourceURL: "http://myminion.com/any/path/",
sourceURL: "http://mynode.com/any/path/",
transport: testTransport,
output: `<script src="http://google.com/kubernetes.js"/>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/any/path/",
forwardedURI: "/proxy/node/node1:10250/any/path/",
},
"abs but same host": {
input: `<script src="http://myminion.com/kubernetes.js"/>`,
sourceURL: "http://myminion.com/any/path/",
input: `<script src="http://mynode.com/kubernetes.js"/>`,
sourceURL: "http://mynode.com/any/path/",
transport: testTransport,
output: `<script src="http://foo.com/proxy/minion/minion1:10250/kubernetes.js"/>`,
output: `<script src="http://foo.com/proxy/node/node1:10250/kubernetes.js"/>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/any/path/",
forwardedURI: "/proxy/node/node1:10250/any/path/",
},
"redirect rel": {
sourceURL: "http://myminion.com/redirect",
sourceURL: "http://mynode.com/redirect",
transport: testTransport,
redirect: "/redirected/target/",
redirectWant: "http://foo.com/proxy/minion/minion1:10250/redirected/target/",
forwardedURI: "/proxy/minion/minion1:10250/redirect",
redirectWant: "http://foo.com/proxy/node/node1:10250/redirected/target/",
forwardedURI: "/proxy/node/node1:10250/redirect",
},
"redirect abs same host": {
sourceURL: "http://myminion.com/redirect",
sourceURL: "http://mynode.com/redirect",
transport: testTransport,
redirect: "http://myminion.com/redirected/target/",
redirectWant: "http://foo.com/proxy/minion/minion1:10250/redirected/target/",
forwardedURI: "/proxy/minion/minion1:10250/redirect",
redirect: "http://mynode.com/redirected/target/",
redirectWant: "http://foo.com/proxy/node/node1:10250/redirected/target/",
forwardedURI: "/proxy/node/node1:10250/redirect",
},
"redirect abs other host": {
sourceURL: "http://myminion.com/redirect",
sourceURL: "http://mynode.com/redirect",
transport: testTransport,
redirect: "http://example.com/redirected/target/",
redirectWant: "http://example.com/redirected/target/",
forwardedURI: "/proxy/minion/minion1:10250/redirect",
forwardedURI: "/proxy/node/node1:10250/redirect",
},
"source contains the redirect already": {
input: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/minion/minion1:10250/google.log">google.log</a></pre>`,
input: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/node/node1:10250/google.log">google.log</a></pre>`,
sourceURL: "http://foo.com/logs/log.log",
transport: testTransport,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/minion/minion1:10250/google.log">google.log</a></pre>`,
output: `<pre><a href="kubelet.log">kubelet.log</a><a href="http://foo.com/proxy/node/node1:10250/google.log">google.log</a></pre>`,
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
"no host": {
input: "<html></html>",
sourceURL: "http://myminion.com/logs/log.log",
sourceURL: "http://mynode.com/logs/log.log",
transport: emptyHostTransport,
output: "<html></html>",
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
"no scheme": {
input: "<html></html>",
sourceURL: "http://myminion.com/logs/log.log",
sourceURL: "http://mynode.com/logs/log.log",
transport: emptySchemeTransport,
output: "<html></html>",
contentType: "text/html",
forwardedURI: "/proxy/minion/minion1:10250/logs/log.log",
forwardedURI: "/proxy/node/node1:10250/logs/log.log",
},
}


@ -1200,7 +1200,7 @@ message NamespaceStatus {
optional string phase = 1;
}
// Node is a worker node in Kubernetes, formerly known as minion.
// Node is a worker node in Kubernetes.
// Each node will have a unique identifier in the cache (i.e. in etcd).
message Node {
// Standard object's metadata.


@ -2666,7 +2666,7 @@ type ResourceList map[ResourceName]resource.Quantity
// +genclient=true
// +nonNamespaced=true
// Node is a worker node in Kubernetes, formerly known as minion.
// Node is a worker node in Kubernetes.
// Each node will have a unique identifier in the cache (i.e. in etcd).
type Node struct {
unversioned.TypeMeta `json:",inline"`


@ -794,7 +794,7 @@ func (NamespaceStatus) SwaggerDoc() map[string]string {
}
var map_Node = map[string]string{
"": "Node is a worker node in Kubernetes, formerly known as minion. Each node will have a unique identifier in the cache (i.e. in etcd).",
"": "Node is a worker node in Kubernetes.",
"metadata": "Standard object's metadata. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#metadata",
"spec": "Spec defines the behavior of a node. http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status",
"status": "Most recently observed status of the node. Populated by the system. Read-only. More info: http://releases.k8s.io/HEAD/docs/devel/api-conventions.md#spec-and-status",


@ -706,7 +706,7 @@ func testRollbackDeployment(f *framework.Framework) {
deploymentStrategyType := extensions.RollingUpdateDeploymentStrategyType
framework.Logf("Creating deployment %s", deploymentName)
d := newDeployment(deploymentName, deploymentReplicas, deploymentPodLabels, deploymentImageName, deploymentImage, deploymentStrategyType, nil)
createAnnotation := map[string]string{"action": "create", "author": "minion"}
createAnnotation := map[string]string{"action": "create", "author": "node"}
d.Annotations = createAnnotation
deploy, err := c.Extensions().Deployments(ns).Create(d)
Expect(err).NotTo(HaveOccurred())


@ -488,7 +488,7 @@ type ResourceUsagePerNode map[string]ResourceUsagePerContainer
func formatResourceUsageStats(nodeName string, containerStats ResourceUsagePerContainer) string {
// Example output:
//
// Resource usage for node "e2e-test-foo-minion-abcde":
// Resource usage for node "e2e-test-foo-node-abcde":
// container cpu(cores) memory(MB)
// "/" 0.363 2942.09
// "/docker-daemon" 0.088 521.80
@ -794,7 +794,7 @@ type NodesCPUSummary map[string]ContainersCPUSummary
func (r *ResourceMonitor) FormatCPUSummary(summary NodesCPUSummary) string {
// Example output for a node (the percentiles may differ):
// CPU usage of containers on node "e2e-test-foo-minion-0vj7":
// CPU usage of containers on node "e2e-test-foo-node-0vj7":
// container 5th% 50th% 90th% 95th%
// "/" 0.051 0.159 0.387 0.455
// "/runtime 0.000 0.000 0.146 0.166


@ -237,7 +237,7 @@ func (r *ResourceCollector) GetBasicCPUStats(containerName string) map[float64]f
func formatResourceUsageStats(containerStats framework.ResourceUsagePerContainer) string {
// Example output:
//
// Resource usage for node "e2e-test-foo-minion-abcde":
// Resource usage for node "e2e-test-foo-node-abcde":
// container cpu(cores) memory(MB)
// "/" 0.363 2942.09
// "/docker-daemon" 0.088 521.80
@ -255,7 +255,7 @@ func formatResourceUsageStats(containerStats framework.ResourceUsagePerContainer
func formatCPUSummary(summary framework.ContainersCPUSummary) string {
// Example output for a node (the percentiles may differ):
// CPU usage of containers on node "e2e-test-foo-minion-0vj7":
// CPU usage of containers on node "e2e-test-foo-node-0vj7":
// container 5th% 50th% 90th% 95th%
// "/" 0.051 0.159 0.387 0.455
// "/runtime 0.000 0.000 0.146 0.166


@ -15,22 +15,22 @@ Here is some representative output.
$ ./serve_hostnames
I0326 14:21:04.179893 11434 serve_hostnames.go:60] Starting serve_hostnames soak test with queries=10 and podsPerNode=1 upTo=1
I0326 14:21:04.507252 11434 serve_hostnames.go:85] Nodes found on this cluster:
I0326 14:21:04.507282 11434 serve_hostnames.go:87] 0: kubernetes-minion-5h4m.c.kubernetes-satnam.internal
I0326 14:21:04.507297 11434 serve_hostnames.go:87] 1: kubernetes-minion-9i4n.c.kubernetes-satnam.internal
I0326 14:21:04.507309 11434 serve_hostnames.go:87] 2: kubernetes-minion-d0yo.c.kubernetes-satnam.internal
I0326 14:21:04.507320 11434 serve_hostnames.go:87] 3: kubernetes-minion-jay1.c.kubernetes-satnam.internal
I0326 14:21:04.507282 11434 serve_hostnames.go:87] 0: kubernetes-node-5h4m.c.kubernetes-satnam.internal
I0326 14:21:04.507297 11434 serve_hostnames.go:87] 1: kubernetes-node-9i4n.c.kubernetes-satnam.internal
I0326 14:21:04.507309 11434 serve_hostnames.go:87] 2: kubernetes-node-d0yo.c.kubernetes-satnam.internal
I0326 14:21:04.507320 11434 serve_hostnames.go:87] 3: kubernetes-node-jay1.c.kubernetes-satnam.internal
I0326 14:21:04.507347 11434 serve_hostnames.go:95] Using namespace serve-hostnames-8145 for this test.
I0326 14:21:04.507363 11434 serve_hostnames.go:98] Creating service serve-hostnames-8145/serve-hostnames
I0326 14:21:04.559849 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-0-0 on node kubernetes-minion-5h4m.c.kubernetes-satnam.internal
I0326 14:21:04.605603 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-1-0 on node kubernetes-minion-9i4n.c.kubernetes-satnam.internal
I0326 14:21:04.662099 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-2-0 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal
I0326 14:21:04.707179 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-3-0 on node kubernetes-minion-jay1.c.kubernetes-satnam.internal
I0326 14:21:04.559849 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-0-0 on node kubernetes-node-5h4m.c.kubernetes-satnam.internal
I0326 14:21:04.605603 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-1-0 on node kubernetes-node-9i4n.c.kubernetes-satnam.internal
I0326 14:21:04.662099 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-2-0 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal
I0326 14:21:04.707179 11434 serve_hostnames.go:148] Creating pod serve-hostnames-8145/serve-hostname-3-0 on node kubernetes-node-jay1.c.kubernetes-satnam.internal
I0326 14:21:04.757646 11434 serve_hostnames.go:194] Waiting for the serve-hostname pods to be ready
I0326 14:23:31.125188 11434 serve_hostnames.go:211] serve-hostnames-8145/serve-hostname-0-0 is running
I0326 14:23:31.165984 11434 serve_hostnames.go:211] serve-hostnames-8145/serve-hostname-1-0 is running
I0326 14:25:22.213751 11434 serve_hostnames.go:211] serve-hostnames-8145/serve-hostname-2-0 is running
I0326 14:25:37.387257 11434 serve_hostnames.go:211] serve-hostnames-8145/serve-hostname-3-0 is running
W0326 14:25:39.243813 11434 serve_hostnames.go:265] No response from pod serve-hostname-3-0 on node kubernetes-minion-jay1.c.kubernetes-satnam.internal at iteration 0
W0326 14:25:39.243813 11434 serve_hostnames.go:265] No response from pod serve-hostname-3-0 on node kubernetes-node-jay1.c.kubernetes-satnam.internal at iteration 0
I0326 14:25:39.243844 11434 serve_hostnames.go:269] Iteration 0 took 1.814483599s for 40 queries (22.04 QPS)
I0326 14:25:39.243871 11434 serve_hostnames.go:182] Cleaning up pods
I0326 14:25:39.434619 11434 serve_hostnames.go:130] Cleaning up service serve-hostnames-8145/server-hostnames
@ -45,20 +45,20 @@ The number of iterations to perform for issuing queries can be changed from the
$ ./serve_hostnames --up_to=3 --pods_per_node=2
I0326 14:27:27.584378 11808 serve_hostnames.go:60] Starting serve_hostnames soak test with queries=10 and podsPerNode=2 upTo=3
I0326 14:27:27.913713 11808 serve_hostnames.go:85] Nodes found on this cluster:
I0326 14:27:27.913774 11808 serve_hostnames.go:87] 0: kubernetes-minion-5h4m.c.kubernetes-satnam.internal
I0326 14:27:27.913800 11808 serve_hostnames.go:87] 1: kubernetes-minion-9i4n.c.kubernetes-satnam.internal
I0326 14:27:27.913825 11808 serve_hostnames.go:87] 2: kubernetes-minion-d0yo.c.kubernetes-satnam.internal
I0326 14:27:27.913846 11808 serve_hostnames.go:87] 3: kubernetes-minion-jay1.c.kubernetes-satnam.internal
I0326 14:27:27.913774 11808 serve_hostnames.go:87] 0: kubernetes-node-5h4m.c.kubernetes-satnam.internal
I0326 14:27:27.913800 11808 serve_hostnames.go:87] 1: kubernetes-node-9i4n.c.kubernetes-satnam.internal
I0326 14:27:27.913825 11808 serve_hostnames.go:87] 2: kubernetes-node-d0yo.c.kubernetes-satnam.internal
I0326 14:27:27.913846 11808 serve_hostnames.go:87] 3: kubernetes-node-jay1.c.kubernetes-satnam.internal
I0326 14:27:27.913904 11808 serve_hostnames.go:95] Using namespace serve-hostnames-4997 for this test.
I0326 14:27:27.913931 11808 serve_hostnames.go:98] Creating service serve-hostnames-4997/serve-hostnames
I0326 14:27:27.969083 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-0-0 on node kubernetes-minion-5h4m.c.kubernetes-satnam.internal
I0326 14:27:28.020133 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-0-1 on node kubernetes-minion-5h4m.c.kubernetes-satnam.internal
I0326 14:27:28.070054 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-1-0 on node kubernetes-minion-9i4n.c.kubernetes-satnam.internal
I0326 14:27:28.118641 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-1-1 on node kubernetes-minion-9i4n.c.kubernetes-satnam.internal
I0326 14:27:28.168786 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-2-0 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal
I0326 14:27:28.214730 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-2-1 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal
I0326 14:27:28.261685 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-3-0 on node kubernetes-minion-jay1.c.kubernetes-satnam.internal
I0326 14:27:28.320224 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-3-1 on node kubernetes-minion-jay1.c.kubernetes-satnam.internal
I0326 14:27:27.969083 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-0-0 on node kubernetes-node-5h4m.c.kubernetes-satnam.internal
I0326 14:27:28.020133 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-0-1 on node kubernetes-node-5h4m.c.kubernetes-satnam.internal
I0326 14:27:28.070054 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-1-0 on node kubernetes-node-9i4n.c.kubernetes-satnam.internal
I0326 14:27:28.118641 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-1-1 on node kubernetes-node-9i4n.c.kubernetes-satnam.internal
I0326 14:27:28.168786 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-2-0 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal
I0326 14:27:28.214730 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-2-1 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal
I0326 14:27:28.261685 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-3-0 on node kubernetes-node-jay1.c.kubernetes-satnam.internal
I0326 14:27:28.320224 11808 serve_hostnames.go:148] Creating pod serve-hostnames-4997/serve-hostname-3-1 on node kubernetes-node-jay1.c.kubernetes-satnam.internal
I0326 14:27:28.387007 11808 serve_hostnames.go:194] Waiting for the serve-hostname pods to be ready
I0326 14:28:28.969149 11808 serve_hostnames.go:211] serve-hostnames-4997/serve-hostname-0-0 is running
I0326 14:28:29.010376 11808 serve_hostnames.go:211] serve-hostnames-4997/serve-hostname-0-1 is running
@ -68,11 +68,11 @@ I0326 14:30:00.850461 11808 serve_hostnames.go:211] serve-hostnames-4997/serve
I0326 14:30:00.891559 11808 serve_hostnames.go:211] serve-hostnames-4997/serve-hostname-2-1 is running
I0326 14:30:00.932829 11808 serve_hostnames.go:211] serve-hostnames-4997/serve-hostname-3-0 is running
I0326 14:30:00.973941 11808 serve_hostnames.go:211] serve-hostnames-4997/serve-hostname-3-1 is running
W0326 14:30:04.726582 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-0 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal at iteration 0
W0326 14:30:04.726658 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-1 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal at iteration 0
W0326 14:30:04.726582 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-0 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal at iteration 0
W0326 14:30:04.726658 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-1 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal at iteration 0
I0326 14:30:04.726696 11808 serve_hostnames.go:269] Iteration 0 took 3.711080213s for 80 queries (21.56 QPS)
W0326 14:30:08.267297 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-0 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal at iteration 1
W0326 14:30:08.267365 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-1 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal at iteration 1
W0326 14:30:08.267297 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-0 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal at iteration 1
W0326 14:30:08.267365 11808 serve_hostnames.go:265] No response from pod serve-hostname-2-1 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal at iteration 1
I0326 14:30:08.267404 11808 serve_hostnames.go:269] Iteration 1 took 3.540635303s for 80 queries (22.59 QPS)
I0326 14:30:11.971349 11808 serve_hostnames.go:269] Iteration 2 took 3.703884372s for 80 queries (21.60 QPS)
I0326 14:30:11.971425 11808 serve_hostnames.go:182] Cleaning up pods
@ -98,20 +98,20 @@ pod on node 3 returned 12 responses and the pod on node 2 did not respond at all
$ ./serve_hostnames --v=4
I0326 14:33:26.020917 12099 serve_hostnames.go:60] Starting serve_hostnames soak test with queries=10 and podsPerNode=1 upTo=1
I0326 14:33:26.365201 12099 serve_hostnames.go:85] Nodes found on this cluster:
I0326 14:33:26.365260 12099 serve_hostnames.go:87] 0: kubernetes-minion-5h4m.c.kubernetes-satnam.internal
I0326 14:33:26.365288 12099 serve_hostnames.go:87] 1: kubernetes-minion-9i4n.c.kubernetes-satnam.internal
I0326 14:33:26.365313 12099 serve_hostnames.go:87] 2: kubernetes-minion-d0yo.c.kubernetes-satnam.internal
I0326 14:33:26.365334 12099 serve_hostnames.go:87] 3: kubernetes-minion-jay1.c.kubernetes-satnam.internal
I0326 14:33:26.365260 12099 serve_hostnames.go:87] 0: kubernetes-node-5h4m.c.kubernetes-satnam.internal
I0326 14:33:26.365288 12099 serve_hostnames.go:87] 1: kubernetes-node-9i4n.c.kubernetes-satnam.internal
I0326 14:33:26.365313 12099 serve_hostnames.go:87] 2: kubernetes-node-d0yo.c.kubernetes-satnam.internal
I0326 14:33:26.365334 12099 serve_hostnames.go:87] 3: kubernetes-node-jay1.c.kubernetes-satnam.internal
I0326 14:33:26.365392 12099 serve_hostnames.go:95] Using namespace serve-hostnames-1631 for this test.
I0326 14:33:26.365419 12099 serve_hostnames.go:98] Creating service serve-hostnames-1631/serve-hostnames
I0326 14:33:26.423927 12099 serve_hostnames.go:118] Service create serve-hostnames-1631/server-hostnames took 58.473361ms
I0326 14:33:26.423981 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-0-0 on node kubernetes-minion-5h4m.c.kubernetes-satnam.internal
I0326 14:33:26.423981 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-0-0 on node kubernetes-node-5h4m.c.kubernetes-satnam.internal
I0326 14:33:26.480185 12099 serve_hostnames.go:168] Pod create serve-hostnames-1631/serve-hostname-0-0 request took 56.178906ms
I0326 14:33:26.480271 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-1-0 on node kubernetes-minion-9i4n.c.kubernetes-satnam.internal
I0326 14:33:26.480271 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-1-0 on node kubernetes-node-9i4n.c.kubernetes-satnam.internal
I0326 14:33:26.534300 12099 serve_hostnames.go:168] Pod create serve-hostnames-1631/serve-hostname-1-0 request took 53.981761ms
I0326 14:33:26.534396 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-2-0 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal
I0326 14:33:26.534396 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-2-0 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal
I0326 14:33:26.590188 12099 serve_hostnames.go:168] Pod create serve-hostnames-1631/serve-hostname-2-0 request took 55.752115ms
I0326 14:33:26.590222 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-3-0 on node kubernetes-minion-jay1.c.kubernetes-satnam.internal
I0326 14:33:26.590222 12099 serve_hostnames.go:148] Creating pod serve-hostnames-1631/serve-hostname-3-0 on node kubernetes-node-jay1.c.kubernetes-satnam.internal
I0326 14:33:26.650024 12099 serve_hostnames.go:168] Pod create serve-hostnames-1631/serve-hostname-3-0 request took 59.781614ms
I0326 14:33:26.650083 12099 serve_hostnames.go:194] Waiting for the serve-hostname pods to be ready
I0326 14:33:32.776651 12099 serve_hostnames.go:211] serve-hostnames-1631/serve-hostname-0-0 is running
@ -161,7 +161,7 @@ I0326 14:35:05.607126 12099 serve_hostnames.go:249] Proxy call in namespace se
I0326 14:35:05.607164 12099 serve_hostnames.go:258] serve-hostname-3-0: 12
I0326 14:35:05.607176 12099 serve_hostnames.go:258] serve-hostname-1-0: 10
I0326 14:35:05.607186 12099 serve_hostnames.go:258] serve-hostname-0-0: 18
W0326 14:35:05.607199 12099 serve_hostnames.go:265] No response from pod serve-hostname-2-0 on node kubernetes-minion-d0yo.c.kubernetes-satnam.internal at iteration 0
W0326 14:35:05.607199 12099 serve_hostnames.go:265] No response from pod serve-hostname-2-0 on node kubernetes-node-d0yo.c.kubernetes-satnam.internal at iteration 0
I0326 14:35:05.607211 12099 serve_hostnames.go:269] Iteration 0 took 1.774856469s for 40 queries (22.54 QPS)
I0326 14:35:05.607236 12099 serve_hostnames.go:182] Cleaning up pods
I0326 14:35:05.797893 12099 serve_hostnames.go:130] Cleaning up service serve-hostnames-1631/server-hostnames