Commit 6092df1095

Automatic merge from submit-queue (batch tested with PRs 61818, 61800). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add CRI container log format support back for elastic search.

The CRI container log format support was removed accidentally in https://github.com/kubernetes/kubernetes/pull/58525. This PR adds that back. I've tested it, and it works:

```
SSSSS
------------------------------
[sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch]
  should check that logs from containers are ingested into Elasticsearch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/logging/elasticsearch/basic.go:39
[BeforeEach] [sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:141
STEP: Creating a kubernetes client
Mar 28 08:09:01.724: INFO: >>> kubeConfig: /home/lantaol/.kube/config
STEP: Building a namespace api object
Mar 28 08:09:02.952: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/logging/elasticsearch/basic.go:32
[It] should check that logs from containers are ingested into Elasticsearch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/logging/elasticsearch/basic.go:39
Mar 28 08:09:02.988: INFO: Checking the Elasticsearch service exists.
Mar 28 08:09:03.025: INFO: Checking to make sure the Elasticsearch pods are running
Mar 28 08:09:03.066: INFO: Checking to make sure we are talking to an Elasticsearch service.
Mar 28 08:09:03.176: INFO: Checking health of Elasticsearch service.
Mar 28 08:09:03.299: INFO: Starting repeating logging pod synthlogger
STEP: Waiting for logs to ingest
Mar 28 08:09:17.420: INFO: Sending a search request to Elasticsearch with the following query: kubernetes.pod_name:synthlogger AND kubernetes.namespace_name:e2e-tests-es-logging-pqlx7
Mar 28 08:09:27.420: INFO: Sending a search request to Elasticsearch with the following query: kubernetes.pod_name:synthlogger AND kubernetes.namespace_name:e2e-tests-es-logging-pqlx7
Mar 28 08:09:37.420: INFO: Sending a search request to Elasticsearch with the following query: kubernetes.pod_name:synthlogger AND kubernetes.namespace_name:e2e-tests-es-logging-pqlx7
Mar 28 08:09:47.420: INFO: Sending a search request to Elasticsearch with the following query: kubernetes.pod_name:synthlogger AND kubernetes.namespace_name:e2e-tests-es-logging-pqlx7
Mar 28 08:09:57.420: INFO: Sending a search request to Elasticsearch with the following query: kubernetes.pod_name:synthlogger AND kubernetes.namespace_name:e2e-tests-es-logging-pqlx7
Mar 28 08:10:07.420: INFO: Sending a search request to Elasticsearch with the following query: kubernetes.pod_name:synthlogger AND kubernetes.namespace_name:e2e-tests-es-logging-pqlx7
[AfterEach] [sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:142
Mar 28 08:10:07.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-tests-es-logging-pqlx7" for this suite.
Mar 28 08:10:57.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 28 08:11:00.046: INFO: namespace: e2e-tests-es-logging-pqlx7, resource: bindings, ignored listing per whitelist
Mar 28 08:11:00.338: INFO: namespace e2e-tests-es-logging-pqlx7 deletion completed in 52.693713026s

• [SLOW TEST:118.614 seconds]
[sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/common/framework.go:23
  should check that logs from containers are ingested into Elasticsearch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/logging/elasticsearch/basic.go:39
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Mar 28 08:11:00.346: INFO: Running AfterSuite actions on all node
Mar 28 08:11:00.346: INFO: Running AfterSuite actions on node 1

Ran 1 of 845 Specs in 123.981 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 844 Skipped
PASS

Ginkgo ran 1 suite in 2m4.323020647s
Test Suite Passed
2018/03/28 08:11:00 process.go:152: Step './hack/ginkgo-e2e.sh --ginkgo.focus=Cluster\slevel\slogging\susing\sElasticsearch' finished in 2m5.943972428s
2018/03/28 08:11:00 e2e.go:83: Done
```

Mark 1.10, because this is a regression for CRI container runtimes in 1.10. The original support was added in 1.9. https://github.com/kubernetes/kubernetes/pull/54777

**Release note**:

```release-note
none
```
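For context on what "CRI container log format" means here: CRI runtimes write container logs as plain text lines of the form `<RFC3339 timestamp> <stream> <P|F> <message>`, whereas the Docker json-file driver produces JSON-encoded lines, so the fluentd-elasticsearch pipeline needs a parser that accepts both. The fragment below is only an illustrative sketch of such a dual-format fluentd source; the ConfigMap name and file key are placeholders, and the real addon config under cluster/addons/fluentd-elasticsearch/ may differ from this in its details.

```yaml
# Illustrative only: a fluentd source that understands both the Docker JSON
# log format and the CRI plain-text log format. Names here are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-es-config-example   # hypothetical name for this sketch
  namespace: kube-system
data:
  containers.input.conf: |-
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag kubernetes.*
      read_from_head true
      # Try Docker's JSON-per-line format first, then fall back to the CRI
      # format: "<RFC3339 time> <stdout|stderr> <P|F> <message>", e.g.
      # 2018-03-28T08:09:17.420123456Z stdout F synthetic log line
      format multi_format
      <pattern>
        format json
        time_key time
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </pattern>
      <pattern>
        format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
        time_format %Y-%m-%dT%H:%M:%S.%N%:z
      </pattern>
    </source>
```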
The cluster/addons directory at this commit contains:

- addon-manager
- calico-policy-controller
- cluster-loadbalancing
- cluster-monitoring
- dashboard
- device-plugins/nvidia-gpu
- dns
- dns-horizontal-autoscaler
- etcd-empty-dir-cleanup
- fluentd-elasticsearch
- fluentd-gcp
- ip-masq-agent
- istio
- kube-proxy
- metadata-agent
- metadata-proxy
- metrics-server
- node-problem-detector
- python-image
- rbac
- storage-class
- BUILD
- README.md
# Legacy Cluster add-ons
For more information on add-ons see the documentation.
## Overview
Cluster add-ons are resources like Services and Deployments (with pods) that are shipped with the Kubernetes binaries and are considered an inherent part of the Kubernetes clusters.
There are currently two classes of add-ons:
- Add-ons that will be reconciled.
- Add-ons that will be created if they don't exist.
More details can be found in addon-manager/README.md; a brief illustration follows below.
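To make the two classes concrete: the addon-manager decides how to treat a resource based on its `addonmanager.kubernetes.io/mode` label. The sketch below is illustrative only (the ConfigMap name and data are invented); see addon-manager/README.md for the authoritative behaviour.

```yaml
# Invented example add-on object, shown only to illustrate the two classes.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-addon-config        # placeholder name
  namespace: kube-system
  labels:
    # Reconcile: periodically forced back to the state checked into the addons dir.
    # Use "EnsureExists" for add-ons that should only be created when missing and
    # are otherwise left untouched (for example, user-editable defaults).
    addonmanager.kubernetes.io/mode: Reconcile
data:
  example.setting: "true"
```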
## Cooperating Horizontal / Vertical Auto-Scaling with "reconcile class addons"

"Reconcile" class add-ons are periodically reconciled back to the original state given by the initial config. To keep Horizontal / Vertical Auto-scaling functional, the related fields in the config should be left unset. More specifically, leave `replicas` in `ReplicationController` / `Deployment` / `ReplicaSet` unset for Horizontal Scaling, and leave `resources` for containers unset for Vertical Scaling. The periodic reconcile won't clobber these fields, so they can be managed by the Horizontal / Vertical Auto-scaler, as in the sketch below.
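A minimal sketch of what that looks like in practice, assuming an invented `example-addon` Deployment and a placeholder image: `replicas` and the container `resources` are deliberately left out so the periodic reconcile never fights the autoscalers.

```yaml
# Illustrative Reconcile-class add-on; name and image are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-addon
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # spec.replicas is intentionally omitted so the Horizontal Pod Autoscaler
  # can own the replica count.
  selector:
    matchLabels:
      k8s-app: example-addon
  template:
    metadata:
      labels:
        k8s-app: example-addon
    spec:
      containers:
      - name: example-addon
        image: registry.example.com/example-addon:1.0  # placeholder image
        # "resources" is intentionally omitted so a Vertical Autoscaler can set
        # requests/limits without being clobbered by the periodic reconcile.
```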
## Add-on naming

The suggested naming for most resources is `<basename>`, with no version number. Resources like `Pod`, `ReplicationController` and `DaemonSet` are exceptions: a `Pod` is hard to update because many of its fields are immutable, and for a `ReplicationController` or `DaemonSet` an in-place update may not trigger the underlying pods to be re-created. You probably need to change their names during an update to trigger a complete deletion and creation; the sketch below illustrates this.
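As an invented illustration of the exception case, a `DaemonSet` add-on could carry a version suffix in its name, so that an upgrade bumps the suffix (for example to `example-agent-v2`) and thereby forces the old object and its pods to be deleted and re-created. Most other add-on resources would simply use the plain basename, such as `example-agent`, with no version number.

```yaml
# Invented example: a version-suffixed DaemonSet name used to force full
# re-creation on upgrade. All names and the image are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent-v1            # bump to example-agent-v2 on upgrade
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: example-agent
  template:
    metadata:
      labels:
        k8s-app: example-agent
    spec:
      containers:
      - name: example-agent
        image: registry.example.com/example-agent:1.0  # placeholder image
```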