k3s/cluster/addons/fluentd-gcp
Kubernetes Submit Queue 888546c325
Merge pull request #68029 from neolit123/fluentd-owners
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

cluster/addons: add labels to fluentd owner files

**What this PR does / why we need it**:
This PR adds SIG labels to the fluentd OWNERS files:
- cluster/addons/fluentd-elasticsearch/OWNERS
- cluster/addons/fluentd-gcp/OWNERS

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:
Let me know if the labels need adjustment.

**Release note**:

```release-note
NONE
```

/assign @roberthbailey @mikedanese 
/cc @timothysc 
/sig gcp
/sig instrumentation
/kind cleanup
2018-09-02 12:51:38 -07:00
| File | Last commit | Last updated |
|------|-------------|--------------|
| fluentd-gcp-image | Update docs for user-guide | 2017-06-27 12:21:49 +08:00 |
| podsecuritypolicies | Use runtime/default as default seccomp profile for unprivileged PodSecurityPolicy | 2018-05-15 09:39:37 -07:00 |
| OWNERS | cluster/addons: add labels to fluentd owner files | 2018-08-30 00:38:08 +03:00 |
| README.md | Review #1 | 2018-02-22 09:59:16 +01:00 |
| event-exporter.yaml | Bump version of event-exporter. | 2018-07-13 13:20:58 +02:00 |
| fluentd-gcp-configmap-old.yaml | remove rescheduler | 2018-08-22 11:49:14 +08:00 |
| fluentd-gcp-configmap.yaml | remove rescheduler | 2018-08-22 11:49:14 +08:00 |
| fluentd-gcp-ds-sa.yaml | Fix setting resources in fluentd-gcp plugin | 2017-11-22 12:40:50 +01:00 |
| fluentd-gcp-ds.yaml | Put fluentd back to host network | 2018-08-30 10:44:04 +02:00 |
| scaler-deployment.yaml | Fix parameter for fluentd-gcp-scaler | 2018-08-16 16:18:51 +02:00 |
| scaler-policy.yaml | Enable scaling fluentd-gcp resources using ScalingPolicy. | 2018-02-09 14:33:33 +01:00 |
| scaler-rbac.yaml | Enable scaling fluentd-gcp resources using ScalingPolicy. | 2018-02-09 14:33:33 +01:00 |

README.md

Stackdriver Logging Agent
=========================

Stackdriver Logging Agent is a DaemonSet that spawns a pod on each node. Each pod reads the logs generated by the kubelet, the container runtime, and the containers, and sends them to Stackdriver. Once exported to Stackdriver, logs can be searched, viewed, and analyzed.

Learn more at: https://kubernetes.io/docs/tasks/debug-application-cluster/logging-stackdriver
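
Whether the agent is actually running on every node can be checked by listing the DaemonSet's pods along with the nodes they were scheduled to (a quick check, not part of the original README, using the same k8s-app=fluentd-gcp label the Troubleshooting section below relies on):

```
$ kubectl get pods -n kube-system -l k8s-app=fluentd-gcp -o wide
```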

Troubleshooting
---------------

In Kubernetes clusters running version 1.10.0 or later, the fluentd-gcp DaemonSet can be manually scaled. This is useful, for example, when applications running in the cluster send a large volume of logs (over 100 kB/s), causing fluentd-gcp to fail with OutOfMemory errors. Conversely, if the applications generate only a few logs, it may be useful to reduce the resources consumed by fluentd-gcp, making them available to other applications. To learn more about Kubernetes resource requests and limits, see the official documentation (CPU, memory). The resources requested by fluentd-gcp on every node in the cluster can be fetched by running the following command:

```
$ kubectl get ds -n kube-system -l k8s-app=fluentd-gcp \
-o custom-columns=NAME:.metadata.name,\
CPU_REQUEST:.spec.template.spec.containers[].resources.requests.cpu,\
MEMORY_REQUEST:.spec.template.spec.containers[].resources.requests.memory,\
MEMORY_LIMIT:.spec.template.spec.containers[].resources.limits.memory
```

This will display an output similar to the following:

```
NAME                  CPU_REQUEST   MEMORY_REQUEST   MEMORY_LIMIT
fluentd-gcp-v2.0.15   100m          200Mi            300Mi
```
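
Note that `containers[]` in the custom-columns expression picks the first container in the pod template. Should the template ever carry more than one container, the fluentd-gcp container can be targeted by name with a JSONPath filter instead (a sketch, not from the original README; the container name fluentd-gcp matches the one used in the ScalingPolicy below):

```
$ kubectl get ds -n kube-system -l k8s-app=fluentd-gcp \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[?(@.name=="fluentd-gcp")].resources.requests.cpu}{"\n"}{end}'
```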

In order to change these values, a ScalingPolicy needs to be defined. Currently, only base values are supported (there is no automatic scaling). The ScalingPolicy can be created using kubectl, e.g. to set the CPU request to 101m, the memory request to 150Mi, and the memory limit to 400Mi:

```
$ cat <<EOF | kubectl apply -f -
apiVersion: scalingpolicy.kope.io/v1alpha1
kind: ScalingPolicy
metadata:
  name: fluentd-gcp-scaling-policy
  namespace: kube-system
spec:
  containers:
  - name: fluentd-gcp
    resources:
      requests:
      - resource: cpu
        base: 101m
      - resource: memory
        base: 150Mi
      limits:
      - resource: memory
        base: 400Mi
EOF
```
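
As a quick sanity check (not part of the original instructions), the stored policy can be printed back, and the `kubectl get ds` command above re-run once the fluentd-gcp-scaler has applied it:

```
$ kubectl get scalingpolicies.scalingpolicy.kope.io -n kube-system \
    fluentd-gcp-scaling-policy -o yaml
```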

To remove the override and go back to the GKE-provided defaults, simply delete the ScalingPolicy:

```
$ kubectl delete -n kube-system scalingpolicies.scalingpolicy.kope.io/fluentd-gcp-scaling-policy
```
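
The scaler then reverts the DaemonSet to the defaults; this can be confirmed by fetching the resource requests again (the same command as above, trimmed here to the CPU column):

```
$ kubectl get ds -n kube-system -l k8s-app=fluentd-gcp \
    -o custom-columns=NAME:.metadata.name,CPU_REQUEST:.spec.template.spec.containers[].resources.requests.cpu
```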
