Automatic merge from submit-queue
Admit critical pods under resource pressure
And evict critical pods that are not static.
Depends on #40952.
For #40573
Automatic merge from submit-queue (batch tested with PRs 41994, 41969, 41997, 40952, 40576)
Updating kubectl to send delete requests with orphanDependents=false if --cascade is true
Ref https://github.com/kubernetes/kubernetes/issues/40568#38897
Updating kubectl to always set `DeleteOptions.orphanDependents=false` when deleting a resource with `--cascade=true`.
This is primarily for federation where we want to use server side cascading deletion.
Impact on Kubernetes: kubectl will do another GET after sending a DELETE and wait until the resource is actually deleted. This can have an impact if the resource has a finalizer: kubectl will wait until the finalizer is removed and the resource is deleted, which is the right thing to do but a notable change in behavior.
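For example (illustrative resource name; the polling behavior is as described above):
```
# With --cascade=true (the default), kubectl now sends orphanDependents=false
# in the DeleteOptions and then GETs the resource until it is actually gone.
kubectl delete deployment my-deployment --cascade=true
```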
cc @caesarxuchao @lavalamp @smarterclayton @kubernetes/sig-federation-pr-reviews @kubernetes/sig-cli-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41994, 41969, 41997, 40952, 40576)
Guaranteed admission for Critical Pods
This is the first step in implementing node-level preemption for critical pods.
It defines the AdmissionFailureHandler interface, which allows callers, like the kubelet, to define how failed predicates are handled, and take steps to correct failures if necessary.
In the kubelet's implementation, if the pod being admitted is critical and the only failed predicates are InsufficientResourceErrors, it preempts (not yet implemented) other pods to allow admission of the critical pod.
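A minimal sketch of the shape of such an interface (package, names, and the exact signature are illustrative assumptions, not necessarily what the PR adds):
```go
package lifecycle

import "k8s.io/api/core/v1" // illustrative import; the original code used the in-tree API package

// PredicateFailureReason is a stand-in for the failure reasons returned by the
// scheduler predicates the kubelet runs at admission time.
type PredicateFailureReason interface {
	GetReason() string
}

// AdmissionFailureHandler lets the caller (e.g. the kubelet) decide how to react
// when admission predicates fail, for example by preempting other pods so that
// a critical pod can fit.
type AdmissionFailureHandler interface {
	// HandleAdmissionFailure returns whether the pod can be admitted after the
	// handler has run, plus any failure reasons it could not resolve.
	HandleAdmissionFailure(pod *v1.Pod, failureReasons []PredicateFailureReason) (bool, []PredicateFailureReason, error)
}
```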
cc: @vishh
Automatic merge from submit-queue
Add namespaced role to inspect particular configmap for delegated authentication
Builds on https://github.com/kubernetes/kubernetes/pull/41814 and https://github.com/kubernetes/kubernetes/pull/41922 (those are already lgtm'ed) with the ultimate goal of making an extension API server zero-config for "normal" authentication cases.
This part creates a namespaced role in `kube-system` that can *only* read the configmap which backs the delegated authentication check. When a cluster-admin grants the SA running the extension API server the power to run delegated authentication checks, they should also bind this role in this namespace.
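For illustration, the namespaced role described here would look roughly like this (the role name is an assumption, not necessarily what the PR creates):
```yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: extension-apiserver-authentication-reader
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["extension-apiserver-authentication"]
  verbs: ["get"]
```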
@sttts Should we add a flag to aggregated API servers to indicate they want to look this up so they can crashloop on startup? The alternative is sometimes having it and sometimes not. I guess we could try to key on explicit "disable front-proxy" which may make more sense.
@kubernetes/sig-api-machinery-misc
@ncdc I spoke to @liggitt about this before he left and he was ok in concept. Can you take a look at the details?
Automatic merge from submit-queue (batch tested with PRs 41857, 41864, 40522, 41835, 41991)
kubectl: Allow 'drain --force' to remove orphaned pods
If the managing resource of a given pod (e.g. DaemonSet/ReplicaSet/etc) is deleted (effectively orphaning the pod), and ``kubectl drain --force`` is invoked on the node hosting the pod, the command would fail with an error indicating that the managing resource was not found. This PR reduces the error to a warning if ``--force`` is specified, allowing nodes with orphaned pods to be drained.
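For example (illustrative node name):
```
kubectl drain node-1 --force --ignore-daemonsets
```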
Reference: https://bugzilla.redhat.com/show_bug.cgi?id=1424678
cc: @derekwaynecarr
```release-note
Allow drain --force to remove pods whose managing resource is deleted.
```
Automatic merge from submit-queue (batch tested with PRs 41814, 41922, 41957, 41406, 41077)
add kubectl can-i to see if you can perform an action
Adds `kubectl auth can-i <verb> <resource> [<name>]` so that a user can see if they are allowed to perform an action.
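Illustrative usage:
```
kubectl auth can-i create deployments
kubectl auth can-i get pods my-pod
```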
@kubernetes/sig-cli-pr-reviews @fabianofranz
This particular command satisfies the immediate need of knowing if you can perform an action without trying that action. When using RBAC in a script that is adding permissions, there is a lag between adding the permission and the permission being realized in the RBAC cache. As a user on the CLI, you almost never see it, but as a script adding a binding and then using that new power, you hit it quite often.
There are natural follow-ons in the same area (hence the `auth` subcommand) to figure out whether someone else can perform an action, what actions you can perform in total, and who can perform a given action. "Can someone else do this" is an API we already have, "what can I do" was a proposed API a while back and a very useful one for interfaces, and "who can" is a common question when administering a namespace.
Automatic merge from submit-queue (batch tested with PRs 41814, 41922, 41957, 41406, 41077)
pv_controller: Do not report exponential backoff as error.
It's not an error when a recycle/delete/provision operation cannot be started because it failed recently; it will be restarted automatically when the backoff expires.
This just pollutes logs without any useful information:
```
E0214 08:00:30.428073 77288 pv_controller.go:1410] error scheduling operaion "delete-pvc-1fa0e8b4-f2b5-11e6-a8bb-fa163ecb84eb[1fbd52ee-f2b5-11e6-a8bb-fa163ecb84eb]": Failed to create operation with name "delete-pvc-1fa0e8b4-f2b5-11e6-a8bb-fa163ecb84eb[1fbd52ee-f2b5-11e6-a8bb-fa163ecb84eb]". An operation with that name failed at 2017-02-14 08:00:15.631133152 -0500 EST. No retries permitted until 2017-02-14 08:00:31.631133152 -0500 EST (16s). Last error: "Cannot delete the volume \"11a4faea-bfc7-4713-88b3-dec492480dba\", it's still attached to a node".
```
```release-note
NONE
```
@kubernetes/sig-storage-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41814, 41922, 41957, 41406, 41077)
Use consistent helper for getting secret names from pod
Kubelet secret-manager and mirror-pod admission both need to know what secrets a pod spec references. Eventually, a node authorizer will also need to know the list of secrets.
This creates a single (well, double, because of API versions) helper that can be used to traverse the secret names referenced by a pod, optionally short-circuiting (for places that just need to know whether any secrets are referenced, like admission, or that are looking for a particular secret ref, like authorization).
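A minimal sketch of the shape of such a helper (names and coverage are illustrative, not the exact upstream implementation):
```go
package podutil

import "k8s.io/api/core/v1" // illustrative import; the real helpers exist per API version

// VisitPodSecretNames calls visitor with the name of every secret referenced by
// the pod spec (image pull secrets, secret volumes, env/envFrom in containers
// and init containers, ...) and stops early if visitor returns false.
func VisitPodSecretNames(pod *v1.Pod, visitor func(name string) bool) bool {
	for _, ref := range pod.Spec.ImagePullSecrets {
		if !visitor(ref.Name) {
			return false
		}
	}
	for _, vol := range pod.Spec.Volumes {
		if vol.Secret != nil && !visitor(vol.Secret.SecretName) {
			return false
		}
	}
	// env, envFrom and the remaining secret references would be visited the same way.
	return true
}
```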
Fixes:
* secret manager not handling secrets used by env/envFrom in init containers
* admission allowing mirror pods with secret references
@smarterclayton @wojtek-t
Automatic merge from submit-queue
make reconciliation generic to handle roles and clusterroles
We have a need to reconcile regular roles, so this pull moves the reconciliation code to use interfaces (still tightly coupled) rather than structs.
@liggitt @kubernetes/sig-auth-pr-reviews
Automatic merge from submit-queue
add client-ca to configmap in kube-public
Client CA information is not secret and it's required for any API server trying to terminate a TLS connection. This pull adds the information to configmaps in `kube-public` that look like this:
```yaml
apiVersion: v1
data:
  client-ca.crt: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  requestheader-allowed-names: '["system:auth-proxy"]'
  requestheader-client-ca-file: |
    -----BEGIN CERTIFICATE-----
    -----END CERTIFICATE-----
  requestheader-extra-headers-prefix: '["X-Remote-Extra-"]'
  requestheader-group-headers: '["X-Remote-Group"]'
  requestheader-username-headers: '["X-Remote-User"]'
kind: ConfigMap
metadata:
  creationTimestamp: 2017-02-22T17:54:37Z
  name: extension-apiserver-authentication
  namespace: kube-system
  resourceVersion: "6"
  selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
  uid: fa1dd328-f927-11e6-8b0e-28d2447dc82b
```
@kubernetes/sig-auth-api-reviews @liggitt @kubernetes/sig-api-machinery-pr-reviews @lavalamp @sttts
There will need to be a corresponding pull for permissions
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)
Add custom CA file to openstack cloud provider config
**What this PR does / why we need it**: Adds the ability to specify a custom CA bundle file to verify the OpenStack endpoint against. Useful in tests and PoC deployments. Similar to what https://github.com/kubernetes/kubernetes/pull/35488 did for authentication.
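An illustrative cloud config snippet (the CA option name and the other values are assumptions/placeholders, mirroring the existing authentication options):
```
[Global]
auth-url = https://keystone.example.com:5000/v2.0
username = demo
password = secret
tenant-name = demo
ca-file = /etc/kubernetes/openstack-ca.crt
```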
**Which issue this PR fixes**: None
**Special notes for your reviewer**: Based on https://github.com/kubernetes/kubernetes/pull/35488 which added support for custom CA file for authentication.
**Release note**:
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)
Make DaemonSets survive taint-based evictions when nodes turn unreachable/notReady
**What this PR does / why we need it**:
DaemonSet pods shouldn't be deleted by the NodeController in case of node problems.
This PR adds infinite tolerations for the Unreachable/NotReady NoExecute taints, so that DaemonSet pods won't be deleted by the NodeController when a node goes unreachable/notReady.
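For illustration, the tolerations added to DaemonSet pods look roughly like this; the exact taint keys depend on the Kubernetes version and are an assumption here, and omitting `tolerationSeconds` is what makes them infinite:
```yaml
tolerations:
- key: node.alpha.kubernetes.io/notReady
  operator: Exists
  effect: NoExecute
- key: node.alpha.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
```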
**Which issue this PR fixes**:
fixes #41738
Related PR: #41133
**Special notes for your reviewer**:
**Release note**:
```release-note
Make DaemonSets survive taint-based evictions when nodes turn unreachable/notReady.
```
Automatic merge from submit-queue (batch tested with PRs 40932, 41896, 41815, 41309, 41628)
Modify CronJob API to add job history limits, cleanup jobs in controller
**What this PR does / why we need it**:
As discussed in #34710, this adds two limits to `CronJobSpec` that control how many finished jobs created by a CronJob are kept.
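For illustration, a CronJob using the new limits might look like this (the exact field names are an assumption pending API review):
```yaml
apiVersion: batch/v2alpha1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello"]
```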
**Which issue this PR fixes**: fixes #34710
**Special notes for your reviewer**:
cc @soltysh, please have a look and let me know what you think -- I'll then add end to end testing and update the doc in a separate commit. What is the timeline to get this into 1.6?
The plan:
- [x] API changes
  - [x] Changing versioned APIs
    - [x] `types.go`
    - [x] `defaults.go` (nothing to do)
    - [x] `conversion.go` (nothing to do?)
    - [x] `conversion_test.go` (nothing to do?)
  - [x] Changing the internal structure
    - [x] `types.go`
    - [x] `validation.go`
    - [x] `validation_test.go`
  - [x] Edit version conversions
    - [x] Edit (nothing to do?)
    - [x] Run `hack/update-codegen.sh`
  - [x] Generate protobuf objects
    - [x] Run `hack/update-generated-protobuf.sh`
  - [x] Generate json (un)marshaling code
    - [x] Run `hack/update-codecgen.sh`
  - [x] Update fuzzer
- [x] Actual logic
- [x] Unit tests
- [x] End to end tests
- [x] Documentation changes and API specs update in separate commit
**Release note**:
```release-note
Add configurable limits to CronJob resource to specify how many successful and failed jobs are preserved.
```
Automatic merge from submit-queue (batch tested with PRs 41621, 41946, 41941, 41250, 41729)
Refactor printers and describers into their own package.
This sets the stage for using printer code from the server side (decoupled from kubectl) and loosens the coupling between kubectl and the printers. `pkg/printers` contains interfaces and has an import restriction against pulling in API specific code, while `pkg/printers/internalversion` can be used for internal types.
Add a method on `Factory` for retrieving a PrinterForCommand that uses the Scheme and RESTMapper from the Factory, not the hardcoded ones. This further separates kubectl from the core API scheme and allows better composition.
Change NamePrinter to use the RESTMapper (previously it hardcoded those conversions). This means that we now return plural resource names (`pods/foo`), which is correct once aliases and short names start being returned by the mapper.
This is a prerequisite for server side get, but is pure refactor (contains no new features).
@deads2k @liggitt
Automatic merge from submit-queue (batch tested with PRs 41621, 41946, 41941, 41250, 41729)
bug fix for hostport-syncer
Fix a bug introduced by the previous refactoring of the hostport-syncer (https://github.com/kubernetes/kubernetes/pull/39443), and fix some nits.
Automatic merge from submit-queue
BestEffort QoS class has min cpu shares
**What this PR does / why we need it**:
BestEffort QoS class is given the minimum amount of CPU shares per the QoS design.
Automatic merge from submit-queue (batch tested with PRs 42106, 42094, 42069, 42098, 41852)
Fix availableReplicas validation
An available replica is necessarily a ready replica, but a ready replica is not necessarily available.
@kubernetes/sig-apps-bugs caught while testing https://github.com/kubernetes/kubernetes/pull/42097
Automatic merge from submit-queue (batch tested with PRs 41854, 41801, 40088, 41590, 41911)
Add storage.k8s.io/v1 API
The v1 API is a direct copy of the v1beta1 API. The v1 API gets installed and exposed in this PR; I tested that kubectl can create both v1beta1 and v1 StorageClass objects.
~~The rest of Kubernetes (controllers, examples, tests, ...) still uses the v1beta1 API; I will update it when this PR gets merged, as these changes would get lost among generated code.~~ Most parts use the v1 API now; it would not compile / run tests without it.
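For illustration, a StorageClass in the new group/version (provisioner and parameters here are just placeholders):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
```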
**Release note**:
```
Kubernetes API storage.k8s.io for storage objects is now fully supported and is available as storage.k8s.io/v1. Beta version of the API storage.k8s.io/v1beta1 is still available in this release, however it will be removed in a future Kubernetes release.
Together with the API endpoint, StorageClass annotation "storageclass.beta.kubernetes.io/is-default-class" is deprecated and "storageclass.kubernetes.io/is-default-class" should be used instead to mark a default storage class. The beta annotation is still working in this release, however it won't be supported in the next one.
```
@kubernetes/sig-storage-misc
Automatic merge from submit-queue (batch tested with PRs 40665, 41094, 41351, 41721, 41843)
Update i18n tools and process.
@fabianofranz @zen @kubernetes/sig-cli-pr-reviews
This is an update to the translation process based on feedback from folks.
The main changes are:
* `msgctx` is being removed from the files.
* String wrapping and string extraction have been separated.
* More tools from the `gettext` family of tools are being used
* Extracted strings are being sorted for canonical ordering
* A `.pot` template has been added.
Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)
Show specific error when a volume is formatted by unexpected filesystem.
kubelet now detects that e.g. an xfs volume is being mounted as ext3 because of a wrong volume.Spec.
The mount error is kept in the error message to help diagnose issues with mounting e.g. an 'ext3' volume as 'ext4' - they are different filesystems, yet the kernel should mount ext3 as ext4 without errors.
Example kubectl describe pod output:
```
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
41s 3s 7 {kubelet ip-172-18-3-82.ec2.internal} Warning FailedMount MountVolume.MountDevice failed for volume "kubernetes.io/aws-ebs/aws://us-east-1d/vol-ba79c81d" (spec.Name: "pvc-ce175cbb-6b82-11e6-9fe4-0e885cca73d3") pod "3d19cb64-6b83-11e6-9fe4-0e885cca73d3" (UID: "3d19cb64-6b83-11e6-9fe4-0e885cca73d3") with: failed to mount the volume as "ext4", it's already formatted with "xfs". Mount error: mount failed: exit status 32
Mounting arguments: /dev/xvdba /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1d/vol-ba79c81d ext4 [defaults]
Output: mount: wrong fs type, bad option, bad superblock on /dev/xvdba,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
```
Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)
Switch scheduler to use generated listers/informers
Where possible, switch the scheduler to use generated listers and
informers. There are still some places where it probably makes more
sense to use one-off reflectors/informers (listing/watching just a
single node, listing/watching scheduled & unscheduled pods using a field
selector).
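A minimal sketch of the generated shared-informer/lister pattern described above, using today's client-go packages (the PR itself used the in-tree generated informers of that era, so treat paths and details as illustrative):
```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// One shared factory; every consumer reads from the same underlying cache.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podLister := factory.Core().V1().Pods().Lister()
	nodeLister := factory.Core().V1().Nodes().Lister()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Listers read from the shared in-memory cache instead of one-off reflectors.
	pods, _ := podLister.List(labels.Everything())
	nodes, _ := nodeLister.List(labels.Everything())
	fmt.Printf("cached %d pods on %d nodes\n", len(pods), len(nodes))
}
```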
I think this can wait until master is open for 1.7 pulls, given that we're close to the 1.6 freeze.
After this and #41482 go in, the only code left that references legacylisters will be federation, and 1 bit in a stateful set unit test (which I'll clean up in a follow-up).
@resouer I imagine this will conflict with your equivalence class work, so one of us will be doing some rebasing 😄
cc @wojtek-t @gmarek @timothysc @jayunit100 @smarterclayton @deads2k @liggitt @sttts @derekwaynecarr @kubernetes/sig-scheduling-pr-reviews @kubernetes/sig-scalability-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41714, 41510, 42052, 41918, 31515)
controller: fix requeueing progressing deployments
Drop the secondary queue and instead add to the main queue, either rate-limited or after the required amount of time we need to wait. This way we can always be sure that we will sync the Deployment again if its progress has yet to resolve into a complete (NewReplicaSetAvailable) or TimedOut condition.
This should also simplify the deployment controller a bit.
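A minimal sketch of the requeueing pattern, using client-go's workqueue (the helper and struct names are illustrative, not the deployment controller's actual code):
```go
package controller

import (
	"time"

	"k8s.io/client-go/util/workqueue"
)

type deploymentController struct {
	queue workqueue.RateLimitingInterface
}

// enqueueAfter requeues a deployment key into the single main queue: with
// exponential backoff when a sync failed, or after a fixed delay while waiting
// for progress, instead of keeping a separate secondary queue.
func (dc *deploymentController) enqueueAfter(key string, after time.Duration, failed bool) {
	if failed {
		// Back off exponentially on repeated failures.
		dc.queue.AddRateLimited(key)
		return
	}
	// Re-check progress once the wait window has passed.
	dc.queue.AddAfter(key, after)
}
```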
Fixes https://github.com/kubernetes/kubernetes/issues/39785. Once this change soaks, I will move the test out of the flaky suite.
@kubernetes/sig-apps-misc
Automatic merge from submit-queue
Fix for Support selection of datastore for dynamic provisioning in vS…
Fixes #40558
The current vSphere cloud provider doesn't allow a user to select a datastore for dynamic provisioning; all volumes are created in the default datastore provided by the user in the global vSphere configuration file.
With this fix, the user will be able to specify the datastore in the storage class definition, which allows volumes to be created in the datastore of their choice. This field is optional; if no datastore is specified, the volume will be created in the default datastore specified in the global config file.
For example:
User creates a storage class with the datastore:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  datastore: VMFSDatastore
```
Now the volume will be created in the datastore - "VMFSDatastore" specified by the user.
If the user creates a storage class without any datastore:
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
```
Now the volume will be created in the default datastore specified in the global configuration file (vsphere.conf).
@pdhamdhere @kerneltime
Automatic merge from submit-queue (batch tested with PRs 41667, 41820, 40910, 41645, 41361)
Refactor ControllerRefManager
**What this PR does / why we need it**:
To prepare for implementing ControllerRef across all controllers (https://github.com/kubernetes/community/pull/298), this pushes the common adopt/orphan logic into ControllerRefManager so each controller doesn't have to duplicate it.
This also shares the adopt/orphan logic between Pods and ReplicaSets, so it lives in only one place.
**Which issue this PR fixes**:
**Special notes for your reviewer**:
**Release note**:
```release-note
```
cc @kubernetes/sig-apps-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41667, 41820, 40910, 41645, 41361)
Switch admission to use shared informers
Originally part of #40097
cc @smarterclayton @derekwaynecarr @deads2k @liggitt @sttts @gmarek @wojtek-t @timothysc @lavalamp @kubernetes/sig-scalability-pr-reviews @kubernetes/sig-api-machinery-pr-reviews
Automatic merge from submit-queue (batch tested with PRs 41667, 41820, 40910, 41645, 41361)
Allow multiple mounts in StatefulSet volume zone placement
We have some heuristics that ensure that volumes (and hence stateful set
pods) are spread out across zones. Sadly they forgot to account for
multiple mounts. This PR updates the heuristic to ignore the mount name
when we see something that looks like a statefulset volume, thus
ensuring that multiple mounts end up in the same AZ.
Fix #35695
```release-note
Fix zone placement heuristics so that multiple mounts in a StatefulSet pod are created in the same zone
```
Automatic merge from submit-queue
add deads2k and sttts to kubeapiserver owners
Adds @deads2k and @sttts to packages we authored or significantly modified.
@lavalamp @smarterclayton
Automatic merge from submit-queue (batch tested with PRs 38702, 41810, 41778, 41858, 41872)
Always enable RBAC in kubeadm and make a pkg with authorization constants
**What this PR does / why we need it**:
This PR:
- Splits the authz constants out into a dedicated package, so consumers don't have to import lots of other things (informers, etc...)
- Makes an `IsValidAuthorizationMode` function for easy checking
- Hooks kubeadm up to the new constants package, for example using the validation method when validating the kubeadm API object
- Always enables RBAC in kubeadm, as discussed with @liggitt and @jbeda
  - This is because we have to grant some rules in all cases for kubeadm (for instance, making the cluster-info configmap public)
- Adds more unit tests
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
NONE
```
@liggitt @jbeda @errordeveloper @dmmcquay @pires @deads2k
Automatic merge from submit-queue (batch tested with PRs 38702, 41810, 41778, 41858, 41872)
gce: Reuse unsuccessfully provisioned volumes.
GCE PD names generated by Kubernetes are guaranteed to be unique - they contain the name of the cluster and the UID of the PVC behind them. The presence of a GCE PD with the same name as the one we want to provision indicates that a previous provisioning attempt did not go well; most probably the controller manager process was restarted in the meantime. Kubernetes should reuse this volume and not provision a new one.
Fixes #38681
Automatic merge from submit-queue
Add ClassName attributes to PV and PVC
This just adds new attributes to PV/PVC. Real code that uses the attributes instead of beta annotations will follow when we agree on the attribute names / style.
Automatic merge from submit-queue
Include all user.Info data in CSR object
In order to use authorization checks to auto-approve CSRs in the future, we need all the info from the user.Info interface.
This mirrors the API fields in the TokenReview API used to return user info, and in the SubjectAccessReview API we use to check authorization.
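For illustration, the requesting-user information on a CSR then looks roughly like this (values are placeholders; field names mirror the user.Info interface and are an assumption here):
```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: example-csr
spec:
  # request: <base64-encoded PKCS#10 CSR omitted>
  username: system:node:node-1
  uid: "42"
  groups:
  - system:nodes
  - system:authenticated
  extra:
    scopes:
    - cluster-signing
```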
```release-note
The CertificateSigningRequest API added the `extra` field to persist all information about the requesting user. This mirrors the fields in the SubjectAccessReview API used to check authorization.
```
Automatic merge from submit-queue (batch tested with PRs 41812, 41665, 40007, 41281, 41771)
kube-apiserver: add a bootstrap token authenticator for TLS bootstrapping
Follows up on https://github.com/kubernetes/kubernetes/pull/36101
Still needs:
* More tests.
* To be hooked up to the API server.
  - Do I have to do that in a separate PR after k8s.io/apiserver is synced?
* Docs (kubernetes.io PR).
* Figure out caching strategy.
* Release notes.
cc @kubernetes/sig-auth-api-reviews @liggitt @luxas @jbeda
```release-note
Added a new secret type "bootstrap.kubernetes.io/token" for dynamically creating TLS bootstrapping bearer tokens.
```
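For illustration, a secret of the new type looks roughly like this (key names follow the TLS bootstrapping proposal and are an assumption here):
```yaml
apiVersion: v1
kind: Secret
metadata:
  # the name conventionally encodes the public token ID
  name: bootstrap-token-abcdef
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef
  token-secret: 0123456789abcdef
  expiration: "2017-03-10T03:22:11Z"
  usage-bootstrap-authentication: "true"
```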
Automatic merge from submit-queue (batch tested with PRs 41812, 41665, 40007, 41281, 41771)
Kubelet-rkt: Add useful information for Ops on the Kubelet host
Create a systemd SyslogIdentifier inside the [Service] section.
Create a systemd Description inside the [Unit] section.
**What this PR does / why we need it**:
#### Overview
Logged in on the host, it's difficult to identify who's who.
This PR adds useful information so you can quickly get straight to the point with the **DESCRIPTION** field:
```
systemctl list-units "k8s*"
UNIT LOAD ACTIVE SUB DESCRIPTION
k8s_b5a9bdf7-e396-4989-8df0-30a5fda7f94c.service loaded active running kube-controller-manager-172.20.0.206
k8s_bec0d8a1-dc15-4b47-a850-e09cf098646a.service loaded active running nginx-daemonset-gxm4s
k8s_d2981e9c-2845-4aa2-a0de-46e828f0c91b.service loaded active running kube-apiserver-172.20.0.206
k8s_fde4b0ab-87f8-4fd1-b5d2-3154918f6c89.service loaded active running kube-scheduler-172.20.0.206
```
#### Overview and Journal
Still on the host, to easily retrieve a pod's logs, this PR adds a SyslogIdentifier named after the pod's base name.
```
# A DaemonSet prometheus-node-exporter is running on the Kubernetes Cluster
systemctl list-units "k8s*" | grep prometheus-node-exporter
k8s_c60a4b1a-387d-4fce-afa1-642d6f5716c1.service loaded active running prometheus-node-exporter-85cpp
# Get the logs from the prometheus-node-exporter DaemonSet
journalctl -t prometheus-node-exporter | wc -l
278
```
Sadly the `journalctl` flag `-t` / `--identifier` doesn't accept a pattern for matching logs.
This field also improves queries made by any tool that exports the journal (e.g. Elasticsearch, Kibana):
```
{
"__CURSOR" : "s=86fd390d123b47af89bb15f41feb9863;i=164b2c27;b=7709deb3400841009e0acc2fec1ebe0e;m=1fe822ca4;t=54635e6a62285;x=b2d321019d70f36f",
"__REALTIME_TIMESTAMP" : "1484572200411781",
"__MONOTONIC_TIMESTAMP" : "8564911268",
"_BOOT_ID" : "7709deb3400841009e0acc2fec1ebe0e",
"PRIORITY" : "6",
"_UID" : "0",
"_GID" : "0",
"_SYSTEMD_SLICE" : "system.slice",
"_SELINUX_CONTEXT" : "system_u:system_r:kernel_t:s0",
"_MACHINE_ID" : "7bbb4401667243da81671e23fd8a2246",
"_HOSTNAME" : "Kubelet-Host",
"_TRANSPORT" : "stdout",
"SYSLOG_FACILITY" : "3",
"_COMM" : "ld-linux-x86-64",
"_CAP_EFFECTIVE" : "3fffffffff",
"SYSLOG_IDENTIFIER" : "prometheus-node-exporter",
"_PID" : "88827",
"_EXE" : "/var/lib/rkt/pods/run/c60a4b1a-387d-4fce-afa1-642d6f5716c1/stage1/rootfs/usr/lib64/ld-2.21.so",
"_CMDLINE" : "stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn [....]",
"_SYSTEMD_CGROUP" : "/system.slice/k8s_c60a4b1a-387d-4fce-afa1-642d6f5716c1.service",
"_SYSTEMD_UNIT" : "k8s_c60a4b1a-387d-4fce-afa1-642d6f5716c1.service",
"MESSAGE" : "[ 8564.909237] prometheus-node-exporter[115]: time=\"2017-01-16T13:10:00Z\" level=info msg=\" - time\" source=\"node_exporter.go:157\""
}
```