devel/ tree 80col updates; and other minor edits

Signed-off-by: Mike Brown <brownwm@us.ibm.com>
pull/6/head
Mike Brown 2016-05-03 14:31:42 -05:00
parent 3a0d2d55c3
commit 136833f78e
6 changed files with 263 additions and 138 deletions


@@ -38,18 +38,18 @@ Most of what is written here is not at all specific to Kubernetes, but it bears
being written down in the hope that it will occasionally remind people of "best
practices" around code reviews.
You've just had a brilliant idea on how to make Kubernetes better. Let's call
that idea "Feature-X". Feature-X is not even that complicated. You have a pretty
good idea of how to implement it. You jump in and implement it, fixing a bunch
of stuff along the way. You send your PR - this is awesome! And it sits. And
sits. A week goes by and nobody reviews it. Finally someone offers a few
comments, which you fix up and wait for more review. And you wait. Another
week or two goes by. This is horrible.
What went wrong? One particular problem that comes up frequently is this - your
PR is too big to review. You've touched 39 files and have 8657 insertions. When
your would-be reviewers pull up the diffs they run away - this PR is going to
take 4 hours to review and they don't have 4 hours right now. They'll get to it
later, just as soon as they have more free time (ha!).
Let's talk about how to avoid this.
@@ -63,38 +63,39 @@ Let's talk about how to avoid this.
## 1. Don't build a cathedral in one PR
Are you sure Feature-X is something the Kubernetes team wants or will accept, or
that it is implemented to fit with other changes in flight? Are you willing to
bet a few days or weeks of work on it? If you have any doubt at all about the
usefulness of your feature or the design - make a proposal doc (in
docs/proposals; for example [the QoS proposal](http://prs.k8s.io/11713)) or a
sketch PR (e.g., just the API or Go interface) or both. Write or code up just
enough to express the idea and the design and why you made those choices, then
get feedback on this. Be clear about what type of feedback you are asking for.
Now, if we ask you to change a bunch of facets of the design, you won't have to
re-write it all.
## 2. Smaller diffs are exponentially better
Small PRs get reviewed faster and are more likely to be correct than big ones.
Let's face it - attention wanes over time. If your PR takes 60 minutes to
review, I almost guarantee that the reviewer's eye for detail is not as keen in
the last 30 minutes as it was in the first. This leads to multiple rounds of
review when one might have sufficed. In some cases the review is delayed in its
entirety by the need for a large contiguous block of time to sit and read your
code.
Whenever possible, break up your PRs into multiple commits. Making a series of
discrete commits is a powerful way to express the evolution of an idea or the
different ideas that make up a single feature. There's a balance to be struck,
obviously. If your commits are too small they become more cumbersome to deal
with. Strive to group logically distinct ideas into separate commits.
For example, if you found that Feature-X needed some "prefactoring" to fit in,
make a commit that JUST does that prefactoring. Then make a new commit for
Feature-X. Don't lump unrelated things together just because you didn't think
about prefactoring. If you need to, fork a new branch, do the prefactoring
there and send a PR for that. If you can explain why you are doing seemingly
no-op work ("it makes the Feature-X change easier, I promise") we'll probably be
OK with it.
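For instance, a minimal git sketch of that split (the branch names, commands in
comments, and commit messages here are hypothetical) might look like:

```sh
# Land the no-op prefactoring on its own branch (and its own PR) first:
git checkout -b prefactor-widget-api master
# ... make the mechanical refactoring changes ...
git commit -am "Refactor widget API to take an options struct (no-op)"

# Then build Feature-X on top of the prefactoring:
git checkout -b feature-x
# ... implement Feature-X ...
git commit -am "Add Feature-X using the new widget options"
```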
Obviously, a PR with 25 commits is still very cumbersome to review, so use
@@ -103,135 +104,146 @@ common sense.
## 3. Multiple small PRs are often better than multiple commits
If you can extract whole ideas from your PR and send those as PRs of their own,
you can avoid the painful problem of continually rebasing. Kubernetes is a
fast-moving codebase - lock in your changes ASAP, and make merges be someone
else's problem.
Obviously, we want every PR to be useful on its own, so you'll have to use
common sense in deciding what can be a PR vs what should be a commit in a larger
PR. Rule of thumb - if this commit or set of commits is directly related to
Feature-X and nothing else, it should probably be part of the Feature-X PR. If
you can plausibly imagine someone finding value in this commit outside of
Feature-X, try it as a PR.
Don't worry about flooding us with PRs. We'd rather have 100 small, obvious PRs
than 10 unreviewable monoliths.
## 4. Don't rename, reformat, comment, etc in the same PR
Often, as you are implementing Feature-X, you find things that are just wrong.
Bad comments, poorly named functions, bad structure, weak type-safety. You
should absolutely fix those things (or at least file issues, please) - but not
in this PR. See the above points - break unrelated changes out into different
PRs or commits. Otherwise your diff will have WAY too many changes, and your
reviewer won't see the forest because of all the trees.
## 5. Comments matter
Read up on GoDoc - follow those general rules. If you're writing code and you
think there is any possible chance that someone might not understand why you did
something (or that you won't remember what you yourself did), comment it. If
you think there's something pretty obvious that we could follow up on, add a
TODO. Many code-review comments are about this exact issue.
## 6. Tests are almost always required
Nothing is more frustrating than doing a review, only to find that the tests are
inadequate or even entirely absent. Very few PRs can touch code and NOT touch
tests. If you don't know how to test Feature-X - ask! We'll be happy to help
you design things for easy testing or to suggest appropriate test cases.
## 7. Look for opportunities to generify
If you find yourself writing something that touches a lot of modules, think hard
about the dependencies you are introducing between packages. Can some of what
you're doing be made more generic and moved up and out of the Feature-X package?
Do you need to use a function or type from an otherwise unrelated package? If
so, promote! We have places specifically for hosting more generic code.
Likewise if Feature-X is similar in form to Feature-W which was checked in last
month and it happens to exactly duplicate some tricky stuff from Feature-W,
consider prefactoring core logic out and using it in both Feature-W and
Feature-X. But do that in a different commit or PR, please.
## 8. Fix feedback in a new commit
Your reviewer has finally sent you some feedback on Feature-X. You make a bunch
of changes and ... what? You could patch those into your commits with git
"squash" or "fixup" logic. But that makes your changes hard to verify. Unless
your whole PR is pretty trivial, you should instead put your fixups into a new
commit and re-push. Your reviewer can then look at that commit on its own - so
much faster to review than starting over.
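As a sketch (the branch name and message are hypothetical), that can be as
simple as:

```sh
# Put the review fixes in their own commit and push it to the same PR:
git commit -am "Address review feedback: rename helper, add error check"
git push origin feature-x
```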
We might still ask you to clean up your commits at the very end, for the sake
of a more readable history, but don't do this until asked, typically at the
point where the PR would otherwise be tagged LGTM.
General squashing guidelines:
* Sausage => squash
When there are several commits to fix bugs in the original commit(s), address
reviewer feedback, etc. Really, we only want to see the end state and commit
message for the whole PR.
* Layers => don't squash
When there are independent changes layered upon each other to achieve a single
goal. For instance, writing a code munger could be one commit, applying it could
be another, and adding a precommit check could be a third. One could argue they
should be separate PRs, but there's really no way to test/review the munger
without seeing it applied, and there needs to be a precommit check to ensure the
munged output doesn't immediately get out of date.
A commit, as much as possible, should be a single logical change. Each commit
should always have a good title line (<70 characters) and include an additional
description paragraph describing in more detail the change intended. Do not link
pull requests by `#` in a commit description, because GitHub creates lots of
spam. Instead, reference other PRs via the PR your commit is in.
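When you are asked to squash at the end, one possible sketch of that cleanup
(assuming your PR branch is `feature-x` and is based on `origin/master`) is:

```sh
# Interactively mark the fixup commits as "squash" or "fixup", then
# rewrite the branch on the PR:
git rebase -i origin/master
git push --force-with-lease origin feature-x
```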
## 9. KISS, YAGNI, MVP, etc
Sometimes we need to remind each other of core tenets of software design - Keep
It Simple, You Aren't Gonna Need It, Minimum Viable Product, and so on. Adding
features "because we might need it later" is antithetical to software that
ships. Add the things you need NOW and (ideally) leave room for things you
might need later - but don't implement them now.
## 10. Push back
We understand that it is hard to imagine, but sometimes we make mistakes. It's
OK to push back on changes requested during a review. If you have a good reason
for doing something a certain way, you are absolutely allowed to debate the
merits of a requested change. You might be overruled, but you might also
prevail. We're mostly pretty reasonable people. Mostly.
## 11. I'm still getting stalled - help?!
So, you've done all that and you still aren't getting any PR love? Here's some
things you can do that might help kick a stalled process along:
* Make sure that your PR has an assigned reviewer (assignee in GitHub). If
this is not the case, reply to the PR comment stream asking for one to be
assigned.
* Ping the assignee (@username) on the PR comment stream asking for an
estimate of when they can get to it.
* Ping the assignee by email (many of us have email addresses that are well
published or are the same as our GitHub handle @google.com or @redhat.com).
* Ping the [team](https://github.com/orgs/kubernetes/teams) (via @team-name)
that works in the area you're submitting code.
If you think you have fixed all the issues in a round of review, and you haven't
heard back, you should ping the reviewer (assignee) on the comment stream with a
"please take another look" (PTAL) or similar comment indicating you are done and
you think it is ready for re-review. In fact, this is probably a good habit for
all PRs.
One phenomenon of open-source projects (where anyone can comment on any issue)
is the dog-pile - your PR gets so many comments from so many people it becomes
hard to follow. In this situation you can ask the primary reviewer (assignee)
whether they want you to fork a new PR to clear out all the comments. Remember:
you don't HAVE to fix every issue raised by every person who feels like
commenting, but you should at least answer reasonable comments with an
explanation.
## Final: Use common sense
Obviously, none of these points are hard rules. There is no document that can
take the place of common sense and good taste. Use your best judgment, but put
a bit of thought into how your work can be made easier to review. If you do
these things your PRs will flow much more easily.


@@ -67,7 +67,7 @@ discoverable from the issue.
5. Link to durable storage with the rest of the logs. This means (for all the
tests that Google runs) the GCS link is mandatory! The Jenkins test result
link is nice but strictly optional: not only does it expire more quickly,
it's not accessible to non-Googlers.
## Expectations when a flaky test is assigned to you
@@ -132,15 +132,20 @@ system!
# Hunting flaky unit tests in Kubernetes
Sometimes unit tests are flaky. This means that due to (usually) race
conditions, they will occasionally fail, even though most of the time they pass.
We have a goal of 99.9% flake-free tests. This means that there is only one
flake in one thousand runs of a test.
Running a test 1000 times on your own machine can be tedious and time consuming.
Fortunately, there is a better way to achieve this using Kubernetes.
_Note: these instructions are mildly hacky for now; as we get run-once
semantics and logging, they will get better._
There is a testing image `brendanburns/flake` on Docker Hub. We will use
this image to test our fix.
Create a replication controller with the following config:
@@ -166,15 +171,25 @@ spec:
value: https://github.com/kubernetes/kubernetes
```
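For reference, a minimal sketch of such a `controller.yaml` (the env var names
and the test package below are assumptions, not the exact file) could look
like:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: flakecontroller
spec:
  replicas: 24
  template:
    metadata:
      labels:
        name: flakecontroller
    spec:
      containers:
      - name: flake
        image: brendanburns/flake
        env:
        - name: TEST_PACKAGE   # hypothetical: the package whose tests we rerun
          value: pkg/tools
        - name: REPO_SPEC      # hypothetical name; the value appears above
          value: https://github.com/kubernetes/kubernetes
```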
Note that we omit the labels and the selector fields of the replication
controller, because they will be populated from the labels field of the pod
template by default.
```sh
kubectl create -f ./controller.yaml
```
This will spin up 24 instances of the test. They will run to completion, then
exit, and the kubelet will restart them, accumulating more and more runs of the
test.
You can examine the recent runs of the test by calling `docker ps -a` and
looking for tasks that exited with non-zero exit codes. Unfortunately,
`docker ps -a` only keeps around the exit status of the last 15-20 containers
with the same image, so you have to check them frequently.
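For a quick spot check on a single node, a one-liner sketch like this works:

```console
$ docker ps -a | grep "Exited ([^0])"
```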
You can use this script to automate checking for failures, assuming your cluster
is running on GCE and has four nodes:
```sh
echo "" > output.txt
@@ -186,13 +201,15 @@ done
grep "Exited ([^0])" output.txt
```
Eventually you will have sufficient runs for your purposes. At that point you
can delete the replication controller by running:
```sh
kubectl delete replicationcontroller flakecontroller
```
If you do a final check for flakes with `docker ps -a`, ignore tasks that
exited -1, since that's what happens when you stop the replication controller.
Happy flake hunting!


@@ -34,34 +34,69 @@ Documentation for other releases can be found at
# Generation and release cycle of clientset
Client-gen is an automatic tool that generates a
[clientset](../../docs/proposals/client-package-structure.md#high-level-client-sets)
based on API types. This doc introduces the use of client-gen, and the release
cycle of the generated clientsets.
## Using client-gen
The workflow includes four steps:
- Marking API types with tags: in `pkg/apis/${GROUP}/${VERSION}/types.go`, mark
the types (e.g., Pods) that you want to generate clients for with the
`// +genclient=true` tag. If the resource associated with the type is not
namespace scoped (e.g., PersistentVolume), you need to append the
`nonNamespaced=true` tag as well (see the sketch after this list).
- Running the client-gen tool: you need to use the command line argument
`--input` to specify the groups and versions of the APIs you want to generate
clients for; client-gen will then look into
`pkg/apis/${GROUP}/${VERSION}/types.go` and generate clients for the types you
have marked with the `genclient` tags. For example, running:
```
$ client-gen --input="api/v1,extensions/v1beta1" --clientset-name="my_release"
```
will generate a clientset named "my_release" which includes clients for api/v1
objects and extensions/v1beta1 objects. You can run `$ client-gen --help` to see
other command line arguments.
- Adding expansion methods: client-gen only generates the common methods, such
as `Create()` and `Delete()`. You can manually add additional methods through
the expansion interface. For example, this
[file](../../pkg/client/clientset_generated/release_1_2/typed/core/v1/pod_expansion.go)
adds additional methods to Pod's client. As a convention, we put the expansion
interface and its methods in the file `${TYPE}_expansion.go`.
- Generating fake clients for testing purposes: client-gen will generate a fake
clientset if the command line argument `--fake-clientset` is set. The fake
clientset provides the default implementation; you only need to fake out the
methods you care about when writing test cases.
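To illustrate the tagging step above, here is a sketch (the type and group are
hypothetical) of what a marked type in `types.go` might look like:

```go
package v1

// Widget is a hypothetical, cluster-scoped API type. The tags below ask
// client-gen to generate a client for it, without namespace scoping.
// +genclient=true
// +nonNamespaced=true
type Widget struct {
	// Object metadata fields are omitted here for brevity.
	Name string `json:"name"`
}
```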
The output of client-gen includes:
- clientset: the clientset will be generated at
`pkg/client/clientset_generated/` by default, and you can change the path via
the `--clientset-path` command line argument.
- Individual typed clients and client for group: They will be generated at `pkg/client/clientset_generated/${clientset_name}/typed/generated/${GROUP}/${VERSION}/`
## Released clientsets
At the 1.2 release, we have two released clientsets in the repo:
internalclientset and release_1_2.
- internalclientset: because most components in our repo still deal with the
internal objects, the internalclientset talks in internal objects to ease the
adoption of clientset. We will keep updating it as our API evolves. Eventually
it will be replaced by a versioned clientset.
- release_1_2: the release_1_2 clientset is a versioned clientset; it includes
clients for the core v1 objects, extensions/v1beta1, autoscaling/v1, and
batch/v1 objects. We will NOT update it after we cut the 1.2 release. After the
1.2 release, we will create the release_1_3 clientset and keep it updated until
we cut release 1.3.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->


@@ -34,29 +34,36 @@ Documentation for other releases can be found at
# Getting Kubernetes Builds
You can use [hack/get-build.sh](http://releases.k8s.io/HEAD/hack/get-build.sh)
to get a build, or use it as a reference for how to get the most recent builds
with curl. With `get-build.sh` you can grab the most recent stable build, the
most recent release candidate, or the most recent build to pass our CI and GCE
e2e tests (essentially a nightly build).
Run `./hack/get-build.sh -h` for its usage.
To get a build at a specific version (v1.1.1) use:
```console
./hack/get-build.sh v1.1.1
```
To get the latest stable release:
```console
./hack/get-build.sh release/stable
```
Use the "-v" option to print the version number of a build without retrieving
it. For example, the following prints the version number for the latest ci
build:
```console
./hack/get-build.sh -v ci/latest
```
You can also use the gsutil tool to explore the Google Cloud Storage release
buckets. Here are some examples:
```sh
gsutil cat gs://kubernetes-release-dev/ci/latest.txt # output the latest ci version number


@@ -31,7 +31,8 @@ Documentation for other releases can be found at
Updated: 11/3/2015
*This document is oriented at users and developers who want to write documents
for Kubernetes.*
**Table of Contents**
<!-- BEGIN MUNGE: GENERATED_TOC -->
@@ -56,24 +57,34 @@ Updated: 11/3/2015
## General Concepts
Each document needs to be munged to ensure its format is correct, links are
valid, etc. To munge a document, simply run `hack/update-munge-docs.sh`. We
verify that all documents have been munged using `hack/verify-munge-docs.sh`.
The scripts for munging documents are called mungers; see the
[mungers section](#what-are-mungers) below if you're curious about how mungers
are implemented or if you want to write one.
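For example, a typical edit-and-check loop looks like:

```console
$ hack/update-munge-docs.sh   # munge the docs in place
$ hack/verify-munge-docs.sh   # confirm everything has been munged
```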
## How to Get a Table of Contents
Instead of writing a table of contents by hand, insert the following code in
your md file:
```
<!-- BEGIN MUNGE: GENERATED_TOC -->
<!-- END MUNGE: GENERATED_TOC -->
```
After running `hack/update-munge-docs.sh`, you'll see a table of contents
generated for you, layered based on the headings.
## How to Write Links
It's important to follow the rules when writing links. It helps us correctly
version documents for each release.
Use inline links instead of bare URLs at all times. When you add internal links
to `docs/` or `examples/`, use relative links; otherwise, use
`http://releases.k8s.io/HEAD/<path/to/link>`. For example, avoid using:
```
[GCE](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/gce.md) # note that it's under docs/
@@ -89,18 +100,27 @@ Instead, use:
[Kubernetes](http://kubernetes.io/) # external link
```
The above example generates the following links:
[GCE](../getting-started-guides/gce.md),
[Kubernetes package](http://releases.k8s.io/HEAD/pkg/), and
[Kubernetes](http://kubernetes.io/).
## How to Include an Example
While writing examples, you may want to show the content of certain example
files (e.g. [pod.yaml](../user-guide/pod.yaml)). In this case, insert the
following code in the md file:
```
<!-- BEGIN MUNGE: EXAMPLE path/to/file -->
<!-- END MUNGE: EXAMPLE path/to/file -->
```
Note that you should replace `path/to/file` with the relative path to the
example file. Then `hack/update-munge-docs.sh` will generate a code block with
the content of the specified file, and a link to download it. This way, you
avoid copying and pasting by hand; better still, the content won't become
out-of-date when you update the example file.
For example, the following:
@@ -135,11 +155,17 @@ spec:
### Code formatting
Wrap a span of code with single backticks (`` ` ``). To format multiple lines of
code as its own code block, use triple backticks (```` ``` ````).
### Syntax Highlighting
Adding syntax highlighting to code blocks improves readability. To do so, in
your fenced block, add an optional language identifier. Some useful identifiers
include `yaml`, `console` (for console output), and `sh` (for shell quote
format). Note that in console output, you should put `$ ` at the beginning of
each command and nothing at the beginning of the output. Here's an example of a
console code block:
```
```console
@@ -159,26 +185,38 @@ pod "foo" created
### Headings
Add a single `#` before the document title to create a title heading, and add
`##` to the next level of section title, and so on. Note that the number of `#`
will determine the size of the heading.
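For instance, a short sketch:

```
# Document Title
## Section Heading
### Subsection Heading
```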
## What Are Mungers?
Mungers are like gofmt for md docs; we use them to format documents. To use
one, simply place
```
<!-- BEGIN MUNGE: xxxx -->
<!-- END MUNGE: xxxx -->
```
in your md files. Note that xxxx is a placeholder for a specific munger.
Appropriate content will be generated and inserted between the two comment tags
after you run `hack/update-munge-docs.sh`. See the
[munger document](http://releases.k8s.io/HEAD/cmd/mungedocs/) for more details.
## Auto-added Mungers
After running `hack/update-munge-docs.sh`, you may see some code / mungers in
your md file that are auto-added. You don't have to add them manually. It's
recommended to just read this section as a reference instead of messing with
the following mungers.
### Unversioned Warning
The UNVERSIONED_WARNING munger inserts an unversioned warning, which warns
users when they're reading the document from HEAD and informs them where to
find the corresponding document for a specific release.
```
<!-- BEGIN MUNGE: UNVERSIONED_WARNING -->
@@ -191,7 +229,8 @@ UNVERSIONED_WARNING munger inserts unversioned warning which warns the users whe
### Is Versioned
The IS_VERSIONED munger inserts an `IS_VERSIONED` tag in documents in each
release, which stops the UNVERSIONED_WARNING munger from inserting warning
messages.
```
<!-- BEGIN MUNGE: IS_VERSIONED -->


@@ -31,19 +31,30 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Instrumenting Kubernetes with a new metric
The following is a step-by-step guide for adding a new metric to the Kubernetes
code base.
We use the Prometheus monitoring system's golang client library for
instrumenting our code. Once you've picked out a file that you want to add a
metric to, you should:
1. Import "github.com/prometheus/client_golang/prometheus".
2. Create a top-level var to define the metric (a combined sketch follows this
list). For this, you have to:
1. Pick the type of metric. Use a Gauge for things you want to set to a
particular value, a Counter for things you want to increment, or a Histogram or
Summary for histograms/distributions of values (typically for latency).
Histograms are better if you're going to aggregate the values across jobs, while
summaries are better if you just want the job to give you a useful summary of
the values.
2. Give the metric a name and description.
3. Pick whether you want to distinguish different categories of things using
labels on the metric. If so, add "Vec" to the name of the type of metric you
want and add a slice of the label names to the definition.
https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L53
https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L31
@@ -53,13 +64,17 @@ We use the Prometheus monitoring system's golang client library for instrumentin
https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/kubelet/metrics/metrics.go#L74
https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L78
4. Use the metric by calling the appropriate method for your metric type (Set,
Inc/Add, or Observe, respectively for Gauge, Counter, or Histogram/Summary),
first calling WithLabelValues if your metric has any labels.
https://github.com/kubernetes/kubernetes/blob/3ce7fe8310ff081dbbd3d95490193e1d5250d2c9/pkg/kubelet/kubelet.go#L1384
https://github.com/kubernetes/kubernetes/blob/cd3299307d44665564e1a5c77d0daa0286603ff5/pkg/apiserver/apiserver.go#L87
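Putting the steps together, a sketch (the metric name, label, and package are
hypothetical) might look like:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// A SummaryVec, because we want a per-verb summary of latency values.
var requestLatency = prometheus.NewSummaryVec(
	prometheus.SummaryOpts{
		Name: "example_request_latency_microseconds",
		Help: "Latency of example requests, broken out by verb.",
	},
	[]string{"verb"}, // labels, so we use the *Vec variant
)

func init() {
	// Register the metric so it is exported with the other metrics.
	prometheus.MustRegister(requestLatency)
}

// recordRequest observes one request's latency for the given verb.
func recordRequest(verb string, latencyMicroseconds float64) {
	requestLatency.WithLabelValues(verb).Observe(latencyMicroseconds)
}
```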
These are the metric type definitions if you're curious to learn about them or
need more information:
https://github.com/prometheus/client_golang/blob/master/prometheus/gauge.go
https://github.com/prometheus/client_golang/blob/master/prometheus/counter.go
https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go