Add a document on pod templates that can be shared
between various controller docs.
Move more philosophical content to later in the doc.
Add more task-oriented stuff earlier.
Put example config in the document, early on, so users have something concrete to relate the discussion of fields to.
Link to Job and DaemonSet docs.
Make format more like that of Job and DaemonSet docs.
Use jsonpath in examples, which is available in v1.1.
Add example files.
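For reference, the jsonpath expressions used in those examples are the
same ones kubectl's -o jsonpath output accepts. A minimal Go sketch,
assuming the package at its current home, k8s.io/client-go/util/jsonpath
(it lived under pkg/util/jsonpath in the v1.1 tree), with a hand-rolled
map standing in for a decoded pod list:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/util/jsonpath"
)

func main() {
	// Stand-in for a decoded PodList; kubectl feeds the real object.
	data := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{"name": "nginx-1"}},
			map[string]interface{}{"metadata": map[string]interface{}{"name": "nginx-2"}},
		},
	}

	jp := jsonpath.New("names")
	if err := jp.Parse("{.items[*].metadata.name}"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Prints: nginx-1 nginx-2
	if err := jp.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

The same expression works on the command line as
kubectl get pods -o jsonpath='{.items[*].metadata.name}'.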
The pending codec -> conversion split makes the signatures of Encode
and Decode more complicated. Create a stub helper with the exact
semantics of today and do the simple mechanical refactor here to reduce
the cost of that change.
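A minimal sketch of the kind of stub meant here, assuming a cut-down,
stream-oriented Encoder/Decoder pair; the interface and method names
are stand-ins, not the actual Kubernetes signatures:

```go
package codec

import (
	"bytes"
	"io"
)

// Encoder stands in for the richer interface the codec -> conversion
// split introduces; only a stream-oriented method is shown.
type Encoder interface {
	EncodeToStream(obj interface{}, w io.Writer) error
}

// Encode keeps today's simple "object in, bytes out" contract. Call
// sites refactor mechanically to Encode(codec, obj), and when the
// underlying signature grows, only this helper has to change.
func Encode(e Encoder, obj interface{}) ([]byte, error) {
	var buf bytes.Buffer
	if err := e.EncodeToStream(obj, &buf); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// Decoder is the matching stand-in for deserialization.
type Decoder interface {
	DecodeFromBytes(data []byte) (interface{}, error)
}

// Decode mirrors Encode: bytes in, object out, exactly as today.
func Decode(d Decoder, data []byte) (interface{}, error) {
	return d.DecodeFromBytes(data)
}
```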
Before this change we have a mish-mash of ways to pass field names
around for error generation: sometimes string field names, sometimes
.Prefix(), sometimes neither, and the names are often wrong or not
indexed when they should be.
Instead of that mess, this is part one of a couple of commits that will
make field names more strongly typed and hopefully encourage correct
behavior. At least you will have to think about field names, which is
better than nothing.
It turned out to be really hard to do this incrementally.
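A sketch of what "more strongly typed" could look like: a path is built
step by step instead of by string concatenation, so child fields and
indexing are explicit. The type and method names here are illustrative,
not necessarily the ones this series introduces:

```go
package field

import "strconv"

// Path is the location of a field in an object, e.g.
// spec.containers[0].name, built as a linked chain of elements.
type Path struct {
	name   string // field name, e.g. "containers"
	index  string // set instead of name when this element is an index
	parent *Path
}

// NewPath starts a path at a named root field.
func NewPath(name string) *Path {
	return &Path{name: name}
}

// Child appends a named child field: spec -> spec.containers.
func (p *Path) Child(name string) *Path {
	return &Path{name: name, parent: p}
}

// Index appends an index: spec.containers -> spec.containers[0].
func (p *Path) Index(i int) *Path {
	return &Path{index: strconv.Itoa(i), parent: p}
}

// String renders the dotted, indexed form used in error messages.
func (p *Path) String() string {
	if p == nil {
		return ""
	}
	prefix := p.parent.String()
	if p.index != "" {
		return prefix + "[" + p.index + "]"
	}
	if prefix == "" {
		return p.name
	}
	return prefix + "." + p.name
}
```

With this, an error helper takes a *Path instead of a bare string:
NewPath("spec").Child("containers").Index(0).Child("name") renders as
spec.containers[0].name, and there is no way to forget the index.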
Introduce examples explaining how to use DaemonSets to optimally
distribute Cassandra nodes onto each Kubernetes node in the network.
Signed-off-by: Christian Stewart <christian@paral.in>
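For concreteness, the DaemonSet shape this refers to, written with
today's apps/v1 client-go types (the examples at the time targeted
extensions/v1beta1); the image tag is a placeholder:

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "cassandra"}

	// A DaemonSet schedules one Cassandra pod onto every node, which
	// is the "optimal distribution" here: no two ring members share a
	// Kubernetes node.
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "cassandra"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{
					Containers: []v1.Container{{
						Name:  "cassandra",
						Image: "cassandra:placeholder", // stand-in image tag
					}},
				},
			},
		},
	}
	fmt.Println(ds.Name)
}
```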
This adds a very basic Zeppelin image that works with the existing
Spark example. As can be seen from the documentation, it has a couple
of warts:
* It requires kubectl port-forward (which is unstable across long
periods of time, at least for me, on this app; bug incoming).
* I needed to roll my own container (none of the existing containers
exactly matched our needs, or even built anymore against modern
Zeppelin master, and the rest of the example is Spark 1.5).
The image itself is *huge*. One of the further refinements we need to
look at is how to possibly strip the Maven build for this container
down to just the interpreters we care about, because the deps here
are frankly ridiculous.
This might be a case where, if possible, we might want to open an
upstream request to build things dynamically, then use something like
late-bound interpreter loading to probably cut the image down
considerably. (This might already be possible; need to poke at whether
you can late-bind interpreters.)
This ensures nfs-common is installed on GCE, and provides a more
functional explanation/example. I launched two replication controllers
so that there were busybox pods to poke around at the NFS volume, and
so that the later wget actually works (with the original example, the
wget would have had to run on the node, or would have needed some other
access to the container network). After switching to two controllers,
it actually makes more sense to use PV claims, which is probably the
right configuration for NFS anyway: the claim adds a useful layer of
indirection (see the sketch below).
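A sketch of that PV-plus-claim indirection, using today's k8s.io/api
types for illustration; the server address and capacity are
placeholders, and a real claim would also set resource requests:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The PersistentVolume is the only object that names the NFS server.
	pv := &v1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "nfs"},
		Spec: v1.PersistentVolumeSpec{
			Capacity: v1.ResourceList{
				v1.ResourceStorage: resource.MustParse("1Mi"),
			},
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteMany},
			PersistentVolumeSource: v1.PersistentVolumeSource{
				// Placeholder address; in the example this points at
				// the NFS server's service.
				NFS: &v1.NFSVolumeSource{
					Server: "nfs-server.default.svc.cluster.local",
					Path:   "/",
				},
			},
		},
	}

	// The busybox pods mount the claim by name and never see the
	// server details; that's the indirection.
	pvc := &v1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "nfs"},
		Spec: v1.PersistentVolumeClaimSpec{
			AccessModes: []v1.PersistentVolumeAccessMode{v1.ReadWriteMany},
			VolumeName:  pv.Name, // bind explicitly to the PV above
		},
	}
	fmt.Println(pv.Name, pvc.Name)
}
```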
* Pod -> ReplicationController, which also forced me to hack around a
hostname issue on the master. (The Spark master sees the incoming slave
request addressed to spark-master and assumes it's not meant for it,
since its name is spark-master-controller-abcdef.)
* Remove service env dependencies (depend on DNS instead; see the
sketch after this list).
* JSON -> YAML.
* Add GCS connector.
* Make the example do something actually useful: implement wordcount
of all of Shakespeare's works, a familiar exercise to anyone at Google.
* Fix a minor service connection issue in the gluster example.
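On the DNS point above: with cluster DNS, a worker reaches the master
by its Service name alone instead of through generated
SPARK_MASTER_SERVICE_HOST-style env vars. A minimal sketch; the
spark-master service name and the standard Spark port 7077 are
assumptions about the example's config:

```go
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	// The Service name resolves via cluster DNS from any pod in the
	// namespace; no environment-variable plumbing required.
	addr := net.JoinHostPort("spark-master", "7077")
	conn, err := net.Dial("tcp", addr)
	if err != nil {
		fmt.Fprintln(os.Stderr, "dial:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("connected to", addr)
}
```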