hypothesis: The old userspace proxier would internally retry connections. The
new one does not. When this test comes up, the firewall might not yet be open or
something is causing a long delay and a timeout. I can't repro this failure
locally, so I am shooting in the dark. It's sort of plausible.
evidence: I can SSH into the jenkins master that is hung and I can see the hung
curl. I can run that curl by hand and it works. I can see that my shell is in
the same netns as that hung curl.
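For the record, one way to confirm that last point (a sketch; 12345 stands in for the hung curl's PID, which would come from ps):

    # Compare the network namespace of the hung curl with that of this shell.
    # Identical net:[...] inodes mean the two processes share a netns.
    readlink /proc/12345/ns/net
    readlink /proc/$$/ns/net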
Introduce examples explaining how to use DaemonSets to optimally
distribute Cassandra nodes onto each Kubernetes node in the network.
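For orientation, a DaemonSet manifest of roughly this shape is what drives the
per-node placement (a sketch only; the apiVersion, image, and ports are
assumptions, not necessarily what the committed example uses):

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: cassandra
    spec:
      template:
        metadata:
          labels:
            app: cassandra
        spec:
          containers:
          - name: cassandra
            image: gcr.io/google_containers/cassandra:v8   # placeholder image/tag
            ports:
            - containerPort: 9042
              name: cql
            - containerPort: 7000
              name: intra-node

The DaemonSet controller then runs exactly one such pod on every node, which is
what gives the one-Cassandra-node-per-Kubernetes-node layout.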
Signed-off-by: Christian Stewart <christian@paral.in>
This adds a very basic Zeppelin image that works with the existing
Spark example. As can be seen from the documentation, it has a couple
of warts:
* It requires kubectl port-forward (which is unstable across long
periods of time, at least for me, on this app; bug incoming; a usage
sketch follows this list). See
* I needed to roll my own container (none of the existing containers
exactly matched our needs, or even build anymore against modern Zeppelin
master, and the rest of the example is Spark 1.5).
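The port-forward usage is roughly the following (a sketch; the pod name and
label are assumptions about how this example is laid out):

    # Find the Zeppelin pod, then forward its UI port to localhost.
    kubectl get pods -l component=zeppelin
    kubectl port-forward zeppelin-controller-abc12 8080:8080
    # The UI is then reachable at http://localhost:8080 until the
    # port-forward session drops (hence the instability note above).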
The image itself is *huge*. One of the further refinements we need to
look at is how to possibly strip the Maven build for this container
down to just the interpreters we care about, because the deps here
are frankly ridiculous.
This might be a case where, if possible, we might want to open an
upstream request to build things dynamically, which would probably cut
the image down considerably. (This might already be possible; I need to
poke at whether you can late-bind interpreters.)
Adds an example using DaemonSets to distribute the NewRelic worker onto all nodes in a k8s cluster.
Signed-off-by: Christian Stewart <christian@paral.in>
Since this is a container service port anyways, "insecure" is a bit of
a red herring. There's no real security relevance to the incoming port
numbers for the NFS server pod.
This lets us get rid of the examples/nfs/exporter Docker build
(@jsafrane's personal image).
This ensures nfs-common is installed on GCE, and provides a more
functional explanation/example. I launched two replication controllers
so that there were busybox pods to poke around at the NFS volume, and
so that the later wget actually works (in the original example, the wget
would have had to run on the node, or have some other access to the
container network). After switching to two controllers, it actually
makes more sense to use PV claims, and that indirection is probably a
better fit for NFS anyways.
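The PV/claim indirection described above looks roughly like this (a sketch; the
server address, size, and names are placeholders rather than the committed
manifests):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: nfs
    spec:
      capacity:
        storage: 1Mi
      accessModes:
        - ReadWriteMany
      nfs:
        server: 10.0.0.2   # placeholder: cluster IP of the NFS server service
        path: "/"
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 1Mi

The busybox pods then mount the claim by name rather than hard-coding the NFS
server, which is the indirection referred to above.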
* Pod -> ReplicationController, which also forced me to hack around a
hostname issue on the master. (The Spark master sees the incoming slave
request to spark-master and assumes it's not meant for it, since its
name is spark-master-controller-abcdef.)
* Remove service env dependencies (depend on DNS instead; see the sketch
after this list).
* JSON -> YAML.
* Add GCS connector.
* Make the example do something actually useful: a familiar exercise for
anyone at Google, implementing a word count over all of Shakespeare's works.
* Fix a minor service connection issue in the gluster example.
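On the DNS point above: instead of reading SPARK_MASTER_SERVICE_HOST-style
environment variables (which are only populated for pods created after the
service exists), the workers can resolve a stable service name. A sketch, with
names and ports assumed from a typical Spark setup rather than taken from the
committed manifests:

    apiVersion: v1
    kind: Service
    metadata:
      name: spark-master        # workers connect to spark://spark-master:7077 via DNS
    spec:
      ports:
      - port: 7077
        targetPort: 7077
      selector:
        component: spark-master # matches the pods from spark-master-controller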
Fix some errors in guestbook-go README.md:
1. fix some markdown errors by removing the `<nop>` tag
2. replace some (not all of them) `containers` with `pods`
3. `gcloud comput` -> `gcloud compute`
4. improve sentences that have `list all` to make the descriptions more accurate
5. other tiny fixes
Code comments currently claim the default iscsi mount path is
kubernetes.io/pod/iscsi/<portal>-iqn-<iqn>-lun-<id>, however the actual
path being used is
kubernetes.io/iscsi/iscsi/<portal>-iqn-<iqn>-lun-<id>
This leads to the final path looking similar to this:
kubernetes.io/iscsi/iscsi/...iqn-iqn...-lun-N
Both iscsi and iqn are repeated twice for no reason, since "iqn" is
required by the spec to be part of an IQN. This is also wrong on
multiple levels, as the actual allowed naming formats are:
iqn.2001-04.com.example:storage:diskarrays-sn-a8675309
eui.02004567A425678D
(RFC 3720 3.2.6.3)
and in the second case "iqn-eui" in the path would be misleading.
Change this to the more reasonable path
kubernetes.io/iscsi/<portal>-<iqn>-lun-<id>
which also aligns with how the /dev/disk/by-path and sysfs entries
are created for iSCSI devices on Linux.
* -- *
Update iSCSI README and sample json file
There seems to have been quite a bit of skew in recent updates to these
files, adding wrong info or info that no longer lines up between the
sample config and the README.
Fixed the following issues:
* Fix a discrepancy in the sample json, which used the initiator IQN from a
previously linked example as the target IQN (which was just wrong).
* Generate sample output and README from the same json config provided.
* Remove the recommendation to edit the initiator name; this is not required
(open-iscsi warns against editing it manually and provides a utility for
the purpose).
* Update docker inspect command to one that works.
* Use separate LUNs for separate mount points instead of re-using the same
LUN (see the sketch after this list).
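For reference, the resulting sample has roughly this shape, rendered as yaml
here for brevity (a sketch; the portal, IQN, image, and paths are placeholders,
not the committed json):

    apiVersion: v1
    kind: Pod
    metadata:
      name: iscsi-example
    spec:
      containers:
      - name: app
        image: busybox          # placeholder image
        volumeMounts:
        - name: iscsi-vol-1
          mountPath: /mnt/iscsi-1
        - name: iscsi-vol-2
          mountPath: /mnt/iscsi-2
      volumes:
      - name: iscsi-vol-1
        iscsi:
          targetPortal: 10.0.0.1:3260
          iqn: iqn.2001-04.com.example:storage.disk1
          lun: 0
          fsType: ext4
      - name: iscsi-vol-2
        iscsi:
          targetPortal: 10.0.0.1:3260
          iqn: iqn.2001-04.com.example:storage.disk1
          lun: 1                # separate LUN for the second mount point
          fsType: ext4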
Flocker [1] is an open-source container data volume manager for
Dockerized applications.
This PR adds a volume plugin for Flocker.
The plugin uses the Flocker Control Service REST API [2] to
attach the volume to the pod.
Each kubelet host should run Flocker agents (Container Agent and Dataset
Agent).
The kubelet will also require environment variables that contain the
host and port of the Flocker Control Service (see the Flocker
architecture docs [3] for more):
- `FLOCKER_CONTROL_SERVICE_HOST`
- `FLOCKER_CONTROL_SERVICE_PORT`
The contribution introduces a new 'flocker' volume type to the API with
the following fields (a pod-level sketch follows the list):
- `datasetName`: the name of the dataset in Flocker, added to its
metadata;
- `size`: a human-readable number that indicates the maximum size of the
requested dataset.
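In pod terms this ends up looking roughly like the following (a sketch; the
image, mount path, and dataset name are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: flocker-example
    spec:
      containers:
      - name: app
        image: nginx                  # placeholder image
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        flocker:
          datasetName: my-flocker-vol # must match a dataset known to the control service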
Full documentation can be found in docs/user-guide/volumes.md and examples
can be found in the examples/ folder.
[1] https://clusterhq.com/flocker/introduction/
[2] https://docs.clusterhq.com/en/1.3.1/reference/api.html
[3] https://docs.clusterhq.com/en/1.3.1/concepts/architecture.html
rbd: if rbd image is not formatted, format it to the designated filesystem type
rbd: update example README.md and include instructions to get base64 encoded Ceph secret
if rbd fails to lock image, unmap the image before exiting
Signed-off-by: Huamin Chen <hchen@redhat.com>
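The base64 step mentioned above typically amounts to something like this (a
sketch; the keyring path and client name are assumptions):

    # Extract the client key from the Ceph keyring and base64-encode it for
    # use in the Kubernetes secret referenced by the rbd volume.
    grep key /etc/ceph/ceph.client.admin.keyring | awk '{print $3}' | base64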