Automatic merge from submit-queue
Cassandra examples updating images to v9
- this is a version bump for the C* image
- I also increased the CPU to 0.5 because 0.1 is painfully slow
Who can actually run the build to get the container to the examples repo?
Automatic merge from submit-queue
kube-apiserver options should be decoupled from impls
A few months ago we refactored options to keep them independent of the
implementations, so that they could be used in CLI tools to validate or
generate config without pulling in the full dependency tree of the
master. This change restores that separation by moving
server_run_options.go back into its own package.
Also, options structs should never contain non-serializable types, but
storagebackend.Config was carrying a runtime.Codec. Split the codec out.
Fix a typo in the name of the etcd2.go storage backend.
Finally, move DefaultStorageMediaType to server_run_options.
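For illustration only (this is not the code in this PR, and all names below are placeholders), a rough Go sketch of the shape this separation gives: the options struct stays plain data, and the non-serializable codec is attached at server wiring time rather than stored on the options struct:
```
// Illustrative sketch only: placeholder names, not the actual Kubernetes types.
package main

import "fmt"

// Codec stands in for runtime.Codec here.
type Codec interface {
	Encode(obj interface{}) ([]byte, error)
	Decode(data []byte) (interface{}, error)
}

// StorageOptions holds only serializable fields, so CLI tools can load and
// validate it without pulling in the master's dependency tree.
type StorageOptions struct {
	ServerList []string
	Prefix     string
	MediaType  string // e.g. the default storage media type
}

// StorageConfig is built at server start-up by pairing the plain options
// with a codec; the codec never lives on the options struct itself.
type StorageConfig struct {
	StorageOptions
	Codec Codec
}

func main() {
	opts := StorageOptions{ServerList: []string{"http://127.0.0.1:2379"}, MediaType: "application/json"}
	fmt.Printf("%+v\n", StorageConfig{StorageOptions: opts})
}
```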
@nikhiljindal as per my comment in #24454, @liggitt because you and I
discussed this last time
Automatic merge from submit-queue
Newrelic daemonset
1. base64 on Mac does not support the **wrap** option. It is easy to support both Mac and Linux by piping through **tr** to remove the newlines.
2. The DaemonSet definition does not conform to the latest schema:
> $ kubectl create -f ./newrelic-daemonset.yaml
> error validating "./newrelic-daemonset.yaml": error validating data: found invalid field privileged for v1.PodSecurityContext; if you choose to ignore these errors, turn validation off with --validate=false
Automatic merge from submit-queue
Fixed namespace name to spark-cluster
Just changed the namespace from **default** to **spark-cluster** in the spark example docs.
The guestbook-go example is broken because the latest tag of redis has
moved to Redis 3.0, which speaks a new protocol. This means that the
slaves, which are pinned to 2.0, fail with a protocol error:
```
[7] 15 May 23:37:44.403 # Can't handle RDB format version 7
[7] 15 May 23:37:44.403 # Failed trying to load the MASTER synchronization DB from disk
[7] 15 May 23:37:45.333 * Connecting to MASTER redis-master:6379
[7] 15 May 23:37:45.427 * MASTER <-> SLAVE sync started
```
In this case the app simply never persists data.
cc @luebken @Gurpartap
The codec factory should support two distinct interfaces: negotiating
for a serializer with a client, versus reading or writing data in a
storage form (etcd, disk, etc.). Make the EncodeForVersion and
DecodeToVersion methods take only Encoder and Decoder, with slight
refactoring elsewhere.
In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
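As a rough, hypothetical sketch of the storage side of that split (placeholder types, not the actual CodecFactory API): a single write serializer chosen by content type is paired with a read-anything decoder, so older JSON-stored objects can still be decoded while new writes go out in one format:
```
// Sketch only: stand-in types, not the real codec machinery.
package main

import (
	"encoding/json"
	"fmt"
)

type Encoder interface{ Encode(obj interface{}) ([]byte, error) }
type Decoder interface{ Decode(data []byte) (interface{}, error) }

// storageCodec pairs one fixed write serializer (picked by content type,
// e.g. protobuf) with a universal-style decoder that can read legacy formats.
type storageCodec struct {
	encoder Encoder
	decoder Decoder
}

func (c storageCodec) Encode(obj interface{}) ([]byte, error)  { return c.encoder.Encode(obj) }
func (c storageCodec) Decode(data []byte) (interface{}, error) { return c.decoder.Decode(data) }

// jsonSerializer is a stand-in implementation so the sketch runs.
type jsonSerializer struct{}

func (jsonSerializer) Encode(obj interface{}) ([]byte, error) { return json.Marshal(obj) }
func (jsonSerializer) Decode(data []byte) (interface{}, error) {
	var out map[string]interface{}
	err := json.Unmarshal(data, &out)
	return out, err
}

func main() {
	c := storageCodec{encoder: jsonSerializer{}, decoder: jsonSerializer{}}
	b, _ := c.Encode(map[string]interface{}{"kind": "Pod"})
	obj, _ := c.Decode(b)
	fmt.Println(obj)
}
```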
Automatic merge from submit-queue
Deleting duplicate code from federated-apiserver.Run()
This removes most of the duplicate code from federated-apiserver.Run().
The remaining code is related to storage, authz, and authn.
https://github.com/kubernetes/kubernetes/pull/24787 refactors the storage related code.
I am still figuring out authz and authn.
cc @jianhuiz
Automatic merge from submit-queue
Refactored SeedProvider and Updated Docker
This is a redo of the last PR that I munged 😄
- fixed maven build folder structure
- updated build to C* 3.4
- refactored Seed Provider - improved error handling, updated default SeedProvider code
- added start of unit tests. Not as comprehensive as I would like
- updated docker image to debian:jessie
- installed openjdk 8
- added some docker fu to make the image smaller
- updated the Docker image to C* 3.4 and updated the yaml
- updated README content; added a section about the Docker image and the SeedProvider
Have not had a chance to test the Docker image on k8s, because I do not have a local Docker registry.
NOTE: someone needs to push the Docker image into the Google repo. Not sure what the process is ... I will submit another PR with changes to the yaml files.
Automatic merge from submit-queue
Move internal types of job from pkg/apis/extensions to pkg/apis/batch
This addresses the job part of #23216 and is still WIP. Will notify once finished. I'd like to have it in before starting work on ScheduledJob.
@lavalamp @erictune fyi
Automatic merge from submit-queue
Initial draft of SeedProvider docs
Also more documentation. We need to reference the config section in the example docs. There are multiple PRs open against those docs, so at this point I do not want to make a mess.
Let me know if there is a standard docs template that will make this prettier.
Automatic merge from submit-queue
Update gb-frontend image. The new image includes the change in PR #23381.
Update to use the gcr.io/google-samples/gb-frontend:v4 image. The new image includes the change in https://github.com/kubernetes/kubernetes/pull/23381.
Add tests for watch behavior in both protocols (HTTP and WebSocket)
against all 3 media types. Adopt the
`application/vnd.kubernetes.protobuf;stream=watch` media type for the
content that comes back from a watch call so that it can be
distinguished from a Status result.
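For anyone curious what a raw client request might look like, here is a minimal, hypothetical sketch; the server address and the absence of auth are assumptions, and only the Accept media type comes from this change:
```
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical unauthenticated local apiserver address.
	req, err := http.NewRequest("GET", "http://localhost:8080/api/v1/pods?watch=true", nil)
	if err != nil {
		panic(err)
	}
	// Ask for the dedicated watch-stream media type so the streamed frames
	// can be told apart from a plain protobuf Status object.
	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf;stream=watch")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("negotiated Content-Type:", resp.Header.Get("Content-Type"))
}
```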
Automatic merge from submit-queue
Add mpio support for iscsi
This allows the iSCSI volume to check whether an iSCSI device belongs to an MPIO (multipath) device.
If it does, we make sure we mount the MPIO device instead of the raw device.
The code is based on the current FibreChannel volume's MPIO support.
Example:
/dev/disk/by-path/iqn-example.com.2999 -> /dev/sde
Then we check /sys/block/[dm-X]/slaves/xx until we find the [dm-X] containing /dev/sde, and mount it.
Additional work that can be done in the future:
1. Add multiple portal support to iSCSI
2. Move the FibreChannel volume provider to use the code that has been extracted
Heuristics:
Log in via /dev/disk/by-path/iqn-example.com.2999 -> /dev/sde
Check whether sde exists in /sys/block/[dm-X]/slaves/xx
If it does, mount /dev/[dm-X], which will show up as /dev/mapper/mpiodevicename in mount
examples/iscsi has more details
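A hedged sketch of that slave-scan heuristic (placeholder code, not the plugin implementation): given the raw device name resolved from the by-path link, look for the dm-X whose slaves directory contains it:
```
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findMultipathDevice returns "dm-X" if /sys/block/dm-X/slaves/<dev> exists,
// meaning <dev> (e.g. "sde") is a slave of that multipath device.
func findMultipathDevice(dev string) (string, bool) {
	entries, err := filepath.Glob("/sys/block/dm-*")
	if err != nil {
		return "", false
	}
	for _, dm := range entries {
		if _, err := os.Stat(filepath.Join(dm, "slaves", dev)); err == nil {
			return filepath.Base(dm), true
		}
	}
	return "", false
}

func main() {
	if dm, ok := findMultipathDevice("sde"); ok {
		// Mount /dev/[dm-X] (which appears as /dev/mapper/<name> in mount)
		// instead of the raw /dev/sde.
		fmt.Println("mount /dev/" + dm)
	} else {
		fmt.Println("no multipath parent; mount the raw device")
	}
}
```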
Automatic merge from submit-queue
Flexvolume: Add support for multiple secrets
This PR adds support for passing multiple secrets to flexvolume plugins.
To allow multiple secrets, they are now passed as:
"kubernetes.io/secret/id-rsa":"value-2\r\n\r\n","kubernetes.io/secret/id-rsa.pub":"value-1\r\n"
Automatic merge from submit-queue
phase 2 of cassandra example overhaul
Here's the next iteration in overhauling this example, towards https://github.com/kubernetes/kubernetes/issues/20961. This removes the pod adoption part, but doesn't (yet) otherwise change any of the resources used.
It also includes some README cleanup, and removes some explicit specification of labels in the rc yaml.
This PR doesn't yet add any commentary on how we're using the seed provider (re: https://github.com/kubernetes/kubernetes/issues/20961#issuecomment-190405959 etc.). Maybe we should add that.
Also: LMK if this PR should include any changes to the links out to the docs.
cc @bgrant0607 @johndmulhausen