The codec factory should support two distinct interfaces: negotiating
for a serializer with a client, versus reading or writing data to a storage
form (etcd, disk, etc.). Make the EncodeForVersion and DecodeToVersion
methods take only Encoder and Decoder, with slight refactoring elsewhere.
In the storage factory, use a content type to control what serializer to
pick, and use the universal deserializer. This ensures that storage can
read JSON (which might be from older objects) while only writing
protobuf. Add exceptions for those resources that may not be able to
write to protobuf (specifically third party resources, but potentially
others in the future).
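A minimal sketch of the read-anything/write-one-format idea; the `Object`, `Encoder`, `Decoder`, and `storageCodec` names here are illustrative stand-ins, not the actual factory API:
```go
package storage

import "io"

// Object is a stand-in for a runtime object; illustrative only.
type Object interface{}

// Encoder and Decoder are simplified stand-ins for the split codec interfaces.
type Encoder interface {
	Encode(obj Object, w io.Writer) error
}
type Decoder interface {
	Decode(data []byte) (Object, error)
}

// storageCodec writes only the chosen storage format (e.g. protobuf) but
// decodes with a universal deserializer, so older JSON objects stay readable.
type storageCodec struct {
	encoder Encoder // picked by content type, e.g. protobuf for most resources
	decoder Decoder // universal deserializer: sniffs the format from the data
}

func (c storageCodec) Encode(obj Object, w io.Writer) error { return c.encoder.Encode(obj, w) }
func (c storageCodec) Decode(data []byte) (Object, error)   { return c.decoder.Decode(data) }
```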
Automatic merge from submit-queue
Redo Unstructured to have accessor methods
Add accessor methods that implement pkg/api/unversioned.ObjectKind,
pkg/api/meta.Object, pkg/api/meta.Type and pkg/api/meta.List.
Removed the convenience fields since writing to them was not reflected
in serialized JSON.
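A rough sketch of the accessor approach, and why it round-trips correctly where the convenience fields did not: the accessors read and write the backing map directly, so the serialized JSON always reflects mutations (the type below is a simplified stand-in):
```go
package unstructured

// Unstructured holds an arbitrary JSON-decoded object; accessors read and
// write the underlying map, so changes survive re-serialization.
type Unstructured struct {
	Object map[string]interface{}
}

func (u *Unstructured) GetKind() string {
	kind, _ := u.Object["kind"].(string)
	return kind
}

func (u *Unstructured) SetKind(kind string) {
	if u.Object == nil {
		u.Object = map[string]interface{}{}
	}
	u.Object["kind"] = kind
}

func (u *Unstructured) GetAPIVersion() string {
	version, _ := u.Object["apiVersion"].(string)
	return version
}
```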
reflect.Call is fairly expensive, performing 8 allocations and having to
set up a call stack. Using a fairly straightforward-to-generate switch
statement, we can bypass that early in conversion (as long as the
function takes responsibility for invocation). We may also be able to
avoid an allocation for the conversion scope, but we are not positive yet.
```
benchmark                   old ns/op    new ns/op    delta
BenchmarkPodConversion-8    14713        12173        -17.26%

benchmark                   old allocs   new allocs   delta
BenchmarkPodConversion-8    80           72           -10.00%

benchmark                   old bytes    new bytes    delta
BenchmarkPodConversion-8    9133         8712         -4.61%
```
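The generated dispatch is roughly shaped like the following sketch (the types and function names are illustrative, not the actual generated code):
```go
package conversion

import "fmt"

// Scope is a simplified stand-in for the conversion scope.
type Scope interface{}

type PodV1 struct{ Name string }
type PodInternal struct{ Name string }

// convert dispatches on the concrete pair of types with a type switch,
// avoiding reflect.Call and its per-invocation allocations. Each case is
// responsible for invoking the conversion function itself.
func convert(in, out interface{}, s Scope) error {
	switch typed := in.(type) {
	case *PodV1:
		dst, ok := out.(*PodInternal)
		if !ok {
			break
		}
		return convertV1PodToInternalPod(typed, dst, s)
	}
	// Unknown pairs fall back to the reflection-based path.
	return fmt.Errorf("no direct conversion for %T -> %T", in, out)
}

func convertV1PodToInternalPod(in *PodV1, out *PodInternal, s Scope) error {
	out.Name = in.Name
	return nil
}
```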
Add tests for watch behavior over both protocols (http and websocket)
against all 3 media types. Adopt the
`application/vnd.kubernetes.protobuf;stream=watch` media type for the
content that comes back from a watch call so that it can be
distinguished from a Status result.
Automatic merge from submit-queue
Implement a streaming serializer for watch
Change watch over to use streaming serialization. Properly version the
watch objects. Implement simple framing for JSON and Protobuf (but not
YAML).
@wojtek-t @lavalamp
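As a rough sketch of what framing can look like for the two formats (the 4-byte length prefix and the newline delimiter below are illustrative assumptions, not necessarily the exact wire format used):
```go
package framing

import (
	"encoding/binary"
	"io"
)

// lengthDelimitedWriter frames each protobuf message with a 4-byte
// big-endian length prefix so the reader can find message boundaries.
type lengthDelimitedWriter struct{ w io.Writer }

func (f lengthDelimitedWriter) Write(data []byte) (int, error) {
	var header [4]byte
	binary.BigEndian.PutUint32(header[:], uint32(len(data)))
	if _, err := f.w.Write(header[:]); err != nil {
		return 0, err
	}
	return f.w.Write(data)
}

// newlineDelimitedWriter frames JSON documents by terminating each one with
// a newline; a streaming JSON decoder on the other end reads them back.
type newlineDelimitedWriter struct{ w io.Writer }

func (f newlineDelimitedWriter) Write(data []byte) (int, error) {
	n, err := f.w.Write(data)
	if err != nil {
		return n, err
	}
	_, err = f.w.Write([]byte("\n"))
	return n, err
}
```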
Automatic merge from submit-queue
Preserve int data when unmarshaling
There are several places we use `json.Unmarshal` into an unstructured map (StrategicMergePatch, UnstructuredJSONScheme, many others).
In this scenario, the json package converts all numbers to float64. This exposes many of the int64 fields in our API types to corruption when the unstructured map is marshaled back to JSON.
A simple example is a pod with `"activeDeadlineSeconds": 1000000`. Trying to use `kubectl label`, `annotate`, `patch`, etc. results in that int64 being converted to a float64, submitted to the server, and the server rejecting it with an error about "cannot unmarshal number 1e+6 into Go value of type int64".
The json package provides a way to defer conversion of numbers (`json.Decoder#UseNumber`), but does not actually do conversions to int or float. This PR makes use of that feature and post-processes the unmarshaled map to convert json.Number values to either int64 or float64.
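A minimal sketch of that flow, using only the standard library (helper names are illustrative):
```go
package numbers

import (
	"bytes"
	"encoding/json"
)

// unmarshalPreservingInts decodes JSON into a map while keeping integers as
// int64 instead of silently converting every number to float64.
func unmarshalPreservingInts(data []byte) (map[string]interface{}, error) {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.UseNumber() // defer number conversion; values arrive as json.Number
	var out map[string]interface{}
	if err := dec.Decode(&out); err != nil {
		return nil, err
	}
	convertNumbers(out) // maps and slices are fixed up in place
	return out, nil
}

// convertNumbers recursively converts json.Number values to int64 when
// possible, otherwise float64, inside maps and slices.
func convertNumbers(v interface{}) interface{} {
	switch val := v.(type) {
	case json.Number:
		if i, err := val.Int64(); err == nil {
			return i
		}
		if f, err := val.Float64(); err == nil {
			return f
		}
		return val
	case map[string]interface{}:
		for k, item := range val {
			val[k] = convertNumbers(item)
		}
		return val
	case []interface{}:
		for i, item := range val {
			val[i] = convertNumbers(item)
		}
		return val
	default:
		return v
	}
}
```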
Automatic merge from submit-queue
Additional go vet fixes
Mostly:
- pass lock by value
- bad syntax for struct tag value
- example functions not formatted properly
Automatic merge from submit-queue
storage.Interface KV impl. of etcd v3
This is the initial implementation of #22448.
The PR consists of two parts:
- add godep of "clientv3" and "integration" (for testing)
- create new package "etcd3" under "pkg/storage/"
  - implement KV methods of storage.Interface using etcd v3 APIs (see the sketch below)
  - Create, Set, Get, Delete, GetToList, List, GuaranteedUpdate
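A condensed sketch of how Create can be expressed with the v3 transaction API, assuming the object has already been encoded into `data` (error handling and the real storage.Interface plumbing are omitted; this is not the actual implementation):
```go
package etcd3

import (
	"context"
	"errors"

	"github.com/coreos/etcd/clientv3"
)

// create writes key only if it does not already exist, using a transaction
// that compares the key's create revision against zero.
func create(ctx context.Context, client *clientv3.Client, key string, data []byte) error {
	txn, err := client.Txn(ctx).
		If(clientv3.Compare(clientv3.CreateRevision(key), "=", 0)).
		Then(clientv3.OpPut(key, string(data))).
		Commit()
	if err != nil {
		return err
	}
	if !txn.Succeeded {
		return errors.New("key exists: " + key)
	}
	return nil
}
```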
Add a recognizer that is capable of sniffing the content type from data by
asking each serializer to try to decode it - this is for a "universal
decoder/deserializer" that can be used by client logic.
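A simplified sketch of the recognizing decoder; the interfaces below are stand-ins for the real serializer API:
```go
package recognizer

import "errors"

// Object and Decoder are simplified stand-ins for the runtime interfaces.
type Object interface{}

type Decoder interface {
	Decode(data []byte) (Object, error)
}

// recognizingDecoder tries each candidate decoder in order and returns the
// first successful result, effectively sniffing the content type from data.
type recognizingDecoder struct {
	decoders []Decoder
}

func (d recognizingDecoder) Decode(data []byte) (Object, error) {
	var lastErr error
	for _, dec := range d.decoders {
		obj, err := dec.Decode(data)
		if err == nil {
			return obj, nil
		}
		lastErr = err
	}
	if lastErr == nil {
		lastErr = errors.New("no decoders configured")
	}
	return nil, lastErr
}
```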
Add codec factory, which provides the core primitives for content type
negotiation. Codec factories depend only on schemes, serializers, and
groupversion pairs.
Break Codec into two general purpose interfaces, Encoder and Decoder,
and move parameter codec responsibilities to ParameterCodec.
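Schematically, the split separates the write path from the read path and pulls query-parameter handling into its own interface (signatures simplified for illustration; these are not the exact runtime interfaces):
```go
package codec

import (
	"io"
	"net/url"
)

// Object is a stand-in for a runtime object.
type Object interface{}

// Encoder only knows how to serialize objects to a wire format.
type Encoder interface {
	Encode(obj Object, w io.Writer) error
}

// Decoder only knows how to turn bytes back into objects.
type Decoder interface {
	Decode(data []byte) (Object, error)
}

// ParameterCodec handles conversion between URL query parameters and
// versioned API objects, which was previously mixed into the codec.
type ParameterCodec interface {
	DecodeParameters(parameters url.Values, into Object) error
	EncodeParameters(obj Object) (url.Values, error)
}
```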
Make unversioned types explicit when registering - these types go
through conversion without modification.
Switch to use "__internal" instead of "" to represent the internal
version. Future commits will also add group defaulting (so that "" is
expanded internally into a known group version, and only cleared during
set).
For embedded types like runtime.Object -> runtime.RawExtension, put the
responsibility on the caller of Decode/Encode to handle transformation
into destination serialization. Future commits will expand RawExtension
and Unknown to accept a content encoding as well as bytes.
Make Unknown a bit more powerful and use it to carry unrecognized types.
Replace many of the remaining s.Convert() invocations with direct
execution, and make generated methods public. Removes 10% of the
allocations during decode of a pod and ~20-40% of the total CPU time.
The pending codec -> conversion split changes the signature of
Encode and Decode to be more complicated. Create a stub helper
with the exact semantics of today and do the simple mechanical
refactor here to reduce the cost of that change.
Rather than an "all or nothing" approach to defining a custom conversion
function (which seems destined to cause problems eventually), this is an
attempt to make it possible to call the auto-generated code and then "fix it
up".
Specifically, suppose you have a FooBar struct. If you don't define a
conversion for FooBar, you will get a generated function like:
convert_v1_FooBar_To_api_FooBar()
Before this PR, if you define your own conversion function, you get no
generated function. After this PR you get:
autoconvert_v1_FooBar_To_api_FooBar()
...which you can call yourself in your custom function.
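In practice the hand-written conversion just delegates to the generated function and then patches up whatever the generator cannot know (a sketch; the types and fields are made up):
```go
package v1

type FooBarV1 struct{ Name string }
type FooBar struct{ Name string }

// autoconvert_v1_FooBar_To_api_FooBar stands in for what the generator
// emits: a plain field-by-field copy with no custom logic.
func autoconvert_v1_FooBar_To_api_FooBar(in *FooBarV1, out *FooBar) error {
	out.Name = in.Name
	return nil
}

// convert_v1_FooBar_To_api_FooBar is the hand-written conversion: call the
// generated code first, then "fix it up" where custom handling is needed.
func convert_v1_FooBar_To_api_FooBar(in *FooBarV1, out *FooBar) error {
	if err := autoconvert_v1_FooBar_To_api_FooBar(in, out); err != nil {
		return err
	}
	// Custom fix-up that the generator could not infer.
	if out.Name == "" {
		out.Name = "default"
	}
	return nil
}
```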
A lot of packages use StringSet, but they don't use anything else from
the util package. Moving StringSet into another package will shrink
their dependency trees significantly.
Running reflect.ValueOf(X) where X is a nil interface will return
a zero Value. We cannot get the type (because no concrete type is
known) and cannot check if the Value is nil later on due to the way
reflect.Value works. So we should handle this case by immediately
returning nil. We cannot type-assert a nil interface to another
interface type (as no concrete type is assigned), so we must add
another check to see if the returned interface is nil.
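Concretely, the first guard looks something like this simplified illustration:
```go
package conversion

import "reflect"

// typeNameOf guards against a nil interface: reflect.ValueOf(nil) returns a
// zero Value, so calling Type() on it would panic. Return early instead.
func typeNameOf(obj interface{}) string {
	v := reflect.ValueOf(obj)
	if !v.IsValid() {
		return "<nil>"
	}
	return v.Type().String()
}
```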
OpenShift uses multiple API packages (types are split) which
Kube will also eventually have as we introduce more plugins.
These changes make the generators able to handle importing different
API object packages into a single generator function.
Make clients opt in to decoding objects that are stored
in the generic api.List object by invoking runtime.DecodeList()
with a set of schemes. Makes it easier to handle unknown
schema objects because decoding is in the control of the code.
Add runtime.Unstructured, which is a simple in-memory
representation of an external object.
kubectl get can output a series of objects as a List in versioned
form, but not all API objects are available in the same schema.
Make the act of converting a []runtime.Object to api.List more
robust and add a test to verify its behavior in Get.
Makes it easier for client code to output unified objects.