Remove the deprecated `scheduler.alpha.kubernetes.io/critical-pod` pod annotation and use
the `priorityClassName` first-class field instead, setting all master components to
`system-cluster-critical`.
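For illustration, a minimal sketch of what the switch looks like when building a component pod spec with client-go types (the helper below is hypothetical, not the actual kubeadm manifest code):

```go
package manifests

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildControlPlanePod is a hypothetical helper showing the move from the
// deprecated critical-pod annotation to the first-class priorityClassName field.
func buildControlPlanePod(name string, spec v1.PodSpec) v1.Pod {
	// Before: ObjectMeta.Annotations["scheduler.alpha.kubernetes.io/critical-pod"] = ""
	// After: mark the component as critical through the pod spec itself.
	spec.PriorityClassName = "system-cluster-critical"
	return v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: metav1.NamespaceSystem,
		},
		Spec: spec,
	}
}
```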
Add test files that exclude the field in question
under KubeletConfiguration -> evictionHard on non-Linux platforms.
Add a runtime abstraction for the test files in initconfiguration_tests.go.
The current code logs an error and a full-blown backtrace if we fail to remove
the containers upon reset. This creates an unneeded, huge, and rather scary log
message. Fix that by leaving just the error message.
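A minimal sketch of the logging change (the function and message text are illustrative, not the exact kubeadm reset code):

```go
package reset

import "k8s.io/klog"

// removeContainers is a hypothetical wrapper showing the intended logging behavior.
func removeContainers(cleanup func() error) {
	if err := cleanup(); err != nil {
		// Before: klog.Errorf("[reset] failed to remove containers: %+v", err)
		// With wrapped errors, %+v dumps the full stack trace into the log.
		// After: print only the error message itself.
		klog.Errorf("[reset] failed to remove containers: %v", err)
	}
}
```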
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
Found using the script:
https://gist.github.com/dims/384dea60754042f61d79233603034038
Just run using:
`find . -name .import-restrictions | xargs python ~/junk/sanitize-import-boss.py`
The removed entries are packages that were moved, renamed, or deleted, but were
never cleaned up from the .import-restrictions files.
Change-Id: I92c400f74e6f012cc75539311ed4de280e25e918
Currently ConfigFileAndDefaultsToInternalConfig and
FetchConfigFromFileOrCluster are used to default and load an InitConfiguration
from a file or from the cluster. These two APIs do a couple of completely
separate things depending on how they are invoked. In the case of
ConfigFileAndDefaultsToInternalConfig, an InitConfiguration can either be
defaulted, with external override parameters applied, or loaded from a file.
With FetchConfigFromFileOrCluster, an InitConfiguration is loaded either from a
file or from the config map in the cluster.
The two share some functionality, but not enough code. They are also quite
difficult to use and sometimes even error prone.
To solve these issues, the following steps were taken (see the usage sketch
after the list):
- Introduce DefaultedInitConfiguration, which returns a defaulted,
version-agnostic InitConfiguration. The function takes an InitConfiguration
for overriding the defaults.
- Introduce LoadInitConfigurationFromFile, which loads, converts, validates and
defaults an InitConfiguration from file.
- Introduce FetchInitConfigurationFromCluster, which fetches the
InitConfiguration from the config map.
- Reduce, when possible, the usage of ConfigFileAndDefaultsToInternalConfig by
replacing it with DefaultedInitConfiguration or LoadInitConfigurationFromFile
invocations.
- Replace all usages of FetchConfigFromFileOrCluster with calls to
LoadInitConfigurationFromFile or FetchInitConfigurationFromCluster.
- Delete FetchConfigFromFileOrCluster as it's no longer used.
- Rename ConfigFileAndDefaultsToInternalConfig to
LoadOrDefaultInitConfiguration in order to better describe what the function
is actually doing.
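A rough usage sketch of the resulting API surface; the signatures and import paths below are approximations rather than the exact ones in the tree:

```go
package example

import (
	"os"

	clientset "k8s.io/client-go/kubernetes"
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
	kubeadmapiv1beta1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1"
	configutil "k8s.io/kubernetes/cmd/kubeadm/app/util/config"
)

// loadInitConfig illustrates when each of the new entry points is meant to be used.
func loadInitConfig(cfgPath string, client clientset.Interface, overrides *kubeadmapiv1beta1.InitConfiguration) (*kubeadmapi.InitConfiguration, error) {
	if cfgPath != "" {
		// Load, convert, validate and default an InitConfiguration from a file.
		return configutil.LoadInitConfigurationFromFile(cfgPath)
	}
	if client != nil {
		// Fetch the InitConfiguration from the kubeadm-config ConfigMap in the cluster.
		return configutil.FetchInitConfigurationFromCluster(client, os.Stdout, "example", false)
	}
	// Otherwise produce a defaulted InitConfiguration, with overrides applied on top.
	return configutil.DefaultedInitConfiguration(overrides)
}
```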
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
systemd is the recommended driver when the kubelet runs with systemd as the
init system. Add a preflight check that issues a warning if this isn't the
case.
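Presumably this refers to the container runtime's cgroup driver; a minimal sketch of a warning-only check in the shape of kubeadm's preflight checkers (the check name and the detection hook are assumptions):

```go
package preflight

import "github.com/pkg/errors"

// CgroupDriverCheck is a hypothetical warning-only check comparing the detected
// cgroup driver of the container runtime against the recommended "systemd".
type CgroupDriverCheck struct {
	// DetectDriver is assumed to return the cgroup driver reported by the runtime.
	DetectDriver func() (string, error)
}

// Name identifies the check in preflight output.
func (c CgroupDriverCheck) Name() string { return "CgroupDriver" }

// Check only ever returns warnings, so a mismatch never fails preflight.
func (c CgroupDriverCheck) Check() (warnings, errorList []error) {
	driver, err := c.DetectDriver()
	if err != nil {
		return []error{errors.Wrap(err, "could not detect the cgroup driver")}, nil
	}
	if driver != "systemd" {
		warnings = append(warnings, errors.Errorf(
			"detected %q as the cgroup driver; the recommended driver is \"systemd\"", driver))
	}
	return warnings, nil
}
```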
Currently JoinConfigFileAndDefaultsToInternalConfig does a couple of
different things depending on its parameters. It:
- loads a versioned JoinConfiguration from a YAML file.
- returns a defaulted JoinConfiguration, allowing for some overrides.
In order to make the code more manageable, the following steps are taken (see
the sketch after the list):
- Introduce LoadJoinConfigurationFromFile, which loads a versioned
JoinConfiguration from a YAML file, defaults it (both dynamically and
statically), converts it to the internal JoinConfiguration and validates it.
- Introduce DefaultedJoinConfiguration, which returns a defaulted (both
dynamically and statically) and verified internal JoinConfiguration. The
possibility of overriding defaults via a versioned JoinConfiguration is
retained.
- Re-implement JoinConfigFileAndDefaultsToInternalConfig to use
LoadJoinConfigurationFromFile and DefaultedJoinConfiguration.
- Replace some calls to JoinConfigFileAndDefaultsToInternalConfig with calls to
either LoadJoinConfigurationFromFile or DefaultedJoinConfiguration where
appropriate.
- Rename JoinConfigFileAndDefaultsToInternalConfig to the more appropriate name
LoadOrDefaultJoinConfiguration.
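In essence, the renamed function becomes a thin dispatcher over the two new helpers; roughly (the signature is an approximation of the real one):

```go
package config

import (
	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
	kubeadmapiv1beta1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1"
)

// LoadOrDefaultJoinConfiguration sketches the intended split: load from a file when
// a path is given, otherwise default, with the versioned config acting as overrides.
func LoadOrDefaultJoinConfiguration(cfgPath string, defaultversionedcfg *kubeadmapiv1beta1.JoinConfiguration) (*kubeadmapi.JoinConfiguration, error) {
	if cfgPath != "" {
		// Loads a versioned JoinConfiguration from a YAML file, defaults it (both
		// dynamically and statically), converts it to the internal type and validates it.
		return LoadJoinConfigurationFromFile(cfgPath)
	}
	// Returns a defaulted and verified internal JoinConfiguration, with the
	// defaults overridable through the versioned config.
	return DefaultedJoinConfiguration(defaultversionedcfg)
}
```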
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
DetectUnsupportedVersion is a somewhat awkward, complex, and inefficient
function to use. It takes an entire YAML document as bytes, splits it into
byte slices of the individual YAML sub-documents and their
group-version-kinds, and searches through those to detect an unsupported
kubeadm config. If such a config is detected, the function returns an error;
if it is not (i.e. in normal operation), everything done so far is discarded.
This could have been acceptable, if not for the fact that in every case where
this function is called, the YAML document bytes are split up and the GVK map
is iterated over yet again. Hence, we don't need DetectUnsupportedVersion in
its current form: it's inefficient, complex, and takes only the YAML document
bytes.
This change replaces DetectUnsupportedVersion with ValidateSupportedVersion,
which takes a GroupVersion argument and checks whether it is on the list of
unsupported config versions. If it is, an error is returned.
ValidateSupportedVersion relies on the caller to read and split the YAML
document and then iterate over its GVK map, checking whether each particular
GroupVersion is supported.
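A rough sketch of the caller-side pattern this enables; the exact signatures of ValidateSupportedVersion and the document-splitting helper are assumptions:

```go
package example

import (
	kubeadmutil "k8s.io/kubernetes/cmd/kubeadm/app/util"
	configutil "k8s.io/kubernetes/cmd/kubeadm/app/util/config"
)

// convertDocuments shows a caller that already splits the YAML document into a
// GVK map validating each GroupVersion in place, instead of handing the raw
// bytes to DetectUnsupportedVersion and having them split a second time.
func convertDocuments(allBytes []byte) error {
	// Assumed helper: returns a map of GroupVersionKind to the raw bytes of the
	// corresponding YAML sub-document.
	gvkmap, err := kubeadmutil.SplitYAMLDocuments(allBytes)
	if err != nil {
		return err
	}
	for gvk, docBytes := range gvkmap {
		// Reject config versions that are on the unsupported list.
		if err := configutil.ValidateSupportedVersion(gvk.GroupVersion()); err != nil {
			return err
		}
		_ = docBytes // conversion of the sub-document would happen here
	}
	return nil
}
```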
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
The storage version is now decided solely by
scheme.PrioritizedVersionsForGroup(). For cohabitating resources, the storage
version will be that of the overriding group, as returned by
storageFactory.getStorageGroupResource().
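For reference, a small sketch of how the prioritized-version lookup behaves (the scheme wiring here is illustrative; getStorageGroupResource is internal to the storage factory):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	scheme := runtime.NewScheme()
	// Register two versions of the same group; the priority order below is what
	// the storage factory now consults when picking the storage version.
	scheme.AddKnownTypeWithName(schema.GroupVersionKind{Group: "apps", Version: "v1", Kind: "Deployment"}, &runtime.Unknown{})
	scheme.AddKnownTypeWithName(schema.GroupVersionKind{Group: "apps", Version: "v1beta1", Kind: "Deployment"}, &runtime.Unknown{})
	if err := scheme.SetVersionPriority(
		schema.GroupVersion{Group: "apps", Version: "v1"},
		schema.GroupVersion{Group: "apps", Version: "v1beta1"},
	); err != nil {
		panic(err)
	}

	// The highest-priority version for the group comes back first.
	fmt.Println(scheme.PrioritizedVersionsForGroup("apps")[0]) // apps/v1
}
```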