start breaking up controller manager into two pieces

Addresses: kubernetes/features#88

This commit starts breaking the controller manager into two pieces, namely:
1. a cloud-provider-dependent piece
2. a cloud-provider-agnostic piece

The controller manager has the following control loops:
- nodeController
- volumeController
- routeController
- serviceController
- replicationController
- endpointController
- resourceQuotaController
- namespaceController
- deploymentController, etc.

Among the above control loops, the following are cloud-provider dependent:
- nodeController
- volumeController
- routeController
- serviceController

As Kubernetes has evolved tremendously, it has become difficult for the different cloud providers
(currently 8) to make changes and iterate quickly. Moreover, the cloud providers are constrained by
the Kubernetes build/release lifecycle. This commit is the first step towards a Kubernetes code base
where cloud-provider-specific code will move out of the core repository and be maintained by the
cloud providers themselves.

Finally, along with the controller manager, the kubelet also contains cloud-provider-specific code;
that will be addressed in a separate commit/issue.

/*
Copyright 2016 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package cloud

import (
	"context"
	"errors"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/apimachinery/pkg/util/wait"
	coreinformers "k8s.io/client-go/informers/core/v1"
	clientset "k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	v1core "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/record"
	clientretry "k8s.io/client-go/util/retry"
	cloudprovider "k8s.io/cloud-provider"
	"k8s.io/klog"
	kubeletapis "k8s.io/kubernetes/pkg/kubelet/apis"
	schedulerapi "k8s.io/kubernetes/pkg/scheduler/api"
	nodeutil "k8s.io/kubernetes/pkg/util/node"
)

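// UpdateNodeSpecBackoff is the backoff used when retrying node object updates on conflict.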
var UpdateNodeSpecBackoff = wait.Backoff{
	Steps:    20,
	Duration: 50 * time.Millisecond,
	Jitter:   1.0,
}

type CloudNodeController struct {
	nodeInformer coreinformers.NodeInformer
	kubeClient   clientset.Interface
	recorder     record.EventRecorder

	cloud cloudprovider.Interface

	nodeStatusUpdateFrequency time.Duration
}

const (
	// nodeStatusUpdateRetry controls the number of retries of writing NodeStatus update.
	nodeStatusUpdateRetry = 5

	// The amount of time the nodecontroller should sleep between retrying NodeStatus updates
	retrySleepTime = 20 * time.Millisecond
)

// NewCloudNodeController creates a CloudNodeController object
func NewCloudNodeController(
	nodeInformer coreinformers.NodeInformer,
	kubeClient clientset.Interface,
	cloud cloudprovider.Interface,
	nodeStatusUpdateFrequency time.Duration) *CloudNodeController {

	eventBroadcaster := record.NewBroadcaster()
	recorder := eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "cloud-node-controller"})
	eventBroadcaster.StartLogging(klog.Infof)
	if kubeClient != nil {
		klog.V(0).Infof("Sending events to api server.")
		eventBroadcaster.StartRecordingToSink(&v1core.EventSinkImpl{Interface: kubeClient.CoreV1().Events("")})
	} else {
		klog.V(0).Infof("No api server defined - no events will be sent to API server.")
	}

	cnc := &CloudNodeController{
		nodeInformer:              nodeInformer,
		kubeClient:                kubeClient,
		recorder:                  recorder,
		cloud:                     cloud,
		nodeStatusUpdateFrequency: nodeStatusUpdateFrequency,
	}

	// Use shared informer to listen to add/update of nodes. Note that any nodes
	// that exist before node controller starts will show up in the update method
	cnc.nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    cnc.AddCloudNode,
		UpdateFunc: cnc.UpdateCloudNode,
	})

	return cnc
}

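// The wiring below is a hedged usage sketch, not part of the original file: it illustrates how a
// cloud-controller-manager style binary might construct and start this controller. The
// sharedInformers, client, cloud, and stopCh values are hypothetical and assumed to be built elsewhere.
//
//	nodeController := NewCloudNodeController(
//		sharedInformers.Core().V1().Nodes(),
//		client,
//		cloud,
//		10*time.Second,
//	)
//	go nodeController.Run(stopCh)
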
// This controller updates newly registered nodes with information
// from the cloud provider. This call is blocking so should be called
// via a goroutine
func (cnc *CloudNodeController) Run(stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()

	// The following loops communicate with the APIServer with a worst case complexity
	// of O(num_nodes) per cycle. These functions are justified here because these events fire
	// very infrequently. DO NOT MODIFY this to perform frequent operations.

	// Start a loop to periodically update the node addresses obtained from the cloud
	wait.Until(cnc.UpdateNodeStatus, cnc.nodeStatusUpdateFrequency, stopCh)
}

// UpdateNodeStatus updates the node status, such as node addresses
func (cnc *CloudNodeController) UpdateNodeStatus() {
	instances, ok := cnc.cloud.Instances()
	if !ok {
		utilruntime.HandleError(fmt.Errorf("failed to get instances from cloud provider"))
		return
	}

	nodes, err := cnc.kubeClient.CoreV1().Nodes().List(metav1.ListOptions{ResourceVersion: "0"})
	if err != nil {
		klog.Errorf("Error monitoring node status: %v", err)
		return
	}

	for i := range nodes.Items {
		cnc.updateNodeAddress(&nodes.Items[i], instances)
	}
}

// updateNodeAddress updates the nodeAddress of a single node
func (cnc *CloudNodeController) updateNodeAddress(node *v1.Node, instances cloudprovider.Instances) {
	// Do not process nodes that are still tainted
	cloudTaint := getCloudTaint(node.Spec.Taints)
	if cloudTaint != nil {
		klog.V(5).Infof("This node %s is still tainted. Will not process.", node.Name)
		return
	}
	// Node that isn't present according to the cloud provider shouldn't have its address updated
	exists, err := ensureNodeExistsByProviderID(instances, node)
	if err != nil {
		// Continue to update node address when we are not sure whether the node exists
		klog.Errorf("%v", err)
	} else if !exists {
		klog.V(4).Infof("The node %s is no longer present according to the cloud provider, do not process.", node.Name)
		return
	}

	nodeAddresses, err := getNodeAddressesByProviderIDOrName(instances, node)
	if err != nil {
		klog.Errorf("%v", err)
		return
	}

	if len(nodeAddresses) == 0 {
		klog.V(5).Infof("Skipping node address update for node %q since cloud provider did not return any", node.Name)
		return
	}

	// Check if a hostname address exists in the cloud provided addresses
	hostnameExists := false
	for i := range nodeAddresses {
		if nodeAddresses[i].Type == v1.NodeHostName {
			hostnameExists = true
		}
	}
	// If hostname was not present in cloud provided addresses, use the hostname
	// from the existing node (populated by kubelet)
	if !hostnameExists {
		for _, addr := range node.Status.Addresses {
			if addr.Type == v1.NodeHostName {
				nodeAddresses = append(nodeAddresses, addr)
			}
		}
	}
	// If nodeIP was suggested by user, ensure that
	// it can be found in the cloud as well (consistent with the behaviour in kubelet)
	if nodeIP, ok := ensureNodeProvidedIPExists(node, nodeAddresses); ok {
		if nodeIP == nil {
			klog.Errorf("Specified Node IP not found in cloudprovider")
			return
		}
	}
	newNode := node.DeepCopy()
	newNode.Status.Addresses = nodeAddresses
	if !nodeAddressesChangeDetected(node.Status.Addresses, newNode.Status.Addresses) {
		return
	}
	_, _, err = nodeutil.PatchNodeStatus(cnc.kubeClient.CoreV1(), types.NodeName(node.Name), node, newNode)
	if err != nil {
		klog.Errorf("Error patching node with cloud ip addresses = [%v]", err)
	}
}

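// UpdateCloudNode is the update handler registered with the node informer; it re-runs cloud
// initialization for nodes that still carry the external cloud provider taint.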
func (cnc *CloudNodeController) UpdateCloudNode(_, newObj interface{}) {
	node, ok := newObj.(*v1.Node)
	if !ok {
		utilruntime.HandleError(fmt.Errorf("unexpected object type: %v", newObj))
		return
	}

	cloudTaint := getCloudTaint(node.Spec.Taints)
	if cloudTaint == nil {
		// The node has already been initialized so nothing to do.
		return
	}

	cnc.initializeNode(node)
}

// AddCloudNode handles initializing new nodes registered with the cloud taint.
func (cnc *CloudNodeController) AddCloudNode(obj interface{}) {
	node := obj.(*v1.Node)

	cloudTaint := getCloudTaint(node.Spec.Taints)
	if cloudTaint == nil {
		klog.V(2).Infof("This node %s is registered without the cloud taint. Will not process.", node.Name)
		return
	}

	cnc.initializeNode(node)
}

// initializeNode processes nodes that were added into the cluster, and initializes them
// from the cloud provider if appropriate
func (cnc *CloudNodeController) initializeNode(node *v1.Node) {
	instances, ok := cnc.cloud.Instances()
	if !ok {
		utilruntime.HandleError(fmt.Errorf("failed to get instances from cloud provider"))
		return
	}

	err := clientretry.RetryOnConflict(UpdateNodeSpecBackoff, func() error {
		// TODO(wlan0): Move this logic to the route controller using the node taint instead of condition
		// Since there are node taints, do we still need this?
		// This condition marks the node as unusable until routes are initialized in the cloud provider
		if cnc.cloud.ProviderName() == "gce" {
			if err := nodeutil.SetNodeCondition(cnc.kubeClient, types.NodeName(node.Name), v1.NodeCondition{
				Type:               v1.NodeNetworkUnavailable,
				Status:             v1.ConditionTrue,
				Reason:             "NoRouteCreated",
				Message:            "Node created without a route",
				LastTransitionTime: metav1.Now(),
			}); err != nil {
				return err
			}
		}

		curNode, err := cnc.kubeClient.CoreV1().Nodes().Get(node.Name, metav1.GetOptions{})
		if err != nil {
			return err
		}

		if curNode.Spec.ProviderID == "" {
			providerID, err := cloudprovider.GetInstanceProviderID(context.TODO(), cnc.cloud, types.NodeName(curNode.Name))
			if err == nil {
				curNode.Spec.ProviderID = providerID
			} else {
				// we should attempt to set providerID on curNode, but
				// we can continue if we fail since we will attempt to set
				// node addresses given the node name in getNodeAddressesByProviderIDOrName
				klog.Errorf("failed to set node provider id: %v", err)
			}
		}

		nodeAddresses, err := getNodeAddressesByProviderIDOrName(instances, curNode)
		if err != nil {
			return err
		}

		// If user provided an IP address, ensure that IP address is found
		// in the cloud provider before removing the taint on the node
		if nodeIP, ok := ensureNodeProvidedIPExists(curNode, nodeAddresses); ok {
			if nodeIP == nil {
				return errors.New("failed to find kubelet node IP from cloud provider")
			}
		}

		if instanceType, err := getInstanceTypeByProviderIDOrName(instances, curNode); err != nil {
			return err
		} else if instanceType != "" {
			klog.V(2).Infof("Adding node label from cloud provider: %s=%s", kubeletapis.LabelInstanceType, instanceType)
			curNode.ObjectMeta.Labels[kubeletapis.LabelInstanceType] = instanceType
		}

		if zones, ok := cnc.cloud.Zones(); ok {
			zone, err := getZoneByProviderIDOrName(zones, curNode)
			if err != nil {
				return fmt.Errorf("failed to get zone from cloud provider: %v", err)
			}
			if zone.FailureDomain != "" {
				klog.V(2).Infof("Adding node label from cloud provider: %s=%s", kubeletapis.LabelZoneFailureDomain, zone.FailureDomain)
				curNode.ObjectMeta.Labels[kubeletapis.LabelZoneFailureDomain] = zone.FailureDomain
			}
			if zone.Region != "" {
				klog.V(2).Infof("Adding node label from cloud provider: %s=%s", kubeletapis.LabelZoneRegion, zone.Region)
				curNode.ObjectMeta.Labels[kubeletapis.LabelZoneRegion] = zone.Region
			}
		}

		curNode.Spec.Taints = excludeCloudTaint(curNode.Spec.Taints)

		_, err = cnc.kubeClient.CoreV1().Nodes().Update(curNode)
		if err != nil {
			return err
		}
		// After adding, call updateNodeAddress to set the CloudProvider-provided IPAddresses
		// so that users do not see any significant delay in IP addresses being filled into the node
		cnc.updateNodeAddress(curNode, instances)
		return nil
	})
	if err != nil {
		utilruntime.HandleError(err)
		return
	}

	klog.Infof("Successfully initialized node %s with cloud provider", node.Name)
}

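// getCloudTaint returns the external cloud provider taint if it is present in the given taints, or nil otherwise.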
func getCloudTaint(taints []v1.Taint) *v1.Taint {
	for _, taint := range taints {
		if taint.Key == schedulerapi.TaintExternalCloudProvider {
			return &taint
		}
	}
	return nil
}

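// excludeCloudTaint returns the given taints with the external cloud provider taint filtered out.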
func excludeCloudTaint(taints []v1.Taint) []v1.Taint {
	newTaints := []v1.Taint{}
	for _, taint := range taints {
		if taint.Key == schedulerapi.TaintExternalCloudProvider {
			continue
		}
		newTaints = append(newTaints, taint)
	}
	return newTaints
}

// ensureNodeExistsByProviderID checks if the instance exists by the provider id.
// If the provider id in the spec is empty, it calls InstanceID with the node name to get the provider id.
func ensureNodeExistsByProviderID(instances cloudprovider.Instances, node *v1.Node) (bool, error) {
	providerID := node.Spec.ProviderID
	if providerID == "" {
		var err error
		providerID, err = instances.InstanceID(context.TODO(), types.NodeName(node.Name))
		if err != nil {
			if err == cloudprovider.InstanceNotFound {
				return false, nil
			}
			return false, err
		}

		if providerID == "" {
			klog.Warningf("Cannot find valid providerID for node name %q, assuming non existence", node.Name)
			return false, nil
		}
	}

	return instances.InstanceExistsByProviderID(context.TODO(), providerID)
}

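// getNodeAddressesByProviderIDOrName fetches the node's addresses by provider ID,
// falling back to a lookup by node name if that fails.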
func getNodeAddressesByProviderIDOrName(instances cloudprovider.Instances, node *v1.Node) ([]v1.NodeAddress, error) {
	nodeAddresses, err := instances.NodeAddressesByProviderID(context.TODO(), node.Spec.ProviderID)
	if err != nil {
		providerIDErr := err
		nodeAddresses, err = instances.NodeAddresses(context.TODO(), types.NodeName(node.Name))
		if err != nil {
			return nil, fmt.Errorf("NodeAddress: Error fetching by providerID: %v Error fetching by NodeName: %v", providerIDErr, err)
		}
	}
	return nodeAddresses, nil
}

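// nodeAddressesChangeDetected reports whether the two address sets differ.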
func nodeAddressesChangeDetected(addressSet1, addressSet2 []v1.NodeAddress) bool {
	if len(addressSet1) != len(addressSet2) {
		return true
	}
	addressMap1 := map[v1.NodeAddressType]string{}
	addressMap2 := map[v1.NodeAddressType]string{}

	for i := range addressSet1 {
		addressMap1[addressSet1[i].Type] = addressSet1[i].Address
		addressMap2[addressSet2[i].Type] = addressSet2[i].Address
	}

	for k, v := range addressMap1 {
		if addressMap2[k] != v {
			return true
		}
	}
	return false
}

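// ensureNodeProvidedIPExists looks up the node IP provided via the kubelet annotation in the
// cloud-reported addresses; the returned bool reports whether the annotation was set.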
func ensureNodeProvidedIPExists(node *v1.Node, nodeAddresses []v1.NodeAddress) (*v1.NodeAddress, bool) {
	var nodeIP *v1.NodeAddress
	nodeIPExists := false
	if providedIP, ok := node.ObjectMeta.Annotations[kubeletapis.AnnotationProvidedIPAddr]; ok {
		nodeIPExists = true
		for i := range nodeAddresses {
			if nodeAddresses[i].Address == providedIP {
				nodeIP = &nodeAddresses[i]
				break
			}
		}
	}
	return nodeIP, nodeIPExists
}

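// getInstanceTypeByProviderIDOrName fetches the node's instance type by provider ID,
// falling back to a lookup by node name if that fails.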
func getInstanceTypeByProviderIDOrName(instances cloudprovider.Instances, node *v1.Node) (string, error) {
	instanceType, err := instances.InstanceTypeByProviderID(context.TODO(), node.Spec.ProviderID)
	if err != nil {
		providerIDErr := err
		instanceType, err = instances.InstanceType(context.TODO(), types.NodeName(node.Name))
		if err != nil {
			return "", fmt.Errorf("InstanceType: Error fetching by providerID: %v Error fetching by NodeName: %v", providerIDErr, err)
		}
	}
	return instanceType, err
}

// getZoneByProviderIDOrName will attempt to get the zone of a node using its providerID,
// then its name. If both attempts fail, an error is returned.
func getZoneByProviderIDOrName(zones cloudprovider.Zones, node *v1.Node) (cloudprovider.Zone, error) {
	zone, err := zones.GetZoneByProviderID(context.TODO(), node.Spec.ProviderID)
	if err != nil {
		providerIDErr := err
		zone, err = zones.GetZoneByNodeName(context.TODO(), types.NodeName(node.Name))
		if err != nil {
			return cloudprovider.Zone{}, fmt.Errorf("Zone: Error fetching by providerID: %v Error fetching by NodeName: %v", providerIDErr, err)
		}
	}

	return zone, nil
}