If the container status exists but the container hasn't been created yet, it won't have an ID.
Have the probe wait for a valid status if the container ID is not yet set; otherwise, you'll see the
following cryptic log message from runtime.go: `invalid container ID: ""`.
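A minimal sketch of that guard, using stand-in types rather than the actual kubelet API:

```go
// Hypothetical sketch of the guard described above; ContainerStatus and
// shouldProbe are stand-ins, not the real kubelet types.
package main

import "fmt"

type ContainerStatus struct {
	Name string
	ID   string // empty until the runtime has actually created the container
}

// shouldProbe returns false while the container has a status entry but no
// ID yet, so the prober waits instead of logging `invalid container ID: ""`.
func shouldProbe(status *ContainerStatus) bool {
	if status == nil || status.ID == "" {
		return false // container not created yet; try again later
	}
	return true
}

func main() {
	pending := &ContainerStatus{Name: "app"} // status exists, no ID yet
	fmt.Println(shouldProbe(pending))        // false: wait for a valid status
	pending.ID = "docker://abc123"
	fmt.Println(shouldProbe(pending))        // true: safe to probe
}
```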
bridge-nf-call-iptables appears to be relevant only when containers are
attached to a Linux bridge, which is usually the case with default Kubernetes
setups, Docker, and flannel. It ensures that container traffic is actually
subject to the iptables rules, since that traffic traverses a Linux bridge
and bridged traffic is only subject to iptables when bridge-nf-call-iptables=1.
But with other networking solutions (like openshift-sdn) that don't use Linux
bridges, bridge-nf-call-iptables may not be relevant, because iptables is
invoked at other points that don't involve a Linux bridge.
The decision to set bridge-nf-call-iptables should be influenced by networking
plugins, so push the responsibility out to them. If no network plugin is
specified, fall back to the existing bridge-nf-call-iptables=1 behavior.
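A rough sketch of that fallback, assuming a hypothetical NetworkPlugin hook; only the /proc/sys/net/bridge/bridge-nf-call-iptables path is the real kernel knob:

```go
// Sketch of pushing the sysctl decision out to a network plugin; the
// NetworkPlugin interface and its lookup are hypothetical stand-ins.
package main

import (
	"fmt"
	"os"
)

const bridgeNFCallIptables = "/proc/sys/net/bridge/bridge-nf-call-iptables"

// NetworkPlugin is a hypothetical hook: a plugin whose traffic never
// crosses a Linux bridge (e.g. openshift-sdn) can simply skip the sysctl.
type NetworkPlugin interface {
	Name() string
	Init() error // lets the plugin configure host networking as it sees fit
}

// enableBridgeNFCallIptables writes 1 to the kernel knob so bridged
// container traffic is passed to iptables.
func enableBridgeNFCallIptables() error {
	return os.WriteFile(bridgeNFCallIptables, []byte("1"), 0644)
}

func setupNetworking(plugin NetworkPlugin) error {
	if plugin != nil {
		return plugin.Init() // plugin decides whether the sysctl matters
	}
	// No plugin specified: keep the existing bridge-nf-call-iptables=1 behavior.
	return enableBridgeNFCallIptables()
}

func main() {
	if err := setupNetworking(nil); err != nil {
		fmt.Fprintln(os.Stderr, "failed to set bridge-nf-call-iptables:", err)
	}
}
```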
Add aws cloud config:
[global]
disableSecurityGroupIngress = true
The aws provider creates an inbound rule per load balancer on the node
security group. However, this can quickly hit the AWS limit of 50 rules
per security group.
This disables the automatic ingress creation. It requires that the user
has set up a rule that allows inbound traffic on kubelet ports from the
local VPC subnet (so load balancers can access it), e.g. `10.82.0.0/16
30000-32000`.
Limits: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html#vpc-limits-security-groups
Authors: @jsravn, @balooo
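A rough sketch of how the flag could be honored; CloudConfig, createSecurityGroupIngress, and ensureLoadBalancerIngress are illustrative names, not the provider's actual API:

```go
// Illustrative sketch only; the real provider parses its INI-style cloud
// config and manages EC2 security groups through the AWS API.
package main

import "fmt"

// CloudConfig mirrors the [global] section shown above.
type CloudConfig struct {
	Global struct {
		DisableSecurityGroupIngress bool
	}
}

func createSecurityGroupIngress(loadBalancer string) error {
	fmt.Println("adding inbound rule for", loadBalancer)
	return nil
}

// ensureLoadBalancerIngress skips rule creation entirely when the operator
// has opted out, relying instead on a pre-provisioned VPC-wide rule
// (e.g. 10.82.0.0/16 on ports 30000-32000).
func ensureLoadBalancerIngress(cfg *CloudConfig, lb string) error {
	if cfg.Global.DisableSecurityGroupIngress {
		return nil // operator manages node security group rules manually
	}
	return createSecurityGroupIngress(lb)
}

func main() {
	var cfg CloudConfig
	cfg.Global.DisableSecurityGroupIngress = true
	_ = ensureLoadBalancerIngress(&cfg, "my-elb") // no rule is created
}
```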
When finding an instance by node name in AWS, only retrieve running
instances. Otherwise, terminated old nodes can show up with the same
tag when nodes in the cluster are rebuilt.
Another improvement is to filter instances by the node names provided,
rather than selecting all instances and filtering in code.
Authors: @jsravn, @chbatey, @balooo
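A sketch of that server-side filtering using aws-sdk-go v1; mapping node names to private DNS names is an assumption here, and the real provider may match on a tag instead:

```go
// Sketch of querying EC2 for only the named, running instances instead
// of listing everything and filtering in code.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func findInstancesByNodeNames(svc *ec2.EC2, nodeNames []string) ([]*ec2.Instance, error) {
	input := &ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			// Exclude terminated (and otherwise non-running) instances so
			// stale nodes with the same tag don't reappear after a rebuild.
			{Name: aws.String("instance-state-name"), Values: aws.StringSlice([]string{"running"})},
			// Assumption: node names correspond to private DNS names.
			{Name: aws.String("private-dns-name"), Values: aws.StringSlice(nodeNames)},
		},
	}
	out, err := svc.DescribeInstances(input)
	if err != nil {
		return nil, err
	}
	var instances []*ec2.Instance
	for _, res := range out.Reservations {
		instances = append(instances, res.Instances...)
	}
	return instances, nil
}

func main() {
	sess := session.Must(session.NewSession())
	svc := ec2.New(sess)
	instances, err := findInstancesByNodeNames(svc, []string{"ip-10-82-0-12.ec2.internal"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found", len(instances), "running instances")
}
```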