Getting started on Amazon EC2 with CoreOS
The example below creates an elastic Kubernetes cluster with a custom number of worker nodes and a master.
Warning: contrary to the supported procedure, the examples below provision Kubernetes with an insecure API server (plain HTTP, no security tokens, no basic auth). For demonstration purposes only.
Highlights
- Cluster bootstrapping using cloud-config
- Cross container networking with flannel
- Auto worker registration with kube-register
- Kubernetes v0.19.3 official binaries
Prerequisites
The examples below assume you have the aws command-line tool installed and configured, and kubectl available locally.
Starting a Cluster
CloudFormation
The cloudformation-template.json can be used to bootstrap a Kubernetes cluster with a single command:
aws cloudformation create-stack --stack-name kubernetes --region us-west-2 \
--template-body file://aws/cloudformation-template.json \
--parameters ParameterKey=KeyPair,ParameterValue=<keypair> \
ParameterKey=ClusterSize,ParameterValue=<cluster_size> \
ParameterKey=VpcId,ParameterValue=<vpc_id> \
ParameterKey=SubnetId,ParameterValue=<subnet_id> \
ParameterKey=SubnetAZ,ParameterValue=<subnet_az>
It will take a few minutes for the entire stack to come up. You can monitor the stack progress with the following command:
aws cloudformation describe-stack-events --stack-name kubernetes
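If you prefer to block until the stack is fully created, the aws CLI also provides a wait subcommand (a convenience, assuming your CLI version includes it):
aws cloudformation wait stack-create-complete --stack-name kubernetes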
Record the Kubernetes Master IP address:
aws cloudformation describe-stacks --stack-name kubernetes
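The describe-stacks output is verbose; if the template exposes the master address as a stack output, a JMESPath --query can print it directly. The output key below, KubernetesMasterPublicIp, is a hypothetical name; check the Outputs section of your template for the real one:
aws cloudformation describe-stacks --stack-name kubernetes \
  --query 'Stacks[0].Outputs[?OutputKey==`KubernetesMasterPublicIp`].OutputValue' \
  --output text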
Then skip ahead to the Configure the kubectl SSH tunnel section below.
AWS CLI
The following commands use the latest CoreOS alpha AMI for the us-west-2 region. For a list of other regions and their corresponding AMI IDs, see the CoreOS EC2 cloud provider documentation.
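The latest alpha AMI can also be looked up from the command line. This is a sketch that assumes CoreOS publishes its images under names matching CoreOS-alpha-*; verify the result against the official list before using it:
aws ec2 describe-images --region us-west-2 \
  --filters 'Name=name,Values=CoreOS-alpha-*' 'Name=virtualization-type,Values=hvm' \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' --output text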
Create the Kubernetes Security Group
aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
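The last rule permits unrestricted traffic between members of the kubernetes security group, which the cluster relies on for cross-node communication (including the flannel overlay network).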
Save the master.yaml and node.yaml cloud-configs locally; the run-instances commands below read them from the current directory.
Launch the master
Attention: replace <ami_image_id> below with a suitable CoreOS AMI ID for AWS.
aws ec2 run-instances --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://master.yaml
Record the InstanceId for the master.
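To avoid fishing the ID out of the JSON by hand, the launch command can print it directly (the same command as above, plus a JMESPath query):
aws ec2 run-instances --image-id <ami_image_id> --key-name <keypair> \
  --region us-west-2 --security-groups kubernetes --instance-type m3.medium \
  --user-data file://master.yaml \
  --query 'Instances[0].InstanceId' --output text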
Gather the public and private IPs for the master node:
aws ec2 describe-instances --instance-ids <instance-id>
{
    "Reservations": [
        {
            "Instances": [
                {
                    "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com",
                    "RootDeviceType": "ebs",
                    "State": {
                        "Code": 16,
                        "Name": "running"
                    },
                    "PublicIpAddress": "54.68.97.117",
                    "PrivateIpAddress": "172.31.9.9",
                    ...
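Alternatively, a JMESPath query pulls both addresses out in one line:
aws ec2 describe-instances --instance-ids <instance-id> \
  --query 'Reservations[0].Instances[0].[PublicIpAddress,PrivateIpAddress]' \
  --output text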
Update the node.yaml cloud-config
Edit node.yaml and replace all instances of <master-private-ip> with the private IP address of the master node.
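A one-line substitution with sed, using the example private IP from the output above (GNU sed shown; on OS X use sed -i '' instead):
sed -i 's/<master-private-ip>/172.31.9.9/g' node.yaml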
Launch 3 worker nodes
Attention: replace <ami_image_id> below with a suitable CoreOS AMI ID for AWS.
aws ec2 run-instances --count 3 --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://node.yaml
Add additional worker nodes
Attention: replace <ami_image_id> below with a suitable CoreOS AMI ID for AWS.
aws ec2 run-instances --count 1 --image-id <ami_image_id> --key-name <keypair> \
--region us-west-2 --security-groups kubernetes --instance-type m3.medium \
--user-data file://node.yaml
Configure the kubectl SSH tunnel
This command enables secure communication between the kubectl client and the Kubernetes API.
ssh -f -nNT -L 8080:127.0.0.1:8080 core@<master-public-ip>
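For reference: -f backgrounds ssh after authentication, -n redirects stdin from /dev/null, -N skips running a remote command, -T disables pseudo-terminal allocation, and -L 8080:127.0.0.1:8080 forwards local port 8080 to the API server's insecure port on the master's loopback interface. Since kubectl of this vintage talks to http://localhost:8080 by default, no further client configuration is needed once the tunnel is up.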
Listing worker nodes
Once the worker instances have fully booted, the kube-register service running on the master node will automatically register them with the Kubernetes API server. This may take a few minutes.
kubectl get nodes
Starting a simple pod
Create a pod manifest: pod.json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "hello",
    "labels": {
      "name": "hello",
      "environment": "testing"
    }
  },
  "spec": {
    "containers": [{
      "name": "hello",
      "image": "quay.io/kelseyhightower/hello",
      "ports": [{
        "containerPort": 80,
        "hostPort": 80
      }]
    }]
  }
}
Create the pod using the kubectl command-line tool:
kubectl create -f ./pod.json
Testing
kubectl get pods
Record the Host of the pod; it should be the private IP address of the worker node the pod was scheduled onto.
Gather the public IP address for the worker node:
aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>'
{
    "Reservations": [
        {
            "Instances": [
                {
                    "PublicDnsName": "ec2-54-68-97-117.us-west-2.compute.amazonaws.com",
                    "RootDeviceType": "ebs",
                    "State": {
                        "Code": 16,
                        "Name": "running"
                    },
                    "PublicIpAddress": "54.68.97.117",
                    ...
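As before, a JMESPath query prints just the address:
aws ec2 describe-instances --filters 'Name=private-ip-address,Values=<host>' \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text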
Visit the public IP address in your browser to view the running pod.
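The same check works from the command line (the <worker-public-ip> placeholder stands for the address gathered above):
curl http://<worker-public-ip>/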
Delete the pod
kubectl delete pods hello