aws (Study Notes, Lesson 28)

- Using aws eks

Topics:

- What is aws eks
- aws eks hands-on
- Creating an application on aws eks
- Introduction to eks and kubernetes
1. Using aws eks

- What is aws eks

  The concept of aws eks

  aws eks is a way of packaging Kubernetes on aws, designed to make it easier to use Kubernetes on aws in combination with other aws services. In practice, aws eks is a managed service on aws. As the servers that actually run the containers, either EC2 or Fargate can be used.

  The difference between aws eks and ECS

  The difference lies in the orchestration tool. aws eks uses Kubernetes as its orchestration tool, so if you already run Kubernetes on premises, the same architecture can be reused on aws. ECS uses aws's own proprietary orchestration tool, which can only be used on aws.

- What is an orchestration tool

  An orchestration tool coordinates and manages system resources, services, and applications so that they run efficiently and reliably. Such tools are typically used to automate and optimize the allocation and management of resources, especially in cloud computing and containerized environments.
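As a concrete illustration of what an orchestration tool does, the sketch below declares a desired state with kubectl and lets Kubernetes converge to it. This is a minimal sketch: the Deployment name and image are hypothetical examples, and it assumes access to a running cluster.

```shell
# Declare the desired state: 3 replicas of an nginx container.
# Kubernetes (the orchestration tool) schedules the pods onto worker
# nodes and restarts or reschedules them automatically on failure.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web        # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
EOF

# Verify that the orchestrator created the desired replicas
kubectl get deployment demo-web
```

The point is the declarative model: you describe *what* should run, and the orchestration tool decides *where* and keeps it running.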
The architecture of aws eks

Sample programs on aws

aws provides sample programs that can be used to practice eks.

eks sample programs
2. aws eks hands-on

- Preparing the environment (software installation)

  - EC2 for practice

    Here we again use the convenient cloudshell for practice.

    Check the aws cli version:

    ```shell
    aws --version
    ```

  - Installing eksctl

    ```shell
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    sudo mv /tmp/eksctl /usr/local/bin
    eksctl version
    ```

  - Installing kubectl

    If you use cloudshell, kubectl does not need to be installed; if you use EC2, run the following commands.

    ```shell
    curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.8/2020-09-18/bin/linux/amd64/kubectl
    chmod +x ./kubectl
    sudo mv ./kubectl /usr/local/bin
    kubectl version --client
    ```
- Setting up the required permissions (creating roles)

  Creating eksClusterRole

  This role grants the eks service the permissions it needs. The role name is eksClusterRole.

  Save the template to a file, then create the role with cloudformation.

  ```yaml
  AWSTemplateFormatVersion: '2010-09-09'
  Description: 'Amazon EKS Cluster Role'
  Resources:
    eksClusterRole:
      Type: 'AWS::IAM::Role'
      Properties:
        AssumeRolePolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Principal:
                Service:
                  - eks.amazonaws.com
              Action:
                - sts:AssumeRole
        ManagedPolicyArns:
          - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
  Outputs:
    RoleArn:
      Description: 'The role that Amazon EKS will use to create AWS resources for Kubernetes clusters'
      Value: !GetAtt eksClusterRole.Arn
      Export:
        Name: !Sub '${AWS::StackName}-RoleArn'
  ```
Creating eks-nodegroup-role

We also need to create the role with the permissions the worker nodes require; this template is provided by aws.

-> aws work node role link

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Amazon EKS - Node Group Role

Mappings:
  ServicePrincipals:
    aws-cn:
      ec2: ec2.amazonaws.com.cn
    aws-us-gov:
      ec2: ec2.amazonaws.com
    aws:
      ec2: ec2.amazonaws.com

Resources:
  NodeInstanceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - !FindInMap [ServicePrincipals, !Ref "AWS::Partition", ec2]
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEKS_CNI_Policy"
        - !Sub "arn:${AWS::Partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      Path: /

Outputs:
  NodeInstanceRole:
    Description: The node instance role
    Value: !GetAtt NodeInstanceRole.Arn
```
- Creating the vpc for eks

  - Using cloudformation to create the vpc

    aws provides sample code for the json/yaml that creates this vpc. -> sample template for the eks vpc

    Note that the subnet AvailabilityZone values are hardcoded here; without hardcoding them, errors kept occurring, so this is a provisional workaround.

    ```yaml
    ---
    AWSTemplateFormatVersion: '2010-09-09'
    Description: 'Amazon EKS Sample VPC - Public subnets only'

    Parameters:
      VpcBlock:
        Type: String
        Default: 192.168.0.0/16
        Description: The CIDR range for the VPC. This should be a valid private (RFC 1918) CIDR range.
      Subnet01Block:
        Type: String
        Default: 192.168.64.0/18
        Description: CidrBlock for subnet 01 within the VPC
      Subnet02Block:
        Type: String
        Default: 192.168.128.0/18
        Description: CidrBlock for subnet 02 within the VPC
      Subnet03Block:
        Type: String
        Default: 192.168.192.0/18
        Description: CidrBlock for subnet 03 within the VPC. This is used only if the region has more than 2 AZs.

    Metadata:
      AWS::CloudFormation::Interface:
        ParameterGroups:
          - Label:
              default: "Worker Network Configuration"
            Parameters:
              - VpcBlock
              - Subnet01Block
              - Subnet02Block
              - Subnet03Block

    Conditions:
      Has2Azs:
        Fn::Or:
          - Fn::Equals:
              - {Ref: 'AWS::Region'}
              - ap-south-1
          - Fn::Equals:
              - {Ref: 'AWS::Region'}
              - ap-northeast-2
          - Fn::Equals:
              - {Ref: 'AWS::Region'}
              - ca-central-1
          - Fn::Equals:
              - {Ref: 'AWS::Region'}
              - cn-north-1
          - Fn::Equals:
              - {Ref: 'AWS::Region'}
              - sa-east-1
          - Fn::Equals:
              - {Ref: 'AWS::Region'}
              - us-west-1
      HasMoreThan2Azs:
        Fn::Not:
          - Condition: Has2Azs

    Resources:
      VPC:
        Type: AWS::EC2::VPC
        Properties:
          CidrBlock: !Ref VpcBlock
          EnableDnsSupport: true
          EnableDnsHostnames: true
          Tags:
            - Key: Name
              Value: !Sub '${AWS::StackName}-VPC'

      InternetGateway:
        Type: "AWS::EC2::InternetGateway"

      VPCGatewayAttachment:
        Type: "AWS::EC2::VPCGatewayAttachment"
        Properties:
          InternetGatewayId: !Ref InternetGateway
          VpcId: !Ref VPC

      RouteTable:
        Type: AWS::EC2::RouteTable
        Properties:
          VpcId: !Ref VPC
          Tags:
            - Key: Name
              Value: Public Subnets
            - Key: Network
              Value: Public

      Route:
        DependsOn: VPCGatewayAttachment
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: !Ref RouteTable
          DestinationCidrBlock: 0.0.0.0/0
          GatewayId: !Ref InternetGateway

      Subnet01:
        Type: AWS::EC2::Subnet
        Metadata:
          Comment: Subnet 01
        Properties:
          MapPublicIpOnLaunch: true
          AvailabilityZone: ap-northeast-1a
          CidrBlock:
            Ref: Subnet01Block
          VpcId:
            Ref: VPC
          Tags:
            - Key: Name
              Value: !Sub "${AWS::StackName}-Subnet01"
            - Key: kubernetes.io/role/elb
              Value: 1

      Subnet02:
        Type: AWS::EC2::Subnet
        Metadata:
          Comment: Subnet 02
        Properties:
          MapPublicIpOnLaunch: true
          AvailabilityZone: ap-northeast-1c
          CidrBlock:
            Ref: Subnet02Block
          VpcId:
            Ref: VPC
          Tags:
            - Key: Name
              Value: !Sub "${AWS::StackName}-Subnet02"
            - Key: kubernetes.io/role/elb
              Value: 1

      Subnet03:
        Condition: HasMoreThan2Azs
        Type: AWS::EC2::Subnet
        Metadata:
          Comment: Subnet 03
        Properties:
          MapPublicIpOnLaunch: true
          AvailabilityZone: ap-northeast-1d
          CidrBlock:
            Ref: Subnet03Block
          VpcId:
            Ref: VPC
          Tags:
            - Key: Name
              Value: !Sub "${AWS::StackName}-Subnet03"
            - Key: kubernetes.io/role/elb
              Value: 1

      Subnet01RouteTableAssociation:
        Type: AWS::EC2::SubnetRouteTableAssociation
        Properties:
          SubnetId: !Ref Subnet01
          RouteTableId: !Ref RouteTable

      Subnet02RouteTableAssociation:
        Type: AWS::EC2::SubnetRouteTableAssociation
        Properties:
          SubnetId: !Ref Subnet02
          RouteTableId: !Ref RouteTable

      Subnet03RouteTableAssociation:
        Condition: HasMoreThan2Azs
        Type: AWS::EC2::SubnetRouteTableAssociation
        Properties:
          SubnetId: !Ref Subnet03
          RouteTableId: !Ref RouteTable

      ControlPlaneSecurityGroup:
        Type: AWS::EC2::SecurityGroup
        Properties:
          GroupDescription: Cluster communication with worker nodes
          VpcId: !Ref VPC

    Outputs:
      SubnetIds:
        Description: All subnets in the VPC
        Value:
          Fn::If:
            - HasMoreThan2Azs
            - !Join [",", [!Ref Subnet01, !Ref Subnet02, !Ref Subnet03]]
            - !Join [",", [!Ref Subnet01, !Ref Subnet02]]
      SecurityGroups:
        Description: Security group for the cluster control plane communication with worker nodes
        Value: !Join [",", [!Ref ControlPlaneSecurityGroup]]
      VpcId:
        Description: The VPC Id
        Value: !Ref VPC
    ```
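The VPC template can be deployed the same way as the role templates; the sketch below also reads back the stack outputs, which are needed when creating the cluster. Stack and file names here are assumptions.

```shell
# Deploy the sample VPC template (stack/file names are examples)
aws cloudformation create-stack \
  --stack-name eks-vpc \
  --template-body file://eks-vpc.yaml
aws cloudformation wait stack-create-complete --stack-name eks-vpc

# The SubnetIds and SecurityGroups outputs are referenced
# when creating the eks cluster in the next step
aws cloudformation describe-stacks --stack-name eks-vpc \
  --query 'Stacks[0].Outputs' --output table
```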
- Creating the eks cluster in that vpc

  - Choose custom configuration

    - Set the cluster name and role

      Here the name is set to eks-cluster, and for the role select the eksClusterRole created above.

    - Set the IAM role for cluster access

      Here an ec2-role with administrator permissions is configured.

      Setting the ec2-role permissions

    - Set the vpc

      Here select the vpc created earlier.

    - Cluster endpoint access setting

      Set this to public.

    - Wait for the cluster to be created

      This takes roughly 10 minutes.
- Creating a kubeconfig for the eks cluster

  - Create the kubeconfig file

    Without a kubeconfig file, the eks cluster cannot be accessed.

    ```shell
    aws eks --region ap-northeast-1 update-kubeconfig --name eks-cluster
    ```

- Trying to connect to the eks cluster

  - Connection test

    ```shell
    kubectl get svc
    ```
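If the connection test fails, it can help to confirm which context the kubeconfig now points at. A quick sketch, assuming only the kubeconfig written by update-kubeconfig above:

```shell
# Show the context that update-kubeconfig selected
kubectl config current-context

# Confirm the eks API server endpoint is reachable
kubectl cluster-info
```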
- Creating worker nodes in the eks cluster

  - Launch the worker node group

    - Set the worker node group name and IAM role

      The role set here is the node group role created earlier. The name is set to work-node-group.

    - Set the ec2 instance type, node count, and disk size

    - Wait for the worker node group to be created; this takes about 5 minutes

    - Check ec2

      You can see that three ec2 instances have been created here.
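Besides checking the EC2 console, the worker nodes can also be confirmed from the Kubernetes side, assuming the kubeconfig created earlier:

```shell
# Each EC2 instance in the node group should appear as a node
# in Ready status once it has joined the cluster
kubectl get nodes -o wide
```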
3. Creating an application on aws eks

- Deploying the redis database

  - Use the official sample program from github below

    redis-master-controller

    ```shell
    git clone https://github.com/kubernetes/examples.git
    cd examples/guestbook-go
    ```

  - Then, inside the examples folder, start the redis controller

    ```shell
    kubectl apply -f redis-master-controller.yaml
    ```

  - Start the redis service

    ```shell
    kubectl apply -f redis-master-service.yaml
    ```

- Deploying the guestbook service

  - Start the guestbook

    ```shell
    kubectl apply -f guestbook-controller.yaml
    kubectl apply -f guestbook-service.yaml
    ```

  - Check the external ip

    ```shell
    kubectl get pod,svc -o wide
    ```

    The output shows that the external ip was not generated successfully, which means there is still a problem. Checking the worker node group shows an issue after creation; this is still being investigated.
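When an EXTERNAL-IP stays pending like this, a few checks can narrow down the cause. This is a troubleshooting sketch: the service name follows the guestbook sample, and the output depends entirely on the cluster's state.

```shell
# A LoadBalancer service only gets an EXTERNAL-IP after aws
# provisions an ELB; a pending value often points at subnet
# tagging or worker node problems
kubectl get svc guestbook -o wide

# The Events section frequently explains why the load
# balancer could not be created
kubectl describe svc guestbook

# Nodes that are NotReady will also keep pods unscheduled
kubectl get nodes
kubectl get pods -o wide
```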
4. Introduction to aws eks and kubernetes

- About aws eks and kubernetes

  An introduction to EKS (Elastic Kubernetes Service). To be continued.