This document applies to PolarDB-X V2.4.0.
Environment:
Deployment host (ops): 1x2.2x.2x8.116; the ops host must be able to reach the Internet. Deployment is driven by Ansible, which you install yourself. Two Kubernetes clusters are required, and a PolarDB-X cluster is installed on each of them.
Deployment steps:
Environment preparation:
BIOS settings
These settings are required on every machine in the Kubernetes clusters. BIOS setup screens differ considerably across CPU platforms and server vendors; before deploying the database, consult the server vendor's documentation and check that the relevant BIOS parameters are set correctly.
Install Ansible
Log in to the ops host:
ssh 1x2.2x.2x8.116
yum install ansible python-netaddr -y
Create the Ansible inventory file:
vi $HOME/all.ini
[all]
1x2.2x.2x8.116 # ops
[k1]
1x2.2x.2x7.6 ansible_ssh_port=22
1x2.2x.2x7.7 ansible_ssh_port=22
1x2.2x.2x7.8 ansible_ssh_port=22
1x2.2x.2x7.9 ansible_ssh_port=22
[k2]
1x2.2x.2x7.5 ansible_ssh_port=22
1x2.2x.2x7.10 ansible_ssh_port=22
1x2.2x.2x7.11 ansible_ssh_port=22
1x2.2x.2x7.12 ansible_ssh_port=22
[all:vars]
registry=1x2.2x.2x8.116
Export the inventory path as an environment variable:
export ini_file=$HOME/all.ini
Passwordless SSH
Set up passwordless SSH from the ops host to all servers.
Generate an SSH key pair:
ssh-keygen -q -t rsa -N '' -f ~/.ssh/id_rsa <<< y
Automatically accept host keys:
echo "StrictHostKeyChecking no" >> /etc/ssh/ssh_config
Distribute the public key for passwordless SSH; ini_file refers to the inventory created above:
ansible -i ${ini_file} all -m authorized_key -a " user=root key=\"{{ lookup('file', '/root/.ssh/id_rsa.pub') }}\" " -u root --become-method=sudo --ask-become-pass --become -k
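Before moving on, it is worth confirming that Ansible can now reach every host without a password. A minimal connectivity check against the same inventory:
ansible -i ${ini_file} all -m ping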
Configure system parameters
Configure time zone and clock
Set the time zone and clock on all servers in one batch. For a production deployment, configure an NTP service to keep the server clocks synchronized.
ansible -i ${ini_file} all -m shell -a " timedatectl set-timezone Asia/Shanghai "
ansible -i ${ini_file} all -m shell -a "date -s '$(date +'%Y-%m-%d %H:%M:%S')'"
When done, check the server clocks with:
ansible -i ${ini_file} all -m shell -a " date '+%D %T.%6N' "
Configure /etc/hosts
If a private Docker registry is used, add the registry hostname to /etc/hosts on every server (in this evaluation environment the registry runs on the .116 ops host):
ansible -i ${ini_file} all -m shell -a " sed -i '/registry/d' /etc/hosts "
ansible -i ${ini_file} all -m shell -a " echo '1x2.2x.2x8.116 registry' >> /etc/hosts "
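Optionally verify that the registry entry is now present on every host:
ansible -i ${ini_file} all -m shell -a " grep registry /etc/hosts "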
Configure sysctl.conf
vi $HOME/sysctl.conf
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
# see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
kernel.sysrq=1
net.core.somaxconn = 256
net.core.wmem_max = 262144
net.ipv4.tcp_keepalive_time = 20
net.ipv4.tcp_keepalive_probes = 60
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_fin_timeout = 15
#perf
kernel.perf_event_paranoid = 1
fs.aio-max-nr = 1048576
Push sysctl.conf to all servers:
ansible -i ${ini_file} all -m synchronize -a " src=$HOME/sysctl.conf dest=/etc/sysctl.conf "
Load the new configuration on the servers:
ansible -i ${ini_file} all -m shell -a " sysctl -p /etc/sysctl.conf "
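To spot-check that the settings took effect everywhere, query one of the keys on all hosts, for example:
ansible -i ${ini_file} all -m shell -a " sysctl vm.swappiness "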
Disable the firewall
ansible -i ${ini_file} all -m shell -a " systemctl disable firewalld "
ansible -i ${ini_file} all -m shell -a " systemctl stop firewalld "
Disable SELinux
vi $HOME/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
#SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
ansible -i ${ini_file} all -m synchronize -a " src=$HOME/selinux dest=/etc/selinux/config "
ansible -i ${ini_file} all -m shell -a " setenforce 0 "
Disable swap
ansible -i ${ini_file} all -m shell -a " swapoff -a "
ansible -i ${ini_file} all -m shell -a " sed -i '/=SWAP/d' /etc/fstab "
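A quick way to confirm swap is fully disabled on every host (swapon prints nothing when no swap device is active):
ansible -i ${ini_file} all -m shell -a " swapon --show; free -m | grep -i swap "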
Create symlinks for the Kubernetes deployment directories
ansible -i ${ini_file} all -m shell -a " mkdir -p /data/polarx/kubelet"
ansible -i ${ini_file} all -m shell -a " ln -s /data/polarx/kubelet /var/lib/kubelet "
ansible -i ${ini_file} all -m shell -a " mkdir -p /data/polarx/docker "
ansible -i ${ini_file} all -m shell -a " ln -s /data/polarx/docker /var/lib/docker "
ansible -i ${ini_file} all -m shell -a " mkdir -p /data/polarx/data-log "
ansible -i ${ini_file} all -m shell -a " ln -s /data/polarx/data-log /data-log "
ansible -i ${ini_file} all -m shell -a " mkdir -p /data/polarx/filestream "
ansible -i ${ini_file} all -m shell -a " ln -s /data/polarx/filestream /filestream "
Install common tools
ansible -i ${ini_file} all -m shell -a " yum install mysql -y "
ansible -i ${ini_file} all -m shell -a " yum install dstat sysstat htop -y "
Configure the private Docker registry
Install Docker on all Kubernetes nodes:
ansible -i ${ini_file} all -m shell -a " yum install docker-ce -y "
Start the service
To use the private Docker registry, the following settings must be added to daemon.json:
cat > $HOME/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["registry:5000"]
}
EOF
ansible -i ${ini_file} all -m shell -a " mkdir -p /etc/docker "
ansible -i ${ini_file} all -m synchronize -a " src=$HOME/daemon.json dest=/etc/docker/daemon.json " -u root
ansible -i ${ini_file} all -m shell -a " systemctl daemon-reload "
ansible -i ${ini_file} all -m shell -a " systemctl enable docker "
ansible -i ${ini_file} all -m shell -a " systemctl restart docker "
ansible -i ${ini_file} all -m shell -a " docker ps -a "
Start the image registry
The private registry only needs to run on one server; we usually run it on the deployment host (ops). Deployment takes just three steps.
First, pull the Docker registry image:
docker pull registry
Run the following command to create the registry container:
docker run -d --net=host -p 5000:5000 --restart=always --name registry registry
Check that the registry container is running:
docker ps
Download the deployment tool and populate the image registry
Install the pxd tool
yum update -y
yum install -y python3
python3 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -i https://mirrors.aliyun.com/pypi/simple/ --upgrade pxd
pxd version
Prepare the deployment tool and the PolarDB-X Docker images
mkdir /data/pxd
cd /data/pxd
vi images.list
polardbx/polardbx-sql:v2.4.0_5.4.19
polardbx/polardbx-engine:v2.4.0_8.4.19
polardbx/polardbx-cdc:v2.4.0_5.4.19
polardbx/polardbx-columnar:v2.4.0_5.4.19
polardbx/polardbx-operator:v1.6.0
polardbx/polardbx-exporter:v1.6.0
polardbx/polardbx-hpfs:v1.6.0
polardbx/polardbx-init:v1.6.0
polardbx/polardbx-clinic:v1.6.0
polardbx/xstore-tools:v1.6.0
polardbx/probe-proxy:v1.6.0
prom/mysqld-exporter:master
quay.io/prometheus/prometheus:v2.22.1
quay.io/prometheus/alertmanager:v0.21.0
quay.io/brancz/kube-rbac-proxy:v0.8.0
quay.io/prometheus/node-exporter:v1.0.1
quay.io/prometheus-operator/prometheus-operator:v0.44.1
quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1
grafana/grafana:8.5.27
kubesphere/kube-state-metrics:v2.3.0
directxman12/k8s-prometheus-adapter:v0.8.2
polardbx/polardbx-logstash:latest
docker.elastic.co/beats/filebeat:8.9.0
Run the following command to make sure images.list is up to date:
curl -s "https://polardbx-opensource.oss-cn-hangzhou.aliyuncs.com/scripts/get-version.sh" | sh
pxd download --env k8s --arch amd64 --repo "registry:5000" --dest /data/pxd/ -i images.list
Prepare the Kubernetes images
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
docker pull docker.io/calico/cni:v3.15.1
docker pull docker.io/calico/pod2daemon-flexvol:v3.15.1
docker pull docker.io/calico/node:v3.15.1
docker pull docker.io/calico/kube-controllers:v3.15.1
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 registry:5000/kube-apiserver:v1.21.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0 registry:5000/kube-proxy:v1.21.0
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.0 registry:5000/coredns/coredns:v1.8.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 registry:5000/etcd:3.4.13-0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0 registry:5000/kube-controller-manager:v1.21.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0 registry:5000/kube-scheduler:v1.21.0
docker tag registry.aliyuncs.com/google_containers/pause:3.4.1 registry:5000/pause:3.4.1
docker tag docker.io/calico/cni:v3.15.1 registry:5000/calico/cni:v3.15.1
docker tag docker.io/calico/pod2daemon-flexvol:v3.15.1 registry:5000/calico/pod2daemon-flexvol:v3.15.1
docker tag docker.io/calico/node:v3.15.1 registry:5000/calico/node:v3.15.1
docker tag docker.io/calico/kube-controllers:v3.15.1 registry:5000/calico/kube-controllers:v3.15.1
docker push registry:5000/kube-apiserver:v1.21.0
docker push registry:5000/kube-proxy:v1.21.0
docker push registry:5000/coredns/coredns:v1.8.0
docker push registry:5000/etcd:3.4.13-0
docker push registry:5000/kube-controller-manager:v1.21.0
docker push registry:5000/kube-scheduler:v1.21.0
docker push registry:5000/pause:3.4.1
docker push registry:5000/calico/node:v3.15.1
docker push registry:5000/calico/pod2daemon-flexvol:v3.15.1
docker push registry:5000/calico/cni:v3.15.1
docker push registry:5000/calico/kube-controllers:v3.15.1
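As a sanity check, the registry's catalog API can be queried from the ops host to confirm the images were pushed (a sketch; the repository names listed depend on what was pushed):
curl -s http://registry:5000/v2/_catalog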
Install Kubernetes
On the deployment host (ops), create the kubernetes.repo file:
vi $HOME/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
ansible -i ${ini_file} all -m synchronize -a " src=$HOME/kubernetes.repo dest=/etc/yum.repos.d/ " -u root
Install the packages on all nodes:
ansible -i ${ini_file} all -m shell -a " yum install --nogpgcheck -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 "
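kubeadm also expects the kubelet service to be enabled so that it starts at boot. This step is not in the original list, but if your image does not enable it automatically it can be done for all hosts with:
ansible -i ${ini_file} all -m shell -a " systemctl enable kubelet "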
Initialize the control-plane nodes
Log in to the planned control-plane node.
The simulated data center A environment is deployed on 253.5 and the simulated data center B environment on 253.6.
ssh 1x2.21.253.5
kubeadm init --image-repository=registry:5000 --kubernetes-version=v1.21.0 --pod-network-cidr=10.244.0.0/16 --v=5
Run the same command on 253.6.
On 253.5, a successful run ends with output like:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 1x2.2x.2x7.5:6443 --token ex75w9.xlbgj61avvywq2yp \
    --discovery-token-ca-cert-hash sha256:302744a4fa996a95f6f64406efbeb29b4da7feb03ce8d02c8c8e2bba01b9dad4
On 253.6, a successful run ends with output like:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 1x2.2x.2x7.6:6443 --token 9yywt3.59cfgnaxw6xp0wzl \
    --discovery-token-ca-cert-hash sha256:13705e50c00591ce1838478dbc43ceb04ddb18dd2703d308bc10648766ca1685
Join the worker nodes
Log in to each worker node in simulated data center A (run this on every worker node in the data center):
ssh 1x2.2x.2x7.10
kubeadm join 1x2.2x.2x7.5:6443 --token ex75w9.xlbgj61avvywq2yp \
    --discovery-token-ca-cert-hash sha256:302744a4fa996a95f6f64406efbeb29b4da7feb03ce8d02c8c8e2bba01b9dad4
Log in to each worker node in simulated data center B (run this on every worker node in the data center):
ssh 1x2.2x.2x7.7
kubeadm join 1x2.2x.2x7.6:6443 --token 9yywt3.59cfgnaxw6xp0wzl \
    --discovery-token-ca-cert-hash sha256:13705e50c00591ce1838478dbc43ceb04ddb18dd2703d308bc10648766ca1685
The join command is the one printed at the end of control-plane initialization. If the token has expired, regenerate it on the control-plane node:
kubeadm token create
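If you also want the matching discovery hash, kubeadm can print the complete join command in one go, for example:
kubeadm token create --print-join-command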
Configure kubectl
Install the kubectl client on the deployment host (ops):
ssh 1x2.2x.2x8.116
yum install kubectl-1.21.0 -y
To manage a cluster, kubectl needs that cluster's kubeconfig file (/etc/kubernetes/admin.conf). To manage the clusters of simulated data centers A and B at the same time, copy both files to the ops host and merge them into a single file, /data/pxd/config-mdc. Point kubectl at this custom file through the KUBECONFIG environment variable:
export KUBECONFIG="/data/pxd/config-mdc"
scp 1x2.2x.2x7.5:/etc/kubernetes/admin.conf /data/pxd/config-dca
View the file contents:
vi /data/pxd/config-dca
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJME1EVXlOREF6TWpFd01sb1hEVE0wTURVeU1qQXpNakV3TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnJXClVnakVWS3o0Q2kybUJBbExqVkRZZmtUWStiQXZuNEdla093M3FtL0xtMjk5b2JBdW5sRVN0eGw5a0xuVHExSXIKNCtNRzhtamM0dlZRcU9wZHp2THF3anZnOHR4aGIrSnFpNTQyWUl3bEdEaTR6OTA5dllDVDliTDlBUWczVkxYZgo2cUpwbDhPSFdTMDBFNUNBTkJSc3E1VlNLMlh0c3dFL3p5NkliTTk3Vjd6N1l5cFRXa0FKdk1XTFowOEFwY0ZYCmp6a0piRjNac0gyQWt0VWhXNDBjaC9wTk9oUTREZFM0S2U4YU5PVFMzT3RhT2xvc2U2R2x0ZWxVbkxBL2x2MUUKOFRnT2YybFVoOHpzRDhlWlkya0FkdjIzZU1ieVF0RlBJbVpBMFJkQUthd0dqcGZ6U0xsZVo5ckJxNlY5b0c5ZQpoeGtKWldEdHpMQU5MVytXelMwQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCUWluZUFxMm1aNStrVE0zY2gzdysxUVRiYmxNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCM0xrTCtDL2lpaFZIeXpPbUp2cENPTExTVHRDdmE0NDZzWWZVQnZ6OFlJKzVMT1RLSwpjc2xnaWxET1FEUTBFc2ZucUp3M2NtaytNLzhMc2o1VVlBeVp4SVRrbVFIbjdGRGJOdlBuckJrVmF4UGxrRGpiCjg5OGZJS0N6Q2NuN1g3aTFQZVVLMzVSSFZwVU5jSjNka0R3Yk9pUFBYbXJicWw3RU15K0s5eGk5RkY3UjdOTnEKUEVXNkJPQ0JwSEpGZTEwUFRtY1hLa1pkM2JvZHMxQnNHcXpFWG84QmtyYjE0WERqWXN4SUc4UEl3RVM2SlFBKwpuamtlUHpMQS9HblVFZnYvOHMwUDRhN3dPVUYvMkliUVpyZE15YXNxYlczTEFxV3J6V3g1OUVldDlYQmhxbTRwCmdCdlNxdVNwWmFEZGVSc0paeXIwUGpBaTVMZ3hvaG9BV0NORAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://1x2.2x.2x7.5:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWXV3WEdDbHF6N3N3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBMU1qUXdNekl4TURKYUZ3MHlOVEExTWpRd016SXhNRFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXlMTUdHVVJXdTZnZzBmZjEKcGlxYnFqdXZGWlFOQm9iTlNtd1hpSDdqUkxPdmRrVzNwdkRSR0lmV1lvdGpNUDZydTdjVzRZbDBMeDIycEdodgpiRkhUWFFvUENmUzhOK1lsNEp3TFNqYnNBSDdpMW00NVVNNWZHenJlbHhqTjRHS0sxSVFPR2pwWjRyUkpZOHBZCmhSUExuRXBHWGpyVW0wWXZGYkFseW84bDFWQVZ5WTh6UzlUL0JKY0JvcjE0MHZtNkRXNDFFeEx0N2JRT0lCRGIKbmVtdWxDMFFmV1EzallKRUEvbFpRN0FUZ0tyblIzSGhZS0Z3enFmU2NDK1VyOVlnRWlwODRzODBQN0Q3a1ZZcApBVzdaYW5PZ2duYituaTFJSXlvY0FoTGVOQVRYbE9qaWJEc1RBUG44SS9qZHNmaksyVk82bXk4UkFyZnhsdXlXClVjL2VPUUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JRVUlwM2dLdHBtZWZwRXpOM0lkOFB0VUUyMgo1VEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUEtLSG5VcFJ0b0ZXZEE5aFV6TmNaQy8rOXByc0lhZkwwVGplCm94aUpQWENYOGtyWTVzbko2M2IwemFNSEs1Rzh2OEJIYTFDT0V4VXp2c3JZY05oanBET2hZVUhSenNMN1FVUUMKQjVnclhuZmdSZGJrSzhkUkNINTN1UXpBLzZQRXZRbDVrYzMxbjd6Y1Y3eEM4L3lVSWpUaHdHUjUzZ3ZqSHhKSQozbzdRaHVYaTlPUmhnTWxVL3BCNkZ0amMvVzIvODNyaFdEdC9UOFhXSGNiUVRkQm0va0NLNnhubzJ4UnNPbEltClNTMnBsWUk1K2QyVGlGeFdVZmttaWRkSld0MzdGbC9KbURVaWpOUGZuUXAwd0dxRURuNG9nWlFmRFBFSE5IcWwKd000T3BSeHIwbVBhdkRiYnlDL0xKZGN6b1lxYzZLaGxZbURuSENDTk1aSkZMRHl0ZlE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeUxNR0dVUld1NmdnMGZmMXBpcWJxanV2RlpRTkJvYk5TbXdYaUg3alJMT3Zka1czCnB2RFJHSWZXWW90ak1QNnJ1N2NXNFlsMEx4MjJwR2h2YkZIVFhRb1BDZlM4TitZbDRKd0xTamJzQUg3aTFtNDUKVU01Zkd6cmVseGpONEdLSzFJUU9HanBaNHJSSlk4cFloUlBMbkVwR1hqclVtMFl2RmJBbHlvOGwxVkFWeVk4egpTOVQvQkpjQm9yMTQwdm02RFc0MUV4THQ3YlFPSUJEYm5lbXVsQzBRZldRM2pZSkVBL2xaUTdBVGdLcm5SM0hoCllLRnd6cWZTY0MrVXI5WWdFaXA4NHM4MFA3RDdrVllwQVc3WmFuT2dnbmIrbmkxSUl5b2NBaExlTkFUWGxPamkKYkRzVEFQbjhJL2pkc2ZqSzJWTzZteThSQXJmeGx1eVdVYy9lT1FJREFRQUJBb0lCQVFDQzErUHorUStxaS9QSgpyNlJnem9wcHN6dDBEKzlsLytBNjBybU03Vnh4WDh2V2lhRXZudlJTejh3K3RxeldObEh6c1d1alloOXkwQ1pRCmpSMkhPdGxYWU1WcE1qcTdIcm8yOHNTUmY3amdvZGgwLzZzeU9UamI0Y2RZTG4yWitlU1VvL3Nsc2tFRGdaSVAKRXM0ZkJFYkwvOGhjaW5JdFFOWlZoMTg3N1pNMnV6VFdwTXpzZHBPamh0bk1NTGRqaEtCK3lBRXQ1bnVIZmYrNQo1K2hzSXN1NC85aWtNNnduYWdMaUMxdEoydHZLYksvdW1JTVRGdmwxcmJ2MXJMVUwycHJkYjhVeDEvV2RocXhPCldnQ2NsYzhxTmN2bnBkTTduVGdhYzc1cG91cXUyVEdkRmVKb1FZUFJWcjFSTTJkaG1PTDA5cWZyZmwxcHdxazEKTmpBYUdYTmhBb0dCQVBXbmorZ01wZFJiSU41RWovTHorV1ZNdUxIWlRVcmJ0T3R0bW85Y05iSFowR0xsSzM5ZwpOMytKd0ExQXdmY2RhUldHbGhqV0F5RmpjcHhCVHo4Wi94dTg0ZXg4bmM4UU9uY0lOSDJWTXdaWWg5aVBiQ08xCksvTTJoL1BtWlBvajg5ZERaQTZQbjAvSURZMGY5OVhvNXVaT2pScU1qcEZxT21xUkJiWjVQZ21WQW9HQkFORW0KeXZEN0V1N3NwaXZodldLcG83QVUwU3VkdHc2RFZueWIwTmVNK1orTCtpUXJZQUs3RU4vTWttV2k5R2Q5MkdOSQpoT3NMUERrc2ZlMi9WTmdPZDF5VUF5MUFXc24zNVA0N2R6Wi9jOUw1V1hPc2hhZXlYdGJpdGs2MXpRdXVXdU5CCjFlOFFKalNqdHpsRlR4TUxORTQ1V2ZlTy9hQ2lDbVhSYUE4U0VZRVZBb0dBWEpoWGZ4RmRaSWtnLzRiNmQ0cU4KQkNrQ0tVK09lZHdNK3Z6cVdJVmFXL3FOT09uSEZwRXUraXp6TGt1dGtUY055Q1pkNTJpcjcyYnI2WWdZbGVGMwpybjNvN3RvZUpkR3BKL3I0eGlsNS9UZGJwVDZTZFhjeDVOQTJPTElzZDdrYmpaV0NYcGEyWnoweUZuTHBXVUViCjM4M1dGQjdORW5Ubkpnb2FEQ2p4UUcwQ2dZQlBjZ3JZYXFhUWR2ZlA1MW1HNXFVMHQxT1UyNzJ6RjVSOGxMdEoKaFZVMGsza2EwQmNTTW5pQWFqYVp3TUpScFczU21MTlVqTm45WmJjWDNmdWViakJNekRSQXRoZEdiSkZoT0xsWgp6Q1AwMlo1dTMvT005YVlzdmNVK05MU0VZV0JJdnJOQ3NjR3hjUmFoL0gvQzNoaXFOZ0xFbEY0bTdDWkM4cjR5CkswelcyUUtCZ1FDV2FhL1Nva3NuSnFCbytDMWhtZVY1a3VqWTA2R0JFRk10VXNIQ1k5YzlIUnEvMXFuMXJCR2wKb1pFbjFIQnJKZUJ5UWk5b0VrNXJWMm9UbWhXYUdOM3JiMDlxZGs3SFRwWEVtSlFJdXkwK2VnaVFnNUNxdENybgpjZlJaWlBCSjNPa0FIN3hoODAvUmJMRXkvUWRHZ2tKVkQ4b2FkWk54TVZjZFRUaklVL1V6amc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
scp 1x2.2x.2x7.6:/etc/kubernetes/admin.conf /data/pxd/config-dcb
View the file contents:
vi /data/pxd/config-dcb
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJME1ERXpNVEE1TVRJMU5Gb1hEVE0wTURFeU9EQTVNVEkxTkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTC9tCnhxZ093ZG0yMW5ZdVkrT0ZGR2xDZk8rYTdjREJWdzBoVnZnM1dRTkNUZlVEMllicUlmYlhCa2crdVFkSWkzdTUKYUlLWHNxSjZYelc2MzZBYWtiYit3ZFZabVdXeDAxZ3lzbTg0ejRRRTlSZjdoV25GSmN4YTVRZjJjNmlaeFd0RQpEMGdqek1YMm5pMFBUS05oQzI4Y055U21yTGNOczIwcWpkOFYyVml5VE51TklVVjlUMWl0cjc2eUlDdTRtL3UyCm5qL054cUlPT0xjNHM0SWhoVW5vcno3VnR4b1lLRFZQOFlDMUFGNlJYMmMxSVV4MVg2aHg1NXVjVDIyblRtMFcKcG5vS2N5eGtxdUVXUVo5Qjc0bm1NQ2R3c2I1Yy90VTNhRUYzYTZhSmN6MjkvUGRqMlhpQ2lkRlh4MTd6aE1PUwp0ZWRiUXNHVDFKcEhVd1g0aEwwQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZOaENWVHI1bTBXc0tpUUpXQS80a00xVmN2RG5NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCQnF4TGdiSTg0aVpBQUlhbzZQR0k2UlFUQndoS1lMdnNWSkVuNTc1eWV2ZTlTdmd0OQo2c2lML0Jka2JscWc2c3BMTlZESTg4TVplRG02b01sVTVJa2lXSHpzd2dBeDNYdzY0LzdFNmtKQnlFUUlIdWNWCmJZOWRtMW5GZjV4bWpNVVhwUVl4V1poOWNBTXBUWWkrVnp1cm5lZ2pHK2laNHV2TW5QUTd0THBUQmU1WWxJZzIKYi9oWkZnMmczTUNHbU1LaWRWVEJ1MzM4RXdteDZTRVhMdTVvNlV1MHlMRWhZNVJTam90UWtsTHppclNXd245egovcFl3R3NDZE9sZzM5RFlmbmsvd2RZaEd1L2prdGZtaDRYY1cwZ1dJbEdaNDJFNE5ncWpEQk9VazFmdnhQUER0CnJFaEhNTlUrQVBoanhJdHRvdzVBNm93SFNHdzQrcTdPYWQ4UAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://1x2.2x.2x7.6:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWFZxMXZRbm54cEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBeE16RXdPVEV5TlRSYUZ3MHlOVEF4TXpBd09URXlOVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXZRMmFoRm1qN3NPY0duUS8KRkNhUmRUMCtTS05qUXpwWHJ6cGdDOFgrQ1hBdXBBTjFlWk13Mm0yTGR3VC9FZmpJeVY4SUNkMHd2a25KUWY0agpEQTNvMW1NR0RnSVBQamV6VzNObHAvR3d0MDdIYmlwaXNWdlY4aDQ5TEEyNXRLYmJuVi9wUU1CTXRlUHV1Y2VICk1sRmFjK1RzL2szNVdCS1gwUGhsUGZIYkJtMEkzZFdBWWU1NTFjVXArTDNYZjBNQ1g5b2RMOW1uSGxmVUR0Q08KM3Q3amdpY3I2ZmttRmJldGFGbE1NMXo3OUxrTlY5MFRhNUxCenZSOHo0OUhIMkdMTHJOT0FDOC9RNGRFeUV1MApiSklqT1VBMFdLaXh3blE2OWlBRlhPSlRSTmV3ZzdHVzVueEU5S1dlS2dCSHlyM1ZMb1kxTjlzYnNFTllCV1ZyCi8yZFowd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JUWVFsVTYrWnRGckNva0NWZ1ArSkROVlhMdwo1ekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVHpDNVF3RkR6ekVlRUNsdjJUaDBzRmZ6bllrTmlBOFBzZjJJCktEZGRDRFRQdHVrSU1mZXc4Q3JNV2hqbGo4MTVZTTc5UGlKSEp1YTVxVGRtN3Y3NGJuQ3ZBdDJZT25ubTc1Z2YKL08vTGFRdXdUUVhHTWNwa2xZYUVXS2ExRWVRS2cxVlV5aXAyMDhRNDd3RGlPcHdJWXBIL0l1MGRuTlM2eUZaMApENFhqUTk0ZVdsVVd4RXF2RGJqY0RVOVUvVjBZMzI4S1Rsc3ozbkNTZitsV0hROFRncHRzQU94UVhtd3BuR1YyCjNuVDdsL1VYZEpZVDFMWE8yUXRCdjZuZS8zaEYwVmEzbUcrRjR1Q1pDZHhkckxSL05xK3VSaC9QY04zWkhjY2sKRmR1NG5mbEQ3eFFrTzJGRUU3b0RONFM0bm1ZSVBadmtHVHlMd2p1eTZwVk1iTnk4WFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdlEyYWhGbWo3c09jR25RL0ZDYVJkVDArU0tOalF6cFhyenBnQzhYK0NYQXVwQU4xCmVaTXcybTJMZHdUL0Vmakl5VjhJQ2Qwd3ZrbkpRZjRqREEzbzFtTUdEZ0lQUGplelczTmxwL0d3dDA3SGJpcGkKc1Z2VjhoNDlMQTI1dEtiYm5WL3BRTUJNdGVQdXVjZUhNbEZhYytUcy9rMzVXQktYMFBobFBmSGJCbTBJM2RXQQpZZTU1MWNVcCtMM1hmME1DWDlvZEw5bW5IbGZVRHRDTzN0N2pnaWNyNmZrbUZiZXRhRmxNTTF6NzlMa05WOTBUCmE1TEJ6dlI4ejQ5SEgyR0xMck5PQUM4L1E0ZEV5RXUwYkpJak9VQTBXS2l4d25RNjlpQUZYT0pUUk5ld2c3R1cKNW54RTlLV2VLZ0JIeXIzVkxvWTFOOXNic0VOWUJXVnIvMmRaMHdJREFRQUJBb0lCQVFDWXlYYTRRYzNYK0NTZgp5SlQyRHhsOVc2cUxmK2lIKzQzRDR2U2ViendZbXA1SEZUaUtzYWRJMEVGblJoVnlpOUtSMUFpbUdDbjlqNXBrCmlDUUE2UGprMHBDaEg0NzhKSDRDaWFkOWJEbjZXMk9YcUErczhPQmVWWXZ3bjRNVytjY0JUL010em52d2dDNTkKM0VCcUxROWlISUJnSWRwREVIdTdlaFF3Vk5kRFA5UGFGTjVTV01XOHFSVHpRdFZyNVpLa05KM3hnZUhBcktQNApFdTZkNnRlazRHQ1JtY0pTUzRucUJRaFE4WDhNQTdwdHlQNFhKanloRUNNdXZKd3dYY3lRWlVQVmxkeHprWHI2Ck55ZVVsQjMwa20zQ3NxUzA4MUE1QjZ6L2kvaXMyRm92Z2NORDkwTjZ0WWlyanQ4TzJiS2xPVUV5emRUZjMyQ2UKVXJlUWdnNkJBb0dCQU1YOFVqU3J1VEFwcUMrdytMakxKb1BEMnNpMnJKc2V4eTkxTkxGNG50eUZqOWIydHlRNApRNFgzUU1DV2trdTVhTkVxUGpvK2dXeU02ODh1ZWYzM1o1Tms5bWxoSVZlOStaNVBlUTBoUFRoU2NvaTV1UkJiCnhRRDJJc091dlBMRmRQUmhLR3d1N2Q1WjN5WWFNaFFUaVN6RlhFTjJ1WjhZZlkvRG9YUGpSUGtUQW9HQkFQUnoKT0tnZi9IblBpYlRZNFBpdVA1b0ZRZXN5WnRCdDFQUnhxODFQVUVGTVJpNmJiSStNN2ljRXdZT09LcmFWeW1IUwpxeksvWUp4NHRkR0RTb3VPcUNZWFJDQUdOdkhHcTBDSmxacWpEZCs4NjVqYUtzeDdwWkdSeWVnV2I1M1pQUFBFCmprbFk4eTh1SzAwWHNPRTNUUmFVRXpoNHFMRkJCRnVQZVpmNlN2UkJBb0dBZkdOOTVuZXBmdmY5SWhHSEF0c24KMUlzOXJ2TU9XTnNxZThlZ2xvdlpDMldpcklVUEpXTndFUC82SDhXNkhuZGJ3bVpPK0ZzREI0YzJORkhYOVZiMgpMU1cycHhpT1VVa2JSbnBaN0lUZ3FMMHNGbmpSSzlUc1hpRkdVRGs5bnkydHdFZzJsRm1idXlJdDBBdVBRUXZSCkdGN2JDOHZRN1lMK2lFOTU1WXg1Ymg4Q2dZQkhEek42RkFwRnNxSGFNMjE2Zk5TNlJpcjZYdVZxVTNNak4rUDAKUThrVm9rR0lqTi9LL3ZHLzMrOE0rZ2ZLbWRLQ0MwWis4d2ozazFOdk94WXhhVi9SNnROLzU2NlRLK2hlVTJCcwoybGRQSWREdTF3UzMrbjJQeW15Q0RmdVdUQzhld1pXSEZ0ZGljSzVmczdKVVZjb1A5UzE5TGY0RHdOMnViQSt4CnNTMld3UUtCZ0h5cEl0MFpOUmxJNUZVVStKV0JTdkxMaHh4QTFMQUNzWVdXcWFIdCsxZ0RKcHowNEVIbmErVkQKZGtQd1N4NUc1UzFiTmhSc3RRS1g5S004YmozVGd1ai84VHY4aVBEbWdIbE9XczR5Lzg4WDgvbWVOWlN0bTErTwp4OXAxN2ZCNjYzaXF2WWdIeFFISFhVa3dOSXBuUGE5MW1kYjRKN0loaFRmd2cxWFYxWEZqCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
Merge the clusters, contexts, and users entries from the two files into a single config-mdc; rename the name fields in each of the three sections so that they do not collide. Review the merged config-mdc:
vi /data/pxd/config-mdc
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJME1EVXlOREF6TWpFd01sb1hEVE0wTURVeU1qQXpNakV3TWxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnJXClVnakVWS3o0Q2kybUJBbExqVkRZZmtUWStiQXZuNEdla093M3FtL0xtMjk5b2JBdW5sRVN0eGw5a0xuVHExSXIKNCtNRzhtamM0dlZRcU9wZHp2THF3anZnOHR4aGIrSnFpNTQyWUl3bEdEaTR6OTA5dllDVDliTDlBUWczVkxYZgo2cUpwbDhPSFdTMDBFNUNBTkJSc3E1VlNLMlh0c3dFL3p5NkliTTk3Vjd6N1l5cFRXa0FKdk1XTFowOEFwY0ZYCmp6a0piRjNac0gyQWt0VWhXNDBjaC9wTk9oUTREZFM0S2U4YU5PVFMzT3RhT2xvc2U2R2x0ZWxVbkxBL2x2MUUKOFRnT2YybFVoOHpzRDhlWlkya0FkdjIzZU1ieVF0RlBJbVpBMFJkQUthd0dqcGZ6U0xsZVo5ckJxNlY5b0c5ZQpoeGtKWldEdHpMQU5MVytXelMwQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCUWluZUFxMm1aNStrVE0zY2gzdysxUVRiYmxNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCM0xrTCtDL2lpaFZIeXpPbUp2cENPTExTVHRDdmE0NDZzWWZVQnZ6OFlJKzVMT1RLSwpjc2xnaWxET1FEUTBFc2ZucUp3M2NtaytNLzhMc2o1VVlBeVp4SVRrbVFIbjdGRGJOdlBuckJrVmF4UGxrRGpiCjg5OGZJS0N6Q2NuN1g3aTFQZVVLMzVSSFZwVU5jSjNka0R3Yk9pUFBYbXJicWw3RU15K0s5eGk5RkY3UjdOTnEKUEVXNkJPQ0JwSEpGZTEwUFRtY1hLa1pkM2JvZHMxQnNHcXpFWG84QmtyYjE0WERqWXN4SUc4UEl3RVM2SlFBKwpuamtlUHpMQS9HblVFZnYvOHMwUDRhN3dPVUYvMkliUVpyZE15YXNxYlczTEFxV3J6V3g1OUVldDlYQmhxbTRwCmdCdlNxdVNwWmFEZGVSc0paeXIwUGpBaTVMZ3hvaG9BV0NORAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://1x2.2x.2x7.5:6443
name: kubernetes-dca
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJME1ERXpNVEE1TVRJMU5Gb1hEVE0wTURFeU9EQTVNVEkxTkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTC9tCnhxZ093ZG0yMW5ZdVkrT0ZGR2xDZk8rYTdjREJWdzBoVnZnM1dRTkNUZlVEMllicUlmYlhCa2crdVFkSWkzdTUKYUlLWHNxSjZYelc2MzZBYWtiYit3ZFZabVdXeDAxZ3lzbTg0ejRRRTlSZjdoV25GSmN4YTVRZjJjNmlaeFd0RQpEMGdqek1YMm5pMFBUS05oQzI4Y055U21yTGNOczIwcWpkOFYyVml5VE51TklVVjlUMWl0cjc2eUlDdTRtL3UyCm5qL054cUlPT0xjNHM0SWhoVW5vcno3VnR4b1lLRFZQOFlDMUFGNlJYMmMxSVV4MVg2aHg1NXVjVDIyblRtMFcKcG5vS2N5eGtxdUVXUVo5Qjc0bm1NQ2R3c2I1Yy90VTNhRUYzYTZhSmN6MjkvUGRqMlhpQ2lkRlh4MTd6aE1PUwp0ZWRiUXNHVDFKcEhVd1g0aEwwQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZOaENWVHI1bTBXc0tpUUpXQS80a00xVmN2RG5NQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFCQnF4TGdiSTg0aVpBQUlhbzZQR0k2UlFUQndoS1lMdnNWSkVuNTc1eWV2ZTlTdmd0OQo2c2lML0Jka2JscWc2c3BMTlZESTg4TVplRG02b01sVTVJa2lXSHpzd2dBeDNYdzY0LzdFNmtKQnlFUUlIdWNWCmJZOWRtMW5GZjV4bWpNVVhwUVl4V1poOWNBTXBUWWkrVnp1cm5lZ2pHK2laNHV2TW5QUTd0THBUQmU1WWxJZzIKYi9oWkZnMmczTUNHbU1LaWRWVEJ1MzM4RXdteDZTRVhMdTVvNlV1MHlMRWhZNVJTam90UWtsTHppclNXd245egovcFl3R3NDZE9sZzM5RFlmbmsvd2RZaEd1L2prdGZtaDRYY1cwZ1dJbEdaNDJFNE5ncWpEQk9VazFmdnhQUER0CnJFaEhNTlUrQVBoanhJdHRvdzVBNm93SFNHdzQrcTdPYWQ4UAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://1x2.2x.2x7.6:6443
name: kubernetes-dcb
contexts:
- context:
cluster: kubernetes-dca
user: kubernetes-admin-dca
name: adm@kube-dca
- context:
cluster: kubernetes-dcb
user: kubernetes-admin-dcb
name: adm@kube-dcb
current-context: adm@kube-dca
kind: Config
preferences: {}
users:
- name: kubernetes-admin-dca
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWXV3WEdDbHF6N3N3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBMU1qUXdNekl4TURKYUZ3MHlOVEExTWpRd016SXhNRFphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXlMTUdHVVJXdTZnZzBmZjEKcGlxYnFqdXZGWlFOQm9iTlNtd1hpSDdqUkxPdmRrVzNwdkRSR0lmV1lvdGpNUDZydTdjVzRZbDBMeDIycEdodgpiRkhUWFFvUENmUzhOK1lsNEp3TFNqYnNBSDdpMW00NVVNNWZHenJlbHhqTjRHS0sxSVFPR2pwWjRyUkpZOHBZCmhSUExuRXBHWGpyVW0wWXZGYkFseW84bDFWQVZ5WTh6UzlUL0JKY0JvcjE0MHZtNkRXNDFFeEx0N2JRT0lCRGIKbmVtdWxDMFFmV1EzallKRUEvbFpRN0FUZ0tyblIzSGhZS0Z3enFmU2NDK1VyOVlnRWlwODRzODBQN0Q3a1ZZcApBVzdaYW5PZ2duYituaTFJSXlvY0FoTGVOQVRYbE9qaWJEc1RBUG44SS9qZHNmaksyVk82bXk4UkFyZnhsdXlXClVjL2VPUUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JRVUlwM2dLdHBtZWZwRXpOM0lkOFB0VUUyMgo1VEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBUEtLSG5VcFJ0b0ZXZEE5aFV6TmNaQy8rOXByc0lhZkwwVGplCm94aUpQWENYOGtyWTVzbko2M2IwemFNSEs1Rzh2OEJIYTFDT0V4VXp2c3JZY05oanBET2hZVUhSenNMN1FVUUMKQjVnclhuZmdSZGJrSzhkUkNINTN1UXpBLzZQRXZRbDVrYzMxbjd6Y1Y3eEM4L3lVSWpUaHdHUjUzZ3ZqSHhKSQozbzdRaHVYaTlPUmhnTWxVL3BCNkZ0amMvVzIvODNyaFdEdC9UOFhXSGNiUVRkQm0va0NLNnhubzJ4UnNPbEltClNTMnBsWUk1K2QyVGlGeFdVZmttaWRkSld0MzdGbC9KbURVaWpOUGZuUXAwd0dxRURuNG9nWlFmRFBFSE5IcWwKd000T3BSeHIwbVBhdkRiYnlDL0xKZGN6b1lxYzZLaGxZbURuSENDTk1aSkZMRHl0ZlE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeUxNR0dVUld1NmdnMGZmMXBpcWJxanV2RlpRTkJvYk5TbXdYaUg3alJMT3Zka1czCnB2RFJHSWZXWW90ak1QNnJ1N2NXNFlsMEx4MjJwR2h2YkZIVFhRb1BDZlM4TitZbDRKd0xTamJzQUg3aTFtNDUKVU01Zkd6cmVseGpONEdLSzFJUU9HanBaNHJSSlk4cFloUlBMbkVwR1hqclVtMFl2RmJBbHlvOGwxVkFWeVk4egpTOVQvQkpjQm9yMTQwdm02RFc0MUV4THQ3YlFPSUJEYm5lbXVsQzBRZldRM2pZSkVBL2xaUTdBVGdLcm5SM0hoCllLRnd6cWZTY0MrVXI5WWdFaXA4NHM4MFA3RDdrVllwQVc3WmFuT2dnbmIrbmkxSUl5b2NBaExlTkFUWGxPamkKYkRzVEFQbjhJL2pkc2ZqSzJWTzZteThSQXJmeGx1eVdVYy9lT1FJREFRQUJBb0lCQVFDQzErUHorUStxaS9QSgpyNlJnem9wcHN6dDBEKzlsLytBNjBybU03Vnh4WDh2V2lhRXZudlJTejh3K3RxeldObEh6c1d1alloOXkwQ1pRCmpSMkhPdGxYWU1WcE1qcTdIcm8yOHNTUmY3amdvZGgwLzZzeU9UamI0Y2RZTG4yWitlU1VvL3Nsc2tFRGdaSVAKRXM0ZkJFYkwvOGhjaW5JdFFOWlZoMTg3N1pNMnV6VFdwTXpzZHBPamh0bk1NTGRqaEtCK3lBRXQ1bnVIZmYrNQo1K2hzSXN1NC85aWtNNnduYWdMaUMxdEoydHZLYksvdW1JTVRGdmwxcmJ2MXJMVUwycHJkYjhVeDEvV2RocXhPCldnQ2NsYzhxTmN2bnBkTTduVGdhYzc1cG91cXUyVEdkRmVKb1FZUFJWcjFSTTJkaG1PTDA5cWZyZmwxcHdxazEKTmpBYUdYTmhBb0dCQVBXbmorZ01wZFJiSU41RWovTHorV1ZNdUxIWlRVcmJ0T3R0bW85Y05iSFowR0xsSzM5ZwpOMytKd0ExQXdmY2RhUldHbGhqV0F5RmpjcHhCVHo4Wi94dTg0ZXg4bmM4UU9uY0lOSDJWTXdaWWg5aVBiQ08xCksvTTJoL1BtWlBvajg5ZERaQTZQbjAvSURZMGY5OVhvNXVaT2pScU1qcEZxT21xUkJiWjVQZ21WQW9HQkFORW0KeXZEN0V1N3NwaXZodldLcG83QVUwU3VkdHc2RFZueWIwTmVNK1orTCtpUXJZQUs3RU4vTWttV2k5R2Q5MkdOSQpoT3NMUERrc2ZlMi9WTmdPZDF5VUF5MUFXc24zNVA0N2R6Wi9jOUw1V1hPc2hhZXlYdGJpdGs2MXpRdXVXdU5CCjFlOFFKalNqdHpsRlR4TUxORTQ1V2ZlTy9hQ2lDbVhSYUE4U0VZRVZBb0dBWEpoWGZ4RmRaSWtnLzRiNmQ0cU4KQkNrQ0tVK09lZHdNK3Z6cVdJVmFXL3FOT09uSEZwRXUraXp6TGt1dGtUY055Q1pkNTJpcjcyYnI2WWdZbGVGMwpybjNvN3RvZUpkR3BKL3I0eGlsNS9UZGJwVDZTZFhjeDVOQTJPTElzZDdrYmpaV0NYcGEyWnoweUZuTHBXVUViCjM4M1dGQjdORW5Ubkpnb2FEQ2p4UUcwQ2dZQlBjZ3JZYXFhUWR2ZlA1MW1HNXFVMHQxT1UyNzJ6RjVSOGxMdEoKaFZVMGsza2EwQmNTTW5pQWFqYVp3TUpScFczU21MTlVqTm45WmJjWDNmdWViakJNekRSQXRoZEdiSkZoT0xsWgp6Q1AwMlo1dTMvT005YVlzdmNVK05MU0VZV0JJdnJOQ3NjR3hjUmFoL0gvQzNoaXFOZ0xFbEY0bTdDWkM4cjR5CkswelcyUUtCZ1FDV2FhL1Nva3NuSnFCbytDMWhtZVY1a3VqWTA2R0JFRk10VXNIQ1k5YzlIUnEvMXFuMXJCR2wKb1pFbjFIQnJKZUJ5UWk5b0VrNXJWMm9UbWhXYUdOM3JiMDlxZGs3SFRwWEVtSlFJdXkwK2VnaVFnNUNxdENybgpjZlJaWlBCSjNPa0FIN3hoODAvUmJMRXkvUWRHZ2tKVkQ4b2FkWk54TVZjZFRUaklVL1V6amc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo= - name: kubernetes-admin-dcb
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWFZxMXZRbm54cEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TkRBeE16RXdPVEV5TlRSYUZ3MHlOVEF4TXpBd09URXlOVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXZRMmFoRm1qN3NPY0duUS8KRkNhUmRUMCtTS05qUXpwWHJ6cGdDOFgrQ1hBdXBBTjFlWk13Mm0yTGR3VC9FZmpJeVY4SUNkMHd2a25KUWY0agpEQTNvMW1NR0RnSVBQamV6VzNObHAvR3d0MDdIYmlwaXNWdlY4aDQ5TEEyNXRLYmJuVi9wUU1CTXRlUHV1Y2VICk1sRmFjK1RzL2szNVdCS1gwUGhsUGZIYkJtMEkzZFdBWWU1NTFjVXArTDNYZjBNQ1g5b2RMOW1uSGxmVUR0Q08KM3Q3amdpY3I2ZmttRmJldGFGbE1NMXo3OUxrTlY5MFRhNUxCenZSOHo0OUhIMkdMTHJOT0FDOC9RNGRFeUV1MApiSklqT1VBMFdLaXh3blE2OWlBRlhPSlRSTmV3ZzdHVzVueEU5S1dlS2dCSHlyM1ZMb1kxTjlzYnNFTllCV1ZyCi8yZFowd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JUWVFsVTYrWnRGckNva0NWZ1ArSkROVlhMdwo1ekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBVHpDNVF3RkR6ekVlRUNsdjJUaDBzRmZ6bllrTmlBOFBzZjJJCktEZGRDRFRQdHVrSU1mZXc4Q3JNV2hqbGo4MTVZTTc5UGlKSEp1YTVxVGRtN3Y3NGJuQ3ZBdDJZT25ubTc1Z2YKL08vTGFRdXdUUVhHTWNwa2xZYUVXS2ExRWVRS2cxVlV5aXAyMDhRNDd3RGlPcHdJWXBIL0l1MGRuTlM2eUZaMApENFhqUTk0ZVdsVVd4RXF2RGJqY0RVOVUvVjBZMzI4S1Rsc3ozbkNTZitsV0hROFRncHRzQU94UVhtd3BuR1YyCjNuVDdsL1VYZEpZVDFMWE8yUXRCdjZuZS8zaEYwVmEzbUcrRjR1Q1pDZHhkckxSL05xK3VSaC9QY04zWkhjY2sKRmR1NG5mbEQ3eFFrTzJGRUU3b0RONFM0bm1ZSVBadmtHVHlMd2p1eTZwVk1iTnk4WFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdlEyYWhGbWo3c09jR25RL0ZDYVJkVDArU0tOalF6cFhyenBnQzhYK0NYQXVwQU4xCmVaTXcybTJMZHdUL0Vmakl5VjhJQ2Qwd3ZrbkpRZjRqREEzbzFtTUdEZ0lQUGplelczTmxwL0d3dDA3SGJpcGkKc1Z2VjhoNDlMQTI1dEtiYm5WL3BRTUJNdGVQdXVjZUhNbEZhYytUcy9rMzVXQktYMFBobFBmSGJCbTBJM2RXQQpZZTU1MWNVcCtMM1hmME1DWDlvZEw5bW5IbGZVRHRDTzN0N2pnaWNyNmZrbUZiZXRhRmxNTTF6NzlMa05WOTBUCmE1TEJ6dlI4ejQ5SEgyR0xMck5PQUM4L1E0ZEV5RXUwYkpJak9VQTBXS2l4d25RNjlpQUZYT0pUUk5ld2c3R1cKNW54RTlLV2VLZ0JIeXIzVkxvWTFOOXNic0VOWUJXVnIvMmRaMHdJREFRQUJBb0lCQVFDWXlYYTRRYzNYK0NTZgp5SlQyRHhsOVc2cUxmK2lIKzQzRDR2U2ViendZbXA1SEZUaUtzYWRJMEVGblJoVnlpOUtSMUFpbUdDbjlqNXBrCmlDUUE2UGprMHBDaEg0NzhKSDRDaWFkOWJEbjZXMk9YcUErczhPQmVWWXZ3bjRNVytjY0JUL010em52d2dDNTkKM0VCcUxROWlISUJnSWRwREVIdTdlaFF3Vk5kRFA5UGFGTjVTV01XOHFSVHpRdFZyNVpLa05KM3hnZUhBcktQNApFdTZkNnRlazRHQ1JtY0pTUzRucUJRaFE4WDhNQTdwdHlQNFhKanloRUNNdXZKd3dYY3lRWlVQVmxkeHprWHI2Ck55ZVVsQjMwa20zQ3NxUzA4MUE1QjZ6L2kvaXMyRm92Z2NORDkwTjZ0WWlyanQ4TzJiS2xPVUV5emRUZjMyQ2UKVXJlUWdnNkJBb0dCQU1YOFVqU3J1VEFwcUMrdytMakxKb1BEMnNpMnJKc2V4eTkxTkxGNG50eUZqOWIydHlRNApRNFgzUU1DV2trdTVhTkVxUGpvK2dXeU02ODh1ZWYzM1o1Tms5bWxoSVZlOStaNVBlUTBoUFRoU2NvaTV1UkJiCnhRRDJJc091dlBMRmRQUmhLR3d1N2Q1WjN5WWFNaFFUaVN6RlhFTjJ1WjhZZlkvRG9YUGpSUGtUQW9HQkFQUnoKT0tnZi9IblBpYlRZNFBpdVA1b0ZRZXN5WnRCdDFQUnhxODFQVUVGTVJpNmJiSStNN2ljRXdZT09LcmFWeW1IUwpxeksvWUp4NHRkR0RTb3VPcUNZWFJDQUdOdkhHcTBDSmxacWpEZCs4NjVqYUtzeDdwWkdSeWVnV2I1M1pQUFBFCmprbFk4eTh1SzAwWHNPRTNUUmFVRXpoNHFMRkJCRnVQZVpmNlN2UkJBb0dBZkdOOTVuZXBmdmY5SWhHSEF0c24KMUlzOXJ2TU9XTnNxZThlZ2xvdlpDMldpcklVUEpXTndFUC82SDhXNkhuZGJ3bVpPK0ZzREI0YzJORkhYOVZiMgpMU1cycHhpT1VVa2JSbnBaN0lUZ3FMMHNGbmpSSzlUc1hpRkdVRGs5bnkydHdFZzJsRm1idXlJdDBBdVBRUXZSCkdGN2JDOHZRN1lMK2lFOTU1WXg1Ymg4Q2dZQkhEek42RkFwRnNxSGFNMjE2Zk5TNlJpcjZYdVZxVTNNak4rUDAKUThrVm9rR0lqTi9LL3ZHLzMrOE0rZ2ZLbWRLQ0MwWis4d2ozazFOdk94WXhhVi9SNnROLzU2NlRLK2hlVTJCcwoybGRQSWREdTF3UzMrbjJQeW15Q0RmdVdUQzhld1pXSEZ0ZGljSzVmczdKVVZjb1A5UzE5TGY0RHdOMnViQSt4CnNTMld3UUtCZ0h5cEl0MFpOUmxJNUZVVStKV0JTdkxMaHh4QTFMQUNzWVdXcWFIdCsxZ0RKcHowNEVIbmErVkQKZGtQd1N4NUc1UzFiTmhSc3RRS1g5S004YmozVGd1ai84VHY4aVBEbWdIbE9XczR5Lzg4WDgvbWVOWlN0bTErTwp4OXAxN2ZCNjYzaXF2WWdIeFFISFhVa3dOSXBuUGE5MW1kYjRKN0loaFRmd2cxWFYxWEZqCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
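After editing, a quick way to confirm the merge worked is to list the contexts kubectl now sees (assuming KUBECONFIG points at the merged file as exported above):
kubectl config get-contexts
kubectl config current-context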
Configure the container network
The Kubernetes cluster cannot work properly until a container network is installed. Several container network solutions exist; this setup installs a basic Calico network.
vi calico_v3.15.1.yaml
---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
name: calico-config
namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }
---
# Source: calico/templates/kdd-crds.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: felixconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: FelixConfiguration
plural: felixconfigurations
singular: felixconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamblocks.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMBlock
plural: ipamblocks
singular: ipamblock
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: blockaffinities.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BlockAffinity
plural: blockaffinities
singular: blockaffinity
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamhandles.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMHandle
plural: ipamhandles
singular: ipamhandle
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ipamconfigs.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPAMConfig
plural: ipamconfigs
singular: ipamconfig
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgppeers.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPPeer
plural: bgppeers
singular: bgppeer
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: bgpconfigurations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: BGPConfiguration
plural: bgpconfigurations
singular: bgpconfiguration
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: ippools.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: IPPool
plural: ippools
singular: ippool
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: hostendpoints.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: HostEndpoint
plural: hostendpoints
singular: hostendpoint
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: clusterinformations.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: ClusterInformation
plural: clusterinformations
singular: clusterinformation
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworkpolicies.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkPolicy
plural: globalnetworkpolicies
singular: globalnetworkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: globalnetworksets.crd.projectcalico.org
spec:
scope: Cluster
group: crd.projectcalico.org
version: v1
names:
kind: GlobalNetworkSet
plural: globalnetworksets
singular: globalnetworkset
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networkpolicies.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkPolicy
plural: networkpolicies
singular: networkpolicy
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: networksets.crd.projectcalico.org
spec:
scope: Namespaced
group: crd.projectcalico.org
version: v1
names:
kind: NetworkSet
plural: networksets
singular: networkset
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: (devel)
name: kubecontrollersconfigurations.crd.projectcalico.org
spec:
group: crd.projectcalico.org
names:
kind: KubeControllersConfiguration
listKind: KubeControllersConfigurationList
plural: kubecontrollersconfigurations
singular: kubecontrollersconfiguration
scope: Cluster
versions:
- name: v1
schema:
openAPIV3Schema:
properties:
apiVersion:
description: ‘APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources’
type: string
kind:
description: ‘Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds’
type: string
metadata:
type: object
spec:
description: KubeControllersConfigurationSpec contains the values of the
Kubernetes controllers configuration.
properties:
controllers:
description: Controllers enables and configures individual Kubernetes
controllers
properties:
namespace:
description: Namespace enables and configures the namespace controller.
Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform reconciliation
with the Calico datastore. [Default: 5m]’
type: string
type: object
node:
description: Node enables and configures the node controller.
Enabled by default, set to nil to disable.
properties:
hostEndpoint:
description: HostEndpoint controls syncing nodes to host endpoints.
Disabled by default, set to nil to disable.
properties:
autoCreate:
description: ‘AutoCreate enables automatic creation of
host endpoints for every node. [Default: Disabled]’
type: string
type: object
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform reconciliation
with the Calico datastore. [Default: 5m]’
type: string
syncLabels:
description: ‘SyncLabels controls whether to copy Kubernetes
node labels to Calico nodes. [Default: Enabled]’
type: string
type: object
policy:
description: Policy enables and configures the policy controller.
Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform reconciliation
with the Calico datastore. [Default: 5m]’
type: string
type: object
serviceAccount:
description: ServiceAccount enables and configures the service
account controller. Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform reconciliation
with the Calico datastore. [Default: 5m]’
type: string
type: object
workloadEndpoint:
description: WorkloadEndpoint enables and configures the workload
endpoint controller. Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform reconciliation
with the Calico datastore. [Default: 5m]’
type: string
type: object
type: object
etcdV3CompactionPeriod:
description: ‘EtcdV3CompactionPeriod is the period between etcdv3
compaction requests. Set to 0 to disable. [Default: 10m]’
type: string
healthChecks:
description: ‘HealthChecks enables or disables support for health
checks [Default: Enabled]’
type: string
logSeverityScreen:
description: ‘LogSeverityScreen is the log severity above which logs
are sent to the stdout. [Default: Info]’
type: string
required:
- controllers
type: object
status:
description: KubeControllersConfigurationStatus represents the status
of the configuration. It’s useful for admins to be able to see the actual
config that was applied, which can be modified by environment variables
on the kube-controllers process.
properties:
environmentVars:
additionalProperties:
type: string
description: EnvironmentVars contains the environment variables on
the kube-controllers that influenced the RunningConfig.
type: object
runningConfig:
description: RunningConfig contains the effective config that is running
in the kube-controllers pod, after merging the API resource with
any environment variables.
properties:
controllers:
description: Controllers enables and configures individual Kubernetes
controllers
properties:
namespace:
description: Namespace enables and configures the namespace
controller. Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform
reconciliation with the Calico datastore. [Default:
5m]’
type: string
type: object
node:
description: Node enables and configures the node controller.
Enabled by default, set to nil to disable.
properties:
hostEndpoint:
description: HostEndpoint controls syncing nodes to host
endpoints. Disabled by default, set to nil to disable.
properties:
autoCreate:
description: ‘AutoCreate enables automatic creation
of host endpoints for every node. [Default: Disabled]’
type: string
type: object
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform
reconciliation with the Calico datastore. [Default:
5m]’
type: string
syncLabels:
description: ‘SyncLabels controls whether to copy Kubernetes
node labels to Calico nodes. [Default: Enabled]’
type: string
type: object
policy:
description: Policy enables and configures the policy controller.
Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform
reconciliation with the Calico datastore. [Default:
5m]’
type: string
type: object
serviceAccount:
description: ServiceAccount enables and configures the service
account controller. Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform
reconciliation with the Calico datastore. [Default:
5m]’
type: string
type: object
workloadEndpoint:
description: WorkloadEndpoint enables and configures the workload
endpoint controller. Enabled by default, set to nil to disable.
properties:
reconcilerPeriod:
description: ‘ReconcilerPeriod is the period to perform
reconciliation with the Calico datastore. [Default:
5m]’
type: string
type: object
type: object
etcdV3CompactionPeriod:
description: ‘EtcdV3CompactionPeriod is the period between etcdv3
compaction requests. Set to 0 to disable. [Default: 10m]’
type: string
healthChecks:
description: ‘HealthChecks enables or disables support for health
checks [Default: Enabled]’
type: string
logSeverityScreen:
description: ‘LogSeverityScreen is the log severity above which
logs are sent to the stdout. [Default: Info]’
type: string
required:
- controllers
type: object
type: object
type: object
served: true
storage: true
---
# Source: calico/templates/rbac.yaml
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
      - hostendpoints
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
      - kubecontrollersconfigurations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-kube-controllers
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-kube-controllers
subjects:
- kind: ServiceAccount
name: calico-kube-controllers
namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: calico-node
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: calico-node
subjects:
- kind: ServiceAccount
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: calico-node
namespace: kube-system
labels:
k8s-app: calico-node
spec:
selector:
matchLabels:
k8s-app: calico-node
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
template:
metadata:
labels:
k8s-app: calico-node
annotations:
# This, along with the CriticalAddonsOnly toleration below,
# marks the pod as a critical add-on, ensuring it gets
# priority scheduling and that its resources are reserved
# if it ever gets evicted.
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
hostNetwork: true
tolerations:
# Make sure calico-node gets scheduled on all nodes.
- effect: NoSchedule
operator: Exists
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- effect: NoExecute
operator: Exists
serviceAccountName: calico-node
# Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
# deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
terminationGracePeriodSeconds: 0
priorityClassName: system-node-critical
initContainers:
# This container performs upgrade from host-local IPAM to calico-ipam.
# It can be deleted if this is a fresh installation, or if you have already
# upgraded to use calico-ipam.
- name: upgrade-ipam
image: registry:5000/calico/cni:v3.15.1
command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
env:
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
volumeMounts:
- mountPath: /var/lib/cni/networks
name: host-local-net-dir
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
securityContext:
privileged: true
# This container installs the CNI binaries
# and CNI network config file on each node.
- name: install-cni
image: registry:5000/calico/cni:v3.15.1
command: ["/install-cni.sh"]
env:
# Name of the CNI config file to create.
- name: CNI_CONF_NAME
value: "10-calico.conflist"
# The CNI network config to install on each node.
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:
name: calico-config
key: cni_network_config
# Set the hostname based on the k8s node name.
- name: KUBERNETES_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# CNI MTU Config variable
- name: CNI_MTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# Prevents the container from sleeping forever.
- name: SLEEP
value: "false"
volumeMounts:
- mountPath: /host/opt/cni/bin
name: cni-bin-dir
- mountPath: /host/etc/cni/net.d
name: cni-net-dir
securityContext:
privileged: true
# Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
# to communicate with Felix over the Policy Sync API.
- name: flexvol-driver
image: registry:5000/calico/pod2daemon-flexvol:v3.15.1
volumeMounts:
- name: flexvol-driver-host
mountPath: /host/driver
securityContext:
privileged: true
containers:
# Runs calico-node container on each Kubernetes node. This
# container programs network policy and routes on each
# host.
- name: calico-node
image: registry:5000/calico/node:v3.15.1
env:
# Use Kubernetes API as the backing datastore.
- name: DATASTORE_TYPE
value: "kubernetes"
# Wait for the datastore.
- name: WAIT_FOR_DATASTORE
value: "true"
# Set based on the k8s node name.
- name: NODENAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# Choose the backend to use.
- name: CALICO_NETWORKING_BACKEND
valueFrom:
configMapKeyRef:
name: calico-config
key: calico_backend
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:
configMapKeyRef:
name: calico-config
key: veth_mtu
# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
value: "true"
# Set Felix endpoint to host default action to ACCEPT.
- name: FELIX_DEFAULTENDPOINTTOHOSTACTION
value: "ACCEPT"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "info"
- name: FELIX_HEALTHENABLED
value: "true"
securityContext:
privileged: true
resources:
requests:
cpu: 250m
livenessProbe:
exec:
command:
- /bin/calico-node
- -felix-live
- -bird-live
periodSeconds: 10
initialDelaySeconds: 10
failureThreshold: 6
readinessProbe:
exec:
command:
- /bin/calico-node
- -bird-ready
- -felix-ready
periodSeconds: 10
volumeMounts:
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- mountPath: /run/xtables.lock
name: xtables-lock
readOnly: false
- mountPath: /var/run/calico
name: var-run-calico
readOnly: false
- mountPath: /var/lib/calico
name: var-lib-calico
readOnly: false
- name: policysync
mountPath: /var/run/nodeagent
volumes:
# Used by calico-node.
- name: lib-modules
hostPath:
path: /lib/modules
- name: var-run-calico
hostPath:
path: /var/run/calico
- name: var-lib-calico
hostPath:
path: /var/lib/calico
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
# Used to install CNI.
- name: cni-bin-dir
hostPath:
path: /opt/cni/bin
- name: cni-net-dir
hostPath:
path: /etc/cni/net.d
# Mount in the directory for host-local IPAM allocations. This is
# used when upgrading from host-local to calico-ipam, and can be removed
# if not using the upgrade-ipam init container.
- name: host-local-net-dir
hostPath:
path: /var/lib/cni/networks
# Used to create per-pod Unix Domain Sockets
- name: policysync
hostPath:
type: DirectoryOrCreate
path: /var/run/nodeagent
# Used to install Flex Volume Driver
- name: flexvol-driver-host
hostPath:
type: DirectoryOrCreate
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-node
namespace: kube-system
---
# Source: calico/templates/calico-kube-controllers.yaml
# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
spec:
# The controllers can only have a single active instance.
replicas: 1
selector:
matchLabels:
k8s-app: calico-kube-controllers
strategy:
type: Recreate
template:
metadata:
name: calico-kube-controllers
namespace: kube-system
labels:
k8s-app: calico-kube-controllers
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
nodeSelector:
beta.kubernetes.io/os: linux
tolerations:
# Mark the pod as a critical add-on for rescheduling.
- key: CriticalAddonsOnly
operator: Exists
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: calico-kube-controllers
priorityClassName: system-cluster-critical
containers:
- name: calico-kube-controllers
image: registry:5000/calico/kube-controllers:v3.15.1
env:
# Choose which controllers to run.
- name: ENABLED_CONTROLLERS
value: node
- name: DATASTORE_TYPE
value: kubernetes
readinessProbe:
exec:
command:
- /usr/bin/check-status
- -r
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: calico-kube-controllers
namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml
---
# Source: calico/templates/calico-typha.yaml
---
# Source: calico/templates/configure-canal.yaml
Configure the Calico network for the data center A Kubernetes cluster
Switch kubectl to the adm@kube-dca context, i.e. the cluster whose master node is 1x2.2x.2x7.5:
kubectl config --kubeconfig=/data/pxd/config-mdc use-context adm@kube-dca
After switching, confirm that the listed nodes belong to the intended cluster:
kubectl get nodes -o wide
Apply the Calico manifest prepared above:
kubectl apply -f calico_v3.15.1.yaml
Check that the calico containers have been created:
kubectl -n kube-system get pods -o wide
Wait until the containers reach the Running state, then check that all Kubernetes nodes are Ready:
kubectl get nodes -o wide
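The readiness of Calico and of the nodes can also be checked non-interactively. The commands below are a minimal sketch, assuming this kubectl version supports kubectl wait and that the calico-node pods carry the standard k8s-app=calico-node label from the manifest above:
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=300s
kubectl wait --for=condition=Ready node --all --timeout=300s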
Configure the Calico network for the data center B Kubernetes cluster
Switch kubectl to the adm@kube-dcb context, i.e. the cluster whose master node is 1x2.2x.2x7.6:
kubectl config use-context adm@kube-dcb
After switching, confirm that the listed nodes belong to the intended cluster:
kubectl get nodes -o wide
Repeat the steps used for cluster A above to complete the Calico network configuration for cluster B. A combined check over both clusters is sketched below.
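To confirm both clusters at once, a loop over the two contexts can be used. This is a sketch and assumes both contexts (adm@kube-dca and adm@kube-dcb) are visible in the active kubeconfig, e.g. via export KUBECONFIG=/data/pxd/config-mdc:
for ctx in adm@kube-dca adm@kube-dcb; do
kubectl --context "$ctx" get nodes -o wide
kubectl --context "$ctx" -n kube-system get pods -l k8s-app=calico-node -o wide
done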
Deploy PolarDB-X on Kubernetes
Deploy cluster A first.
Install the prerequisite tools and start the containers required by the PolarDB-X cluster.
Switch kubectl to cluster A:
kubectl config use-context adm@kube-dca
kubectl get nodes -o wide
cd /data/pxd/polardbx-install
sh install.sh
Check that the containers started successfully:
kubectl get pods -n polardbx-operator-system
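Alternatively, wait for all operator components to become ready in a single command. A sketch, assuming kubectl wait is available:
kubectl -n polardbx-operator-system wait --for=condition=Ready pod --all --timeout=600s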
Deploy PolarDB-X
List the parameter templates:
kubectl get pxpt -A
Output:
NAMESPACE NAME AGE
polardbx-operator-system product-57 80m
polardbx-operator-system product-80 80m
polardbx-operator-system product-8032 80m
The templates listed above are used to configure PolarDB-X; the cluster topology file created in the next step must reference one of them. PolarDB-X 2.4 uses product-8032.
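To review what a template contains before referencing it, its YAML can be dumped (a sketch, using the same pxpt short name as above):
kubectl get pxpt product-8032 -n polardbx-operator-system -o yaml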
Edit the cluster topology file:
vi polarx_lite.yaml
apiVersion: polardbx.aliyun.com/v1
kind: PolarDBXCluster
metadata:
name: pxc-product
spec:
# Initial PolarDB-X account and password
privileges:
- username: admin
password: "123456"
type: SUPER
# Parameter template (use the production configuration)
parameterTemplate:
name: product-8032
# PolarDB-X cluster configuration
config:
# CN-related settings
cn:
# Static configuration
static:
# Use the new RPC protocol
RPCProtocolVersion: 2
# PolarDB-X cluster topology
topology:
# Cluster placement rules
rules:
# Predefined node selectors
selectors:
- name: node-cn
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: polardbx/node
operator: In
values:
- cn
- name: node-dn
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: polardbx/node
operator: In
values:
- dn
components:
# DN placement rules
dn:
nodeSets:
- name: cands
role: Candidate
replicas: 2
selector:
reference: node-dn
- name: log
role: Voter
replicas: 1
selector:
reference: node-dn
# CN placement rules
cn:
- name: cn
selector:
reference: node-cn
nodes:
# GMS specification
gms:
template:
# Storage node image
image: registry:5000/polardbx-engine:v2.4.0_8.4.19
# Use host networking
hostNetwork: true
# GMS resource requests and limits
resources:
requests:
cpu: 1
memory: 8Gi
limits:
cpu: 2
memory: 8Gi
# DN specification
dn:
# Number of DNs
replicas: 3
template:
image: registry:5000/polardbx-engine:v2.4.0_8.4.19
# Use host networking
hostNetwork: true
# DN resource requests and limits
resources:
requests:
cpu: 1
memory: 32Gi
limits:
cpu: 4
memory: 32Gi
# CN specification
cn:
# Number of CNs
replicas: 2
template:
image: registry:5000/polardbx-sql:v2.4.0_5.4.19
# Use host networking
hostNetwork: true
resources:
requests:
cpu: 2
memory: 16Gi
limits:
cpu: 4
memory: 16Gi
cdc:
# Number of CDC nodes
replicas: 2
template:
image: registry:5000/polardbx-cdc:v2.4.0_5.4.19
# Use host networking
hostNetwork: true
resources:
requests:
cpu: 1
memory: 8Gi
limits:
cpu: 2
memory: 8Gi
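The node selectors in the topology above match nodes labeled polardbx/node=cn or polardbx/node=dn. If the nodes were not already labeled in an earlier step, a minimal sketch of the labeling commands follows (the node names are placeholders to be replaced with the actual names from kubectl get nodes):
kubectl label node <cn-node-name> polardbx/node=cn --overwrite
kubectl label node <dn-node-name> polardbx/node=dn --overwrite
kubectl get nodes -L polardbx/node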
Run the following command to copy the product-8032 parameter template into the default namespace:
kubectl get pxpt product-8032 -n polardbx-operator-system -o json | jq '.metadata.namespace = "default"' | kubectl apply -f -
Run the following command to deploy the PolarDB-X database on the Kubernetes cluster:
kubectl create -f polarx_lite.yaml
Check the pod status until all containers show Running:
kubectl get pods
Confirm the PolarDB-X database status with:
kubectl get pxc pxc-product
Rebalance the container distribution
vi rebalance.yaml
apiVersion: polardbx.aliyun.com/v1
kind: SystemTask
metadata:
name: rbsystemtask
spec:
taskType: "BalanceResource"
Create the rebalance task:
kubectl apply -f rebalance.yaml
Watch the task status:
kubectl get SystemTask -w
When the task status shows Success, the automatic rebalance has completed.
Check whether the pods are now evenly distributed across the nodes:
kubectl get pods -o wide
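To see the pod count per node at a glance, the NODE column of the wide output can be tallied. A small sketch, assuming the default kubectl column layout where NODE is the seventh column:
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c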
Switch the access mode
Inside the Kubernetes cluster, the PolarDB-X database is normally exposed through a ClusterIP service. Servers outside the Kubernetes cluster cannot reach a ClusterIP, so the PolarDB-X configuration must be changed to expose the service via NodePort. Run the following command:
kubectl edit svc pxc-product
In the YAML editor, change spec.type from ClusterIP to NodePort, then save and exit.
kubectl get svc pxc-product
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
pxc-product NodePort 10.109.12.187 3306:32402/TCP,8081:30698/TCP 43h
The database can now be reached from outside the cluster through any node IP and the mapped NodePort, for example:
mysql -h 1x2.2x.2x7.11 -P32402 -u admin -p123456 -Ac
kubectl edit pxc pxc-product
In the YAML editor, change serviceType from ClusterIP to NodePort, then save and exit, so that the service type recorded in the cluster metadata matches the actual service.
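Both settings can be verified afterwards. A quick sketch; the .spec.serviceType path on the pxc object is assumed to correspond to the field edited above:
kubectl get svc pxc-product -o jsonpath='{.spec.type}'
kubectl get pxc pxc-product -o jsonpath='{.spec.serviceType}'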
Next, deploy cluster B. Switch kubectl to cluster B:
kubectl config use-context adm@kube-dcb
kubectl get nodes -o wide
Repeat the deployment steps above to complete the deployment of cluster B.
Parameter tuning
Connect to the PolarDB-X cluster (for example with the mysql command shown above) and run the following statements:
set ENABLE_SET_GLOBAL = true;
set global RECORD_SQL=false;
set global MPP_METRIC_LEVEL=0;
set global ENABLE_CPU_PROFILE=false;
set global ENABLE_BACKGROUND_STATISTIC_COLLECTION=false;
set global ENABLE_STATISTIC_FEEDBACK=false;
set global ENABLE_DEADLOCK_DETECTION=false;
set global ENABLE_TRANS_LOG=false;
set global GROUP_PARALLELISM=1;
set global CONN_POOL_MAX_POOL_SIZE=3000;
set global ENABLE_STATEMENTS_SUMMARY=false;
set global ENABLE_AUTO_SAVEPOINT=false;
set global INNODB_ADAPTIVE_HASH_INDEX=off;
set global TABLE_OPEN_CACHE = 60000;
set global SHARE_READ_VIEW = false;
set global CONN_POOL_XPROTO_XPLAN = true;
set global NEW_SEQ_GROUPING_TIMEOUT=30000;
set global XPROTO_MAX_DN_WAIT_CONNECTION=3072000;
set global XPROTO_MAX_DN_CONCURRENT=3072000;
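To apply the statements in one pass instead of typing them interactively, they can be saved to a file (tuning.sql is a placeholder name used here) and fed to the mysql client, for example:
mysql -h 1x2.2x.2x7.11 -P32402 -u admin -p123456 -Ac < tuning.sql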
Look up the administrator account
The following one-liner prints the first account name stored in the cluster secret and its password (jq -r returns the key without surrounding quotes):
pxc=pxc-product; user=$(kubectl get secret $pxc -o jsonpath="{.data}" | jq -r 'keys[0]'); echo "User: $user"; kubectl get secret $pxc -o jsonpath="{.data['$user']}" | base64 -d - | xargs echo "Password:"
Known issues:
V2.0:
The monitoring and log collection components did not start properly.
polardbx-logcollector filebeat-77qjk 0/1 ImagePullBackOff 0 24d
polardbx-logcollector filebeat-hkwdk 0/1 ImagePullBackOff 0 24d
polardbx-logcollector filebeat-zh6n9 0/1 ImagePullBackOff 0 24d
polardbx-logcollector logstash-58667b7d4-jkff8 1/1 Running 0 24d
polardbx-monitor grafana-55569cfd68-xcttq 0/1 ImagePullBackOff 0 24d
polardbx-monitor kube-state-metrics-658d95ff68-8sc4g 3/3 Running 0 24d
polardbx-monitor node-exporter-jh7jb 1/2 CrashLoopBackOff 12535 24d
polardbx-monitor node-exporter-lrc9v 0/2 CrashLoopBackOff 12529 24d
polardbx-monitor node-exporter-vf7sx 0/2 CrashLoopBackOff 12529 24d
polardbx-monitor node-exporter-x6t45 0/2 CrashLoopBackOff 12569 24d
filebeat and grafana: the versions provided by image.list were wrong; resolved by re-downloading the correct versions.
polardbx-monitor node-exporter: it could not start because port 9100 conflicted with an existing monitoring port on the host; resolved by changing the port number.
View the DaemonSet:
kubectl get ds -n polardbx-monitor
Edit it:
kubectl edit ds node-exporter -n polardbx-monitor
Change every occurrence of 9100 to 9111, then save and exit.
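The same port change can also be made non-interactively. A sketch, assuming a blanket 9100-to-9111 substitution over the DaemonSet manifest is acceptable:
kubectl -n polardbx-monitor get ds node-exporter -o yaml | sed 's/9100/9111/g' | kubectl apply -f -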
V2.4.0:
pxd download --env k8s --arch amd64 --repo "registry:5000" --dest /data/pxd/ -i images.list
This image download step can fail because the new polardbx/polardbx-engine image is too large.
During install, an incorrect component image tag caused the related pods to fail to start.
vi /data/pxd/polardbx-install/helm/operator-values.yaml
Change imageTag to v1.6.0, then install the operator manually with helm:
chmod +x /data/pxd/polardbx-install/helm/bin/helm
/data/pxd/polardbx-install/helm/bin/helm upgrade --install --create-namespace --namespace polardbx-operator-system polardbx-operator /data/pxd/polardbx-install/helm/polardbx-operator-1.6.1.tgz -f /data/pxd/polardbx-install/helm/operator-values.yaml
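Afterwards, verify that the operator release and its pods are healthy (sketch):
/data/pxd/polardbx-install/helm/bin/helm list -n polardbx-operator-system
kubectl -n polardbx-operator-system get pods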
set global parameters do not take effect permanently. The fix below raises the admin account's type in the GMS metadata database and bumps the privilege config listener version so the change is reloaded.
Run kc get xstore to locate the leader node.
Exec into the GMS pod:
kubectl exec -it pxc-product-b8m6-gms-cand-0 -- /bin/bash
myc
use polardbx_meta_db;
update user_priv set account_type=5 where user_name='admin';
update config_listener
set op_version = op_version + 1 where data_id = 'polardbx.privilege.info';
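The change can be verified from the same session before exiting; this only reads back the two rows just modified:
select user_name, account_type from user_priv where user_name='admin';
select data_id, op_version from config_listener where data_id = 'polardbx.privilege.info';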