k3s Tutorial (Part 2): Deploying a Frontend/Backend-Separated Application

Published: 2025-06-16

Deploying the Base Services

# Add the chart repositories (via a mirror)
[root@k3s-m soft]# helm repo add stable https://mirror.azure.cn/kubernetes/charts/
"stable" has been added to your repositories
[root@k3s-m soft]# helm repo add bitnami https://charts.bitnami.com/bitnami/
"bitnami" has been added to your repositories

helm repo update

# Point kubectl/helm at the k3s kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Make the setting persistent
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' | sudo tee -a /etc/profile
# Verify the setting
echo $KUBECONFIG

# List existing releases
[root@k3s-m ~]# helm list
NAME    	NAMESPACE	REVISION	UPDATED                                	STATUS  	CHART       	APP VERSION
my-mysql	default  	1       	2025-06-07 14:37:37.262899896 +0800 CST	deployed	mysql-1.6.9 	5.7.30     
mysql   	default  	1       	2025-06-06 21:54:20.065172314 +0800 CST	deployed	mysql-1.6.9 	5.7.30     
redis   	default  	1       	2025-06-06 22:33:45.539588468 +0800 CST	deployed	redis-10.5.7	5.0.7      
# Delete a release
[root@k3s-m ~]# helm delete my-mysql
release "my-mysql" uninstalled

Deploying Redis

helm search repo bitnami/redis --versions
# Note: newer chart versions may use different parameter names, which can break the standalone setup
helm install redis bitnami/redis --version 17.3.7 \
--set architecture=standalone \
--set-string auth.password=123456 \
--set replica.replicaCount=0 \
--set master.persistence.enabled=false \
--set master.persistence.medium=Memory \
--set master.persistence.sizeLimit=1Gi \
 --kubeconfig=/etc/rancher/k3s/k3s.yaml

# Copy this address for later connections
redis-master.default.svc.cluster.local
# Check the pods
[root@k3s-m helm]# kubectl get pod
NAME                        READY   STATUS    RESTARTS        AGE
mynginx                     1/1     Running   11 (134m ago)   7d20h
redis-master-0              1/1     Running   0               4m38s
NAME: redis
LAST DEPLOYED: Wed Jun 11 10:45:53 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 17.3.7
APP VERSION: 7.0.5

** Please be patient while the chart is being deployed **

Redis® can be accessed via port 6379 on the following DNS name from within your cluster:

    redis-master.default.svc.cluster.local

To get your password run:
    export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 -d)

To connect to your Redis® server:

1. Run a Redis® pod that you can use as a client:

   kubectl run --namespace default redis-client --restart='Never'  --env REDIS_PASSWORD=$REDIS_PASSWORD  --image docker.io/bitnami/redis:7.0.5-debian-11-r7 --command -- sleep infinity

   Use the following command to attach to the pod:

   kubectl exec --tty -i redis-client \
   --namespace default -- bash

2. Connect using the Redis® CLI:
   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-master

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/redis-master 6379:6379 &
    REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p 6379
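
Before moving on, it is worth a quick check that Redis answers inside the cluster. A minimal sketch, assuming the pod name redis-master-0 and the password 123456 set above (redis-cli will warn about supplying the password on the command line):

# Ping Redis from inside the redis-master-0 pod (the bitnami image ships redis-cli)
kubectl exec -it redis-master-0 -- redis-cli -a 123456 ping
# Expected reply: PONG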

Deploying MySQL

  1. Copy the sql folder from the ruoyi-vue project to the server and generate a ConfigMap from it
[root@k3s-m app]# kubectl create configmap ruoyi-init-sql --from-file=/home/app/sql
configmap/ruoyi-init-sql created
  2. Write the Helm values file ruoyi-mysql.yaml
auth:
  rootPassword: "123456"
  database: ry-vue

initdbScriptsConfigMap: ruoyi-init-sql

primary:
  persistence:
    size: 2Gi
    enabled: true

secondary:
  replicaCount: 2
  persistence:
    size: 2Gi
    enabled: true

architecture: replication
[root@k3s-m helm]# ls
ruoyi-mysql.yaml
# Note: pin the chart version, otherwise the replica count may not take effect.
[root@k3s-m helm]# helm install db -f ruoyi-mysql.yaml  bitnami/mysql   --version 9.4.1  --kubeconfig=/etc/rancher/k3s/k3s.yaml 
NAME: db
LAST DEPLOYED: Wed Jun 11 10:52:49 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.4.1
APP VERSION: 8.0.31

** Please be patient while the chart is being deployed **

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace default

Services:

  echo Primary: db-mysql-primary.default.svc.cluster.local:3306
  echo Secondary: db-mysql-secondary.default.svc.cluster.local:3306

Execute the following to get the administrator credentials:

  echo Username: root
  MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default db-mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)

To connect to your database:

  1. Run a pod that you can use as a client:

      kubectl run db-mysql-client --rm --tty -i --restart='Never' --image  docker.io/bitnami/mysql:8.0.31-debian-11-r0 --namespace default --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash

  2. To connect to primary service (read/write):

      mysql -h db-mysql-primary.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"

  3. To connect to secondary service (read-only):

      mysql -h db-mysql-secondary.default.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"

# Copy the access addresses
echo Primary: db-mysql-primary.default.svc.cluster.local:3306
echo Secondary: db-mysql-secondary.default.svc.cluster.local:3306

Testing with Port Forwarding

[root@k3s-m helm]# kubectl get svc
NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
kubernetes                    ClusterIP   10.43.0.1      <none>        443/TCP    13d
redis-headless                ClusterIP   None           <none>        6379/TCP   16m
redis-master                  ClusterIP   10.43.156.26   <none>        6379/TCP   16m
db-mysql-secondary-headless   ClusterIP   None           <none>        3306/TCP   9m11s
db-mysql-primary-headless     ClusterIP   None           <none>        3306/TCP   9m11s
db-mysql-secondary            ClusterIP   10.43.81.111   <none>        3306/TCP   9m11s
db-mysql-primary              ClusterIP   10.43.71.82    <none>        3306/TCP   9m11s

[root@k3s-m helm]# kubectl port-forward svc/db-mysql-primary --address=192.168.55.10 3306:3306
Forwarding from 192.168.55.10:3306 -> 3306
[root@k3s-m helm]# kubectl port-forward svc/redis-master --address=192.168.55.10 6379:6379
Forwarding from 192.168.55.10:6379 -> 6379
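
With both port-forwards running, the services can be checked from a machine that can reach 192.168.55.10. A minimal sketch, assuming the mysql and redis-cli clients are installed locally and the passwords (123456) set earlier:

# MySQL through the forwarded port
mysql -h 192.168.55.10 -P 3306 -uroot -p123456 -e "SHOW DATABASES;"
# Redis through the forwarded port
redis-cli -h 192.168.55.10 -p 6379 -a 123456 ping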

Building the Frontend and Backend Images

Build the backend image: docker build -t ruoyi-admin:v3.8 .

# Build stage
FROM maven AS build
WORKDIR /build/app
# Mount the local Maven directory into the container so dependency jars are not downloaded repeatedly
#VOLUME ~/.m2 /root/.m2
COPY . .
RUN mvn clean package

# Package stage
FROM openjdk:8u342-jre
WORKDIR /app/ruoyi
COPY --from=build /build/app/ruoyi-admin/target/ruoyi-admin.jar .
EXPOSE 8080
ENTRYPOINT ["java","-jar","ruoyi-admin.jar"]

Build the frontend image: docker build -t ruoyi-ui:v3.8 .

FROM node:14-alpine AS build
WORKDIR /build/ruoyi-ui
COPY . .
# Install dependencies and build the production bundle
RUN npm install --registry=https://registry.npmmirror.com && npm run build:prod

FROM nginx:1.22
WORKDIR /app/ruoyi-ui
COPY --from=build /build/ruoyi-ui/dist .
EXPOSE 80

Creating a Private Registry and Pushing/Pulling Images

# Run a local Docker registry on the host
docker run -d  -p 5000:5000  --name registry  registry:2
# k3s registry configuration: /etc/rancher/k3s/registries.yaml
mirrors:
  "192.168.55.1:5000":
    endpoint:
      - "http://192.168.55.1:5000"
    insecure: true    # trust the plain-HTTP (non-HTTPS) registry
# Restart the server (master) component
systemctl restart k3s
# Restart the agent (node) component
systemctl restart k3s-agent
# Check the generated containerd config
cat  /var/lib/rancher/k3s/agent/etc/containerd/config.toml
# Tag and push the images from the host
docker tag ruoyi-admin:v3.8 127.0.0.1:5000/ruoyi-admin:v3.8
docker push 127.0.0.1:5000/ruoyi-admin:v3.8
docker tag ruoyi-ui:v3.8 127.0.0.1:5000/ruoyi-ui:v3.8
docker push 127.0.0.1:5000/ruoyi-ui:v3.8
# Pull the images on the k3s nodes
crictl pull 192.168.55.1:5000/ruoyi-ui:v3.8
crictl pull 192.168.55.1:5000/ruoyi-admin:v3.8
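
To confirm that the pushed images actually landed in the private registry, the registry's v2 HTTP API can be queried. A minimal sketch, assuming the registry listens on 192.168.55.1:5000 as configured above:

# List the repositories stored in the registry
curl http://192.168.55.1:5000/v2/_catalog
# List the tags of a single repository
curl http://192.168.55.1:5000/v2/ruoyi-admin/tags/list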

Deploying the Frontend and Backend Applications

  1. Copy the DNS names (a quick in-cluster resolution check is sketched after the addresses)
#Redis can be accessed via port 6379 on the following DNS name from within your cluster:
redis-master.default.svc.cluster.local

#MySQL DNS NAME
Primary: 
	 db-mysql-primary.default.svc.cluster.local:3306
Secondary: 
	db-mysql-secondary.default.svc.cluster.local:3306
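
These names only resolve inside the cluster; a throwaway pod can be used to double-check them. A minimal sketch (the busybox image tag is an assumption):

# Resolve the service names from inside the cluster; the pod is deleted afterwards
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup redis-master.default.svc.cluster.local
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- nslookup db-mysql-primary.default.svc.cluster.local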

Deploying the Backend

  1. Write the configuration file application-k8s.yaml
# Data source configuration
spring:
    # redis configuration
    redis:
        # host
        host: redis-master
        # port, defaults to 6379
        port: 6379
        # database index
        database: 0
        # password
        password: 123456
        # connection timeout
        timeout: 10s
        lettuce:
            pool:
                # minimum idle connections in the pool
                min-idle: 0
                # maximum idle connections in the pool
                max-idle: 8
                # maximum active connections in the pool
                max-active: 8
                # maximum blocking wait time of the pool (a negative value means no limit)
                max-wait: -1ms
    datasource:
        type: com.alibaba.druid.pool.DruidDataSource
        driverClassName: com.mysql.cj.jdbc.Driver
        druid:
            # primary data source
            master:
                url: jdbc:mysql://db-mysql-primary:3306/ry-vue?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
                username: root
                password: 123456
            # replica data source
            slave:
                # replica data source switch, disabled by default
                enabled: true
                url: jdbc:mysql://db-mysql-secondary:3306/ry-vue?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
                username: root
                password: 123456
            # initial number of connections
            initialSize: 5
            # minimum pool size
            minIdle: 10
            # maximum pool size
            maxActive: 20
            # maximum wait time when acquiring a connection
            maxWait: 60000
            # connect timeout
            connectTimeout: 30000
            # socket (network) timeout
            socketTimeout: 60000
            # interval between checks for idle connections that should be closed, in milliseconds
            timeBetweenEvictionRunsMillis: 60000
            # minimum time a connection stays in the pool, in milliseconds
            minEvictableIdleTimeMillis: 300000
            # maximum time a connection stays in the pool, in milliseconds
            maxEvictableIdleTimeMillis: 900000
            # query used to check that a connection is still valid
            validationQuery: SELECT 1 FROM DUAL
            testWhileIdle: true
            testOnBorrow: false
            testOnReturn: false
            webStatFilter:
                enabled: true
            statViewServlet:
                enabled: true
                # allowlist; leave empty to allow all access
                allow:
                url-pattern: /druid/*
                # console admin username and password
                login-username: ruoyi
                login-password: 123456
            filter:
                stat:
                    enabled: true
                    # slow SQL logging
                    log-slow-sql: true
                    slow-sql-millis: 1000
                    merge-sql: true
                wall:
                    config:
                        multi-statement-allow: true
  2. Generate a ConfigMap from the configuration file
[root@k3s-m app]# kubectl create configmap ruoyi-admin-config --from-file=/home/app/application-k8s.yaml
configmap/ruoyi-admin-config created
  3. Write the deployment manifest ruoyi-admin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-admin
  labels:
    app: ruoyi-admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-admin
  template:
    metadata:
      labels:
        app: ruoyi-admin
    spec:
      containers:
        - name: ruoyi-admin
          image: 192.168.55.1:5000/ruoyi-admin:v3.8
          ports:
            - containerPort: 8080
          volumeMounts:
            # On startup, Spring Boot looks for config files in the config directory next to the jar.
            # The jar lives in the WORKDIR defined in the Dockerfile, i.e. /app/ruoyi
            - mountPath: /app/ruoyi/config
              name: config
          # Use application-k8s.yaml as the configuration file.
          # Resulting start command: java -jar ruoyi-admin.jar --spring.profiles.active=k8s
          args: ["--spring.profiles.active=k8s"]
      volumes:
        - name: config
          configMap:
            name: ruoyi-admin-config
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-admin
spec:
  type: ClusterIP
  selector:
    app: ruoyi-admin
  ports:
    - port: 8080
      targetPort: 8080
  4. Run the deployment commands (a quick check through the port-forward follows)
kubectl apply -f ruoyi-admin.yaml 
kubectl get pods
kubectl logs -f ruoyi-admin-559d7f64c5-vx2lc 

kubectl get svc
kubectl port-forward svc/ruoyi-admin --address=192.168.55.10 8080:8080
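
With the port-forward active, a plain HTTP request from the workstation is enough to confirm the backend is reachable; any HTTP response, even an error page, shows the service is up:

curl -i http://192.168.55.10:8080/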

Deploying the Frontend

  1. Write the nginx configuration
server {
    listen       80;
    server_name  localhost;
    charset utf-8;

    location / {
        # WORKDIR defined in the Dockerfile
        root   /app/ruoyi-ui;
        try_files $uri $uri/ /index.html;
        index  index.html index.htm;
    }

    location /prod-api/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # DNS name of the backend Service
        proxy_pass http://ruoyi-admin:8080/;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}
  2. Generate a ConfigMap from nginx.conf
[root@k3s-m app]# kubectl create configmap ruoyi-ui-config --from-file=/home/app/nginx.conf 
configmap/ruoyi-ui-config created
  3. Write the k3s deployment manifest ruoyi-ui.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-ui
  labels:
    app: ruoyi-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-ui
  template:
    metadata:
      labels:
        app: ruoyi-ui
    spec:
      containers:
        - name: ruoyi-ui
          image: 192.168.55.1:5000/ruoyi-ui:v3.8
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: config
      volumes:
        - name: config
          configMap:
            name: ruoyi-ui-config
            items:
              - key: nginx.conf
                path: default.conf
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-ui
spec:
  type: NodePort
  selector:
    app: ruoyi-ui
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  4. Run the deployment commands
kubectl apply -f ruoyi-ui.yaml 
kubectl get pods
  5. Open the frontend page: http://192.168.55.10:30080/index

Startup Order and Init Containers

We can use init containers (Init Containers) to control startup order.
● Init containers in a Pod start before the application containers.
● The application containers do not start until the init containers have finished.
● Multiple init containers run in sequence; each one must complete before the next starts.

Modifying the Frontend Deployment: Waiting for Readiness with an until/do Loop

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-ui
  labels:
    app: ruoyi-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-ui
  template:
    metadata:
      labels:
        app: ruoyi-ui
    spec:
      # Changed here: add an init container that checks the backend
      initContainers:
        - name: wait-for-ruoyi-admin  # init container name
          image: nginx:1.22           # image used to run the curl check
          command:   # loop until the backend responds: curl every 5 seconds, -m 3 sets a 3-second timeout
            - sh
            - -c
            - |
              until curl -m 3 ruoyi-admin:8080 
              do 
                echo waiting for ruoyi-admin;
                sleep 5;
              done
      containers:
        - name: ruoyi-ui
          image: 192.168.55.1:5000/ruoyi-ui:v3.8
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: config
      volumes:
        - name: config
          configMap:
            name: ruoyi-ui-config
            items:
              - key: nginx.conf
                path: default.conf
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-ui
spec:
  type: NodePort
  selector:
    app: ruoyi-ui
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
[root@k3s-m app]# kubectl delete -f ruoyi-ui.yaml 
[root@k3s-m app]# kubectl apply -f ruoyi-ui2.yaml 
[root@k3s-m app]# kubectl get pods -owide --watch
ruoyi-ui-787bbfb854-9gl7h      0/1     Terminating   2 (81m ago)    22h   10.42.1.101   k3s-w1   <none>           <none>
ruoyi-ui-b6bc44dd6-9rc48       0/1     Pending       0              0s    <none>        <none>   <none>           <none>
ruoyi-ui-b6bc44dd6-9rc48       0/1     Pending       0              0s    <none>        k3s-w1   <none>           <none>
ruoyi-ui-b6bc44dd6-9rc48       0/1     Init:0/1      0              0s    <none>        k3s-w1   <none>           <none>
ruoyi-ui-b6bc44dd6-9rc48       0/1     Init:0/1      0              3s    10.42.1.102   k3s-w1   <none>           <none>
ruoyi-ui-b6bc44dd6-9rc48       0/1     PodInitializing   0              5s    10.42.1.102   k3s-w1   <none>           <none>
ruoyi-ui-b6bc44dd6-9rc48       1/1     Running           0              6s    10.42.1.102   k3s-w1   <none>           <none>
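
While a pod is still in Init:0/1, the wait loop's output can be followed with kubectl logs -c, using the pod name from the watch output above:

# Follow the init container's log; it prints "waiting for ruoyi-admin" every 5 seconds until the backend answers
kubectl logs -f ruoyi-ui-b6bc44dd6-9rc48 -c wait-for-ruoyi-admin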

Modifying the Backend Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-admin
  labels:
    app: ruoyi-admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-admin
  template:
    metadata:
      labels:
        app: ruoyi-admin
    spec:
      initContainers:
        - name: wait-for-mysql
          image: bitnami/mysql:8.0.31-debian-11-r0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          command:
            - sh
            - -c
            - |
              set -e
              maxTries=10
              while [ "$$maxTries" -gt 0 ] \
                    && ! mysqladmin ping --connect-timeout=3 -s \
                                    -hdb-mysql-primary -uroot -p$$MYSQL_ROOT_PASSWORD
              do 
                  echo 'Waiting for MySQL to be available'
                  sleep 5
                  maxTries=$$((maxTries - 1))
              done
              if [ "$$maxTries" -le 0 ]; then
                  echo >&2 'error: unable to contact MySQL after 10 tries'
                  exit 1
              fi
        - name: wait-for-redis
          image: bitnami/redis:7.0.5-debian-11-r7
          env:
            - name: REDIS_PASSWORD
              value: "123456"
          command:
            - sh
            - -c
            - |
              set -e
              maxTries=10
              while [ "$$maxTries" -gt 0 ] \
                    && ! timeout 3 redis-cli -h redis-master -a $$REDIS_PASSWORD ping
              do 
                  echo 'Waiting for Redis to be available'
                  sleep 5
                  maxTries=$$((maxTries - 1))
              done
              if [ "$$maxTries" -le 0 ]; then
                  echo >&2 'error: unable to contact Redis after 10 tries'
                  exit 1
              fi
      containers:
        - name: ruoyi-admin
          image: 192.168.55.1:5000/ruoyi-admin:v3.8
          ports:
            - containerPort: 8080
          volumeMounts:
            # On startup, Spring Boot looks for config files in the config directory next to the jar.
            # The jar lives in the WORKDIR defined in the Dockerfile, i.e. /app/ruoyi
            - mountPath: /app/ruoyi/config
              name: config
          # Use application-k8s.yaml as the configuration file.
          # Resulting start command: java -jar ruoyi-admin.jar --spring.profiles.active=k8s
          args: ["--spring.profiles.active=k8s"]
      volumes:
        - name: config
          configMap:
            name: ruoyi-admin-config
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-admin
spec:
  type: ClusterIP
  selector:
    app: ruoyi-admin
  ports:
    - port: 8080
      targetPort: 8080
kubectl delete -f ruoyi-admin.yaml 
kubectl apply -f ruoyi-admin2.yaml 
[root@k3s-m app]# kubectl get pods -owide --watch
ruoyi-admin-559d7f64c5-vx2lc   0/1     Terminating       4              23h   10.42.0.131   k3s-m    <none>           <none>
ruoyi-admin-56d5b45cbc-52j72   0/1     Pending           0              0s    <none>        <none>   <none>           <none>
ruoyi-admin-56d5b45cbc-52j72   0/1     Pending           0              0s    <none>        k3s-m    <none>           <none>
ruoyi-admin-56d5b45cbc-52j72   0/1     Init:0/2          0              0s    <none>        k3s-m    <none>           <none>
ruoyi-admin-56d5b45cbc-52j72   0/1     Init:1/2          0              2s    10.42.0.133   k3s-m    <none>           <none>
ruoyi-admin-56d5b45cbc-52j72   0/1     Init:1/2          0              9s    10.42.0.133   k3s-m    <none>           <none>
ruoyi-admin-56d5b45cbc-52j72   0/1     PodInitializing   0              10s   10.42.0.133   k3s-m    <none>           <none>
ruoyi-admin-56d5b45cbc-52j72   1/1     Running           0              11s   10.42.0.133   k3s-m    <none>           <none>

Although the until/do loop does wait for the dependent service to become ready, it is an infinite loop. A better approach is to set a maximum number of retries; once that limit is exceeded, the init container exits with a failure status and the Pod's startup is aborted.
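
The same idea can be applied to the frontend's curl-based wait. A minimal sketch of a bounded loop that could replace the init container command shown earlier (the limit of 10 attempts is an arbitrary choice):

# Give up after 10 attempts instead of looping forever
maxTries=10
until curl -m 3 ruoyi-admin:8080
do
    maxTries=$((maxTries - 1))
    if [ "$maxTries" -le 0 ]; then
        echo >&2 'error: ruoyi-admin still unreachable, giving up'
        exit 1
    fi
    echo waiting for ruoyi-admin
    sleep 5
done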

Ingress

What Ingress Does

It works much like an Nginx server:

  1. URL routing rules
  2. Load balancing, traffic splitting, and traffic limiting
  3. HTTPS configuration
  4. Name-based virtual hosting

To create Ingress resources you normally need to deploy an Ingress controller first, such as ingress-nginx.
Different controllers differ in usage and configuration.
k3s ships with a Traefik-based Ingress controller, so we can create Ingress resources directly without installing a controller.

Note: Ingress only exposes HTTP and HTTPS services to the outside. To expose other types of services, use a NodePort or LoadBalancer Service.

Deploying an Ingress

  1. Write the Ingress manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ruoyi-ingress
spec:
  rules:
    - http:
        paths:
          - path: /    # Note: this path must match the location in ruoyi-ui's nginx.conf, otherwise requests will fail.
            pathType: Prefix
            backend:
              service:
                name: ruoyi-ui
                port:
                  number: 80
  2. Run the deployment commands
[root@k3s-m app]# vi ruoyi-ingress.yaml
[root@k3s-m app]# kubectl apply -f ruoyi-ingress.yaml 
ingress.networking.k8s.io/ruoyi-ingress created
[root@k3s-m app]# kubectl get ingress
NAME            CLASS    HOSTS   ADDRESS                       PORTS   AGE
ruoyi-ingress   <none>   *       192.168.55.10,192.168.55.11   80      8s
[root@k3s-m app]# kubectl describe ingress
Name:             ruoyi-ingress
Labels:           <none>
Namespace:        default
Address:          192.168.55.10,192.168.55.11
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /   ruoyi-ui:80 (10.42.1.102:80)
Annotations:  <none>
Events:       <none>
  3. Open: http://192.168.55.10

Ingress Path Types

Every path in an Ingress must have a path type (Path Type). Three path types are currently supported:
Exact: matches the URL path exactly; case-sensitive.
Prefix: matches on a URL path prefix; case-sensitive, evaluated element by element along the path.
(Example: /foo/bar matches /foo/bar/baz, but not /foo/barbaz.)
ImplementationSpecific: matching depends on the processing logic defined by the IngressClass.

Deploying an Ingress (Host-Based Matching)

  1. Write the Ingress manifest
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ruoyi-ingress
spec:
  rules:
    # Similar to nginx virtual-host configuration
    - host: "front.ruoyi.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: ruoyi-ui
                port:
                  number: 80
    - host: "backend.ruoyi.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: ruoyi-admin
                port:
                  number: 8080
  2. Apply the manifest
[root@k3s-m app]# vi ruoyi-ingress2.yaml
[root@k3s-m app]# kubectl apply -f ruoyi-ingress2.yaml 
ingress.networking.k8s.io/ruoyi-ingress configured
[root@k3s-m app]# kubectl get ingress
NAME            CLASS    HOSTS                               ADDRESS                       PORTS   AGE
ruoyi-ingress   <none>   front.ruoyi.com,backend.ruoyi.com   192.168.55.10,192.168.55.11   80      13m
  3. Access the frontend and backend
    After adding the mappings below to the hosts file, browse to the domain names (a curl-only alternative is sketched after the mappings).
192.168.55.10  front.ruoyi.com
192.168.55.10  backend.ruoyi.com
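
Alternatively, curl can pin the name resolution itself, so the hosts file does not need to be edited; a minimal sketch:

curl --resolve front.ruoyi.com:80:192.168.55.10 http://front.ruoyi.com/
curl --resolve backend.ruoyi.com:80:192.168.55.10 http://backend.ruoyi.com/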
