Containerized Builds on Kubernetes 1.28 Without a Docker Runtime: An End-to-End Kaniko + Jenkins Pipeline

Published: 2025-05-17
Background

Kubernetes removed the built-in Docker integration (dockershim) in v1.24, so a typical 1.28 cluster runs containerd or CRI-O and the traditional docker build can no longer be executed directly inside the cluster. Kaniko, an open-source image builder from Google, builds OCI-compliant images without a Docker daemon or privileged containers, which makes it the natural replacement for in-cluster builds. This article sets up a fully containerized build pipeline on the following stack:

  • Build tool: Kaniko (version ≥ v1.9.0)
  • Orchestration platform: Kubernetes 1.28 (the cluster must support Ephemeral Containers)
  • CI/CD engine: Jenkins (deployed as containers on K8s)

Part 1: The Dockerfile for the Kaniko toolchain image

FROM gcr.io/kaniko-project/executor:latest AS kaniko-executor
FROM gcr.io/kaniko-project/warmer:latest AS kaniko-warmer

FROM debian:11-slim AS builder

# Run apt non-interactively so prompts do not block the install
ENV DEBIAN_FRONTEND=noninteractive

# Install build-time dependencies (compiler toolchain and download tools)
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        g++ make git curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# Install kubectl
ARG KUBECTL_VERSION=v1.30.0
RUN curl -LO "https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl" && \
    install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && \
    rm -f kubectl

# Install jsonnet (built from source with the toolchain above)
RUN git clone https://github.com/google/jsonnet.git /tmp/jsonnet && \
    cd /tmp/jsonnet && \
    make && \
    cp jsonnet /usr/local/bin/ && \
    cd / && \
    rm -rf /tmp/jsonnet

FROM debian:11-slim
# Final stage: copy the tools from the build stages into the production image

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    git \
    && rm -rf /var/lib/apt/lists/*

COPY --from=kaniko-executor /kaniko/executor /usr/local/bin/kaniko
COPY --from=kaniko-warmer /kaniko/warmer /usr/local/bin/warmer
COPY --from=builder /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=builder /usr/local/bin/jsonnet /usr/local/bin/jsonnet

ENV DOCKER_CONFIG=/kaniko/.docker

RUN mkdir -p /kaniko/.docker/
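
The image above is the kaniko-kubectl-jsonnet toolchain image referenced in Part 2. Since the cluster has no Docker daemon, it has to be bootstrapped once from outside. The sketch below assumes a workstation that can still run the executor image with Docker (any other way of invoking the executor works the same) and that registry credentials already exist in a local config.json:

# One-time bootstrap: build the toolchain image with the Kaniko executor itself.
# Assumption: run from a workstation that still has Docker available.
docker run --rm \
  -v "$PWD":/workspace \
  -v "$HOME/.docker/config.json":/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:latest \
  --context=dir:///workspace \
  --dockerfile=/workspace/Dockerfile \
  --destination=xxx.cr.aliyuncs.com/public/kaniko-kubectl-jsonnet:v3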

Part 2: Jenkins + Kaniko integration (JCasC excerpt)

  clouds:
  - kubernetes:
      containerCapStr: "10"
      defaultsProviderTemplate: ""
      connectTimeout: "5"
      readTimeout: "15"
      jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
      jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
      skipTlsVerify: false
      usageRestricted: false
      maxRequestsPerHostStr: "32"
      retentionTimeout: "5"
      waitForPodSec: "600"
      name: "kubernetes"
      namespace: "jenkins"
      restrictedPssSecurityContext: false
      serverUrl: "https://kubernetes.default"
      credentialsId: ""
      podLabels:
      - key: "jenkins/jenkins-jenkins-agent"
        value: "true"
      templates:
        - name: "default"
          namespace: "jenkins"
          containers:
          - name: "jnlp"
            alwaysPullImage: false
            args: "^${computer.jnlpmac} ^${computer.name}"
            envVars:
              - envVar:
                  key: "JENKINS_URL"
                  value: "http://jenkins.jenkins.svc.cluster.local:8080/"
            image: "jenkins/inbound-agent:3283.v92c105e0f819-7"
            privileged: "false"
            resourceLimitCpu: 512m
            resourceLimitMemory: 512Mi
            resourceRequestCpu: 512m
            resourceRequestMemory: 512Mi
            ttyEnabled: false
            workingDir: /home/jenkins/agent
          idleMinutes: 0
          instanceCap: 2147483647
          label: "jenkins-jenkins-agent "
          nodeUsageMode: "NORMAL"
          podRetention: Never
          showRawYaml: true
          serviceAccount: "default"
          slaveConnectTimeoutStr: "100"
          yamlMergeStrategy: override
          inheritYamlMergeStrategy: false
        - name: maven
          label: jenkins-maven
          showRawYaml: true
          containers:
          - name: maven
            image: xxx.cr.aliyuncs.com/public/maven-kubectl-jsonnet:v4
            envVars:
              - envVar:
                  key: "TZ"
                  value: "Asia/Shanghai"
            command: cat
            args: ""
            ttyEnabled: true
          - name: kubedock
            image: joyrex2001/kubedock:latest
            command: "/usr/local/bin/kubedock"
            args: "server --reverse-proxy --pre-archive --timeout=2m"
            ttyEnabled: true
          imagePullSecrets:
            - name: "docker-registry"
          serviceAccount: "jenkins"
          volumes:
          - persistentVolumeClaim:
              claimName: jenkins-maven-agent-pvc
              mountPath: /root/.m2
          - configMapVolume:
              configMapName: kubeconfig
              mountPath: /root/.kube/prod-config
              subPath: prod-config
          - configMapVolume:
              configMapName: kubeconfig
              mountPath: /root/.kube/test-config
              subPath: test-config
        - name: kaniko
          label: jenkins-kaniko
          showRawYaml: true
          containers:
          - name: kaniko
            image: xxx.cr.aliyuncs.com/public/kaniko-kubectl-jsonnet:v3
            resourceRequestEphemeralStorage: 6Gi
            resourceLimitEphemeralStorage: 10Gi
            envVars:
              - envVar:
                  key: "TZ"
                  value: "Asia/Shanghai"
            command: cat
            alwaysPullImage: true
            ttyEnabled: true
          imagePullSecrets:
            - name: "registry-tmp"
          volumes:
          - persistentVolumeClaim:
              claimName: kaniko-cache-pvc
              mountPath: /cache
          - configMapVolume:
              configMapName: kubeconfig
              mountPath: /root/.kube/prod-config
              subPath: prod-config
          - configMapVolume:
              configMapName: kubeconfig
              mountPath: /root/.kube/test-config
              subPath: test-config
          - secretVolume:
              secretName: kaniko-registry
              mountPath: /kaniko/.docker
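
The kaniko template above references several objects that must already exist in the jenkins namespace before the agent pod can start: the kaniko-registry secret mounted at /kaniko/.docker (push credentials), the kubeconfig ConfigMap providing prod-config and test-config, the kaniko-cache-pvc used for the layer cache, and the registry-tmp imagePullSecret. A hedged sketch for creating the first two with kubectl is shown below; the local file paths are assumptions:

# Push credentials for Kaniko: the secretVolume "kaniko-registry" is mounted at
# /kaniko/.docker, so the key must be named config.json (what the executor reads).
kubectl -n jenkins create secret generic kaniko-registry \
  --from-file=config.json="$HOME/.docker/config.json"

# kubeconfig ConfigMap with the prod-config / test-config keys mounted above
# (local file names are placeholders):
kubectl -n jenkins create configmap kubeconfig \
  --from-file=prod-config=./prod-config \
  --from-file=test-config=./test-config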

Part 3: The Jenkins pipeline

// Uses Declarative syntax to run commands inside a container.
pipeline {
    agent {
        kubernetes {
            inheritFrom 'kaniko'
            defaultContainer 'kaniko'
        }
    }
    stages {
        stage('checkout') {
            steps {
                git branch: 'master', credentialsId: 'gitlab', url: 'git@xxx:backend/xxx.git'
            }
        }
        stage('warmer') {
            steps {
                script {
                    sh(label: 'kaniko warmer', script: "warmer --skip-tls-verify-registry=index.docker.io --cache-dir=/cache/xxx --dockerfile=./Dockerfile")
                }
            }
        }
        stage('build') {
            steps {
                script {
                    sh(label: 'kaniko build', script: "kaniko --skip-tls-verify --cache=true --cache-dir=/cache/xxx -f Dockerfile -c ./ -d xxx.cr.aliyuncs.com/packages/xxx:v4")
                }
            }
        }
    }
}
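
The pipeline only warms the cache and builds and pushes the image; the kubectl and jsonnet binaries baked into the toolchain image are intended for a follow-up deployment step. The commands below sketch what such an sh step could run; deploy.jsonnet and the image external variable are hypothetical, while the image tag and the kubeconfig path come from the earlier parts:

# Hypothetical deploy step, run inside the same agent container via sh():
# render the manifest with jsonnet, then apply it against the test cluster.
jsonnet --ext-str image=xxx.cr.aliyuncs.com/packages/xxx:v4 deploy.jsonnet > deploy.json
kubectl --kubeconfig /root/.kube/test-config apply -f deploy.json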

Conclusion

This article lays out a practical framework for building container images in environments without a Docker runtime. Jenkins itself was installed with the Helm chart, so the installation details are not covered here; the Jenkins configuration is managed through JCasC (Jenkins Configuration as Code).
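
For reference, installing with the community chart typically looks like the sketch below; wiring the Part 2 JCasC fragment through a values.yaml is an assumption about the setup:

# Community Jenkins chart; values.yaml would embed the JCasC block from Part 2.
helm repo add jenkins https://charts.jenkins.io
helm upgrade --install jenkins jenkins/jenkins \
  --namespace jenkins --create-namespace \
  -f values.yaml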

