Winter's Life

Tech sharing | Software development | Product reviews | Consulting | Remote assistance | Life notes | Industry news

K8S

Deploying a Kubernetes Cluster with kubespray

By winter.yu
February 11, 2026 · 11 min read

1. Server Requirements

1.1 Node requirements

  • Node count: at least 3
  • CPU: at least 2 cores per node
  • Memory: at least 2 GB per node
  • Security group: disabled (allow traffic between nodes on any port, including the IPIP tunnel protocol)

1.2 Environment

We use three CentOS 7.6 virtual machines:

OS          IP address      Role    CPU   Memory   Hostname
centos-7.6  10.211.55.14    master  >=2   >=2G     node-1
centos-7.6  10.211.55.15    master  >=2   >=2G     node-2
centos-7.6  10.211.55.16    worker  >=2   >=2G     node-3

1.3 Internet access via proxy

Note: the installation needs to reach many external addresses (google, github, docker.io, k8s.io, etc.). Only one machine in the environment needs outbound internet access to act as a proxy.

The servers must be able to reach the internet. Using substitute mirror sources requires changes in many places, and it is easy to miss one and break the installation, so either set up a proxy or deploy in an environment with direct outbound access.

1.3.1 Buy a Shadowsocks service

https://portal.shadowsocks.au/login

1.3.2 Set up trojan + privoxy for outbound access

See the document Linux trojan proxy for the detailed steps.

Note: with some proxy modes, yum reports 503 errors once the proxy is enabled, so privoxy's forward policy needs adjusting.

Replace the catch-all rule forward-socks5t / 127.0.0.1:1080 . with:

forward-socks5t .google.com 127.0.0.1:1080 .
forward-socks5t .googleapis.com 127.0.0.1:1080 .
forward-socks5t .githubusercontent.com 127.0.0.1:1080 .
forward-socks5t .github.com 127.0.0.1:1080 .
forward-socks5t .docker.io 127.0.0.1:1080 .
forward-socks5t .k8s.io 127.0.0.1:1080 .
forward-socks5t .pkg.dev 127.0.0.1:1080 .

Only these addresses are forwarded externally.
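The rule list above can also be generated with a short loop. A sketch that writes the rules to a scratch file for review before merging them into your privoxy config (the config path on your system may differ):

```shell
# Generate the selective forward-socks5t rules into a temp file.
cfg=$(mktemp)
for d in google.com googleapis.com githubusercontent.com github.com docker.io k8s.io pkg.dev; do
  printf 'forward-socks5t .%s 127.0.0.1:1080 .\n' "$d" >> "$cfg"
done
cat "$cfg"   # review, then merge into the privoxy config and restart privoxy
```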

Edit /root/.bash_profile on every node and add environment variables to set a command-line proxy:

export https_proxy=http://10.211.55.14:8118
export http_proxy=http://10.211.55.14:8118
export all_proxy=http://10.211.55.14:8118

Note: 10.211.55.14 is the proxy server running trojan + privoxy with outbound access.

Load the configuration:

source /root/.bash_profile

2. System Setup (all nodes)

Note: run all commands as the root user.

2.1 Hostname

Hostnames must be valid and unique across nodes (recommended convention: digits, letters, and hyphens only; avoid other special characters).

# Show the current hostname

$ hostname

# Set the hostname

$ hostnamectl set-hostname <new-hostname>
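An invalid hostname often only surfaces as a failure much later in the ansible run, so a quick pre-check helps. A minimal sketch; the regex encodes the digits/letters/hyphens convention recommended above:

```shell
# Accept only lowercase letters, digits, and hyphens,
# with no leading or trailing hyphen.
valid_hostname() {
  printf '%s' "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}
valid_hostname node-1 && echo "node-1 ok"
valid_hostname node_1 || echo "node_1 rejected"
```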

2.2 Disable the firewall, SELinux, and swap; reset iptables

# Disable SELinux

$ setenforce 0
$ sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

# Stop and disable the firewall

$ systemctl stop firewalld && systemctl disable firewalld

# Reset iptables rules

$ iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# Turn off swap

$ swapoff -a && free -h

# Stop dnsmasq (otherwise containers may fail to resolve domain names)

$ service dnsmasq stop && systemctl disable dnsmasq

2.3 Kernel parameters for k8s

# Create the config file

$ cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
vm.overcommit_memory = 1
EOF

# Apply the settings

$ sysctl -p /etc/sysctl.d/kubernetes.conf

2.4 etcd v3 API environment variable

Note: required when calling the etcd API with the v3 protocol.

# Edit /etc/profile

$ vi /etc/profile

# Append the following line at the end of the file

export ETCDCTL_API=3

# Load the configuration

$ source /etc/profile

2.5 Remove existing docker packages (optional)

$ yum remove -y docker*
$ rm -f /etc/docker/daemon.json

3. Deploying the Cluster with kubespray

This part only needs to run on a single operator node, which can be one of the cluster nodes, a machine outside the cluster, or even your own laptop. Here we use the more common choice: an arbitrary Linux node inside the cluster.

3.1 Passwordless SSH

# 1. Generate a keypair (run ssh-keygen and press Enter through all prompts)

$ ssh-keygen

# 2. Show and copy the generated pubkey

$ cat /root/.ssh/id_rsa.pub

# 3. Log in to each node and append the pubkey to /root/.ssh/authorized_keys

$ mkdir -p /root/.ssh
$ echo "<pubkey from step 2>" >> /root/.ssh/authorized_keys

Afterwards, test ssh from the operator node to each node; when no password is requested, the setup is complete.
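Step 3 can be scripted per node. The sketch below demonstrates it against a temporary directory instead of the real /root/.ssh, with a placeholder pubkey string. Note the permissions: sshd refuses to use the key if .ssh is not mode 700 and authorized_keys not 600.

```shell
home=$(mktemp -d)                            # stands in for /root on a target node
mkdir -p "$home/.ssh" && chmod 700 "$home/.ssh"
pubkey='ssh-rsa AAAAB3Nza... root@node-1'    # placeholder: paste the key from step 2
echo "$pubkey" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
ls -l "$home/.ssh/authorized_keys"
```

In practice, ssh-copy-id root@<node-ip> from the operator node performs the same append (and permission fixes) in one step.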

3.2 Downloading and installing dependencies

Download the kubespray release matching the k8s version you need; the latest release is recommended. This guide uses the current latest, 2.21.0, which supports Python 3.11, the Python version we will use for the deployment.

3.2.1 Component versions for kubespray 2.21.0
  • Core
    • kubernetes v1.25.6
    • etcd v3.5.6
    • docker v20.10 (cri_dockerd: v0.3.0)
    • containerd v1.6.15
    • cri-o v1.24
  • Network Plugin
    • cni-plugins v1.2.0
    • calico v3.24.5
    • cilium v1.12.1
    • flannel v0.20.2
    • kube-ovn v1.10.7
    • kube-router v1.5.1
    • multus v3.8
    • weave v2.8.1
    • kube-vip v0.5.5
  • Application
    • cert-manager v1.11.0
    • coredns v1.9.3
    • ingress-nginx v1.5.1
    • krew v0.4.3
    • argocd v2.5.7
    • helm v3.10.3
    • metallb v0.12.1
    • registry v2.8.1
  • Storage Plugin
    • cephfs-provisioner v2.1.0-k8s1.11
    • rbd-provisioner v2.1.1-k8s1.11
    • aws-ebs-csi-plugin v0.5.0
    • azure-csi-plugin v1.10.0
    • cinder-csi-plugin v1.22.0
    • gcp-pd-csi-plugin v1.4.0
    • local-path-provisioner v0.0.22
    • local-volume-provisioner v2.5.0
3.2.2 Installing Python 3.11 on CentOS 7

The installation is fairly involved and is not repeated here; see the following article:

Installing Python 3.11 on CentOS 7

Note: pip requires SSL support, and Python 3.11 requires OpenSSL 1.1. If the system's default openssl has not been upgraded, the compiled Python will be missing the ssl module.

3.2.3 Downloading kubespray

kubespray on GitHub

# Download the release tarball

wget https://github.com/kubernetes-sigs/kubespray/archive/refs/tags/v2.21.0.tar.gz

# Extract it

tar -xvf v2.21.0.tar.gz && cd kubespray-2.21.0

# Install the requirements

cat requirements.txt
pip3 install -r requirements.txt

# If the install fails (e.g. the ruamel package won't install), try upgrading pip first

pip3 install --upgrade pip

Note: make sure every dependency installs correctly, otherwise the deployment will fail with hard-to-diagnose errors.
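To keep kubespray's Python dependencies from colliding with system packages, they can also be installed into a dedicated virtualenv. A sketch under assumptions: the venv path is arbitrary, and on your nodes you would substitute the python3.11 binary built in section 3.2.2 for plain python3:

```shell
# Create an isolated environment for kubespray's requirements.
venv=$(mktemp -d)/kubespray-venv
python3 -m venv "$venv"                      # substitute your python3.11 binary
"$venv/bin/python" --version
# . "$venv/bin/activate" && pip install -r requirements.txt
```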

3.3 Generating the configuration

The project ships a sample cluster configuration in inventory/sample; copy it to create your own cluster's configuration:

cp -rpf inventory/sample inventory/mycluster

kubespray provides a Python script that generates the inventory from environment variables, so we only need to set those variables:

# Keep the real hostnames (otherwise your hostnames are rewritten as node1/node2, ...)

$ export USE_REAL_HOSTNAME=true

# Inventory file location

$ export CONFIG_FILE=inventory/mycluster/hosts.yaml

# Node IP list (your servers' internal IPs, 3 or more; by default the first two become master nodes)

$ declare -a IPS=(10.211.55.14 10.211.55.15 10.211.55.16)

# Generate the inventory

$ python3 contrib/inventory_builder/inventory.py ${IPS[@]}

3.4 Customizing the configuration

The generated configuration is usable as-is, but it won't cover every need: should the runtime be docker or containerd? Should docker keep its default /var/lib/docker work directory? And so on. By default kubespray also downloads images and binaries from Google's official repositories, which requires outbound internet access from your servers, and that too takes a few configuration changes.

3.4.1 Node layout (adjust each node's role here)
[root@node-1 kubespray-2.21.0]# cat inventory/mycluster/hosts.yaml 
all:
  hosts:
    node-1:
      ansible_host: 10.211.55.14
      ip: 10.211.55.14
      access_ip: 10.211.55.14
    node-2:
      ansible_host: 10.211.55.15
      ip: 10.211.55.15
      access_ip: 10.211.55.15
    node-3:
      ansible_host: 10.211.55.16
      ip: 10.211.55.16
      access_ip: 10.211.55.16
  children:
    kube_control_plane: ## control-plane (master) hosts
      hosts:
        node-1:
        node-2:
    kube_node:  ## worker hosts
      hosts:
        node-1:
        node-2:
        node-3:
    etcd:
      hosts:
        node-1:
        node-2:
        node-3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
3.4.2 Cluster settings (container runtime, service CIDR, pod CIDR, etc.)

cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

Adjust as needed; the main options are:

## Change this to use another Kubernetes version, e.g. a current beta release
kube_version: v1.25.6

Network plugin; calico by default, change as needed:

# Choose network plugin (cilium, calico, kube-ovn, weave or flannel. Use cni for generic cni plugin)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
kube_network_plugin: calico

Service CIDR; 10.233.0.0/18 by default, set as needed:

# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.100.0.0/16

Pod CIDR; 10.233.64.0/18 by default, set as needed:

# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.200.0.0/16

Container runtime; docker used to be the default, now containerd is:

## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.
## Default: containerd
container_manager: containerd

3.4.3 containerd settings (this guide uses containerd as the container engine)

cat inventory/mycluster/group_vars/all/containerd.yml

---
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options

# containerd_storage_dir: "/var/lib/containerd"
# containerd_state_dir: "/run/containerd"
containerd_oom_score: -999 ## default 0; set to -999 so containerd is protected from the OOM killer

# containerd_default_runtime: "runc"
# containerd_snapshotter: "native"

# containerd_runc_runtime:
#   name: runc
#   type: "io.containerd.runc.v2"
#   engine: ""
#   root: ""

# containerd_additional_runtimes:
# Example for Kata Containers as additional runtime:
#   - name: kata
#     type: "io.containerd.kata.v2"
#     engine: ""
#     root: ""

# containerd_grpc_max_recv_message_size: 16777216
# containerd_grpc_max_send_message_size: 16777216

# containerd_debug_level: "info"

# containerd_metrics_address: ""

# containerd_metrics_grpc_histogram: false

## An obvious use case is allowing insecure-registry access to self hosted registries.
## Can be ipaddress and domain_name.
## example define mirror.registry.io or 172.19.16.11:5000
## set "name": "url". insecure url must be started http://
## Port number is also needed if the default HTTPS port is not used.
# containerd_insecure_registries:
#   "localhost": "http://127.0.0.1"
#   "172.19.16.11:5000": "http://172.19.16.11:5000"

# containerd_registries:
#   "docker.io": "https://registry-1.docker.io"

# containerd_max_container_log_line_size: -1

# containerd_registry_auth:
#   - registry: 10.0.0.2:5000
#     username: user
#     password: pass
3.4.4 etcd deployment type (defaults to host in recent versions; change to docker if your container engine is docker)

cat inventory/mycluster/group_vars/all/etcd.yml

---
## Directory where etcd data stored
etcd_data_dir: /var/lib/etcd

## Container runtime
## docker for docker, crio for cri-o and containerd for containerd.
## Additionally you can set this to kubeadm if you want to install etcd using kubeadm
## Kubeadm etcd deployment is experimental and only available for new deployments
## If this is not set, container manager will be inherited from the Kubespray defaults
## and not from k8s_cluster/k8s-cluster.yml, which might not be what you want.
## Also this makes possible to use different container manager for etcd nodes.
# container_manager: containerd

## Settings for etcd deployment type
# Set this to docker if you are using container_manager: docker
etcd_deployment_type: host
3.4.5 Add-ons (ingress, dashboard, etc.)

cat inventory/mycluster/group_vars/k8s_cluster/addons.yml

---
# Kubernetes dashboard
# RBAC required. see docs/getting-started.md for access details.
dashboard_enabled: true ## default false; set to true to install the dashboard

# Helm deployment
helm_enabled: false

# Registry deployment
registry_enabled: false
# registry_namespace: kube-system
# registry_storage_class: ""
# registry_disk_size: "10Gi"

# Metrics Server deployment
metrics_server_enabled: false
# metrics_server_container_port: 4443
# metrics_server_kubelet_insecure_tls: true
# metrics_server_metric_resolution: 15s
# metrics_server_kubelet_preferred_address_types: "InternalIP,ExternalIP,Hostname"
# metrics_server_host_network: false
# metrics_server_replicas: 1

# Rancher Local Path Provisioner
local_path_provisioner_enabled: false
# local_path_provisioner_namespace: "local-path-storage"
# local_path_provisioner_storage_class: "local-path"
# local_path_provisioner_reclaim_policy: Delete
# local_path_provisioner_claim_root: /opt/local-path-provisioner/
# local_path_provisioner_debug: false
# local_path_provisioner_image_repo: "rancher/local-path-provisioner"
# local_path_provisioner_image_tag: "v0.0.22"
# local_path_provisioner_helper_image_repo: "busybox"
# local_path_provisioner_helper_image_tag: "latest"

# Local volume provisioner deployment
local_volume_provisioner_enabled: false
# local_volume_provisioner_namespace: kube-system
# local_volume_provisioner_nodelabels:
#   - kubernetes.io/hostname
#   - topology.kubernetes.io/region
#   - topology.kubernetes.io/zone
# local_volume_provisioner_storage_classes:
#   local-storage:
#     host_dir: /mnt/disks
#     mount_dir: /mnt/disks
#     volume_mode: Filesystem
#     fs_type: ext4
#   fast-disks:
#     host_dir: /mnt/fast-disks
#     mount_dir: /mnt/fast-disks
#     block_cleaner_command:
#       - "/scripts/shred.sh"
#       - "2"
#     volume_mode: Filesystem
#     fs_type: ext4
# local_volume_provisioner_tolerations:
#   - effect: NoSchedule
#     operator: Exists

# CSI Volume Snapshot Controller deployment, set this to true if your CSI is able to manage snapshots
# currently, setting cinder_csi_enabled=true would automatically enable the snapshot controller
# Longhorn is an external CSI that would also require setting this to true but it is not included in kubespray
# csi_snapshot_controller_enabled: false
# csi snapshot namespace
# snapshot_controller_namespace: kube-system

# CephFS provisioner deployment
cephfs_provisioner_enabled: false
# cephfs_provisioner_namespace: "cephfs-provisioner"
# cephfs_provisioner_cluster: ceph
# cephfs_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# cephfs_provisioner_admin_id: admin
# cephfs_provisioner_secret: secret
# cephfs_provisioner_storage_class: cephfs
# cephfs_provisioner_reclaim_policy: Delete
# cephfs_provisioner_claim_root: /volumes
# cephfs_provisioner_deterministic_names: true

# RBD provisioner deployment
rbd_provisioner_enabled: false
# rbd_provisioner_namespace: rbd-provisioner
# rbd_provisioner_replicas: 2
# rbd_provisioner_monitors: "172.24.0.1:6789,172.24.0.2:6789,172.24.0.3:6789"
# rbd_provisioner_pool: kube
# rbd_provisioner_admin_id: admin
# rbd_provisioner_secret_name: ceph-secret-admin
# rbd_provisioner_secret: ceph-key-admin
# rbd_provisioner_user_id: kube
# rbd_provisioner_user_secret_name: ceph-secret-user
# rbd_provisioner_user_secret: ceph-key-user
# rbd_provisioner_user_secret_namespace: rbd-provisioner
# rbd_provisioner_fs_type: ext4
# rbd_provisioner_image_format: "2"
# rbd_provisioner_image_features: layering
# rbd_provisioner_storage_class: rbd
# rbd_provisioner_reclaim_policy: Delete

# Nginx ingress controller deployment
ingress_nginx_enabled: true ## default false; set to true to install the ingress controller
# ingress_nginx_host_network: false
ingress_publish_status_address: ""
# ingress_nginx_nodeselector:
#   kubernetes.io/os: "linux"
# ingress_nginx_tolerations:
#   - key: "node-role.kubernetes.io/master"
#     operator: "Equal"
#     value: ""
#     effect: "NoSchedule"
#   - key: "node-role.kubernetes.io/control-plane"
#     operator: "Equal"
#     value: ""
#     effect: "NoSchedule"
# ingress_nginx_namespace: "ingress-nginx"
# ingress_nginx_insecure_port: 80
# ingress_nginx_secure_port: 443
# ingress_nginx_configmap:
#   map-hash-bucket-size: "128"
#   ssl-protocols: "TLSv1.2 TLSv1.3"
# ingress_nginx_configmap_tcp_services:
#   9000: "default/example-go:8080"
# ingress_nginx_configmap_udp_services:
#   53: "kube-system/coredns:53"
# ingress_nginx_extra_args:
#   - --default-ssl-certificate=default/foo-tls
# ingress_nginx_termination_grace_period_seconds: 300
# ingress_nginx_class: nginx

# ALB ingress controller deployment
ingress_alb_enabled: false
# alb_ingress_aws_region: "us-east-1"
# alb_ingress_restrict_scheme: "false"
# Enables logging on all outbound requests sent to the AWS API.
# If logging is desired, set to true.
# alb_ingress_aws_debug: "false"

# Cert manager deployment
cert_manager_enabled: false
# cert_manager_namespace: "cert-manager"
# cert_manager_tolerations:
#   - key: node-role.kubernetes.io/master
#     effect: NoSchedule
#   - key: node-role.kubernetes.io/control-plane
#     effect: NoSchedule
# cert_manager_affinity:
#  nodeAffinity:
#    preferredDuringSchedulingIgnoredDuringExecution:
#    - weight: 100
#      preference:
#        matchExpressions:
#        - key: node-role.kubernetes.io/control-plane
#          operator: In
#          values:
#          - ""
# cert_manager_nodeselector:
#   kubernetes.io/os: "linux"

# cert_manager_trusted_internal_ca: |
#   -----BEGIN CERTIFICATE-----
#   [REPLACE with your CA certificate]
#   -----END CERTIFICATE-----
# cert_manager_leader_election_namespace: kube-system

# MetalLB deployment
metallb_enabled: false
metallb_speaker_enabled: "{{ metallb_enabled }}"
# metallb_ip_range:
#   - "10.5.0.50-10.5.0.99"
# metallb_pool_name: "loadbalanced"
# metallb_auto_assign: true
# metallb_avoid_buggy_ips: false
# metallb_speaker_nodeselector:
#   kubernetes.io/os: "linux"
# metallb_controller_nodeselector:
#   kubernetes.io/os: "linux"
# metallb_speaker_tolerations:
#   - key: "node-role.kubernetes.io/master"
#     operator: "Equal"
#     value: ""
#     effect: "NoSchedule"
#   - key: "node-role.kubernetes.io/control-plane"
#     operator: "Equal"
#     value: ""
#     effect: "NoSchedule"
# metallb_controller_tolerations:
#   - key: "node-role.kubernetes.io/master"
#     operator: "Equal"
#     value: ""
#     effect: "NoSchedule"
#   - key: "node-role.kubernetes.io/control-plane"
#     operator: "Equal"
#     value: ""
#     effect: "NoSchedule"
# metallb_version: v0.12.1
# metallb_protocol: "layer2"
# metallb_port: "7472"
# metallb_memberlist_port: "7946"
# metallb_additional_address_pools:
#   kube_service_pool:
#     ip_range:
#       - "10.5.1.50-10.5.1.99"
#     protocol: "layer2"
#     auto_assign: false
#     avoid_buggy_ips: false
# metallb_protocol: "bgp"
# metallb_peers:
#   - peer_address: 192.0.2.1
#     peer_asn: 64512
#     my_asn: 4200000000
#   - peer_address: 192.0.2.2
#     peer_asn: 64513
#     my_asn: 4200000000

argocd_enabled: false
# argocd_version: v2.5.7
# argocd_namespace: argocd
# Default password:
#   - https://argo-cd.readthedocs.io/en/stable/getting_started/#4-login-using-the-cli
#   ---
#   The initial password is autogenerated to be the pod name of the Argo CD API server. This can be retrieved with the command:
#   kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2
#   ---
# Use the following var to set admin password
# argocd_admin_password: "password"

# The plugin manager for kubectl
krew_enabled: false
krew_root_dir: "/usr/local/krew"
3.4.6 Global settings (the http(s) proxy for outbound access is configured here)

cat inventory/mycluster/group_vars/all/all.yml

---
## Directory where the binaries will be installed
bin_dir: /usr/local/bin

## The access_ip variable is used to define how other nodes should access
## the node.  This is used in flannel to allow other flannel nodes to see
## this node for example.  The access_ip is really useful AWS and Google
## environments where the nodes are accessed remotely by the "public" ip,
## but don't know about that address themselves.
# access_ip: 1.1.1.1

## External LB example config
## apiserver_loadbalancer_domain_name: "elb.some.domain"
# loadbalancer_apiserver:
#   address: 1.2.3.4
#   port: 1234

## Internal loadbalancers for apiservers
# loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
# loadbalancer_apiserver_type: nginx  # valid values "nginx" or "haproxy"

## If the cilium is going to be used in strict mode, we can use the
## localhost connection and not use the external LB. If this parameter is
## not specified, the first node to connect to kubeapi will be used.
# use_localhost_as_kubeapi_loadbalancer: true

## Local loadbalancer should use this port
## And must be set port 6443
loadbalancer_apiserver_port: 6443

## If loadbalancer_apiserver_healthcheck_port variable defined, enables proxy liveness check for nginx.
loadbalancer_apiserver_healthcheck_port: 8081

### OTHER OPTIONAL VARIABLES

## By default, Kubespray collects nameservers on the host. It then adds the previously collected nameservers in nameserverentries.
## If true, Kubespray does not include host nameservers in nameserverentries in dns_late stage. However, It uses the nameserver to make sure cluster installed safely in dns_early stage.
## Use this option with caution, you may need to define your dns servers. Otherwise, the outbound queries such as www.google.com may fail.
# disable_host_nameservers: false

## Upstream dns servers
# upstream_dns_servers:
#   - 8.8.8.8
#   - 8.8.4.4

## There are some changes specific to the cloud providers
## for instance we need to encapsulate packets with some network plugins
## If set the possible values are either 'gce', 'aws', 'azure', 'openstack', 'vsphere', 'oci', or 'external'
## When openstack is used make sure to source in the openstack credentials
## like you would do when using openstack-client before starting the playbook.
# cloud_provider:

## When cloud_provider is set to 'external', you can set the cloud controller to deploy
## Supported cloud controllers are: 'openstack', 'vsphere' and 'hcloud'
## When openstack or vsphere are used make sure to source in the required fields
# external_cloud_provider:

## Set these proxy values in order to update package manager and docker daemon to use proxies
http_proxy: "http://10.211.55.14:8118"  ## set to the privoxy proxy address in your environment
https_proxy: "http://10.211.55.14:8118"

## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
# no_proxy: ""

## Some problems may occur when downloading files over https proxy due to ansible bug
## https://github.com/ansible/ansible/issues/32750. Set this variable to False to disable
## SSL validation of get_url module. Note that kubespray will still be performing checksum validation.
# download_validate_certs: False

## If you need exclude all cluster nodes from proxy and other resources, add other resources here.
# additional_no_proxy: ""

## If you need to disable proxying of os package repositories but are still behind an http_proxy set
## skip_http_proxy_on_os_packages to true
## This will cause kubespray not to set proxy environment in /etc/yum.conf for centos and in /etc/apt/apt.conf for debian/ubuntu
## Special information for debian/ubuntu - you have to set the no_proxy variable, then apt package will install from your source of wish
skip_http_proxy_on_os_packages: true ## skip the proxy for yum OS-package installs; otherwise the proxy address is written straight into yum.conf

## Since workers are included in the no_proxy variable by default, docker engine will be restarted on all nodes (all
## pods will restart) when adding or removing workers.  To override this behaviour by only including master nodes in the
## no_proxy variable, set below to true:
no_proxy_exclude_workers: false

## Certificate Management
## This setting determines whether certs are generated via scripts.
## Chose 'none' if you provide your own certificates.
## Option is  "script", "none"
# cert_management: script

## Set to true to allow pre-checks to fail and continue deployment
# ignore_assert_errors: false

## The read-only port for the Kubelet to serve on with no authentication/authorization. Uncomment to enable.
# kube_read_only_port: 10255

## Set true to download and cache container
# download_container: true

## Deploy container engine
# Set false if you want to deploy container engine manually.
# deploy_container_engine: true

## Red Hat Enterprise Linux subscription registration
## Add either RHEL subscription Username/Password or Organization ID/Activation Key combination
## Update RHEL subscription purpose usage, role and SLA if necessary
# rh_subscription_username: ""
# rh_subscription_password: ""
# rh_subscription_org_id: ""
# rh_subscription_activation_key: ""
# rh_subscription_usage: "Development"
# rh_subscription_role: "Red Hat Enterprise Server"
# rh_subscription_sla: "Self-Support"

## Check if access_ip responds to ping. Set false if your firewall blocks ICMP.
# ping_access_ip: true

# sysctl_file_path to add sysctl conf to
# sysctl_file_path: "/etc/sysctl.d/99-sysctl.conf"

## Variables for webhook token auth https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication
kube_webhook_token_auth: false
kube_webhook_token_auth_url_skip_tls_verify: false
# kube_webhook_token_auth_url: https://...
## base64-encoded string of the webhook's CA certificate
# kube_webhook_token_auth_ca_data: "LS0t..."

## NTP Settings
# Start the ntpd or chrony service and enable it at system boot.
ntp_enabled: false
ntp_manage_config: false
ntp_servers:
  - "0.pool.ntp.org iburst"
  - "1.pool.ntp.org iburst"
  - "2.pool.ntp.org iburst"
  - "3.pool.ntp.org iburst"

## Used to control no_log attribute
unsafe_show_logs: false

3.5 One-command deployment

With the configuration ready, the whole cluster can be deployed in one command, though the run will be very slow.

Run the playbook as root. The --become flag is required (for example, writing SSL keys into /etc fails without it). -vvvv prints the most detailed logs and is recommended.

ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml -vvvv

After a long wait, if nothing went wrong, the entire cluster is up.

3.6 Cleaning up the proxy settings

The proxy is no longer needed at runtime, so remove its configuration.

Remove containerd's http proxy (on every node):

$ rm -f /etc/systemd/system/containerd.service.d/http-proxy.conf
$ systemctl daemon-reload
$ systemctl restart containerd

Remove the yum proxy (if you did not set skip_http_proxy_on_os_packages: true earlier):

# Manually delete the proxy entries that grep finds

$ grep 8118 -r /etc/yum*

Remove the proxy settings from /root/.bash_profile (on every node).
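A sed sketch for removing those exports, demonstrated on a scratch copy; verify the result before running the same sed -i against each node's real /root/.bash_profile:

```shell
f=$(mktemp)                              # stands in for /root/.bash_profile
cat > "$f" <<'EOF'
export https_proxy=http://10.211.55.14:8118
export http_proxy=http://10.211.55.14:8118
export all_proxy=http://10.211.55.14:8118
export PATH=$PATH:$HOME/bin
EOF
# Delete every line that sets a proxy variable to the privoxy address.
sed -i '/_proxy=http:\/\/10.211.55.14:8118/d' "$f"
cat "$f"                                 # only the PATH line remains
```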

Tags: K8S, kubernetes, kubespray, Ops