lyndon
2022-10-05

# k8s single-master cluster deployment

# 1. Environment planning

| Role   | IP              |
| ------ | --------------- |
| master | 192.168.200.101 |
| node01 | 192.168.200.102 |
| node02 | 192.168.200.103 |
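
The rest of the walkthrough assumes these three IPs. Setting the hostname and /etc/hosts entries on each machine up front makes the later steps easier to follow; a minimal sketch (the node hostnames here are illustrative, only k8s-master appears in the shell prompts below):

# on 192.168.200.101, for example
hostnamectl set-hostname k8s-master
cat >> /etc/hosts <<EOF
192.168.200.101 k8s-master
192.168.200.102 k8s-node01
192.168.200.103 k8s-node02
EOF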

# 2. Install Docker

Run on master, node01 and node02:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce
cat <<EOF > /etc/docker/daemon.json
{ "registry-mirrors": [ "https://registry.docker-cn.com"],"insecure-registries":["192.168.200.101:5000"] } 
EOF
systemctl start docker
systemctl enable docker
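
To confirm that the daemon.json settings were picked up after the restart, a quick check:

docker info | grep -A 2 -i 'registry mirrors'
docker info | grep -A 2 -i 'insecure registries'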

# 3. Self-signed TLS certificates

| Component      | Certificates used                          |
| -------------- | ------------------------------------------ |
| etcd           | ca.pem, server.pem, server-key.pem         |
| flannel        | ca.pem, server.pem, server-key.pem         |
| kube-apiserver | ca.pem, server.pem, server-key.pem         |
| kubelet        | ca.pem, ca-key.pem                         |
| kube-proxy     | ca.pem, kube-proxy.pem, kube-proxy-key.pem |
| kubectl        | ca.pem, admin.pem, admin-key.pem           |

On the master:

Install the certificate generation tool cfssl:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

Generate the certificates with a script: cat certificate.sh

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
          "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
      "127.0.0.1",
      "192.168.200.101",
      "192.168.200.102",
      "192.168.200.103",
      "10.10.10.1",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
#-----------------------
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
#-----------------------
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Run the script:

sh certificate.sh

On success, the script generates a batch of certificates. Create an ssl directory to store them:

mkdir -p /root/ssl

Move all the generated certificates into /root/ssl.
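
For example, assuming the script was run in the current directory, and double-checking the hosts/SANs on server.pem with the cfssl-certinfo tool installed above:

mv *.pem /root/ssl/
cd /root/ssl
cfssl-certinfo -cert server.pem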

# 4. Deploy the etcd cluster

On the master:

Create the kubernetes working directories:

mkdir -p /opt/kubernetes/{bin,cfg,ssl}

Upload the etcd release package etcd-v3.2.12-linux-amd64.tar.gz and unpack it:

tar xf etcd-v3.2.12-linux-amd64.tar.gz
cd etcd-v3.2.12-linux-amd64

Copy the etcd binaries into the working directory bin/:

cp etcd etcdctl /opt/kubernetes/bin/

Copy the certificates etcd needs into the working directory ssl/:

cp /root/ssl/ca*pem /root/ssl/server*pem /opt/kubernetes/ssl/

Generate the configuration and systemd unit and start etcd with a script: cat etcd.sh

#!/bin/bash
ETCD_NAME=${1:-"etcd01"}
ETCD_IP=${2:-"127.0.0.1"}
ETCD_CLUSTER=${3:-"etcd01=http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=/opt/kubernetes/ssl/server.pem \\
--key-file=/opt/kubernetes/ssl/server-key.pem \\
--peer-cert-file=/opt/kubernetes/ssl/server.pem \\
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \\
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Start etcd on the master with the script:

./etcd.sh etcd01 192.168.200.101 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380

(Optional) To make deployment easier, set up SSH key trust from the master to the nodes:

Generate a key pair with ssh-keygen (press Enter through all the prompts), then copy the public key to each node:
ssh-keygen
ssh-copy-id root@192.168.200.102
ssh-copy-id root@192.168.200.103

Copy the files from the master to the nodes:

scp -r /opt/kubernetes/  root@192.168.200.102:/opt
scp -r /opt/kubernetes/  root@192.168.200.103:/opt
scp etcd.sh root@192.168.200.102:~
scp etcd.sh root@192.168.200.103:~

Start etcd on node01:

./etcd.sh etcd02 192.168.200.102 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380

Start etcd on node02:

./etcd.sh etcd03 192.168.200.103 etcd01=https://192.168.200.101:2380,etcd02=https://192.168.200.102:2380,etcd03=https://192.168.200.103:2380

From the /root/ssl directory on the master, run the following to check cluster health:

/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" \
cluster-health
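
If a member reports as unhealthy, checking the service and the member list on that node usually narrows it down; a troubleshooting sketch (the certificate paths and the ETCD_INITIAL_CLUSTER string are the most common culprits):

systemctl status etcd -l
journalctl -u etcd --no-pager | tail -n 50
/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.200.101:2379" member list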

# 5. Deploy the Flannel network

  • Overlay Network: a virtual network layered on top of the underlying network; hosts in it are connected through virtual links.
  • VXLAN: encapsulates the original packet in UDP, wraps it with the underlay network's IP/MAC as the outer header, and transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.
  • Flannel: one kind of overlay network; it likewise encapsulates the original packet inside another network packet for routing and forwarding, and currently supports UDP, VXLAN, AWS VPC, GCE routes and other backends.

Other mainstream options for multi-host container networking: tunnel-based solutions (Weave, Open vSwitch) and routing-based solutions (Calico).

Run on the master and the nodes (deploying flannel on the master is needed in some special scenarios):

Write the allocated subnet range into etcd for flanneld to use (only needs to be done on the master):

/opt/kubernetes/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

Download the binary package:

wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar xf flannel-v0.9.1-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

Configure Flannel and its systemd units with a script: cat flanneld.sh

#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd  \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl restart docker

Start flannel:

./flanneld.sh https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379

Verify the network

List the existing subnets:

[root@k8s-master ssl]# /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" ls /coreos.com/network/subnets
The allocated docker subnets are listed:
/coreos.com/network/subnets/172.17.78.0-24
/coreos.com/network/subnets/172.17.84.0-24
/coreos.com/network/subnets/172.17.49.0-24

Show the details of a specific subnet:

[root@k8s-master ssl]# /opt/kubernetes/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379" get /coreos.com/network/subnets/172.17.78.0-24
Subnet details:
{"PublicIP":"192.168.200.101","BackendType":"vxlan","BackendData":{"VtepMAC":"62:5f:9d:cd:51:aa"}}
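
Each node's local view of the overlay can also be checked; a sketch (the exact subnets differ per node):

cat /run/flannel/subnet.env   # subnet handed to docker by mk-docker-opts.sh
ip -d link show flannel.1     # VXLAN device created by flanneld
ip addr show docker0          # docker0 should sit inside the flannel subnet
# ping an address in another node's subnet, e.g. 172.17.84.1, to confirm cross-host reachability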

If nodes inside the cluster cannot communicate, add a firewall rule:

iptables -I INPUT -s 192.168.200.0/24 -j ACCEPT

# 6. Create the node kubeconfig files

On the master, in the /root/ssl directory, use the following script:

cat kubeconfig.sh

# Create the TLS bootstrapping token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
#----------------------
# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://192.168.200.101:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Note: kubectl must be present when running this script (upload the kubectl binary to /usr/bin first).

./kubeconfig.sh

It generates the following files:

  • token.csv
  • bootstrap.kubeconfig
  • kube-proxy.kubeconfig

Copy the config files to the nodes:

scp *kubeconfig root@192.168.200.102:/opt/kubernetes/cfg/
scp *kubeconfig root@192.168.200.103:/opt/kubernetes/cfg/
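
The generated kubeconfig files can be sanity-checked with kubectl config view, which prints the embedded cluster, user and context:

kubectl config view --kubeconfig=bootstrap.kubeconfig
kubectl config view --kubeconfig=kube-proxy.kubeconfig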

# 7. Get the K8S binary package and run the master components

K8S binary package download: https://github.com/kubernetes/kubernetes/

Move the binaries into the working directory bin/:

mv kubectl kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/

Move the token file into the config directory:

mv token.csv /opt/kubernetes/cfg/

Use the following scripts: apiserver.sh, scheduler.sh and controller-manager.sh.

cat apiserver.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"192.168.200.101"}
ETCD_SERVERS=${2:-"http://192.168.200.101:2379"}
cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

cat controller-manager.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=10.10.10.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

cat scheduler.sh

#!/bin/bash
MASTER_ADDRESS=${1:-"127.0.0.1"}
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Start the components:

./apiserver.sh 192.168.200.101 https://192.168.200.101:2379,https://192.168.200.102:2379,https://192.168.200.103:2379
./scheduler.sh
./controller-manager.sh

Check the component status:

kubectl get cs
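
If any component shows as Unhealthy, the apiserver's local health endpoint and the listening ports are worth a quick look; a sketch (8080/6443 follow the apiserver.sh flags above, 10251/10252 are the default scheduler and controller-manager ports):

curl -s http://127.0.0.1:8080/healthz
ss -tlnp | grep -E '6443|8080|10251|10252'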

# 8. Run the node components

mv kubelet kube-proxy /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/*

On the nodes, start kubelet with a script:

cat kubelet.sh

#!/bin/bash
NODE_ADDRESS=${1:-"192.168.1.196"}
DNS_SERVER_IP=${2:-"10.10.10.2"}
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--address=${NODE_ADDRESS} \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--cert-dir=/opt/kubernetes/ssl \\
--allow-privileged=true \\
--cluster-dns=${DNS_SERVER_IP} \\
--cluster-domain=cluster.local \\
--fail-swap-on=false \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet

Start on node01:

./kubelet.sh 192.168.200.102

Start on node02:

./kubelet.sh 192.168.200.103

If kubelet fails to start with the error: kubelet: error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope

The reason is that the kubelet-bootstrap user has no permission to create certificate signing requests, so this user must be bound to the appropriate cluster role.

On the master, create the cluster role binding for the kubelet-bootstrap user:

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Check the CSR status on the master:

kubectl get csr
The requests show up as pending:
NAME                                   AGE       REQUESTOR        CONDITION
node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s   27s       kubelet-bootstrap   Pending
node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI   5m        kubelet-bootstrap   Pending

Approve and sign the requests on the master:

kubectl certificate approve node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s
kubectl certificate approve node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI

Check again:

kubectl get csr
They now show as approved and issued:
NAME                                   AGE       REQUESTOR         CONDITION
node-csr-QmkBSqwpZJnC5CJyowdOwYi_SvD2Q5h_e9l-axZRf3s   5m        kubelet-bootstrap   Approved,Issued
node-csr-piPDu1XYXFMdWSKyucooft7bc-L5dfvgCiKjigjXgKI   10m       kubelet-bootstrap   Approved,Issued

kubectl get node
The nodes show as ready:
NAME          STATUS    ROLES     AGE       VERSION
192.168.200.102   Ready     <none>    1m        v1.9.0
192.168.200.103   Ready     <none>    2m        v1.9.0

Start kube-proxy on the nodes with a script:

cat proxy.sh

#!/bin/bash
NODE_ADDRESS=${1:-"192.168.1.200"}
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=${NODE_ADDRESS} \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

Start kube-proxy:

On node01:

./proxy.sh 192.168.200.102

On node02:

./proxy.sh 192.168.200.103
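
kube-proxy has no obvious port to test against; a simple way to confirm it is running and syncing rules is to check the service and the iptables chains it creates, a sketch:

systemctl status kube-proxy -l
iptables-save | grep -c KUBE-SERVICES   # non-zero once rules have been synced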

# 9. Start a test example

Start an nginx service (internal access only):

kubectl run nginx --image=nginx --replicas=3
kubectl get pod

Expose the nginx service externally with a NodePort:

# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
# kubectl get svc nginx
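
The NodePort is assigned from the 30000-50000 range configured on the apiserver; one way to look it up and test from outside the cluster (a sketch, using node01's IP):

NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.200.102:${NODE_PORT}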

# 10. Deploy the Web UI (Dashboard)

Use the Kubernetes template files dashboard-rbac.yaml, dashboard-deployment.yaml and dashboard-service.yaml:

# kubectl create -f dashboard-rbac.yaml
# kubectl create -f dashboard-deployment.yaml
# kubectl create -f dashboard-service.yaml

Check that it started:

kubectl get pods -n kube-system   # get the pod name (podid)
kubectl describe po/podid -n kube-system

Check the service information:

kubectl get svc -n kube-system

# 11. Miscellaneous

kubectl can also be used as a remote management tool to manage the cluster from another machine; run the following on the remote server:

# Set the apiserver address and root certificate for the cluster entry named kubernetes
./kubectl config set-cluster kubernetes --server=https://192.168.200.101:6443 --certificate-authority=ca.pem
# Set the certificate credentials for the cluster-admin user entry
./kubectl config set-credentials cluster-admin --certificate-authority=ca.pem --client-key=admin-key.pem --client-certificate=admin.pem
# Set a context named default with the cluster and user above
./kubectl config set-context default --cluster=kubernetes --user=cluster-admin
# Make default the current context
./kubectl config use-context default
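
With the context in place, the remote kubectl should reach the cluster over port 6443, for example:

./kubectl get cs
./kubectl get node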
Last updated: 2022/10/05, 02:02:36