
Deploying a Kubernetes cluster on CentOS 7 with yum

OS: CentOS Linux release 7.4.1708 (Core)

The kernel can be upgraded. The cluster also runs on the stock kernel, but you will see warnings about it in the logs.
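A quick way to check which kernel you are currently running (upgrading is optional, as noted above; the stock CentOS 7.4 kernel is a 3.10.x series):

# show the running kernel version
[root@master ~]# uname -r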

Role      Hostname   IP address
etcd      etcd       10.146.0.3
master    master     10.146.0.3
node1     node1      10.146.0.4
node2     node2      10.146.0.5

Synchronize the clocks on all machines:

yum install ntp -y
timedatectl set-ntp true
Check whether synchronization is working:
[root@master ~]# timedatectl
Local time: Fri 2018-05-06 11:02:51 CST
Universal time: Fri 2018-05-06 03:02:51 UTC
RTC time: Fri 2018-05-06 03:02:52
Time zone: Asia/Shanghai (CST, +0800)
NTP enabled: yes     # "yes" here means NTP is enabled and working; the time has already been synchronized
NTP synchronized: no
RTC in local TZ: no
DST active: n/a
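If "NTP synchronized" stays at "no" for a long time, you can trigger a one-off sync by hand and keep ntpd running afterwards. A minimal sketch, assuming the ntpdate tool was pulled in with the ntp package and that ntp1.aliyun.com is reachable (substitute any NTP server you like):

# one-off manual sync (ntpd must not be running yet, otherwise the NTP port is busy)
[root@master ~]# ntpdate -u ntp1.aliyun.com
# then keep ntpd running so the clock stays in sync
[root@master ~]# systemctl start ntpd
[root@master ~]# systemctl enable ntpd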

Add hostname-to-IP mappings to the /etc/hosts file:

[root@master system]# tail -4 /etc/hosts
10.146.0.3 master
10.146.0.3 etcd
10.146.0.4 node1
10.146.0.5 node2

Also set up passwordless SSH: generate a key on the master and copy it to every node and to the etcd machine.

Generate and distribute the key on the master; the /etc/hosts entries below must be added on every machine:
[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id 10.146.0.4
[root@master ~]# ssh-copy-id 10.146.0.5
[root@master ~]# cat <<EOF >> /etc/hosts
> 10.146.0.3 master
> 10.146.0.3 etcd
> 10.146.0.4 node1
> 10.146.0.5 node2
> EOF
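Before moving on, it is worth confirming that passwordless login really works, for example:

# each command should print the remote hostname without asking for a password
[root@master ~]# ssh 10.146.0.4 hostname
[root@master ~]# ssh 10.146.0.5 hostname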

[root@master ~]# yum install -y kubernetes etcd flannel ntp
[root@node1 ~]# yum install -y kubernetes flannel ntp
[root@node2 ~]# yum install -y kubernetes flannel ntp
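If you want to see exactly which versions yum pulled in (the CentOS 7 extras repository ships a fairly old Kubernetes 1.5.x line, so your versions may differ from the outputs shown in this article):

# list the installed kubernetes, etcd and flannel packages with their versions
[root@master ~]# rpm -qa | grep -E 'kubernetes|etcd|flannel'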

[root@master ~]# grep ^[^#] /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"

[root@master ~]# vim /etc/etcd/etcd.conf

[root@master ~]# grep ^[^#] /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://10.146.0.3:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://10.146.0.3:2379"

[root@master ~]# systemctl start etcd
[root@master ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

[root@master ~]# netstat -lntup|grep 2379
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN 28402/etcd
tcp 0 0 10.146.0.3:2379 0.0.0.0:* LISTEN 28402/etcd

[root@master ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://10.146.0.3:2379
cluster is healthy

[root@master ~]# etcdctl member list
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://10.146.0.3:2379 isLeader=true
At this point, the etcd node is up and working.
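Since ETCD_LISTEN_CLIENT_URLS now also includes 10.146.0.3, it is worth checking cluster health through that address explicitly, because that is the URL flannel and kube-apiserver will use:

# query etcd over the non-loopback address the other components will use
[root@master ~]# etcdctl --endpoints http://10.146.0.3:2379 cluster-health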

[root@master ~]# grep ^[^#] /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://127.0.0.1:8080"

[root@master ~]# vim /etc/kubernetes/config
[root@master ~]# grep ^[^#] /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.146.0.3:8080"

[root@master ~]# grep ^[^#] /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""

[root@master ~]# vim /etc/kubernetes/apiserver
[root@master ~]# grep ^[^#] /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.146.0.3:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=AlwaysAdmit"
KUBE_API_ARGS=""
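Once kube-apiserver has been restarted with this configuration (the restart is done further below), a simple reachability check is to hit the /version endpoint on the insecure port. Keep in mind that binding the insecure port to 0.0.0.0 together with --admission-control=AlwaysAdmit is only acceptable on a trusted lab network:

# the API server answers unauthenticated requests on the insecure port 8080
[root@master ~]# curl http://10.146.0.3:8080/version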

[root@master ~]# grep ^[^#] /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""

[root@master ~]# grep ^[^#] /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS=""

[root@master ~]# vim /etc/kubernetes/scheduler
[root@master ~]# grep ^[^#] /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"

[root@master ~]# grep ^[^#] /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"

[root@master ~]# mkdir -p /var/log/k8s/flannel
[root@master ~]# vim /etc/sysconfig/flanneld

[root@master ~]# grep ^[^#] /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.146.0.3:2379"
FLANNEL_ETCD_PREFIX="/liushike.com/network"
FLANNEL_OPTIONS="--log_dir=/var/log/k8s/flannel/ --iface=eth0"

[root@master ~]# etcdctl set /liushike.com/network/config '{"Network": "10.255.0.0/16"}'
{"Network": "10.255.0.0/16"}
[root@master ~]# etcdctl get /liushike.com/network/config
{"Network": "10.255.0.0/16"}
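The JSON stored under /liushike.com/network/config is flannel's network configuration. Only the Network key is set here, so flannel falls back to its defaults: /24 per-host subnets and the UDP backend (which is why a flannel0 TUN device and UDP port 8285 show up later). If you wanted to make those choices explicit, or switch to the vxlan backend, the value could look like the sketch below; this is optional and not what the rest of this article uses:

# optional: explicit per-host subnet size and the vxlan backend instead of udp
[root@master ~]# etcdctl set /liushike.com/network/config \
'{"Network": "10.255.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'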

[root@master ~]# systemctl start flanneld
[root@master ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.

[root@master ~]# netstat -lntup|grep flanneld
udp 0 0 10.146.0.3:8285 0.0.0.0:* 28695/flanneld

[root@master ~]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler flanneld docker
[root@master ~]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler flanneld docker
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
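With everything on the master restarted and enabled, a quick health check of the control-plane components; scheduler, controller-manager and etcd-0 should all be reported as Healthy:

# ask the API server for the status of the master components
[root@master ~]# kubectl get componentstatuses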

Node1
[root@master ~]# scp /etc/sysconfig/flanneld 10.146.0.4:/etc/sysconfig/
flanneld 100% 427 311.0KB/s 00:00

[root@master ~]# scp /etc/kubernetes/config 10.146.0.4:/etc/kubernetes/
config 100% 656 484.7KB/s 00:00

[root@node1 ~]# grep ^[^#] /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

[root@node1 ~]# vim /etc/kubernetes/kubelet
[root@node1 ~]# grep ^[^#] /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=node1"
KUBELET_API_SERVER="--api-servers=http://10.146.0.3:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

[root@node1 ~]# systemctl restart flanneld kube-proxy kubelet docker
[root@node1 ~]# systemctl enable flanneld kube-proxy kubelet docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.255.99.1 netmask 255.255.255.0 broadcast 0.0.0.0
……
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1432
inet 10.255.99.0 netmask 255.255.0.0 destination 10.255.100.0
[root@node1 ~]# netstat -antup|grep proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 4619/kube-proxy
tcp 0 0 10.146.0.4:57948 10.146.0.5:8080 ESTABLISHED 4619/kube-proxy
tcp 0 0 10.146.0.4:57952 10.146.0.5:8080 ESTABLISHED 4619/kube-proxy
At this point, node1 has joined the cluster.
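For reference, flanneld records the subnet it leased for this host in /run/flannel/subnet.env, and the docker service picks it up when it starts, which is where the 10.255.99.0/24 range on docker0 above comes from. If docker0 ever ends up on the wrong subnet, this file is the first thing to check:

# the subnet flannel assigned to node1 and the MTU docker should use
[root@node1 ~]# cat /run/flannel/subnet.env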

[root@master ~]# scp /etc/sysconfig/flanneld 10.146.0.5:/etc/sysconfig/
flanneld 100% 427 311.0KB/s 00:00
[root@master ~]# scp /etc/kubernetes/config 10.146.0.5:/etc/kubernetes/
config 100% 656 484.7KB/s 00:00
[root@node1 ~]# scp /etc/kubernetes/kubelet 10.146.0.5:/etc/kubernetes/
kubelet 100% 610 1.3MB/s 00:00

[root@node2 ~]# grep ^[^#] /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=node2"
KUBELET_API_SERVER="--api-servers=http://10.146.0.3:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

[root@node2 ~]# systemctl start flanneld kube-proxy kubelet docker
[root@node2 ~]# systemctl enable flanneld kube-proxy kubelet docker
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@node2 ~]# netstat -antup|grep proxy
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 2809/kube-proxy
tcp 0 0 10.146.0.3:36910 10.146.0.5:8080 ESTABLISHED 2809/kube-proxy
tcp 0 0 10.146.0.3:36906 10.146.0.5:8080 ESTABLISHED 2809/kube-proxy
tcp 0 0 10.146.0.3:36908 10.146.0.5:8080 ESTABLISHED 2809/kube-proxy

[root@node2 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.255.59.1 netmask 255.255.255.0 broadcast 0.0.0.0
……
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST> mtu 1432
inet 10.255.59.0 netmask 255.255.0.0 destination 10.255.73.0
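node1 and node2 now each hold their own /24 out of 10.255.0.0/16 (10.255.99.0/24 and 10.255.59.0/24 above). A simple way to confirm that the flannel overlay actually forwards traffic between hosts is to ping the other node's docker0 address across it:

# from node1, reach node2's docker0 through the flannel overlay
[root@node1 ~]# ping -c 3 10.255.59.1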

Test:
[root@master ~]# kubectl get nodes
NAME STATUS AGE
node1 Ready 1m
node2 Ready 47s
At this point, the whole Kubernetes cluster is up.

Pull the base (pod-infrastructure) image and the application image on the node machines:
yum install *rhsm* -y
docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
docker pull nginx
Note: the image used by a pod started from the master does not need to exist on the master, but it must exist on the node machines, otherwise the pod will not start.
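A quick way to confirm the images are really present on a node before scheduling anything:

# both the pod-infrastructure image and the nginx image should be listed
[root@node1 ~]# docker images | grep -E 'pod-infrastructure|nginx'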
Start a pod:
kubectl run nginx --image=docker.io/nginx --replicas=1 --port=9000
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-2187705812-q8sdp 1/1 Running 0 11s
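To actually reach the nginx pod, you can expose the deployment that kubectl run created as a service and curl it from one of the nodes. A minimal sketch; note that the nginx container listens on port 80 regardless of the --port=9000 declared above, and the cluster IP below is a placeholder you have to read from kubectl get svc:

# expose the nginx deployment through a NodePort service on port 80
[root@master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
# look up the assigned cluster IP and node port
[root@master ~]# kubectl get svc nginx
# from any node, curl the cluster IP (or <node-ip>:<node-port>)
[root@node1 ~]# curl http://<cluster-ip>:80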
