Setting up Kubernetes on CentOS 7

This article describes installing a Kubernetes cluster on three CentOS 7 machines, one as the master and two as nodes. It covers problems encountered during installation and finishes by running a simple nginx service.

Note: for production use, you should build a highly available cluster instead.

The Kubernetes packages provide several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd, with their configuration under /etc/kubernetes.

The Kubernetes master runs kube-apiserver, kube-controller-manager, kube-scheduler, and etcd. The worker nodes run kubelet, kube-proxy, cAdvisor, and docker. Every node runs flanneld to provide cross-host container networking.
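
Once the packages are installed (sections 2 and 3 below), you can sanity-check which of these units are present on a machine. A quick look, assuming the unit names shipped by the CentOS packages:

$ systemctl list-unit-files | grep -E 'kube|etcd|flanneld|docker'
$ ls /etc/kubernetes   # per-service configuration files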

1. Preparation before installation

We have three machines:

172-31-17-187 master
172-31-25-80 node1
172-31-16-52 node2

1.1 Uninstall Docker

If Docker is already installed, uninstall it first; a specific version will be installed later from the Kubernetes repository. To uninstall:

$ yum list installed | grep docker
docker.x86_64                         2:1.12.6-11.el7.centos           @extras
docker-client.x86_64                  2:1.12.6-11.el7.centos           @extras
docker-common.x86_64                  2:1.12.6-11.el7.centos           @extras
$ yum remove -y docker.x86_64   docker-client.x86_64 docker-common.x86_64  

1.2 Disable the firewall

CentOS 7 uses firewalld as its default firewall. If you have also installed iptables, disable both:

systemctl disable iptables firewalld
systemctl stop iptables firewalld

Do this on the master and on all nodes.
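
To double-check, both units should report inactive (the iptables unit exists only if you installed the iptables-services package):

$ systemctl is-active firewalld iptables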

2. Install and configure the master

Rather than describing each step in detail, install everything with a script:

sh -x k8s-master.sh 172.31.17.187

k8s-master.sh

#!/usr/bin/env bash
set -e

MASTER_IP=$1
if [ -z "$MASTER_IP" ]
then
	echo "MASTER_IP is null"
	exit 1
fi

echo "=================install ntpd==================="
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd

echo "=================install docker, k8s, etcd, flannel==================="
cat <<EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

echo "=================config kubernetes==================="
mv /etc/kubernetes/config /etc/kubernetes/config.bak
cat <<EOF >/etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://${MASTER_IP}:8080"
EOF

# Put SELinux into permissive mode
setenforce 0
#systemctl disable iptables-services firewalld
#systemctl stop iptables-services firewalld

echo "================= config etcd======================"
sed -i s#'ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"'#'ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"'#g /etc/etcd/etcd.conf
sed -i s#'ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"'#'ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"'#g /etc/etcd/etcd.conf 
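# etcd now accepts client connections on all interfaces, so flanneld on the
# nodes can reach it at http://${MASTER_IP}:2379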

echo "================= config apiserver==================="
mv /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak 
cat <<EOF >/etc/kubernetes/apiserver
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://${MASTER_IP}:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own!
KUBE_API_ARGS=""
EOF

echo "=================start and set etcd==============="
systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"
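# Sanity check: the flannel network config we just wrote should be readable
etcdctl get /kube-centos/network/config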

echo "=================config flannel==================="
mv /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
cat <<EOF >/etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://${MASTER_IP}:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
EOF

echo "=================start etcd k8s ==================="
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld ; do
	systemctl restart $SERVICES
	systemctl enable $SERVICES
	systemctl status $SERVICES
done

Note: the script above does not start docker or the kubelet. If you want to run workloads on the master for testing, start docker, then configure and start the kubelet the same way as on the nodes (section 3).
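
A minimal sketch of what that looks like, assuming docker was installed alongside the Kubernetes packages and reusing the kubelet configuration from the node script in section 3, with the master's own IP as the hostname override:

MASTER_IP=172.31.17.187
cat <<EOF >/etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=${MASTER_IP}"
KUBELET_API_SERVER="--api-servers=http://${MASTER_IP}:8080"
KUBELET_ARGS=""
EOF

for s in kube-proxy kubelet docker; do
    systemctl restart $s
    systemctl enable $s
done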

3. Install and configure the nodes

Run the script:

sh install-k8s-node.sh 172.31.17.187 172.31.25.80 # master_ip node_ip

Contents of install-k8s-node.sh:

#!/usr/bin/env bash
set -e

MASTER_IP=$1
NODE_IP=$2
if [ -z "$MASTER_IP" ] || [ -z "$NODE_IP" ]
then
	echo "MASTER_IP or NODE_IP is null"
	exit 1
fi

echo '=================install ntpd==================='
yum -y install ntp
systemctl start ntpd
systemctl enable ntpd

echo "=================install docker, k8s, etcd, flannel==================="
cat <<EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF

yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

# Put SELinux into permissive mode
setenforce 0

echo "===============config kubernetes================"
mv /etc/kubernetes/config /etc/kubernetes/config.bak
cat <<EOF >/etc/kubernetes/config
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the replication controller and scheduler find the kube-apiserver
KUBE_MASTER="--master=http://${MASTER_IP}:8080"
EOF

echo "===============install docker, k8s, etcd, flannel================"
cat <<EOF > /etc/yum.repos.d/virt7-docker-common-release.repo
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
EOF
yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel

echo "===============config kublet================"
mv /etc/kubernetes/kubelet  /etc/kubernetes/kubelet.bak
cat <<EOF >/etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=${NODE_IP}"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://${MASTER_IP}:8080"

# Add your own!
KUBELET_ARGS=""
EOF

echo "===============config flanneld================"
mv /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
cat <<EOF >/etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://${MASTER_IP}:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
EOF

echo "==========start kube-proxy kubelet flanneld docker==========="
for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

echo "==============set kubectl================"
kubectl config set-cluster default-cluster --server=http://${MASTER_IP}:8080
kubectl config set-context default-context --cluster=default-cluster --user=default-admin
kubectl config use-context default-context
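
# Verify kubectl can now reach the API server
kubectl cluster-info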

At this point the cluster is built. Check the node status:

$ kubectl get no
NAME           STATUS    AGE
172.31.16.52   Ready     1h
172.31.25.80   Ready     2h 

4. Test a service

As a test, deploy two nginx pods from the master onto the nodes. On the master, create a file named nginx-deployment.yml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Create the deployment:

$ kubectl create -f nginx-deployment.yml
deployment "nginx-deployment" created

Check the pods:

$ kubectl get pods -o wide
NAME                                READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-deployment-4087004473-kbbgs   1/1       Running   0          1h        172.30.41.2   172.31.25.80
nginx-deployment-4087004473-m47bg   1/1       Running   0          1h        172.30.93.2   172.31.16.52

# Access nginx
$ curl 172.30.41.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

If a pod's STATUS stays at ContainerCreating, it is probably still pulling the image. You can inspect the details:

$ kubectl describe pod <pod-name>   # e.g. nginx-deployment-4087004473-kbbgs
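
Recent cluster events can also show image-pull progress or errors:

$ kubectl get events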

5. Problems

1. Failing to pull components

During installation, Kubernetes may automatically download additional components or container images. If a pull fails, check whether the image actually exists; if it does exist, the pull most likely failed because access to Google's registry is blocked.
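
A common workaround, sketched here under the assumption that the failing image is the pause/pod-infrastructure image and that a mirror you can reach hosts a copy (both image names below are placeholders; check the real failing name with kubectl describe pod or the kubelet logs):

$ docker pull some-reachable-mirror.example.com/pause-amd64:3.0
$ docker tag some-reachable-mirror.example.com/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0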

2. Docker fails to run

shim error: docker-runc not installed on system

I first installed Docker 1.13.1, then uninstalled it and installed 1.12 from the Kubernetes repository, after which running containers produced the error above. The cause is an incorrect Docker configuration; compare yours against the following ExecStart:

ExecStart=/usr/bin/dockerd-current \
          --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
          --default-runtime=docker-runc \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
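
After editing the unit file, systemd must reload it before restarting Docker:

$ systemctl daemon-reload
$ systemctl restart docker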

3. Containers on different nodes cannot ping each other

Possible causes: flanneld or the kubelet is misconfigured. Compare the subnet Docker is bound to with flannel's assigned subnet:

$ ps -ef|grep docker #--bip=xxx  --mtu=xxx
$ cat /run/flannel/subnet.env #FLANNEL_SUBNET

Check that the kubelet points at the master IP:

ps -ef | grep kube
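
If the two subnets disagree, the usual fix is to restart flanneld and then docker, so docker picks up the flannel-provided --bip and --mtu options again (this assumes docker's unit sources the flannel-generated options, as the CentOS-packaged units typically do):

$ systemctl restart flanneld
$ systemctl restart docker
$ cat /run/flannel/subnet.env   # FLANNEL_SUBNET should now match docker's --bip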

4. Network ports

If you are on a cloud platform such as AWS, make sure internal UDP traffic between the nodes is allowed: flannel encapsulates and forwards packets over UDP (the vxlan backend defaults to UDP port 8472; the udp backend uses 8285).
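
For example, with the AWS CLI (the security group ID below is a placeholder; this rule allows VXLAN traffic between instances in the same group):

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol udp --port 8472 \
    --source-group sg-0123456789abcdef0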
