A previous post covered automated Kubernetes installation with kubeadm, but since every component runs there as a container, it did not go into much configuration detail. To better understand the role of each Kubernetes component, this post installs a Kubernetes cluster from the binary releases and explains the configuration of each component in more depth.
Since version 1.10, the insecure port (8080 by default) has been progressively deprecated, so this walkthrough sets up the cluster with mutual TLS authentication based on a CA certificate; the configuration is somewhat more involved.
1. Two CentOS 7 hosts. Make the hostnames resolvable, disable the firewall and SELinux, and synchronize the system time:
10.0.0.1  node-1  Master
10.0.0.2  node-2  Node

Deployed on the Master: etcd, kube-apiserver, kube-controller-manager, kube-scheduler. Deployed on the Node: docker, kubelet, kube-proxy.
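The prep steps above can be scripted roughly as follows. This is a sketch, not the post's own script: the NTP server is an arbitrary choice, and the hosts-file path is parameterized only so the snippet can be dry-run without root (use /etc/hosts on the real machines).

```shell
# Hostname resolution (run on both machines).
HOSTS_FILE=${HOSTS_FILE:-$(mktemp)}   # use /etc/hosts for real; a temp file here for a dry run
cat >> "$HOSTS_FILE" <<'EOF'
10.0.0.1 node-1
10.0.0.2 node-2
EOF
grep -c 'node-' "$HOSTS_FILE"   # prints 2 if both entries were written

# On the real hosts, additionally (requires root):
#   systemctl stop firewalld && systemctl disable firewalld           # firewall off
#   setenforce 0                                                      # SELinux off for this boot
#   sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # and across reboots
#   yum install -y ntpdate && ntpdate ntp1.aliyun.com                 # any reachable NTP server works
```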
2. Download the official release from https://github.com/kubernetes/kubernetes/ . Here we use the binary packages and pick version 1.10.2:
Since these are binary packages, just extract them and copy the relevant files into an executable path:
# tar xf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp `ls|egrep -v "*.tar|*_tag"` /usr/bin/
The configuration of each service is described below.
The etcd service is the core datastore of a Kubernetes cluster and must be installed and started before the other services. A single-node etcd is deployed here; a 3-node cluster is also possible. For an even simpler setup, installing via yum is recommended.
# wget https://github.com/coreos/etcd/releases/download/v3.2.20/etcd-v3.2.20-linux-amd64.tar.gz
# tar xf etcd-v3.2.20-linux-amd64.tar.gz
# cd etcd-v3.2.20-linux-amd64
# cp etcd etcdctl /usr/bin/
# mkdir /var/lib/etcd
# mkdir /etc/etcd
Edit the systemd unit file:
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
Start the service:
systemctl daemon-reload
systemctl start etcd
systemctl status etcd.service
Check the service status:
[root@node-1 ~]# netstat -lntp|grep etcd
tcp   0   0 127.0.0.1:2379   0.0.0.0:*   LISTEN   18794/etcd
tcp   0   0 127.0.0.1:2380   0.0.0.0:*   LISTEN   18794/etcd
[root@node-1 ~]# etcdctl cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy
Note: etcd listens on two ports: 2380 for cluster peer communication and 2379 for client requests. When setting up an etcd cluster, edit the configuration file to set the listen IPs and ports.
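The unit file above reads /etc/etcd/etcd.conf if it exists. For anything beyond a localhost-only setup, a minimal sketch of that file might look like this; the option names are etcd's standard environment variables, while the values are illustrative and must match your own topology:

```ini
# /etc/etcd/etcd.conf — illustrative single-node values, listening on all interfaces
ETCD_NAME=node-1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.1:2379"
```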
1. Edit the systemd unit file for kube-apiserver:
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/concepts/overview
After=network.target
After=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the parameter file (create the configuration directory first):
# cat /etc/kubernetes/apiserver 
KUBE_API_ARGS="--storage-backend=etcd3 \
 --etcd-servers=http://127.0.0.1:2379 \
 --bind-address=0.0.0.0 \
 --secure-port=6443 \
 --service-cluster-ip-range=10.222.0.0/16 \
 --service-node-port-range=1-65535 \
 --client-ca-file=/etc/kubernetes/ssl/ca.crt \
 --tls-private-key-file=/etc/kubernetes/ssl/server.key \
 --tls-cert-file=/etc/kubernetes/ssl/server.crt \
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota \
 --logtostderr=false \
 --log-dir=/var/log/kubernetes \
 --v=2"
3. Create the log and certificate directories (and the configuration directory, if it does not exist yet):
mkdir /var/log/kubernetes
mkdir /etc/kubernetes
mkdir /etc/kubernetes/ssl
1. Edit the systemd unit file for kube-controller-manager:
# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the startup parameter file:
# cat /etc/kubernetes/controller-manager 
KUBE_CONTROLLER_MANAGER_ARGS="--master=https://10.0.0.1:6443 \
--service-account-private-key-file=/etc/kubernetes/ssl/server.key \
--root-ca-file=/etc/kubernetes/ssl/ca.crt \
--kubeconfig=/etc/kubernetes/kubeconfig"
1. Edit the systemd unit file for kube-scheduler:
# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the parameter file:
# cat /etc/kubernetes/scheduler KUBE_SCHEDULER_ARGS="--master=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig"
Both kube-controller-manager and kube-scheduler reference the same kubeconfig file:

# cat /etc/kubernetes/kubeconfig 
apiVersion: v1
kind: Config
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/cs_client.crt
    client-key: /etc/kubernetes/ssl/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
contexts:
- context:
    cluster: local
    user: controllermanager
  name: my-context
current-context: my-context
1. Generate the CA certificate and the kube-apiserver private key:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out ca.key 2048
# openssl req -x509 -new -nodes -key ca.key -subj "/CN=10.0.0.1" -days 5000 -out ca.crt    # CN is the Master's IP address
# openssl genrsa -out server.key 2048
2. Create the master_ssl.cnf file:
# cat master_ssl.cnf 
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s_master
IP.1 = 10.222.0.1   # the ClusterIP of the kubernetes service
IP.2 = 10.0.0.1     # the Master's IP address
3. Using the file above, create server.csr and server.crt with the following commands:
# openssl req -new -key server.key -subj "/CN=node-1" -config master_ssl.cnf -out server.csr    # CN is the hostname
# openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out server.crt
Tip: after running the commands above there will be six files: ca.crt, ca.key, ca.srl, server.crt, server.csr, server.key.
4. Generate the client certificate for kube-controller-manager:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out cs_client.key 2048
# openssl req -new -key cs_client.key -subj "/CN=node-1" -out cs_client.csr    # CN is the hostname
# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000
5. Make sure the /etc/kubernetes/ssl/ directory contains the following files:
[root@node-1 ssl]# ll
total 36
-rw-r--r-- 1 root root 1090 May 25 15:34 ca.crt
-rw-r--r-- 1 root root 1675 May 25 15:33 ca.key
-rw-r--r-- 1 root root   17 May 25 15:41 ca.srl
-rw-r--r-- 1 root root  973 May 25 15:41 cs_client.crt
-rw-r--r-- 1 root root  887 May 25 15:41 cs_client.csr
-rw-r--r-- 1 root root 1675 May 25 15:40 cs_client.key
-rw-r--r-- 1 root root 1192 May 25 15:37 server.crt
-rw-r--r-- 1 root root 1123 May 25 15:36 server.csr
-rw-r--r-- 1 root root 1675 May 25 15:34 server.key
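Before wiring these files into the services, the trust chain can be sanity-checked with openssl alone. The sketch below repeats the same signing steps in a throwaway directory (CNs mirror the ones above; SANs are omitted for brevity, whereas the real server.crt uses master_ssl.cnf) and then verifies both certificates against the CA:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# CA, as in step 1
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=10.0.0.1" -days 5000 -out ca.crt

# server certificate, as in step 3 (without the SAN config file)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=node-1" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -out server.crt

# controller-manager client certificate, as in step 4
openssl genrsa -out cs_client.key 2048
openssl req -new -key cs_client.key -subj "/CN=node-1" -out cs_client.csr
openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -out cs_client.crt

# both certificates must verify against the CA
openssl verify -CAfile ca.crt server.crt cs_client.crt
```

Running the same `openssl verify` against the real files in /etc/kubernetes/ssl/ should likewise print `OK` for each certificate.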
1. Start kube-apiserver:
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl start kube-apiserver
Note: kube-apiserver listens on two ports by default (8080 and 6443). Port 8080 is the insecure port once used for communication between components and is rarely used in newer releases; the host running kube-apiserver is generally called the Master. Port 6443 is the HTTPS port, which provides authentication and authorization.
2. Start kube-controller-manager:
# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl start kube-controller-manager
Note: this service listens on port 10252.
3. Start kube-scheduler:
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl start kube-scheduler
Note: this service listens on port 10251.
4. After starting each service, check its logs and status to confirm it reports no errors:
# systemctl status <service-name>
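As a quick alternative to reading netstat output, the three master ports can be probed with a small bash helper over /dev/tcp. This is a sketch to run on the Master; the ports are the defaults noted above, and `check_port` is a hypothetical helper name, not part of Kubernetes:

```shell
# Returns 0 if something is listening on host:port, non-zero otherwise.
check_port() {
  timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# 6443 = kube-apiserver, 10252 = kube-controller-manager, 10251 = kube-scheduler
for p in 6443 10252 10251; do
  if check_port 127.0.0.1 "$p"; then
    echo "port $p: listening"
  else
    echo "port $p: not listening"
  fi
done
```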
The services deployed on a Node are much simpler: only docker, kubelet, and kube-proxy are needed.
First configure the following file:
# cat /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Apply it with sysctl --system.
Upload the Kubernetes node binary package to the Node, extract it, and run:
tar xf kubernetes-node-linux-amd64.tar.gz 
cd kubernetes/node/bin
cp kubectl kubelet kube-proxy /usr/bin/
mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
mkdir /etc/kubernetes
1. Install Docker 17.03:
yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm -y
yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm -y
2. Configure the startup parameters:
vim /usr/lib/systemd/system/docker.service
...
ExecStart=/usr/bin/dockerd --registry-mirror https://qxx96o44.mirror.aliyuncs.com
...
3. Start Docker:
systemctl daemon-reload
systemctl enable docker
systemctl start docker
A kubelet client certificate must be configured on every Node.
Copy ca.crt and ca.key from the Master into the Node's ssl directory, then run the following commands to generate kubelet_client.csr and kubelet_client.crt:
# cd /etc/kubernetes/ssl/
# openssl genrsa -out kubelet_client.key 2048
# openssl req -new -key kubelet_client.key -subj "/CN=10.0.0.2" -out kubelet_client.csr    # CN is the Node's IP address
# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
1. Configure the systemd unit file:
# cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://kubernetes.io/doc
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubeconfig.yaml --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target
2. The configuration file:
# cat /etc/kubernetes/kubeconfig.yaml 
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet_client.crt
    client-key: /etc/kubernetes/ssl/kubelet_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
    server: https://10.0.0.1:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: my-context
current-context: my-context
3. Start the service:
# systemctl daemon-reload
# systemctl start kubelet
# systemctl enable kubelet
4. Verify on the Master:
[root@node-1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node-2    Ready     <none>    36m       v1.10.2
Note: the kubelet acts as the node agent; once it is installed, the node shows up on the Master. Its configuration is a YAML kubeconfig file, and the Master's address must be specified in that file. By default the kubelet listens on ports 10248, 10250, 10255, and 4194.
1. Create the systemd unit file:
# cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://kubernetes.io/doc
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS 
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2. Create the parameter file:
# cat /etc/kubernetes/proxy KUBE_PROXY_ARGS="--master=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig.yaml"
3. Start the service:
# systemctl daemon-reload
# systemctl start kube-proxy
# systemctl enable kube-proxy
Note: once started, kube-proxy listens on ports 10249 and 10256 by default.
With the deployment above complete, applications can be created. Before that, however, every Node must have the pause image locally; where the Google registry is unreachable (e.g. mainland China), pod creation will otherwise fail.
Run the following on each Node to work around the image problem:
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
A simple application will now be created to verify that the cluster works.
1. Edit the nginx.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80
2. Run:
# kubectl create -f nginx.yaml
3. Check the status:
[root@node-1 ~]# kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
myweb   2         2         2       3h
[root@node-1 ~]# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
myweb-qtgrv   1/1     Running   0          1h
myweb-z9d2c   1/1     Running   0          1h
[root@node-2 ~]# docker ps|grep nginx
067db96d0c97   nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g 'daemon ..."   About an hour ago   Up About an hour   k8s_myweb_myweb-qtgrv_default_3213ec67-5fef-11e8-9e43-000c295f81fb_0
dd8f7458e410   nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g 'daemon ..."   About an hour ago   Up About an hour   k8s_myweb_myweb-z9d2c_default_3214600e-5fef-11e8-9e43-000c295f81fb_0
4. Create a Service that maps the application onto a host port:
# cat nginx-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort      # expose the Service outside the cluster
  ports: 
  - port: 80
    nodePort: 30001   # externally reachable port mapped onto each host
  selector:
    app: myweb

# Create the Service
# kubectl create -f nginx-service.yaml

# Verify:
[root@node-1 ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.222.0.1     <none>        443/TCP        1d
myweb        NodePort    10.222.35.97   <none>        80:30001/TCP   1h
5. Port 30001 is now mapped on every node running kube-proxy; accessing it serves the default nginx welcome page.
# netstat -lntp|grep 30001tcp6 0 0 :::30001 :::* LISTEN 7713/kube-proxy
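The NodePort can be exercised with curl from any machine that can reach the nodes. A sketch; 10.0.0.2 and 30001 are this cluster's values, so substitute your own:

```shell
# Expect HTTP 200 from every node, not just the one running the pods —
# kube-proxy forwards the traffic either way.
code=$(curl -s --max-time 5 -o /dev/null -w '%{http_code}' http://10.0.0.2:30001 || true)
if [ "$code" = "200" ]; then
  echo "nginx reachable via NodePort"
else
  echo "no response (got '$code')"
fi
```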
The steps above give us a working Kubernetes cluster, but a real deployment also needs a network add-on so that pods can communicate with each other. Kubernetes does not provide pod networking itself; various third-party network plugins can supply it. A later post will cover Kubernetes networking.
Reposted from: https://blog.51cto.com/tryingstuff/2120374