kubeadm is a tool released by the Kubernetes community for deploying a Kubernetes cluster quickly. It can stand up a cluster with two commands:

- create a master node: `kubeadm init`
- join a node to the cluster: `kubeadm join <master IP:port>`

## 1. Installation requirements

Before starting, the machines used for the cluster must satisfy the following:

- One or more machines running CentOS 7, x86_64
- Hardware: 2 GB RAM or more, 2 or more CPUs, 30 GB disk or more
- Internet access for pulling images; if the servers are offline, download the images in advance and import them onto the nodes
- swap disabled

## 2. Prepare the environment

| Role | IP |
| --- | --- |
| master1 | 192.168.3.155 |
| master2 | 192.168.3.156 |
| node1 | 192.168.3.157 |
| VIP (virtual IP) | 192.168.3.158 |

```shell
# Disable the firewall (a minimal install may not have firewalld at all)
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0                                        # temporary

# Disable swap
swapoff -a                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent

# Set the hostname according to the plan (master1, master2, node1 respectively)
hostnamectl set-hostname <hostname>
```

Add the host entries on the masters:

```shell
cat >> /etc/hosts << EOF
192.168.3.158 master.k8s.io   k8s-vip
192.168.3.155 master01.k8s.io master1
192.168.3.156 master02.k8s.io master2
192.168.3.157 node01.k8s.io   node1
EOF
```

Run `ping node1` or `ping node01.k8s.io` to confirm the entries work.

Pass bridged IPv4 traffic to the iptables chains:

```shell
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply
```

Synchronize the clocks:

```shell
yum install ntpdate -y
ntpdate time.windows.com
```

## 3. Deploy keepalived on all master nodes

### 3.1 Install dependencies and keepalived

```shell
yum install -y conntrack-tools libseccomp libtool-ltdl
yum install -y keepalived
```

### 3.2 Configure the master nodes

master1 (replace `eno33554984` with the actual interface name of your host if it differs):

```shell
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface eno33554984
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.3.158
    }
    track_script {
        check_haproxy
    }
}
EOF
```

master2 (identical except `state BACKUP` and `priority 200`):

```shell
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived

global_defs {
   router_id k8s
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno33554984
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass ceb1b3ec013d66163d6ab
    }
    virtual_ipaddress {
        192.168.3.158
    }
    track_script {
        check_haproxy
    }
}
EOF
```

### 3.3 Start and check
Run on both master nodes:

```shell
systemctl start keepalived.service    # start keepalived
systemctl enable keepalived.service   # enable at boot
systemctl status keepalived.service   # check status
```

Taking master1 as an example:

```
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-02-06 02:42:13 EST; 11s ago
 Main PID: 2985 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─2985 /usr/sbin/keepalived -D
           ├─2986 /usr/sbin/keepalived -D
           └─2987 /usr/sbin/keepalived -D

Feb 06 02:42:15 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on eno33554984 for 192.168.3.158
Feb 06 02:42:20 master1 Keepalived_vrrp[2987]: Sending gratuitous ARP on eno33554984 for 192.168.3.158
(the gratuitous ARP message repeats several more times)
```

After starting, check the NIC on master1; the VIP 192.168.3.158 should be bound:

```shell
ip a s eno33554984
```

```
3: eno33554984: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:b8:e6:c1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.155/24 brd 192.168.3.255 scope global eno33554984
       valid_lft forever preferred_lft forever
    inet 192.168.3.158/32 scope global eno33554984
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feb8:e6c1/64 scope link
       valid_lft forever preferred_lft forever
```

## 4. Deploy haproxy

### 4.1 Install

```shell
yum install -y haproxy
```

### 4.2 Configure
The configuration is identical on both master nodes. It declares the two backend master apiservers and sets haproxy to listen on port 16443, so port 16443 becomes the cluster entry point:

```shell
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      master01.k8s.io   192.168.3.155:6443 check
    server      master02.k8s.io   192.168.3.156:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
```

### 4.3 Start and check
Start on both masters:

```shell
systemctl enable haproxy   # enable at boot
systemctl start haproxy    # start haproxy
systemctl status haproxy   # check status
```

Taking master1 as an example:

```
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-02-06 02:43:21 EST; 7s ago
 Main PID: 3067 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─3067 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           ├─3068 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─3069 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds

Feb 06 02:43:21 master1 systemd[1]: Started HAProxy Load Balancer.
Feb 06 02:43:21 master1 systemd[1]: Starting HAProxy Load Balancer...
Feb 06 02:43:21 master1 haproxy-systemd-wrapper[3067]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Feb 06 02:43:21 master1 haproxy-systemd-wrapper[3067]: [WARNING] 036/024321 (3068) : config : 'option forwardfor' ignored for frontend 'kubernetes-api...TP mode.
Feb 06 02:43:21 master1 haproxy-systemd-wrapper[3067]: [WARNING] 036/024321 (3068) : config : 'option forwardfor' ignored for backend 'kubernetes-apis...TP mode.
Hint: Some lines were ellipsized, use -l to show in full.
```

Check the ports:

```shell
yum install -y net-tools
netstat -lntup | grep haproxy
```

```
tcp   0   0 0.0.0.0:1080    0.0.0.0:*   LISTEN   3069/haproxy
tcp   0   0 0.0.0.0:16443   0.0.0.0:*   LISTEN   3069/haproxy
udp   0   0 0.0.0.0:52599   0.0.0.0:*            3068/haproxy
```

## 5. Install Docker, kubeadm and kubelet on all nodes

Kubernetes defaults to Docker as its CRI (container runtime), so install Docker first.
### 5.1 Install Docker

```shell
yum install -y wget
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker
systemctl start docker
systemctl status docker
```

Taking master1 as an example:

```
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-02-06 02:46:29 EST; 6s ago
     Docs: https://docs.docker.com
 Main PID: 14229 (dockerd)
   Memory: 49.1M
   CGroup: /system.slice/docker.service
           ├─14229 /usr/bin/dockerd
           └─14236 docker-containerd --config /var/run/docker/containerd/containerd.toml

Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29..." level=info msg="Loading containers: done."
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29..." level=info msg="Daemon has completed initialization"
Feb 06 02:46:29 master1 dockerd[14229]: time="2022-02-06T02:46:29..." level=info msg="API listen on /var/run/docker.sock"
Feb 06 02:46:29 master1 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
```

```shell
docker --version
# Docker version 18.06.1-ce, build e68fc7a
```

Configure an Aliyun registry mirror and restart Docker:

```shell
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl restart docker
```
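As an optional sanity check not in the original procedure: a malformed `daemon.json` prevents dockerd from starting, so it is worth validating the file before restarting. A minimal sketch, assuming a stock CentOS 7 host (the `validate_daemon_json` helper name is illustrative):

```shell
# Illustrative helper: fails unless the given file is valid JSON.
# Uses whichever python is on PATH (stock CentOS 7 ships Python 2).
validate_daemon_json() {
  PY=$(command -v python || command -v python3)
  "$PY" -m json.tool "$1" > /dev/null && echo "valid JSON: $1"
}

# Usage on a node:
#   validate_daemon_json /etc/docker/daemon.json
#   systemctl restart docker
#   docker info | grep -A1 'Registry Mirrors'   # confirm the mirror was picked up
```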
### 5.2 Add the Aliyun YUM repository

```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

### 5.3 Install kubeadm, kubelet and kubectl

Since releases are updated frequently, pin the version to deploy:

```shell
yum install -y kubelet-1.16.3 kubeadm-1.16.3 kubectl-1.16.3
systemctl enable kubelet
```

## 6. Deploy the Kubernetes master

### 6.1 Create the kubeadm configuration file

Operate on the master holding the VIP, here master1:

```shell
mkdir /usr/local/kubernetes/manifests -p
cd /usr/local/kubernetes/manifests/
vi kubeadm-config.yaml
```

Create the file with the following content:

```yaml
apiServer:
  certSANs:
    - master1
    - master2
    - master.k8s.io
    - 192.168.3.158
    - 192.168.3.155
    - 192.168.3.156
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "master.k8s.io:16443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.1.0.0/16
scheduler: {}
```

### 6.2 Run kubeadm init on master1

```shell
kubeadm init --config kubeadm-config.yaml
```

Save the following from the `kubeadm init` output; it is needed shortly:

```
kubeadm join master.k8s.io:16443 --token a8r4cl.ipnc8uwnwg35alhn \
    --discovery-token-ca-cert-hash sha256:2686517c55d2093a7e59ca34ecf72a1a44b36b416e1c2d9ac15565e5b2affb39 \
    --control-plane
```

Configure the environment as prompted so that kubectl works:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

```shell
kubectl get nodes
```
```
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   master   68s   v1.16.3
```
```shell
kubectl get pods -n kube-system
```
```
NAME                              READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-r5q69          0/1     Pending   0          59s
coredns-58cc8c89f4-tkhpq          0/1     Pending   0          59s
etcd-master1                      1/1     Running   0          19s
kube-apiserver-master1            1/1     Running   0          7s
kube-controller-manager-master1   0/1     Pending   0          2s
kube-proxy-68d6s                  1/1     Running   0          59s
kube-scheduler-master1            0/1     Pending   0          4s
```
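A note in case the saved join command is lost: bootstrap tokens expire (24 hours by default), and a fresh join command can be printed on a master with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` value can also be recomputed from the cluster CA certificate at any time; a sketch, wrapped in an illustrative helper function:

```shell
# Illustrative helper: recompute the sha256 discovery hash from a CA cert
# (the kubeadm default path is /etc/kubernetes/pki/ca.crt).
compute_discovery_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
}

# On a master: compute_discovery_hash /etc/kubernetes/pki/ca.crt
```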
Check the cluster status:

```shell
kubectl get cs
```
```
NAME                 AGE
controller-manager   <unknown>
scheduler            <unknown>
etcd-0               <unknown>
```
```shell
kubectl get pods -n kube-system
```
```
NAME                              READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-r5q69          0/1     Pending   0          79s
coredns-58cc8c89f4-tkhpq          0/1     Pending   0          79s
etcd-master1                      1/1     Running   0          39s
kube-apiserver-master1            1/1     Running   0          27s
kube-controller-manager-master1   1/1     Running   0          22s
kube-proxy-68d6s                  1/1     Running   0          79s
kube-scheduler-master1            1/1     Running   0          24s
```

## 7. Install the cluster network

Create kube-flannel.yml on master1. The flannel v0.11.0 manifest also defines equivalent DaemonSets for the arm64, arm, ppc64le and s390x architectures; they are identical to the amd64 DaemonSet below apart from the `beta.kubernetes.io/arch` value and the image tag, and are omitted here for brevity (all nodes in this setup are x86_64). They account for the extra `created` lines in the apply output further down.

```shell
cat > kube-flannel.yml << EOF
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["nodes/status"]
    verbs: ["patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
# (DaemonSets kube-flannel-ds-arm64, -arm, -ppc64le and -s390x omitted;
#  identical except for the arch nodeSelector value and image tag.)
EOF
```

Install the flannel network:

```shell
kubectl apply -f kube-flannel.yml
```
```
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
```

Check (wait a moment after the apply before checking):

```shell
kubectl get pods -n kube-system
```
```
NAME                              READY   STATUS    RESTARTS   AGE
coredns-58cc8c89f4-r5q69          0/1     Pending   0          2m18s
coredns-58cc8c89f4-tkhpq          0/1     Pending   0          2m18s
etcd-master1                      1/1     Running   0          98s
kube-apiserver-master1            1/1     Running   0          86s
kube-controller-manager-master1   1/1     Running   0          81s
kube-flannel-ds-amd64-7qhgr       1/1     Running   0          35s
kube-proxy-68d6s                  1/1     Running   0          2m18s
kube-scheduler-master1            1/1     Running   0          83s
```
```shell
kubectl get nodes
```
```
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   2m46s   v1.16.3
```

## 8. Join master2 to the cluster

### 8.1 Copy the keys and related files

Copy the keys and related files from master1 to master2:

```shell
ssh root@192.168.3.156 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@192.168.3.156:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.3.156:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.3.156:/etc/kubernetes/pki/etcd
```

### 8.2 Join master2 to the cluster
On master2, run the join command printed by `kubeadm init` on master1. The `--control-plane` flag joins the node as a control-plane (master) node:

```shell
kubeadm join master.k8s.io:16443 --token a8r4cl.ipnc8uwnwg35alhn \
    --discovery-token-ca-cert-hash sha256:2686517c55d2093a7e59ca34ecf72a1a44b36b416e1c2d9ac15565e5b2affb39 \
    --control-plane

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Check the status (on master1):

```shell
kubectl get node
```
```
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   6m28s   v1.16.3
master2   Ready    master   43s     v1.16.3
```
```shell
kubectl get pods --all-namespaces
```
```
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-r5q69          1/1     Running   0          6m28s
kube-system   coredns-58cc8c89f4-tkhpq          1/1     Running   0          6m28s
kube-system   etcd-master1                      1/1     Running   0          5m48s
kube-system   etcd-master2                      1/1     Running   0          60s
kube-system   kube-apiserver-master1            1/1     Running   0          5m36s
kube-system   kube-apiserver-master2            1/1     Running   0          61s
kube-system   kube-controller-manager-master1   1/1     Running   1          5m31s
kube-system   kube-controller-manager-master2   1/1     Running   0          61s
kube-system   kube-flannel-ds-amd64-2ntbn       1/1     Running   0          61s
kube-system   kube-flannel-ds-amd64-7qhgr       1/1     Running   0          4m45s
kube-system   kube-proxy-68d6s                  1/1     Running   0          6m28s
kube-system   kube-proxy-kdjwc                  1/1     Running   0          61s
kube-system   kube-scheduler-master1            1/1     Running   1          5m33s
kube-system   kube-scheduler-master2            1/1     Running   0          61s
```

## 9. Join a Kubernetes worker node

Run the following on node1.
To add a new worker node to the cluster, run the `kubeadm join` command from the `kubeadm init` output, this time without `--control-plane`:

```shell
kubeadm join master.k8s.io:16443 --token a8r4cl.ipnc8uwnwg35alhn \
    --discovery-token-ca-cert-hash sha256:2686517c55d2093a7e59ca34ecf72a1a44b36b416e1c2d9ac15565e5b2affb39
```
```
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```

Reinstall the cluster network, since a new node has been added (on master1):

```shell
kubectl delete -f kube-flannel.yml
kubectl apply -f kube-flannel.yml
```
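Once the manifest is reapplied, the DaemonSet should schedule exactly one flannel pod per (amd64) node. A quick hedged check, run on master1 once the pods settle:

```shell
# One kube-flannel pod per node is expected; -o wide shows the NODE column
kubectl -n kube-system get pods -o wide | grep kube-flannel

# DESIRED/READY of the DaemonSet should equal the node count (3 in this setup)
kubectl -n kube-system get daemonset kube-flannel-ds-amd64
```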
Check the status (on master1):

```shell
kubectl get node
```
```
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   9m10s   v1.16.3
master2   Ready    master   3m25s   v1.16.3
node1     Ready    <none>   104s    v1.16.3
```
```shell
kubectl get pods --all-namespaces
```
```
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-58cc8c89f4-r5q69          1/1     Running   0          9m8s
kube-system   coredns-58cc8c89f4-tkhpq          1/1     Running   0          9m8s
kube-system   etcd-master1                      1/1     Running   0          8m28s
kube-system   etcd-master2                      1/1     Running   0          3m40s
kube-system   kube-apiserver-master1            1/1     Running   0          8m16s
kube-system   kube-apiserver-master2            1/1     Running   0          3m41s
kube-system   kube-controller-manager-master1   1/1     Running   1          8m11s
kube-system   kube-controller-manager-master2   1/1     Running   0          3m41s
kube-system   kube-flannel-ds-amd64-44fdc       1/1     Running   0          93s
kube-system   kube-flannel-ds-amd64-swgdm       1/1     Running   0          93s
kube-system   kube-flannel-ds-amd64-swwck       1/1     Running   0          93s
kube-system   kube-proxy-68d6s                  1/1     Running   0          9m8s
kube-system   kube-proxy-kdjwc                  1/1     Running   0          3m41s
kube-system   kube-proxy-lvc4v                  1/1     Running   0          2m
kube-system   kube-scheduler-master1            1/1     Running   1          8m13s
kube-system   kube-scheduler-master2            1/1     Running   0          3m41s
```

## 10. Test the Kubernetes cluster

Create a pod in the cluster and verify that it runs normally:

```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
```
```
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-86c57db685-2mwds   0/1     ContainerCreating   0          8s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP        9m48s
service/nginx        NodePort    10.1.4.26    <none>        80:31030/TCP   5s
```

Access: http://192.168.3.158:31030
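Besides the browser, the NodePort can be checked from the command line. A NodePort service answers on every node's IP, and on the VIP while keepalived holds it; a sketch, assuming curl is installed and the nginx pod has left ContainerCreating:

```shell
# HEAD request through the VIP, then directly against the worker node
curl -I http://192.168.3.158:31030
curl -I http://192.168.3.157:31030
```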