Error downloading packages: 3:docker-ce-23.0.4-1.el7.x86_64: [Errno 256] No more mirrors to try.
[root@k8s-master01 ~]# sudo yum clean all
Loaded plugins: fastestmirror
Cleaning repos: base docker-ce-stable elrepo extras updates
Cleaning up list of fastest mirrors
Other repos take up 11 M of disk space (use --verbose for details)
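# If the failure was just stale mirror metadata, rebuilding the yum cache and
# retrying the install is usually enough (a sketch, not from the original session):
yum makecache fast
yum install -y docker-ce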
[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Flag --experimental-upload-certs has been deprecated, use --upload-certs instead
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.4. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.129.78.136]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.129.78.136 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [172.129.78.136 127.0.0.1 ::1]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.002118 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 9280d519bb53c33fec7149b1ac2e6f0385b863dcee2ff7bf901d07d715de4dea
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
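# The commands kubeadm prints at this point were not captured above; for v1.15
# this is the standard kubeconfig setup it emits:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config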
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/d893bcbfe6b04791054aea6c7569dea4080cc289/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
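# Quick check that the flannel DaemonSet pods come up (app=flannel is the label
# this manifest sets on its pods):
kubectl -n kube-system get pods -l app=flannel -o wide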
# Once the pod network is in place, worker nodes can join the cluster
[root@k8s-node01 ~]# kubeadm join 172.129.78.136:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:a93684cdb29000b025a9ed35054b9611bc913fe1ddbf880f8e9077b812704396
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.4. Latest validated version: 18.09
	[WARNING Hostname]: hostname "k8s-node01.novalocal" could not be reached
	[WARNING Hostname]: hostname "k8s-node01.novalocal": lookup k8s-node01.novalocal on 223.5.5.5:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
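# Back on the master, the new node should now be listed (it stays NotReady until
# the CNI problem found below is fixed):
kubectl get nodes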
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                       READY   STATUS              RESTARTS   AGE   IP       NODE           NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-gjnpg   0/1     ContainerCreating   0          19m   <none>   k8s-master01   <none>           <none>
coredns-5c98db65d4-v89m2   0/1     ContainerCreating   0          19m   <none>   k8s-master01   <none>           <none>
[root@k8s-master01 ~]# kubectl describe pods -n kube-system coredns-5c98db65d4-dhv4
...
Events:
  Type    Reason          Age                     From                             Message
  ----    ------          ----                    ----                             -------
  Normal  SandboxChanged  32m (x1132 over 4h37m)  kubelet, k8s-master01.novalocal  Pod sandbox changed, it will be killed and re-created.
# The events show no real error, so check the kubelet logs in the system journal to troubleshoot
[root@k8s-master01 ~]# journalctl -u kubelet
Apr 22 23:06:49 k8s-master01.novalocal kubelet[29592]: E0422 23:06:49.663339   29592 kuberuntime_gc.go:170] Failed to stop sandbox "5924b44e76f68d801163e3e53762cd85f25692821690fc0f5f11c58d640e65ed" before removing: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "coredns-5c98db65d4-sx9z5_kube-system" network: failed to find plugin "flannel" in path [/opt/cni/bin]
Apr 22 23:06:54 k8s-master01.novalocal kubelet[29592]: W0422 23:06:54.472770   29592 cni.go:309] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "81f56d50bac714133bbbc0132b378d5a57383203050febb5ad36c8a7d5cf022f"
Apr 22 23:06:54 k8s-master01.novalocal kubelet[29592]: E0422 23:06:54.478568   29592 cni.go:352] Error deleting kube-system_coredns-5c98db65d4-nf2xb/81f56d50bac714133bbbc0132b378d5a57383203050febb5ad36c8a7d5cf022f from network flannel/cbr0: failed to find plugin "flannel" in path [/opt/cni/bin]
Apr 22 23:06:54 k8s-master01.novalocal kubelet[29592]: E0422 23:06:54.479098   29592 remote_runtime.go:128] StopPodSandbox "81f56d50bac714133bbbc0132b378d5a57383203050febb5ad36c8a7d5cf022f" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "coredns-5c98db65d4-nf2xb_kube-system" network: failed to find plugin "flannel" in path [/opt/cni/bin]
Apr 22 23:06:54 k8s-master01.novalocal kubelet[29592]: E0422 23:06:54.479146   29592 kuberuntime_manager.go:845] Failed to stop sandbox {"docker" "81f56d50bac714133bbbc0132b378d5a57383203050febb5ad36c8a7d5cf022f"}
Apr 22 23:06:54 k8s-master01.novalocal kubelet[29592]: E0422 23:06:54.479200   29592 kuberuntime_manager.go:640] killPodWithSyncResult failed: failed to "KillPodSandbox" for "db56e197-6d03-4628-984b-2694f5da5edc" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"coredns-5c98db65d4-nf2xb_kube-system\" network: failed to find plugin \"flannel\" in path [/opt/cni/bin]"
Apr 22 23:06:54 k8s-master01.novalocal kubelet[29592]: E0422 23:06:54.479222   29592 pod_workers.go:190] Error syncing pod db56e197-6d03-4628-984b-2694f5da5edc ("coredns-5c98db65d4-nf2xb_kube-system(db56e197-6d03-4628-984b-2694f5da5edc)"), skipping: failed to "KillPodSandbox" for "db56e197-6d03-4628-984b-2694f5da5edc" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"coredns-5c98db65d4-nf2xb_kube-system\" network: failed to find plugin \"flannel\" in path [/opt/cni/bin]"
Apr 22 23:07:01 k8s-master01.novalocal kubelet[29592]: W0422 23:07:01.472361   29592 cni.go:309] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "3d132d33ad7a6256158167561dcfc2ffbd76398ec61daadecae244d9ff80d73e"
Apr 22 23:07:01 k8s-master01.novalocal kubelet[29592]: E0422 23:07:01.477122   29592 cni.go:352] Error deleting kube-system_coredns-5c98db65d4-dhv45/3d132d33ad7a6256158167561dcfc2ffbd76398ec61daadecae244d9ff80d73e from network flannel/cbr0: failed to find plugin "flannel" in path [/opt/cni/bin]
Apr 22 23:07:01 k8s-master01.novalocal kubelet[29592]: E0422 23:07:01.477687   29592 remote_runtime.go:128] StopPodSandbox "3d132d33ad7a6256158167561dcfc2ffbd76398ec61daadecae244d9ff80d73e" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "coredns-5c98db65d4-dhv45_kube-system" network: failed to find plugin "flannel" in path [/opt/cni/bin]
Apr 22 23:07:01 k8s-master01.novalocal kubelet[29592]: E0422 23:07:01.477733   29592 kuberuntime_manager.go:845] Failed to stop sandbox {"docker" "3d132d33ad7a6256158167561dcfc2ffbd76398ec61daadecae244d9ff80d73e"}
Apr 22 23:07:01 k8s-master01.novalocal kubelet[29592]: E0422 23:07:01.477805   29592 kuberuntime_manager.go:640] killPodWithSyncResult failed: failed to "KillPodSandbox" for "d70ce214-46c7-4d89-aa39-7c437e430ec4" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"coredns-5c98db65d4-dhv45_kube-system\" network: failed to find plugin \"flannel\" in path [/opt/cni/bin]"
Apr 22 23:07:01 k8s-master01.novalocal kubelet[29592]: E0422 23:07:01.478155   29592 pod_workers.go:190] Error syncing pod d70ce214-46c7-4d89-aa39-7c437e430ec4 ("coredns-5c98db65d4-dhv45_kube-system(d70ce214-46c7-4d89-aa39-7c437e430ec4)"), skipping: failed to "KillPodSandbox" for "d70ce214-46c7-4d89-aa39-7c437e430ec4" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"coredns-5c98db65d4-dhv45_kube-system\" network: failed to find plugin \"flannel\" in path [/opt/cni/bin]"
# The logs complain that /opt/cni/bin has no "flannel" binary,
# so installing the missing CNI plugin fixes it
// https://github.com/containernetworking/plugins/releases/tag/v0.8.6
// https://blog.csdn.net/qq_29385297/article/details/127682552
[root@k8s-master01 ~]# cd /opt/cni/bin
[root@k8s-master01 bin]# ls -al
total 52828
drwxr-xr-x. 2 root root     263 Apr 22 23:07 .
drwxr-xr-x. 3 root root      17 Apr 22 18:39 ..
-rwxr-xr-x. 1 root root 2782728 Jan 19 05:09 bandwidth
-rwxr-xr-x. 1 root root 3104192 Jan 19 05:09 bridge
-rwxr-xr-x. 1 root root 7607056 Jan 19 05:09 dhcp
-rwxr-xr-x. 1 root root 2863024 Jan 19 05:09 dummy
-rwxr-xr-x. 1 root root 3165352 Jan 19 05:09 firewall
-rwxr-xr-x. 1 root root 2775224 Jan 19 05:09 host-device
-rwxr-xr-x. 1 root root 2332792 Jan 19 05:09 host-local
-rwxr-xr-x. 1 root root 2871792 Jan 19 05:09 ipvlan
-rwxr-xr-x. 1 root root 2396976 Jan 19 05:09 loopback
-rwxr-xr-x. 1 root root 2893624 Jan 19 05:09 macvlan
-rwxr-xr-x. 1 root root 2689440 Jan 19 05:09 portmap
-rwxr-xr-x. 1 root root 3000032 Jan 19 05:09 ptp
-rwxr-xr-x. 1 root root 2542400 Jan 19 05:09 sbr
-rwxr-xr-x. 1 root root 2074072 Jan 19 05:09 static
-rwxr-xr-x. 1 root root 2456920 Jan 19 05:09 tuning
-rwxr-xr-x. 1 root root 2867512 Jan 19 05:09 vlan
-rwxr-xr-x. 1 root root 2566424 Jan 19 05:09 vrf
# flannel is indeed missing; fetch the v0.8.6 bundle (the last release series that
# still ships the flannel plugin) and unpack it straight into /opt/cni/bin
[root@k8s-master01 ~]# wget https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
[root@k8s-master01 ~]# tar -zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
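# With the plugin binary in place, recreating the stuck CoreDNS pods lets the
# sandbox setup run again (a sketch; kubeadm's CoreDNS pods carry the
# k8s-app=kube-dns label):
kubectl -n kube-system delete pod -l k8s-app=kube-dns
kubectl -n kube-system get pods -o wide -w    # watch them reach Running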
[root@k8s-master01 ~]# kubectl describe pod nginx-deployment-68c7f5464c-5722g
Events:
  Type    Reason     Age   From                           Message
  ----    ------     ----  ----                           -------
  Normal  Scheduled  31m   default-scheduler              Successfully assigned default/nginx-deployment-68c7f5464c-5722g to k8s-node02.novalocal
  Normal  Pulling    31m   kubelet, k8s-node02.novalocal  Pulling image "nginx:latest"
  Normal  Pulled     30m   kubelet, k8s-node02.novalocal  Successfully pulled image "nginx:latest"
  Normal  Created    30m   kubelet, k8s-node02.novalocal  Created container nginx
  Normal  Started    30m   kubelet, k8s-node02.novalocal  Started container nginx
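# The Deployment and Service used for this test are not shown in the notes; a
# minimal pair consistent with the names and ports in the surrounding output
# (the app=nginx label and replica count are assumptions) would be:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    # nodePort omitted, so the cluster assigns one (32750 in this session)
EOF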
[root@k8s-master01 ~]# kubectl get svc
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        4h55m
nginx-service   NodePort    10.96.196.183   <none>        80:32750/TCP   31m
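# The NodePort answers on any node's IP; using the master address from earlier:
curl http://172.129.78.136:32750    # should return the nginx welcome page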
[root@harbor harbor]# vi harbor.cfg
## Configuration file of Harbor
#The IP address or hostname to access admin UI and registry service.
#DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname = 172.129.78.187    // the only setting that needs changing: serve over http, without certificates
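# Harbor itself is then brought up with the bundled installer in the same directory:
./install.sh
# Because Harbor is served over plain http, every Docker daemon that talks to it
# must also trust the address as an insecure registry, or login/push will fail
# with TLS errors (a sketch; merge with any existing /etc/docker/daemon.json):
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["172.129.78.187"]
}
EOF
systemctl restart docker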
[root@k8s-node02 ~]# docker login http://172.129.78.187
Username: admin
Password:
Error response from daemon: Get "http://172.129.78.187/v2/": unauthorized: authentication required
# first attempt rejected, presumably a mistyped password; retry
[root@k8s-node02 ~]# docker login http://172.129.78.187
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@k8s-node02 ~]# docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
2db29710123e: Pull complete
Digest: sha256:4e83453afed1b4fa1a3500525091dbfca6ce1e66903fd4c01ff015dbcb1ba33e
Status: Downloaded newer image for hello-world:latest
docker.io/library/hello-world:latest
[root@k8s-node02 ~]# docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
 3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID: https://hub.docker.com/
For more examples and ideas, visit: https://docs.docker.com/get-started/
[root@k8s-node02 ~]# docker tag hello-world:latest 172.129.78.187/library/hello-world-local:latest
[root@k8s-node02 ~]# docker push 172.129.78.187/library/hello-world-local:latest
The push refers to repository [172.129.78.187/library/hello-world-local]
e07ee1baac5f: Pushed
latest: digest: sha256:f54a58bc1aac5ea1a25d796ae155dc228b3f0e11d046ae276b39c4bf2f13d8c4 size: 525
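# Round-trip check: remove the local copy and pull the image back from Harbor
docker rmi 172.129.78.187/library/hello-world-local:latest
docker pull 172.129.78.187/library/hello-world-local:latest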