A Complete Guide to Common Kubernetes Cluster Operations

Mondo Tech · Updated 2024-02-29

This guide covers the most common Kubernetes cluster management operations and can serve as a quick reference manual.

Translated from Mohamed Ben Hassine's "A Comprehensive Guide to Common Operations in a Kubernetes Cluster".

# uninstall:
kubeadm reset

# cleanup:
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
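Note that kubeadm reset itself does not clean up iptables or IPVS rules; its own output tells you to do that manually. A minimal sketch for finishing the cleanup on each node:

# flush iptables rules left behind by kube-proxy and the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# if kube-proxy ran in IPVS mode and ipvsadm is installed, clear the IPVS tables too
ipvsadm --clear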
[root@teckbootcamps ~]# ps -ef | grep kube
root      8395 26979  0 18:03 pts/1    00:00:00 grep --color=auto kube
root     20501     1  2 13:42 ?        00:06:50 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
root     20744 20728  0 13:42 ?        00:02:26 etcd --advertise-client-urls=... --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=... --initial-cluster=teckbootcamps=... --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=... --listen-peer-urls=... --name=teckbootcamps --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
root     20793 20745  1 13:42 ?        00:03:56 kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --cluster-cidr=192.168.0.0/16 --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --node-cidr-mask-size=24 --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --use-service-account-credentials=true
root     20806 20746  1 13:42 ?        00:04:47 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.17.211.142 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=... --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
root     20814 20760  0 13:42 ?        00:01:18 kube-scheduler --address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true
root     21095 21071  0 13:43 ?        00:00:22 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
root     22065 22047  0 13:43 ?        00:00:03 /usr/bin/kube-controllers
65534    22166 22137  0 13:43 ?        00:00:12 /heapster --source=kubernetes:... --sink=influxdb:...
[root@teckbootcamps ~]# swapoff -a && systemctl stop kubelet
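swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab is usually commented out as well; a minimal sketch:

# comment out every swap line so swap stays disabled after a reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab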
[root@teckbootcamps /]# kubectl cluster-info
Kubernetes master is running at https://...
KubeDNS is running at https://...
Heapster is running at https://...
monitoring-grafana is running at https://...
monitoring-influxdb is running at https://...

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@teckbootcamps ~]# kubectl cluster-info dump
[root@teckbootcamps ~]# kubectl -n kube-system get deployments
NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers             1         1         1            1           98d
coredns                             2         2         2            2           98d
heapster                            1         1         1            1           98d
heapster-teckbootcamps              1         0         0            0           1h
heapster-teckbootcamps2             1         0         0            0           1h
kubernetes-dashboard                1         1         1            1           98d
monitoring-grafana                  1         1         1            1           98d
monitoring-grafana-teckbootcamps    1         0         0            0           1h
monitoring-influxdb                 1         1         1            1           98d
monitoring-influxdb-teckbootcamps   1         0         0            0           2h
[root@teckbootcamps ~]# kubectl -n kube-system delete deployment heapster-teckbootcamps
deployment.extensions "heapster-teckbootcamps" deleted
[root@teckbootcamps shell]# kubectl -n kube-system get svc -o wide
NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
heapster               ClusterIP   10.106.70.78     <none>        80/TCP                   23h   k8s-app=heapster
kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   23h   k8s-app=kube-dns
kubelet                ClusterIP   None             <none>        10250/TCP                23h   <none>
kubernetes-dashboard   NodePort    10.110.202.105   <none>        443:32000/TCP            23h   k8s-app=kubernetes-dashboard
monitoring-grafana     NodePort    10.98.68.122     <none>        80:32001/TCP             23h   k8s-app=grafana
monitoring-influxdb    ClusterIP   10.104.109.169   <none>        8086/TCP                 23h   k8s-app=influxdb
[root@teckbootcamps ~]# kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
teckbootcamps    Ready    <none>   90d   v1.26.0
teckbootcamps2   Ready    <none>   90d   v1.26.0
teckbootcamps3   Ready    master   98d   v1.26.0
[root@teckbootcamps3 ~]# kubectl get sa --all-namespaces
NAMESPACE     NAME                      SECRETS   AGE
default       default                   1         98d
kube-public   default                   1         98d
kube-system   attachdetach-controller   1         98d
... (output truncated for brevity)
kube-system   service-controller        1         98d
kube-system   statefulset-controller    1         98d
kube-system   token-cleaner             1         98d
kube-system   ttl-controller            1         98d
[root@teckbootcamps /]# kubectl get service -l k8s-app=kube-dns --namespace=kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP   12d
[root@teckbootcamps /]# kubectl get pod --selector k8s-app=kube-dns --namespace=kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-78fcdf6894-m7rgl   1/1     Running   0          3d
coredns-78fcdf6894-tpkql   1/1     Running   0          3d
[root@teckbootcamps /]# kubectl -s <api-server-url> get componentstatus
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy
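Note that componentstatus is deprecated in newer Kubernetes releases. On clusters running v1.16 or later, the API server's aggregated health endpoints provide the same signal; for example:

# query the API server's health check endpoints directly
kubectl get --raw='/livez?verbose'
kubectl get --raw='/readyz?verbose'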
[root@teckbootcamps shell]# kubectl get endpoints
NAME         ENDPOINTS             AGE
kubernetes   172.17.211.142:6443   23h
[root@teckbootcamps /]# kubectl -s <api-server-url> get nodes
NAME             STATUS   ROLES    AGE   VERSION
teckbootcamps    Ready    <none>   3d    v1.11.0
teckbootcamps2   Ready    <none>   3d    v1.11.0
teckbootcamps3   Ready    master   12d   v1.11.0
[root@teckbootcamps shell]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
teckbootcamps    Ready    <none>   17m   v1.14.4
teckbootcamps2   Ready    <none>   13h   v1.14.4
teckbootcamps3   Ready    master   23h   v1.14.4

[root@teckbootcamps shell]# kubectl describe node teckbootcamps
Name:               teckbootcamps
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=teckbootcamps
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    projectcalico.org/IPv4Address: 172.17.211.143/20
                    projectcalico.org/IPv4IPIPTunnelAddr: 100.67.134.64
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 28 Jul 2019 10:51:34 +0800
Taints:             <none>
Unschedulable:      false
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 28 Jul 2019 10:51:58 +0800   Sun, 28 Jul 2019 10:51:58 +0800   CalicoIsUp                   Calico is running on this node
  MemoryPressure       False   Sun, 28 Jul 2019 11:09:15 +0800   Sun, 28 Jul 2019 10:51:33 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 28 Jul 2019 11:09:15 +0800   Sun, 28 Jul 2019 10:51:33 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 28 Jul 2019 11:09:15 +0800   Sun, 28 Jul 2019 10:51:33 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 28 Jul 2019 11:09:15 +0800   Sun, 28 Jul 2019 10:52:04 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.17.211.143
  Hostname:    teckbootcamps
Capacity:
  cpu:                2
  ephemeral-storage:  41152832Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3882308Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  37926449909
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3779908Ki
  pods:               110
System Info:
  Machine ID:                 7d26c16f128042a684ea474c9e2c240f
  System UUID:                09d50368-65d8-41bd-a923-fbcf9b8851ab
  Boot ID:                    acc62473-6237-49e9-8bf8-222771e267e1
  Kernel Version:             3.10.0-327.28.2.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://...
  Kubelet Version:            v1.14.4
  Kube-Proxy Version:         v1.14.4
PodCIDR:                      100.64.2.0/24
Non-terminated Pods:          (4 in total)
  Namespace    Name                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                                ------------  ----------  ---------------  -------------  ---
  kube-system  calico-node-stc89                   250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
  kube-system  kube-proxy-qzplb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
  kube-system  kube-sealyun-lvscare-teckbootcamps  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
  monitoring   node-exporter-tz6ms                 112m (5%)     270m (13%)  200Mi (5%)       240Mi (6%)     18m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                362m (18%)  270m (13%)
  memory             200Mi (5%)  240Mi (6%)
  ephemeral-storage  0 (0%)      0 (0%)
Events:
  Type     Reason                            Age                 From                       Message
  ----     ------                            ----                ----                       -------
  Normal   Starting                          18m                 kubelet, teckbootcamps     Starting kubelet.
  Normal   NodeHasSufficientMemory           18m                 kubelet, teckbootcamps     Node teckbootcamps status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure             18m                 kubelet, teckbootcamps     Node teckbootcamps status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID              18m                 kubelet, teckbootcamps     Node teckbootcamps status is now: NodeHasSufficientPID
  Normal   Starting                          18m                 kube-proxy, teckbootcamps  Starting kube-proxy.
  Normal   NodeReady                         17m                 kubelet, teckbootcamps     Node teckbootcamps status is now: NodeReady
  Warning  ImageGCFailed                     13m                 kubelet, teckbootcamps     wanted to free 1747577241 bytes, but freed 1771511194 bytes space with errors in image deletion: [rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete abf312888d13 (must be forced) - image is being used by stopped container e5285e77b550, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "tutum/influxdb:0.13" (must force) - container c986b59b91ed is using its referenced image 39fa42a093e0, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "teckbootcamps-tomcat8-2:latest" (must force) - container ddc7e49946f1 is using its referenced image c375edce8dfd, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to remove repository reference "teckbootcamps-tomcat8:latest" (must force) - container f627e4cb0dbc is using its referenced image 7e69e1b21246]
  Warning  FailedNodeAllocatableEnforcement  27s (x19 over 18m)  kubelet, teckbootcamps     Failed to update Node Allocatable Limits ["kubepods"]: failed to set supported cgroup subsystems for cgroup [kubepods]: failed to find subsystem mount for required subsystem: pids
[root@teckbootcamps /]# journalctl -xeu kubelet
-- Logs begin at Mon 2023-02-27 10:00:20 UTC, end at Wed 2023-02-28 10:16:01 UTC. --
Feb 28 10:16:01 teckbootcamps kubelet[1963]: I0228 10:16:01.517740    1963 kubelet.go:2107] SyncLoop (PLEG): "kube-proxy-teckbootcamps/172.17.211.142:6443" did not receive an update for 3m45.215672686s, fallback to rate-limited sync
Feb 28 10:16:01 teckbootcamps kubelet[1963]: I0228 10:16:01.517792    1963 kubelet.go:2107] SyncLoop (PLEG): "weave-net-c69dt_kube-system(87511ea3-98d5-4ff6-89a2-74fd50f6d7f2)": waiting for 3m20.702903443s to sync
Feb 28 10:16:01 teckbootcamps kubelet[1963]: I0228 10:16:01.517814    1963 kubelet.go:2107] SyncLoop (PLEG): "coredns-78fcdf6894-m7rgl_kube-system(73649fbc-791a-4b99-a575-53a6a77d5a43)": waiting for 3m5.682036808s to sync
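To watch kubelet logs live while reproducing an issue, journalctl can follow the unit and restrict the time window:

# stream new kubelet log lines, limited to the last 10 minutes of history
journalctl -u kubelet -f --since "10 min ago"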
[root@teckbootcamps shell]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://...
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
[root@teckbootcamps kubernetes]# kubelet --version
Kubernetes v1.28
[root@teckbootcamps ~]# kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.4
networking:
  dnsDomain: cluster.local
  podSubnet: 100.64.0.0/10
  serviceSubnet: 10.96.0.0/12
[root@teckbootcamps ~]# kubeadm config images list
W0728 10:09:45.567500   28248 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "...": Get ...: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0728 10:09:45.567584   28248 version.go:99] falling back to the local client version: v1.15.0
k8s.gcr.io/kube-apiserver:v1.15.0
k8s.gcr.io/kube-controller-manager:v1.15.0
k8s.gcr.io/kube-scheduler:v1.15.0
k8s.gcr.io/kube-proxy:v1.15.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
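If k8s.gcr.io is unreachable (as in the timeout warning above), the same images can be pre-pulled from a mirror registry. A sketch assuming the Aliyun mirror registry.aliyuncs.com/google_containers is reachable from the node:

# pre-pull control-plane images from an alternative registry
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers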
[root@teckbootcamps ~]# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: teckbootcamps
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
View the logs of a specific Pod:

kubectl logs <pod-name>
kubectl logs -f <pod-name>    # follow the log in real time, similar to tail -f on a log file
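A few other commonly useful kubectl logs variations (the pod name is a placeholder):

kubectl logs --tail=100 <pod-name>     # only the last 100 lines
kubectl logs --since=1h <pod-name>     # only lines from the last hour
kubectl logs --previous <pod-name>     # logs of the previous, crashed container instance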
View the logs of a specific container in a specific Pod:

kubectl logs <pod-name> -c <container-name>
# PS: to view a Docker container's log directly on the node:
docker logs <container-id>
View a Pod's full definition in YAML:

kubectl get pod <pod-name> -n <namespace> -o yaml
For example:

[root@teckbootcamps shell]# kubectl get pod -n kube-system kube-apiserver-teckbootcamps -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: 4c09c523e34dd307dbfa1702d7e5f326
    kubernetes.io/config.mirror: 4c09c523e34dd307dbfa1702d7e5f326
    kubernetes.io/config.seen: "2019-07-27T11:32:59.183084282+08:00"
    kubernetes.io/config.source: file
  creationTimestamp: "2019-07-27T03:34:32Z"
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver-teckbootcamps
  namespace: kube-system
  resourceVersion: "41809"
  selfLink: /api/v1/namespaces/kube-system/pods/kube-apiserver-teckbootcamps
  uid: 72b76059-b01f-11e9-9ad8-00163e06971e
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.17.211.142
    - --allow-privileged=true
    ...
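When only a single field is needed, -o jsonpath avoids scrolling through the full YAML. For example, using standard Pod status fields:

# print just the pod IP and host IP of the API server pod
kubectl get pod -n kube-system kube-apiserver-teckbootcamps \
  -o jsonpath='{.status.podIP} {.status.hostIP}{"\n"}'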
When logging into a container, pay attention to which shell the container actually supports:

kubectl exec -it <pod-name> -n <namespace> bash
kubectl exec -it <pod-name> -n <namespace> sh
[root@teckbootcamps shell]# kubectl exec -it monitoring-grafana-95cbdd789-fzl49 -n kube-system /bin/sh
/ # ls
bin  dashboards  dev  etc  home  proc  root  run.sh  sys  tmp  usr  var
If an error message like this appears when logging in:

OCI runtime exec failed: exec failed: container_linux.go:345: starting container process ca...
it means the wrong shell type was requested (that shell does not exist in the container's image).
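Minimal images such as busybox or alpine usually ship only /bin/sh and no bash, so falling back to sh normally resolves this error. A sketch (pod and namespace are placeholders; newer kubectl versions also expect a -- separator before the command):

# fails with the OCI runtime error above when bash is absent from the image
kubectl exec -it <pod-name> -n <namespace> -- bash
# fall back to sh, which minimal images generally provide
kubectl exec -it <pod-name> -n <namespace> -- sh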

# Create resources from a YAML file. "apply" can be run repeatedly; "create" cannot.
kubectl create -f pod.yaml
kubectl apply -f pod.yaml
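For reference, here is what a minimal pod.yaml for the commands above might look like (the name, label, and image are illustrative, not taken from the cluster in this guide):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    name: demo
spec:
  containers:
  - name: demo          # single container serving HTTP on port 80
    image: nginx:1.25
    ports:
    - containerPort: 80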
# Delete a pod by the name defined in pod.yaml
kubectl delete -f pod.yaml
# Delete all pods and services that carry a given label
kubectl delete pod,svc -l name=<label-value>
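Continuing the illustrative pod.yaml above, which labeled the pod name=demo, the command would look like:

# delete every pod and service labeled name=demo in the current namespace
kubectl delete pod,svc -l name=demo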
[root@teckbootcamps ~]# kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
frontend-2szjk       1/1     Running   0          3d1h
frontend-cv5qw       1/1     Running   0          3d1h
frontend-lp4tc       0/1     Evicted   0          3d2h
frontend-sccav       1/1     Running   0          2d7h
redis-master-6ssmn   1/1     Running   3          3d1h
redis-slave-6vtrs    1/1     Running   1          3d2h
[root@teckbootcamps ~]# kubectl delete pod frontend-lp4tc
pod "frontend-lp4tc" deleted
[root@teckbootcamps ~]# kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
frontend-2szjk       1/1     Running   0          3d1h
frontend-cv5qw       1/1     Running   0          3d1h
frontend-sccav       1/1     Running   0          2d7h
redis-master-6ssmn   1/1     Running   3          3d1h
redis-slave-6vtrs    1/1     Running   1          3d2h
[root@teckbootcamps ~]# kubectl top nodes
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
teckbootcamps    129m         6%     1567Mi          42%
teckbootcamps2   233m         11%    1811Mi          49%
teckbootcamps3   510m         25%    2651Mi          71%
[root@teckbootcamps3 ~]# kubectl top pod
NAME                 CPU(cores)   MEMORY(bytes)
frontend-2szjk       0m           16Mi
frontend-cv5qw       0m           16Mi
frontend-sccav       0m           21Mi
redis-master-6ssmn   0m           1Mi
redis-slave-6vtrs    1m           8Mi
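kubectl top only returns data when a metrics pipeline is running (heapster on clusters of this vintage, metrics-server on current ones). On newer kubectl versions the output can also be sorted; a sketch:

# biggest consumers first
kubectl top pod --all-namespaces --sort-by=memory
kubectl top node --sort-by=cpu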
# Edit a deployment's YAML in place (note: "deployment", not "depolyment")
kubectl get deployment -n <namespace>
kubectl edit deployment <deployment-name> -n <namespace> -o yaml
Here is an example:

[root@teckbootcamps shell]# kubectl get deployment -n kube-system
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers   1/1     1            1           23h
coredns                   2/2     2            2           23h
heapster                  1/1     1            1           23h
kubernetes-dashboard      1/1     1            1           23h
monitoring-grafana        1/1     1            1           23h
monitoring-influxdb       1/1     1            1           23h
[root@teckbootcamps shell]# kubectl edit deployment monitoring-grafana -n kube-system -o yaml
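For scripted changes it is often easier to skip the interactive editor. kubectl set image and kubectl patch make the same kind of edit non-interactively; in this sketch the container name grafana and the image tag are assumptions, not values read from the cluster above:

# update the grafana container image without opening an editor
kubectl -n kube-system set image deployment/monitoring-grafana grafana=grafana/grafana:6.7.4
# or patch a single field directly
kubectl -n kube-system patch deployment monitoring-grafana -p '{"spec":{"replicas":1}}'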
[root@teckbootcamps data]# kubectl exec -it monitoring-grafana-95cbdd789-fzl49 sh -n kube-system
/ # ls
bin  dashboards  dev  etc  home  proc  root  run.sh  sys  tmp  usr  var
[root@teckbootcamps kubernetes]# pwd
/etc/kubernetes
[root@teckbootcamps kubernetes]# tree -h .
.
├── [5.3K]  admin.conf
├── [5.4K]  controller-manager.conf
├── [5.3K]  kubelet.conf
├── [4.0K]  manifests
│   ├── [1.9K]  etcd.yaml
│   ├── [2.5K]  kube-apiserver.yaml
│   ├── [2.2K]  kube-controller-manager.yaml
│   └── [ 990]  kube-scheduler.yaml
├── [4.0K]  pki
│   ├── [1.2K]  apiserver.crt
│   ├── [1.1K]  apiserver-etcd-client.crt
│   ├── [1.6K]  apiserver-etcd-client.key
│   ├── [1.6K]  apiserver.key
│   ├── [1.1K]  apiserver-kubelet-client.crt
│   ├── [1.6K]  apiserver-kubelet-client.key
│   ├── [1.0K]  ca.crt
│   ├── [1.6K]  ca.key
│   ├── [4.0K]  etcd
│   │   ├── [1021]  ca.crt
│   │   ├── [1.6K]  ca.key
│   │   ├── [1.1K]  healthcheck-client.crt
│   │   ├── [1.6K]  healthcheck-client.key
│   │   ├── [1.1K]  peer.crt
│   │   ├── [1.6K]  peer.key
│   │   ├── [1.1K]  server.crt
│   │   └── [1.6K]  server.key
│   ├── [1.0K]  front-proxy-ca.crt
│   ├── [1.6K]  front-proxy-ca.key
│   ├── [1.0K]  front-proxy-client.crt
│   ├── [1.6K]  front-proxy-client.key
│   ├── [1.6K]  sa.key
│   └── [ 451]  sa.pub
└── [5.3K]  scheduler.conf

3 directories, 30 files
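kubeadm-issued certificates under /etc/kubernetes/pki expire after one year by default. On newer kubeadm releases their remaining lifetime can be checked directly:

# kubeadm v1.20+
kubeadm certs check-expiration
# on older releases the same command lived under the alpha tree
kubeadm alpha certs check-expiration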
zees-air-2:ssl zee$ openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
Generating RSA private key, 2048 bit long modulus
...
e is 65537 (0x10001)
zees-air-2:ssl zee$ ll
total 32
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
zees-air-2:ssl zee$ openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
writing RSA key
zees-air-2:ssl zee$ ll
total 40
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
-rw-r--r--  1 zee  staff  1679 Nov 22 09:23 dashboard.key
zees-air-2:ssl zee$ openssl req -new -key dashboard.key -out dashboard.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:CN
State or Province Name (full name) [Some-State]:Beijing
Locality Name (eg, city) []:Beijing
Organization Name (eg, company) [Internet Widgits Pty Ltd]:teckbootcamps
Organizational Unit Name (eg, section) []:teckbootcamps
Common Name (e.g. server FQDN or YOUR name) []:teckbootcamps.com
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
zees-air-2:ssl zee$ ll
total 48
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
-rw-r--r--  1 zee  staff  1679 Nov 22 09:23 dashboard.key
-rw-r--r--  1 zee  staff  1009 Nov 22 09:24 dashboard.csr
zees-air-2:ssl zee$ openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/C=CN/ST=Beijing/L=Beijing/O=teckbootcamps/OU=teckbootcamps/CN=teckbootcamps.com
Getting Private key
zees-air-2:ssl zee$ ll
total 56
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
-rw-r--r--  1 zee  staff  1679 Nov 22 09:23 dashboard.key
-rw-r--r--  1 zee  staff  1009 Nov 22 09:24 dashboard.csr
-rw-r--r--  1 zee  staff  1212 Nov 22 09:25 dashboard.crt
zees-air-2:ssl zee$ openssl x509 -in dashboard.crt -out dashboard.pem
zees-air-2:ssl zee$ ll
total 72
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
-rw-r--r--  1 zee  staff  1679 Nov 22 09:23 dashboard.key
-rw-r--r--  1 zee  staff  1009 Nov 22 09:24 dashboard.csr
-rw-r--r--  1 zee  staff  1212 Nov 22 09:25 dashboard.crt
-rw-r--r--  1 zee  staff  1212 Nov 22 09:28 dashboard.out
-rw-r--r--  1 zee  staff  1212 Nov 22 09:28 dashboard.pem
zees-air-2:ssl zee$ openssl genrsa -out server.key 2048
Generating RSA private key, 2048 bit long modulus
...
e is 65537 (0x10001)
zees-air-2:ssl zee$ ll
total 72
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
-rw-r--r--  1 zee  staff  1679 Nov 22 09:23 dashboard.key
-rw-r--r--  1 zee  staff  1009 Nov 22 09:24 dashboard.csr
-rw-r--r--  1 zee  staff  1212 Nov 22 09:25 dashboard.crt
-rw-r--r--  1 zee  staff  1212 Nov 22 09:28 dashboard.pem
-rw-r--r--  1 zee  staff  1679 Nov 22 09:54 server.key
zees-air-2:ssl zee$ openssl req -new -key server.key -subj "/CN=teckbootcamps" -out server.csr
zees-air-2:ssl zee$ ll
total 80
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
-rw-r--r--  1 zee  staff  1679 Nov 22 09:23 dashboard.key
-rw-r--r--  1 zee  staff  1009 Nov 22 09:24 dashboard.csr
-rw-r--r--  1 zee  staff  1212 Nov 22 09:25 dashboard.crt
-rw-r--r--  1 zee  staff  1212 Nov 22 09:28 dashboard.pem
-rw-r--r--  1 zee  staff  1679 Nov 22 09:54 server.key
-rw-r--r--  1 zee  staff   891 Nov 22 09:55 server.csr
zees-air-2:ssl zee$ openssl x509 -req -in server.csr -CA dashboard.crt -CAkey dashboard.key -CAcreateserial -out server.crt -days 5000
Signature ok
subject=/CN=teckbootcamps
Getting CA Private Key
zees-air-2:ssl zee$ ll
total 96
-rw-r--r--  1 zee  staff  1751 Nov 22 09:23 dashboard.pass.key
-rw-r--r--  1 zee  staff  1679 Nov 22 09:23 dashboard.key
-rw-r--r--  1 zee  staff  1009 Nov 22 09:24 dashboard.csr
-rw-r--r--  1 zee  staff  1212 Nov 22 09:25 dashboard.crt
-rw-r--r--  1 zee  staff  1212 Nov 22 09:28 dashboard.pem
-rw-r--r--  1 zee  staff  1679 Nov 22 09:54 server.key
-rw-r--r--  1 zee  staff   891 Nov 22 09:55 server.csr
-rw-r--r--  1 zee  staff  1094 Nov 22 09:56 server.crt
-rw-r--r--  1 zee  staff    17 Nov 22 09:56 dashboard.srl
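One common way to consume the generated files: recreate the dashboard's TLS secret from the key and certificate. The secret name and namespace below assume the standard kubernetes-dashboard deployment running in kube-system:

# replace the dashboard TLS secret with the freshly generated cert/key pair
kubectl -n kube-system delete secret kubernetes-dashboard-certs
kubectl -n kube-system create secret tls kubernetes-dashboard-certs \
  --cert=dashboard.crt --key=dashboard.key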
