root@labs--10000:/home/project/lab-env/DockerLab# docker run -p 5432:5432 --name mypostgres -e POSTGRES_PASSWORD=700103 -d postgres
Unable to find image 'postgres:latest' locally
latest: Pulling from library/postgres
33847f680f63: Already exists
1b09e96014b3: Pull complete
eb49b6d9d1f3: Pull complete
4057ebf78d2d: Pull complete
f92d870e2c4f: Pull complete
b03847575a18: Pull complete
475945131fa9: Pull complete
c042b5a6607d: Pull complete
cfe883b776dc: Pull complete
61af04e5c3eb: Pull complete
4e9965ae9062: Pull complete
7b9708b81aa6: Pull complete
871877336770: Pull complete
Digest: sha256:6647385dd9ae11aa2216bf55c54d126b0a85637b3cf4039ef24e3234113588e3
Status: Downloaded newer image for postgres:latest
a868fc06d7b462e13116f917301a4fbbbb26578cfa3ace48936e439bbf222182
root@labs--10000:/home/project/lab-env/DockerLab# docker ps -a
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS          PORTS                    NAMES
a868fc06d7b4   postgres              "docker-entrypoint..."   17 seconds ago   Up 13 seconds   0.0.0.0:5432->5432/tcp   mypostgres
9b5bf23d92f3   mson218/my-nginx:v1   "/docker-entrypoin..."   45 minutes ago   Up 45 minutes   0.0.0.0:8087->80/tcp     my-nginx
root@labs--10000:/home/project/lab-env/DockerLab# docker exec -it mypostgres /bin/bash
root@a868fc06d7b4:/#
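From here you can talk to the database with the psql client that ships in the postgres image. A quick check (assuming the default postgres superuser created by the image and the POSTGRES_PASSWORD set above):

# inside the container: connect as the default superuser
psql -U postgres
# or from the host through the published port (needs a local psql client; prompts for the password set above)
psql -h localhost -p 5432 -U postgres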
A Pod contains one or more containers.
How does Kubernetes manage containers and services? It manages them in units of Pods.
Think of Kubernetes as the overall architecture for managing those Pods (a minimal Pod manifest is sketched below).
Public Kubernetes services:
Microsoft Azure, AWS, Google Cloud Platform, and so on.
With a managed offering, you only have to look after the nodes; the provider runs the control plane.
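As a sketch of "containers live inside a Pod", a minimal Pod manifest could look like this (the name and image are arbitrary examples, not taken from this lab):

apiVersion: v1
kind: Pod
metadata:
  name: sample-pod              # arbitrary example name
spec:
  containers:                   # one or more containers that share the Pod's network and volumes
  - name: web
    image: nginx
    ports:
    - containerPort: 80

In practice you rarely create bare Pods; the Deployments used later in this lab create and replace Pods for you.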
Work on node1
[node1 ~]$ kubeadm init --apiserver-advertise-address $(hostname -i) --pod-network-cidr 10.5.0.0/16
Initializing machine ID from random generator.
I0805 07:17:20.997084    8684 version.go:251] remote version is much newer: v1.22.0; falling back to: stable-1.20
[init] Using Kubernetes version: v1.20.9
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-101-generic
DOCKER_VERSION: 20.10.1
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 192.168.0.18]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.18 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [192.168.0.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.298219 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sxselk.89o66wc3lsjesxre
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.18:6443 --token sxselk.89o66wc3lsjesxre \
    --discovery-token-ca-cert-hash sha256:f190ea5d5b9e3c95afa98ddaf8eac353b23418a9d7af5f72f38dda81f7e8d9aa
Waiting for api server to startup
Warning: resource daemonsets/kube-proxy is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
daemonset.apps/kube-proxy configured
No resources found
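The post-init instructions printed above boil down to copying the admin kubeconfig and installing a pod network add-on. Roughly (the choice of CNI manifest is up to you; this lab does not prescribe one):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# install a pod network add-on of your choice (Flannel, Calico, kube-router, ...)
kubectl apply -f <podnetwork>.yaml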
[node1 ~]$ kubectl get nodes
NAME    STATUS     ROLES                  AGE   VERSION
node1   NotReady   control-plane,master   3m    v1.20.1
Work on node2
[node2 ~]$ kubeadm join 192.168.0.18:6443 --token sxselk.89o66wc3lsjesxre \
>     --discovery-token-ca-cert-hash sha256:f190ea5d5b9e3c95afa98ddaf8eac353b23418a9d7af5f72f38dda81f7e8d9aa
Initializing machine ID from random generator.
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not active, please run 'systemctl start docker.service'
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-101-generic
DOCKER_VERSION: 20.10.1
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: enabled
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "", err: exit status 1
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
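For reference, if the bootstrap token has expired by the time a worker joins, a fresh join command can be printed on the control-plane node (not needed in this session):

kubeadm token create --print-join-command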
[node1 ~]$ kubectl get nodes
NAME    STATUS     ROLES                  AGE     VERSION
node1   NotReady   control-plane,master   4m56s   v1.20.1
node2   NotReady   <none>                 64s     v1.20.1
[node1 ~]$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.18:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
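The same kubeconfig drives the context-related subcommands, e.g. (illustrative, not run in this session):

kubectl config current-context
kubectl config use-context kubernetes-admin@kubernetes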
[node1 ~]$ kubectl create deployment my-home --image=ghcr.io/acmexii/edu-welcome:latest
deployment.apps/my-home created
[node1 ~]$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-home-98b4df49c-k6mds   0/1     Pending   0          74s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   8m56s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-home   0/1     1            0           74s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-home-98b4df49c   1         1         0       74s
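The Pod is stuck in Pending. A quick way to see why is to read the Events section of describe (it usually points at a scheduling or network-readiness problem):

kubectl describe pod my-home-98b4df49c-k6mds    # Events at the bottom explain the Pending state
kubectl get events --sort-by=.metadata.creationTimestamp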
[node1 ~]$ kubectl delete deployment.apps/my-home
deployment.apps "my-home" deleted |
[node1 ~]$ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   9m49s
[node1 ~]$ kubectl create deployment my-home --image=nginx
deployment.apps/my-home created
[node1 ~]$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
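Instead of writing a Service manifest by hand, the same NodePort exposure could be done imperatively (a sketch, not something run in this session):

kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
kubectl get svc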
[node1 ~]$ kubectl get nodes
NAME    STATUS     ROLES                  AGE   VERSION
node1   NotReady   control-plane,master   14m   v1.20.1
node2   NotReady   <none>                 10m   v1.20.1
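Both nodes stay NotReady because no pod network add-on has been applied yet (the kubeadm init output above asks for one), which is also why new Pods remain Pending. Two checks that usually make this visible:

kubectl describe node node1       # node conditions normally report that the network plugin is not ready
kubectl get pods -n kube-system   # CoreDNS stays Pending until a CNI add-on is installed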
[deployment.yaml]
--- apiVersion: "apps/v1" kind: "Deployment" metadata: name: "nginx-dep" labels: app: "nginx-dep" spec: selector: matchLabels: app: "nginx-dep" replicas: 1 template: metadata: labels: app: "nginx-dep" spec: containers: - name: "nginx-dep" image: "nginx" ports: - containerPort: 80 |
[service.yaml]
--- apiVersion: "v1" kind: "Service" metadata: name: "" labels: app: "" spec: ports: - port: 80 targetPort: 80 selector: app: "nginx-dep" type: "NodePort" |