Self-Hosting Kubernetes, Part 1

Jeff Yen
30 min read · Jan 27, 2021

install k8s the hard way

This walkthrough follows the Kubernetes installation guide prepared by Google engineer Kelsey Hightower and the Google team:
https://github.com/kelseyhightower/kubernetes-the-hard-way
After completing it, you will have a clear picture of how the components inside k8s talk to each other.

Prerequisites

1. A macOS or Linux client environment (macOS is used here)

2. Machines for the Kubernetes cluster nodes (GCP is used here)

Part 1 agenda

  • Prepare the Google Cloud SDK and install cfssl and kubectl
  • Set up the VPC, firewall, public IP, controller instances, and worker instances
  • Generate the CA and TLS certificates (
    Admin Client Certificate,
    Kubelet Client Certificates,
    Controller Manager Client Certificate,
    Kube Proxy Client Certificate,
    Scheduler Client Certificate,
    Kubernetes API Server Certificate,
    Service Account Key Pair)
  • Prepare the kubeconfig files for kubelet, kube-proxy, kube-controller-manager, kube-scheduler, and the admin user
  • Generate the Kubernetes data encryption config and key

Part 2 agenda

  • Set up the etcd service on the three controllers
  • Set up the following services on the three controllers
    Kubernetes Controller Manager
    Kubernetes API Server
    Kubernetes Scheduler
    RBAC for Kubelet
    Kubernetes Frontend Load Balancer
  • Set up the following components on the three workers
    cri-tools
    RunC
    CNI
    containerd
    kubectl
    kube-proxy
    kubelet
  • Set up the kubeconfig that kubectl will use for remote access
  • Set up the Pod network routes
  • Deploy the cluster-internal Kubernetes DNS service
  • Smoke Test

Components that will be installed in the cluster

Google Cloud Platform

Following this tutorial costs roughly US$0.23 per hour, or about US$5.50 per day.

Prepare the Google Cloud SDK and install cfssl and kubectl

Google Cloud SDK

Install the gcloud command line tool

# If this is your first time using gcloud, initialize and log in
gcloud init
gcloud auth login

# Set the default region and zone
gcloud config set compute/region asia-east1
gcloud config set compute/zone asia-east1-b

# If you already have a project, switch to the one you want to use
gcloud config set project projectID

# Verify the gcloud config
gcloud config list
[compute]
region = asia-east1
zone = asia-east1-b
[core]
account = jeff.yen@google.com.tw
disable_usage_reporting = False
project = project-xxxxyyy
Your active configuration is: [default]

Install the client-side tools

  1. The certificate generation tool CFSSL. CFSSL is an open-source PKI/TLS toolkit from CloudFlare.
curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssl
curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/darwin/cfssljson
chmod +x cfssl cfssljson
sudo mv cfssl cfssljson /usr/local/bin/

Verify the version and status
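
The version check (per the upstream tutorial) looks like this; note that cfssljson takes a --version flag:

cfssl version
cfssljson --version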

2. Install kubectl

curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/darwin/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

Verify the version and status

kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Set up the Google VPC, firewall, public IP, instances, and SSH keys

Switch to the Google Cloud project

gcloud config set project ${PROJECT_ID}

Create a VPC named kubernetes-the-hard-way

gcloud compute networks create kubernetes-the-hard-way --subnet-mode custom

Create a subnet named kubernetes

gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24

Verify

gcloud compute networks list --filter="name=kubernetes-the-hard-way"
gcloud compute networks subnets list --filter="name=kubernetes"

Allow internal TCP, UDP, and ICMP traffic

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16

Allow external SSH, ICMP, and HTTPS

gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0

Verify the firewall rules

gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"

Allocate a Kubernetes public IP for the Kubernetes API

gcloud compute addresses create kubernetes-the-hard-way --region $(gcloud config get-value compute/region)

Verify

gcloud compute addresses list --filter="name:kubernetes-the-hard-way"

Create the three Kubernetes controller nodes

for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,controller
done

Create the three Kubernetes worker nodes. The cluster CIDR here is 10.200.0.0/16, and each worker is assigned a /24 pod CIDR from it (10.200.0.0/24, 10.200.1.0/24, 10.200.2.0/24).

for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
done

Verify

gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"

Test SSH into controller-0

gcloud compute ssh controller-0

Generate the CA and TLS certificates

Create the CA certificate

CA Certificate

{

cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "8760h"
}
}
}
}
EOF

cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

}
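
Result

ca-key.pem
ca.pem

If you want to peek at the generated CA, cfssl can dump it (an optional check, not part of the original tutorial):

cfssl certinfo -cert ca.pem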

Client and Server Certificates

  • Admin Client Certificate
  • Kubelet Client Certificates
  • Controller Manager Client Certificate
  • Kube Proxy Client Certificate
  • Scheduler Client Certificate
  • Kubernetes API Server Certificate
  • Service Account Key Pair

Once the CA certificate exists, we can generate the other certificates we need.

Create the admin user certificate

This certificate lets the admin user connect to the API server from outside the cluster.

{

cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin

}

Result

admin-key.pem
admin.pem

Kubelet client certificates

These certificates authorize the kubelets on the worker nodes. The Node Authorizer requires each kubelet to identify itself as a member of the system:nodes group with a username of system:node:<nodeName>, so we generate one certificate per worker node.

for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF

EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done

Result

worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem

kube-controller-manager certificate

{

cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

}

Result

kube-controller-manager-key.pem
kube-controller-manager.pem
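
Kube Proxy Client Certificate

The kube-proxy client certificate is listed above but its generation step appears to be missing from this write-up. The following sketch follows the same pattern as the other client certificates (CN system:kube-proxy in the system:node-proxier group, as in the upstream tutorial); kube-proxy.pem and kube-proxy-key.pem are needed later for the kube-proxy kubeconfig.

{

cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy

}

Result

kube-proxy-key.pem
kube-proxy.pem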

kube-scheduler client certificate

{

cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler

}

Result

kube-scheduler-key.pem
kube-scheduler.pem

Kubernetes API Server certificate

The certificate must include every address that will be used to reach the API server:

  • the controller node IPs (10.240.0.10, 10.240.0.11, 10.240.0.12)
  • the hostnames (KUBERNETES_HOSTNAMES)
  • the Kubernetes API public IP
  • the service cluster IP 10.32.0.1 (the first IP of the service-cluster-ip-range 10.32.0.0/24 that is configured later when installing the kube-apiserver)

{

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')

KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local

cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes

}

Result

kubernetes-key.pem
kubernetes.pem
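
To double-check that every address made it into the certificate, you can optionally inspect its SANs with openssl (this step is not in the original tutorial):

openssl x509 -in kubernetes.pem -noout -text | grep -A 1 "Subject Alternative Name"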

Service Account Key Pair

The Kubernetes Controller Manager uses this key pair to generate and sign service account tokens, which are then used when managing service accounts.

{

cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF

cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account

}

Result

service-account-key.pem
service-account.pem

Distribute the certificates

Copy the certificates we just generated to the controller and worker nodes that need them.

Worker instances

  • CA certificate
  • Kubelet client certificate

for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done

Controller instances

  • CA certificate
  • Kubernetes API Server certificate
  • Service Account Key Pair

for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
done
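
As an optional sanity check (assuming the instances are already up), list a node's home directory to confirm the files arrived:

gcloud compute ssh controller-0 --command "ls -l ~/"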

Generate the Kubernetes configuration files

Client Authentication Configs

Here we generate kubeconfig files for the following components:

  • controller manager
  • kubelet
  • kube-proxy
  • scheduler
  • admin user

Kubernetes Public IP Address

First, look up the Kubernetes public IP; it will be written into the kubelet and kube-proxy kubeconfig files.

KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')

Kubelet kubeconfig files

In each kubelet kubeconfig, the user system:node:${instance} must match the worker instance name exactly, otherwise the Node Authorizer will refuse to authorize the kubelet.

for instance in worker-0 worker-1 worker-2; do
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=${instance}.kubeconfig

kubectl config set-credentials system:node:${instance} \
--client-certificate=${instance}.pem \
--client-key=${instance}-key.pem \
--embed-certs=true \
--kubeconfig=${instance}.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:${instance} \
--kubeconfig=${instance}.kubeconfig

kubectl config use-context default --kubeconfig=${instance}.kubeconfig
done

Result

worker-0.kubeconfig
worker-1.kubeconfig
worker-2.kubeconfig

kube-proxy kubeconfig

The kubeconfig used by kube-proxy:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

Result

kube-proxy.kubeconfig

kube-controller-manager

The kubeconfig used by kube-controller-manager:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}

Result

kube-controller-manager.kubeconfig

kube-scheduler

The kubeconfig used by kube-scheduler:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}

Result

kube-scheduler.kubeconfig

admin user

The kubeconfig used by the admin user:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
}

Result

admin.kubeconfig
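
If you want to see what was embedded, kubectl can print any of these kubeconfig files (certificate and key data are redacted by default):

kubectl config view --kubeconfig=admin.kubeconfig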

Distribute the kubeconfig files

Copy the kubeconfig files we just generated to the controller and worker nodes that need them.

Worker instances

  • kubelet
  • kube-proxy

for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done

Controller instances

  • admin
  • kube-controller-manager
  • kube-scheduler

for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done

Generate the Kubernetes data encryption config and key

The kube-apiserver encrypts Kubernetes data before storing it in etcd, so we first prepare encryption-config.yaml.

Generate the encryption key

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

Generate the encryption config file

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF

Copy the encryption-config.yaml we just generated to the controller nodes.

for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
