Tuesday, November 10, 2020

Upgrading stackdriver-agent 5.x to stackdriver-agent 6.x on GCP


Because stackdriver-agent 5.x reached end of support on April 28, 2020,
I performed the following upgrade steps.




# Remove the old 5.x agent
yum remove stackdriver-agent -y

# Add the new monitoring agent repo
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
bash add-monitoring-agent-repo.sh

# Refresh the metadata and list the available versions
yum clean all
yum list --showduplicates stackdriver-agent

# Install the 6.x agent
yum install -y stackdriver-agent-6.*
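To confirm the upgrade, a couple of quick checks (assuming the agent is managed by systemd on this host):

yum list installed stackdriver-agent
systemctl status stackdriver-agent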



Reference:
https://cloud.google.com/monitoring/agent/installation#upgrade

Monday, November 9, 2020

First time using Terraform to manage GCP

The goal is to use Terraform to speed up provisioning of cloud services.


1. Install Terraform

(Terraform is just a single executable.)

Download the zip archive from https://www.terraform.io/downloads.html,
unzip it into /usr/local/bin on the machine, and it is ready to use.
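For example, a minimal install sketch, assuming the 0.13.5 linux_amd64 build (check the downloads page for the current version):

curl -LO https://releases.hashicorp.com/terraform/0.13.5/terraform_0.13.5_linux_amd64.zip
unzip terraform_0.13.5_linux_amd64.zip -d /usr/local/bin/
terraform version    # confirm the binary is on the PATH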


2. Create the first .tf configuration file

 a. Create a Service Account in GCP, download its key file (JSON format), and rename it to account.json
 b. Configure the Google Cloud provider settings
(project here is the project ID)

 provider "google" {
  credentials = file("account.json")
  project = "ab-xx-xxxx"
  region  = "asia-east1"
}
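As a side note, the JSON key can also be generated from the command line; this is just a sketch, and the service-account e-mail below is a made-up placeholder:

gcloud iam service-accounts keys create account.json \
    --iam-account=terraform@ab-xx-xxxx.iam.gserviceaccount.com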

3. Flesh out the .tf file and create a VM in the Taiwan region; the configuration is as follows


file - 1.tf

  
  provider "google" {
  credentials = file("account.json")
  project = "ab-xx-xxxx
  region  = "asia-east1"
}

resource "google_compute_instance" "vm_instance" {
name         = "terraform-instance"
machine_type = "n1-standard-1"
zone         = "asia-east1-c"

boot_disk {
initialize_params {
image = "centos-cloud/centos-7"
 }
}

network_interface {

    # A default network is created for all GCP projects
 network       = "default"
subnetwork    = "default"
 }
}


4. Create/destroy resources from the .tf file ---- the advantage is that no resource addition or removal is ever missed
(The key file and the .tf file must be in the same directory.)


Create (initialize the working directory, then apply):
terraform init
terraform apply

Destroy:
terraform destroy
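Optionally, terraform plan can preview the changes before anything is created or destroyed; a small sketch:

terraform plan -out=tfplan     # preview and save the execution plan
terraform apply tfplan         # apply exactly what was previewed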


References:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance
https://ithelp.ithome.com.tw/articles/10206648


Sunday, November 8, 2020

Install Kubernetes Cluster With CRI-O: Part 4 - kubeadm join node

 

The scenario is a Kubernetes cluster built on three hosts
(Version = 1.18.3)
192.168.53.204  k8s-master  ---CentOS 7.8
192.168.53.205  k8s-node1   ---CentOS 7.8
192.168.53.206  k8s-node2   ---CentOS 7.8





1. Install CRI-O / kubeadm / kubelet / kubectl
(Just follow Part 1 and Part 2; the kubelet service will only start properly after kubeadm join has been run.)


2. Join the cluster
(The token and sha256 hash can be copied from the kubeadm init output on the master; because CRI-O is in use, --cri-socket has to be added.)

kubeadm join 192.168.53.204:6443 --token xxxx8.q5qweqw77445 --discovery-token-ca-cert-hash sha256:9ffasdadsadffcb1b85d35d8asdasdsadabf47f04edc44a94esdadsasdasd0c --cri-socket="/var/run/crio/crio.sock" --ignore-preflight-errors=all



PS:
---- How to view the token (on the master)
kubeadm token list

---- How to view the ca-cert-hash sha256 (on the master)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
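If the original token has already expired, a fresh join command (token plus hash) can be generated on the master in one step:

kubeadm token create --print-join-command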


3. Check the nodes from the master

kubectl get nodes



References:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-join/
https://ithelp.ithome.com.tw/articles/10209787


Friday, October 30, 2020

Install Kubernetes Cluster With CRI-O: Part 3 - kubeadm init

The scenario is a Kubernetes cluster built on three hosts
(Version = 1.18.3)
192.168.53.204  k8s-master  ---CentOS 7.8
192.168.53.205  k8s-node1   ---CentOS 7.8
192.168.53.206  k8s-node2   ---CentOS 7.8


1. Initialize the cluster on the master

kubeadm init --cri-socket="/var/run/crio/crio.sock"  --apiserver-advertise-address=192.168.53.204  --kubernetes-version=v1.18.3  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16

During init the --token and --discovery-token-ca-cert-hash values are printed; copy them so nodes can be joined later.


2. Run the following commands as shown in the init output

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
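Alternatively, when working as root, the admin kubeconfig can simply be exported for the current shell:

export KUBECONFIG=/etc/kubernetes/admin.conf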


3. Deploy the CNI plugin (flannel)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


4. Check the nodes

kubectl get nodes


kubectl get all -n kube-system
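To check specifically that the flannel pods came up (one per node), something like the following can be used:

kubectl get pods -n kube-system -o wide | grep flannel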


References:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
https://github.com/coreos/flannel#flannel



Thursday, October 29, 2020

Install Kubernetes Cluster With CRI-O: Part 2 - Install kubeadm / kubelet / kubectl

The scenario is a Kubernetes cluster built on three hosts
(Version = 1.18.3)
192.168.53.204  k8s-master  ---CentOS 7.8
192.168.53.205  k8s-node1   ---CentOS 7.8
192.168.53.206  k8s-node2   ---CentOS 7.8


1. Add the sysctl configuration

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
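To verify the values took effect (they also rely on the br_netfilter module loaded in Part 1):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables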

2. Disable swap and SELinux

swapoff -a 

Edit /etc/fstab and comment out the swap entry.

setenforce 0

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
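A quick check that both are really off:

cat /proc/swaps      # should list no active swap devices
getenforce           # should print Permissive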

3. Add hosts entries

Add the following to /etc/hosts:

192.168.53.204  k8s-master
192.168.53.205  k8s-node1 
192.168.53.206  k8s-node2

4. Add the Kubernetes repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://packages.cloud.google.com/yum/doc/yum-key.gpg http://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


5. Install kubeadm / kubelet / kubectl

yum install kubeadm-1.18.3-0 kubectl-1.18.3-0 kubelet-1.18.3-0 --disableexcludes=kubernetes


6. Adjust the cgroup configuration (by default it assumes Docker's cgroup driver; with CRI-O the setting has to be changed)

Edit /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf




After the change:
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CGROUP_ARGS
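CRI-O defaults to the systemd cgroup driver, which is why kubelet is pointed at systemd here; this can be double-checked in the CRI-O config (assuming the stock /etc/crio/crio.conf):

grep cgroup_manager /etc/crio/crio.conf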


7. Start the kubelet service

systemctl daemon-reload
systemctl start kubelet
(With CRI-O the start fails at this point; it becomes normal after kubeadm init.)
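It is also worth enabling kubelet so it starts again after a reboot:

systemctl enable kubelet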



References:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
https://blog.csdn.net/twingao/article/details/105382305
https://ithelp.ithome.com.tw/articles/10209357




Install Kubernetes Cluster With CRI-O: Part 1 - Install CRI-O

The scenario is a Kubernetes cluster built on three hosts
(Version = 1.18.3)
192.168.53.204  k8s-master  ---CentOS 7.8
192.168.53.205  k8s-node1   ---CentOS 7.8
192.168.53.206  k8s-node2   ---CentOS 7.8



# Start by installing CRI-O on k8s-master

1. Load the kernel modules

modprobe overlay
modprobe br_netfilter
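To make these modules load automatically after a reboot, the same approach as the official container-runtime docs can be used:

cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF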

2. Add the sysctl configuration

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

sysctl --system


3. Add the CRI-O repo (1.18.3) & install CRI-O

curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_7/devel:kubic:libcontainers:stable.repo

curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:1.18:1.18.3.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:1.18:1.18.3/CentOS_7/devel:kubic:libcontainers:stable:cri-o:1.18:1.18.3.repo

yum install cri-o


4. Enable and start the service

systemctl enable crio
systemctl start crio
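A quick sanity check that the runtime is actually up:

systemctl status crio
crio --version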



Reference:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/




Thursday, April 9, 2020

CentOS 7: Creating LVM on a disk larger than 2TB


I normally partition disks with fdisk, but a disk larger than 2TB cannot use an MBR partition table; GPT is required.

1. Create a GPT partition table with parted


# parted /dev/sdb

GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.

(parted) mklabel gpt

(parted) mkpart primary xfs 0GB 100%

(parted) print

Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 3299GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3299GB  3299GB               primary

(parted) q

Information: You may need to update /etc/fstab.


2. Create the LVM PV

# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.

3. Create the LVM VG named data

#vgcreate data /dev/sdb1
  Volume group "data" successfully created

4. Create the LVM LV named lv-data

# lvcreate -n lv-data -l 100%FREE data
  Logical volume "lv-data" created.
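At this point the whole PV/VG/LV stack can be listed with:

pvs
vgs
lvs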

5. View the LV details

# lvdisplay /dev/data/lv-data

6. Format the volume

#mkfs.xfs /dev/data/lv-data

7. Check the filesystem UUID

#blkid /dev/data/lv-data
/dev/data/lv-data: UUID="82ca6a69-9ee7-4431-82ea-ae4a810fd942" TYPE="xfs"

8. Create the /data directory and add the mount entry to /etc/fstab

#mkdir /data

#echo "UUID=82ca6a69-9ee7-4431-82ea-ae4a810fd942 /data xfs defaults 0 0" >> /etc/fstab

#mount -a

#df -h
Filesystem                 Size  Used Avail Use% Mounted on
devtmpfs                    32G     0   32G   0% /dev
tmpfs                       32G     0   32G   0% /dev/shm
tmpfs                       32G   12M   32G   1% /run
tmpfs                       32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/centos-root     45G  1.2G   44G   3% /
/dev/sda1                 1014M  149M  866M  15% /boot
tmpfs                      6.3G     0  6.3G   0% /run/user/0
/dev/mapper/data-lv--data  3.0T   33M  3.0T   1% /data

Done!!