010. OpenShift Comprehensive Labs and Applications

Lab 1: Installing OpenShift

1.1 Prerequisites

[student@workstation ~]$ lab review-install setup

1.2 Configuration Planning

The OpenShift cluster has three nodes:

  • master.lab.example.com: the OpenShift master node; it is not schedulable for pods.
  • node1.lab.example.com: an OpenShift node that can run both application and infrastructure pods.
  • node2.lab.example.com: another OpenShift node that can run both application and infrastructure pods.

All nodes use OverlayFS with the overlay2 driver for Docker storage, and the second disk (vdb) on each node is reserved for Docker storage.

All nodes use an RPM-based installation with release v3.9 and OpenShift image tag version v3.9.14.

The default domain for routes is apps.lab.example.com. The classroom DNS server is already configured to resolve all hostnames in this domain to node1.lab.example.com.

All container images used by the OpenShift cluster are stored in the private registry provided by registry.lab.example.com.

Two initial users are authenticated with HTPasswd: developer and admin, both with the password redhat. developer acts as a regular user and admin as the cluster administrator.

An NFS volume on services.lab.example.com backs the persistent storage for the internal OpenShift registry.

services.lab.example.com also provides NFS services for cluster storage.

etcd is also deployed on the master node, and its storage uses the NFS share provided by the services.lab.example.com host.

The cluster must be disconnected from the Internet, that is, installed from offline packages.

The internal OpenShift registry should be backed by NFS persistent storage located on services.lab.example.com.

The master API and console will run on port 443.

The RPM packages required to install OpenShift are provided by Yum repository configuration files already defined on all hosts.

The /home/student/DO280/labs/review-install directory provides a partially completed Ansible inventory file for the OpenShift cluster installation. It also contains the Ansible playbooks required for the pre-installation and post-installation steps.

The test application is provided by the Git server at http://services.lab.example.com/php-helloworld. It is a simple "Hello, World" application that can be deployed with Source-to-Image (S2I) to verify that the OpenShift cluster was deployed successfully.

1.3 Verify Ansible

[student@workstation ~]$ cd /home/student/DO280/labs/review-install/
[student@workstation review-install]$ sudo yum -y install ansible
[student@workstation review-install]$ ansible --version
[student@workstation review-install]$ cat ansible.cfg
[defaults]
remote_user = student
inventory = ./inventory
log_path = ./ansible.log

[privilege_escalation]
become = yes
become_user = root
become_method = sudo

1.4 Review the Inventory

[student@workstation review-install]$ cp inventory.preinstall inventory		#Inventory for the preparation steps
[student@workstation review-install]$ cat inventory
[workstations]
workstation.lab.example.com

[nfs]
services.lab.example.com

[masters]
master.lab.example.com

[etcd]
master.lab.example.com

[nodes]
master.lab.example.com
node1.lab.example.com
node2.lab.example.com

[OSEv3:children]
masters
etcd
nodes
nfs

#Variables needed by the prepare_install.yml playbook.
[nodes:vars]
registry_local=registry.lab.example.com
use_overlay2_driver=true
insecure_registry=false
run_docker_offline=true
docker_storage_device=/dev/vdb

Tip:

The inventory defines six host groups:

  • workstations: the workstation host from which the installation is run;
  • nfs: the VM in the environment that provides NFS services for cluster storage;
  • masters: the nodes that serve the master role in the OpenShift cluster;
  • etcd: the nodes that run the etcd service for the OpenShift cluster; in this environment the master node is used;
  • nodes: the node hosts in the OpenShift cluster;
  • OSEv3: all hosts that make up the OpenShift cluster, that is, the members of the masters, etcd, nodes, and nfs groups.

Note: by default, Docker downloads container images from online registries. This environment has no Internet access, so Docker is configured to use the internal private registry instead; the registry settings are passed in through variables in the inventory and playbook.

In addition, the installation configures the Docker daemon on each host to use the overlay2 storage driver for container images. Docker supports several different storage drivers, such as AUFS, Btrfs, Device Mapper, and OverlayFS.
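
For reference, on RHEL 7 the overlay2 driver and the dedicated disk are usually configured through docker-storage-setup. A minimal sketch of the kind of configuration the docker-storage role might generate (illustrative only; the actual role contents are in the referenced course material):

# /etc/sysconfig/docker-storage-setup (illustrative sketch)
STORAGE_DRIVER=overlay2                        # use the overlay2 storage driver
DEVS=/dev/vdb                                  # dedicate the second disk to container storage
VG=docker-vg                                   # volume group created on that disk
CONTAINER_ROOT_LV_NAME=dockerlv                # logical volume for container storage
CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker   # mounted at Docker's data directory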

1.5 Verify the Nodes

[student@workstation review-install]$ cat ping.yml
---
- name: Verify Connectivity
  hosts: all
  gather_facts: no
  tasks:
    - name: "Test connectivity to machines."
      shell: "whoami"
      changed_when: false
[student@workstation review-install]$ ansible-playbook -v ping.yml
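
An equivalent quick connectivity check can also be run with Ansible ad hoc commands against the same inventory, for example:

[student@workstation review-install]$ ansible all -m ping
[student@workstation review-install]$ ansible nodes -m shell -a "whoami"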

1.6 Preparation

[student@workstation review-install]$ cat prepare_install.yml
---
- name: "Host Preparation: Docker tasks"
  hosts: nodes
  roles:
    - docker-storage
    - docker-registry-cert
    - openshift-node

  #Tasks below were not handled by the roles above.
  tasks:
    - name: Student Account - Docker Access
      user:
        name: student
        groups: docker
        append: yes

...
[student@workstation review-install]$ ansible-playbook prepare_install.yml

Tip: the playbook above includes three roles; for their contents, refer to step 2.5 of "002. OpenShift Installation and Deployment".

1.7 Verification

[student@workstation review-install]$ ssh node1 'docker pull rhel7:latest' #verify that images can be pulled

1.8 Review the Inventory

[student@workstation review-install]$ cp inventory.partial inventory		#The complete inventory for the installation
[student@workstation review-install]$ cat inventory
[workstations]
workstation.lab.example.com

[nfs]
services.lab.example.com

[masters]
master.lab.example.com

[etcd]
master.lab.example.com

[nodes]
master.lab.example.com
node1.lab.example.com openshift_node_labels="{'region':'infra', 'node-role.kubernetes.io/compute':'true'}"
node2.lab.example.com openshift_node_labels="{'region':'infra', 'node-role.kubernetes.io/compute':'true'}"

[OSEv3:children]
masters
etcd
nodes
nfs

#Variables needed by the prepare_install.yml playbook.
[nodes:vars]
registry_local=registry.lab.example.com
use_overlay2_driver=true
insecure_registry=false
run_docker_offline=true
docker_storage_device=/dev/vdb


[OSEv3:vars]
#General Variables
openshift_disable_check=disk_availability,docker_storage,memory_availability
openshift_deployment_type=openshift-enterprise
openshift_release=v3.9
openshift_image_tag=v3.9.14

#OpenShift Networking Variables
os_firewall_use_firewalld=true
openshift_master_api_port=443
openshift_master_console_port=443
#default subdomain
openshift_master_default_subdomain=apps.lab.example.com

#Cluster Authentication Variables
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'admin': '$apr1$4ZbKL26l$3eKL/6AQM8O94lRwTAu611', 'developer': '$apr1$4ZbKL26l$3eKL/6AQM8O94lRwTAu611'}

#Need to enable NFS
openshift_enable_unsupported_configurations=true
#Registry Configuration Variables
openshift_hosted_registry_storage_kind=nfs
openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
openshift_hosted_registry_storage_nfs_directory=/exports
openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
openshift_hosted_registry_storage_volume_name=registry
openshift_hosted_registry_storage_volume_size=40Gi

#etcd Configuration Variables
openshift_hosted_etcd_storage_kind=nfs
openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"
openshift_hosted_etcd_storage_nfs_directory=/exports
openshift_hosted_etcd_storage_volume_name=etcd-vol2
openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
openshift_hosted_etcd_storage_volume_size=1G
openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

#Modifications Needed for a Disconnected Install
oreg_url=registry.lab.example.com/openshift3/ose-${component}:${version}
openshift_examples_modify_imagestreams=true
openshift_docker_additional_registries=registry.lab.example.com
openshift_docker_blocked_registries=registry.access.redhat.com,docker.io
openshift_web_console_prefix=registry.lab.example.com/openshift3/ose-
openshift_cockpit_deployer_prefix='registry.lab.example.com/openshift3/'
openshift_service_catalog_image_prefix=registry.lab.example.com/openshift3/ose-
template_service_broker_prefix=registry.lab.example.com/openshift3/ose-
ansible_service_broker_image_prefix=registry.lab.example.com/openshift3/ose-
ansible_service_broker_etcd_image_prefix=registry.lab.example.com/rhel7/
[student@workstation review-install]$ lab review-install verify		#verify with the grading script
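
The password hashes used in openshift_master_htpasswd_users can be generated with the htpasswd utility from the httpd-tools package (a sketch; each run produces a different salted hash for the password redhat):

[student@workstation review-install]$ sudo yum -y install httpd-tools
[student@workstation review-install]$ htpasswd -nb admin redhat
[student@workstation review-install]$ htpasswd -nb developer redhat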

1.9 Install the OpenShift Ansible Playbooks

[student@workstation review-install]$ rpm -qa | grep atomic-openshift-utils
[student@workstation review-install]$ sudo yum -y install atomic-openshift-utils

1.10 Install OpenShift with Ansible

[student@workstation review-install]$ ansible-playbook \
/usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

[student@workstation review-install]$ ansible-playbook \
/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

1.11 Verification

Verify that the cluster is configured successfully by logging in to the web console at https://master.lab.example.com as the developer user.
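
A quick command-line check of the master API is also possible, assuming the health endpoint is reachable on port 443:

[student@workstation ~]$ curl -k https://master.lab.example.com/healthz
ok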

1.12 Authorization

[student@workstation review-install]$ ssh root@master
[root@master ~]# oc whoami
system:admin
[root@master ~]# oc adm policy add-cluster-role-to-user cluster-admin admin

Tip: on the master node, the root user is a cluster administrator (system:admin) by default.

1.13 Login Test

[student@workstation ~]$ oc login -u admin -p redhat \
https://master.lab.example.com
[student@workstation ~]$ oc get nodes			#verify the nodes

1.14 Verify Pods

[student@workstation ~]$ oc get pods -n default #view the internal pods

1.15 Test S2I

[student@workstation ~]$ oc login -u developer -p redhat \
https://master.lab.example.com
[student@workstation ~]$ oc new-project test-s2i	#create a project
[student@workstation ~]$ oc new-app --name=hello \
php:5.6~http://services.lab.example.com/php-helloworld

1.16 Test the Service

[student@workstation ~]$ oc get pods			#check the deployment
NAME            READY     STATUS    RESTARTS   AGE
hello-1-build   1/1       Running   0          39s
[student@workstation ~]$ oc expose svc hello		#expose the service
[student@workstation ~]$ curl hello-test-s2i.apps.lab.example.com	#test access
Hello, World! php version is 5.6.25

1.17 Lab Grading

[student@workstation ~]$ lab review-install grade #grade the lab with the script
[student@workstation ~]$ oc delete project test-s2i #delete the test project

Lab 2: Deploying an Application

2.1 Prerequisites

[student@workstation ~]$ lab review-deploy setup

2.2 Application Planning

Deploy a TODO LIST application consisting of the following three containers:

  • A MySQL database container that stores the data about the tasks in the to-do list.
  • An Apache httpd web server front-end container (todoui) that serves the application's static HTML, CSS, and JavaScript.
  • A Node.js-based API back-end container (todoapi) that exposes a RESTful interface to the front-end container. The todoapi container connects to the MySQL database container to manage the application's data.

2.3 Set Policy

[student@workstation ~]$ oc login -u admin -p redhat https://master.lab.example.com
[student@workstation ~]$ oc adm policy remove-cluster-role-from-group \
self-provisioner system:authenticated system:authenticated:oauth
#Restrict project creation to cluster administrators; regular users can no longer create new projects.
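
For reference only (not part of this lab), self-service project creation could later be restored with the inverse command:

[student@workstation ~]$ oc adm policy add-cluster-role-to-group \
self-provisioner system:authenticated system:authenticated:oauth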

2.4 Create a Project

[student@workstation ~]$ oc new-project todoapp
[student@workstation ~]$ oc policy add-role-to-user edit developer	#grant the developer user the edit role on the project

2.5 Set a Quota

[student@workstation ~]$ oc project todoapp
[student@workstation ~]$ oc create quota todoapp-quota --hard=pods=1	#set a pod quota
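
The oc create quota command above produces a ResourceQuota object roughly equivalent to the following YAML (a sketch showing only the pods limit):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: todoapp-quota
  namespace: todoapp
spec:
  hard:
    pods: "1"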

2.6 Create an Application

[student@workstation ~]$ oc login -u developer -p redhat \
https://master.lab.example.com						#log in as developer
[student@workstation ~]$ oc new-app --name=hello \
php:5.6~http://services.lab.example.com/php-helloworld			#create the application
[student@workstation ~]$ oc logs -f bc/hello				#follow the build log

2.7 Check the Deployment

[student@workstation ~]$ oc get pods
NAME             READY     STATUS      RESTARTS   AGE
hello-1-build    0/1       Completed   0          2m
hello-1-deploy   1/1       Running     0          1m
[student@workstation ~]$ oc get events
...
2m          2m           7         hello.15b54ba822fc1029            DeploymentConfig
Warning   FailedCreate            deployer-controller              Error creating deployer pod: pods "hello-1-deploy" is forbidden: exceeded quota: todoapp-quota, requested: pods=1, used: pods=1, limited: pods=1
[student@workstation ~]$ oc describe quota
Name:       todoapp-quota
Namespace:  todoapp
Resource    Used  Hard
--------    ----  ----
pods        1     1

Conclusion: the deployment failed because the hard pod quota was exceeded.

2.8 Expand the Quota

[student@workstation ~]$ oc rollout cancel dc hello	#cancel the rollout before fixing the quota
[student@workstation ~]$ oc login -u admin -p redhat
[student@workstation ~]$ oc project todoapp
[student@workstation ~]$ oc patch resourcequota/todoapp-quota --patch '{"spec":{"hard":{"pods":"10"}}}'

Tip: the quota can also be modified with the oc edit resourcequota todoapp-quota command.

[student@workstation ~]$ oc login -u developer -p redhat
[student@workstation ~]$ oc describe quota		#confirm the quota
Name:       todoapp-quota
Namespace:  todoapp
Resource    Used  Hard
--------    ----  ----
pods        0     10

2.9 Redeploy

[student@workstation ~]$ oc rollout latest dc/hello
[student@workstation ~]$ oc get pods			#confirm the deployment succeeded
NAME            READY     STATUS      RESTARTS   AGE
hello-1-build   0/1       Completed   0          9m
hello-2-qklrr   1/1       Running     0          12s
[student@workstation ~]$ oc delete all -l app=hello	#delete the hello application

2.10 Configure NFS

[kiosk@foundation0 ~]$ ssh root@services
[root@services ~]# mkdir -p /var/export/dbvol
[root@services ~]# chown nfsnobody:nfsnobody /var/export/dbvol
[root@services ~]# chmod 700 /var/export/dbvol
[root@services ~]# echo "/var/export/dbvol *(rw,async,all_squash)" > /etc/exports.d/dbvol.exports
[root@services ~]# exportfs -a
[root@services ~]# showmount -e

Tip: this lab uses the NFS share on the services host to provide persistent storage for the subsequent steps.

2.11 Test NFS

[kiosk@foundation0 ~]$ ssh root@node1
[root@node1 ~]# mount -t nfs services.lab.example.com:/var/export/dbvol /mnt
[root@node1 ~]# ls -la /mnt ; mount | grep /mnt		#verify that the share mounts correctly

Tip: it is recommended to run the same test on node2. Unmount the share after testing; it will be mounted automatically later when the persistent volume is used.
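
The same check on node2 might look like the following (remember to unmount on both nodes when done):

[kiosk@foundation0 ~]$ ssh root@node2
[root@node2 ~]# mount -t nfs services.lab.example.com:/var/export/dbvol /mnt
[root@node2 ~]# ls -la /mnt ; mount | grep /mnt
[root@node2 ~]# umount /mnt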

2.12 Create a PV

[student@workstation ~]$ vim /home/student/DO280/labs/review-deploy/todoapi/openshift/mysql-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 2G
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/export/dbvol
    server: services.lab.example.com
[student@workstation ~]$ oc login -u admin -p redhat
[student@workstation ~]$ oc create -f /home/student/DO280/labs/review-deploy/todoapi/openshift/mysql-pv.yaml
[student@workstation ~]$ oc get pv
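
The PersistentVolumeClaim that binds to this volume is created by the application template in the later steps; a minimal sketch of such a claim (the name mysql-pvc is illustrative, not taken from the template) would be:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2G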

2.13 Import the Template

[student@workstation ~]$ oc apply -n openshift -f /home/student/DO280/labs/review-deploy/todoapi/openshift/nodejs-mysql-template.yaml

Tip: see the attachment for the template file.
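
To confirm the import, the templates in the openshift namespace can be listed (the exact template name depends on the file contents):

[student@workstation ~]$ oc get templates -n openshift | grep -i mysql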

2.14 Build an Image from a Dockerfile

[student@workstation ~]$ vim /home/student/DO280/labs/review-deploy/todoui/Dockerfile
FROM rhel7:7.5

MAINTAINER Red Hat Training <training@redhat.com>

# DocumentRoot for Apache
ENV HOME /var/www/html

# Need this for installing HTTPD from classroom yum repo
ADD training.repo /etc/yum.repos.d/training.repo
RUN yum downgrade -y krb5-libs libstdc++ libcom_err && \
    yum install -y --setopt=tsflags=nodocs \
    httpd \
    openssl-devel \
    procps-ng \
    which && \
    yum clean all -y && \
    rm -rf /var/cache/yum

# Custom HTTPD conf file to log to stdout as well as change port to 8080
COPY conf/httpd.conf /etc/httpd/conf/httpd.conf

# Copy front end static assets to HTTPD DocRoot
COPY src/ ${HOME}/

# We run on port 8080 to avoid running container as root
EXPOSE 8080

# This stuff is needed to make HTTPD run on OpenShift and avoid
# permissions issues
RUN rm -rf /run/httpd && mkdir /run/httpd && chmod -R a+rwx /run/httpd

# Run as apache user and not root
USER 1001

# Launch apache daemon
CMD /usr/sbin/apachectl -DFOREGROUND
[student@workstation ~]$ cd /home/student/DO280/labs/review-deploy/todoui/
[student@workstation todoui]$ docker build -t todoapp/todoui .
[student@workstation todoui]$ docker images
REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
todoapp/todoui                   latest              0249e1c69e38        39 seconds ago      239 MB
registry.lab.example.com/rhel7   7.5                 4bbd153adf84        12 months ago       201 MB

2.15 Push to the Registry

[student@workstation todoui]$ docker tag todoapp/todoui:latest \
registry.lab.example.com/todoapp/todoui:latest
[student@workstation todoui]$ docker push \
registry.lab.example.com/todoapp/todoui:latest

Tip: tag the image built from the Dockerfile, then push it to the internal registry.

2.16 Import the Image Stream

[student@workstation todoui]$ oc whoami -c
todoapp/master-lab-example-com:443/admin
[student@workstation todoui]$ oc import-image todoui \
--from=registry.lab.example.com/todoapp/todoui \
--confirm -n todoapp					#import the Docker image into an OpenShift image stream
[student@workstation todoui]$ oc get is -n todoapp
NAME      DOCKER REPO                                       TAGS      UPDATED
todoui    docker-registry.default.svc:5000/todoapp/todoui   latest    13 seconds ago
[student@workstation todoui]$ oc describe is todoui -n todoapp	#inspect the image stream

2.17 Create the Application

Log in to https://master.lab.example.com in a browser and select the todoapp project.

Browse the catalog.

Select Languages -> JavaScript -> Node.js + MySQL (Persistent).

Create the application with reference to the following settings:

  • Name:
  • Git Repository URL: http://services.lab.example.com/todoapi
  • Application Hostname: todoapi.apps.lab.example.com
  • MySQL Username: todoapp
  • MySQL Password: todoapp
  • Database Name: todoappdb
  • Database Administrator Password: redhat

Click Create to create the application.

Monitor progress on the Overview page.
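
The same deployment could in principle be driven from the CLI with oc new-app against the imported template. A hedged sketch, assuming the template is named nodejs-mysql-persistent and exposes parameters with the names shown (check the real names first with oc process --parameters):

[student@workstation ~]$ oc process --parameters -n openshift nodejs-mysql-persistent
[student@workstation ~]$ oc new-app --template=nodejs-mysql-persistent \
-p SOURCE_REPOSITORY_URL=http://services.lab.example.com/todoapi \
-p APPLICATION_DOMAIN=todoapi.apps.lab.example.com \
-p DATABASE_USER=todoapp -p DATABASE_PASSWORD=todoapp \
-p DATABASE_NAME=todoappdb -p DATABASE_ADMIN_PASSWORD=redhat
#Template and parameter names above are assumptions for illustration only.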

2.18 Test the Database

[student@workstation ~]$ oc port-forward mysql-1-6hq4d 3306:3306		#keep the port forward running
[student@workstation ~]$ mysql -h127.0.0.1 -u todoapp -ptodoapp todoappdb < /home/student/DO280/labs/review-deploy/todoapi/sql/db.sql
#import the test data into the database
[student@workstation ~]$ mysql -h127.0.0.1 -u todoapp -ptodoapp todoappdb -e "select id, description, case when done = 1 then 'TRUE' else 'FALSE' END as done from Item;"
#verify that the import succeeded

2.19 Access Test

[student@workstation ~]$ curl -s http://todoapi.apps.lab.example.com/todo/api/host | python -m json.tool	#access with curl
{
    "hostname": "todoapi-1-kxlnx",
    "ip": "10.128.0.12"
}
[student@workstation ~]$ curl -s http://todoapi.apps.lab.example.com/todo/api/items | python -m json.tool	#access with curl

2.20 Create the Front-End Application

[student@workstation ~]$ oc new-app --name=todoui -i todoui	#create an application from the todoui image stream
[student@workstation ~]$ oc get pods
NAME              READY     STATUS      RESTARTS   AGE
mysql-1-6hq4d     1/1       Running     0          9m
todoapi-1-build   0/1       Completed   0          9m
todoapi-1-kxlnx   1/1       Running     0          8m
todoui-1-wwg28    1/1       Running     0          32s

2.21 Expose the Service

[student@workstation ~]$ oc expose svc todoui --hostname=todo.apps.lab.example.com

Access http://todo.apps.lab.example.com in a browser.
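
A quick command-line check of the new route (assuming the todoui service responds on it) could be:

[student@workstation ~]$ curl -s http://todo.apps.lab.example.com | head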

2.22 Lab Grading

[student@workstation ~]$ lab review-deploy grade #grade the lab with the script
