CEPH Environment Setup 08

(This section is unfinished)
This section exports rbd (block storage) over iSCSI (IP SAN) via LIO, so that other systems can use it.

1. We already created the rbd pool in "CEPH Environment Setup 03".

2. Add the configuration file

#ceph-0002
vi /etc/ceph/iscsi-gateway.cfg
[config]
# Name of the Ceph storage cluster. A suitable Ceph configuration file allowing
# access to the Ceph storage cluster from the gateway node is required, if not
# colocated on an OSD node.
cluster_name = ceph

# Place a copy of the ceph cluster's admin keyring in the gateway's /etc/ceph
# directory and reference the filename here
gateway_keyring = ceph.client.admin.keyring

# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.

# To support the API, the bare minimum settings are:
api_secure = false

# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
# trusted_ip_list = 192.168.1.103,192.168.1.104

3. Deploy the iscsi gateway

ceph orch daemon add iscsi rbd --placement="1 ceph-0002"

4. Check the iscsi disk status
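
This step was left blank in the original notes. As a rough sketch, assuming the gateway daemon from step 3 was deployed successfully, the state could be checked with standard commands like these:

# confirm the iscsi gateway daemon is running under the orchestrator
ceph orch ps | grep iscsi

# list the rbd images in the pool that can be exported as LUNs
rbd ls -p rbd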


5. Mount the iscsi disk

#ceph-0004

# Install the required packages
apt-get install open-iscsi

# Discover the available iscsi targets
iscsiadm -m discovery -t sendtargets -p 192.168.1.102:3260
192.168.1.102:3260,1 iqn.2020-06.com.neohope:iscsi

# Log in to attach the iscsi disk
iscsiadm -m node -T iqn.2020-06.com.neohope:iscsi --login
Logging in to [iface: default, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260] (multiple)
Login to [iface: default, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260] successful.

6. Use the iscsi disk

#ceph-0004

# List the disks; an extra one shows up
fdisk -l
Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/ceph--44634c9f--cf41--4215--bd5b--c2db93659bf1-osd--block--b192f8e5--55f2--4e75--a7ce--54d007410829: 20 GiB, 21470642176 bytes, 41934848 sectors
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors

# Inspect the sda disk
fdisk -l /dev/sda
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

# Format it
sudo mkfs.ext4 -m0 /dev/sda
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 42229c39-e23c-46b2-929d-469e66196498
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

# Mount it
mkdir -p /mnt/iscsi
mount -t ext4 /dev/sda /mnt/iscsi

# Basic operations
cd /mnt/iscsi/
ls
vi iscis.txt
ls

7. Unmount and detach the iscsi disk

# Unmount
umount /mnt/iscsi

# Log out
iscsiadm -m node -T iqn.2020-06.com.neohope:iscsi  --logout
Logging out of session [sid: 1, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260]
Logout of [sid: 1, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260] successful.

# List the disks again; the iscsi disk is gone
fdisk -l

CEPH Environment Setup 07

This section exports rbd (block storage) over iSCSI (IP SAN) via tgt, so that other systems can use it.

1. Install the required packages

#ceph-0002
apt-get install tgt
apt-get install open-iscsi

2. Check whether tgt supports rbd

#ceph-0002
tgtadm --lld iscsi --op show --mode system
System:
State: ready
debug: off
LLDs:
iscsi: ready
iser: error
Backing stores:
sheepdog
bsg
sg
null
ssc
smc (bsoflags sync:direct)
mmc (bsoflags sync:direct)
rdwr (bsoflags sync:direct)
aio
Device types:
disk
cd/dvd
osd
controller
changer
tape
passthrough
iSNS:
iSNS=Off
iSNSServerIP=
iSNSServerPort=3205
iSNSAccessControl=Off

As you can see, this build of tgt does not list rbd as a backing store, so we first map the rbd image to a local block device and then export that device instead.
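
For reference, a tgt build compiled with RBD support would list rbd under "Backing stores" and could export the image without mapping it locally. A sketch of that variant of the configuration, assuming such a build:

<target iqn.2020-06.com.neohope:iscsi>
    driver iscsi
    bs-type rbd
    backing-store rbd/r2
</target>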

3. Create and map the rbd device

#ceph-0002
# Create the block image
rbd create --size 1024 rbd/r2

# Disable features unsupported by the kernel client
rbd feature disable r2 object-map fast-diff deep-flatten

# Map the r2 image
rbd map r2
/dev/rbd0

# Check the mapping
rbd showmapped
id pool image snap device
0  rbd  r2    -    /dev/rbd0
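
Optionally, if the mapping should survive a reboot of the gateway node, the rbdmap service shipped with ceph-common can recreate it at boot. A sketch, assuming the admin keyring is used as elsewhere in these notes:

# re-map rbd/r2 at boot using the admin keyring
echo "rbd/r2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
systemctl enable rbdmap.service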

4. Edit the tgt configuration file

#ceph-0002
vim /etc/tgt/targets.conf
<target iqn.2020-06.com.neohope:iscsi>
backing-store  /dev/rbd0             # exported device
initiator-address 192.168.1.0/24     # IP restriction, adjust as needed
# incominguser iuid ipwd             # CHAP credentials, enable as needed
write-cache off                      # disable write cache, adjust as needed
</target>

# Restart the service to apply the configuration
systemctl restart tgt.service

# Check the tgt status
tgt-admin --show
Target 1: iqn.2020-06.com.neohope:iscsi
System information:
Driver: iscsi
State: ready
I_T nexus information:
LUN information:
LUN: 0
Type: controller
SCSI ID: IET     00010000
SCSI SN: beaf10
Size: 0 MB, Block size: 1
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: null
Backing store path: None
Backing store flags:
LUN: 1
Type: disk
SCSI ID: IET     00010001
SCSI SN: beaf11
Size: 1074 MB, Block size: 512
Online: Yes
Removable media: No
Prevent removal: No
Readonly: No
SWP: No
Thin-provisioning: No
Backing store type: rdwr
Backing store path: /dev/rbd0
Backing store flags:
Account information:
ACL information:
192.168.1.0/24

5. Mount the iscsi disk

#ceph-0004
# Install the required packages
apt-get install open-iscsi

# Discover the available iscsi targets
iscsiadm -m discovery -t sendtargets -p 192.168.1.102:3260
192.168.1.102:3260,1 iqn.2020-06.com.neohope:iscsi

# Log in to attach the iscsi disk
iscsiadm -m node -T iqn.2020-06.com.neohope:iscsi --login
Logging in to [iface: default, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260] (multiple)
Login to [iface: default, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260] successful.

6. Use the iscsi disk

#ceph-0004

# List the disks; an extra one shows up
fdisk -l
Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
Disk /dev/vdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk /dev/mapper/ceph--44634c9f--cf41--4215--bd5b--c2db93659bf1-osd--block--b192f8e5--55f2--4e75--a7ce--54d007410829: 20 GiB, 21470642176 bytes, 41934848 sectors
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors

# Inspect the sda disk
fdisk -l /dev/sda
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

# Format it
sudo mkfs.ext4 -m0 /dev/sda
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 262144 4k blocks and 65536 inodes
Filesystem UUID: 42229c39-e23c-46b2-929d-469e66196498
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

# Mount it
mkdir -p /mnt/iscsi
mount -t ext4 /dev/sda /mnt/iscsi
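
Optionally, to have the initiator reattach and remount the disk after a reboot, the node record can be switched to automatic startup and the mount added to fstab with _netdev so it waits for the network. A sketch; referring to /dev/sda assumes the device name stays stable, in practice the filesystem UUID is safer:

# log in to the target automatically at boot
iscsiadm -m node -T iqn.2020-06.com.neohope:iscsi -o update -n node.startup -v automatic

# mount at boot once the network (and thus the iscsi session) is up
echo "/dev/sda /mnt/iscsi ext4 defaults,_netdev 0 0" >> /etc/fstab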

# Basic operations
cd /mnt/iscsi/
ls
vi iscis.txt
ls

7. Unmount and detach the iscsi disk

# Unmount
umount /mnt/iscsi

# Log out
iscsiadm -m node -T iqn.2020-06.com.neohope:iscsi  --logout
Logging out of session [sid: 1, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260]
Logout of [sid: 1, target: iqn.2020-06.com.neohope:iscsi, portal: 192.168.1.102,3260] successful.

# List the disks again; the iscsi disk is gone
fdisk -l

CEPH Environment Setup 06

(This section is unfinished)
This section uses ganesha to export cephfs and rgw as nfs, so that other systems can use them.

1. Install the nfs packages

apt-get install nfs-kernel-server
apt-get install nfs-common

2. Create the ganesha configuration file

vi /etc/ganesha/ganesha.conf
EXPORT
{
Export_ID=1;
Path = "/";
Pseudo = /cephfs;
Access_Type = RW;
NFS_Protocols = 4;
Transport_Protocols = TCP;
FSAL {
Name = CEPH;
}
}
EXPORT
{
Export_ID=2;
Path = "/";
Pseudo = /rgw;
Access_Type = RW;
Squash = No_root_squash;
NFS_Protocols = 4;
Transport_Protocols = TCP;
FSAL {
Name = RGW;
User_Id = "s3user";
Access_Key_Id ="6IUA1DMFDTP5BG9ZMIR8";
Secret_Access_Key = "zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP";
}
}
RGW {
ceph_conf = "/etc/ceph/ceph.conf";
}

3. Create the ganesha nfs service

# Create the pool
ceph osd pool create nfspool
pool 'nfspool' created

# Create the nfs service
ceph orch apply nfs mynfs nfspool --placement="1 ceph-0002"
Scheduled nfs.mynfs update...

# Check the running daemons
ceph orch ps
NAME                                 HOST       STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                IMAGE ID      CONTAINER ID
alertmanager.ceph-0001               ceph-0001  running (43m)  3m ago     95m  0.20.0   prom/alertmanager         0881eb8f169f  3705b800f488
crash.ceph-0001                      ceph-0001  running (43m)  3m ago     95m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  8a9626bd1e8f
crash.ceph-0002                      ceph-0002  running (86m)  14m ago    89m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  c12329831055
crash.ceph-0003                      ceph-0003  running (86m)  3m ago     89m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  d6a935855aa8
crash.ceph-0004                      ceph-0004  running (85m)  3m ago     85m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  638b44d13928
grafana.ceph-0001                    ceph-0001  running (43m)  3m ago     95m  6.6.2    ceph/ceph-grafana:latest  87a51ecf0b1c  187863dc8db2
mds.cephfs01.ceph-0003.ptainy        ceph-0003  running (82m)  3m ago     82m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  fdbc5ae73e57
mds.cephfs01.ceph-0004.ivkeqr        ceph-0004  running (82m)  3m ago     82m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  d8d3f3de875a
mgr.ceph-0001.pttjrr                 ceph-0001  running (43m)  3m ago     96m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  8fb1ee64f050
mgr.ceph-0004.qnxgej                 ceph-0004  running (85m)  3m ago     85m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  e94e690ee6d7
mon.ceph-0001                        ceph-0001  running (43m)  3m ago     96m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  10d98d496ca2
mon.ceph-0002                        ceph-0002  running (85m)  14m ago    85m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  fc98b4e1e98d
mon.ceph-0003                        ceph-0003  running (84m)  3m ago     84m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  93ff5834fa13
mon.ceph-0004                        ceph-0004  running (85m)  3m ago     85m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  0263b8398f31
nfs.mynfs.ceph-0002                  ceph-0002  running (59m)  14m ago    59m  3.2      docker.io/ceph/ceph:v15   d72755c420bc  8fd3c8820929
node-exporter.ceph-0001              ceph-0001  running (43m)  3m ago     95m  1.0.0    prom/node-exporter        14191dbfb45b  45f0525baf7e
node-exporter.ceph-0002              ceph-0002  running (84m)  14m ago    84m  1.0.0    prom/node-exporter        14191dbfb45b  995668d5202e
node-exporter.ceph-0003              ceph-0003  running (84m)  3m ago     84m  1.0.0    prom/node-exporter        14191dbfb45b  b2c34f89fa99
node-exporter.ceph-0004              ceph-0004  running (84m)  3m ago     84m  1.0.0    prom/node-exporter        14191dbfb45b  9213462b8b7c
osd.0                                ceph-0001  running (43m)  3m ago     83m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  9fdaaf6d7413
osd.1                                ceph-0002  running (83m)  14m ago    83m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  8add1f7856cd
osd.2                                ceph-0003  running (83m)  3m ago     83m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  b1e5a200bdac
osd.3                                ceph-0004  running (82m)  3m ago     82m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  d534c9132856
prometheus.ceph-0001                 ceph-0001  running (43m)  3m ago     95m  2.18.1   prom/prometheus:latest    de242295e225  4acc8df6ec7b
rgw.myrealm.myzone.ceph-0001.mbzrge  ceph-0001  running (43m)  3m ago     74m  15.2.3   docker.io/ceph/ceph:v15   d72755c420bc  1cfcd5007b71

4. Check the nfs service

# If this error appears, the local nfs service is not set up properly
showmount -e
clnt_create: RPC: Program not registered

# Check the exports
showmount -e
Export list for ceph-0002:
/ (everyone)
/ (everyone)

5. Mount the nfs directory

# Mount the nfs export
mkdir -p  /mnt/nfs
mount -t nfs4 ceph-0002:/  /mnt/nfs/

# Use it like a local directory
ls /mnt/nfs
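
Since the configuration above defines two exports with the Pseudo paths /cephfs and /rgw, each can also be mounted on its own. A sketch, assuming the same ganesha host:

mkdir -p /mnt/nfs/cephfs /mnt/nfs/rgw
mount -t nfs4 ceph-0002:/cephfs /mnt/nfs/cephfs
mount -t nfs4 ceph-0002:/rgw /mnt/nfs/rgw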

CEPH Environment Setup 05

This section uses nfs-kernel-server to export cephfs or rbd as nfs.

1. Install the nfs packages

apt-get install nfs-kernel-server
apt-get install nfs-common

2. Mount cephfs (via ceph-fuse) and rbd on ceph-0001

mkdir -p /mnt/fuse /mnt/rbd
ceph-fuse /mnt/fuse
rbd map r1
mount -t ext4 /dev/rbd0 /mnt/rbd

3. Configure the exports file

vi /etc/exports
/mnt/fuse         192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check,fsid=0)
/mnt/rbd          192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

4. Apply the configuration

exportfs -a

systemctl restart nfs-kernel-server

showmount -e
Export list for ceph-0001:
/mnt/rbd  192.168.1.0/24
/mnt/fuse 192.168.1.0/24

5. Mount from another machine

mount -t nfs  ceph-0001:/mnt/fuse  /mnt/fuse --verbose
mount.nfs: timeout set for Fri Jun  5 19:06:27 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.1.101,clientaddr=192.168.1.103'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.1.101'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.101 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.101 prog 100005 vers 3 prot UDP port 39630

mount -t nfs  ceph-0001:/mnt/rbd  /mnt/rbd --verbose
mount.nfs: timeout set for Fri Jun  5 19:06:36 2020
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.1.101,clientaddr=192.168.1.103'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=192.168.1.101'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.101 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.101 prog 100005 vers 3 prot UDP port 39630

ls /mnt/fuse
fuse.txt  volumes

ls /mnt/rbd
lost+found  rbd1.txt
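
A note on the verbose output above: the NFSv4.2 attempt fails with "No such file or directory" because fsid=0 makes /mnt/fuse the NFSv4 pseudo-root, so v4 paths are relative to it; the client then falls back to NFSv3, where the full server path works. Under that assumption, an explicit NFSv4 mount of the fuse export would look like this sketch:

mount -t nfs4 ceph-0001:/ /mnt/fuse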

CEPH Environment Setup 04

This section tests object storage, one of Ceph's three storage interfaces.

1. Create the zone and enable rgw

# Create the realm
radosgw-admin realm create --rgw-realm=myrealm --default
# Create the zonegroup
radosgw-admin zonegroup create --rgw-zonegroup=myzg --endpoints=http://ceph01:8080 --rgw-realm=myrealm --master --default
# Create the zone
radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=myzone --endpoints=http://ceph01:8080 --master --default
# Enable rgw on ceph01
ceph orch apply rgw myrealm myzone --placement="1 ceph01"

2. Create an s3 user

radosgw-admin user create --uid=s3user --display-name=s3user  --system
{
"user_id": "s3user",
"display_name": "s3user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "s3user",
"access_key": "6IUA1DMFDTP5BG9ZMIR8",
"secret_key": "zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"system": "true",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}

3. Create a swift subuser

sudo radosgw-admin subuser create --uid=s3user --subuser=s3user:swift --access=full
{
"user_id": "s3user",
"display_name": "s3user",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [
{
"id": "s3user:swift",
"permissions": "full-control"
}
],
"keys": [
{
"user": "s3user",
"access_key": "6IUA1DMFDTP5BG9ZMIR8",
"secret_key": "zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP"
}
],
"swift_keys": [
{
"user": "s3user:swift",
"secret_key": "2wou5DxQ6WiBYyHf8qb3QIMX9BnhhBd5Njlj6LJX"
}
],
"caps": [],
"op_mask": "read, write, delete",
"system": "true",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw"
}

4. Create a bucket through the s3 interface

sudo apt-get install python-boto

# Edit the s3test.py file
vi s3test.py

import boto.s3.connection

access_key = '6IUA1DMFDTP5BG9ZMIR8'
secret_key = 'zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='ceph01', port=80,
    is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-new-bucket')
for bucket in conn.get_all_buckets():
    print "{name} {created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )

# Run it
python s3test.py
mybucket 2020-05-19T20:01:59.139Z

5. List buckets through the swift interface

sudo apt-get install python-pip
sudo pip install --upgrade setuptools
sudo pip install --upgrade python-swiftclient

# List buckets with the swift client
swift -V 1 -A http://172.16.172.101:80/auth -U s3user:swift -K '2wou5DxQ6WiBYyHf8qb3QIMX9BnhhBd5Njlj6LJX' list
mybucket

6. List buckets with s3cmd

sudo apt-get install s3cmd

# Generate the configuration
s3cmd --configure
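
Instead of answering the interactive prompts, a minimal ~/.s3cfg can be written directly. A sketch that reuses the keys from step 2 and assumes the gateway answers on ceph01:80, as in s3test.py:

cat > ~/.s3cfg <<'EOF'
[default]
access_key = 6IUA1DMFDTP5BG9ZMIR8
secret_key = zdoRS2yWL6EsNEBa4xuOSFMPn0lMvPJVMIYZJirP
host_base = ceph01:80
host_bucket = ceph01:80
use_https = False
EOF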

# List buckets with the s3cmd tool
s3cmd ls
2020-05-19 20:01  s3://mybucket

CEPH Environment Setup 03

This section tests block storage, one of Ceph's three storage interfaces.

1. Create the storage pool and an rbd image

sudo ceph osd pool ls
sudo ceph osd pool create rbd

sudo rados df

sudo rbd ls
sudo rbd create --size 1024 rbd/r1

2. Map the image to a block device

# Mapping the image directly fails with an error
#sudo rbd map r1
#rbd: sysfs write failed
#RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable r1 object-map fast-diff deep-flatten".
#In some cases useful info is found in syslog - try "dmesg | tail".
#rbd: map failed: (6) No such device or address

# Fix the problem and map again
sudo rbd feature disable r1 object-map fast-diff deep-flatten
sudo rbd map r1
/dev/rbd0

# Check the mapping
sudo rbd showmapped
id pool image snap device
0  rbd  r1    -    /dev/rbd0

3. Initialize the block device

# View the device with fdisk
sudo fdisk -l /dev/rbd0

# Format the device as ext4
sudo mkfs.ext4 -m0 /dev/rbd0

# Mount the block device
sudo mkdir -p /mnt/rbd/r1
sudo mount -t ext4 /dev/rbd0 /mnt/rbd/r1

4. Perform some basic operations

sudo ls /mnt/rbd/r1

sudo vi /mnt/rbd/r1/hi.txt

sudo cat /mnt/rbd/r1/hi.txt
hello rbd
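
When the test is done, the setup can be torn down again; a short sketch of the reverse steps:

sudo umount /mnt/rbd/r1
sudo rbd unmap /dev/rbd0
sudo rbd rm rbd/r1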

CEPH Environment Setup 02

This section brings the storage devices under Ceph management and tests cephfs, one of Ceph's three storage interfaces.

1. Check the device status

sudo ceph osd status
ID  HOST     USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
0  ceph01  1027M   298G      0        0       0        0   exists,up
1  ceph02  1027M   298G      0        0       0        0   exists,up
2  ceph03  1027M   298G      0        0       0        0   exists,up
3  ceph04  1027M   298G      0        0       0        0   exists,up

sudo ceph orch device ls
HOST    PATH      TYPE   SIZE  DEVICE                             AVAIL  REJECT REASONS
ceph01  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB434b1565-528a303a  True
ceph01  /dev/sda  hdd    300G  VBOX_HARDDISK_VB3eec2162-9aed4ffc  False  locked
ceph02  /dev/sdb  hdd    300G  VBOX_HARDDISK_VBa6445865-c497aa8e  True
ceph02  /dev/sda  hdd    300G  VBOX_HARDDISK_VB64e04201-60c7209f  False  locked
ceph03  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB20fd0c04-b14ef3fa  True
ceph03  /dev/sda  hdd    300G  VBOX_HARDDISK_VB6f4439ab-85f80c78  False  locked
ceph04  /dev/sdb  hdd    300G  VBOX_HARDDISK_VB2c293541-3183e992  True
ceph04  /dev/sda  hdd    300G  VBOX_HARDDISK_VBd81d45d4-a88d6ff3  False  locked

2. Add the storage devices as OSDs

sudo ceph orch apply osd --all-available-devices
Scheduled osd update...

sudo ceph orch device ls --refresh
HOST    PATH      TYPE   SIZE  DEVICE                             AVAIL  REJECT REASONS
ceph01  /dev/sda  hdd    300G  VBOX_HARDDISK_VB3eec2162-9aed4ffc  False  locked
ceph01  /dev/sdb  hdd    300G  VBOX HARDDISK_VB434b1565-528a303a  False  LVM detected, Insufficient space (<5GB) on vgs, locked
ceph02  /dev/sda  hdd    300G  VBOX_HARDDISK_VB64e04201-60c7209f  False  locked
ceph02  /dev/sdb  hdd    300G  VBOX HARDDISK_VBa6445865-c497aa8e  False  LVM detected, locked, Insufficient space (<5GB) on vgs
ceph03  /dev/sda  hdd    300G  VBOX_HARDDISK_VB6f4439ab-85f80c78  False  locked
ceph03  /dev/sdb  hdd    300G  VBOX HARDDISK_VB20fd0c04-b14ef3fa  False  locked, Insufficient space (<5GB) on vgs, LVM detected
ceph04  /dev/sda  hdd    300G  VBOX_HARDDISK_VBd81d45d4-a88d6ff3  False  locked
ceph04  /dev/sdb  hdd    300G  VBOX HARDDISK_VB2c293541-3183e992  False  LVM detected, locked, Insufficient space (<5GB) on vgs

3. Check the OSD status

sudo ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META   AVAIL    %USE  VAR   PGS  STATUS
0    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    0      up
1    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    1      up
2    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    1      up
3    hdd  0.29300   1.00000  300 GiB  1.0 GiB  3.4 MiB   0 B  1 GiB  299 GiB  0.33  1.00    1      up
TOTAL  1.2 TiB  4.0 GiB   14 MiB   0 B  4 GiB  1.2 TiB  0.33
MIN/MAX VAR: 1.00/1.00  STDDEV: 0

sudo ceph osd utilization
avg 0.75
stddev 0.433013 (expected baseline 0.75)
min osd.0 with 0 pgs (0 * mean)
max osd.1 with 1 pgs (1.33333 * mean)
sudo ceph osd pool stats
pool device_health_metrics id 1
nothing is going on

sudo ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME        STATUS  REWEIGHT  PRI-AFF
-1         1.17200  root default
-3         0.29300      host ceph01
0    hdd  0.29300          osd.0        up   1.00000  1.00000
-5         0.29300      host ceph02
1    hdd  0.29300          osd.1        up   1.00000  1.00000
-7         0.29300      host ceph03
2    hdd  0.29300          osd.2        up   1.00000  1.00000
-9         0.29300      host ceph04
3    hdd  0.29300          osd.3        up   1.00000  1.00000

sudo ceph pg stat
1 pgs: 1 active+clean; 0 B data, 14 MiB used, 1.2 TiB / 1.2 TiB avail

4. Create a cephfs

sudo ceph fs volume ls
[]
sudo ceph fs volume create  v1
sudo ceph fs volume ls
[
{
"name": "v1"
}
]

sudo ceph fs subvolumegroup create v1 g1
sudo ceph fs subvolumegroup ls v1

sudo ceph fs subvolume create v1 sv1
sudo ceph fs subvolume ls v1

sudo ceph fs ls
name: v1, metadata pool: cephfs.v1.meta, data pools: [cephfs.v1.data ]

5. Mount the cephfs

sudo apt-get install ceph-fuse

# Mount cephfs
sudo mkdir -p /mnt/ceph/ceph_fuse
sudo ceph-fuse /mnt/ceph/ceph_fuse
ceph-fuse[24512]: starting ceph client
2020-05-18 05:57:36.039818 7f7d221a2500 -1 init, newargv = 0x559708e0e2e0 newargc=9
ceph-fuse[24512]: starting fuse

# Check the mount
sudo mount | grep ceph
ceph-fuse on /mnt/ceph/ceph_fuse type fuse.ceph-fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

# It can now be used like a local disk for day-to-day operations
sudo ls /mnt/ceph/ceph_fuse/
volumes
sudo ls /mnt/ceph/ceph_fuse/volumes
g1  _nogroup
sudo ls /mnt/ceph/ceph_fuse/volumes/g1

sudo vi /mnt/ceph/ceph_fuse/volumes/g1/hi.txt
sudo cat /mnt/ceph/ceph_fuse/volumes/g1/hi.txt
hello ceph fuse
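
Besides ceph-fuse, the same filesystem can also be mounted with the kernel client. A sketch, assuming the monitor runs on ceph01 and using the admin key for a quick test only:

sudo mkdir -p /mnt/ceph/ceph_kernel
sudo mount -t ceph ceph01:6789:/ /mnt/ceph/ceph_kernel -o name=admin,secret=$(sudo ceph auth get-key client.admin)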

CEPH Environment Setup 01

1. Initial environment

Prepare four nodes (adjust hosts and hostname on each node accordingly):

ceph-0001 172.16.172.101
ceph-0002 172.16.172.102
ceph-0003 172.16.172.103
ceph-0004 172.16.172.104

Run on every node:

sudo apt-get update
sudo apt-get install docker.io

2. Install cephadm on the primary node

# The officially recommended method has a problem
#sudo ./cephadm add-repo --release octopus
#The key(s) in the keyring /etc/apt/trusted.gpg.d/ceph.release.gpg are ignored as the file has an unsupported filetype.
#sudo rm /etc/apt/trusted.gpg.d/ceph.release.gpg

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-octopus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod 711 cephadm
sudo ./cephadm install

sudo cephadm install ceph-common

3. Bootstrap the cluster

sudo mkdir -p /etc/ceph

sudo cephadm bootstrap --mon-ip 172.16.172.101 --allow-overwrite
INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit systemd-timesyncd.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 7bffaaf6-9688-11ea-ac24-080027b4217f
INFO:cephadm:Verifying IP 172.16.172.101 port 3300 ...
INFO:cephadm:Verifying IP 172.16.172.101 port 6789 ...
INFO:cephadm:Mon IP 172.16.172.101 is in CIDR network 172.16.172.0/24
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph01...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:

URL: https://localhost:8443/
User: admin
Password: mdbewc14gq

INFO:cephadm:You can access the Ceph CLI with:

sudo /usr/sbin/cephadm shell --fsid 7bffaaf6-9688-11ea-ac24-080027b4217f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

INFO:cephadm:Please consider enabling telemetry to help improve Ceph:

ceph telemetry on

For more information see:

https://docs.ceph.com/docs/master/mgr/telemetry/

INFO:cephadm:Bootstrap complete.

4. At this point you can log in to the management dashboard using the information printed at the end

5. Edit the configuration file

sudo vi /etc/ceph/ceph.conf

[global]
fsid = a4547d9d-f1a1-4753-b5cc-df0e043ebc65
mon_initial_members = ceph01
# The generated mon_host seemed to have problems
#mon_host = [v2:ceph01:3300/0,v1:ceph:6789/0]
mon_host = 172.16.172.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 172.16.172.0/24

6. Check the ceph status

sudo ceph status
cluster:
id:     7bffaaf6-9688-11ea-ac24-080027b4217f
health: HEALTH_WARN
Reduced data availability: 1 pg inactive
OSD count 0 < osd_pool_default_size 3

services:
mon: 1 daemons, quorum ceph01 (age 35m)
mgr: ceph01.lreqdw(active, since 33m)
osd: 0 osds: 0 up, 0 in

data:
pools:   1 pools, 1 pgs
objects: 0 objects, 0 B
usage:   0 B used, 0 B / 0 B avail
pgs:     100.000% pgs unknown
1 unknown

7. Prepare the three additional nodes

# On ceph01
# Copy ceph.pub to the other three nodes
scp /etc/ceph/ceph.pub  neohope@ceph02:~/authorized_keys

# On ceph02
# Enable the root account
sudo passwd -u root
# Set up root's authorized_keys
mv authorized_keys  /root/.ssh/
cd /root/.ssh/
chown root:root authorized_keys
chmod 0600 authorized_keys
# Allow root login over ssh
sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo service ssh restart

# On ceph01
# Fetch the cluster's ssh private key
sudo ceph config-key get mgr/cephadm/ssh_identity_key > ceph.pem
chmod 0600 ceph.pem
# Test root login
ssh  -i ceph.pem root@ceph02

8. Add the three nodes to the cluster

sudo ceph orch host add ceph02
Added host 'ceph02'

sudo ceph orch host add ceph03
Added host 'ceph03'

sudo ceph orch host add ceph04
Added host 'ceph04'

sudo ceph orch host ls
HOST    ADDR    LABELS  STATUS
ceph01  ceph01
ceph02  ceph02
ceph03  ceph03
ceph04  ceph04

9. Set up the monitors

ceph orch apply mon 4
ceph orch apply mon ceph01,ceph02,ceph03,ceph04
sudo ceph status
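
A quick check, using commands already shown above, that all four monitors actually joined the quorum:

sudo ceph mon stat
sudo ceph orch ps | grep mon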

ISTIO Environment Setup 02

It took switching between three cloud providers to get the final example to run...

1. Download the sample source code

git clone https://github.com/istio/istio.git
Cloning into 'istio'...

2. Build the images

cd istio/samples/helloworld/src
./build_service.sh
Sending build context to Docker daemon  7.168kB
Step 1/8 : FROM python:2-onbuild
2-onbuild: Pulling from library/python
......

sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
istio/examples-helloworld-v2         latest              2c7736ccfb8b        45 seconds ago      713MB
istio/examples-helloworld-v1         latest              20be3b24eab7        46 seconds ago      713MB

3. Distribute the images to the other nodes

# Save the images
sudo docker save -o hello1.tar 20be3b24eab7
sudo docker save -o hello2.tar 2c7736ccfb8b

# Send the images to the other three nodes and load them
# Do the following for each node
scp -i ~/hwk8s.pem hello1.tar root@192.168.1.229:~/
scp -i ~/hwk8s.pem hello2.tar root@192.168.1.229:~/

ssh -i ~/hwk8s.pem root@192.168.1.229

sudo docker load -i hello1.tar
sudo docker tag 20be3b24eab7 istio/examples-helloworld-v1:latest

sudo docker load -i hello2.tar
sudo docker tag 2c7736ccfb8b istio/examples-helloworld-v2:latest

exit

4. Deploy helloworld

kubectl apply -f helloworld.yaml
service/helloworld created
deployment.apps/helloworld-v1 created
deployment.apps/helloworld-v2 created

kubectl apply -f helloworld-gateway.yaml
gateway.networking.istio.io/helloworld-gateway created
virtualservice.networking.istio.io/helloworld created

kubectl get pods
kubectl get deployments

5. Test and generate traffic

# Set environment variables
# Use the internal IP here
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export GATEWAY_URL=192.168.1.124:$INGRESS_PORT

# Test it; successive requests are served by different versions of the service
curl http://$GATEWAY_URL/hello
Hello version: v1, instance: helloworld-v1-5b75657f75-9dss5
curl http://$GATEWAY_URL/hello
Hello version: v2, instance: helloworld-v2-7855866d4f-rd2tr

# You can also browse from the public network
# Use the external IP here
# Again, refreshing the browser switches between the service versions
http://159.138.135.216:<INGRESS_PORT>/hello

# Generate traffic
./loadgen.sh
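
loadgen.sh from the sample simply keeps requesting the service; if the script is not at hand, an equivalent sketch (assuming GATEWAY_URL from the previous step is still set):

while true; do curl -s "http://$GATEWAY_URL/hello"; sleep 1; done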

6. Inspect the traffic in kiali

#kiali 20001
istioctl dashboard kiali

# Change the internal port in the nginx config as in the previous section
# Reload the configuration
nginx -s reload

# Open in a browser
http://159.138.135.216:8000

7. The other dashboards can be accessed the same way

#grafana 3000
istioctl dashboard grafana
#jaeger  16686
istioctl dashboard jaeger
#kiali 20001
istioctl dashboard kiali
#prometheus 9090
istioctl dashboard prometheus
#podid 9876
istioctl dashboard controlz podid
#podid 15000
istioctl dashboard envoy podid
#zipkin
istioctl dashboard zipkin

# Change the internal port in the nginx config as in the previous section
# Reload the configuration
nginx -s reload

# Open in a browser
http://159.138.135.216:8000

ISTIO Environment Setup 01

1. First, follow the earlier k8s tutorials to set up a working k8s environment:
Kubernetes Environment Setup 01

Kubernetes Environment Setup 02

k8s-0001 159.138.135.216 192.168.1.124
k8s-0002 159.138.139.37 192.168.1.229
k8s-0003 159.138.31.39 192.168.1.187
k8s-0004 119.8.113.135 192.168.1.83

2. Download and deploy istio

# Download and deploy istio
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.5.2
export PATH=$PWD/bin:$PATH
istioctl manifest apply --set profile=demo
Detected that your cluster does not support third party JWT authentication. Falling back to less secure first party JWT. See https://istio.io/docs/ops/best-practices/security/#configure-third-party-service-account-tokens for details.
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
Waiting for resources to become ready...
Waiting for resources to become ready...
Waiting for resources to become ready...
Waiting for resources to become ready...
Waiting for resources to become ready...
- Applying manifest for component EgressGateways...
- Applying manifest for component IngressGateways...
- Applying manifest for component AddonComponents...
✔ Finished applying manifest for component EgressGateways.
✔ Finished applying manifest for component AddonComponents.
✔ Finished applying manifest for component IngressGateways.
✔ Installation complete

# Tell istio to automatically inject the Envoy sidecar into pods in the default namespace
kubectl label namespace default istio-injection=enabled
namespace/default labeled
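
An optional check that the label took effect, listing it as an extra column:

kubectl get namespace -L istio-injection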

3. Deploy the demo

# Deploy
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml

# Check the pods
kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-6fc55d65c9-kxxpm       2/2     Running   0          106s
productpage-v1-7f44c4d57c-h6h7p   2/2     Running   0          105s
ratings-v1-6f855c5fff-2rjz9       2/2     Running   0          105s
reviews-v1-54b8794ddf-tq5vm       2/2     Running   0          106s
reviews-v2-c4d6568f9-q8mvs        2/2     Running   0          106s
reviews-v3-7f66977689-ccp9c       2/2     Running   0          106s

# Check the services
kubectl get services
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.104.68.235   <none>        9080/TCP   89s
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    31m
productpage   ClusterIP   10.106.255.85   <none>        9080/TCP   89s
ratings       ClusterIP   10.103.19.155   <none>        9080/TCP   89s
reviews       ClusterIP   10.110.79.44    <none>        9080/TCP   89s

# Enable external access
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created

# Check the gateway
kubectl get gateway
NAME               AGE
bookinfo-gateway   7s

4. Configure the ingress

# Check whether an external IP has been assigned
kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.105.220.60   <pending>     15020:32235/TCP,80:30266/TCP,443:30265/TCP,15029:30393/TCP,15030:30302/TCP,15031:30789/TCP,15032:31411/TCP,31400:30790/TCP,15443:31341/TCP   5m30s

# Use a node address as the host; only one of this and the LB option is needed
export INGRESS_HOST=47.57.158.253

# Use the LB address as the host; only one of this and the node option is needed
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Configure the http port
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

# Configure the https port
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

# Set and print the external access URL
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo http://$GATEWAY_URL/productpage

# The deployed demo can now be reached through the node's IP address
# Open the URL printed above in a browser
#http://47.57.158.253:30266/productpage

5. Open the management dashboards

# Start kiali
istioctl dashboard kiali

# Install nginx
# and set up a reverse proxy
vi /etc/nginx/nginx.conf
http {

    upstream backend {
        # local port being proxied
        server 127.0.0.1:20001;
    }

    server {
        # external port to listen on
        listen 8000;
        location / {
            proxy_pass http://backend;
        }
    }

}

# The kiali dashboard is now reachable through the reverse proxy on port 8000
# http://47.57.158.253:8000

PS:
The TCP ports that must be open are:

8000 nginx proxy port
8001 k8s default proxy port
30266 bookinfo demo port (this changes between deployments)