OpenStack Yoga Installation (Ubuntu)
2022/6/27 5:20:20
Table of Contents
- Manual two-node installation of OpenStack Yoga
- Environment preparation
- Network configuration
- Hostname resolution
- Testing network connectivity
- NTP time synchronization
- Installing the OpenStack packages on all nodes
- Installing the SQL database on the controller node
- Installing the message queue on the controller node
- Installing Memcached on the controller node
- Installing etcd on the controller node
- Minimal set of services for Yoga
- Installing Keystone
- Installing Glance
- Installing Placement
- Installing Nova
- Installing Neutron
- Installing Horizon
- Installing Cinder
- Launching a cloud server on the Yoga OpenStack (optional)
- Environment preparation
- Appendix 1: Firewalls and default ports
Manual two-node installation of OpenStack Yoga
First, a look at the final client version:
```
test@controller:~$ openstack --version
openstack 5.8.0
```
Reference (official documentation): https://docs.openstack.org/install-guide/
Automated all-in-one installers are convenient, but these deployment tools are built abroad and often fail halfway because of network conditions. Manual installation is tedious and time-consuming, but if you are new to OpenStack you should do it by hand at least once; only then do you learn what each component is for. Always keep the official documentation open; if the English is a problem, right-click and translate the page.
Environment preparation
This guide uses Ubuntu 22.04. On CentOS you need version 8; CentOS 7 cannot run an OpenStack release this new. CentOS 8 also works, and apart from some package-installation commands and network configuration, the process is basically the same.
It does not matter whether you use the Ubuntu server edition. I use the server edition here to save resources; image download:
https://mirrors.huaweicloud.com/ubuntu-releases/22.04/ubuntu-22.04-live-server-amd64.iso
controller: 10.0.20.190, 4C/8G/100G, Ubuntu 22.04 Server
compute1: 10.0.20.191, 4C/8G/100G, Ubuntu 22.04 Server
The host system does not matter; I installed mine on PVE (Proxmox VE). Remember to set the VM's CPU type to host so that virtualization is available; on VMware, likewise enable virtualization. During the Ubuntu installation just click through the steps; where it downloads extra packages, also set the Huawei Cloud mirror:
https://mirrors.huaweicloud.com/ubuntu
Network configuration
After the systems are installed, give both controller and compute two virtual NICs, and attach the network device provider to both VMs.
Configure the controller's two interfaces:
Interface 1: static IP 10.0.20.190, netmask /24, gateway 10.0.20.1
Interface 2: static IP 203.0.113.190, netmask /24, gateway 203.0.113.1
Configure compute1's two interfaces:
Interface 1: static IP 10.0.20.191, netmask /24, gateway 10.0.20.1
Interface 2: static IP 203.0.113.191, netmask /24, gateway 203.0.113.1
```
test@controller:~$ ls /etc/netplan/
00-installer-config.yaml  01-installer-config.yaml
test@controller:~$ sudo cat /etc/netplan/00-installer-config.yaml
[sudo] password for deepexi:
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens18:
      addresses:
        - 10.0.20.190/24
      gateway4: 10.0.20.1
      nameservers:
        addresses:
          - 10.0.20.254
        search: []
  version: 2
test@controller:~$ sudo cat /etc/netplan/01-installer-config.yaml
network:
  ethernets:
    ens19:
      addresses:
        - 203.0.113.190/24
      gateway4: 203.0.113.1
  version: 2
# compute1 is configured the same way, so it is not shown here
```
Hostname resolution
Update the hosts file on both machines.
```
vim /etc/hosts
-----------------
# controller
10.0.20.190 controller
# compute1
10.0.20.191 compute1
```
Testing network connectivity
```
# From controller, ping the Internet
ping -c 4 www.baidu.com
# From controller, ping compute1
ping -c 4 compute1
# From compute1, ping the Internet
ping -c 4 www.baidu.com
# From compute1, ping controller
ping -c 4 controller
```
NTP time synchronization
Run the following on both controller and compute1 to synchronize time from Alibaba Cloud's NTP server.
```
sudo apt -y install chrony
# Back up the NTP service's original configuration file
sudo mv /etc/chrony/chrony.conf /etc/chrony/chrony.conf.bak
# Write a fresh configuration file containing only two lines
sudo vim /etc/chrony/chrony.conf

server ntp.aliyun.com iburst
allow 10.0.20.0/24

# Save and quit, then restart the system's NTP service
sudo service chrony restart
```
Check that the NTP service has connected to the correct server. Run the following on both nodes:
```
chronyc sources
```
If you also have block storage or object storage nodes, configure them the same way as compute1, with all of them pulling time from the controller.
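For such an additional node, the chrony configuration is minimal; a sketch, assuming the node resolves the hostname controller through the hosts file set up above:

```
# /etc/chrony/chrony.conf on an additional storage/compute node
server controller iburst
```

Then restart chrony (`sudo service chrony restart`) and confirm with `chronyc sources` that the controller appears as the selected time source.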
Installing the OpenStack packages on all nodes
OpenStack publishes a new release every six months, with release names running from A to Z; as of this writing the newest is Zed (still in development).
Ubuntu publishes an LTS release every two years. The installable OpenStack releases for each LTS are:
OpenStack for Ubuntu 20.04 LTS: Yoga, Xena, Wallaby, Victoria, Ussuri
OpenStack for Ubuntu 18.04 LTS: Ussuri, Train, Stein, Rocky
OpenStack for Ubuntu 16.04 LTS: Queens, Pike, Mitaka
Our VMs run Ubuntu 22.04, so we install the newest release, Yoga.
Run the commands below on both controller and compute1 (every OpenStack node must have the OpenStack packages installed!).
This is the longest-running installation step; unless your machine is very fast, kick it off and go do something else for a while.
The official documentation puts it this way:
Note: The archive enablement described here needs to be done on all nodes that run OpenStack services.
```
# Add the official Yoga apt repository
sudo add-apt-repository cloud-archive:yoga
# Install the nova compute component
sudo apt -y install nova-compute
# Install the client
sudo apt -y install python3-openstackclient
```
Installing the SQL database on the controller node
The official install guide uses MariaDB. This step runs only on the controller.
```
sudo apt -y install mariadb-server python3-pymysql
```
After installation, add a configuration file for OpenStack in MariaDB.
```
sudo vim /etc/mysql/mariadb.conf.d/99-openstack.cnf

[mysqld]
bind-address = 10.0.20.190
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
```
Restart MariaDB and set the database root password.
```
# Restart the database
sudo service mysql restart
# Run the command below to set the root password; it then walks through some
# initialization prompts, and answering Y to each is fine
mysql_secure_installation
# I set the MariaDB root password to 123456 and disabled remote root login
# (one of the Y prompts covers this)
```
Installing the message queue on the controller node
OpenStack supports three message queue services:
OpenStack supports several message queue services including RabbitMQ, Qpid, and ZeroMQ.
RabbitMQ is recommended.
Run the following on the controller:
```
# Install RabbitMQ
sudo apt -y install rabbitmq-server
# Add an openstack user with a password to RabbitMQ (I set the password to 123456)
rabbitmqctl add_user openstack 123456
# Grant the openstack user configure + read + write permissions
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```
Installing Memcached on the controller node
Run the following on the controller:
```
# Install memcached
sudo apt -y install memcached python3-memcache
# Put this host's IP into the memcached config so other nodes can reach the service
sudo vim /etc/memcached.conf
----------------------------
# Change the existing "-l 127.0.0.1" line to:
-l 10.0.20.190
# Restart the service
sudo service memcached restart
```
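As a non-interactive alternative to editing the file by hand, the listen address can be rewritten with sed. This is a convenience sketch, not from the official guide; the file path and the 10.0.20.190 address follow this article, so adjust both for your own setup.

```shell
# set_memcached_listen CONF_FILE IP: rewrite memcached's "-l 127.0.0.1" line
# so the daemon listens on the given address instead of loopback
set_memcached_listen() {
    sed -i "s/^-l 127\.0\.0\.1/-l $2/" "$1"
}
# On the controller (as root):
#   set_memcached_listen /etc/memcached.conf 10.0.20.190
#   service memcached restart
```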
Installing etcd on the controller node
Run the following on the controller node.
```
# Install etcd
sudo apt -y install etcd
# Configure etcd with the local IP
vim /etc/default/etcd

ETCD_NAME="controller"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER="controller=http://10.0.20.190:2380"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.20.190:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.20.190:2379"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.20.190:2379"

# Restart the service and enable it at boot
sudo systemctl restart etcd
sudo systemctl enable etcd
```
At this point the base environment is ready.
Minimal set of services for Yoga
A usable OpenStack needs at least the following services:
• Identity service: keystone installation for Yoga (Keystone, identity)
• Image service: glance installation for Yoga (Glance, images)
• Placement service: placement installation for Yoga (Placement, resource-tracking API)
• Compute service: nova installation for Yoga (Nova, compute)
• Networking service: neutron installation for Yoga (Neutron, networking)
Other recommended services:
• Dashboard: horizon installation for Yoga (Horizon, the web dashboard)
• Block Storage service: cinder installation for Yoga (Cinder, block storage)
So we will install these seven services in order.
Installing Keystone
Reference (official documentation): https://docs.openstack.org/keystone/yoga/install/
Keystone is itself a web application, and a web application needs a database.
So run the following on the controller to create one:
```
mysql -u root -p
Enter Password:   # enter 123456, set during the MariaDB installation

# Create the keystone database
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.001 sec)

# Create a keystone user (password also "keystone") dedicated to this database
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
Query OK, 0 rows affected (0.001 sec)

# Quit mysql
\q
Bye
```
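The same three SQL statements recur for every service database in this guide (keystone, glance, placement, nova, neutron), always with user = password = database name. A convenience sketch, not from the official docs: a helper that prints the statements so they can be piped into mysql instead of typed interactively.

```shell
# service_db_sql NAME: print the CREATE DATABASE + GRANT statements for one
# OpenStack service, following this guide's user = password = db convention
service_db_sql() {
    cat <<EOF
CREATE DATABASE $1;
GRANT ALL PRIVILEGES ON $1.* TO '$1'@'localhost' IDENTIFIED BY '$1';
GRANT ALL PRIVILEGES ON $1.* TO '$1'@'%' IDENTIFIED BY '$1';
EOF
}
# e.g. service_db_sql glance | mysql -u root -p
```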
Install keystone:
```
sudo apt -y install keystone
```
Configure keystone, changing the following two settings:
```
vim /etc/keystone/keystone.conf
-----------------------------------
[database]
# ...
connection = mysql+pymysql://keystone:keystone@controller/keystone

[token]
# ...
provider = fernet
```
Populate the keystone database. Run this as root; the later db-sync steps all switch users the same way.
```
su -s /bin/sh -c "keystone-manage db_sync" keystone
```
Initialize the Fernet key repositories:
```
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
```
Bootstrap the Keystone API:
```
# "admin" here becomes keystone's initial password; you can pick something else
keystone-manage bootstrap --bootstrap-password admin \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```
Keystone's three endpoints are now up, served by Apache as the web server.
Apache still needs configuring:
```
sudo vim /etc/apache2/apache2.conf

ServerName controller

# Restart Apache after the change
sudo service apache2 restart
```
Finally, wrap up by setting the following environment variables:
```
export OS_USERNAME=admin
export OS_PASSWORD=admin   # the bootstrap-password from the API bootstrap step
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
```
Create a domain, project, user, and role:
```
openstack domain create --description "An Example Domain" example
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" myproject
openstack user create --domain default --password-prompt myuser   # set the password to myuser for easy recall
openstack role create myrole
openstack role add --project myproject --user myuser myrole
```
Verify that keystone works:
```
unset OS_AUTH_URL OS_PASSWORD
# Request a token as the admin user
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
# When prompted for the password, enter the "admin" set earlier
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2026-02-12T20:14:07.056119Z                                     |
| id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
|            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
|            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+
# Request a token as the myuser user
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name myproject --os-username myuser token issue
# The password is myuser
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2026-02-12T20:15:39.014479Z                                     |
| id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
|            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
|            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
| project_id | ed0b60bf607743088218b0a533d5943f                                |
| user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
+------------+-----------------------------------------------------------------+
```
Write two credential files on the controller:
```
mkdir ~/openrc
vim ~/openrc/admin-openrc
------------------------------------
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

vim ~/openrc/demo-openrc
------------------------------------
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=myuser
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
```
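The two rc files differ only in the project, user, and password; a small generator (a sketch following this guide's naming, not part of the official procedure) avoids copy-paste drift:

```shell
# write_openrc FILE PROJECT USER PASSWORD: emit an OpenStack credentials file
# in the same shape as the two files above
write_openrc() {
    cat > "$1" <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=$2
export OS_USERNAME=$3
export OS_PASSWORD=$4
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}
# write_openrc ~/openrc/admin-openrc admin admin admin
# write_openrc ~/openrc/demo-openrc myproject myuser myuser
```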
Try sourcing admin-openrc:
```
. ~/openrc/admin-openrc
openstack token issue
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2022-04-24T16:48:29+0000                                        |
| id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
|            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
|            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+
```
That completes the Keystone installation.
Installing Glance
Official installation documentation for the Yoga glance component:
https://docs.openstack.org/glance/yoga/install/install-ubuntu.html
First, create a database for Glance:
```
mysql -u root -p
Enter password: 123456

MariaDB [(none)]> CREATE DATABASE glance;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
Query OK, 0 rows affected (0.001 sec)

\q
Bye
```
Load the admin credentials (this user was created during the keystone install, so that section cannot be skipped):
```
. ~/openrc/admin-openrc
```
Create the glance user and service:
```
openstack user create --domain default --password-prompt glance   # when prompted, set the password to glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
```
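The public/internal/admin endpoint triple at a single URL recurs for every service in this guide. A small loop keeps the three calls consistent; this is a sketch, assuming the openstack CLI and admin credentials are loaded as above:

```shell
# create_endpoints SERVICE_TYPE URL: register the three standard endpoints
# (public, internal, admin) for one service at the same URL
create_endpoints() {
    for iface in public internal admin; do
        openstack endpoint create --region RegionOne "$1" "$iface" "$2"
    done
}
# e.g. create_endpoints image http://controller:9292
```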
Download, install, and configure Glance:
```
sudo apt -y install glance
sudo vim /etc/glance/glance-api.conf

[DEFAULT]
use_keystone_quotas = True

[database]
# ... delete or comment out everything already present in this section!!!
# (I commented out one "backend" line)
connection = mysql+pymysql://glance:glance@controller/glance

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
```
Sync the configuration into the database (as root):
```
su -s /bin/sh -c "glance-manage db_sync" glance
```
Restart the glance service:
```
sudo service glance-api restart
```
Verify the installation:
```
. ~/openrc/admin-openrc
# Download a cirros image (about 15 MB) for testing; it is reused later
sudo apt -y install wget
wget http://github.com/cirros-dev/cirros/releases/download/0.5.2/cirros-0.5.2-x86_64-disk.img -O ~/cirros-0.5.2-x86_64-disk.img
# If the download is slow, fetch it with a download manager from
# http://download.cirros-cloud.net/ and scp it into the VM's home directory
glance image-create --name "cirros" --file ~/cirros-0.5.2-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility=public
# List active images
glance image-list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| 76d504e7-8b0b-4fc3-846c-6a14b7f86877 | cirros |
+--------------------------------------+--------+
```
Glance is now installed.
Installing Placement
Reference: https://docs.openstack.org/placement/yoga/install/
Run the following on the controller node. Placement is a WSGI-based HTTP API service that tracks resource inventory.
Create the database:
```
mysql -u root -p
Enter password: 123456

MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
Query OK, 0 rows affected (0.002 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
Query OK, 0 rows affected (0.001 sec)

\q
Bye
```
Create the user and endpoints:
```
. ~/openrc/admin-openrc
openstack user create --domain default --password-prompt placement   # set the password to placement
openstack role add --project service --user placement admin          # give the placement user the admin role
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
```
Download and configure placement:
```
sudo apt -y install placement-api
sudo vim /etc/placement/placement.conf

[placement_database]
# ...
connection = mysql+pymysql://placement:placement@controller/placement

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement
```
Sync the configuration into the database (as root, then switch back afterwards):
```
su -s /bin/sh -c "placement-manage db sync" placement
```
Restart Apache:
```
sudo service apache2 restart
```
Verify that placement is installed correctly:
```
. ~/openrc/admin-openrc
placement-status upgrade check
+-------------------------------------------+
| Upgrade Check Results                     |
+-------------------------------------------+
| Check: Missing Root Provider IDs          |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Incomplete Consumers               |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+
| Check: Policy File JSON to YAML Migration |
| Result: Success                           |
| Details: None                             |
+-------------------------------------------+

# Exercise the placement API
sudo apt -y install python3-pip                                         # install pip3
pip3 install --upgrade pip -i https://mirrors.aliyun.com/pypi/simple/   # upgrade pip3
pip3 install osc-placement -i https://mirrors.aliyun.com/pypi/simple/
openstack --os-placement-api-version 1.2 resource class list --sort-column name
+----------------------------------------+
| name                                   |
+----------------------------------------+
| DISK_GB                                |
| FPGA                                   |
| IPV4_ADDRESS                           |
| MEMORY_MB                              |
......
openstack --os-placement-api-version 1.6 trait list --sort-column name
+---------------------------------------+
| name                                  |
+---------------------------------------+
| COMPUTE_ACCELERATORS                  |
| COMPUTE_ARCH_AARCH64                  |
| COMPUTE_ARCH_MIPSEL                   |
| COMPUTE_ARCH_PPC64LE                  |
......
```
Placement is now installed.
Installing Nova
Reference (official documentation): https://docs.openstack.org/nova/yoga/install/controller-install-ubuntu.html
The nova component must be installed on both controller and compute1.
First, install nova on the controller.
Configure the databases:
```
mysql -u root -p
Enter Password: 123456

MariaDB [(none)]> CREATE DATABASE nova_api;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> CREATE DATABASE nova;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> CREATE DATABASE nova_cell0;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
\q
Bye
```
Create the project, user, and role objects:
```
. ~/openrc/admin-openrc
openstack user create --domain default --password-prompt nova   # set the nova user's password to nova as well
openstack role add --project service --user nova admin          # give the nova user the admin role
openstack service create --name nova --description "OpenStack Compute" compute   # create the service entity
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1    # the API endpoints
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
```
Download, install, and configure nova:
```
sudo apt -y install nova-api nova-conductor nova-novncproxy nova-scheduler
sudo vim /etc/nova/nova.conf

[DEFAULT]
# ... no need to comment out the existing settings here
my_ip = 10.0.20.190
transport_url = rabbit://openstack:123456@controller:5672/

[api_database]
# ... comment out everything already present in this section
connection = mysql+pymysql://nova:nova@controller/nova_api

[database]
# ... comment out everything already present in this section
connection = mysql+pymysql://nova:nova@controller/nova

[api]
# ... comment out everything already present in this section
auth_strategy = keystone

[keystone_authtoken]
# ... comment out everything already present in this section
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[vnc]
# ...
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
```
Sync the configuration into the databases (as root):
```
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
```
Verify the result (then switch users back afterwards):
```
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL                            | Database Connection                             | Disabled |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                                   | mysql+pymysql://nova:****@controller/nova_cell0 | False    |
| cell1 | dbc442b7-fc9c-4223-983a-3dc4fcd0b5e4 | rabbit://openstack:****@controller:5672/ | mysql+pymysql://nova:****@controller/nova       | False    |
+-------+--------------------------------------+------------------------------------------+-------------------------------------------------+----------+
```
Finish by restarting the services:
```
sudo service nova-api restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart
```
The controller's nova compute service setup is now complete.
Next, install the nova service on the compute1 node. This matters: compute nodes like compute1 are where the cloud servers actually run, so nova is essential there.
Run the following commands on compute1!!
Download, install, and configure nova:
```
sudo apt -y install nova-compute
sudo vim /etc/nova/nova.conf

[DEFAULT]
# ...
transport_url = rabbit://openstack:123456@controller
my_ip = 10.0.20.191

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = nova

[vnc]
# ...
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement
```
Check whether your compute node compute1 supports CPU hardware virtualization. Our nodes are KVM virtual machines, so this check is necessary.
```
egrep -c '(vmx|svm)' /proc/cpuinfo
```
If the command returns 1 or more, the CPU supports virtualization and no extra configuration is needed.
If it returns 0, the VM does not support hardware virtualization. There are two fixes:
- Shut the VM down, enable nested virtualization for it on the host, and boot it again.
- Configure compute1 to use QEMU instead of KVM:
```
sudo vim /etc/nova/nova-compute.conf
# Change virt_type=kvm in the file to virt_type=qemu

[libvirt]
# ...
virt_type = qemu
```
Note: apply this change only when the command returned 0. If it returned more than 0 (virtualization supported), skip it.
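The decision above can be wrapped in a small check, using the same egrep as the guide; the cpuinfo path is overridable only so the logic is easy to test:

```shell
# supports_hw_virt [CPUINFO_FILE]: succeed if the CPU flags include vmx (Intel)
# or svm (AMD); defaults to /proc/cpuinfo
supports_hw_virt() {
    [ "$(egrep -c '(vmx|svm)' "${1:-/proc/cpuinfo}")" -gt 0 ]
}

if supports_hw_virt; then
    echo "hardware virtualization available; keep virt_type=kvm"
else
    echo "no vmx/svm flags; set virt_type=qemu in /etc/nova/nova-compute.conf"
fi
```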
Restart the nova service:
```
sudo service nova-compute restart
# If the restart fails, check /var/log/nova/nova-compute.log.
# Most likely compute1 cannot reach the controller's message queue service
```
Add compute1 to the cell database.
Run the following steps on the controller node!!!
```
. ~/openrc/admin-openrc
openstack compute service list --service nova-compute
+--------------------------------------+--------------+------------+------+---------+-------+----------------------------+
| ID                                   | Binary       | Host       | Zone | Status  | State | Updated At                 |
+--------------------------------------+--------------+------------+------+---------+-------+----------------------------+
| 0d0f25ef-89e2-4acd-b578-7ad0a51e266e | nova-compute | controller | nova | enabled | up    | 2022-06-26T10:15:42.000000 |
| b967a1ab-3328-457c-8ce1-f6eb8ff2b7dc | nova-compute | compute1   | nova | enabled | up    | 2022-06-26T10:15:34.000000 |
+--------------------------------------+--------------+------------+------+---------+-------+----------------------------+
# Register the newly discovered compute node in nova's cell database (run as root)
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
# Every time a compute node is added (compute2, compute3, ...), this
# nova-manage cell_v2 discover_hosts command must be run on the controller!
# Alternatively, set an interval so the controller discovers hosts periodically:
sudo vim /etc/nova/nova.conf
-------------------------------
[scheduler]
discover_hosts_in_cells_interval = 300
```
Both machines now run the nova service, and the compute node is registered with the control node.
Installing Neutron
Reference (official documentation): https://docs.openstack.org/neutron/yoga/install/
Networking is the most complex and difficult part of the setup; Neutron is OpenStack's networking component.
The official documentation's example network architecture:
The example architectures assume use of the following networks:
Management on 10.0.20.0/24 with gateway 10.0.20.1
This network requires a gateway to provide Internet access to all nodes for administrative purposes such as package installation, security updates, Domain Name System (DNS), and Network Time Protocol (NTP).
Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to instances in your OpenStack environment.
Now configure the controller node's networking.
The NICs and hostname resolution were done earlier; if you skipped them, go back to the beginning of this article.
Create the database:
```
mysql -u root -p
Enter Password: 123456

MariaDB [(none)]> CREATE DATABASE neutron;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
\q
Bye
```
Create the user and role assignments:
```
. ~/openrc/admin-openrc
openstack user create --domain default --password-prompt neutron   # set the password to neutron, easy to remember
openstack role add --project service --user neutron admin          # give the neutron user the admin role
openstack service create --name neutron --description "OpenStack Networking" network   # create the service entity
openstack endpoint create --region RegionOne network public http://controller:9696     # the usual three endpoints
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
# If you hit "Multiple service matches found for 'network', use an ID to be more specific.":
#   openstack service list
#   openstack service delete <ID>   # delete the redundant service
```
The official documentation offers two network architectures: provider networks (option 1) and self-service networks (option 2). Option 2 includes everything option 1 provides, plus two extra components. This guide therefore deploys option 2, the self-service architecture.
From the original text:
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service networks. The demo or other unprivileged user can manage self-service networks including routers that provide connectivity between self-service and provider networks. Additionally, floating IP addresses provide connectivity to instances using self-service networks from external networks such as the Internet.
Download, install, and configure neutron:
```
sudo apt -y install neutron-server neutron-plugin-ml2 neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent
sudo vim /etc/neutron/neutron.conf

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
# ... comment out everything already present in this section
connection = mysql+pymysql://neutron:neutron@controller/neutron

[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
```
Configure the ML2 plug-in:
```
sudo vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000   # VNI range

[securitygroup]
# ...
enable_ipset = true
```
Configure the Linux bridge agent:
```
sudo vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
-----------------------------------------------------
[linux_bridge]
physical_interface_mappings = provider:ens19
# ens19 is my interface on the 203.0.113.0/24 network; fill in your own
# interface name here, do not copy mine verbatim.
# The provider network is the one that can reach the outside world; later,
# when creating a flat network, --provider-physical-network must name provider.

[vxlan]
enable_vxlan = true
local_ip = 10.0.20.190   # management IP; use the node's actual VXLAN endpoint IP
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
Then use sysctl to verify that the Ubuntu kernel supports Linux bridging (the equivalent of VMware's bridged mode).
A value of 1 means bridging is supported. It almost always is, unless your kernel is very old or the bridge module is not loaded; if not, look up a fix for your system.
```
sudo sysctl net.bridge.bridge-nf-call-iptables
# net.bridge.bridge-nf-call-iptables = 1
sudo sysctl net.bridge.bridge-nf-call-ip6tables
# net.bridge.bridge-nf-call-ip6tables = 1
```
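If either sysctl prints 0 or an error, the kernel's br_netfilter module is likely not loaded. A possible fix, assuming a stock Ubuntu kernel where the module is named br_netfilter; the modules-load directory is overridable only so the logic is easy to test:

```shell
# enable_br_netfilter [MODULES_LOAD_DIR]: load the br_netfilter module now and
# persist it across reboots; the directory defaults to /etc/modules-load.d
enable_br_netfilter() {
    modprobe br_netfilter 2>/dev/null || true                           # load it now (needs root)
    echo br_netfilter > "${1:-/etc/modules-load.d}/br_netfilter.conf"   # persist across reboots
}
# Run as root, then re-check: sysctl net.bridge.bridge-nf-call-iptables
```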
Configure the layer-3 (routing) agent:
```
sudo vim /etc/neutron/l3_agent.ini

[DEFAULT]
# ...
interface_driver = linuxbridge
```
Configure the DHCP agent, backed by dnsmasq:
```
sudo vim /etc/neutron/dhcp_agent.ini

[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
```
Configure the metadata agent:
```
sudo vim /etc/neutron/metadata_agent.ini
-------------------------------------
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = metadata   # a shared secret, here "metadata"; used again in the next step
```
Configure nova again, adding the shared secret from above:
```
sudo vim /etc/nova/nova.conf

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata   # the shared secret from the previous step
```
Sync the configuration into the database (as root, then switch back):
```
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
# The output looks roughly like this:
INFO  [alembic.runtime.migration] Running upgrade 1bb3393de75d -> c181bb1d89e4
INFO  [alembic.runtime.migration] Running upgrade c181bb1d89e4 -> ba859d649675
INFO  [alembic.runtime.migration] Running upgrade ba859d649675 -> e981acd076d3
INFO  [alembic.runtime.migration] Running upgrade e981acd076d3 -> 76df7844a8c6, add Local IP tables
INFO  [alembic.runtime.migration] Running upgrade 76df7844a8c6 -> 1ffef8d6f371, migrate RBAC registers from "target_tenant" to "target_project"
INFO  [alembic.runtime.migration] Running upgrade 1ffef8d6f371 -> 8160f7a9cebb, drop portbindingports table
INFO  [alembic.runtime.migration] Running upgrade 8160f7a9cebb -> cd9ef14ccf87
INFO  [alembic.runtime.migration] Running upgrade cd9ef14ccf87 -> 34cf8b009713
INFO  [alembic.runtime.migration] Running upgrade 7d9d8eeec6ad -> a8b517cff8ab
INFO  [alembic.runtime.migration] Running upgrade a8b517cff8ab -> 3b935b28e7a0
INFO  [alembic.runtime.migration] Running upgrade 3b935b28e7a0 -> b12a3ef66e62
INFO  [alembic.runtime.migration] Running upgrade b12a3ef66e62 -> 97c25b0d2353
INFO  [alembic.runtime.migration] Running upgrade 97c25b0d2353 -> 2e0d7a8a1586
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
OK
```
Restart nova and neutron:
```
sudo service nova-api restart
# Restart the neutron components
sudo service neutron-server restart
sudo service neutron-linuxbridge-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
# Restart the L3 agent
sudo service neutron-l3-agent restart
```
That completes the controller's neutron configuration.
Next, install and configure the neutron component on the compute node.
Download and install neutron:
```
sudo apt -y install neutron-linuxbridge-agent
```
Configure neutron:
```
sudo vim /etc/neutron/neutron.conf

[DEFAULT]
# ... do not comment out core_plugin = ml2; it is needed
transport_url = rabbit://openstack:123456@controller
auth_strategy = keystone

[keystone_authtoken]
# ... comment out everything already present in this section
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
```
Configure the Linux bridge agent:
```
sudo vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:enp6s0

[vxlan]
enable_vxlan = true
local_ip = 10.0.20.191
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
```
Then, as before, use sysctl to verify that the kernel supports Linux bridging.
A value of 1 means bridging is supported; if not, apply the same fix as on the controller.
```
sudo sysctl net.bridge.bridge-nf-call-iptables
# net.bridge.bridge-nf-call-iptables = 1
sudo sysctl net.bridge.bridge-nf-call-ip6tables
# net.bridge.bridge-nf-call-ip6tables = 1
```
Configure the nova component on the compute node:
```
sudo vim /etc/nova/nova.conf

[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
```
Restart nova and neutron:
```
sudo service nova-compute restart
sudo service neutron-linuxbridge-agent restart
```
Verify that neutron is installed correctly on controller and compute1.
Do this by listing the neutron network agents: the controller should show four, and compute1 should contribute one.
# run on the controller
openstack network agent list
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
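For a scripted sanity check you can count agents per host. The helper below is an illustrative sketch of mine that parses the ASCII table format shown above; in real automation, `openstack network agent list -f json` plus a JSON parser is more robust.

```python
from collections import Counter

# Sample output in the same ASCII-table shape as `openstack network agent list`.
SAMPLE = """\
+----+--------------------+------------+-------+-------+
| ID | Agent Type         | Host       | Alive | State |
+----+--------------------+------------+-------+-------+
| a1 | Metadata agent     | controller | True  | UP    |
| a2 | Linux bridge agent | controller | True  | UP    |
| a3 | Linux bridge agent | compute1   | True  | UP    |
| a4 | L3 agent           | controller | True  | UP    |
| a5 | DHCP agent         | controller | True  | UP    |
+----+--------------------+------------+-------+-------+
"""

def agents_per_host(table: str) -> Counter:
    """Count data rows per Host column in an openstack-style ASCII table."""
    rows = [line for line in table.splitlines() if line.startswith("|")]
    header = [c.strip() for c in rows[0].strip("|").split("|")]
    host_idx = header.index("Host")
    counts = Counter()
    for row in rows[1:]:  # border lines start with '+' and are filtered out
        cells = [c.strip() for c in row.strip("|").split("|")]
        counts[cells[host_idx]] += 1
    return counts

counts = agents_per_host(SAMPLE)
print(counts)  # a healthy install: 4 agents on controller, 1 on compute1
```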
Horizon installation
Official documentation: https://docs.openstack.org/horizon/yoga/
Horizon is the web dashboard that lets users create accounts, launch virtual machines, plan networks, and manage every other cloud resource.
The yoga release does not yet ship its own Horizon guide; it still follows the Ussuri-era Horizon instructions.
Prerequisites for the U-release Horizon:
- Python 3.6 or 3.7. Ubuntu 22.04 ships with Python 3.10.4.
- Django 3.2. Horizon is a website written with Python's Django framework.
- A working keystone backend.
- How does Horizon reach the other services? Horizon itself connects only to keystone. Since every service registers with keystone, Horizon reads the keystone catalog and reaches the others automatically: cinder, glance, neutron, nova, swift. Plugins can be installed to connect Horizon to less common OpenStack components.
In summary, Horizon requires Python >= 3.6 + Django 3.2 + keystone.
The simplest way to install Horizon is from packages, so we use apt.
The following commands can run on either controller or compute1: any node that can reach the controller can host Horizon. I still recommend installing it on the controller, so compute nodes can be added and removed freely later.
sudo apt -y install openstack-dashboard
Configure Horizon:
sudo vim /etc/openstack-dashboard/local_settings.py

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']  # '*' lets any external host reach Horizon; this is unsafe, so list specific hosts in production.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000" % OPENSTACK_HOST  # plain Python string formatting
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"  # newly registered users default to the ordinary user role
TIME_ZONE = "Asia/Shanghai"

sudo vim /etc/apache2/conf-available/openstack-dashboard.conf

WSGIApplicationGroup %{GLOBAL}
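A note on the settings above: local_settings.py is an ordinary Python module, so OPENSTACK_KEYSTONE_URL is built with plain %-string formatting. A standalone illustration (names copied from the config, evaluated outside Horizon):

```python
# Minimal sketch of how Horizon's local_settings.py values evaluate.
# This is plain Python mirroring the settings above, not Horizon code.
OPENSTACK_HOST = "controller"
OPENSTACK_KEYSTONE_URL = "http://%s:5000" % OPENSTACK_HOST
OPENSTACK_API_VERSIONS = {"identity": 3, "image": 2, "volume": 3}

print(OPENSTACK_KEYSTONE_URL)              # http://controller:5000
print(OPENSTACK_API_VERSIONS["identity"])  # 3
```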
Reload apache to apply the changes:
sudo systemctl reload apache2.service
Verify the installation:
The exciting moment has finally arrived after all this setup.
Open Firefox on the controller and go to http://controller/horizon/ to check that the OpenStack login page appears.
Then try logging in as admin with password admin. These credentials are the OS_USERNAME and OS_PASSWORD values from the openrc/admin-openrc file we created earlier.
If you cannot log in, adjust those two variables, source the file again, and retry.
If it works, congratulations: OpenStack is installed. If you only want to try OpenStack out casually, you can skip everything below!
Cinder installation (optional)
The cinder component provides block storage for OpenStack. Cloud server disks, snapshots, and the like live in block storage.
Installing cinder is not mandatory, but it matters a great deal. If you are a cloud provider, every customer server maps to tens or hundreds of GB of storage; by storing and replicating user images and volumes, a crashed server or a failed compute node can be replaced by a fresh cloud server within about 30 seconds.
Reference: https://docs.openstack.org/cinder/yoga/
The Block Storage API and scheduler services typically run on the controller nodes. Depending upon the drivers used, the volume service can run on controller nodes, compute nodes, or standalone storage nodes.
In other words: the Block Storage API and scheduler services run on the controller node, while volumes can live on any node; the controller only needs the matching driver to reach them.
Volume backends come in many forms: NAS/SAN, NFS, iSCSI, Ceph, and so on. The big IT vendors (Tencent Cloud, Alibaba Cloud, Huawei Cloud, AWS, Google) also offer block storage services, and OpenStack can connect to those as well.
Install cinder on the controller
mysql -u root -p
Enter Password: 123456

MariaDB [(none)]> CREATE DATABASE cinder;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder';
\q
Bye
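Every OpenStack service so far has followed the same database ritual: create a database, then grant privileges to user@localhost and user@'%'. The SQL can therefore be templated. A hypothetical helper of mine, not an official tool:

```python
def service_db_sql(service: str, password: str) -> list[str]:
    """Build the standard per-service OpenStack MariaDB statements:
    one CREATE DATABASE plus GRANTs for localhost and any-host access."""
    stmts = [f"CREATE DATABASE {service};"]
    for host in ("localhost", "%"):
        stmts.append(
            f"GRANT ALL PRIVILEGES ON {service}.* TO '{service}'@'{host}' "
            f"IDENTIFIED BY '{password}';"
        )
    return stmts

for stmt in service_db_sql("cinder", "cinder"):
    print(stmt)
```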
- Source the admin credentials to gain access to admin-only CLI commands:

. ~/openrc/admin-openrc
- To create the service credentials, complete these steps:
- Create a cinder user (password: cinder):

$ openstack user create --domain default --password-prompt cinder
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 9d7e33de3e1a498390353819bc7d245d |
| name                | cinder                           |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
- Add the admin role to the cinder user:

$ openstack role add --project service --user cinder admin
- Create the cinderv3 service entity:

$ openstack service create --name cinderv3 \
  --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | ab3bbbef780845a1a283490d281e7fda |
| name        | cinderv3                         |
| type        | volumev3                         |
+-------------+----------------------------------+
- Create the Block Storage service API endpoints:

$ openstack endpoint create --region RegionOne \
  volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 03fa2c90153546c295bf30ca86b1344b         |
| interface    | public                                   |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | ab3bbbef780845a1a283490d281e7fda         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne \
  volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 94f684395d1b41068c70e4ecb11364b2         |
| interface    | internal                                 |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | ab3bbbef780845a1a283490d281e7fda         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+

$ openstack endpoint create --region RegionOne \
  volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+------------------------------------------+
| Field        | Value                                    |
+--------------+------------------------------------------+
| enabled      | True                                     |
| id           | 4511c28a0f9840c78bacb25f10f62c98         |
| interface    | admin                                    |
| region       | RegionOne                                |
| region_id    | RegionOne                                |
| service_id   | ab3bbbef780845a1a283490d281e7fda         |
| service_name | cinderv3                                 |
| service_type | volumev3                                 |
| url          | http://controller:8776/v3/%(project_id)s |
+--------------+------------------------------------------+
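The three endpoint commands differ only in the interface argument, so they can be generated from a single template. A small sketch of mine (not an official tool):

```python
def endpoint_create_cmds(service_type: str, url: str,
                         region: str = "RegionOne") -> list[str]:
    """Build the three `openstack endpoint create` commands that every
    OpenStack service needs: public, internal, and admin interfaces."""
    return [
        f"openstack endpoint create --region {region} {service_type} {iface} {url}"
        for iface in ("public", "internal", "admin")
    ]

# In Python no shell escaping of %(project_id)s is needed.
for cmd in endpoint_create_cmds("volumev3",
                                "http://controller:8776/v3/%(project_id)s"):
    print(cmd)
```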
Install and configure the components
- Install the packages:

sudo apt install cinder-api cinder-scheduler
- Edit the /etc/cinder/cinder.conf file (sudo vim /etc/cinder/cinder.conf) and complete the following:

  - In the [database] section, configure database access:

    [database]
    # ...
    connection = mysql+pymysql://cinder:cinder@controller/cinder
  - In the [DEFAULT] section, configure RabbitMQ message queue access:

    [DEFAULT]
    # ...
    transport_url = rabbit://openstack:123456@controller
  - In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

    [DEFAULT]
    # ...
    auth_strategy = keystone

    [keystone_authtoken]
    # ...
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = cinder
    password = cinder
  - In the [DEFAULT] section, set the my_ip option to the management interface IP address of the controller node:

    [DEFAULT]
    # ...
    my_ip = 10.0.20.190
  - In the [oslo_concurrency] section, configure the lock path:

    [oslo_concurrency]
    # ...
    lock_path = /var/lib/cinder/tmp
- Populate the Block Storage database (run as root):

# su -s /bin/sh -c "cinder-manage db sync" cinder
Configure Compute to use Block Storage
- Edit the /etc/nova/nova.conf file (sudo vim /etc/nova/nova.conf) and add the following:

[cinder]
os_region_name = RegionOne
Finalize the installation
- Restart the Compute API service:

sudo service nova-api restart
- Restart the Block Storage services:

sudo service cinder-scheduler restart
sudo service apache2 restart
- Install the supporting utility packages (the remaining steps run on the storage node, compute1 in our setup):

sudo apt install lvm2 thin-provisioning-tools
- Create the LVM physical volume /dev/sdb:

sudo pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
- Create the LVM volume group cinder-volumes:

sudo vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created

The Block Storage service creates logical volumes in this volume group.
- Only instances may access the block storage volumes, but the underlying operating system manages the devices associated with them. By default, the LVM volume scanning tool scans the /dev directory for block storage devices that contain volumes. If projects use LVM on their volumes, the scanning tool detects these volumes and tries to cache them, which can cause a variety of problems for both the host operating system and project volumes. You must reconfigure LVM to scan only the devices that contain the cinder-volumes volume group. Edit the /etc/lvm/lvm.conf file and complete the following:

filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
Install and configure the components (storage node)
- Install the packages:

sudo apt install cinder-volume tgt
- Edit the /etc/cinder/cinder.conf file and complete the following:

  - In the [database] section, configure database access:

    [database]
    # ...
    connection = mysql+pymysql://cinder:cinder@controller/cinder
  - In the [DEFAULT] section, configure RabbitMQ message queue access:

    [DEFAULT]
    # ...
    transport_url = rabbit://openstack:123456@controller
  - In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:

    [DEFAULT]
    # ...
    auth_strategy = keystone

    [keystone_authtoken]
    # ...
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = cinder
    password = cinder
  - In the [DEFAULT] section, configure the my_ip option:

    [DEFAULT]
    # ...
    my_ip = 10.0.20.191

    Use the IP address of the management network interface on your storage node; here that is compute1's address, 10.0.20.191 (the official example architecture uses 10.0.0.41 for its first storage node).
  - In the [lvm] section, configure the LVM backend with the LVM driver, the cinder-volumes volume group, the iSCSI protocol, and the appropriate iSCSI service:

    [lvm]
    # ...
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    target_protocol = iscsi
    target_helper = tgtadm
  - In the [DEFAULT] section, enable the LVM backend:

    [DEFAULT]
    # ...
    enabled_backends = lvm
  - In the [DEFAULT] section, configure the location of the Image service API:

    [DEFAULT]
    # ...
    glance_api_servers = http://controller:9292
  - In the [oslo_concurrency] section, configure the lock path:

    [oslo_concurrency]
    # ...
    lock_path = /var/lib/cinder/tmp
Prevent volume creation failures

Without the line below, creating a volume after cinder is installed fails with: cinder.exception.VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Create export for volume failed (resource could not be found).

sudo vim /etc/tgt/targets.conf
# append this line at the end -- it is required, otherwise volume creation fails
include /var/lib/cinder/volumes/*

sudo service tgt restart
sudo service cinder-volume restart
Finalize the installation
- Restart the Block Storage volume service, including its dependencies:

sudo service tgt restart
sudo service cinder-volume restart
Verify; after this you can happily create volumes (disks):
test@controller:~$ . ~/openrc/admin-openrc
test@controller:~$ openstack volume service list
+------------------+--------------+------+---------+-------+----------------------------+
| Binary           | Host         | Zone | Status  | State | Updated At                 |
+------------------+--------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller   | nova | enabled | up    | 2022-06-26T11:53:58.000000 |
| cinder-volume    | compute1@lvm | nova | enabled | up    | 2022-06-26T11:53:57.000000 |
| cinder-backup    | compute1     | nova | enabled | down  | 2022-06-26T02:33:01.000000 |
+------------------+--------------+------+---------+-------+----------------------------+
# cinder-backup being down is fine: backups need an object-storage endpoint, which we neither need nor installed.
Launching a cloud server with OpenStack yoga (optional)
Before launching anything, note that the OpenStack install guide describes two network architectures: the provider (external) network, option 1, and the self-service (private) network, option 2.
The self-service architecture includes all the features and components of the provider architecture, so it is the more complex of the two.
The guide describes the provider architecture as follows:
Before launching an instance, you must create the necessary virtual network infrastructure. For networking option 1, an instance uses a provider (external) network that connects to the physical network
infrastructure via layer-2 (bridging/switching). This network includes a DHCP server that provides IP
addresses to instances.
The admin or other privileged user must create this network because it connects directly to the physical
network infrastructure.
And it introduces the self-service architecture (option 2) like this:
If you chose networking option 2, you can also create a self-service (private) network that connects to the
physical network infrastructure via NAT. This network includes a DHCP server that provides IP addresses
to instances. An instance on this network can automatically access external networks such as the Internet.
However, access to an instance on this network from external networks such as the Internet requires a
floating IP address.
The two virtual NICs we set up at the very beginning were precisely for the self-service architecture.
Appendix 1: Firewalls and default ports
You can put a firewall in front of OpenStack to harden the cluster. Writing the rules requires knowing each component's port numbers; the table below lists the API ports that common OpenStack components use.
Table 1: Default ports that OpenStack components use
OpenStack service | Default ports |
---|---|
Application Catalog (murano) | 8082 |
Backup Service (Freezer) | 9090 |
Big Data Processing Framework (sahara) | 8386 |
Block Storage (cinder) | 8776 |
Clustering (senlin) | 8777 |
Compute (nova) endpoints | 8774 |
Compute ports for access to virtual machine consoles | 5900-5999 |
Compute VNC proxy for browsers (openstack-nova-novncproxy) | 6080 |
Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy) | 6081 |
Container Infrastructure Management (Magnum) | 9511 |
Container Service (Zun) | 9517 |
Data processing service (sahara) endpoint | 8386 |
Database service (Trove) | 8779 |
DNS service (Designate) | 9001 |
High Availability Service (Masakari) | 15868 |
Identity service (keystone) endpoint | 5000 |
Image service (glance) API | 9292 |
Key Manager service (Barbican) | 9311 |
Loadbalancer service (Octavia) | 9876 |
Networking (neutron) | 9696 |
NFV Orchestration service (tacker) | 9890 |
Object Storage (swift) | 6000, 6001, 6002 |
Orchestration (heat) endpoint | 8004 |
Orchestration AWS CloudFormation-compatible API (openstack-heat-api-cfn) | 8000 |
Orchestration AWS CloudWatch-compatible API (openstack-heat-api-cloudwatch) | 8003 |
Placement API (placement) | 8778 |
Proxy port for HTML5 console used by Compute service | 6082 |
Rating service (Cloudkitty) | 8889 |
Registration service (Adjutant) | 5050 |
Resource Reservation service (Blazar) | 1234 |
Root Cause Analysis service (Vitrage) | 8999 |
Shared File Systems service (Manila) | 8786 |
Telemetry alarming service (Aodh) | 8042 |
Telemetry event service (Panko) | 8977 |
Workflow service (Mistral) | 8989 |
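For firewall automation it helps to have (part of) this table in machine-readable form. The sketch below transcribes only the services installed in this guide and generates hypothetical ufw rules from them; adapt the dictionary to whatever components you actually run.

```python
# Default API ports for the services installed in this guide,
# transcribed from the table above.
DEFAULT_PORTS = {
    "keystone": 5000,
    "glance": 9292,
    "nova": 8774,
    "neutron": 9696,
    "cinder": 8776,
    "novnc": 6080,
}

def ufw_rules(ports=DEFAULT_PORTS) -> list[str]:
    """Generate one `ufw allow` command per service port."""
    return [f"ufw allow {port}/tcp  # {name}"
            for name, port in sorted(ports.items())]

for rule in ufw_rules():
    print(rule)
```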
That concludes this guide to installing OpenStack yoga on Ubuntu. We hope the article is helpful, and thank you for your continued support!