Release list

https://releases.openstack.org/

Installing OpenStack

Overview

OpenStack is an open-source cloud computing management platform, a combination of a series of open-source software projects. It was jointly developed and launched by NASA and Rackspace, and its code is released under the Apache license.
OpenStack provides scalable, elastic cloud computing services for private and public clouds. The project's goal is a cloud management platform that is simple to deploy, massively scalable, feature-rich, and standardized.
OpenStack covers networking, virtualization, operating systems, servers, and more. It is a platform under active development and is divided, by maturity and importance, into core projects, incubated projects, supporting projects, and related projects. Each project has its own committee and project technical lead, and no project's status is fixed: an incubated project can become a core project as it matures and grows in importance.
Core components
1. Compute (Nova): a set of controllers that manage the entire lifecycle of virtual machine instances for individual users or groups, providing virtual servers on demand. It is responsible for creating, booting, shutting down, suspending, pausing, resizing, migrating, rebooting, and destroying instances, and for configuring specs such as CPU and memory.
2. Object Storage (Swift): a system that provides object storage in massively scalable clusters through built-in redundancy and fault tolerance. It stores and retrieves files, provides image storage for Glance, and backs volume backups for Cinder.
3. Image Service (Glance): a lookup and retrieval system for virtual machine images that supports multiple image formats (AKI, AMI, ARI, ISO, QCOW2, Raw, VDI, VHD, VMDK), with functions for creating and uploading images, deleting them, and editing basic image metadata.
4. Identity Service (Keystone): provides authentication, service rules, and service tokens for the other OpenStack services, and manages Domains, Projects, Users, Groups, and Roles.
5. Network (Neutron): provides network virtualization for the cloud and network connectivity for the other OpenStack services. Users get an interface for defining Networks, Subnets, and Routers and for configuring DHCP, DNS, load balancing, and L3 services. Networks support GRE and VLAN, and the plugin architecture supports many mainstream network vendors and technologies, such as Open vSwitch.
6. Block Storage (Cinder): provides stable block storage for running instances. Its plugin-driver architecture simplifies creating and managing block devices, such as creating and deleting volumes and attaching and detaching volumes from instances.
7. Dashboard (Horizon): the web management portal for the various OpenStack services, simplifying operations such as launching instances, assigning IP addresses, and configuring access control.
8. Metering (Ceilometer): collects almost every event that occurs inside OpenStack, then provides the data to support billing, monitoring, and other services.
9. Orchestration (Heat): provides template-based orchestration, automating the deployment of the cloud infrastructure software environment (compute, storage, and network resources).
10. Database Service (Trove): provides scalable and reliable relational and non-relational database engines in an OpenStack environment.

Prerequisites

Prepare two CentOS 7 servers: configure IP addresses and hostnames, synchronize the system time, disable the firewall and SELinux, and map IP addresses to hostnames.

1. Set the hostnames of the two machines to controller and compute, respectively

#hostnamectl set-hostname <new-hostname>
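
For the two machines in this walkthrough:

#hostnamectl set-hostname controller
#hostnamectl set-hostname compute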

2. Edit /etc/hosts on both controller and compute

#vi /etc/hosts
172.30.154.44 controller
172.30.154.47 compute

3. Verify

Have the two nodes ping each other, and ping an external site such as www.baidu.com.
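
For example:

[root@controller ~]# ping -c 4 compute
[root@compute ~]# ping -c 4 controller
[root@controller ~]# ping -c 4 www.baidu.com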

4. Disable the firewall

systemctl stop firewalld.service           # stop firewalld
systemctl disable firewalld.service        # keep firewalld from starting at boot

5. Disable SELinux

#setenforce 0    # temporary, takes effect immediately
#vi /etc/selinux/config
#change SELINUX=enforcing to SELINUX=disabled
The permanent change takes effect after a reboot.
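
To confirm the current mode (it should report Permissive after setenforce 0, or Disabled after the reboot):

[root@controller ~]# getenforce
Permissive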
6. Set the number of files that can be open at the same time to 65535
[root@controller ~]# ulimit -n 65535
[root@controller ~]# ulimit -n
65535
[root@controller ~]# vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
7. Set up time synchronization

Configure NTP synchronization between the two nodes; a minimal chrony sketch follows.
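
A minimal sketch using chrony (the CentOS 7 default; the upstream NTP server name below is only an example, substitute any reachable one):

[root@controller ~]# yum install chrony -y
[root@controller ~]# vim /etc/chrony.conf
# any reachable upstream NTP server
server ntp.aliyun.com iburst
# let the compute node sync time from the controller
allow 172.30.154.0/24
[root@controller ~]# systemctl enable chronyd
[root@controller ~]# systemctl restart chronyd

[root@compute ~]# yum install chrony -y
[root@compute ~]# vim /etc/chrony.conf
# sync from the controller
server controller iburst
[root@compute ~]# systemctl enable chronyd
[root@compute ~]# systemctl restart chronyd
[root@compute ~]# chronyc sources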

8. Reboot the servers

reboot

Deploying the services

Install the EPEL repository

[root@controller ~]# yum install epel-release -y
[root@compute ~]# yum install epel-release -y

Install the OpenStack repository (Train release)

[root@controller ~]# yum install centos-release-openstack-train -y
[root@compute ~]# yum install centos-release-openstack-train -y

Install the OpenStack client and openstack-selinux

[root@controller ~]# yum install -y python2-openstackclient openstack-selinux
[root@compute ~]# yum install -y python2-openstackclient openstack-selinux

Install the MariaDB database and memcached

[root@controller ~]# yum install mariadb mariadb-server python2-PyMySQL memcached -y

Install the message queue service

[root@controller ~]# yum install rabbitmq-server -y

Install the Keystone service

[root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y

Install the Glance service

[root@controller ~]# yum install openstack-glance -y

Install the Placement service

[root@controller ~]# yum install openstack-placement-api -y

Install the Nova services on controller

[root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler  -y

Install the Nova service on compute

[root@compute ~]# yum install openstack-nova-compute -y

Install the Neutron services on controller

[root@controller ~]# yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables ipset iproute -y

Install the Neutron service on compute

[root@compute ~]# yum install openstack-neutron-linuxbridge ebtables ipset iproute -y

Install the Dashboard component

[root@controller ~]# yum install openstack-dashboard -y

Install the Cinder service on controller

[root@controller ~]# yum install openstack-cinder -y

Install Cinder and LVM on compute

[root@compute ~]# yum install lvm2 device-mapper-persistent-data openstack-cinder targetcli python2-keystone -y

Enable hardware acceleration

[root@controller ~]# modprobe kvm-intel
[root@compute ~]# modprobe kvm-intel
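
kvm-intel applies to Intel CPUs; on AMD hosts the module is kvm-amd. Whether the CPU exposes hardware virtualization at all can be checked first; if this prints 0, plan on virt_type = qemu later in nova.conf:

[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo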

Install dependencies

[root@controller ~]# yum -y install libibverbs

Configure the message queue service

Start the service

[root@controller ~]# systemctl start rabbitmq-server.service
[root@controller ~]# systemctl enable rabbitmq-server.service

Add a user

[root@controller ~]# rabbitmqctl add_user openstack openstack

Grant permissions

[root@controller ~]# rabbitmqctl set_user_tags admin administrator
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Enable the management UI

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management
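
The management UI then listens on http://controller:15672/. Only accounts carrying the administrator tag can log in, and the admin user tagged above exists only if it was created separately; to inspect the queues with the openstack user instead, it can be tagged the same way:

[root@controller ~]# rabbitmqctl set_user_tags openstack administrator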

Configure the memcached service

Edit the configuration file

[root@controller ~]# vim /etc/sysconfig/memcached 
PORT="11211"
USER="memcached"
MAXCONN="65535"
CACHESIZE="1024"
OPTIONS="-l 127.0.0.1,::1,controller"

Start the service

[root@controller ~]# systemctl start memcached.service
[root@controller ~]# systemctl enable memcached.service

Configure the database service

Edit the configuration file

[root@controller ~]# vim /etc/my.cnf.d/mariadb-server.cnf
bind-address = 172.30.154.44
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the service

[root@controller ~]# systemctl start mariadb.service
[root@controller ~]# systemctl enable mariadb.service

Create the databases (log in first with mysql -u root -p)

MariaDB [(none)]> create database keystone;
MariaDB [(none)]> create database glance;
MariaDB [(none)]> create database nova;
MariaDB [(none)]> create database nova_api;
MariaDB [(none)]> create database nova_cell0;
MariaDB [(none)]> create database neutron;
MariaDB [(none)]> create database cinder;
MariaDB [(none)]> create database placement;

Grant privileges to the users

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on nova.* to 'nova'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on nova_cell0.* to 'nova'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on cinder.* to 'cinder'@'%' identified by '123456';

MariaDB [(none)]> grant all privileges on placement.* to 'placement'@'localhost' identified by '123456';
MariaDB [(none)]> grant all privileges on placement.* to 'placement'@'%' identified by '123456';

MariaDB [(none)]> flush privileges;

Secure the database service by running the mysql_secure_installation script

[root@controller ~]# mysql_secure_installation

Configure the Keystone service

Edit the configuration file

[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
# your_password is the database password granted earlier (123456 in this walkthrough)
connection = mysql+pymysql://keystone:your_password@controller/keystone
[token]
provider = fernet

Sync the database

[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone

Initialize the Fernet and credential key repositories

[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

[root@controller ~]# keystone-manage bootstrap --bootstrap-password openstack --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

Configure the httpd service

#edit the configuration file
[root@controller ~]# vi /etc/httpd/conf/httpd.conf
ServerName controller

#create a symlink
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

#start the service
[root@controller ~]# systemctl start httpd
[root@controller ~]# systemctl enable httpd

Create the admin environment-variable script

[root@controller ~]# vi admin-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
export OS_VOLUME_API_VERSION=2

Verify the environment variables

[root@controller ~]# source admin-openrc
[root@controller ~]# openstack token issue

Create the service project

[root@controller ~]# openstack project create --domain default --description "Service Project" service

Create the demo project

[root@controller ~]# openstack project create --domain default --description "Demo Project" demo

Create the demo user

[root@controller ~]# openstack user create --domain default --password-prompt demo

Create the user role

[root@controller ~]# openstack role create user

Add the user role to the demo project and user

[root@controller ~]# openstack role add --project demo --user demo user

Create the demo environment-variable script

[root@controller ~]# vi demo-openrc 
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Configure the Glance service

Create and configure the glance user

[root@controller ~]# openstack user create --domain default --password-prompt glance
[root@controller ~]# openstack role add --project service --user glance admin

Create the glance service entity

[root@controller ~]# openstack service create --name glance  --description "OpenStack Image" image

Create the glance service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne  image admin http://controller:9292

Edit the configuration file

[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:your_password@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glance

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Sync the database

[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

Start the service

[root@controller ~]# systemctl enable openstack-glance-api.service
[root@controller ~]# systemctl start openstack-glance-api.service

Upload an image

[root@controller ~]# glance image-create --name Centos7 --disk-format qcow2 --container-format bare --progress < CentOS-7-x86_64-GenericCloud-1907.qcow2
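
The qcow2 file must already be present locally; it can be fetched beforehand from the CentOS cloud image site listed later in this post, for example:

[root@controller ~]# wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1907.qcow2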

List images

[root@controller ~]# openstack image list

Configure the Placement service on controller

Create and configure the placement user

[root@controller ~]# openstack user create --domain default --password-prompt placement
[root@controller ~]# openstack role add --project service --user placement admin

Create the placement service entity

[root@controller ~]# openstack service create --name placement   --description "Placement API" placement

Create the placement service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne   placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne   placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne   placement admin http://controller:8778

Edit the configuration file

[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:your_password@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = placement

Sync the database

[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement

Restart the service

[root@controller ~]# systemctl restart httpd  

Configure the Nova service on controller

Create and configure the nova user

[root@controller ~]# openstack user create --domain default --password-prompt nova
[root@controller ~]# openstack role add --project service --user nova admin

Create the nova service entity

[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

Create the nova service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Edit the configuration file

[root@controller nova]# cp -a /etc/nova/nova.conf{,.bak2}
[root@controller nova]# grep -Ev '^$|#' /etc/nova/nova.conf.bak2 > /etc/nova/nova.conf
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
# my_ip is the controller's management IP; adjust to your environment
my_ip = 192.168.29.148
transport_url = rabbit://openstack:openstack@controller:5672/
auth_strategy=keystone
block_device_allocate_retries = 600

[api_database]
connection = mysql+pymysql://nova:your_password@controller/nova_api

[database]
connection = mysql+pymysql://nova:your_password@controller/nova

[api]
auth_strategy = keystone 

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[root@controller ~]# vi /etc/httpd/conf.d/00-placement-api.conf

Add the following:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

Restart the httpd service

[root@controller ~]# systemctl restart httpd

Sync the databases

[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

Verify

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Start the services

[root@controller ~]# systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
[root@controller ~]# systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Configure the Nova service on compute

Edit the configuration file

[root@compute ~]# cp -a /etc/nova/nova.conf{,.bak2}
[root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak2 > /etc/nova/nova.conf
[root@compute ~]# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@controller
# my_ip is the compute node's management IP; adjust to your environment
my_ip = 192.168.29.146
auth_strategy=keystone
block_device_allocate_retries = 600

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

[vnc]
enabled = True
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = placement

[libvirt]
virt_type = kvm
#use qemu instead when the cluster itself runs on virtual machines
#virt_type = qemu

Start the services

[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service   

Register the compute node in the database (on controller)

List the nova-compute nodes

[root@controller ~]# openstack compute service list --service nova-compute

Discover and register the host

[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Configure the Neutron service on controller

Create and configure the neutron user

[root@controller ~]# openstack user create --domain default --password-prompt neutron
[root@controller ~]# openstack role add --project service --user neutron admin

Create the neutron service entity

[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

Create the neutron service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne  network admin http://controller:9696

Edit the configuration files (linuxbridge network architecture)

[root@controller ~]# vi /etc/neutron/neutron.conf

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:your_password@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[root@controller ~]# vi /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
#type_drivers = flat,vlan,vxlan
type_drivers = flat,vlan
#tenant_network_types = vxlan
tenant_network_types =
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
#vni_ranges = 1:1000

[securitygroup]
enable_ipset = true

[root@controller ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
# ens160 is the physical host's externally facing NIC
physical_interface_mappings = provider:ens160

[vxlan]
enable_vxlan = false
#enable_vxlan = true
#local_ip = 172.30.154.44
#l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@controller ~]# vi /etc/neutron/l3_agent.ini

[DEFAULT]
interface_driver = linuxbridge

[root@controller ~]# vi /etc/neutron/dhcp_agent.ini

[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

[root@controller ~]# vi /etc/neutron/metadata_agent.ini

[DEFAULT]
nova_metadata_host = controller
#metadata_proxy_shared_secret = 000000
# replace METADATA_SECRET with a secret of your own; it must match the value in nova.conf below
metadata_proxy_shared_secret = METADATA_SECRET

[root@controller ~]# vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

Create a symlink

[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database

[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

Start the services

#restart the nova-api service
[root@controller ~]# systemctl restart openstack-nova-api.service

#linuxbridge architecture
[root@controller ~]# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
[root@controller ~]# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Configure the Neutron service on compute

Edit the configuration files (linuxbridge architecture)

[root@compute ~]# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

[root@compute ~]# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]
physical_interface_mappings = provider:ens160

[vxlan]
enable_vxlan = false
#local_ip = 192.168.29.149
#l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@compute ~]# vi /etc/nova/nova.conf

[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron

Start the services

#restart the nova-compute service
[root@compute ~]# systemctl stop openstack-nova-compute.service
[root@compute ~]# systemctl start openstack-nova-compute.service
#note: a plain systemctl restart here can fail, hence the separate stop and start

#linuxbridge architecture
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service

Verify

[root@controller ~]# openstack network agent list

#check the compute node's log
[root@compute ~]# tail /var/log/nova/nova-compute.log

Configure the Dashboard component

Edit the configuration file

[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', 'two.example.com']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
WEBROOT = '/dashboard/'

Regenerate the Apache configuration for the dashboard

[root@controller ~]# cd /usr/share/openstack-dashboard
[root@controller ~]# python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
[root@controller ~]# ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
[root@controller ~]# vim /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
#comment out the original directives and add the ones below
#WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi.py
#Alias /static /usr/share/openstack-dashboard/static

WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static

Restart the services

[root@controller ~]# systemctl restart httpd.service memcached.service

Access the web UI

Browse to http://<controller-ip>/dashboard

Install and configure a storage node (similar to a compute node)

Without a dedicated storage node, instances are stored by default on the compute node's local disk, under /var/lib/nova/instances.

Local disk has the advantage of good performance; the drawback is inflexibility.

In this lab no separate machine is used; cinder-volume is simply installed on an existing node (which, from Cinder's point of view, then acts as the storage node).

Configure the Cinder service on compute (a second disk, distinct from the system disk, is required)

Prepare the Cinder disk

[root@compute ~]# mkfs.xfs -f /dev/sdb

Configure the logical volume

[root@compute ~]# pvcreate /dev/sdb
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
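
Verify the volume group:

[root@compute ~]# vgs cinder-volumes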

Add a filter to the LVM configuration file so that only the disks used here are scanned

[root@compute ~]# vim /etc/lvm/lvm.conf
filter = [ "a/sda/", "a/sdb/", "r/.*/" ]

Start the LVM metadata service

[root@compute ~]# systemctl enable lvm2-lvmetad.service
[root@compute ~]# systemctl start lvm2-lvmetad.service

Edit the configuration file

[root@compute ~]# cp -a /etc/cinder/cinder.conf{,.bak}
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
[root@compute ~]# vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
# my_ip is this storage node's management IP; adjust to your environment
my_ip = 192.168.29.149
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
#auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

#add the [lvm] section yourself if it does not exist
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
#volumes_dir = $state_path/volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
#iscsi_ip_address = 172.30.154.47    # the storage node's IP

Start the services

[root@compute ~]# systemctl start openstack-cinder-volume.service target.service
[root@compute ~]# systemctl enable openstack-cinder-volume.service target.service

Configure the tgt service on compute (this service is not mentioned in the official documentation and its exact role is unclear; it appears in the source code, and errors occur without it, so it is installed here)

Install scsi-target-utils

[root@compute ~]# yum --enablerepo=epel -y install scsi-target-utils libxslt

Configure

vim /etc/tgt/tgtd.conf
Add the following line:
include /var/lib/cinder/volumes/*

Start the tgtd service

#enable at boot
systemctl enable tgtd

#start
systemctl start tgtd

Configure the Cinder service on controller

Create and configure the cinder user

[root@controller ~]# openstack user create --domain default --password-prompt cinder
[root@controller ~]# openstack role add --project service --user cinder admin

Create the cinder service entities

[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3

Create the cinder service endpoints

[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 public http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 internal  http://controller:8776/v2/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev2 admin  http://controller:8776/v2/%\(project_id\)s

[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 public http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 internal  http://controller:8776/v3/%\(project_id\)s
[root@controller ~]# openstack endpoint create --region RegionOne  volumev3 admin http://controller:8776/v3/%\(project_id\)s

Edit the configuration file

[root@controller ~]# cp -a /etc/cinder/cinder.conf{,.bak}
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf
[root@controller ~]# vim /etc/cinder/cinder.conf

[DEFAULT]
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
# my_ip is the controller's management IP; adjust to your environment
my_ip = 192.168.29.148

[database]
connection = mysql+pymysql://cinder:your_password@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
#auth_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = cinder

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

[root@controller ~]# vim /etc/nova/nova.conf

[cinder]
os_region_name = RegionOne

Restart the Nova service

systemctl restart openstack-nova-api.service

Sync the database

[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder

Start the services

[root@controller ~]# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service 
[root@controller ~]# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service 

Check the status

[root@controller ~]# openstack volume service list

Create a volume

#create a 1 GB volume
[root@controller ~]# cinder create --name demo_volume 1

Attach the volume

#get the volume id
[root@controller ~]# cinder list
#attach the volume to the instance (mycentos) by its volume id
[root@controller ~]# nova volume-attach mycentos e9804810-9dce-47f6-84f7-25a8da672800

Creating an instance

Download an image

Download a cloud image (the stock images have known issues and are not recommended here)

The simplest approach is to use a standard image. Mainstream Linux distributions all publish cloud images that run directly on OpenStack; download links:

CentOS6:http://cloud.centos.org/centos/6/images/

CentOS7:http://cloud.centos.org/centos/7/images/

Ubuntu14.04:http://cloud-images.ubuntu.com/trusty/current/

Ubuntu16.04:http://cloud-images.ubuntu.com/xenial/current/

Build a customized OpenStack CentOS 7 image (recommended)

Build one following the method in this blog post:
https://www.jianshu.com/p/137e6f3f0369

Upload the image

openstack image create "centos" --file CentOS-7-x86_64-Azure-1707.qcow2 --disk-format qcow2 --container-format bare --public

openstack image create --disk-format qcow2 --container-format bare --public --file /root/CentOS-7-x86_64-Minimal-1708.iso CentOS-7-x86_64

List the uploaded images

openstack image list

Create a network

openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

Parameters
--share: allow all projects to use the virtual network
--external: define the network as external (use --internal for an internal network)
--provider-physical-network provider and --provider-network-type flat: attach to the flat provider network

Create a subnet

openstack subnet create --network provider  --allocation-pool start=10.71.11.50,end=10.71.11.60 --dns-nameserver 114.114.114.114 --gateway 10.71.11.254 --subnet-range 10.71.11.0/24 provider
openstack network list

Create a flavor

openstack flavor create --id 1 --vcpus 4 --ram 128 --disk 1 m2.nano
openstack flavor list

Generate a keypair on the control node; the public key must be added to the Compute service before launching an instance

ssh-keygen -q -N ""
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
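
Verify the keypair:

openstack keypair list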

Add security group rules allowing ICMP (ping) and secure shell (SSH)

openstack security group rule create --proto icmp default
openstack security group list

Allow secure shell (SSH) access

openstack security group rule create --proto tcp --dst-port 22 default

Create a virtual machine

Method 1: create it from the web UI, following the wizard

Method 2: from the CLI:

#net-id comes from openstack network list; the flavor, image, security group, and key name must already exist
openstack server create --flavor min --image cirros --nic net-id=6ef57ba4-b18b-4d37-9696-ca6d740ae586 --security-group default --key-name liukey cirros

Check the VM status

openstack server list

Copyright notice: this is an original article by royfans, released under the CC 4.0 BY-SA license; include the original link and this notice when reposting.
Original link: https://www.cnblogs.com/royfans/p/15176966.html