
VMware/CentOS 6.5: Installing OpenStack Icehouse Step by Step (Part 4)
Installing the Dashboard
This is a very simple item in the documentation, yet it took quite a while to get working, which made it interesting.
The Dashboard does not allow logging in as the root user, so first create a user named stack with sudo privileges:
[root@controller openstack]# cat 28-CreateStackUser.sh
mkdir -p /opt/stack
groupadd stack
useradd -g stack -s /bin/bash -d /opt/stack -m stack
echo "Giving stack user passwordless sudo privileges"
# UEC images ``/etc/sudoers`` does not have a ``#includedir``, add one
grep -q "^#includedir.*/etc/sudoers.d" /etc/sudoers ||
    echo "#includedir /etc/sudoers.d" >> /etc/sudoers
( umask 226 && echo "stack ALL=(ALL) NOPASSWD:ALL" \
   > /etc/sudoers.d/50_stack_sh )
chown -R stack:stack /opt/stack
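Before moving on, it is worth a quick sanity check (my addition, not part of the script above) that the stack user really has passwordless sudo:
[root@controller openstack]# su - stack -c "sudo -n whoami"
root
If this prints root without prompting for a password, the sudoers drop-in works.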
Then install the packages:
# yum install memcached python-memcached mod_wsgi openstack-dashboard
Edit the /etc/openstack-dashboard/local_settings file.
Find the CACHES section and uncomment it so that it looks like this:
CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211',
    }
}
The install guide writes '127.0.0.1:11211' without a trailing comma, while the sample in local_settings has one; I tried both and it makes no difference.
ALLOWED_HOSTS = ['localhost', 'my-desktop']
I changed this part to ALLOWED_HOSTS = ['192.168.1.128', 'localhost'];
192.168.1.128 is my laptop's IP.
OPENSTACK_HOST = "controller"
Save the file, then run the following commands as given in the install guide:
# setsebool -P httpd_can_network_connect on
Since I had already disabled SELinux, this command reported an error, which can be ignored.
# service httpd start
# service memcached start
# chkconfig httpd on
# chkconfig memcached on
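Optionally, before opening a browser, confirm that both services are actually up and listening (a check I added, not from the install guide):
# service httpd status
# service memcached status
# netstat -tlnp | grep -E ':80|:11211'
httpd should be listening on port 80 and memcached on port 11211.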
Log in as the stack user on the controller's desktop and open the dashboard in Firefox. The result was an error:
Internal Server Error
The server encountered an internal error or misconfiguration and
was unable to complete.
What happened? Repeatedly comparing against the documentation got me nowhere, so the only option left was to study the logs.
Looking at /var/log/httpd/error_log,
it contained entries like these:
[Mon Jul 07 22:45:37 2014] [error] [client 192.168.1.131]   File
"/usr/share/openstack-dashboard/openstack_dashboard/wsgi/../../openstack_dashboard/local/local_settings.py",
[Mon Jul 07 22:45:37 2014] [error] [client 192.168.1.131]     CACHES = {
[Mon Jul 07 22:45:37 2014] [error] [client 192.168.1.131]
[Mon Jul 07 22:45:37 2014] [error] [client 192.168.1.131]   IndentationError: unexpected
It was an extra space in front of CACHES! Hard to imagine that such grand software is this fragile; where is its dignity?! After removing the space and restarting httpd and memcached, accessing the page in the browser again no longer gave the Internal Server Error. But a new problem appeared:
Something went wrong!
An unexpected error has occurred. Try refreshing the page. If
that doesn't help, contact your local administrator
Checking the log again, there was this line:
SuspiciousOperation: Invalid HTTP_HOST header
(you may need to set ALLOWED_HOSTS): controller
This message is actually helpful: I had only put localhost and my laptop's IP into ALLOWED_HOSTS. Adding controller turns it into
ALLOWED_HOSTS = ['controller','192.168.1.128', 'localhost']
After restarting the httpd and memcached services, the dashboard could finally be reached from the controller's desktop.
Accessing it from a browser on the laptop still showed the Something went wrong! message. No need to read the logs this time: evidently the URL used to reach the dashboard cannot be changed, so the laptop has to resolve the controller name itself.
On the laptop, open Start -> Accessories -> Notepad, right-click and choose Run as administrator, then edit c:\windows\system32\drivers\etc\hosts and add the line
192.168.1.131 controller
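To confirm the hosts entry took effect, you can first test name resolution from a command prompt (my addition; controller is the name added above):
C:\> ping controller
It should get replies from 192.168.1.131.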
Then type the address into the browser's address bar, press Enter, and it works!
Installing OpenStack Mitaka on CentOS 7
Preface: OpenStack really is a behemoth, and digesting it thoroughly is not easy. Once you have a rough idea of what OpenStack is, the next step should be deployment. There are one-click install tools such as RDO and devstack, but it is best to use them only for a quick taste; once you have some basic hands-on experience you should do a full installation from start to finish, otherwise you will never handle the errors and failures calmly. So, once you have a basic understanding of OpenStack, start installing.
Note: the official OpenStack documentation is genuinely excellent, but reading it in English always feels a bit sluggish, which is why these notes were written on top of the official guide.
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/
The first step is a rough plan: how many nodes, which operating system, and how to divide the networks. Here is mine:
Number of nodes: 2 (controller node, compute node)
Operating system: CentOS Linux release 7.2.1511 (Core)
Network configuration:
Controller node: 10.0.0.101  192.168.15.101
Compute node:    10.0.0.102  192.168.15.102
Prerequisites (from the official guide): "The following minimum requirements should support a proof-of-concept environment with core services and several CirrOS instances:
Controller Node: 1 processor, 4 GB memory, and 5 GB storage
Compute Node: 1 processor, 2 GB memory, and 10 GB storage"
Reference: http://docs.openstack.org/mitaka/install-guide-rdo/environment.html
Note: if you create the operating systems and configure the networking by hand, one step at a time, I will have to look down on you. Have a look at Vagrant instead: with the configuration file below, a single command creates both machines and configures their networking. For a short Vagrant tutorial see http://youerning./5102
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
  config.vm.box = "centos7"
  node_servers = {:control => ['10.0.0.101', '192.168.15.101'],
                  :compute => ['10.0.0.102', '192.168.15.102']}
  node_servers.each do |node_name, node_ip|
    config.vm.define node_name do |node_config|
      node_config.vm.host_name = node_name.to_s
      node_config.vm.network :private_network, ip: node_ip[0]
      node_config.vm.network :private_network, ip: node_ip[1], virtualbox__intnet: true
      config.vm.boot_timeout = 300
      node_config.vm.provider "virtualbox" do |v|
        v.memory = 4096
      end
    end
  end
end
With a single vagrant up command and a short wait, two freshly baked virtual machines come out of the oven and the environment is ready. The environment is:
Operating system: CentOS Linux release 7.2.1511 (Core)
Network configuration:
Controller node: 10.0.0.101  192.168.15.101
Compute node:    10.0.0.102  192.168.15.102
Note: config.vm.box = "centos7" above assumes you already have a centos7 box available.
Before starting the deployment, let's go over the installation steps. First comes the basic software environment: some common services and package repositories have to be set up, roughly:
NTP server: controller node and the other nodes
OpenStack package repository
Common services:
  SQL database ===> MariaDB
  NoSQL database ===> MongoDB (not needed by the core components)
  Message queue ===> RabbitMQ
  Memcached
Then come the individual components of the OpenStack framework. The core components are:
  Identity service ===> Keystone
  Image service ===> Glance
  Compute service ===> Nova
  Networking service ===> Neutron
  Dashboard ===> Horizon
  Block Storage service ===> Cinder
Additional storage services:
  Shared file system service ===> Manila
  Object Storage service ===> Swift
Other components:
  Orchestration service ===> Heat
  Telemetry service ===> Ceilometer
  Database service ===> Trove

Environment preparation
Name resolution: edit the hosts file on every node and add the following entries:
10.0.0.101 controller
10.0.0.102    compute
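A quick way to confirm name resolution once the entries are in place on each node (my addition):
# ping -c 2 controller
# ping -c 2 compute
Both names should resolve to the 10.0.0.x management addresses.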
NTP time service (chrony)
Controller node:
1) Install the chrony package:
# yum install chrony
2) Edit the /etc/chrony.conf configuration file and add the following; 202.108.6.95 can be replaced with any NTP server you prefer:
server 202.108.6.95 iburst
allow 10.0.0.0/24
3) Enable and start the service:
# systemctl enable chronyd.service
# systemctl start chronyd.service
Other nodes:
1) Install the chrony package:
# yum install chrony
2) Edit the /etc/chrony.conf configuration file and add the following:
server controller iburst
allow 10.0.0.0/24
3) Enable and start the service:
# systemctl enable chronyd.service
# systemctl start chronyd.service
Verification. On the controller node:
# chronyc sources
210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- 192.0.2.11                 ...            137   -2814us[-3000us] +/-   43ms
^* 192.0.2.12                 ...                    +17us[  -23us] +/-   68ms
On the other nodes:
# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                 ...                    +15us[  -87us] +/-   15ms

OpenStack package repository
Install the yum repository for the chosen OpenStack release:
# yum install centos-release-openstack-mitaka
Update the system:
# yum upgrade
Note: if the kernel was updated, reboot the node.
Install the OpenStack client and openstack-selinux:
# yum install python-openstackclient
# yum install openstack-selinux
Note: if you get a "Package does not match intended download" error, run yum clean all or simply download and install the RPM packages directly. Reference download location: http://ftp.usf.edu/pub/centos/7/cloud/x86_64/openstack-kilo/common/

SQL database (MariaDB)
Install:
# yum install mariadb mariadb-server python2-PyMySQL
Create the /etc/my.cnf.d/openstack.cnf configuration file and add the following:
[mysqld]
# bind to the controller's management IP
bind-address = 10.0.0.101
# character set and storage engine defaults
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8
Enable and start the service:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Initialize the database and set the root password:
# mysql_secure_installation
Enter current password for root (enter for none):[Enter]
Set root password? [Y/n] Y
New password: openstack
Re-enter new password:openstack
Remove anonymous users? [Y/n] Y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] Y
Reload privilege tables now? [Y/n] Y

Message queue (RabbitMQ)
Install:
# yum install rabbitmq-server
Enable and start the service:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Add the openstack user:
# rabbitmqctl add_user openstack RABBIT_PASS
Grant the openstack user configure, write, and read permissions:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
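To double-check the user and its permissions (an extra step, not in the guide):
# rabbitmqctl list_users
# rabbitmqctl list_permissions
The openstack user should appear with ".*" for all three permission columns.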
NoSQL database (MongoDB)
Install:
# yum install mongodb-server mongodb
Edit the /etc/mongod.conf configuration file and set the bind address to the controller's management IP:
bind_ip = 10.0.0.101
# smallfiles = true is optional
smallfiles = true
Enable and start the service:
# systemctl enable mongod.service
# systemctl start mongod.service

Memcached
Install:
# yum install memcached python-memcached
Enable and start the service:
# systemctl enable memcached.service
# systemctl start memcached.service

At this point the basic software environment for the OpenStack framework is in place; what remains are the individual components. Installing the components is quite repetitive: apart from Keystone the steps are nearly identical, the only difference being the names used when creating things. The general procedure is:
1) Configure the database:
CREATE DATABASE xxx;
GRANT ALL PRIVILEGES ON xxx.* TO 'xxxx'@'localhost' \
  IDENTIFIED BY 'XXXX_DBPASS';
GRANT ALL PRIVILEGES ON xxx.* TO 'xxxx'@'%' \
  IDENTIFIED BY 'XXXX_DBPASS';
2) Install the packages:
# yum install xxx
3) Edit the configuration files: the connections to the other services (database, RabbitMQ, and so on), the authentication settings, and any component-specific settings.
4) Synchronize the database to create the required tables.
5) Enable and start the services:
# systemctl enable openstack-xxx.service
# systemctl start openstack-xxxx.service
6) Create the user, service, and endpoints:
$ openstack user create xxx
$ openstack service create xxx
$ openstack endpoint create xxx
7) Verify that the service works.
Note: back up each configuration file before editing it. To keep the post short, configuration edits are written in the following way:
[DEFAULT]
...
admin_token = ADMIN_TOKEN
This means: in the [DEFAULT] section of the file, add the line admin_token = ADMIN_TOKEN.
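If you prefer not to edit the INI files by hand, the same edits can be scripted. This is optional and assumes the openstack-utils package (which provides the openstack-config helper, a wrapper around crudini) is installed; it is not part of the steps in this post:
# yum install openstack-utils
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token ADMIN_TOKEN
# openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
Each call sets one key in one section, exactly like the [section] / key = value notation explained above.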
Installing the components

Identity service (Keystone)
Configure the database:
$ mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
  IDENTIFIED BY 'KEYSTONE_DBPASS';
Install:
# yum install openstack-keystone httpd mod_wsgi
Configuration file /etc/keystone/keystone.conf
Admin token:
[DEFAULT]
admin_token = ADMIN_TOKEN
Database:
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
Token provider:
[token]
provider = fernet
Note: ADMIN_TOKEN above can be generated with openssl rand -hex 10, or set to any custom string.
Synchronize the database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet keys (for background on token providers see http://blog.csdn.net/miss_yang_cloud/article/details/):
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
Configure Apache: edit /etc/httpd/conf/httpd.conf and set
ServerName controller
Create the /etc/httpd/conf.d/wsgi-keystone.conf configuration file with the following content:
Listen 5000
Listen 35357
<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
Enable and start the service:
# systemctl enable httpd.service
# systemctl start httpd.service
Create the service and the API endpoints. To keep things short, put the admin token and the endpoint URL into environment variables:
$ export OS_TOKEN=ADMIN_TOKEN
$ export OS_URL=http://controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
Create the service:
$ openstack service create \
  --name keystone --description "OpenStack Identity" identity
Create the endpoints (public, internal, admin):
$ openstack endpoint create --region RegionOne \
  identity public http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity internal http://controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity admin http://controller:35357/v3
Create the domain, projects, users, and roles.
Create the domain:
$ openstack domain create --description "Default Domain" default
Create the admin project:
$ openstack project create --domain default \
  --description "Admin Project" admin
Create the admin user:
$ openstack user create --domain default \
  --password-prompt admin
Create the admin role:
$ openstack role create admin
Add the admin role to the admin project and user:
$ openstack role add --project admin --user admin admin
Create the service project:
$ openstack project create --domain default \
  --description "Service Project" service
Create the demo project:
$ openstack project create --domain default \
  --description "Demo Project" demo
Create the demo user:
$ openstack user create --domain default \
  --password-prompt demo
Create the user role:
$ openstack role create user
Add the user role to the demo project and user:
$ openstack role add --project demo --user demo user
Note: remember the passwords you set when creating the users.
Verify the admin user:
$ unset OS_TOKEN OS_URL
$ openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | ...T20:14:07.056119Z                                            |
| id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
|            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
|            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
| project_id | 343d245e806dfaefa9afdc                                          |
| user_id    | acd92d79dc16                                                    |
+------------+-----------------------------------------------------------------+
Verify the demo user:
$ openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | ...T20:15:39.014479Z                                            |
| id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
|            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
|            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
| project_id | ed0b60bfb0a533d5943f                                            |
| user_id    | cbcc4888bfa9ab73a2256f27                                        |
+------------+-----------------------------------------------------------------+
If output in this format is returned, the verification passed.

Environment scripts for the admin and demo users
Normally the --os-xxxx parameters are kept in environment variables. To switch between the admin and demo users more quickly, create environment scripts.
Create admin-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Create demo-openrc:
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Now verify as admin again. First source the script:
$ . admin-openrc
$ openstack token issue
+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | ...T20:44:35.659723Z                                            |
| id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
|            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
|            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
| project_id | 343d245e806dfaefa9afdc                                          |
| user_id    | acd92d79dc16                                                    |
+------------+-----------------------------------------------------------------+

Image service (Glance)
Configure the database:
$ mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
  IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
  IDENTIFIED BY 'GLANCE_DBPASS';
Create the service, user, and role:
$ . admin-openrc
$ openstack user create --domain default --password-prompt glance
$ openstack role add --project service --user glance admin
Create the service and its endpoints (public, internal, admin):
$ openstack service create --name glance \
  --description "OpenStack Image" image
$ openstack endpoint create --region RegionOne \
  image public http://controller:9292
$ openstack endpoint create --region RegionOne \
  image internal http://controller:9292
$ openstack endpoint create --region RegionOne \
  image admin http://controller:9292
Install:
# yum install openstack-glance
Configuration file /etc/glance/glance-api.conf
Database:
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Keystone authentication:
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=glance
password=GLANCE_PASS
[paste_deploy]
flavor = keystone
Glance store:
[glance_store]
stores=file,http
default_store=file
filesystem_store_datadir = /var/lib/glance/images/
Configuration file /etc/glance/glance-registry.conf
Database:
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
Keystone authentication:
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=glance
password=GLANCE_PASS
[paste_deploy]
flavor = keystone
Synchronize the database:
# su -s /bin/sh -c "glance-manage db_sync" glance
Start the services:
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
Verification:
$ . admin-openrc
Download the CirrOS image:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Create the image:
$ openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public
If the following command shows output like this, it succeeded:
$ openstack image list
+--------------------------------------+--------+
| ID                                   | Name   |
+--------------------------------------+--------+
| a7...-41ea-9b49-bb9...               | cirros |
+--------------------------------------+--------+
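As an optional extra check (my addition), look at the details of the image that was just uploaded:
$ openstack image show cirros
The status field should read active and disk_format should be qcow2.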
Compute service (Nova)
Controller node
Configure the database:
$ mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
  IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
  IDENTIFIED BY 'NOVA_DBPASS';
Create the service, user, and role:
$ . admin-openrc
$ openstack user create --domain default \
  --password-prompt nova
$ openstack role add --project service --user nova admin
$ openstack service create --name nova \
  --description "OpenStack Compute" compute
Create the endpoints (public, internal, admin):
$ openstack endpoint create --region RegionOne \
  compute public http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute internal http://controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute admin http://controller:8774/v2.1/%\(tenant_id\)s
Install:
# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler
Configuration file /etc/nova/nova.conf
Enabled APIs:
[DEFAULT]
enabled_apis=osapi_compute,metadata
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
Database:
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
RabbitMQ message queue:
[DEFAULT]
rpc_backend=rabbit
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_userid=openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=nova
password = NOVA_PASS
Bind IP:
[DEFAULT]
my_ip = 10.0.0.101
Enable Neutron networking:
[DEFAULT]
use_neutron=True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
VNC configuration:
[vnc]
vncserver_listen=$my_ip
vncserver_proxyclient_address = $my_ip
Glance configuration:
[glance]
api_servers = http://controller:9292
Concurrency lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Synchronize the databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
Start the services:
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node
Install:
# yum install openstack-nova-compute
Configuration file /etc/nova/nova.conf
RabbitMQ message queue:
[DEFAULT]
rpc_backend=rabbit
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_userid=openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=nova
password = NOVA_PASS
Bind IP:
[DEFAULT]
my_ip = 10.0.0.102
Enable Neutron networking:
[DEFAULT]
use_neutron=True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
VNC configuration:
[vnc]
enabled=True
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
Glance configuration:
[glance]
api_servers = http://controller:9292
Concurrency lock path:
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Virtualization driver:
[libvirt]
virt_type = qemu
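The official guide suggests first checking whether the compute node supports hardware-accelerated virtualization:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If the command returns 0 (typical for nested VirtualBox/Vagrant VMs like the ones used here), keep virt_type = qemu as above; if it returns 1 or more, kvm can be used instead.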
Start the services:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Verification:
$ . admin-openrc
$ openstack compute service list
+----+------------------+------------+----------+---------+-------+---------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At          |
+----+------------------+------------+----------+---------+-------+---------------------+
| 1  | nova-consoleauth | controller | internal | enabled | up    | ...T23:11:15.000000 |
| 2  | nova-scheduler   | controller | internal | enabled | up    | ...T23:11:15.000000 |
| 3  | nova-conductor   | controller | internal | enabled | up    | ...T23:11:16.000000 |
| 4  | nova-compute     | compute1   | nova     | enabled | up    | ...T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+---------------------+
Networking service (Neutron)
Controller node
Configure the database:
$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
Create the service, user, and role:
$ . admin-openrc
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the endpoints (public, internal, admin):
$ openstack endpoint create --region RegionOne \
  network public http://controller:9696
$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696
This setup uses the provider network option; see http://docs.openstack.org/mitaka/install-guide-rdo/neutron-controller-install-option1.html
Install:
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Configuration file /etc/neutron/neutron.conf
Database:
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Enable the ML2 plug-in and disable additional plug-ins:
[DEFAULT]
core_plugin = ml2
service_plugins =
RabbitMQ message queue:
[DEFAULT]
rpc_backend=rabbit
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_userid=openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=neutron
password = NEUTRON_PASS
Concurrency lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configuration file /etc/neutron/plugins/ml2/ml2_conf.ini
Type drivers:
[ml2]
type_drivers = flat,vlan
Disable self-service (tenant) networks:
[ml2]
tenant_network_types =
Enable the Linux bridge mechanism:
[ml2]
mechanism_drivers = linuxbridge
Enable the port security extension driver:
[ml2]
extension_drivers = port_security
Flat networks:
[ml2_type_flat]
flat_networks = provider
Enable ipset:
[securitygroup]
enable_ipset = True
Configuration file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings=provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group=True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Note: PROVIDER_INTERFACE_NAME is the name of the provider network interface, for example eth1.
Configuration file /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver=neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver=neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configuration file /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip=controller
metadata_proxy_shared_secret = METADATA_SECRET
Configuration file /etc/nova/nova.conf
[neutron]
url=http://controller:9696
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=neutron
password=NEUTRON_PASS
service_metadata_proxy=True
metadata_proxy_shared_secret = METADATA_SECRET
Create a symlink:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Synchronize the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart nova-api:
# systemctl restart openstack-nova-api.service
Start the services:
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service

Compute node
Install:
# yum install openstack-neutron-linuxbridge ebtables
Configuration file /etc/neutron/neutron.conf
RabbitMQ message queue:
[DEFAULT]
rpc_backend=rabbit
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_userid=openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=neutron
password = NEUTRON_PASS
Concurrency lock path:
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configuration file /etc/nova/nova.conf
[neutron]
url=http://controller:9696
auth_url=http://controller:35357
auth_type=password
project_domain_name=default
user_domain_name=default
region_name=RegionOne
project_name=service
username=neutron
password = NEUTRON_PASS
Restart nova-compute:
# systemctl restart openstack-nova-compute.service
Start the service:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verification:
$ . admin-openrc
$ neutron ext-list
+---------------------------+-----------------------------------------------+
| alias                     | name                                          |
+---------------------------+-----------------------------------------------+
| default-subnetpools       | Default Subnetpools                           |
| network-ip-availability   | Network IP Availability                       |
| network_availability_zone | Network Availability Zone                     |
| auto-allocated-topology   | Auto Allocated Topology Services              |
| ext-gw-mode               | Neutron L3 Configurable external gateway mode |
| binding                   | Port Binding                                  |
| ...                       | ...                                           |
+---------------------------+-----------------------------------------------+
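The official guide also checks that the agents came up. With both nodes configured, the list should show the metadata, DHCP, and Linux bridge agents on the controller and a Linux bridge agent on the compute node:
$ neutron agent-list
Every agent should report :-) in the alive column.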
Dashboard (Horizon)
Note: it must be installed on the controller node.
# yum install openstack-dashboard
Configuration file /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT=True
OPENSTACK_API_VERSIONS={
"identity":3,
"image":2,
"volume":2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE="user"
OPENSTACK_NEUTRON_NETWORK={
'enable_router':False,
'enable_quotas':False,
'enable_distributed_router':False,
'enable_ha_router':False,
'enable_lb':False,
'enable_firewall':False,
'enable_vpn':False,
'enable_fip_topology_check':False,
}
TIME_ZONE = "Asia/Shanghai"
Restart the services:
# systemctl restart httpd.service memcached.service
Verification: browse to http://controller/dashboard
Block Storage (Cinder)
Configure the database:
$ mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
  IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
  IDENTIFIED BY 'CINDER_DBPASS';
Create the service, user, and role:
$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
$ openstack role add --project service --user cinder admin
Note that two services are created here:
$ openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
Create the endpoints (public, internal, admin):
$ openstack endpoint create --region RegionOne \
  volume public http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume internal http://controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume admin http://controller:8776/v1/%\(tenant_id\)s
Note that each service gets its own three endpoints:
$ openstack endpoint create --region RegionOne \
  volumev2 public http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
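A quick way to confirm all six endpoints were registered (my addition):
$ openstack endpoint list --service volume
$ openstack endpoint list --service volumev2
Each command should list three endpoints: public, internal, and admin.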
Install (controller node):
# yum install openstack-cinder
Configuration file /etc/cinder/cinder.conf
Database:
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
RabbitMQ message queue:
[DEFAULT]
rpc_backend=rabbit
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_userid=openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=cinder
password = CINDER_PASS
Bind IP:
[DEFAULT]
my_ip = 10.0.0.101
Concurrency lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Synchronize the database:
# su -s /bin/sh -c "cinder-manage db sync" cinder
Configuration file /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart nova-api:
# systemctl restart openstack-nova-api.service
Start the services:
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Storage node (here an extra disk is simply added to the compute node)
Note: a separate disk is required.
Install LVM:
# yum install lvm2
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
Create the physical volume and volume group:
# pvcreate /dev/sdb
Physical volume "/dev/sdb" successfully created
# vgcreate cinder-volumes /dev/sdb
Volume group "cinder-volumes" successfully created
Configuration file /etc/lvm/lvm.conf
devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
Note: a newly added disk is usually sdb; if you also have sdc, sde, and so on, list each of them, for example filter = [ "a/sdb/", "a/sdc/", "a/sde/", "r/.*/" ], and so forth.
Install:
# yum install openstack-cinder targetcli
Configuration file /etc/cinder/cinder.conf
Database:
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
RabbitMQ message queue:
[DEFAULT]
rpc_backend=rabbit
[oslo_messaging_rabbit]
rabbit_host=controller
rabbit_userid=openstack
rabbit_password = RABBIT_PASS
Keystone authentication:
[DEFAULT]
auth_strategy=keystone
[keystone_authtoken]
auth_uri=http://controller:5000
auth_url=http://controller:35357
memcached_servers=controller:11211
auth_type=password
project_domain_name=default
user_domain_name=default
project_name=service
username=cinder
password = CINDER_PASS
Bind IP:
[DEFAULT]
my_ip = 10.0.0.102
Add an [lvm] section with the following content:
[lvm]
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group=cinder-volumes
iscsi_protocol=iscsi
iscsi_helper = lioadm
Enable the LVM back end:
[DEFAULT]
enabled_backends = lvm
Configure the Glance API:
[DEFAULT]
glance_api_servers = http://controller:9292
Concurrency lock path:
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Start the services:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
Verification:
$ . admin-openrc
$ cinder service-list
+------------------+------------+------+---------+-------+---------------------+-----------------+
|      Binary      |    Host    | Zone |  Status | State |      Updated_at     | Disabled Reason |
+------------------+------------+------+---------+-------+---------------------+-----------------+
| cinder-scheduler | controller | nova | enabled |   up  | ...T01:30:54.000000 |        -        |
|  cinder-volume   | block1@lvm | nova | enabled |   up  | ...T01:30:57.000000 |        -        |
+------------------+------------+------+---------+-------+---------------------+-----------------+
At this point the installation is basically complete. On the dashboard you can log in as admin, create a network first, and then launch a new instance (a CLI sketch of the same flow is included at the end of this post).
Postscript: installing the whole stack by hand like this is admittedly a bit over the top, and it still relies on yum, but it is worth doing manually at least once; after that, use scripts or deployment tools, because all the copying and pasting made my eyes blur. The other components will get their own article. Worth noting: the official documentation is the best documentation.
This article comes from the "Youerning notes" blog; please keep this attribution: http://youerning./9358
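To finish, here is a hedged CLI sketch of the flow just described: create the provider network as admin, then boot a CirrOS instance as demo. The commands follow the Mitaka install guide's provider-network example; the allocation pool, DNS server, and gateway are assumptions based on the 192.168.15.0/24 provider network used in this environment, so adjust them to your own network:
$ . admin-openrc
$ neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat provider
$ neutron subnet-create --name provider \
  --allocation-pool start=192.168.15.150,end=192.168.15.200 \
  --dns-nameserver 8.8.8.8 --gateway 192.168.15.1 \
  provider 192.168.15.0/24
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
$ . demo-openrc
$ openstack network list
$ openstack server create --flavor m1.nano --image cirros \
  --nic net-id=PROVIDER_NET_ID --security-group default test-instance
$ openstack server list
PROVIDER_NET_ID is the ID shown by openstack network list for the provider network. Once the instance reaches ACTIVE, the same flow can also be done through the dashboard, as described above.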
