oVirt 3.4.3-1 bug: when Gluster is enabled and vdsm-gluster is not installed on the hosts, the host can't be activated after restarting ovirt-engine

In a previous post I enabled the Gluster option on the Cluster in oVirt, but could not add any volume under the Volumes tab.


After that change, restarting ovirt-engine left the host showing as unavailable.

Manually activating the host also fails, as shown below:

/var/log/ovirt-engine/engine.log

2014-08-06 14:06:43,563 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-1) [3b0756e9] Lock Acquired to object EngineLock [exclusiveLocks= key: 44379bb8-e87b-4e00-a18e-5df43922da82 value: VDS
, sharedLocks= ]
2014-08-06 14:06:43,599 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] Running command: ActivateVdsCommand internal: false. Entities affected :  ID: 44379bb8-e87b-4e00-a18e-5df43922da82 Type: VDS
2014-08-06 14:06:43,600 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] Before acquiring lock in order to prevent monitoring for host 150 from data-center Default
2014-08-06 14:06:43,602 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] Lock acquired, from now a monitoring of host will be skipped for host 150 from data-center Default
2014-08-06 14:06:43,636 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] START, SetVdsStatusVDSCommand(HostName = 150, HostId = 44379bb8-e87b-4e00-a18e-5df43922da82, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 4f14d57
2014-08-06 14:06:43,641 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] FINISH, SetVdsStatusVDSCommand, log id: 4f14d57
2014-08-06 14:06:43,694 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] Activate finished. Lock released. Monitoring can run now for host 150 from data-center Default
2014-08-06 14:06:43,698 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] Correlation ID: 3b0756e9, Job ID: 0109c142-c237-4653-a0f8-ab49ff56cfcd, Call Stack: null, Custom Event ID: -1, Message: Host 150 was activated by admin.
2014-08-06 14:06:43,703 INFO  [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-6-thread-8) [3b0756e9] Lock freed to object EngineLock [exclusiveLocks= key: 44379bb8-e87b-4e00-a18e-5df43922da82 value: VDS
, sharedLocks= ]
2014-08-06 14:06:46,552 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-95) START, GetHardwareInfoVDSCommand(HostName = 150, HostId = 44379bb8-e87b-4e00-a18e-5df43922da82, vds=Host[150,44379bb8-e87b-4e00-a18e-5df43922da82]), log id: 63f04cab
2014-08-06 14:06:46,600 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-95) FINISH, GetHardwareInfoVDSCommand, log id: 63f04cab
2014-08-06 14:06:46,637 INFO  [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-95) [64e40754] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected :  ID: 44379bb8-e87b-4e00-a18e-5df43922da82 Type: VDS
2014-08-06 14:06:46,663 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-95) [64e40754] START, GlusterServersListVDSCommand(HostName = 150, HostId = 44379bb8-e87b-4e00-a18e-5df43922da82), log id: 4d0caa2
2014-08-06 14:06:46,671 ERROR [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-95) [64e40754] Command GlusterServersListVDSCommand(HostName = 150, HostId = 44379bb8-e87b-4e00-a18e-5df43922da82) execution failed. Exception: VDSNetworkException: org.apache.xmlrpc.XmlRpcException: <type 'exceptions.Exception'>:method "glusterHostsList" is not supported
2014-08-06 14:06:46,678 INFO  [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-95) [64e40754] FINISH, GlusterServersListVDSCommand, log id: 4d0caa2
2014-08-06 14:06:46,716 INFO  [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-95) [35d9d600] Running command: SetNonOperationalVdsCommand internal: true. Entities affected :  ID: 44379bb8-e87b-4e00-a18e-5df43922da82 Type: VDS
2014-08-06 14:06:46,741 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-95) [35d9d600] START, SetVdsStatusVDSCommand(HostName = 150, HostId = 44379bb8-e87b-4e00-a18e-5df43922da82, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: 618db5cc
2014-08-06 14:06:46,745 INFO  [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-95) [35d9d600] FINISH, SetVdsStatusVDSCommand, log id: 618db5cc
2014-08-06 14:06:46,750 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-95) [35d9d600] Correlation ID: 35d9d600, Job ID: e492cadf-ca72-4f93-8c3b-be595b4fca29, Call Stack: null, Custom Event ID: -1, Message: Gluster command [Non interactive user] failed on server 150.
2014-08-06 14:06:46,779 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-95) [35d9d600] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: State was set to NonOperational for host 150.
2014-08-06 14:06:46,815 INFO  [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-95) [6a19c783] Running command: HandleVdsVersionCommand internal: true. Entities affected :  ID: 44379bb8-e87b-4e00-a18e-5df43922da82 Type: VDS
2014-08-06 14:06:46,818 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-95) [6a19c783] Host 44379bb8-e87b-4e00-a18e-5df43922da82 : 150 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped.
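The decisive line in the log is the VDSNetworkException: vdsm on host 150 does not expose the glusterHostsList verb, which is provided by the optional vdsm-gluster package. A minimal sanity check, run on each host (a sketch assuming an RPM-based host, as in this oVirt 3.4 setup):

```shell
# If vdsm-gluster is absent, vdsm cannot answer glusterHostsList and the
# engine marks the host NonOperational with GLUSTER_COMMAND_FAILED.
if rpm -q vdsm-gluster >/dev/null 2>&1; then
    msg="vdsm-gluster installed"
else
    msg="vdsm-gluster missing: glusterHostsList will be unsupported"
fi
echo "$msg"
```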


Workaround:

(Installing the vdsm-gluster package on every node appears to be unnecessary.) The key step is to disable Gluster support on the Cluster: uncheck the option as shown in Figure 1, then restart the ovirt-engine service.
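The workaround above can be sketched as follows (assuming the EL6-style `service` command of the oVirt 3.4 era; the UI step must be done first in the Administration Portal):

```shell
# Step 1 (web UI): Cluster -> Edit -> uncheck the Gluster service option.
# Step 2 (on the engine machine): restart ovirt-engine so the change sticks.
# The fallback message only makes this sketch safe to run on other machines.
status_msg=$(service ovirt-engine restart 2>&1) \
    || status_msg="restart skipped (not running on the engine machine)"
echo "$status_msg"
# Alternatively, keep Gluster enabled and install vdsm-gluster on every host
# (reportedly not strictly required for this bug):
#   yum install -y vdsm-gluster && service vdsmd restart
```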

[References]

1. https://bugzilla.redhat.com/show_bug.cgi?id=1072274

2. http://lists.ovirt.org/pipermail/users/2014-March/022102.html

Date: 2024-09-09 23:13:10
