KVM Live Migration and Using DFS as Storage

Back around 2008 I interviewed at a company that resold VMware products. It wasn't a big company, but it had real energy. I hit it off with the boss, and was even invited to sit in on one of their weekend internal tech meetups; in the end, for other reasons, I passed on the opportunity.

By that time VMware already shipped live migration as a finished feature; on the storage side, its hot migration was presumably built on shared storage or iSCSI.

As virtualization matured, quite a few products appeared (VirtualBox, Virtual PC, Xen, KVM, and so on), not to mention LPAR and DLPAR on midrange systems.

I have recently been testing a distributed file system. Beyond log files, I think it could also hold the disk images that back virtual machine operating systems, so I started looking at KVM, the hypervisor that ships with RHEL. If the two can be combined, it makes for a remarkably cheap and efficient solution.

I won't describe the distributed file system itself here; it is covered in earlier posts on this blog.

To pull off live migration you also have to think about moving the IP address and refreshing the MAC entries in the layer-2 switches (or migrating the MAC address along with the guest).
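On the MAC side, qemu-kvm takes care of this after a migration by broadcasting a gratuitous ARP from the destination host, so the switches relearn which port the guest's MAC now sits behind. As a minimal sketch of what that announcement looks like on the wire (the MAC and IP values are invented; actually transmitting the frame would need a raw socket and root, so this only builds the bytes):

```python
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build a gratuitous ARP request announcing mac/ip.

    mac: 6-byte hardware address, ip: 4-byte IPv4 address.
    """
    # Ethernet header: broadcast destination, guest MAC as source, EtherType ARP
    eth = struct.pack("!6s6sH", b"\xff" * 6, mac, 0x0806)
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,                # htype: Ethernet
        0x0800,           # ptype: IPv4
        6, 4,             # hlen, plen
        1,                # opcode: request
        mac, ip,          # sender = the migrated guest
        b"\x00" * 6, ip,  # target MAC unknown; target IP = own IP (gratuitous)
    )
    return eth + arp

frame = gratuitous_arp(b"\x52\x54\x00\x12\x34\x56", bytes([192, 168, 1, 10]))
print(len(frame))  # 14-byte Ethernet header + 28-byte ARP payload = 42
```

Every switch that sees the broadcast updates its forwarding table, which is exactly the "MAC entries refreshed" step above.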

OK, let's look at the description in the RHEL 6 documentation:

Migration is the name for the process of moving a virtualized guest from one host to another. Migration is a key feature of virtualization, as software is completely separated from hardware. Migration is useful for:

Load balancing – guests can be moved to hosts with lower usage when a host becomes overloaded.

(unplanned failover) Hardware failover – when hardware devices on the host start to fail, guests can be safely relocated so the host can be powered down and repaired.

(planned-downtime failover) Energy saving – guests can be redistributed to other hosts and host systems powered off to save energy and cut costs in low usage periods.

(cross-site failover) Geographic migration – guests can be moved to another location for lower latency or in serious circumstances.

Migrations can be performed live or offline. To migrate guests, the storage must be shared. Migration works by sending the guest's memory to the destination host. The shared storage holds the guest's file system; the file system image is not sent over the network from the source host to the destination host.

An offline migration suspends the guest, then moves an image of the guest's memory to the destination host. The guest is resumed on the destination host and the memory the guest used on the source host is freed.

The time an offline migration takes depends on network bandwidth and latency. A guest with 2 GB of memory should take an average of ten or so seconds on a 1 Gbit Ethernet link.
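As a sanity check on that figure, here is the back-of-the-envelope arithmetic. At full wire speed, 2 GiB alone takes about 17 seconds on a 1 Gbit/s link, so the guide's "ten or so seconds" implies that pages which were never touched do not actually have to cross the wire:

```python
# Back-of-the-envelope check: time to push a guest's RAM across the link
# at full nominal wire speed, ignoring protocol overhead and retransmission.
def transfer_seconds(mem_gib: float, link_gbps: float) -> float:
    bits = mem_gib * 8 * 2**30       # memory size in bits
    return bits / (link_gbps * 1e9)  # nominal link rate in bits per second

print(round(transfer_seconds(2, 1), 1))  # -> 17.2
```

The same arithmetic scales linearly: a 16 GiB guest on the same link is already over two minutes of pure transfer time, which is worth knowing before scheduling a maintenance window.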

A live migration keeps the guest running on the source host and begins moving the memory without stopping the guest. All modified memory pages are monitored for changes and sent to the destination while the image is being transferred; the destination's copy is updated with the changed pages. The process continues until the predicted time to transfer the final few pages falls within the pause time allowed for the guest: KVM estimates the time remaining and keeps transferring pages from the source to the destination until it predicts the remaining pages can be sent during a very brief pause of the virtualized guest. The registers are then loaded on the new host and the guest is resumed on the destination host. If the migration cannot converge (which happens when guests are under extreme load, dirtying memory faster than it can be transferred), the guest is paused and an offline migration is started instead.
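The convergence logic above can be sketched as a toy model. Each round sends the currently dirty pages while the guest keeps dirtying more; migration converges once what is left fits into the allowed pause window. All numbers are invented for illustration, and real KVM reasons in time and bandwidth, not fixed rounds:

```python
# Toy model of the pre-copy loop: returns the round on which the final
# brief pause happens, or None if the migration never converges.
def precopy_rounds(total_pages, pages_per_round, dirtied_per_round,
                   max_pause_pages, max_rounds=30):
    dirty = total_pages  # first round: every page must be sent
    for rnd in range(1, max_rounds + 1):
        if dirty <= max_pause_pages:
            return rnd  # pause briefly, send the remainder, resume on destination
        sent = min(dirty, pages_per_round)
        clean = total_pages - (dirty - sent)
        # the guest keeps running, so some clean pages get dirtied again
        dirty = dirty - sent + min(dirtied_per_round, clean)
    return None  # never converged: KVM would pause the guest and finish offline

print(precopy_rounds(500_000, 100_000, 4_000, 5_000))   # moderate load: converges
print(precopy_rounds(500_000, 100_000, 50_000, 5_000))  # dirties too fast: None
```

The second call is exactly the "extreme load" case from the quote: the steady-state dirty set never drops below the pause budget, so the only way out is to stop the guest.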

This passage describes the live-migration process. Not every live migration will succeed; it depends on the resources of the destination host and on the guest's load at the time.

The time a migration takes depends on network bandwidth and latency. If the network is under heavy use or has low bandwidth, the migration will take much longer.

Live migration requirements

A virtualized guest installed on shared networked storage using one of the following protocols:

Fibre Channel

iSCSI

NFS

GFS2

Two or more Red Hat Enterprise Linux systems of the same version with the same updates.

Both systems must have the appropriate ports open.

Both systems must have identical network configurations. All bridging and network configurations must be exactly the same on both hosts.

Shared storage must be mounted at the same location on the source and destination systems; the mounted directory name must be identical.
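For the NFS case, that last requirement boils down to both hosts carrying the same mount entry. A hypothetical /etc/fstab line (the server name and export path are made up), identical on source and destination:

```
# /etc/fstab -- must be the same on BOTH hosts; the mount point path
# /var/lib/libvirt/images (libvirt's default image directory) must match.
nfs1.example.com:/vmstore  /var/lib/libvirt/images  nfs  defaults,_netdev  0 0
```

If the paths differ, the destination libvirt cannot find the guest's disk image at the path recorded in the guest's XML, and the migration fails.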

Live KVM migration with virsh

# virsh migrate --live GuestName DestinationURL

The GuestName parameter represents the name of the guest which you want to migrate.

The DestinationURL parameter is the URL or hostname of the destination system. The destination system must run the same version of Red Hat Enterprise Linux, be using the same hypervisor and have libvirt running.

Once the command is entered you will be prompted for the root password of the destination system.
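Putting it together, a hypothetical end-to-end invocation. The guest name vm01 and the host kvm2.example.com are made up; the echo is there so the sketch runs without a second host, so drop it to actually migrate:

```shell
GUEST=vm01
DEST_URI="qemu+ssh://kvm2.example.com/system"  # same-version RHEL host running libvirt
echo virsh migrate --live "$GUEST" "$DEST_URI"
```

Afterwards, `virsh list` on the destination host (or `virsh -c "$DEST_URI" list` from anywhere) should show the guest running there.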

Posted: 2024-11-01 01:00:57
