Install qemu-kvm, libvirt-bin, virtinst and bridge-utils.
# apt-get install qemu-kvm libvirt-bin virtinst bridge-utils
Add root to the libvirt group.
# adduser root libvirt
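A quick sanity check before going further — assuming the packages above installed cleanly, root should now be listed in the group and virsh should connect without errors:

```shell
# getent group libvirt                  # root should appear in the member list
# virsh -c qemu:///system list --all    # should print an (empty) domain table, no errors
```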
Configure the bridge interface.
This is an example /etc/network/interfaces
# grep -v '##' /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 10.42.241.5
    netmask 255.255.255.128
    network 10.42.241.0
    broadcast 10.42.241.127
    gateway 10.42.241.10
    bridge_ports eth0
    bridge_stp off      #disable spanning tree
    bridge_waitport 0   #no delay before a port becomes available
    bridge_fd 0         #no forwarding delay
    bridge_hello 2      #Hello packets are used to communicate information about the topology throughout the entire Bridged Local Area Network.
Restart the network.
# /etc/init.d/networking restart
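Once the network is back, the bridge can be verified with the standard bridge-utils/iproute2 commands (just a sanity check, not strictly required):

```shell
# brctl show br0        # eth0 should be listed under 'interfaces'
# ip addr show br0      # should carry the 10.42.241.5/25 address
```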
Create the virtual machine.
# mkdir /home/vm
# virt-install --connect qemu:///system -n vm0 -r 512 --vcpus=2 \
    --disk path=/home/vm/vm0.qcow2,size=10 \
    -c /data01/os.iso/debian-live-6.0.7-amd64-standard.iso \
    --vnc --noautoconsole --os-type linux --os-variant debiansqueeze \
    --description vm0_debian --network=bridge:br0 --hvm

Starting install...
Creating storage file vm0.qcow2   |  10 GB     00:00
Creating domain...                |    0 B     00:00
Domain installation still in progress. You can reconnect to
the console to complete the installation process.
Options used, with their meaning from the virt-install man page:
OPTIONS
    Most options are not required. Minimum requirements are --name, --ram,
    guest storage (--disk or --nodisks), and an install option.

--connect=CONNECT
    Connect to a non-default hypervisor. The default connection is chosen
    based on the following rules:
    qemu:///system
        If running on a bare metal kernel as root (needed for KVM installs)

General Options

-n NAME, --name=NAME
    Name of the new guest virtual machine instance. This must be unique
    amongst all guests known to the hypervisor on the connection, including
    those not currently active. To re-define an existing guest, use the
    virsh(1) tool to shut it down ('virsh shutdown') & delete
    ('virsh undefine') it prior to running "virt-install".

-r MEMORY, --ram=MEMORY
    Memory to allocate for guest instance in megabytes. If the hypervisor
    does not have enough free memory, it is usual for it to automatically
    take memory away from the host operating system to satisfy this
    allocation.

--vcpus=VCPUS
    Number of virtual cpus to configure for the guest. Not all hypervisors
    support SMP guests, in which case this argument will be silently ignored.

--description
    Human readable text description of the virtual machine. This will be
    stored in the guest's XML configuration for access by other applications.

-c CDROM, --cdrom=CDROM
    File or device to use as a virtual CD-ROM device for fully virtualized
    guests. It can be a path to an ISO image, or to a CDROM device. It can
    also be a URL from which to fetch/access a minimal boot ISO image. The
    URLs take the same format as described for the "--location" argument.
    If a cdrom has been specified via the "--disk" option, and neither
    "--cdrom" nor any other install option is specified, the "--disk" cdrom
    is used as the install media.

--os-type=OS_TYPE
    Optimize the guest configuration for a type of operating system
    (ex. 'linux', 'windows'). This will attempt to pick the most suitable
    ACPI & APIC settings, optimally supported mouse drivers, virtio, and
    generally accommodate other operating system quirks.

--os-variant=OS_VARIANT
    Further optimize the guest configuration for a specific operating system
    variant (ex. 'fedora8', 'winxp'). This parameter is optional, and does
    not require an "--os-type" to be specified. Valid values include:
    linux
        debianetch      Debian Etch
        debianlenny     Debian Lenny
        debiansqueeze   Debian Squeeze

Storage Configuration

--disk=DISKOPTS
    Specifies media to use as storage for the guest, with various options.
    The general format of a disk string is
        --disk opt1=val1,opt2=val2,...
    path
        A path to some storage media to use, existing or not. Existing media
        can be a file or block device. If installing on a remote host, the
        existing media must be shared as a libvirt storage volume.
        Specifying a non-existent path implies attempting to create the new
        storage, and will require specifying a 'size' value. If the base
        directory of the path is a libvirt storage pool on the host, the new
        storage will be created as a libvirt storage volume. For remote
        hosts, the base directory is required to be a storage pool if using
        this method.
    size
        size (in GB) to use if creating new storage

Networking Configuration

-w NETWORK, --network=NETWORK,opt1=val1,opt2=val2
    Connect the guest to the host network. The value for "NETWORK" can take
    one of 3 formats:
    bridge=BRIDGE
        Connect to a bridge device in the host called "BRIDGE". Use this
        option if the host has static networking config & the guest requires
        full outbound and inbound connectivity to/from the LAN. Also use
        this if live migration will be used with this guest.

Graphics Configuration

--vnc
    Setup a virtual console in the guest and export it as a VNC server in
    the host. Unless the "--vncport" parameter is also provided, the VNC
    server will run on the first free port number at 5900 or above. The
    actual VNC display allocated can be obtained using the "vncdisplay"
    command to "virsh" (or virt-viewer(1) can be used which handles this
    detail for the user).

--noautoconsole
    Don't automatically try to connect to the guest console. The default
    behaviour is to launch a VNC client to display the graphical console, or
    to run the "virsh" "console" command to display the text console. Use of
    this parameter will disable this behaviour.

Virtualization Type options

-v, --hvm
    Request the use of full virtualization, if both para & full
    virtualization are available on the host. This parameter may not be
    available if connecting to a Xen hypervisor on a machine without
    hardware virtualization support. This parameter is implied if connecting
    to a QEMU based hypervisor.

On another host running X
# apt-get install virt-manager
Oh well, that did not go as planned ... I was unable to manage the virtual machine I created with virt-manager remotely ... I tried to install other packages and hunted down the errors for a while, but no cigar ...
Plan B
All I really need is to open a VNC session to the socket 127.0.0.1:5900 that I see, and I hope that it is what I think it is.
Still on another host running X --not the vmhost.
# ssh -L 5900:localhost:5900 vmhost
and then, with some VNC viewer:
# vncviewer localhost
Or use a recent vinagre with Host: 127.0.0.1 and "Use host: vmhost" as the SSH tunnel.
Install the guest system, give the guest an IP address in your LAN, and install SSH.
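For the static-IP step, the guest's /etc/network/interfaces can mirror the host's subnet. The address below (10.42.241.20) is just a made-up example from the same /25 network, not an address used anywhere above:

```
auto eth0
iface eth0 inet static
    address 10.42.241.20
    netmask 255.255.255.128
    gateway 10.42.241.10
```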
When the installation is done, start the virtual machine.
On the host.
# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id Name                 State
----------------------------------

virsh # start vm0
Domain vm0 started

virsh # list
 Id Name                 State
----------------------------------
  3 vm0                  running

virsh # quit
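The same can be done non-interactively; start, shutdown and autostart are standard virsh subcommands (autostart is optional and not used above — it marks the domain to come up with libvirtd):

```shell
# virsh start vm0
# virsh shutdown vm0        # clean ACPI shutdown of the guest
# virsh autostart vm0       # optional: start vm0 automatically with libvirtd
```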
Ping the virtual machine vm0 and try to ssh to it.
If you cannot ssh to it, open a VNC session:
# virsh vncdisplay vm0
:0
and then VNC to vm0 from another host.
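The display number maps directly to a TCP port: VNC display :N listens on port 5900 + N, so the :0 above is the 127.0.0.1:5900 socket from Plan B. A sketch of the arithmetic:

```shell
# 'virsh vncdisplay' prints a display number like ':0';
# a VNC client must connect to TCP port 5900 + N.
display=0                     # the N from ':0' above
port=$((5900 + display))
echo "$port"                  # 5900
```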
The system is using the following `virtual` device drivers and behaves OK so far:
root@vm0# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB Controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
00:05.0 RAM memory: Red Hat, Inc Virtio memory balloon

I have to stress test it a bit.
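For the stress test, one simple option is the stress package — an assumption on my part, it is not part of anything installed above:

```shell
root@vm0# apt-get install stress
root@vm0# stress --cpu 2 --vm 1 --vm-bytes 256M --timeout 60s
```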