qemu kvm Debian Jessie

Notes on setting up qemu-kvm on Debian Jessie

The system
# cat /etc/debian_version /etc/issue
Debian GNU/Linux 8 \n \l
# uname -r
# grep "model\ name" /proc/cpuinfo -m1
model name  : Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
# egrep "vmx|svm" /proc/cpuinfo -c
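The last command counts CPU hardware-virtualization flags (vmx for Intel VT-x, svm for AMD-V); a count of 0 means KVM cannot use hardware acceleration. The check can be wrapped in a small helper, a sketch only (the count_virt_flags name is mine, not a standard tool):

```shell
#!/bin/sh
# Count hardware-virtualization flags (vmx = Intel VT-x, svm = AMD-V)
# in a cpuinfo-style file. A count of 0 means no KVM hardware
# acceleration (missing CPU support, or disabled in the BIOS).
count_virt_flags() {
    grep -Ec 'vmx|svm' "$1"
}
```

Running count_virt_flags /proc/cpuinfo prints one count per matching "flags" line, i.e. per logical CPU.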

Install the following debian packages
qemu-kvm - QEMU Full virtualization on x86 hardware
libvirt-bin - programs for the libvirt library
virtinst - Programs to create and clone virtual machines
bridge-utils - Utilities for configuring the Linux Ethernet bridge

# apt-get update
# apt-get install qemu-kvm libvirt-bin virtinst bridge-utils

Example interfaces file
# cat /etc/network/interfaces

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.1.10  # example address, substitute your LAN settings
    netmask 255.255.255.0
    gateway 192.168.1.1   # example gateway
    bridge_ports eth0
    bridge_stp off    # disable spanning tree
    bridge_waitport 0 # no delay before a port becomes available
    bridge_fd 0       # no forwarding delay
    bridge_hello 2    # seconds between STP hello packets

Restart network
# /etc/init.d/networking restart

Check bridge
# brctl show
bridge name bridge id       STP enabled interfaces
br0     8000.f832e4a45ec2   no      eth0
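For scripting, the brctl output can be parsed with awk. A minimal sketch under the column layout shown above (the parse_brctl name is mine; brctl prints extra ports of a bridge on indented one-field continuation lines):

```shell
#!/bin/sh
# Print "bridge interface" pairs from `brctl show` output read on stdin.
parse_brctl() {
    awk 'NR > 1 {
        if (NF >= 4)      { br = $1; print br, $4 }  # bridge line, first port
        else if (NF == 1) { print br, $1 }           # continuation: extra port
    }'
}

# example: brctl show | parse_brctl
```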

Create a guest VM
# virt-install \
--connect qemu:///system \
-n eratosthenes \
--memory=4056,maxmemory=8112 \
--vcpus=1,maxvcpus=2 \
--disk path=/home/vm/eratosthenes/eratosthenes.qcow2,size=20 \
--cdrom=/home/vm/debian-8.4.0-amd64-netinst.iso \
--graphics vnc --noautoconsole \
--os-type linux \
--network=bridge=br0

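The invocation can be wrapped in a small helper so new guests only differ in name, disk path and ISO. A sketch (the build_virt_install_cmd function and the fixed sizing are my own choices; it echoes the command for inspection instead of running it):

```shell
#!/bin/sh
# Build (but do not run) a virt-install command line like the one above.
# Guest name, disk image path and install ISO vary; the rest is fixed.
build_virt_install_cmd() {
    name=$1 disk=$2 iso=$3
    echo "virt-install --connect qemu:///system" \
         "-n $name" \
         "--memory=4056,maxmemory=8112" \
         "--vcpus=1,maxvcpus=2" \
         "--disk path=$disk,size=20" \
         "--cdrom=$iso" \
         "--graphics vnc --noautoconsole" \
         "--os-type linux" \
         "--network=bridge=br0"
}

# inspect first, then run with: eval "$(build_virt_install_cmd NAME DISK ISO)"
```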
virt-install options used

--connect=URI
  The URI qemu:///system is for creating KVM and QEMU guests to be run
  by the system libvirtd instance. This is the default mode that
  virt-manager uses, and what most KVM users want.

-n NAME, --name=NAME
  Name of the new guest virtual machine instance. This must be unique
  amongst all guests known to the hypervisor on the connection,
  including those not currently active. To re-define an existing guest,
  use the virsh(1) tool to shut it down ('virsh shutdown') & delete
  ('virsh undefine') it prior to running "virt-install".

--memory=MEMORY,opt1=val1,...
  Memory to allocate for the guest, in megabytes. Sub options are
  available, like 'maxmemory' and 'hugepages'. This deprecates the
  -r/--ram option.

--vcpus=VCPUS,opt1=val1,...
  Number of virtual cpus to configure for the guest. If 'maxvcpus' is
  specified, the guest will be able to hotplug up to MAX vcpus while
  the guest is running, but will startup with VCPUS.

  CPU topology can additionally be specified with sockets, cores, and
  threads.  If values are omitted, the rest will be autofilled
  preferring sockets over cores over threads.

  'cpuset' sets which physical cpus the guest can use. "CPUSET" is a
  comma separated list of numbers, which can also be specified in
  ranges or cpus to exclude. Example:

      0,2,3,5     : Use processors 0,2,3 and 5
      1-5,^3,8    : Use processors 1,2,4,5 and 8

  If the value 'auto' is passed, virt-install attempts to automatically
  determine an optimal cpu pinning using NUMA data, if available.

  Use --vcpus=? to see a list of all available sub options. Complete
  details at
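The cpuset grammar above ('0,2,3,5', ranges like '1-5', exclusions with '^') can be expanded with a small shell function, useful for sanity-checking a pinning before passing it to virt-install. A sketch (expand_cpuset is my name, not part of virt-install):

```shell
#!/bin/sh
# Expand a virt-install cpuset string such as "1-5,^3,8" into the
# plain list of cpu numbers it selects (here: 1 2 4 5 8).
expand_cpuset() {
    include="" exclude=""
    for tok in $(printf '%s' "$1" | tr ',' ' '); do
        case "$tok" in
            ^*)  exclude="$exclude ${tok#^}" ;;  # ^N excludes cpu N
            *-*) include="$include $(seq "${tok%-*}" "${tok#*-}")" ;;  # range
            *)   include="$include $tok" ;;      # single cpu
        esac
    done
    out=""
    for c in $include; do
        skip=0
        for e in $exclude; do [ "$c" = "$e" ] && skip=1; done
        [ "$skip" -eq 0 ] && out="$out $c"
    done
    echo $out  # unquoted on purpose: collapses the leading space
}
```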

--disk=DISKOPTS
  Specifies media to use as storage for the guest, with various
  options. The general format of a disk string is

      --disk opt1=val1,opt2=val2,...

  The simplest invocation to create a new 10G disk image and associated
  disk device:

      --disk size=10

  virt-install will generate a path name, and place it in the default
  image location for the hypervisor. To specify media, the command can
  either be:

      --disk /some/storage/path[,opt1=val1]...

  or explicitly specify one of the following arguments:

      path
      A path to some storage media to use, existing or not. Existing
      media can be a file or block device.

      Specifying a non-existent path implies attempting to create the
      new storage, and will require specifying a 'size' value. Even for
      remote hosts, virt-install will try to use libvirt storage APIs
      to automatically create the given path.

--cdrom=CDROM
  File or device to use as a virtual CD-ROM device for fully virtualized
  guests.  It can be path to an ISO image, or to a CDROM device. It can
  also be a URL from which to fetch/access a minimal boot ISO image.
  The URLs take the same format as described for the "--location"
  argument. If a cdrom has been specified via the "--disk" option, and
  neither "--cdrom" nor any other install option is specified, the
  "--disk" cdrom is used as the install media.

--noautoconsole
  Don't automatically try to connect to the guest console. The default
  behaviour is to launch virt-viewer(1) to display the graphical
  console, or to run the "virsh" "console" command to display the text
  console. Use of this parameter will disable this behaviour.

-w NETWORK, --network=NETWORK,opt1=val1,opt2=val2,...
  Connect the guest to the host network. The value for "NETWORK" can
  take one of 4 formats; the one used here is:

      bridge=BRIDGE
      Connect to a bridge device in the host called "BRIDGE". Use this
      option if the host has static networking config & the guest
      requires full outbound and inbound connectivity to/from the LAN.
      Also use this if live migration will be used with this guest.

Install kvm-vnc-list
# git clone https://github.com/ipduh/kvm-vnc-list.git
# mv kvm-vnc-list/kvm-vnc-list /usr/sbin/
# rm -r kvm-vnc-list

kvm jessie