Qemu/KVM invocation modes: simple benchmarks

Virtualization is one of my areas of interest in my daily computing. I don’t like invasive solutions like Xen (a modified OS is required) or gigantic infrastructures (like VMware), since I don’t really need any of them. Maybe, if I need that kind of solution in the future, I’ll start liking them.

For now, I prefer simpler approaches, like VirtualBox and Qemu/KVM. I used VirtualBox with some self-written scripts to manage a virtualized system/network in my former job for at least 2 years (totally headless), with impressive performance and stability. I have no complaints about VirtualBox. Before that, I used Qemu on an almost daily basis for simple desktop virtualization (LiveCD mastering, OS/distro testing, …).

A few days ago, I decided to try KVM, since I recently switched to an Intel Core 2 Duo processor with Intel VT-x support. Although these aren’t the big players (like Xen and VMware), both virtualization systems have pretty good features and performance, even the OSE edition of VirtualBox (I don’t use any other version), so they are absolutely enough for my needs. What matters more to me than having more features is the performance I can get from any of these systems, because I don’t use them to virtualize complex solutions.

Qemu/KVM can be run in a lot of combinations: without any performance accelerator, with kqemu (in user mode or in kernel mode), with KVM, any of these combinations with different SMP settings; some virtual storage formats are faster than others, or more versatile, etc.
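To make those combinations concrete, here is a rough sketch of how each mode is invoked from the command line. The image name and the exact flag spellings are my assumptions for the Qemu 0.11 era (check `qemu -h` on your version; kqemu support was removed entirely in Qemu 0.12):

```shell
# Hypothetical disk image and sizes; adjust to your own setup.
IMG=www.qcow2

# 1. Plain Qemu, no accelerator (Qemu_NoKQemu_SMP1)
qemu -no-kqemu -smp 1 -m 384 -hda "$IMG"

# 2. kqemu in user mode (the default when the kqemu module is loaded)
qemu -smp 1 -m 384 -hda "$IMG"

# 3. kqemu in kernel mode (also accelerates guest kernel code)
qemu -kernel-kqemu -smp 1 -m 384 -hda "$IMG"

# 4. KVM through mainstream Qemu (Qemu_KvmEnabled_SMP1)
qemu -enable-kvm -smp 1 -m 384 -hda "$IMG"

# 5. The dedicated Qemu-KVM fork (QemuKvm_SMP1 / QemuKvm_SMP2)
qemu-kvm -smp 1 -m 384 -hda "$IMG"
qemu-kvm -smp 2 -m 384 -hda "$IMG"
```

The SMP variants just change the `-smp` value on the same command lines.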

With these benchmarks, what I’m trying to clarify for myself, and of course for others with the same concerns as me, is which of these combinations gives the best performance. I’ve designed some really simple benchmarks to help me decide.

For these benchmarks, I’ll be using mainstream Qemu (for the non-accelerated, kqemu and KVM benchmarks) and Qemu-KVM (for the KVM benchmarks).

I chose to use one of my server virtual machine configurations (a WWW server), mostly to see how long the Start-Shutdown benchmark takes with a more or less real live system.

For now I have three tests, and I’m designing and writing more (e.g. a multi-threaded test using ffmpeg, a compression test). All the tests were run automatically, and the data was collected as CSV, also automatically.


  • Start-Shutdown
  • Fibonacci
  • Factorial

Test Details:

  • Start-Shutdown: Run a VM configured for auto-shutdown
    Loops: 3
    Add “(/bin/sleep 1;/sbin/shutdown -h now) &” at the end of /etc/rc.local
    Data to keep: total execution time and average execution time
  • Fibonacci: Run 1000000 times the algorithm that obtains the Fibonacci sequence for 1000000
    Loops: 3
    Data to keep: total execution time and average execution time
  • Factorial: Run 100000 times the algorithm that obtains the factorial of 500
    Loops: 3
    Data to keep: total execution time and average execution time
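The two CPU-bound tests can be sketched roughly like this. This is my reconstruction of the shape of such a harness, not the original script; the function names are hypothetical, and you would plug in the repetition counts listed above (they are much larger than anything you’d want to run casually):

```python
#!/usr/bin/env python
# Sketch of a guest-side benchmark harness: run each algorithm in a loop,
# time it, and append the results as CSV rows.
import csv
import time


def fibonacci(n):
    """Iteratively compute the n-th Fibonacci number."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a


def factorial(n):
    """Iteratively compute n!."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result


def run_test(name, func, arg, repetitions, loops=3):
    """Time `repetitions` calls of func(arg), repeated `loops` times.

    Returns one (name, loop, seconds) row per loop, from which the total
    and average execution times are derived.
    """
    rows = []
    for loop in range(loops):
        start = time.time()
        for _ in range(repetitions):
            func(arg)
        rows.append((name, loop + 1, time.time() - start))
    return rows


if __name__ == "__main__":
    rows = []
    rows += run_test("fibonacci", fibonacci, 1000, 100)  # scaled-down counts
    rows += run_test("factorial", factorial, 500, 100)   # for illustration
    with open("results.csv", "w") as out:
        writer = csv.writer(out)
        writer.writerow(("test", "loop", "seconds"))
        writer.writerows(rows)
```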


The Subjects:

  • Qemu_NoKQemu_SMP1
  • Qemu_KQemuUser_SMP1
  • Qemu_KQemuKernel_SMP1
  • Qemu_NoKQemu_SMP2
  • Qemu_KQemuUser_SMP2
  • Qemu_KQemuKernel_SMP2
  • Qemu_KvmEnabled_SMP1
  • QemuKvm_SMP1
  • QemuKvm_SMP2

The Host:

  • Name: WWW
  • OS: Archlinux 2.6.31-ARCH (i686)
  • CPU: Intel(R) Core(TM)2 Duo CPU E8400 @ 3.00GHz
  • RAM: 1 GiB DDR2 667 MHz
  • HDD: 232 GiB Seagate Barracuda SATA II (Model: ST3250310AS) (JFS)
  • Software:
    • Qemu (0.11.0-1) [apports qemu]
    • KVM (88-1) [apports qemu-kvm]
    • KQemu (1.4.0pre1-4)
    • Python (2.6.3-2)

The VM:

  • Name: WWW
  • OS: Archlinux 2.6.29-ARCH
  • RAM: 384 MiB
  • HDD: 4 GiB (QCOW2, 1.1 GiB used) (JFS)
  • Software:
    • nginx (0.7.59-2)
    • PHP (5.2.9-3) [Fast-CGI as daemon]
    • RSyncd (3.0.6)
    • Pure-FTPd (1.0.22-1)
    • OpenSSH (5.2p1-1)
    • Python (2.6.2-2)
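For reference, a 4 GiB QCOW2 image like the one backing this VM is created with `qemu-img` (the file name here is my own choice):

```shell
# Create a 4 GiB QCOW2 image. QCOW2 allocates space on demand,
# which is why only 1.1 GiB is actually used on disk.
qemu-img create -f qcow2 www.qcow2 4G

# Inspect the image: shows virtual size vs. actual disk usage.
qemu-img info www.qcow2
```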

Let’s see the benchmark results.


Start-Shutdown Benchmark Results

I found two curious and unexpected results in this benchmark: 1) mainstream Qemu + KVM performs slightly better than Qemu-KVM, and 2) with SMP=2, all subjects gave (quite?) bad results.

OK, I know this benchmark is absolutely sequential, so with SMP=2 I shouldn’t expect more performance, and I didn’t, but I didn’t expect such bad performance either. All SMP=2 subjects performed even worse than Qemu without any accelerator and SMP=1! That’s beyond any understanding, at least for me.


Fibonacci Benchmark Results

Although this test is sequential, Qemu-KVM with SMP=2 wins by a really slight margin, with Qemu + KQemu (in both user and kernel mode) SMP=1, Qemu + KVM SMP=1 and Qemu-KVM SMP=1 all in a virtually shared second place. Again, Qemu + KVM SMP=1 performs better than Qemu-KVM SMP=1.

This time the difference between SMP=1 and SMP=2 is even worse. I must say SMP=2 performance in mainstream Qemu is disgusting, not to mention that it doesn’t support SMP>1 at all when used in combination with KVM.


Factorial Benchmark Results

Once again Qemu-KVM SMP=2 wins, again with a sequential test. All the results are consistent with the Fibonacci test.


With these absolutely simple tests, we can say that if you have a processor with KVM support, you should use it. If not, no problem: you can always use Qemu + KQemu in kernel mode, if that mode is stable enough for you (there are some instability problems reported on some hardware/software setups). In mainstream Qemu, never use SMP>1, or you will end up with really disgusting performance.
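If you want to check whether your own processor supports KVM, something like the following works on Linux (the module is `kvm_intel` on Intel CPUs, `kvm_amd` on AMD):

```shell
# vmx = Intel VT-x, svm = AMD-V; no output means no hardware support.
egrep '(vmx|svm)' /proc/cpuinfo

# Load the matching KVM module (as root):
modprobe kvm_intel

# /dev/kvm must exist (and be accessible to your user) for KVM to work:
ls -l /dev/kvm
```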

Later we’ll see whether you should use KVM with mainstream Qemu or with Qemu-KVM, but first we should see the results from the multi-threaded tests.

In other benchmarks I’ll run multi-threaded tests against the winners of these tests to see if it’s worth using SMP>1. I’ll also test different storage formats, mostly RAW and QCOW2, only for performance, because both formats have other advantages of their own, which must be taken into account when deciding which format to use.