I did some quick testing with virtio-9p (filesystem passthrough) on some of my KVM virtual machines. Typically I have been using NFSv3, and I haven't really had any problems with it. For the heck of it, I tried the virtio-9p passthrough instead, since I assumed it would be a lighter and more efficient method. I found it is completely the opposite! Not only was it less efficient, but it doesn't appear to do any caching at all.
Testing involved just a few dd runs against a passthrough share backed by two SSDs in RAID-0.
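For reference, a 9p share is usually mounted inside the guest like this. The mount tag ("hostshare") and the mount point are placeholders for whatever your QEMU/libvirt filesystem device defines:

```shell
# Mount a virtio-9p share inside the guest (run as root in the VM).
# "hostshare" is the mount tag from the VM's filesystem device config,
# and /mnt/9p is an arbitrary mount point -- both are placeholders.
mkdir -p /mnt/9p
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/9p
```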
9P – method: dd bs=1M count=1000 (1G test file) using /dev/zero for writes and /dev/null for reads
Write Speed: 52 MB/sec
Read Speed: 11.5 MB/sec
100% of one CPU core was pegged by qemu on the host.
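For anyone wanting to reproduce the numbers, the commands were of this shape. The sketch below points dd at a temporary directory with a smaller count so it runs anywhere; in the actual test the target was the 9p mount and count was 1000 (a ~1G file):

```shell
# Sketch of the benchmark commands; in the real test TARGET was the
# 9p mount point and count was 1000 (~1G file).
TARGET=$(mktemp -d)
# Write test: stream zeros into a file, flushing to disk at the end.
dd if=/dev/zero of="$TARGET/testfile" bs=1M count=100 conv=fdatasync
# Read test: stream the file back into /dev/null.
dd if="$TARGET/testfile" of=/dev/null bs=1M
SIZE=$(stat -c %s "$TARGET/testfile")
rm -r "$TARGET"
```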
I found that NFS was caching the 1G test file, which inflated the numbers, so I switched to a 20G file instead.
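Another way to keep page-cache hits from inflating read numbers is to drop the caches between runs, on both the host and the guest. This fragment uses the standard Linux drop_caches knob and needs root:

```shell
# Flush dirty pages, then drop the page cache, dentries and inodes
# so the next read actually hits the disk (requires root).
sync
echo 3 > /proc/sys/vm/drop_caches
```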
NFS – method: dd bs=1M count=20000 (20G test file) using /dev/zero for writes and /dev/null for reads
Write Speed: 461 MB/sec
Read Speed: 774 MB/sec
30% of one CPU core was used for the NFS transfer.
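The per-core CPU figures came from watching the host during each transfer; one simple way to sample them is with ps. The pgrep pattern below is an assumption (adjust it to your setup), and the fallback to the current shell's PID just keeps the sketch runnable when no VM is up:

```shell
# Sample CPU usage of the qemu process on the host. The pgrep pattern
# is an assumption; falls back to the current shell if no qemu runs.
pid=$(pgrep -f qemu-system | head -n1 || true)
CPU_LINE=$(ps -o %cpu= -p "${pid:-$$}")
echo "CPU%: $CPU_LINE"
```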
It seems like virtio-9p is currently CPU-bound, and the difference is staggering! Note that I am using an i5-6500, so the overhead is quite large even on a reasonably fast CPU.
I will definitely be sticking with NFS for the time being!