It's amazing how many mistakes you can make when trying to understand a concept. Up until a couple of weeks ago I had some misconceptions about virtualisation. I'd used it through VMware and viewed it simply as a way to put a virtual layer on top of your processor.
Then some concepts coalesced from various articles I was reading.
Virtualisation is the abstraction of resources away from their use. At the moment the most common example is memory. You get a virtual address space from your operating system - this can be made up of RAM, ROM, cache, hard drive, flash memory, and probably more. You don't know exactly what you are accessing, but the virtualisation of those resources and their access makes them easy to use.
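As a rough illustration (a toy Python sketch, not specific to any one operating system - the buffer size and the use of an anonymous mmap are just my own example):

```python
import mmap

# Ask the OS for a large anonymous mapping. The program sees one flat
# virtual address range; whether a given page currently lives in RAM,
# in cache, or paged out to disk is the operating system's concern.
buf = mmap.mmap(-1, 256 * 1024 * 1024)   # 256 MB of virtual address space

buf[0] = 1              # touching a page makes the OS back it with real memory
buf[len(buf) - 1] = 1   # untouched pages in between may never be allocated

buf.close()
```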
One of the catalysts for my understanding was id Software's new id Tech 5. There was mention of MegaTexture, a form of texture virtualisation that allows massive amounts of texture data to be used for rendering. This would no doubt be a way of not needing to know about the basic building blocks of the textures, but instead using them through a virtualised interface.
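My guess at the general shape of it, as a toy Python sketch (the 128-texel page size, the cache dictionary and the load_tile helper are my own inventions for illustration, not anything from id's actual implementation):

```python
PAGE = 128        # tile size in texels (an assumption for illustration)
resident = {}     # (page_x, page_y) -> tile data currently held in memory

def load_tile(page_x, page_y):
    """Stand-in for streaming one tile of the huge texture in from disk."""
    return f"tile({page_x},{page_y})"

def sample(u, v):
    """Sample the enormous virtual texture at texel coordinates (u, v)."""
    page = (u // PAGE, v // PAGE)
    if page not in resident:          # "page fault": the tile isn't loaded yet
        resident[page] = load_tile(*page)
    return resident[page]             # real code would index into the tile data

print(sample(5000, 12000))
```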
That brings me to processor virtualisation. My misconception came from the examples I had seen in Parallels and VMware, which let you run multiple computers on a single processor. But what processor virtualisation should really do is abstract away the processor resources that allow the program to run. That would mean you would not care about multiple cores or even multiple computers; you could in theory run one virtual machine whose backend is a massive cluster of machines. I don't know if this is the case yet, since when you do parallel processing you still need to know too much about the parallelism in order to develop a solution, but I imagine some solutions must be able to do it.
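The closest thing I can point to today is a worker pool, where you describe the work and the runtime decides where it runs (a minimal Python sketch; the crunch function and its inputs are just placeholders of mine):

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(n):
    """Some CPU-bound work; the caller never says which core runs it."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # The executor abstracts the processor resources away: it spreads the
    # calls over however many cores exist. A cluster-backed pool could, in
    # principle, sit behind the same interface without this code changing.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(crunch, [10_000, 20_000, 30_000]))
    print(results)
```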