Without getting too technical (and I know techs will complain that I'm oversimplifying), a RAID10 array is a stripe of mirrors: your six disks are grouped into three mirrored pairs (RAID1), and data is striped (RAID0) across those pairs. That's different from RAID5, which stripes data plus a distributed parity stripe across all the disks instead of keeping full mirror copies. RAID10 isn't the most super-redundant fault tolerance out there, but it does a decent job of ensuring that if a disk fails, its mirror partner keeps the array running, and the mirror rebuilds itself once the failed disk is replaced.
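To make that layout concrete, here's a toy sketch of how a six-disk RAID10 maps a logical block to a mirrored pair. This is purely illustrative (a real controller does this in firmware or the driver), and all the names are mine:

```python
# Toy RAID10 layout: 6 disks = 3 mirrored pairs, data striped across the pairs.
# Illustrative only; real controllers handle this internally.

NUM_DISKS = 6
NUM_PAIRS = NUM_DISKS // 2  # three mirrored pairs

def raid10_write_targets(logical_block: int) -> list[tuple[int, int]]:
    """Return (disk, physical_block) targets for one logical block.

    The block is striped to one pair (the RAID0 part) and written to
    BOTH disks of that pair (the RAID1 part), so two targets come back.
    """
    pair = logical_block % NUM_PAIRS         # which mirrored pair (stripe column)
    physical = logical_block // NUM_PAIRS    # block offset within that pair
    disk_a, disk_b = 2 * pair, 2 * pair + 1  # the two disks that mirror each other
    return [(disk_a, physical), (disk_b, physical)]

# Logical block 4 lands on pair 1 (disks 2 and 3), at offset 1 on each:
print(raid10_write_targets(4))  # [(2, 1), (3, 1)]
```

The thing to notice is that every write hits exactly two disks (the mirror), and consecutive logical blocks land on different pairs (the stripe).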
Now, any time you have a simultaneous need to both read and write to a disk or RAID array, one operation has to finish before the next can start, and that queuing leads to delays. In a virtual environment, many simultaneous reads and writes may be in flight at once, depending on how many virtual disks live on the array and how many users are hitting the virtual machines.
A RAID10 array handles concurrent reads and writes well, because a read only needs one copy of the data: it can be served by either disk in a mirrored pair, while a write to a different pair in the stripe proceeds independently at the same time.
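One way to picture that concurrency win is a trivial read scheduler that alternates between the two halves of a mirror. This is a hypothetical sketch of the idea, not actual RAID firmware logic:

```python
import itertools

# A read needs only ONE copy of the data, so a naive scheduler can
# round-robin reads between the two disks of a mirrored pair while
# the other pairs stay free for writes. Illustrative only.

def make_read_scheduler(pair_index: int):
    """Return a function that round-robins reads across one mirrored pair."""
    disks = itertools.cycle([2 * pair_index, 2 * pair_index + 1])
    def next_disk() -> int:
        return next(disks)
    return next_disk

read_from_pair0 = make_read_scheduler(0)
# Four reads on pair 0 alternate between disk 0 and disk 1, while
# pair 1 (disks 2 and 3) is free to service a write at the same time.
print([read_from_pair0() for _ in range(4)])  # [0, 1, 0, 1]
```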
The big downside to RAID10 is that it's expensive, both in how many disks you need and in how much capacity you give up to redundancy: every byte is stored twice, so half the raw space is gone before you store anything. You see that in your configuration, where six 1 TB drives only give you about 3 TB of actual storage.
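The capacity arithmetic is simple enough to sketch. These are the standard usable-capacity formulas only; a real array will show slightly less once metadata, hot spares, or filesystem overhead take their cut:

```python
# Back-of-the-envelope usable capacity. Real arrays reserve a bit
# extra for metadata, so treat these as upper bounds.

def raid10_usable_tb(num_disks: int, disk_tb: float) -> float:
    # Every disk is mirrored, so only half the raw capacity is usable.
    return (num_disks // 2) * disk_tb

def raid5_usable_tb(num_disks: int, disk_tb: float) -> float:
    # One disk's worth of capacity goes to the distributed parity.
    return (num_disks - 1) * disk_tb

print(raid10_usable_tb(6, 1.0))  # 3.0 TB usable out of 6 TB raw
print(raid5_usable_tb(6, 1.0))   # 5.0 TB usable, but less redundancy
```

That's the trade in one line: RAID5 gives you more space per drive, RAID10 gives you more protection and better concurrent I/O.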
Also, a virtualization host may run many virtual machines (yours probably won't). In that environment, redundancy is critical, and protecting the data and the running machines comes before the cost of the drives. Downtime on six virtual servers all doing different things for the company, just because one drive died in the host, leads to people updating résumés :)