Title pretty much says it all. I'm just wondering if this makes a difference in the way VMs will process things and if one method is preferable vs. another.
No, your VM should perform the same and will use the same resources on the host. It's just a design choice, primarily added to provide flexibility where your OS or software has per-CPU licensing requirements.
Each socket or core you assign represents one physical core on the host. Remember that more cores isn't automatically a good thing, because of the hypervisor's co-scheduling requirements.
The main purpose of the cores/socket option is to provide flexibility with software that may have runtime or licensing requirements based on the number of "physical" sockets or CPU cores.
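For illustration, here is a minimal sketch of how that topology can be set through the vSphere API with pyVmomi. The vCenter address, credentials, and VM name are placeholders, SSL/certificate handling is omitted for brevity, and changing cores per socket generally requires the VM to be powered off.

```python
# Minimal sketch (pyVmomi): present 8 vCPUs as 2 sockets x 4 cores.
# Host resource usage is the same as 8 sockets x 1 core; only the
# guest-visible topology (and therefore licensing) changes.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",            # placeholder vCenter
                  user="administrator@vsphere.local",    # placeholder user
                  pwd="secret")
content = si.RetrieveContent()

# Locate the VM by name (simplified inventory walk).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm")   # placeholder name
view.Destroy()

# numCPUs is the total vCPU count; numCoresPerSocket divides it into sockets.
spec = vim.vm.ConfigSpec(numCPUs=8, numCoresPerSocket=4)
vm.ReconfigVM_Task(spec=spec)   # VM usually needs to be powered off for this

Disconnect(si)
```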
While there's no difference in performance between multiple cores on one socket versus a combination of multiple sockets, there IS a slight difference in operation if you enable or require the CPU hot-add feature of the virtual machine.
With the VMware CPU hot-add feature, you can add a socket to a running VM, but not additional cores. As odd as it seems, this is something I run into in production at work, and it has influenced how I configure new VMs.
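As a rough illustration (continuing the pyVmomi sketch above, so the `vm` object is assumed to have been located the same way), the hot-add flag itself is just another reconfigure operation and has to be switched on while the VM is powered off:

```python
from pyVmomi import vim

# Assumes "vm" was located as in the earlier sketch. CPU hot-add must be
# enabled while the VM is powered off; once the VM is running, vCPUs added
# on the fly show up to the guest as extra sockets, not extra cores.
spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True)
vm.ReconfigVM_Task(spec=spec)
```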
Generally speaking, it will make little or no performance difference.
A sufficiently complex OS may alter its scheduling heuristics depending on the arrangement of (populated) sockets and cores - for instance, keeping closely related threads on cores in the same package, which with some chip designs makes better use of the cache shared between cores. In a virtualised setup any such difference is likely to be insignificant, or rendered completely moot by the way the hypervisor schedules CPU access for guest VMs, unless the hypervisor is fairly clever about core scheduling between/within VMs too.
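If you want to see what topology the guest OS is actually scheduling against, a quick check on a Linux guest is to parse /proc/cpuinfo. This is only a sketch; some minimal kernels or VM configurations may not expose the "physical id" field at all.

```python
# Sketch (Linux guest only): count the sockets and cores the guest kernel
# sees. The vCPU topology chosen in the VM's settings is what shows up here.
from collections import defaultdict

topology = defaultdict(set)          # physical id -> set of core ids
with open("/proc/cpuinfo") as f:
    physical_id = None
    for line in f:
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "physical id":
            physical_id = int(value)
        elif key == "core id" and physical_id is not None:
            topology[physical_id].add(int(value))

for socket_id, cores in sorted(topology.items()):
    print(f"socket {socket_id}: {len(cores)} core(s)")
print(f"{len(topology)} socket(s) total")
```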
As Dan points out, you should benchmark your tasks (in a realistic manner: i.e. with other activity on the host, not just in a test environment where the VM in question is the only one running) to make sure that multiple vCores, vCPUs, or both actually benefit their performance. The way access to cores is scheduled can introduce delays that wipe out any benefit and in fact make things slower overall. I've seen reports where, for tasks with significant CPU work, a small farm of single-core VMs performed significantly better on the same hardware than a smaller number (where "smaller number" includes one) of multi-core VMs, though of course that approach is likely to impose a far greater memory load on the host.
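As a crude starting point for that kind of benchmark, the sketch below times the same CPU-bound stand-in workload serially and then across one worker per visible vCPU inside the guest. The `burn()` function is purely a placeholder for your real task, and the numbers only mean something if you run it while the host is under its normal load.

```python
# Sketch: does adding vCPUs actually help a CPU-bound workload in this VM?
# If the parallel run doesn't scale, extra vCPUs may just be adding
# co-scheduling overhead on the host.
import time
from multiprocessing import Pool

def burn(n: int) -> int:
    # Arbitrary CPU-bound stand-in for the real workload.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = [2_000_000] * 8

    start = time.perf_counter()
    for n in work:
        burn(n)
    serial = time.perf_counter() - start

    start = time.perf_counter()
    with Pool() as pool:            # one worker per visible vCPU by default
        pool.map(burn, work)
    parallel = time.perf_counter() - start

    print(f"serial:   {serial:.2f}s")
    print(f"parallel: {parallel:.2f}s (speedup {serial / parallel:.1f}x)")
```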