I'm not entirely sure whether this is the correct place for this question, but I would like to build a server to run some experiments on, and I'm considering ARM. Most of my code can be compiled to run on ARM, but there will be some external dependencies which may have to run on x86, so I'm trying to understand how this might look.
I understand there is a performance penalty when running x86 applications on ARM. I'm not entirely clear how this is handled. For example, I assume that if a given application is not compiled for ARM it will not run natively in an ARM environment, so what would be the procedure for calling such an application? E.g. does the OS recognise it as an executable and emulate the x86 calls in the background somehow (I guess this may be OS dependent), or would you have to spin up a full virtualised x86 environment to run these x86 applications?
I have used a range of virtualisation technologies, and I'm particularly interested in how this would affect a typical type 2 hypervisor (is there an additional penalty going from ARM to x86 versus x86 to x86?), as well as something like Wine, which is mapping Windows calls anyway; is support even there for this at this time?
I would appreciate a brief explanation of how this works and a link to any performance benchmarks for the described operations.
Emulation is required to run x86 on ARM. How transparent and easy to use it is depends on your environment.
Windows 10 on ARM extends WOW64. The goal is an ARM device that Just Works with x86 apps. No server builds yet, though presumably such a thing is being tested internally in Azure.
Another user-mode emulation example: QEMU on Linux, with binfmt_misc to make it slightly more user friendly. There is no need to emulate hardware and run another kernel. However, you do need to provide libraries for the other architecture, just as you would when cross-compiling. A rough sketch of what binfmt_misc automates is shown below.
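For illustration only, here is a minimal Python sketch of the dispatch that binfmt_misc performs transparently: inspect the ELF header, and if the binary is x86-64, hand it to qemu-x86_64 user-mode emulation. The sysroot path passed to -L is an assumption; point it at wherever your x86-64 loader and shared libraries actually live.

```python
#!/usr/bin/env python3
"""Sketch: detect an x86-64 ELF binary on an ARM host and run it through
qemu-x86_64 user-mode emulation (roughly what binfmt_misc automates)."""
import struct
import subprocess
import sys

EM_X86_64 = 62    # e_machine value for x86-64 in the ELF header
EM_AARCH64 = 183  # e_machine value for 64-bit ARM

def elf_machine(path):
    """Return the e_machine field of an ELF file, or None if not ELF."""
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        return None
    # e_machine is a 16-bit little-endian field at offset 18
    return struct.unpack_from("<H", header, 18)[0]

def run(path, *args):
    machine = elf_machine(path)
    if machine == EM_X86_64:
        # -L points qemu at an x86-64 sysroot holding the foreign loader
        # and libraries; this path is an assumption for illustration.
        cmd = ["qemu-x86_64", "-L", "/usr/x86_64-linux-gnu", path, *args]
    else:
        cmd = [path, *args]  # native (or non-ELF), run directly
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    sys.exit(run(sys.argv[1], *sys.argv[2:]))
```

With binfmt_misc registered (most distros ship this with the qemu-user packages), the kernel does this dispatch itself and you simply execute the x86 binary as if it were native.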
Emulate the hardware and you can create whatever virtual machine you want. An ARM VM on x86 using QEMU is one example; the reverse, an x86 guest on an ARM host, works the same way. This is a different kernel on virtual hardware, but virtual machines are familiar these days. A sketch of launching such a guest follows.
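As a non-authoritative sketch, this launches a full x86-64 guest with qemu-system-x86_64 on an ARM host using plain emulation. The disk image name, memory size, and CPU count are placeholder assumptions.

```python
#!/usr/bin/env python3
"""Sketch: boot a full x86-64 guest under qemu-system-x86_64 on an ARM host.
Pure emulation of the whole machine, so expect a noticeable slowdown."""
import subprocess

def start_x86_vm(disk="x86-guest.qcow2", memory_mb=2048, cpus=2):
    cmd = [
        "qemu-system-x86_64",
        "-m", str(memory_mb),                   # guest RAM in MiB
        "-smp", str(cpus),                      # virtual CPU count
        "-drive", f"file={disk},format=qcow2",  # guest disk image (assumed path)
        "-nographic",                           # serial console instead of a GUI
    ]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    vm = start_x86_vm()
    vm.wait()
```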
Performance overhead again varies. Native binaries are of course the fastest, CPU instruction rewriting (user-mode emulation) can be near native, and emulating an entire computer is noticeably slower.
Worst case, you acquire both ARM and x86 boxes for their respective workloads. Complicates operations a bit, but you gain hardware diversity.