There are a handful of standard desktops and laptops at my company that I could choose from for the developers on my team. Each is from a different vendor and comes with a different load-out of CPU, RAM, OS, etc. I'd like an impartial test to determine which of them will run Eclipse most quickly. Since the task most critical to our team is running Java programs, I was considering using SciMark 2 scores. Is this a relatively good rough indicator of how Eclipse will perform on each machine, or is there a better way?
I don't understand why you even need to test for this; you should have a general sense of which configuration is faster.
OS shouldn't matter, since you should be reimaging every machine to the same setup anyway.
Just pick the fastest (non-server) CPU, and then max out the RAM. Simple.
You'll also want to read this: http://www.codinghorror.com/blog/archives/001198.html
It all depends on what kind of Java development your team is about to do. If all they do is console-based number crunching, then sure, SciMark might do the trick, since it benchmarks exactly that: number crunching. In that case, CPU would come first, then memory. On the other hand, if they're developing JEE applications with an Ant/Maven deployment and, say, a local database for tests, disk access and memory would be the dominant factors. And if they're developing graphical interfaces with, for example, 3D rendering, you'll want to pay attention to the GPU as well.
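To make concrete what a number-crunching benchmark actually stresses, here is a minimal sketch of a SciMark-style CPU-bound kernel: a dense matrix multiply timed with System.nanoTime(). The matrix size and the single warm-up pass are arbitrary illustrative choices, not SciMark's actual parameters.

```java
import java.util.Random;

public class CrunchSketch {
    static final int N = 512;

    public static void main(String[] args) {
        double[][] a = new double[N][N], b = new double[N][N], c = new double[N][N];
        Random rnd = new Random(42);
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                a[i][j] = rnd.nextDouble();
                b[i][j] = rnd.nextDouble();
            }

        multiply(a, b, c); // warm-up pass so the JIT compiles the hot loop first

        long start = System.nanoTime();
        multiply(a, b, c);
        long elapsed = System.nanoTime() - start;

        // Print a value from the result so the JIT can't discard the work.
        System.out.printf("%dx%d multiply: %.1f ms (check %.3f)%n",
                N, N, elapsed / 1e6, c[0][0]);
    }

    static void multiply(double[][] a, double[][] b, double[][] c) {
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++) {
                double aik = a[i][k];
                for (int j = 0; j < N; j++)
                    c[i][j] += aik * b[k][j];
            }
    }
}
```

Note that this measures raw CPU throughput only; it tells you nothing about the disk- and memory-bound workloads described above.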
The best test for you, if the project already exists, would be to load up Eclipse and run builds and tests for the application actually being developed. No generic benchmark can really help you there unless it matches your exact needs.
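If you want a repeatable number rather than a feel, one rough way to get one is to time your project's own build from a small harness. Here is a sketch assuming a Maven project with `mvn` on the PATH; substitute whatever build command (Ant, a headless Eclipse build, etc.) your team actually runs.

```java
import java.io.IOException;

public class BuildTimer {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Assumption: a Maven project; swap in your team's real build command.
        ProcessBuilder pb = new ProcessBuilder("mvn", "clean", "package");
        pb.inheritIO(); // stream the build's output to this console

        long start = System.nanoTime();
        int exitCode = pb.start().waitFor();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Build exited " + exitCode + " after " + elapsedMs + " ms");
    }
}
```

Run it a few times on each candidate machine and compare the medians, since the first build will be skewed by cold disk caches.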
Eclipse is very memory-hungry. Assuming the CPUs are roughly in the same range, I would give your developers the machines with the most memory.
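For what it's worth, how much of that memory Eclipse is actually allowed to use is capped by the JVM arguments after `-vmargs` in its eclipse.ini; the values below are illustrative, not recommendations.

```
-vmargs
-Xms256m
-Xmx1024m
```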
The best way to test for Eclipse would be to run Eclipse :-) I don't think any generic benchmark would help that much.
The best test is the user-frustration test: make them use Eclipse for real work for a given duration on each machine, then ask them which was better. Alternatively, use a stopwatch to record the time they spend twiddling their thumbs waiting on a compile/refresh/deployment operation.
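If you go the stopwatch route, even a trivial hypothetical helper like the one below (just a sketch, not any Eclipse API) is enough to accumulate the totals you'd compare across machines.

```java
public class WaitLog {
    private long startedAt;
    private long totalWaitMs;
    private int waitCount;

    // Call when the developer starts waiting on a compile/refresh/deploy.
    public void startWait() {
        startedAt = System.currentTimeMillis();
    }

    // Call when the operation finishes and work can resume.
    public void endWait() {
        totalWaitMs += System.currentTimeMillis() - startedAt;
        waitCount++;
    }

    public String summary() {
        return waitCount + " waits, " + totalWaitMs + " ms spent waiting";
    }
}
```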