Not a technical question, but a valid one nonetheless. Scenario:
HP ProLiant DL380 Gen8 with 2 x 8-core Xeon E5-2667 CPUs and 256GB RAM running ESXi 5.5. Eight VMs for a given vendor's system: four for test, four for production. The four servers in each environment perform different functions, e.g. web server, main app server, OLAP DB server and SQL DB server.
CPU shares are configured to stop the test environment from impacting production, and all storage is on a SAN.
We've had some queries regarding performance, and the vendor insists that we need to give the production system more memory and vCPUs. However, we can clearly see from vCenter that the existing allocations aren't being touched, e.g. a monthly view of CPU utilization on the main application server hovers around 8%, with the odd spike up to 30%. The spikes tend to coincide with the backup software kicking in.
Similar story on RAM - the highest utilization figure across the servers is ~35%.
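In case it's useful, the same numbers can be pulled programmatically rather than eyeballed in vCenter. A minimal sketch using the pyVmomi Python bindings (the hostname and credentials below are placeholders) looks something like this:

    import ssl
    import time
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder vCenter details - substitute your own.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.com", user="readonly",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every VM in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)

    try:
        for _ in range(10):                  # ten one-minute samples
            for vm in view.view:
                qs = vm.summary.quickStats   # the hypervisor's own view
                print(f"{vm.name}: {qs.overallCpuUsage} MHz CPU, "
                      f"{qs.guestMemoryUsage} MB active guest memory")
            time.sleep(60)
    finally:
        Disconnect(si)

Because quickStats come from the hypervisor rather than from inside the guest OS, the vendor can't easily wave them away as in-guest measurement noise.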
So, we've been doing some digging, using Process Monitor (Microsoft Sysinternals) and Wireshark, and our recommendation to the vendor is that they do some TNS tuning in the first instance. However, this is beside the point.
My question is: how do we get them to acknowledge that the VMware statistics that we've sent them are evidence enough that more RAM/vCPU won't help?
--- UPDATE 12/07/2014 ---
Interesting week. Our IT management have said that we should make the change to the VM allocations, and we're now waiting for some downtime from the business users. Strangely, the business users are the ones saying that certain aspects of the app are running slowly (compared to what, I don't know), but they're going to "let us know" when we can take the system down (grumble, grumble!).
As an aside, the "slow" aspect of the system is apparently not the HTTP(S) element, i.e. the "thin app" used by most of the users. It sounds like it's the "fat client" installs, used by the main finance bods, that are apparently "slow". This means that we're now considering the client and the client-server interaction in our investigations.
As the initial purpose of the question was to seek assistance as to whether to go down the "poke it" route, or just make the change, and we're now making the change, I'll close it using longneck's answer.
Thank you all for your input; as usual, serverfault has been more than just a forum - it's kind of like a psychologist's couch as well :-)
I suggest that you make the adjustments they have requested. Then benchmark the performance to show them that it made no difference. You could even go so far as to benchmark it with LESS memory and vCPU to make your point.
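The before/after comparison doesn't need fancy tooling. A minimal sketch in Python (the URL is a placeholder for whatever transaction the users call slow, and it assumes a reachable HTTPS endpoint with a valid certificate):

    import statistics
    import time
    import urllib.request

    # Placeholder: point this at whichever operation the users say is slow.
    URL = "https://appserver.example.com/report"

    def run_benchmark(n=50):
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            urllib.request.urlopen(URL).read()
            samples.append(time.perf_counter() - start)
        return samples

    before = run_benchmark()
    input("Apply the vendor's RAM/vCPU change, then press Enter...")
    after = run_benchmark()

    for label, samples in (("before", before), ("after", after)):
        print(f"{label}: median={statistics.median(samples) * 1000:.0f} ms, "
              f"p95={statistics.quantiles(samples, n=20)[18] * 1000:.0f} ms")

Median and p95 latency, side by side, are much harder for a vendor to argue with than a monthly utilization graph.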
Also, "We're paying you to support the software with actual solutions, not guesswork."
Provided you are confident you are within the system specs they document, any claim they make about requiring more RAM or CPU is one they should be able to back up. As the experts in their own system, I hold people to account on this.
Ask them specifics.
What information provided on the system indicates more RAM is needed and how did you interpret this?
What information provided on the system indicates more CPU is needed and how did you interpret this?
The data I have - at first glance - contradicts what you are telling me. Can you explain to me why I may be interpreting this incorrectly?
I am interpreting this [obvious series of data] to mean [obvious interpretation]. Can you confirm I am interpreting it correctly with regards to my problem?
Having dealt with support in the past I have asked the same questions. Sometimes I was right and they were not focusing their attention on my problem properly. Other times however, I was wrong and I was interpreting the data incorrectly, or failing to include other data which was important in my analysis.
In any case, both of these situations were a net benefit to me: either I learnt something new that I did not know before, or I got their support team to think harder about my problem and reach a decent root cause.
If the support team are unable to expand their argument logically to a point you can be satisfied with (keep an open mind yourself; be prepared to accept that your interpretation of the data is wrong), it should become very apparent in their response. Even in the worst-case scenario, you can use this as a basis for escalating the problem.
The big thing is to be able to prove that you are using best practices for your system allocation, notably RAM and CPU reservations for your SQL server.
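Setting those reservations is scriptable, too. A rough pyVmomi sketch, assuming you have already looked up the VirtualMachine object `vm` (e.g. via a container view) and with illustrative MB/MHz figures:

    from pyVmomi import vim

    # Assumes `vm` is a pyVmomi VirtualMachine object you've already located.
    spec = vim.vm.ConfigSpec()
    spec.memoryAllocation = vim.ResourceAllocationInfo(reservation=16384)  # MB floor (illustrative)
    spec.cpuAllocation = vim.ResourceAllocationInfo(reservation=4000)      # MHz floor (illustrative)
    task = vm.ReconfigVM_Task(spec=spec)  # poll the task to confirm success

A reservation gives the SQL VM a guaranteed floor regardless of what the test environment is doing on the same host.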
All this being said, the easiest thing is to make the adjustments requested, at least temporarily. If nothing else, it tends to get vendors past their foot-dragging. I can't count the number of times I've needed to do something crazy like this to satisfy a tech on the other end of the line that it really is their software misbehaving.
For this specific situation (where you have VMware and application developers or a third party who doesn't understand resource allocation), I use a week's worth of metrics obtained from vCenter Operations Manager (vCOps; download a demo if needed) to pinpoint the real constraints, bottlenecks and sizing requirements of the application's VM(s).
Sometimes I've been able to satisfy the more stubborn consumers by modifying VM reservations or changing priorities to handle contention scenarios: "If RAM/CPU are tight, YOUR VM will take precedence!" Bad, bad things have happened when I've allowed software vendors to dictate their requirements on my vSphere clusters without real analysis.
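As a sketch of that "YOUR VM takes precedence" knob, again with pyVmomi and an already-located `vm` object (note that shares, unlike a reservation, only bite when the host is actually contended):

    from pyVmomi import vim

    # Assumes `vm` is a pyVmomi VirtualMachine object you've already located.
    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level='high', shares=0))  # shares count ignored unless level='custom'
    task = vm.ReconfigVM_Task(spec=spec)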
But in general, numbers and data should win out.
An example of something I used to justify VM sizing to the developer of a Tomcat application:
Dev: The VM needs MOAR cpu!
Me: Well, memory is your biggest constraint, and here's a heat map of your performance versus time... Wednesdays at 6pm are the most stressful periods, so we can spec around that peak period. Oh, and here's a sizing recommendation based on the past 6 weeks of production metrics...
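The heat map itself is easy to approximate from an exported stats CSV. A rough pandas sketch (the filename and column names are assumptions about your export format; rename to match):

    import pandas as pd

    # Assumed export: a CSV from vCenter's performance charts with columns
    # "Time" and "CPU usage (%)" - adjust to your actual export.
    df = pd.read_csv("vm_cpu_stats.csv", parse_dates=["Time"])
    df["weekday"] = df["Time"].dt.day_name()
    df["hour"] = df["Time"].dt.hour

    # Mean CPU per (weekday, hour) cell - a textual heat map.
    heat = df.pivot_table(index="weekday", columns="hour",
                          values="CPU usage (%)", aggfunc="mean").round(1)
    print(heat)
    print("Worst cell:", heat.max().max(), "% - size around this, not the monthly average")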
I used to work in support, and part of what you're asking sounds highly rational (and probably is), but there are a few questions to ask yourself before just doing the "performance enhancement" they're requesting:
Vendors will 99 times out of 100 (in my experience - both on the support side and the customer/field side) not even deal with performance-related issues until/unless the systems match what their documentation calls for. Maybe it's a system that runs fine 99.5% of the time with 1 CPU and 512M RAM - but if the system requirements say 4 CPUs and 4G RAM and you've only got 2 CPUs and 1G RAM, they're well within their rights to demand more resources be assigned*.
It is probable that they're asking you to increase system resources because of something they found in the lab/development where an issue magically disappears once you cross a specific threshold. If that's the case, yes, it's an example of potentially poor debugging on their end, but keep in mind they don't have time to eliminate every possible bug/issue that arises - some just need to be worked around, and if that is the case here, just go with it.
There's also a not-insignificant chance that the issues you're seeing aren't even part of "their" software, but a component they rely on from some other source (vendor, OSS library, etc). I ran into this exact situation related to swap size, BEA WebLogic, and the Sun JRE at a customer a few years ago.
tl;dr:
In short, work with their support team, escalating as needed, until you find a resolution - but don't be surprised when some of the suggestions/debugging steps/fixes sound off-the-wall or pointless.
*If it truly doesn't "need" those extra resources, you're likely in a place to be able to file a doc bug / RFE for future versions - but don't push that route until you've demonstrated it's not the issue at hand
(An eBook I wrote that you may find helpful on the topic: Debugging and Supporting Software Systems)
Either ask to escalate the ticket or ask for a different rep. Depending on the vendor, escalation may help if you say that you feel the current level of support doesn't adequately address the issue. If they will not escalate, then asking for a different rep may help, because that requires much less "justification" - all it needs is for you not to be happy with the current one.
If it's a large vendor then simply closing the ticket and opening a new one on the same issue may work as it may be routed to a different rep, but I'd advise against it because it's poor form.
You could also stand your ground and ask for a rationale as to how more RAM/vCPU will help, or you could just give it more RAM/vCPU to prove that it won't help.
I'll throw in my two cents. We've been pretty successful with this approach -- much better outcomes and less frustration on everyone's part. It requires a lot more effort than the blame-game and blindly adding resources, but it also has better chances of finding the underlying problem.
When we have serious issues with our on-premises apps that are backed by vendor support contracts, and the vendors begin their dodge-and-shuffle dance (which always seems to include outlandish, non-data-driven demands for more CPU or RAM), we tend to do these three things:
Escalate the priority to system-down equivalent. They may balk, but they usually back down when you explain that the system is effectively unusable even if it's technically "working". Treat it as a serious problem for them to solve. Around here we refer to that as a tiger team, which meets daily to get status updates from all the stakeholders. Usually the vendor will be asking you to change stuff. If it's a prod system, that's problematic, but if you want them to help, you will need to accept the responsibility of helping them isolate the problem, so it helps if you've got a dev/staging environment where you can run tests.
Tell the vendor you want them to replicate your environment, so that THEY can isolate the problem in their lab. They can even host stuff in some cloud environment if need be. It does not have to be an exact match of your environment, although that would be ideal. The point is that you want the VENDOR to be actively trying to replicate your problem, so that they can test their guesswork on their system instead of yours. Ask them for the diagrams, specs, etc of that replicated environment to make sure they are doing it.
Provide them (under NDA, of course) with your actual dataset so that they can run/replay it for real instead of guessing. In our case, most of our vendor-provided app issues (both transient and chronic) turn out to be issues with the accompanying vendor-provided databases. I cannot count the number of times we've done this and they have finally pinpointed the problem down to something unexpected in the actual data: weird artifacts from app upgrades two years ago where something didn't convert cleanly; stale records exposing a problem with the GC settings; queries not working quite right because OUR data values were breaking some transmog routine in the vendor code; etc. Stuff we would never be able to identify on our own.
We've done this with quite a few vendors over the last few years, and they are initially very resistant to doing it our way. However, after it works, it always comes up as a positive highlight in the quarterly reviews we hold with our vendors. And it helps cement our technical relationship with those vendors. They don't want vague problems. They do want specific problems that they can analyze to improve their products.
Hope the suggestion helps. I know it's not a one-size-fits-all approach, but if you can swing it I think you'll find it worthwhile.
The real question is, who is in charge here? If you can't realistically switch to an alternative vendor, then they have the power, and all you can really do is go along with whatever they say and hope it will work out. Not a happy situation! Otherwise, I suggest you ask for another rep (as others have said), but make it clear you are not happy with the service and will look elsewhere if they cannot do the job.
Don't just "make the adjustments they suggested" if you're sure they won't work, as that is setting up a pattern for your relationship that will hurt you in the long run. You are paying them to provide you a service, and they shouldn't be able to dictate your actions any more than someone I hire to paint my house can dictate what colour it will be.
This may sound drastic, as it sounds like this is not a hugely critical issue, but the fact is that if they are messing you around on something minor, they will likely do the same for something big, and the last thing you want is to run into some sort of horrible charlie foxtrot six months down the line and have the same trouble with the vendor then.
Make sure that whatever steps you take to resolve the issue now, will work equally well when you're two days from a deadline and everything breaks...
I'm going to post a view from the vendor's side.
We had a customer with a recurrent problem where the performance of the software would drop off every few hours to some truly abysmal rate, then come back a few hours later.
The built-in profiler in the system indicated that the CPU (or possibly memory) speed was disgustingly slow - something like 100MHz rather than the expected 2GHz. Doubling the CPU provided by the VM didn't change the symptom, and they thought we were being wasteful.
As they couldn't get a faster CPU (more CPUs wasn't going to help), we then tried swapping TEST and PROD VMs. The problem then showed up on TEST the next day. Then we tried promoting one of the clients to a standalone (serverless) instance. No problem on that workstation while the server was choking.
They produced reports from the VM host indicating no performance problems and tried again to claim it was an application problem.
Finally, I (an engineer - I had zero support from those in dedicated support roles) asked specifically for a physical box. The customer screamed bloody murder, but with nobody having any other potential solution, they did it. What do you know, the problem magically disappeared.
We never did find out what the problem was. All benchmark programs showed normal results, but the application profiler was telling us that computing resources simply weren't adequate. There's a specific signature we look for in the profiler now; if we see it, we know before we go any further that the problem is VM interaction. It just wasn't known at the time.
They sure thought I was full of it. I wasn't. I was out of options.
EDIT, Update from years later:
With more and more customers wanting to run on VMs, and management willing to attempt to solve the problem at all costs, we got good VM hardware. I was able to construct a specialized VM burn program that ran in userspace (and required no privileges) on two single-core VMs with 512MB of RAM each, and it was able to drain a third of the memory performance out of another single-core VM - with only 4 of the 16 cores on the VM host in use and most of its RAM still free. The program raised no alarms and showed nothing out of the ordinary on the VM host or any of the guests, except that memory access was slow.
Now we can tell customers we know there is a problem with VMs, and it's not our software. We still get customer requests from time to time for VM-compatible software. I wonder why management doesn't let support tell them we were able to develop a piece of software that slows down every other VM on the same host.
The scary thing is that the technique involved is a simple transform of a well-known programming technique involving lock-free synchronization. Hundreds of software vendors could have this VM drainer in their software and not know it. Getting an atomic instruction contested that hotly should be rare, but not impossible. The amusing part of it all is that I was getting the lock to contend ACROSS VMs.
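You can't reproduce the cross-VM cache effect in plain Python (it needs contended atomic instructions, which the burn program got from that lock-free transform), but here's a loose, single-host sketch of the general shape: several worker processes hammering one shared, lock-protected counter, with per-worker throughput collapsing as contention rises.

    import multiprocessing as mp
    import time

    def hammer(counter, stop, results, idx):
        # Spin on the shared counter as fast as possible until told to stop.
        n = 0
        while not stop.is_set():
            with counter.get_lock():
                counter.value += 1
            n += 1
        results[idx] = n

    if __name__ == "__main__":
        for nprocs in (1, 2, 4, 8):
            counter = mp.Value("q", 0)        # one shared, lock-guarded 64-bit int
            stop = mp.Event()
            results = mp.Array("q", nprocs)
            procs = [mp.Process(target=hammer, args=(counter, stop, results, i))
                     for i in range(nprocs)]
            for p in procs:
                p.start()
            time.sleep(3)                     # let them fight for 3 seconds
            stop.set()
            for p in procs:
                p.join()
            total = sum(results)
            print(f"{nprocs} workers: {total // 3:,} increments/s total, "
                  f"{total // 3 // nprocs:,} per worker")

On one box you'll see the per-worker rate fall off a cliff as workers are added; the burn program achieved an analogous falling-off across guest boundaries.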
I would suggest a very different approach to the ones mentioned so far. Before arguing with the vendor, why not look more closely at the problem reported and see what that tells you?
What are the actual problems being reported, and what are the users' expectations? If a user is saying something "takes too long", ask them exactly what 'it' is (so you can reproduce it), how long they think it should take, and why they think it should take that long. If their expectations are reasonable, measure the actual performance and system impact of what they are trying to do. The fact that your system shows a 30% spike over a month does not mean it isn't running flat out at the moment the user is trying their query. If you can demonstrate to your vendor that CPU and memory are not being strained by the problematic task, then you can ask the vendor to justify recommendations that will cost you money.
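To catch that short-lived saturation in the act, sample inside the guest while the user reproduces the slow operation. A minimal sketch using the psutil library:

    import psutil

    # Run this inside the guest while the user reproduces the "slow" operation.
    # A monthly roll-up can average away a burst of 100% CPU that lasts minutes.
    print("Reproduce the slow operation now (sampling for 2 minutes)...")
    for _ in range(120):
        cpu = psutil.cpu_percent(interval=1)   # blocks for the 1-second sample
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:5.1f}%  mem={mem:5.1f}%")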