Problem - I have Logstash running on 13 different servers and I have no idea what the current CPU and memory consumption of these Java processes is.
Sure, in theory I could open Perfmon, connect to each of these 13 servers, and watch the various performance counters. In practice, that's hardly a workable approach.
So, what is the way? How is it done sanely?
The way this is handled is with a monitoring tool that monitors the servers from a centralized location.
Different monitoring tools work in different ways. Some of them use agents that must be installed on every server. Some of them use SNMP. (Well, that technically requires an agent too, but SNMP is ubiquitous on almost all server platforms.) They often leverage protocols and management mechanisms that are native to the platform they're monitoring to help them collect performance and health data. (For example, WMI for Windows.)
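To make the WMI idea concrete, here is a minimal sketch of pulling CPU and memory figures for a Java process from remote Windows hosts. It assumes the third-party Python "wmi" package (which itself only runs on Windows) and that WMI/DCOM access to the hosts is open; the hostnames and credentials are placeholders, not anything from a real product.

```python
# Minimal sketch: pull CPU/memory for the "java" process from remote Windows
# hosts over WMI. Assumes the third-party "wmi" package (pip install wmi)
# and open WMI/DCOM access. Hostnames and credentials are placeholders.
import wmi

SERVERS = ["logstash01", "logstash02"]  # ...extend to all 13 hosts

for host in SERVERS:
    conn = wmi.WMI(computer=host, user=r"DOMAIN\monitor", password="secret")
    # Win32_PerfFormattedData_PerfProc_Process exposes per-process counters;
    # the Name property is the image name without ".exe".
    for proc in conn.Win32_PerfFormattedData_PerfProc_Process(Name="java"):
        cpu_pct = int(proc.PercentProcessorTime)
        mem_mb = int(proc.WorkingSet) / (1024 * 1024)
        print(f"{host}: java CPU {cpu_pct}% / working set {mem_mb:.0f} MB")
```

A real monitoring product does essentially this on a schedule, for you, across every host, instead of you running ad-hoc queries.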
The idea is that no matter how the monitoring software chooses to collect the data (and by "data" I mean performance counters, i.e. statistics about how the computer is performing), everything is aggregated from all the different servers into a centralized repository/database. The product then ships with a "dashboard" or "management console" application that lets you view the data in a "single pane of glass" kind of way.
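For the agent-based variant, the flow looks roughly like the sketch below: each server samples its own Java processes and pushes the numbers to one central collector, which is what the dashboard reads from. The collector URL and JSON shape here are invented for illustration; every product defines its own.

```python
# Minimal sketch of the agent-push model: sample local Java processes and
# ship the numbers to a central collector. The endpoint URL and payload
# format are hypothetical; real monitoring products define their own.
import socket
import time

import psutil
import requests

COLLECTOR_URL = "http://monitoring.example.com/api/metrics"  # hypothetical

def sample_java_processes():
    samples = []
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] and "java" in proc.info["name"].lower():
            samples.append({
                "host": socket.gethostname(),
                "pid": proc.pid,
                "cpu_percent": proc.cpu_percent(interval=1.0),
                "rss_bytes": proc.info["memory_info"].rss,
                "timestamp": time.time(),
            })
    return samples

if __name__ == "__main__":
    # The central repository aggregates these posts from all 13 servers;
    # the dashboard then queries that one place instead of each host.
    requests.post(COLLECTOR_URL, json=sample_java_processes(), timeout=10)
```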
That's a super-generic answer, but it's all I could come up with without turning this into a product recommendation thread.
As Ryan said, there are several monitoring tools that can do what you want. You asked for product recommendations, so I'll tell you what I use.
My solution is Pandora FMS, a centralized monitoring tool that can collect the data you want in several ways: via the WMI protocol built into Windows, by installing agents on your servers, or over SNMP. Take a look at their website.
Just to give you more options, there are also Nagios and Zabbix, and many more besides.
Hope this can help!