I have several computers that are running applications. These computers are on the internet, in that they can connect out to the general internet over port 80. These computers have dynamic IP addresses assigned via DHCP. These computers are installed in an environment where I have almost no control over the network. I would like to install and run Performance Co-Pilot (PCP) on each computer to record system and application metrics.
Can PCP phone home from the described environment to a central monitoring system where I can aggregate the data for visualization and analysis? The central monitoring system can be at a static IP address and I have total control over that network environment. However, it isn't possible for the server to reach out to the client computers because of their dynamic IP addresses. Statistics for each client machine could be identified by a unique client identifier or MAC address, just not by IP address or DNS entry.
Is this something that should be done with a different tool? (Zabbix, Sensu)
TL;DR: Can I push PCP performance stats from clients to a server, or does the server have to request the PCP stats from a static IP or DNS entry?
PCP is "pull-based" in the sense that clients pull their desired data from the collection daemons, rather than having the daemon push it out somewhere. This includes the
pmlogger
client that creates the archive files. For centralized logging, it is typical to run manypmlogger
instances - one per target machine - on a monitoring server. Then the resulting archive files may be read there (or copied / downscaled / analyzed elsewhere).The
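For illustration (the hostnames and paths here are hypothetical, and the exact file location and options vary by PCP version and distribution), centralized logging is typically driven by per-host entries in the `pmlogger` control file on the monitoring server, one line per remote `pmcd` to pull from:

```
# /etc/pcp/pmlogger/control.d/remote-hosts   (location may vary)
# Host                P?  S?  directory                            args
client1.example.com   n   n   PCP_ARCHIVE_DIR/client1.example.com  -r -T24h10m -c config.default
client2.example.com   n   n   PCP_ARCHIVE_DIR/client2.example.com  -r -T24h10m -c config.default
```

Note that this model still has the monitoring server connecting out to each client's `pmcd` (TCP 44321 by default), so by itself it does not get around the dynamic-IP / no-inbound-access constraint described in the question.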
The `pmmgr` service makes it easy to automate management of a variable fleet of `pmlogger` instances for a networkful of machines. `pmmgr` can find machines on a network by hostname, by IP address range scanning, or by dns-sd self-advertisements.

PCP has supported the OpenMetrics standard since version 4, which means you can use Prometheus to scrape metrics from your target nodes via the PMAPI and make use of Prometheus' federation: you can have a hierarchy of monitoring servers, i.e. one Prometheus instance per remote site and one central monitoring server where you collect metrics from all remote instances for overall analysis, visualisation, etc.
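As a sketch of the federation setup (the hostnames, ports and job names below are assumptions, not taken from your environment): each remote site runs its own Prometheus instance that scrapes the PCP OpenMetrics endpoint on the local nodes (served by `pmproxy` in recent PCP releases; see pmproxy(1) for the exact port and path), and the central server then federates from those per-site instances, so only the per-site Prometheus servers need to be reachable from the centre:

```yaml
# prometheus.yml on the central monitoring server (sketch only)
scrape_configs:
  - job_name: 'federate-sites'
    honor_labels: true            # keep labels assigned by the per-site instances
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="pcp"}'           # assumes the per-site scrape job is named "pcp"
    static_configs:
      - targets:
          - 'site-a-prometheus.example.com:9090'
          - 'site-b-prometheus.example.com:9090'
```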