I'm sure this is going to be very dependent on the hard drives and other hardware in our machine, but I have no idea how to read these graphs.
This graph looks nice and low.
This one looks scary as hell.
I know that the scale is different, but how do I know when things are starting to go wrong, i.e. when the disk is actually struggling to keep up?
If you only need to know whether your disk is bottlenecking something, the basic parameters for disk performance are the queue length (looking at read and write queues separately is sometimes insightful), the disk idle time percentage, and possibly the request service time.

The queue length tells you how many requests for disk data are outstanding but not yet serviced because the disk is busy doing other things. If it stays above the number of spindles (i.e. the number of disks in your array if you have one, or simply 1 if you only have a single disk to monitor) for a prolonged period of time, your applications are certainly waiting on I/O a lot and you should look into it. You will see the disk idle time drop to 0% (or the disk busy time rise to 100%) at the same time, because if there is always at least one request in the queue, the disk never has time to be idle.
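If you want to watch exactly these counters outside of Resource Monitor, here is a minimal sketch in Python that samples them through the built-in Windows `typeperf` tool. The counter paths assume an English-language Windows install and the `_Total` physical disk instance, and the "queue > 1 and ~0% idle" flag simply encodes the single-spindle rule of thumb above; adjust both for your setup.

```python
# Minimal sketch: sample disk queue length and idle time via typeperf (Windows only).
# Assumes English counter names and the _Total PhysicalDisk instance.
import csv
import io
import subprocess

COUNTERS = [
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    r"\PhysicalDisk(_Total)\% Idle Time",
]

def sample_disk_counters(samples=5, interval=1):
    """Collect a few samples of the counters; typeperf emits CSV on stdout."""
    cmd = ["typeperf", *COUNTERS, "-sc", str(samples), "-si", str(interval)]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # Keep only complete CSV rows (timestamp + one column per counter),
    # skipping blank lines, status messages and rows with missing values.
    rows = [r for r in csv.reader(io.StringIO(out))
            if len(r) == len(COUNTERS) + 1 and all(r)]
    return rows[0], rows[1:]  # header row, data rows

if __name__ == "__main__":
    header, data = sample_disk_counters()
    for timestamp, queue_len, idle_pct in data:
        queue_len, idle_pct = float(queue_len), float(idle_pct)
        # Rule of thumb for a single disk: sustained queue length above the
        # spindle count (1 here) combined with ~0% idle time means the
        # applications are waiting on the disk.
        flag = "  <-- disk is a bottleneck" if queue_len > 1 and idle_pct < 5 else ""
        print(f"{timestamp}  queue={queue_len:.3f}  idle={idle_pct:5.1f}%{flag}")
```

A single flagged sample means nothing; it is the sustained pattern over many samples that indicates the disk cannot keep up.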
The Resource Monitor shows the disk's busy time percentage (the blue lines in the graph) as well as the queue length and the throughput. Throughput on its own is of little diagnostic value, although it is nice to look at. None of the given graphs looks "scary" in the "my system is under high load" sense: the disks are mostly idle (blue line crawling along the bottom, mean disk queue length well below 0.05).