Suppose one has 100 machines (each with a 2TB hard drive), all connected over a network.
Is there a way (in Linux) to unite the combined 200TB of disk space into a single folder that can then be shared over NFS with every machine on the network?
Going this route, all machines would be able to read from and write to the same folder, so I/O would be spread evenly across them. Is this a good idea if one needs a large file system to store hundreds of TB of data? (Note: the data will be split into many smaller files, roughly 500GB each.) A sketch of the setup I have in mind follows below.
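To make the question concrete, here is a minimal sketch of just the sharing half, assuming a hypothetical head node `storage01`, an export path `/bigpool`, and a `192.168.0.0/24` subnet (all made up for illustration):

```
# On storage01: export the (somehow pooled) directory via NFS.
# Entry in /etc/exports:
/bigpool 192.168.0.0/24(rw,sync,no_subtree_check)

# Reload the export table so the new entry takes effect.
exportfs -ra

# On every other machine: mount the shared folder.
mkdir -p /mnt/bigpool
mount -t nfs storage01:/bigpool /mnt/bigpool
```

This only covers sharing an existing directory; it says nothing about uniting the 100 underlying drives into `/bigpool` in the first place, which is the part I am asking about.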
Is there a ready-made solution (preferably an open-source one) that achieves just that?