I've got multiple servers which all need to have the same content in /home. In other words, if the file /home/user1/test.txt is updated on server A, this needs to be replicated to all other servers in the cluster.
Is it possible to use GlusterFS for this purpose? That is, let each server have a full copy of all data locally - which that server will be working on - and solely use GlusterFS to take care of replicating this data to the other servers?
I'm not interested in combined storage; rather, I want all data present on every machine, with GlusterFS only replicating it to the other machines.
Yes. This is pretty much a reference case for GlusterFS. One of its best features is that the data is stored locally on each node and also replicated to the other cluster members. That means you can access it through the Gluster client - which gives you failover - or read it directly from the "storage brick" directory on the local disk, which is handy for backups.
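For reference, a two-node replicated volume can be set up roughly like this (the hostnames `server1`/`server2`, the brick path `/data/brick1`, and the volume name `homevol` are placeholders - adjust them to your environment):

```shell
# On server1: add the second node to the trusted storage pool
gluster peer probe server2

# Create a replicated volume with one brick per server.
# With replica count == number of servers, every node holds a full copy.
gluster volume create homevol replica 2 \
    server1:/data/brick1 server2:/data/brick1

# Start the volume so clients can mount it
gluster volume start homevol
```

With that in place, each server mounts the volume (e.g. onto /home) and writes through the mount point; Gluster keeps the local brick and the remote replicas in sync.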
Gluster also supports exporting volumes over NFSv3, but you don't get failover with the NFS mechanism, unlike the FUSE-based client.
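As a sketch, mounting with the FUSE client versus plain NFS might look like this (the hostnames and the volume name `homevol` are assumptions carried over from above):

```shell
# FUSE/native client: learns about all replicas and fails over automatically
mount -t glusterfs server1:/homevol /home

# Optionally name a fallback server for fetching the volume file at mount time
mount -t glusterfs -o backupvolfile-server=server2 server1:/homevol /home

# NFSv3 export: works with a stock NFS client, but you stay pinned to the
# single server you mounted from - no failover
mount -t nfs -o vers=3,mountproto=tcp server1:/homevol /home
```

Note that one write path should be chosen and used consistently - writing straight into the brick directory bypasses replication.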
The Gluster Quick Start guide should be a pretty good starting point for you, and the Administration Guide covers the rest.
That said, GlusterFS can behave oddly when used as database backend storage - I wouldn't want to put /var/lib/mysql on it, for example. The locking gets a bit messy.