Right now I have a two-brick replicated GlusterFS volume:
Volume Name: gv0
Type: Replicate
Volume ID: id-here
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: s1.example.com:/data/brick1/gv0
Brick2: s2.example.com:/data/brick1/gv0
Options Reconfigured:
performance.readdir-ahead: on
/etc/fstab (server1):
/dev/vdb1 /data/brick1 xfs defaults 1 2
s1.example.com:/gv0 /mnt/glusterfs glusterfs defaults,_netdev,direct-io-mode=disable 0 0
The GlusterFS volume is mounted locally, yet reads and writes are sluggish. I know the connection between server1 and server2 is slow, but ideally it should write to the local brick first and then sync to the remote one, correct? I'm having issues where my upload application times out when storing files on the locally mounted GlusterFS volume.
I'm using the native FUSE client. All servers are KVM VMs backed by qcow2 images with caching disabled, and the GlusterFS brick partition is formatted XFS.
Benchmarks
GlusterFS volume:
[~]@s1:$ dd if=/dev/zero of=/mnt/glusterfs/zero1 bs=64k count=40
40+0 records in
40+0 records out
2621440 bytes (2.6 MB) copied, 17.3101 s, 151 kB/s
Normal volume:
[~]@s1:~$ dd if=/dev/zero of=zero1 bs=64k count=40
40+0 records in
40+0 records out
2621440 bytes (2.6 MB) copied, 0.00406856 s, 644 MB/s
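For a fairer comparison, the same test can be run with dd's conv=fdatasync option (standard GNU coreutils), which forces the data to disk before reporting; without it, the local test above mostly measures the page cache:

[~]@s1:~$ dd if=/dev/zero of=zero1 bs=64k count=40 conv=fdatasync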
Writes on replicated volumes are always synchronous: the native FUSE client writes to all bricks in parallel and waits for every replica to acknowledge, so throughput is capped by the slowest link, in your case the connection between server1 and server2.
Some time ago I specifically asked about that on the GlusterFS mailing list, and the short reply was that it is not possible to have a background, continuous sync process while keeping writes on the localhost as fast as possible.
A suggested workaround was to deliberately break the replication, write to your localhost, and then restore the replication; the self-heal daemon would kick in and background-sync all the changes.
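A minimal sketch of that sequence, assuming it is acceptable to take the remote brick offline for a while (volume and host names match the output above; killing the brick process is just one way to break replication, and the PID placeholder is whatever the status command reports):

# on s2: find the brick's PID and stop it, breaking replication
[~]@s2:$ sudo gluster volume status gv0
[~]@s2:$ sudo kill <brick-pid-from-status>

# on s1: writes now land on the local brick only, at local speed
[~]@s1:$ cp upload.bin /mnt/glusterfs/

# on s2: restart the dead brick, then let self-heal re-sync in the background
[~]@s2:$ sudo gluster volume start gv0 force
[~]@s1:$ sudo gluster volume heal gv0
[~]@s1:$ sudo gluster volume heal gv0 info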
This workaround clearly works only when your remote copy is read-only; if it is read-write, you will run into split-brain scenarios. If a read-only remote copy is all you need, though, you can use GlusterFS's geo-replication feature, which is based on rsync and fully decoupled from localhost writes.
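A rough sketch of a geo-replication setup, assuming passwordless SSH from s1 to s2 and a separate slave volume on the remote side, here called gv0-slave (the slave volume name is illustrative):

# on s1: create, start, and check a geo-replication session to the slave volume
[~]@s1:$ sudo gluster volume geo-replication gv0 s2.example.com::gv0-slave create push-pem
[~]@s1:$ sudo gluster volume geo-replication gv0 s2.example.com::gv0-slave start
[~]@s1:$ sudo gluster volume geo-replication gv0 s2.example.com::gv0-slave status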