My Redis machine's RDB size is about 1 GB. My app uses Redis heavily and I cannot let the master handle `BGSAVE`, or it slows down everything running on the master's machine. So I have to prevent the master from doing `BGSAVE` and let only the slave handle persistence.
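Roughly, the split I mean looks like this (just a sketch using redis-py; the host names and ports are placeholders, and the same directives can of course go straight into each instance's redis.conf):

```python
import redis

# Master: disable automatic RDB snapshots so it never forks for BGSAVE.
# (Assumes the master should do no persistence at all; adjust if AOF is wanted.)
master = redis.Redis(host="redis-master", port=6379)
master.config_set("save", "")          # no automatic snapshots
master.config_set("appendonly", "no")  # no AOF on the master either

# Slave: keep snapshotting enabled so persistence happens only here.
slave = redis.Redis(host="redis-slave", port=6379)
slave.config_set("save", "900 1 300 10 60 10000")  # default-style snapshot rules

# Note: CONFIG SET does not rewrite redis.conf by itself, so the same
# directives should also be set in each instance's config file to survive restarts.
```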
The problem comes when the master needs to be restarted. It has to copy the RDB from the slave to load the initial data into memory. This process takes around 30-60 seconds, and as my dataset grows, that time will only increase.
I have tried NFS so that master and slave read/write the same RDB, but replication forces the master to do a `BGSAVE` the first time the slave connects, and that fails because the source RDB (master) and the destination RDB (slave) are the same file.
The question is: "Is there a better way to do this initial load of the RDB into memory?"
The short answer is that there is no inherently 'faster' way to load data into Redis from an AOF or RDB file. You might get a small improvement from faster disks, but as you have observed, it is a linear-time operation.
The long answer is that you are hitting the limits of your current architecture and you'd be well served by rethinking it. There are a few options in this scenario, covered on the web in some detail (your own experience will have to bear out which solution works best for you.)
A few options, most complex first:
Partitioning your data wouldn't really help you at this point, IMO, as your dataset is still relatively small. Overall, it is the availability and coordination of the master that you need to address while maintaining persistence.
Finally, the more complex answer above involves horizontal scaling. You are experiencing stress under `BGSAVE` with a single host & small dataset: you should consider scaling vertically to solve your immediate problem (i.e. a more powerful machine for the master). This way you completely duck the problem of `BGSAVE` on slave / restart master / load from slave, at least for now. It likely also will prove more cost-effective in the short run.
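If you want to check how much `BGSAVE` is actually hurting the master before committing to a bigger box, the fork and snapshot timings reported by `INFO` are a reasonable proxy (a rough sketch with redis-py; host and port are placeholders):

```python
import redis

# Connect to the instance under suspicion (placeholder host/port).
r = redis.Redis(host="redis-master", port=6379)

info = r.info()  # INFO, all sections, parsed into a dict

# How long the last fork took (microseconds). This is the pause the whole
# instance sees whenever BGSAVE kicks off.
print("latest_fork_usec:", info.get("latest_fork_usec"))

# How long the last background save took end to end, and whether it succeeded.
print("rdb_last_bgsave_time_sec:", info.get("rdb_last_bgsave_time_sec"))
print("rdb_last_bgsave_status:", info.get("rdb_last_bgsave_status"))

# Memory footprint drives both the fork cost and the copy-on-write overhead
# during BGSAVE, so track it alongside the timings above.
print("used_memory_human:", info.get("used_memory_human"))
```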