My boss recently asserted, in conversation, that Redis supports some configuration options for controlling maximum key or key/value sizes ... so we could set some option to prevent our applications from creating keys or key/value pairs larger than, say, 50KB.
My impression is that no such option exists and that we'd have to patch the sources and build our own binary to add such a feature. (For this question, forcing the application programmers to mediate all access through Lua scripts or through something like twemproxy would NOT be an option).
Did I miss something in the Redis documentation somewhere?
Also what are the best practices for failover these days? Is Redis Sentinel ready for prime time? Is the Linux-HA OCF Heartbeat/Pacemaker/Cluster Glue trio still the best for this?
Pretty sure no such feature exists. All you can limit is an instance's total footprint, with `maxmemory`. But it would be tricky to enforce a per-key limit in a way that made sense - I'd argue that working with the application developers to use Redis sensibly is better than throwing an error when a value size crosses some arbitrary boundary. (Why would you want this kind of limit anyway - what harm is a large key or value doing?)
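Since Redis itself won't enforce a cap, any size check has to live on the application side. A minimal sketch in Python - the 50KB ceiling, the function names, and the wrapper are illustrative assumptions, not part of any Redis API:

```python
# Hypothetical application-side size guard; 50KB matches the limit
# mentioned in the question, not any Redis setting.
MAX_VALUE_BYTES = 50 * 1024

def check_size(key, value):
    """Return the encoded value, or raise ValueError if it exceeds the cap."""
    data = value.encode("utf-8") if isinstance(value, str) else value
    if len(data) > MAX_VALUE_BYTES:
        raise ValueError(
            f"value for {key!r} is {len(data)} bytes; limit is {MAX_VALUE_BYTES}"
        )
    return data

# With a redis-py client, a thin wrapper could look like (untested sketch):
# def safe_set(client, key, value):
#     client.set(key, check_size(key, value))
```

The point is that the guard runs before the command ever reaches Redis, so it works with any client library and any deployment topology.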
Redis doesn't seem to be very interested in preventing people from shooting themselves in the foot; `FLUSHALL` and `DEBUG SEGFAULT` are right there at your fingertips. For failover, I've been hammering on the new version of Sentinel and it seems solid - some rough edges, but on the whole it works as expected. I'm probably going to start using it in production on a limited basis soon.
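For reference, a minimal Sentinel setup just points each sentinel at the master and sets the quorum; the master name, address, and timeouts below are illustrative values, not recommendations:

```
# sentinel.conf - monitor a master called "mymaster", quorum of 2 sentinels
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

Sentinel discovers the replicas from the master itself, so only the master needs to be declared.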
Indeed, Redis does not offer this kind of feature.
For failover see: