I have a VPC in AWS that contains a public and private subnet. In the private subnet, I have two load balanced EC2 app servers, and an EC2 Database/Cache server.
The two app servers connect to the Database/Cache server for database queries, and an instance of Redis also runs on that server. Both app servers are configured to connect to this Redis instance.
My question is: is this performant? Would it be better to install an instance of Redis on each of the app server nodes?
Or are we better off leaving Redis on the database/cache server?
It's going to depend on your usage, but lacking other information, I'd say it's better to go with a centralized cache.
Adding Entries and Cache Hits
If you have a centralized cache, then adding to the cache benefits all EC2 instances.
But if you have separate caches, then adding to the cache only benefits that EC2 instance.
Example: Imagine a DB query is executed from EC2 instance 1, then the same query is executed by EC2 instance 2.
With a centralized cache, query 2 will be a cache hit, whereas with separate caches, both will be cache misses. Adding more EC2 instances compounds the problem: every new instance has to warm its own cache from scratch, each miss hitting the database.
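The hit/miss difference above can be sketched with the cache-aside pattern. This is a minimal, self-contained illustration: a plain dict stands in for Redis (in production the app servers would issue GET/SET commands through a Redis client), and the query function and result strings are hypothetical.

```python
# Minimal sketch of the cache-aside pattern. A dict stands in for Redis
# so the example is self-contained; real app servers would call a Redis
# client against the cache server instead.

def make_app_server(cache, db_log):
    """Return a query function for one app server, bound to a cache."""
    def query(sql):
        if sql in cache:                # cache hit: skip the database
            return cache[sql], "hit"
        db_log.append(sql)              # cache miss: record a DB round trip
        result = f"rows for {sql!r}"    # stand-in for the real result set
        cache[sql] = result             # populate the cache for next time
        return result, "miss"
    return query

# Centralized cache: both app servers share one store.
shared_cache, db_log = {}, []
app1 = make_app_server(shared_cache, db_log)
app2 = make_app_server(shared_cache, db_log)

app1("SELECT * FROM users")             # miss: populates the shared cache
_, outcome = app2("SELECT * FROM users")
print(outcome)                          # hit: app2 benefits from app1's miss
print(len(db_log))                      # 1 database round trip in total

# Separate caches: each server only benefits from its own misses.
db_log_local = []
app1_local = make_app_server({}, db_log_local)
app2_local = make_app_server({}, db_log_local)
app1_local("SELECT * FROM users")
_, outcome = app2_local("SELECT * FROM users")
print(outcome)                          # miss: each server pays the DB cost
print(len(db_log_local))                # 2 round trips for the same query
```

With N app servers behind the load balancer, the separate-cache setup pays up to N database round trips per distinct query; the centralized cache pays one.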
Cache Invalidations
With separate caches, you cannot reliably invalidate entries: an invalidation only affects the local cache, so the other caches continue to serve the stale/invalid data.
Conclusion
Go with a centralized cache.
Although a local cache can offer lower latency per hit, it isn't the best architectural choice and it isn't the right way to grow your app.
As @Matt Houser explains, a centralized cache is the right way to go because it better supports your application's growth.