AWS SysOps questions on load balancing, caching and reverse proxies (Redis, Memcached, nginx, Varnish)

First half:

ElastiCache is just a marketing label; it's a managed service offering Memcached or Redis. When reading advice, keep in mind that much of it is situation-specific.

If you have a single fat server with plenty of RAM, it is beneficial to have a cache on the machine itself: you save a ton of work/time on network overhead, and the various worker processes on the same machine still share it. But, like you say, if you run light throwaway instances behind a load balancer, you might want to share the cache.

You can also do both: we have an app that runs on 4 beefy EC2 instances, each with a local short-term memcached (random stuff cached for seconds/minutes) and a shared ElastiCache memcached (ORM cache, sessions, etc.).
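In Django, that two-tier setup can look something like the sketch below. This is a minimal illustration, not our actual config: it assumes Django 3.2+ with pymemcache installed, and the ElastiCache endpoint, timeouts, and alias names are made-up placeholders.

```python
# settings.py -- minimal sketch of a local + shared cache split.
CACHES = {
    # Local memcached on each EC2 instance: no network hop, but only
    # visible to workers on the same box. Short TTLs, no invalidation.
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
        "TIMEOUT": 60,
    },
    # Shared ElastiCache memcached: ORM cache, sessions, anything that
    # every instance must agree on. Endpoint below is a placeholder.
    "shared": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "my-cluster.abc123.use1.cache.amazonaws.com:11211",
        "TIMEOUT": 3600,
    },
}

# Sessions go in the shared cache so any instance can serve any user.
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "shared"
```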

Redis is more than just key-value; it has some really amazing basic data types (sorted sets are very handy, but there is much more, like blocking list pops and pub/sub features). It is great as a building block in various apps (this gets specific, but it does things nothing else can do as easily with the same performance).
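For a taste of those types, here's a quick redis-py sketch (connection details and key names are made up; assumes redis-py 3.x and a local Redis on the default port):

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Sorted set: e.g. a leaderboard, scored and ordered by Redis itself.
r.zadd("leaderboard", {"alice": 120, "bob": 95, "carol": 140})
top3 = r.zrevrange("leaderboard", 0, 2, withscores=True)

# Blocking list pop: a dirt-simple work queue. A worker blocks here
# until a producer does r.rpush("jobs", ...), instead of busy-polling.
job = r.blpop("jobs", timeout=5)  # None if nothing arrives within 5s

# Pub/sub: fire-and-forget messaging to whoever is subscribed right now.
p = r.pubsub(ignore_subscribe_messages=True)
p.subscribe("events")
r.publish("events", "cache-invalidated:user:42")
message = p.get_message(timeout=1)
```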

Keep in mind that if you need to actively invalidate/update cache data, then all instances should look at the same shared cache (unless you know what you're doing). So we run our ORM cache, which needs invalidation signals, on the shared instance, and use short-term local caches for stuff that can be a little stale: because HTTP involves so much caching anyway, the application needs to be able to deal with some stale data regardless.
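To make the split concrete, here's a hypothetical sketch using the cache aliases from the settings above; `load_profile_from_db` and `render_sidebar` are made-up stand-ins, not real code from our app:

```python
from django.core.cache import caches

local = caches["default"]   # per-instance memcached, short TTLs
shared = caches["shared"]   # ElastiCache, visible to every instance

def get_profile(user_id):
    # Data that must be invalidated lives in the shared cache.
    key = f"profile:{user_id}"
    profile = shared.get(key)
    if profile is None:
        profile = load_profile_from_db(user_id)  # hypothetical ORM call
        shared.set(key, profile, timeout=3600)
    return profile

def update_profile(user_id, data):
    save_profile_to_db(user_id, data)  # hypothetical ORM call
    # Delete in the shared cache: every instance sees it immediately.
    shared.delete(f"profile:{user_id}")

def get_sidebar_html():
    # Data that may be a little stale just expires locally; no explicit
    # invalidation, at most ~60s out of date.
    html = local.get("sidebar")
    if html is None:
        html = render_sidebar()  # hypothetical renderer
        local.set("sidebar", html, timeout=60)
    return html
```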

/r/django Thread