
Redis over EFS for Caching in a Moodle Cluster Solution

If you are on this page, you are probably scaling Moodle horizontally. The web server tier is the easy part; the challenge is shared file storage, and that is usually solved with a shared filesystem such as EFS.

You may also want to read about horizontal scaling for a Moodle cluster solution.

On AWS, EFS works well as shared storage for media data (moodledata).

But if you also use EFS for the file-store cache (required and enabled by default), i.e. $CFG->cachedir points to an EFS location, system response becomes slow.
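Moodle's own config-dist.php notes that $CFG->cachedir must be shared by all cluster nodes, while $CFG->localcachedir must not be shared. A minimal config.php sketch (the mount points are example paths, not from this setup):

```php
<?php
// config.php (sketch; mount points are illustrative)

// $CFG->cachedir MUST be shared by all cluster nodes, so in a cluster it
// typically lands on the shared EFS mount - this is what slows responses:
$CFG->cachedir = '/mnt/efs/moodledata/cache';

// $CFG->localcachedir must NOT be shared; keep it on fast local disk
// (EBS or instance store) on each web node:
$CFG->localcachedir = '/var/cache/moodle';
```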

The Challenge with Memcached as the Application Cache Store

There are many posts recommending Memcached. The problem is that Memcached guarantees neither data persistence nor locking, and both are base requirements for the application caches stored in $CFG->cachedir. Memcached remains a good option as a session store.

Just for reference: Moodle has three cache modes (application, session, and request [localcache]).
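These modes map onto Moodle's Cache API (MUC). A sketch of how each mode is obtained, assuming a Moodle context (the component/area names such as local_demo are hypothetical, made up for illustration):

```php
<?php
// Sketch of Moodle's Cache API (MUC); runs only inside Moodle.
// Component/area names below are hypothetical.

// Application mode: shared by all web nodes, so it needs a shared store
// (the cachedir file store by default, or Redis).
$app = cache::make_from_params(cache_store::MODE_APPLICATION, 'local_demo', 'results');
$app->set('answer', 42);

// Session mode: per-user data that lives for the user's session.
$ses = cache::make_from_params(cache_store::MODE_SESSION, 'local_demo', 'peruser');

// Request mode ("static request" / localcache): discarded when the request ends.
$req = cache::make_from_params(cache_store::MODE_REQUEST, 'local_demo', 'scratch');
```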

Newer Moodle versions also offer MongoDB as an application cache store. Redis is more popular and supports the session store as well, while the MongoDB store is application-mode only.
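For reference, pointing Moodle's sessions at Redis is a config.php change, while the Redis application cache store is added through the admin UI. A sketch, assuming an ElastiCache endpoint (the host below is a placeholder, not a real endpoint):

```php
<?php
// config.php (sketch; the host is a placeholder for your ElastiCache endpoint)
$CFG->session_handler_class = '\core\session\redis';
$CFG->session_redis_host = 'my-cluster.xxxxxx.0001.use1.cache.amazonaws.com';
$CFG->session_redis_port = 6379;
```

The Redis application cache store itself is created under Site administration > Plugins > Caching > Configuration and then mapped to the Application mode.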

So the choice comes down to shared storage (EFS) or Redis.

Between EFS and Redis, I prefer Redis, based on the following test results.

Test Configuration

Load Test Environment

  • 2 EC2 t2.micro web servers
  • 1 medium RDS DB server
  • 1 ALB
  • 1 Windows server in another region used as the load generator

For the load-testing methodology, see load testing on Moodle: https://developerck.com/load-testing-on-moodle/

EFS as the Caching Store

20 users, 30 seconds ramp-up time, 1 iteration

30 users, 30 seconds ramp-up time, 1 iteration

40 users, 30 seconds ramp-up time, 1 iteration

Note: the system failed under this load; treat it as an upper cap.

Redis as the Caching Store (AWS ElastiCache, micro node)

20 users, 30 seconds ramp-up time, 1 iteration

30 users, 30 seconds ramp-up time, 1 iteration

40 users, 30 seconds ramp-up time, 1 iteration

Note: the system failed under this load; treat it as an upper cap.

Load Test Result Comparison

Disclaimer: the tests were repeated with the same configuration, changing only the caching store. In both cases server stats were monitored and each run started from a clean state; before applying load, the caches were warmed manually by following the same steps. Despite these precautions, the caching store may not be the only reason for the difference in response times; other factors may contribute, but it is certainly one of them.

Comparing the charts above, there was a large improvement once I moved the application cache from EFS to Redis.

Test (users/ramp-up/iterations)    EFS                 Redis
20/30/1                            2000-10000 ms       1000-2200 ms
30/30/1                            mostly > 9000 ms    mostly < 6000 ms

If you are using only one web server, local EBS volume results are about 10 times better than Redis. It is only when you have more than one application server and need shared storage that Redis performs much better than EFS as the cache store.

More about Redis on AWS: https://aws.amazon.com/redis/