Tagged: AWS

Setting Up Moodle with AWS CloudFront CDN

Before going further, here is my existing architecture:

  • ALB and EC2 for compute
  • RDS for database
  • EFS for moodledata
  • Redis for cache

Moodle version is 3.8

My domain points to the load balancer, and the site is being served.

Now, my objective is to deliver the site through the AWS CloudFront CDN.

Setting Up CloudFront :-

Navigate to the AWS CloudFront service and start creating a distribution. You will find a web form with the following sections:

  • Origin Settings
  • Cache Settings
  • Distribution Settings

Origin Settings :-

  • Origin Domain Name: select the ALB or EC2 instance on which the application is set up. This is the only required field.
    • Origin Path: set this if your site runs in a subdirectory, or if you only want to deliver that directory's content; otherwise leave it blank.
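For reference, these fields map onto the `Origins` block of a CloudFront distribution config (the same shape the CloudFront API and boto3 use). A minimal sketch, where the ALB DNS name and origin id are placeholders for your own values:

```python
# Sketch of the "Origins" part of a CloudFront distribution config
# (the shape used by the CloudFront API / boto3). The ALB DNS name
# and origin id below are placeholders, not real values.
origins = {
    "Quantity": 1,
    "Items": [
        {
            "Id": "moodle-alb-origin",  # any unique label for this origin
            "DomainName": "my-alb-123456.us-east-1.elb.amazonaws.com",
            "OriginPath": "",           # blank: serve from the web root
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                # how CloudFront connects to the ALB, independent of
                # how viewers connect to CloudFront
                "OriginProtocolPolicy": "https-only",
            },
        }
    ],
}
```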

Cache Settings :-

  • Viewer Protocol Policy is the main thing to handle, and it depends on how your Moodle implementation behaves:
    • whether your site is using SSL
    • whether you are enforcing HTTPS for users
    • whether the load balancer is used to offload SSL

Since we force HTTPS, in my case I have set it to HTTPS Only.
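In the distribution config this choice lives in the default cache behavior as `ViewerProtocolPolicy`. A sketch, assuming the HTTPS-only setup described above (the origin id is a placeholder):

```python
# Default cache behavior sketch (CloudFront API shape). Since the site
# forces HTTPS, ViewerProtocolPolicy is "https-only"; "redirect-to-https"
# and "allow-all" are the alternatives for other SSL setups.
default_cache_behavior = {
    "TargetOriginId": "moodle-alb-origin",  # must match the origin's Id
    "ViewerProtocolPolicy": "https-only",
    "MinTTL": 0,  # let origin Cache-Control headers drive cache lifetimes
}
```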

Distribution Settings :-

  • If the site uses HTTPS, you need to import your certificate through the AWS ACM service (for CloudFront it must be in the us-east-1 region); it will then be available for selection.
  • CloudFront will provide a unique domain name, e.g. <d3e5asd3gad9wckaz>.cloudfront.net. If you want to use your own domain, add it under CNAMEs so that the site is also accessible via your domain.
  • You can send access logs to an S3 bucket; logging and the log prefix are optional.
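These distribution-level settings look roughly like this in the API shape; the alias, certificate ARN, and log bucket below are placeholders for your own values:

```python
# Distribution-level settings sketch (CloudFront API shape).
# All concrete values below are placeholders.
aliases = {"Quantity": 1, "Items": ["moodle.example.com"]}  # your CNAME

viewer_certificate = {
    # the certificate must live in ACM in us-east-1 for CloudFront
    "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abcd-1234",
    "SSLSupportMethod": "sni-only",
}

logging_config = {
    "Enabled": True,            # logging is optional; set False to skip it
    "IncludeCookies": False,
    "Bucket": "my-cf-logs.s3.amazonaws.com",
    "Prefix": "moodle/",        # optional log prefix
}
```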
Final Architecture

So we are done with the setup. It will take a few minutes to deploy. Once deployed, try accessing the site via the CNAME, and Moodle should be served through CloudFront.


  • Here we are delivering the complete site through CloudFront.
  • Only GET requests are cached; POST and other methods are forwarded directly to the origin.
  • You can control caching behaviour in various ways:
    • header values
    • Apache mod_headers settings and values
    • maximum time to cache (TTL)
  • You can hook into the request/response flow by invoking Lambda (Lambda@Edge), i.e. manipulate the request before it reaches the web server, and the response before it reaches the end user.
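As an illustration, here is a minimal Lambda@Edge-style handler (Python runtime) that manipulates the request before CloudFront forwards it to the web server; the header name and value are only examples, not anything Moodle requires:

```python
# Minimal Lambda@Edge-style handler for an origin-request trigger.
# It injects a custom header before the request reaches the web server.
# The header name/value are illustrative placeholders.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    # Lambda@Edge headers are keyed by lowercase name, each value being
    # a list of {"key": ..., "value": ...} dicts
    request["headers"]["x-served-via"] = [
        {"key": "X-Served-Via", "value": "cloudfront-edge"}
    ]
    return request
```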
Pros :-
  • You can utilize the best of a CDN to deliver the content: caching improves performance, you get more control over each request, and end users get better speed because content is served from the nearest edge location.

Note :- The Origin Response Timeout can be set to at most 60 seconds. Normally no request should take longer than that, but heavy on-demand processing or report downloads can lead to a 504 timeout. The same condition applies to the ALB, but there the timeout can be increased beyond 60 seconds.

Redis Over EFS for Caching in a Moodle Cluster Solution

If you are on this page, it means you are scaling Moodle. The web server part is straightforward; the challenge comes with shared file storage, which is resolved by using a shared store such as EFS.

In case you want to read about horizontal scaling for a Moodle cluster solution.

In the case of AWS, EFS works well as shared storage for media data.

But if you also use EFS as the file cache store (required and enabled by default), i.e. $CFG->cachedir points to an EFS location, the system responds slowly.

Challenge with Using Memcache as the Application Cache

So, we see there are a lot of posts about memcache. But the problem with Memcache is that it provides neither data guarantees nor locking, and both are base requirements for the application caches stored in $CFG->cachedir. Memcache is a good option for the session store.
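To illustrate why locking is a base requirement here: without it, every node that sees a cache miss rebuilds the same entry at the same time. Below is a toy sketch of the lock-then-recheck pattern a proper cache store gives you; all names are illustrative, not Moodle internals:

```python
import threading

# Toy illustration of why the application cache needs locking:
# with lock-then-recheck, one thread rebuilds a missing entry and the
# rest reuse it; without locking (plain memcache style), every node
# would run the expensive rebuild simultaneously.
cache = {}
lock = threading.Lock()
builds = {"count": 0}

def expensive_build():
    builds["count"] += 1                # track how many rebuilds happened
    return "compiled-language-strings"  # stand-in for a costly cache entry

def get_with_lock(key):
    if key in cache:                    # fast path: entry already present
        return cache[key]
    with lock:
        if key not in cache:            # re-check after acquiring the lock
            cache[key] = expensive_build()
    return cache[key]

threads = [threading.Thread(target=get_with_lock, args=("lang",)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# despite 8 concurrent readers, the entry was built exactly once
```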

Just for reference: there are three cache modes in Moodle (application, session, and request [localcache]).

With newer Moodle versions we also have MongoDB as a choice for the application cache. Redis is more popular and supports the session store as well, while the MongoDB store is application-mode only.

So the choices are: shared storage (EFS) or Redis.

I will go with Redis, because of the following test results.


Load test environment: 2 AWS t2.micro web servers, 1 medium DB server, 1 ALB, and one Windows server in another region used for load generation.
Load test config: https://developerck.com/load-testing-on-moodle/

EFS as the caching store

20 users, 30-second ramp-up time, 1 iteration

30 users, 30-second ramp-up time, 1 iteration

40 users, 30-second ramp-up time, 1 iteration

Note: the system failed this test; it only establishes an upper cap.

Redis as the caching store (AWS ElastiCache, micro node)

20 users, 30-second ramp-up time, 1 iteration

30 users, 30-second ramp-up time, 1 iteration

40 users, 30-second ramp-up time, 1 iteration

Note: the system failed this test; it only establishes an upper cap.


Disclaimer: the tests were repeated with the same configuration, changing only the caching store. In both cases server stats were monitored, and each run started from a virgin state. Before applying load, caches were warmed manually by following the same steps. In spite of all these considerations, the caching store may not be the only reason for the difference in response-time patterns; there may be other factors as well, but it is certainly one of them.

Comparing the charts above, one can see a large improvement once I moved the application cache from EFS to Redis.

Users/Ramp-up/Iterations    EFS cache store      Redis cache store
20/30/1                     2000-10000 ms        1000-2200 ms
30/30/1                     mostly > 9000 ms     mostly < 6000 ms

If you are using only one web server, EBS volume results are 10 times better than Redis. It is just that, once there is more than one application server and a shared storage, Redis performs much better than EFS as the cache store.

More about Redis: https://aws.amazon.com/redis/