brendanheywood / moodle-local_cleanurls

Lets drag Moodle's url structure into this century...

Clicking Course Completion Checkbox Causes JS MIME Type Error #110

Open nyanginator opened 6 years ago

nyanginator commented 6 years ago

I have a course with Course Completion enabled, and an activity whose Activity Completion is set to "manual", meaning a user can mark the activity complete by clicking its checkbox.

With Clean URLs enabled, clicking the checkbox triggers an error: the path to MathJax.js is rewritten incorrectly, so the browser rejects the script with a MIME type error. Chrome's console outputs a bit more information than Firefox's:

Refused to execute script from 'https://cdnjs.cloudflare.com/cleanmoodle/athjax/2.7.1/MathJax.js?delayStartupUntil=configured' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.

I would say it shouldn't be necessary to clean any JavaScript paths (especially external ones), so maybe they're being cleaned by accident? Also interesting to note is that the "/athjax/" in the path is supposed to be "/mathjax/". For some reason, the cleaner inserts the Moodle directory path ("cleanmoodle" in this case) and strips the first letter 'm' from "mathjax".

The correct path is:

https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?delayStartupUntil=configured

My $CFG->wwwroot = 'http://localhost/cleanmoodle'. As usual, I replicated this error on a new Moodle install on XAMPP.

Here's a quick fix that makes sure the URL is actually on the same host as the site before cleaning. In the execute() function of /local/cleanurls/classes/local/cleaner/cleaner.php:


    private function execute() {
        // Only clean URLs that point at this Moodle site; leave external
        // URLs (e.g. CDN-hosted scripts like MathJax) untouched.
        if ($this->originalurl !== null) {
            global $CFG;
            $sitehost = parse_url($CFG->wwwroot, PHP_URL_HOST);
            if ($this->originalurl->get_host() !== $sitehost) {
                return;
            }
        }
        ...
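The same guard can be sketched as a standalone helper outside Moodle. This is only an illustration of the idea under assumptions: the function name is hypothetical, and Moodle's moodle_url object is replaced with plain URL strings.

```php
<?php
// Hypothetical standalone version of the same-host guard above.
// Returns true only when $url lives on the same host as the site root,
// so external CDN URLs are never candidates for cleaning.
function is_cleanable_url(string $url, string $wwwroot): bool {
    $sitehost = parse_url($wwwroot, PHP_URL_HOST);
    $urlhost  = parse_url($url, PHP_URL_HOST);
    return $urlhost !== null && $urlhost === $sitehost;
}

$wwwroot = 'http://localhost/cleanmoodle';

// An external CDN script must never be rewritten.
var_dump(is_cleanable_url(
    'https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js',
    $wwwroot
)); // bool(false)

// A local page on the same host is a candidate for cleaning.
var_dump(is_cleanable_url(
    'http://localhost/cleanmoodle/course/view.php?id=2',
    $wwwroot
)); // bool(true)
```

With this check in place, the MathJax URL passes through unmodified and the MIME type error does not occur.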
brendanheywood commented 6 years ago

Thanks @nyanginator - yes, it should definitely not be cleaning anything like JS. The whole architecture has been inverted so that it only cleans URLs which are human-facing and known to be cleanable.

Please keep the bugs coming. We are working through a large number of issues and architecture changes to get this polished. We won't get to everything right away, but we will address them :)