Pages that are indexed in search results have their entire contents indexed, including any HTML code snippets. These snippets appeared in the search results unsanitised, so it was possible to render arbitrary HTML or run arbitrary scripts.
This is a largely theoretical security issue; to exploit it, an attacker would need to find a way of committing malicious code to a page indexed by a site that uses tech-docs-gem (which are typically not editable by untrusted users). Their code would also be limited by the relatively short length that's rendered in the corresponding search result. Nevertheless, the XSS would then be triggerable by visiting a pre-constructed URL (/search/index.html?q=some+search+term), which users could be tricked into clicking on through social engineering.
What’s changed
This commit sanitises the HTML before rendering it to the page. It does so whilst retaining the <mark data-markjs="true"> behaviour that highlights the search term in the result.
I've used jQuery's text() function for sanitisation, as that is the approach used elsewhere in the project (1).
I did consider using native JavaScript (using the same approach as in Mustache 2) to avoid the jQuery dependency, but a copied implementation may itself contain bugs and would leave two sanitisation approaches to maintain, so I opted against it. For future reference, the code in this commit can be swapped out with:
var entityMap = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
  '/': '&#x2F;',
  '`': '&#x60;',
  '=': '&#x3D;'
};
var sanitizedContent = String(content).replace(/[&<>"'`=\/]/g, function (s) {
  return entityMap[s];
});
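To illustrate the fallback above, here is a standalone sketch (not code from this commit; the `escapeHtml` wrapper name is mine) showing how the entity map turns a script-injecting snippet into inert text:

```javascript
// Standalone sketch of the native escaping fallback described above.
// The entity table mirrors Mustache's; the escapeHtml name is illustrative.
var entityMap = {
  '&': '&amp;',
  '<': '&lt;',
  '>': '&gt;',
  '"': '&quot;',
  "'": '&#39;',
  '/': '&#x2F;',
  '`': '&#x60;',
  '=': '&#x3D;'
};

function escapeHtml(content) {
  // Replace every character that could open a tag, attribute, or entity.
  return String(content).replace(/[&<>"'`=\/]/g, function (s) {
    return entityMap[s];
  });
}

// A malicious search-result snippet becomes harmless escaped text:
console.log(escapeHtml('<img src=x onerror="alert(1)">'));
// → &lt;img src&#x3D;x onerror&#x3D;&quot;alert(1)&quot;&gt;
```

Note that `=` and `/` are escaped as well, which is stricter than strictly necessary for text nodes but matches Mustache's table.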
Identifying a user need
The look and interactions of the gem are unchanged. This simply addresses a security issue.