Riccardo-ten-Cate / skf

Security knowledge framework

sad #74

Open Riccardo-ten-Cate opened 5 years ago

Riccardo-ten-Cate commented 5 years ago

aso

skf-integration[bot] commented 5 years ago

Security knowledge framework!


Does the application enforce the use of secure passwords

 Description:

Applications should encourage the use of strong passwords and passphrases. Preferably the
password policy should not put limitations or restrictions on the chosen passwords (for example
on the length of a password). Whenever the application supports strong passwords and
the use of password managers, the chance of an attacker performing a successful brute-force
attack drops significantly, and the application becomes easier to use with password managers.

 Solution:

Verify that password entry fields allow, or encourage, the use of passphrases, and do not prevent
password managers, long passphrases or highly complex passwords from being entered.
A password policy ideally should be:
* at least 12 characters in length
* passwords longer than 64 characters are allowed
* all special characters from the Unicode character set are permitted (including emoji, kanji, multiple whitespaces, etc.)
* no limit on the number of characters of the same type (lowercase letters, uppercase letters, digits, symbols)
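
A minimal sketch of such a permissive length check; the class and method names are illustrative and not part of SKF:

```
// Minimal sketch of a permissive password policy check (illustrative helper):
// enforce a minimum length but no upper limit, composition rules or
// character-set restrictions.
public final class PasswordPolicy {

    private static final int MIN_LENGTH = 12;

    public static boolean isAcceptablePassword(String password) {
        if (password == null) {
            return false;
        }
        // Count code points so emoji, kanji and other Unicode characters
        // are not penalised compared to plain ASCII characters.
        int length = password.codePointCount(0, password.length());
        // Only a lower bound; passwords longer than 64 characters, whitespace
        // and any Unicode symbol are deliberately allowed.
        return length >= MIN_LENGTH;
    }
}
```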

Permit Password Change

Description:

Users should be able to update their password whenever necessary. Consider, for example, the scenario in which a user reuses the same password for multiple services. If that password is leaked, the user has to immediately update their credentials in every application where it is used. Therefore, if the application does not provide an accessible password update function, there is a risk that the account will be taken over.

Solution:

Applications should provide users with a function that allows them to change their own password.

Unauthorized credential changes

 Description:

An application which offers user login functionality usually has an administration page
where user data can be modified. When a user wants to change this data they should
specify their current password.

 Solution:

When changing user credentials or the email address, the user must always enter a valid
password in order to apply the changes. This is also called re-authentication or
step-up/adaptive authentication. Whenever a user re-authenticates, the current
session ID value should also be refreshed in order to fend off so-called session hijackers.

Verify Breached Passwords

 Description:

Multiple databases of leaked credentials have been released through breaches over the years. If users choose passwords that have already been leaked, they are vulnerable to dictionary attacks.

 Solution:

Verify that passwords submitted during account registration, login, and password change are checked against a set of breached passwords. In case the chosen password has already been breached, the application must require the user to re-enter a non-breached password.
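
One common way to implement this is the public Have I Been Pwned "Pwned Passwords" range API, which uses a k-anonymity model: only the first five characters of the password's SHA-1 hash ever leave the application. A rough sketch, assuming Java 11+ and outbound HTTPS access; class and method names are illustrative:

```
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Rough sketch of a breached-password check against the Pwned Passwords
// range API: hash the password, send only the 5-character hash prefix,
// and look for the remaining suffix in the response.
public final class BreachedPasswordCheck {

    public static boolean isBreached(String password) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(password.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02X", b));
        }
        String prefix = hex.substring(0, 5);
        String suffix = hex.substring(5);

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.pwnedpasswords.com/range/" + prefix)).build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Each response line looks like "SUFFIX:COUNT".
        for (String line : response.body().split("\r?\n")) {
            if (line.startsWith(suffix)) {
                return true; // password appears in a known breach
            }
        }
        return false;
    }
}
```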

Provide Password Strength Checker

 Description:

Users tend to choose easily guessable passwords. Therefore, it is suggested to implement functionality that encourages them to set passwords of higher complexity.

 Solution:

Applications should provide users with a password strength meter during account registration and password changes.

no password rotation policy

Description:
Some policies require users to change passwords periodically, often every 90 or 180 days. 
The benefit of password expiration, however, is debatable. Systems that implement such 
policies sometimes prevent users from picking a password too close to a previous selection.

This policy can often backfire. Some users find it hard to devise "good" passwords that are 
also easy to remember, so if people are required to choose many passwords because they have 
to change them often, they end up using much weaker passwords; the policy also encourages 
users to write passwords down. Also, if the policy prevents a user from repeating a recent password, 
this requires that there is a database in existence of everyone's recent passwords (or their hashes) 
instead of having the old ones erased from memory. Finally, users may change their password repeatedly
within a few minutes, and then change back to the one they really want to use, circumventing the 
password change policy altogether.

Solution:
Only force users to update their passwords when the password strength enforced by the application
is no longer sufficient to withstand brute-force attacks due to increases in computing power.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

Forgot password functions

 Description:

Whenever the application provides a forgot-password function or another
type of recovery method, there are several proven, hardened ways to let
the user recover their password.

 Solution:

The recommended solution is to use TOTP (the Time-based One-Time Password algorithm). This
method is an example of a hash-based message authentication code (HMAC). It combines a
secret key with the current timestamp using a cryptographic hash function to generate
a one-time password. Because network latency and out-of-sync clocks can result in the password
recipient having to try a range of possible times to authenticate against, the timestamp typically
increases in 30-second intervals, which thus cuts the potential search space.

The other option is to use a mathematical-algorithm-based one-time password method. This
type of one-time password uses a complex mathematical algorithm, such as a hash chain, to generate
a series of one-time passwords from a secret shared key. Each password cannot be guessed even when
previous passwords are known. The open OATH algorithms are standardized; other algorithms are
covered by U.S. patents. Each password is observably unpredictable and independent of previous ones.
Therefore, an adversary would be unable to guess what the next password may be, even with
knowledge of all previous passwords.

An example of a hard-token device based on such algorithms would be a YubiKey;
an example of a soft-token TOTP application would be Google Authenticator.

The last resort would be to send a new password by email. This mail should contain a reset link with
a token which is valid for a limited amount of time. Additional authentication based on soft tokens
(e.g. an SMS token, a native mobile application, etc.) can be required as well before the link is
sent. Also, make sure that whenever such a recovery cycle is started, the application does not
reveal the user’s current password in any way.
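
For illustration, a bare-bones version of the TOTP calculation described above (RFC 6238/4226 style: HMAC over the current 30-second time step, then dynamic truncation to a six-digit code). A production system should use a vetted library and handle clock skew and rate limiting; the class name is illustrative:

```
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

// Bare-bones sketch of the TOTP calculation: HMAC over the 30-second time
// step, then dynamic truncation to a 6-digit one-time password.
public final class TotpSketch {

    public static int totp(byte[] sharedSecret, long unixTimeSeconds) throws Exception {
        long timeStep = unixTimeSeconds / 30;                 // 30-second intervals
        byte[] counter = ByteBuffer.allocate(8).putLong(timeStep).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] hash = mac.doFinal(counter);

        // Dynamic truncation: take 4 bytes at an offset derived from the hash itself.
        int offset = hash[hash.length - 1] & 0x0F;
        int binary = ((hash[offset] & 0x7F) << 24)
                   | ((hash[offset + 1] & 0xFF) << 16)
                   | ((hash[offset + 2] & 0xFF) << 8)
                   |  (hash[offset + 3] & 0xFF);
        return binary % 1_000_000;   // display with leading zeros, e.g. %06d
    }
}
```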

user notification on critical state changing operations

Description:
When a user is informed of critical operations, the user can determine
whether the notification was triggered by their own actions or whether it indicates
a potential compromise of their account.

Solution:

Verify that secure notifications are sent to users after updates
to authentication details, such as credential resets, email or address changes,
or logins from unknown or risky locations. Users must also be notified when
password policies change or when any other important update requires action from the
user to increase the security of their account.

The use of push notifications, rather than SMS or email, is preferred, but in the
absence of push notifications, SMS or email is acceptable as long as no sensitive information is disclosed
in the notification.

Secrets should be secure random generated

 Description:

Secret keys, API tokens, and passwords must be dynamically generated. Whenever these tokens
are not dynamically generated they can become predictable and be used by attackers to compromise
user accounts.

 Solution:

When it comes to API tokens and secret keys, these values have to be dynamically generated and valid only once.
The secret token should be cryptographically secure random, with at least 120 bits of effective entropy, salted with a unique and random 32-bit value and hashed with an approved (one-way) hashing function.
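
A minimal sketch of generating such a single-use token with a cryptographically secure random number generator (16 bytes = 128 bits of entropy, which exceeds the 120-bit requirement; names are illustrative):

```
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch: generate a single-use token with a cryptographically
// secure random number generator.
public final class TokenGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    public static String newToken() {
        byte[] bytes = new byte[16];          // 128 bits of entropy
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```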

Passwords, on the other hand, should be created by the user rather than assigned as a
dynamically generated value. The user should be presented a one-time link containing a
cryptographically random token, by means of an email or SMS, which is used to activate the
account and set a password of their own.

Password leakage

 Description:

After completing a password recovery flow, the user should not be sent a plain-text
password to their email address. The application should also under no circumstances disclose the old or current password
to the user.

 Solution:

The application should under no circumstances disclose the user's current, old or new password in plain text.
This behavior makes the application susceptible to side-channel attacks and undermines the
confidentiality of the passwords, since they could be compromised by someone looking over another
user's shoulder to see the password.

No shared knowledge for secret questions

 Description:

Whenever an application asks a user a secret question, e.g. as part of a forgot-password
function, these questions should not be based on shared knowledge an attacker could gather from
the web, to prevent the attacker from compromising the account through this function.

 Solution:

Secret questions should never rely on shared knowledge or on predictable or easily
guessable values.

Otherwise the answers to these secret questions can easily be looked up on the internet by means
of social media accounts and the like.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

The login functionality should always generate a new session id

 Description:

Whenever a user is successfully authenticated, the application should generate a
new session cookie.

 Solution:

The login functionality should always generate (and use) a new session ID after a
successful login. This is done to prevent an attacker from performing a session fixation attack
on your users.

Some frameworks, such as older .NET applications, do not provide the possibility to change the
session ID on login. Whenever this problem occurs you could set an extra random cookie on
login with a strong token and store this value in a session variable.

You can then compare the cookie value with the session variable in order to prevent
session fixation, since the authentication no longer relies solely on the session ID and
the random cookie cannot be predicted or fixed by the attacker.
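
As an illustration, in a Java servlet (3.1+) environment the session identifier can be regenerated right after the credentials have been verified; the authenticate() helper below is hypothetical:

```
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: regenerate the session identifier immediately after a successful
// login (Servlet 3.1+). Older containers can instead invalidate the old
// session and create a fresh one.
public class LoginServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) {
        if (authenticate(request.getParameter("username"), request.getParameter("password"))) {
            // New session ID for the existing session, preventing session fixation.
            request.changeSessionId();
            request.getSession().setAttribute("authenticated", Boolean.TRUE);
        }
    }

    private boolean authenticate(String username, String password) {
        return false; // placeholder for the application's own credential check
    }
}
```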

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

The logout functionality should revoke the complete session

 Description:

When the logout functionality does not revoke the complete session, an attacker with access to
the session cookie could still impersonate the user even after the user has logged out of the application.

 Solution:

The logout functionality should revoke the complete session whenever a user
wants to terminate their session.

Each framework has its own guide for achieving this revocation.
It is also recommended to create test cases that you follow to ensure
session revocation in your application.
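
A sketch of such a logout handler in a Java servlet environment: the session is invalidated server-side and the session cookie is expired client-side (the cookie name is assumed to be the container default JSESSIONID):

```
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Sketch of a logout handler: revoke the session on the server and
// expire the session cookie in the browser.
public class LogoutServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) {
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate();                 // server-side revocation
        }
        Cookie cookie = new Cookie("JSESSIONID", "");
        cookie.setMaxAge(0);                      // instruct the browser to delete it
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```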

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

Password change leads to destroying concurrent sessions

 Description:

Whenever a user changes their password, the user should be granted the option
to terminate all other concurrent sessions. This countermeasure helps to evict
potential attackers who are riding on a hijacked session.

Note: Whenever users are granted the possibility to change their passwords,
      do not forget to make them re-authenticate or to use a form of step-up
      or adaptive authentication.

 Solution:

Verify the user is prompted with the option to terminate all other active sessions
after a successful password change.

concurrent session handling

 Description:

You should limit and keep track of all active concurrent sessions.
Whenever the application discovers concurrent sessions it should notify the user
and give them the opportunity to end the other sessions.

With this defense in place it becomes harder for attackers to hijack a user's session, since
the user will be notified about concurrent sessions.

 Solution:

The application should keep track of and limit all granted sessions.
It should store the user's IP address, session ID and user ID. After storing these values
it should do regular checks to see if there are:

1. Multiple active sessions linked to the same user ID
2. Multiple active sessions from different locations
3. Multiple active sessions from different devices

and limit or destroy sessions when they exceed an accepted threshold.

The more critical the application, the lower the accepted threshold for
concurrent sessions should be.

Session cookies without the Secure attribute

 Description:

The Secure flag is an option that can be set when creating a cookie.
This flag ensures that the browser will not send the cookie over an unencrypted
connection, so the session cookie cannot leak over a non-encrypted link.

 Solution:

When creating a session cookie which is sent over an encrypted connection
you should set the Secure flag, and the flag should be set with every Set-Cookie.
This instructs the browser to never send the cookie over HTTP.
The purpose of this flag is to prevent the accidental exposure of the cookie value if a user
follows an HTTP link.
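
For example, in a Java servlet application the flag can be set per cookie; a small sketch (the cookie name is illustrative):

```
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Sketch: mark a cookie as Secure so the browser only sends it over HTTPS.
public final class SecureCookieExample {

    public static void addSessionCookie(HttpServletResponse response, String sessionId) {
        Cookie cookie = new Cookie("SESSIONID", sessionId);
        cookie.setSecure(true);    // never sent over plain HTTP
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```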

Session cookies without the HttpOnly attribute

 Description:

The HttpOnly flag is an option that can be set when creating a cookie. This flag ensures that the cookie cannot be read or modified by JavaScript, so an attacker cannot steal the cookie if a cross-site scripting vulnerability is present in the application.

 Solution:

The HttpOnly flag should be set to deny malicious scripts access to cookie values such as the session ID. Also, disable unnecessary HTTP request methods such as TRACE; misconfiguration of the allowed HTTP request methods can lead to the session cookie being stolen even though HttpOnly protection is in place.
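
In a Java servlet container (3.0+) both HttpOnly and Secure can also be enforced centrally for the session cookie; a sketch using a context listener:

```
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.SessionCookieConfig;
import javax.servlet.annotation.WebListener;

// Sketch (Servlet 3.0+): enforce HttpOnly and Secure on the container-managed
// session cookie for the whole application.
@WebListener
public class SessionCookieHardening implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        SessionCookieConfig config = event.getServletContext().getSessionCookieConfig();
        config.setHttpOnly(true);  // not readable from JavaScript
        config.setSecure(true);    // only sent over HTTPS
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // nothing to clean up
    }
}
```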

SameSite attribute

Description:
The SameSite attribute prevents the browser from sending the cookie along with cross-site requests.
The main goal is to mitigate the risk of cross-origin information leakage. It also provides some
protection against cross-site request forgery attacks.

Solution:
The strict value will prevent the cookie from being sent by the browser to the target site in all
cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this
would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion
forum or email, GitHub will not receive the session cookie and the user will not be able to access the project.

A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external
sites, so the strict flag would be most appropriate there.

The default lax value provides a reasonable balance between security and usability for websites that want
to maintain a user's logged-in session after the user arrives from an external link. In the above GitHub scenario,
the session cookie would be allowed when following a regular link from an external website, while still blocking it in
CSRF-prone request methods (e.g. POST).

As of November 2017 the SameSite attribute is implemented in Chrome, Firefox, and Opera.
Since version 12.1 Safari also supports it. Windows 7 with IE 11 lacks support as of December 2018
(see caniuse.com).
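
Older servlet Cookie APIs expose no SameSite setter, so a common workaround is to emit the Set-Cookie header directly; a sketch (cookie name and value handling are illustrative):

```
import javax.servlet.http.HttpServletResponse;

// Sketch: set the SameSite attribute by writing the Set-Cookie header directly,
// since older servlet Cookie APIs do not expose a SameSite setter.
public final class SameSiteCookieExample {

    public static void addSessionCookie(HttpServletResponse response, String sessionId) {
        response.addHeader("Set-Cookie",
                "SESSIONID=" + sessionId + "; Path=/; Secure; HttpOnly; SameSite=Lax");
    }
}
```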

host prefix

Description:

The '__Host-' prefix signals to the browser that both the Path=/ and Secure attributes are required,
and, at the same time, that the Domain attribute must not be present.
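
A sketch of what setting such a prefixed cookie could look like (note Path=/, Secure and the absence of a Domain attribute; names are illustrative):

```
import javax.servlet.http.HttpServletResponse;

// Sketch: a "__Host-" prefixed cookie must be Secure, have Path=/ and
// must not carry a Domain attribute.
public final class HostPrefixCookieExample {

    public static void addSessionCookie(HttpServletResponse response, String sessionId) {
        response.addHeader("Set-Cookie",
                "__Host-SESSIONID=" + sessionId + "; Path=/; Secure; HttpOnly; SameSite=Lax");
    }
}
```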

Cross subdomain cookie attack

 Description:

A quick overview of how it works:

1. A website www.example.com hands out subdomains to untrusted third parties.
2. One such party, Mallory, who now controls evil.example.com, lures Alice to her site.
3. A visit to evil.example.com sets a session cookie with the domain .example.com in Alice's browser.
4. When Alice visits www.example.com, this cookie will be sent with the request, as the cookie specification states, and Alice will have the session specified by Mallory's cookie.
5. Mallory can now use Alice's account.

 Solution:

In this scenario changing the session ID on login does not make any difference, since
Alice is already logged in when she visits Mallory's evil web page.

It is good practice to use a completely different domain for all trusted activity.

For example Google uses google.com for trusted activities and *.googleusercontent.com
for untrusted sites.

Also, when setting your cookies, specify which domains they are allowed to
be sent to. Especially on your trusted domain you do not want to leak cookies to unintended
subdomains; it is highly recommended not to use wildcards when setting this option.

High value transactions

 Description:

Whenever there are high-value transactions, a normal static username/password authentication method does
not suffice to ensure a high level of security. Whenever the application processes high-value transactions, ensure that
risk-based re-authentication, two-factor authentication or transaction signing is in place.

 Solution:

1. Risk-based authentication:
Risk-based authentication is a non-static authentication
system which takes into account the profile of the agent requesting access to
the system to determine the risk profile associated with that transaction.

The risk profile is then used to determine the complexity of the challenge.
Higher risk profiles lead to stronger challenges, whereas a static username/password may suffice for
lower-risk profiles. A risk-based implementation allows the application to challenge the user for additional
credentials only when the risk level is appropriate.

2. Two-factor authentication:
Multi-factor authentication (MFA) is a method of computer access control in which a user is
granted access only after successfully presenting several separate pieces of evidence to an
authentication mechanism, typically at least two of the following categories: knowledge (something they know),
possession (something they have), and inherence (something they are).

3. Transaction signing:
Transaction signing (or digital transaction signing) is the process of calculating a keyed hash function
to generate a unique string which can be used to verify both the authenticity and integrity of an online transaction.

A keyed hash is a function of the user's private or secret key and the transaction details,
such as the destination account number and the transfer amount.

To provide a high level of assurance of the authenticity and integrity of
the hash it is essential to calculate the hash on a trusted device, such as a separate smart card reader.
Calculating the hash on an Internet-connected PC or mobile device such as a mobile telephone/PDA would be
counterproductive, as malware and attackers can attack these platforms and potentially subvert the signing process itself.
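
A simplified illustration of such a keyed hash over the transaction details using HMAC-SHA256; in practice the secret key should be held on a trusted device rather than a general-purpose server or client, and the field names are illustrative:

```
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Simplified sketch of transaction signing: a keyed hash (HMAC-SHA256) over
// the transaction details (destination account and amount).
public final class TransactionSigning {

    public static String sign(byte[] secretKey, String destinationAccount, String amount)
            throws Exception {
        String payload = destinationAccount + "|" + amount;
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] signature = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signature);
    }
}
```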

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

All authentication controls must fail securely

 Description:

Handling errors securely is a key aspect of secure coding.
There are two types of errors that deserve special attention. The first is exceptions
that occur in the processing of a security control itself. It is important that these
exceptions do not enable behavior that the countermeasure would normally not allow.
As a developer, you should consider that there are generally three possible outcomes
from a security mechanism:

1. Allow the operation
2. Disallow the operation
3. Exception

In general, you should design your security mechanism so that a failure follows the same execution path
as disallowing the operation.

 Solution:

Make sure all access control systems are thoroughly tested for failing securely before
using them in your application. It is common to create dedicated unit tests especially
for this purpose.

Insecure direct object references

 Description:

Applications frequently use the actual name or key of an object when generating web pages. 
Applications don’t always verify the user is authorized for the target object. 
This results in an insecure direct object reference flaw. Testers can easily manipulate parameter 
values to detect such flaws and code analysis quickly shows whether authorization is properly verified.

The most classic example:
The application uses unverified data in a SQL call that is accessing account information:

```
String query = "SELECT * FROM accts WHERE account = ?";
PreparedStatement pstmt = connection.prepareStatement(query , ... );
pstmt.setString( 1, request.getParameter("acct"));
ResultSet results = pstmt.executeQuery();
```

The attacker simply modifies the ‘acct’ parameter in their browser to send whatever 
account number they want. If not verified, the attacker can access any user’s account, instead of 
only the intended customer’s account.

http://example.com/app/accountInfo?acct=notmyacct

 Solution:

Preventing insecure direct object references requires selecting an approach
for protecting each user-accessible object (e.g., object number, filename):

Use per-user or per-session indirect object references. This prevents attackers from directly
targeting unauthorized resources. For example, instead of using the resource's database key,
a drop-down list of six resources authorized for the current user could use the numbers 1 to 6 to
indicate which value the user selected. The application has to map the per-user indirect reference
back to the actual database key on the server.

Check access. Each use of a direct object reference from an untrusted source must include an access control
check to ensure the user is authorized for the requested object, as in the sketch below.
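
A sketch of the access check approach in Java, scoping the lookup to the authenticated user; table and column names are illustrative:

```
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sketch of the "check access" approach: the lookup is scoped to the
// authenticated user's ID, so a manipulated 'acct' parameter cannot reach
// another customer's data. Table and column names are illustrative.
public final class AccountAccess {

    public static boolean canAccessAccount(Connection connection, long authenticatedUserId,
                                           String requestedAccount) throws SQLException {
        String query = "SELECT 1 FROM accts WHERE account = ? AND owner_user_id = ?";
        try (PreparedStatement pstmt = connection.prepareStatement(query)) {
            pstmt.setString(1, requestedAccount);
            pstmt.setLong(2, authenticatedUserId);
            try (ResultSet results = pstmt.executeQuery()) {
                return results.next();   // a row exists only if the user owns the account
            }
        }
    }
}
```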

Cross site request forgery

 Description:

Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web site,
email, blog, instant message, or program causes a user's web browser to perform an unwanted
action on a trusted site for which the user is currently authenticated.

The impact of a successful cross-site request forgery attack is limited to the
capabilities exposed by the vulnerable application. For example, this attack could result
in a transfer of funds, changing a password, or purchasing an item in the user's context.
In effect, CSRF attacks are used by an attacker to make a target system perform a
function (funds transfer, form submission, etc.) via the target's browser without the
knowledge of the target user, at least until the unauthorised function has been committed.

 Solution:

To arm an application against automated attacks and tooling you need to use unique tokens
which are included in the application's forms, API calls or AJAX requests.
Any state-changing operation requires a secure random token (e.g. a CSRF token) to protect
against CSRF attacks. Characteristics of a CSRF token are a unique, large random
value generated by a cryptographically secure random number generator.

The CSRF token is then added as a hidden field to forms and validated on the server side whenever
a user sends a request to the server.

Note:
Whenever the application is a REST service using tokens such as JWTs, and these tokens are sent
in request headers rather than stored in cookies, the application should not be susceptible to CSRF attacks, since a successful CSRF attack depends on the browser's cookie jar.
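
A condensed sketch of a session-bound CSRF token in a Java servlet setting: generated with a secure RNG, stored in the session, rendered as a hidden form field, and compared in constant time on every state-changing request (attribute and parameter names are illustrative):

```
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Condensed sketch of a session-bound CSRF token life cycle.
public final class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();

    public static String issueToken(HttpSession session) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        session.setAttribute("csrfToken", token);
        return token; // render this as a hidden form field
    }

    public static boolean isValid(HttpServletRequest request) {
        String expected = (String) request.getSession().getAttribute("csrfToken");
        String submitted = request.getParameter("csrfToken");
        return expected != null && submitted != null
                && MessageDigest.isEqual(expected.getBytes(), submitted.getBytes());
    }
}
```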

Two factor authentication

 Description:

Two-factor authentication must be implemented to protect your application's users against unauthorized use of the application.

Whenever a user's username and password are leaked or disclosed by an application in whatever fashion, the
user's account should still be protected by two-factor authentication mechanisms to prevent attackers
from logging in with the credentials.

 Solution:

Multi-factor authentication (MFA) is a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism, typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are).

Examples of two-factor/multi-factor authentication are:

1. Google Authenticator
   Google Authenticator is an application that implements two-step verification services using the Time-based
   One-time Password algorithm (TOTP) and the HMAC-based One-time Password algorithm (HOTP).

2. YubiKey
   The YubiKey is a hardware authentication device manufactured by Yubico that supports one-time passwords, public key
   encryption and authentication, and the Universal 2nd Factor (U2F) protocol developed by the FIDO Alliance (FIDO U2F).
   It allows users to securely log into their accounts by emitting one-time passwords or using a FIDO-based public/private
   key pair generated by the device.

Directory listing

 Description:

Whenever directory listing is enabled, an attacker could gain sensitive information about
the system's hierarchical structure and learn about directories or files which should
possibly not be publicly accessible. An attacker could use this information to
expand the attack surface. In some cases this could even lead to an attacker gaining knowledge of
credentials or old vulnerable demo functions, which might lead to remote code execution.

 Solution:

Different types of servers require different approaches to disable
directory listing. For instance, Apache uses a .htaccess file to disable directory listing.
On IIS 7, directory listing is disabled by default.

Step up or adaptive authentication

 Description:

Whenever a user browses a section of a web-based application that contains sensitive information, the user should be challenged to authenticate again using a higher-assurance credential before being granted access to this information.
This is to prevent attackers from reading sensitive information after they have successfully hijacked a user account.

 Solution:

Verify the application requires additional authorization (such as step-up or adaptive authentication) so the user is challenged before being granted access to sensitive information. The same applies to making critical changes to an account or performing a critical action.
Segregation of duties should be applied for high-value applications to enforce anti-fraud controls as per the risk of the application and past fraud.

Verify that structured data is strongly typed and validated

 Description:

Whenever structured data is strongly typed and validated against a defined schema, the application
can behave as a defensible, proactive application. The application can then detect anything
that falls outside of its intended operation by means of the defined schemas and
reject the input if the schema checks fail.

 Solution:

Verify that structured data is strongly typed and validated against a defined schema,
including allowed characters, length and pattern (e.g. credit card numbers or telephone numbers),
or by validating that two related fields are reasonable, such as checking that suburbs and zip or
post codes match.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

XSS injection

 Description:

Every time the application receives user input, whether it is displayed on screen or processed
in the application background, these parameters should be escaped or checked for malicious
code in order to prevent cross-site scripting injections.
When an attacker gains the possibility to perform an XSS injection,
he is given the opportunity to inject HTML and JavaScript code directly into the
application. This could lead to accounts being compromised by stealing session cookies, or could directly
affect the operation of the target application.

Although templating engines (Razor, Twig, Jinja, etc.) and context-aware frameworks (Angular, React, etc.)
do a lot of automatic escaping for you, these frameworks should always be validated for effectiveness.

 Solution:

In order to prevent XSS injections, all user input should be escaped or encoded.
You could start by sanitizing user input as soon as it enters the application,
preferably using a so-called whitelisting method.
This means you should not check for malicious content such as specific tags,
but only allow the expected input. Any input which is outside of the intended operation
of the application should immediately be detected and rejected.
Do not try to rewrite or "fix" the input in any way, because converting characters could introduce a new type of attack.

The second step would be encoding all parameters or user input before putting them into
your HTML, using encoding libraries specially designed for this purpose.

You should take into consideration that there are several contexts for encoding user input when
escaping against XSS injections. Amongst others, these contexts are:

* HTML encoding, for whenever your user input is displayed directly in your HTML.
* HTML attribute encoding, the type of encoding/escaping that should be applied
  whenever your user input is displayed in an attribute of your HTML tags.
* HTML URL encoding, the type of encoding/escaping that should be applied whenever user input is used in an href attribute.

JavaScript encoding should be used whenever parameters are rendered via JavaScript. HTML encoding alone does not cover this context, so payloads targeting the JavaScript context would not be neutralized by the normal encoding/escaping methods.
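
As an illustration of context-specific encoding, the OWASP Java Encoder project offers per-context encoders; this sketch assumes the org.owasp.encoder dependency is available on the classpath:

```
import org.owasp.encoder.Encode;

// Illustration of context-specific output encoding with the OWASP Java Encoder.
public final class OutputEncodingExample {

    public static String render(String userInput) {
        String html      = "<p>" + Encode.forHtml(userInput) + "</p>";                              // HTML body context
        String attribute = "<input value=\"" + Encode.forHtmlAttribute(userInput) + "\">";          // attribute context
        String href      = "<a href=\"/search?q=" + Encode.forUriComponent(userInput) + "\">q</a>"; // URL context
        String script    = "<script>var q = '" + Encode.forJavaScript(userInput) + "';</script>";   // JavaScript context
        return html + attribute + href + script;
    }
}
```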

type checking and length checking

 Description:

Type checking, length checking and whitelisting are essential parts of a defense-in-depth strategy to make
your application more resilient against input injection attacks.

Example:

```
$query = "SELECT * FROM pages WHERE id=" . mysql_real_escape_string($_GET['id']);
```

This PHP example does not effectively mitigate SQL injection, because escaping only protects against string-based injection while the id value is used in an unquoted numeric context.

Now, if this application also had an additional check to validate that the value of the $_GET['id'] parameter was indeed an integer, and rejected the request if this condition was false, the attack would have been effectively mitigated.

 Solution:

All user-supplied input that falls outside of the intended operation of the application should be rejected by the application.

Syntax and semantic validity: an application should check that data is both syntactically and semantically valid (in that order) before using it in any way (including displaying it back to the user).

Syntactic validity means that the data is in the form that is expected. For example, an application may allow a user to select a four-digit "account ID" to perform some kind of operation. The application should assume the user is entering a SQL injection payload, and should check that the data entered by the user is exactly four digits in length and consists only of numbers (in addition to utilizing proper query parameterization).

Semantic validity means only accepting input that is within an acceptable range for the given application functionality and context. For example, a start date must be before an end date when choosing date ranges.
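
A small sketch of the four-digit account ID example, combining syntactic validation with a semantic range check (the accepted range is made up for illustration):

```
import java.util.regex.Pattern;

// Sketch: syntactic validation (exactly four digits) followed by semantic
// validation (the ID falls inside an acceptable, application-defined range).
public final class AccountIdValidator {

    private static final Pattern FOUR_DIGITS = Pattern.compile("^\\d{4}$");

    public static boolean isValidAccountId(String input) {
        if (input == null || !FOUR_DIGITS.matcher(input).matches()) {
            return false;                                     // syntactically invalid: reject
        }
        int accountId = Integer.parseInt(input);
        return accountId >= 1000 && accountId <= 4999;        // illustrative semantic range check
    }
}
```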

</details>
------

- [ ] **Does the sprint implement functions that reflect user-supplied input on the client side?**
------

- [ ] **Does the sprint implement functions that utilize LDAP?**
------

- [ ] **Does the sprint implement functions that utilize OS commands?**
------

- [ ] **Does the sprint implement functions that get/grab files from the file system?**
------

- [ ] **Does the sprint implement functions that parse or digest XML?**
------

- [ ] **Does the sprint implement functions that deserialize objects (JSON, XML and YAML)?**
------
- [ ] Verify that the application correctly restricts XML parsers to only use the most restrictive configuration possible and to ensure that unsafe features such as resolving external entities are disabled to prevent XXE.
<details><summary>More information</summary>

XXE injections

Description:

Processing of an XML eXternal Entity containing tainted data may lead to the disclosure of confidential information and other system impacts. The XML 1.0 standard defines the structure of an XML document. The standard defines a concept called an entity, which is a storage unit of some type.

There exists a specific type of entity, an external general parsed entity often shortened to an external entity, that can access local or remote content via a declared system identifier and the XML processor may disclose confidential information normally not accessible by the application. Attacks can include disclosing local files, which may contain sensitive data such as passwords or private user data.

Solution:

Disable the possibility to fetch resources from an external source. This is normally done in the configuration of the used XML parser.
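
For a Java DOM parser, the hardened configuration typically looks like the sketch below; the feature URIs shown are those used by the Xerces-based parser shipped with the JDK and may differ for other parser implementations:

```
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

// Sketch of a hardened DOM parser configuration: DOCTYPEs and external
// entities are disabled so XXE payloads are rejected before being resolved.
public final class SafeXmlParser {

    public static DocumentBuilder newDocumentBuilder() throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
        factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        factory.setXIncludeAware(false);
        factory.setExpandEntityReferences(false);
        return factory.newDocumentBuilder();
    }
}
```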

</details>
------
- [ ] Verify that deserialization of untrusted data is avoided or is protected in both custom code and third-party libraries (such as JSON, XML and YAML parsers).
<details><summary>More information</summary>

Insecure object deserialization

Description:

Serialization is the process of turning some object into a data format that can be restored later. People often serialize objects in order to save them to storage, or to send as part of communications.

Deserialization is the reverse of that process, taking data structured from some format, and rebuilding it into an object. Today, the most popular data format for serializing data is JSON. Before that, it was XML.

However, many programming languages offer a native capability for serializing objects. These native formats usually offer more features than JSON or XML, including customizability of the serialization process.

Unfortunately, the features of these native deserialization mechanisms can be repurposed for malicious effect when operating on untrusted data. Attacks against deserializers have been found to allow denial-of-service, access control, and remote code execution (RCE) attacks.

Solution:

Verify that serialized objects use integrity checks or are encrypted to prevent hostile object creation or data tampering.

A great reduction of risk is achieved by avoiding native (de)serialization formats. By switching to a pure data format like JSON or XML, you lessen the chance of custom deserialization logic being repurposed towards malicious ends.

Many applications rely on a data-transfer object pattern that involves creating a separate domain of objects for the explicit purpose of data transfer. Of course, it is still possible that the application will make security mistakes after a pure data object is parsed.

If the application knows before deserialization which messages will need to be processed, it could sign them as part of the serialization process. The application could then choose not to deserialize any message which doesn't have an authenticated signature.
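
When native Java serialization cannot be avoided, a deserialization filter (Java 9+) can at least restrict which classes may be reconstructed; a sketch with an illustrative allow-list pattern:

```
import java.io.ByteArrayInputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

// Sketch (Java 9+): restrict deserialization to an explicit allow-list of
// classes and reject everything else. The package names in the pattern are
// illustrative and must be adapted to the application's own DTO classes.
public final class FilteredDeserialization {

    public static Object readTrustedObject(byte[] data) throws Exception {
        ObjectInputFilter filter = ObjectInputFilter.Config
                .createFilter("com.example.dto.*;java.util.*;java.lang.*;!*");
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            in.setObjectInputFilter(filter);
            return in.readObject();
        }
    }
}
```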

</details>
------
- [ ] Verify that when parsing JSON in browsers or JavaScript-based backends, JSON.parse is used to parse the JSON document. Do not use eval() to parse JSON.
<details><summary>More information</summary>

Parsing JSON with JavaScript

Description:

The eval() function evaluates or executes an argument.

If the argument is an expression, eval() evaluates the expression. If the argument is one or more JavaScript statements, eval() executes the statements.

This is exactly the reason why eval() should NEVER be used to parse JSON or other formats of data which could possibly contain malicious code.

Solution:

For the purpose of parsing JSON we recommend the use of the JSON.parse function. Even though this function is safer, you should still build your own security checks and encoding routines around JSON.parse before mutating the data or passing it on to a view to be displayed in your HTML.

</details>
------

- [ ] **Does the sprint implement functions that process sensitive data?**
------

- [ ] **Does the sprint implement functions that impact logging?**
------
- [ ] Verify that the application does not log other sensitive data as defined under local privacy laws or relevant security policy. ([C9](https://www.owasp.org/index.php/OWASP_Proactive_Controls#tab=Formal_Numbering))
<details><summary>More information</summary>

User credentials in audit logs

Description:

Whenever user credentials are written to an audit log, this becomes a risk as soon as an attacker gains access to one of these log files.

Solution:

Instead of storing user credentials, you may want to use user IDs in order to identify the user in the log files.

</details>
------

- [ ] **Does the sprint implement functions that store sensitive information?**
------
- [ ] Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII.
<details><summary>More information</summary>

Client side storage

Description:

Client-side storage is also known as Offline Storage or Web Storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it is recommended not to store any sensitive information in local storage.

Solution:

Verify that authenticated data is cleared from client storage, such as the browser DOM, after the session is terminated. This also goes for other session and local storage information which could assist an attacker in launching a successful attack.

Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII (personal identifiable information).

</details>
------
- [ ] Verify that authenticated data is cleared from client storage, such as the browser DOM, after the client or session is terminated.
<details><summary>More information</summary>

Client side storage

Description:

Client-side storage is also known as Offline Storage or Web Storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it is recommended not to store any sensitive information in local storage.

Solution:

Verify that authenticated data is cleared from client storage, such as the browser DOM, after the session is terminated. This also goes for other session and local storage information which could assist an attacker in launching a successful attack.

Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII (personal identifiable information).

</details>
------
- [ ] Verify that sensitive data is sent to the server in the HTTP message body or headers, and that query string parameters from any HTTP verb do not contain sensitive data.
<details><summary>More information</summary>

GET POST requests

Description:

Authors of services which use the HTTP protocol SHOULD NOT use GET-based forms for the submission of sensitive data, because this will cause the data to be encoded in the Request-URI. Many existing servers, proxies, and browsers will log the request URL in some place where it might be visible to third parties. Servers can use POST-based form submission instead. GET parameters are also more likely to be vulnerable to XSS. Please refer to the XSS manual in the knowledge base for more information.

Solution:

Whenever transmitting sensitive data, always do this by means of the POST request body or a header. Note: avoid user input in your application headers, as this could lead to vulnerabilities. Also make sure you disable all other HTTP request methods which are unnecessary for your application's operation, such as PUT, TRACE, DELETE, OPTIONS, etc., since allowing these request methods could lead to vulnerabilities and injections.

</details>
------
- [ ] Verify that users have a method to remove or export their data on demand.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------

- [ ] **Does the sprint implement functions that store sensitive information?**
------

- [ ] **Does the sprint implement/changes TLS configuration?**
------
- [ ] Verify using online or up to date TLS testing tools that only strong algorithms, ciphers, and protocols are enabled, with the strongest algorithms and ciphers set as preferred.
<details><summary>More information</summary>

TLS settings are in line with current leading practice

Description:

TLS settings must always be in line with current leading practice. Whenever TLS settings and ciphers become outdated, the TLS connection can be degraded or broken and used by attackers to eavesdrop on users' traffic to the application.

Solution:

There should be structural scans that are run regularly against the application's TLS settings and configuration to check whether the TLS settings are in line with current leading practice.

This could be achieved by using the SSL Labs API or the OWASP O-Saft project.

O-Saft is an easy-to-use tool that shows information about the SSL certificate and tests the SSL connection against a given list of ciphers and various SSL configurations.

It is designed to be used by penetration testers, security auditors or server administrators. The idea is to show the important information or the special checks with a simple call of the tool. However, it provides a wide range of options so that it can be used for comprehensive and special checks by experienced people.

While doing these tests also take into consideration the following configuration on the server side:

Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.

</details>
------
- [ ] Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.
<details><summary>More information</summary>

TLS settings are in line with current leading practice

Description:

TLS settings must always be in line with current leading practice. Whenever TLS settings and ciphers become outdated, the TLS connection can be degraded or broken and used by attackers to eavesdrop on users' traffic to the application.

Solution:

There should be structural scans that are run regularly against the application's TLS settings and configuration to check whether the TLS settings are in line with current leading practice.

This could be achieved by using the SSL Labs API or the OWASP O-Saft project.

O-Saft is an easy-to-use tool that shows information about the SSL certificate and tests the SSL connection against a given list of ciphers and various SSL configurations.

It is designed to be used by penetration testers, security auditors or server administrators. The idea is to show the important information or the special checks with a simple call of the tool. However, it provides a wide range of options so that it can be used for comprehensive and special checks by experienced people.

While doing these tests also take into consideration the following configuration on the server side:

Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.

</details>
------

- [ ] **Does the sprint implement changes that affect and change CI/CD?**
------
- [ ] Verify that the application employs integrity protections, such as code signing or sub-resource integrity. The application must not load or execute code from untrusted sources, such as loading includes, modules, plugins, code, or libraries from untrusted sources or the Internet.
<details><summary>More information</summary>

code signing

Description: Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity.

Code signing can provide several valuable features. The most common use of code signing is to provide security when deploying; in some programming languages, it can also be used to help prevent namespace conflicts. Almost every code signing implementation will provide some sort of digital signature mechanism to verify the identity of the author or build system, and a checksum to verify that the object has not been modified. It can also be used to provide versioning information about an object or to store other metadata about an object.

Solution: Sign your code and validate the signatures (checksums) of your code and third-party components to confirm the integrity of the deployed components.

</details>
------
- [ ] Verify that the application has protection from sub-domain takeovers if the application relies upon DNS entries or DNS sub-domains, such as expired domain names, out of date DNS pointers or CNAMEs, expired projects at public source code repos, or transient cloud APIs, serverless functions, or storage buckets (autogen-bucket-id.cloud.example.com) or similar. Protections can include ensuring that DNS names used by applications are regularly checked for expiry or change.
<details><summary>More information</summary>

sub domain take over

Description: Subdomain takeover is the process of registering a non-existing domain name to gain control over another domain. The most common scenario of this process is as follows:

A domain name (e.g., sub.example.com) uses a CNAME record to another domain (e.g., sub.example.com CNAME anotherdomain.com). At some point in time, anotherdomain.com expires and is available for registration by anyone. Since the CNAME record is not deleted from the example.com DNS zone, anyone who registers anotherdomain.com has full control over sub.example.com for as long as the DNS record is present.

The implications of a subdomain takeover can be pretty significant. Using a subdomain takeover, attackers can send phishing emails from the legitimate domain, perform cross-site scripting (XSS), or damage the reputation of the brand associated with the domain.

Source: https://0xpatrik.com/subdomain-takeover-basics/

</details>
------

- [ ] **Does this sprint introduce functions with critical business logic that needs to be reviewed?**
------
- [ ] Verify the application will only process business logic flows with all steps being processed in realistic human time, i.e. transactions are not submitted too quickly.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify the application has appropriate limits for specific business actions or transactions which are correctly enforced on a per user basis.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify the application has sufficient anti-automation controls to detect and protect against data exfiltration, excessive business logic requests, excessive file uploads or denial of service attacks.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify the application has business logic limits or validation to protect against likely business risks or threats, identified using threat modelling or similar methodologies.
<details><summary>More information</summary>

Threat modeling

Description:

Threat modeling is a procedure for optimizing network/application/Internet security by identifying objectives and vulnerabilities, and then defining countermeasures to prevent, or mitigate the effects of, threats to the system. A threat is a potential or actual undesirable event that may be malicious (such as a DoS attack) or incidental (failure of a storage device). Threat modeling is a planned activity for identifying and assessing application threats and vulnerabilities.

Solution:

Threat modeling is best applied continuously throughout a software development project. The process is essentially the same at different levels of abstraction, although the information gets more and more granular throughout the lifecycle. Ideally, a high-level threat model should be defined in the concept or planning phase, and then refined throughout the lifecycle. As more details are added to the system, new attack vectors are created and exposed. The ongoing threat modeling process should examine, diagnose, and address these threats.

Note that it is a natural part of refining a system that new threats are exposed. For example, when you select a particular technology, such as Java, you take on the responsibility to identify the new threats that are created by that choice. Even implementation choices such as using regular expressions for validation introduce potential new threats to deal with.

More in-depth information about threat modeling can be found at: https://www.owasp.org/index.php/Application_Threat_Modeling

</details>
------
- [ ] Verify the application does not suffer from "time of check to time of use" (TOCTOU) issues or other race conditions for sensitive operations.
<details><summary>More information</summary>

Race conditions

Description:

A race condition is a flaw that produces an unexpected result when the timing of actions impacts other actions. An example may be seen in a multi-threaded application where actions are being performed on the same data. Race conditions, by their very nature, are difficult to test for.

Race conditions may occur when a process is critically or unexpectedly dependent on the sequence or timing of other events. In a web application environment, where multiple requests can be processed at any given time, developers may leave concurrency to be handled by the framework, server, or programming language.

Solution:

One common solution to prevent race conditions is known as locking. This ensures that at any given time, at most one thread can modify the database. Many databases provide functionality to lock a given row when a thread is accessing it.
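
As an illustration, the sketch below uses a row lock so that concurrent requests on the same record are serialized. It assumes SQLAlchemy and a database that supports row locks; the accounts table and connection string are hypothetical.

    # Minimal sketch of pessimistic row locking with SQLAlchemy (assumed stack);
    # the table, columns and connection string are hypothetical.
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://user:password@localhost/shop")

    def withdraw(account_id: int, amount: int) -> bool:
        with engine.begin() as conn:  # transaction commits or rolls back automatically
            # Lock the row so concurrent requests on this account are serialized.
            row = conn.execute(
                text("SELECT balance FROM accounts WHERE id = :id FOR UPDATE"),
                {"id": account_id},
            ).one()
            if row.balance < amount:
                return False  # insufficient funds; the lock is released at transaction end
            conn.execute(
                text("UPDATE accounts SET balance = balance - :amount WHERE id = :id"),
                {"amount": amount, "id": account_id},
            )
            return True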

</details>
------
- [ ] Verify the application has configurable alerting when automated attacks or unusual activity is detected.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------

- [ ] **Does the sprint implement functions that allow users to upload/download files?**
------
- [ ] Verify that user-submitted filename metadata is not used directly with system or framework file and URL API to protect against path traversal.
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code to the system to be attacked. Then the attack only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish the first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

Do not accept large files that could fill up storage or cause a denial of service.

Check the user-supplied filename against a whitelist of allowed extensions such as .jpg or .png. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply use ".jpg.php" to bypass your validation.

Check the user-supplied filename for path traversal patterns in order to prevent uploads outside of the intended directory.

You may also want to check whether a filename already exists before uploading, in order to prevent existing files from being overwritten.

For serving the files back, use a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also perform a MIME type check. This is a good protection, but not when the MIME type is read from the request: that header cannot be trusted, since it is easily manipulated by an attacker.

The best way to check the MIME type is to inspect the file itself on the server after uploading, and to delete it whenever it does not match the expected values.
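
As a rough sketch of these checks (assuming a Python backend; the extension whitelist, size limit and upload directory are hypothetical):

    # Minimal sketch of upload validation: extension whitelist on the last
    # extension, size limit, traversal-safe server-generated name, and a
    # content-based check instead of trusting the Content-Type request header.
    import uuid
    from pathlib import Path

    ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png"}
    MAX_SIZE = 5 * 1024 * 1024                      # 5 MB
    UPLOAD_DIR = Path("/srv/app/uploads")           # outside the document root

    def store_upload(filename: str, data: bytes) -> Path:
        if len(data) > MAX_SIZE:
            raise ValueError("file too large")
        extension = Path(filename).suffix.lower()   # last extension only (".jpg.php" -> ".php")
        if extension not in ALLOWED_EXTENSIONS:
            raise ValueError("extension not allowed")
        # JPEG and PNG files start with well-known magic bytes.
        if not (data.startswith(b"\xff\xd8\xff") or data.startswith(b"\x89PNG\r\n\x1a\n")):
            raise ValueError("file content does not match an allowed image type")
        # Server-generated name: no traversal patterns and no overwriting of existing files.
        target = UPLOAD_DIR / f"{uuid.uuid4().hex}{extension}"
        target.write_bytes(data)
        return target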

</details>
------
- [ ] Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure, creation, updating or removal of local files (LFI).
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code to the system to be attacked. Then the attack only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish the first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

Do not accept large files that could fill up storage or cause a denial of service.

Check the user-supplied filename against a whitelist of allowed extensions such as .jpg or .png. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply use ".jpg.php" to bypass your validation.

Check the user-supplied filename for path traversal patterns in order to prevent uploads outside of the intended directory.

You may also want to check whether a filename already exists before uploading, in order to prevent existing files from being overwritten.

For serving the files back, use a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also perform a MIME type check. This is a good protection, but not when the MIME type is read from the request: that header cannot be trusted, since it is easily manipulated by an attacker.

The best way to check the MIME type is to inspect the file itself on the server after uploading, and to delete it whenever it does not match the expected values.

</details>
------
- [ ] Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure or execution of remote files (RFI); which may also lead to SSRF.
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code to the system to be attacked. Then the attack only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish the first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

Do not accept large files that could fill up storage or cause a denial of service.

Check the user-supplied filename against a whitelist of allowed extensions such as .jpg or .png. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply use ".jpg.php" to bypass your validation.

Check the user-supplied filename for path traversal patterns in order to prevent uploads outside of the intended directory.

You may also want to check whether a filename already exists before uploading, in order to prevent existing files from being overwritten.

For serving the files back, use a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also perform a MIME type check. This is a good protection, but not when the MIME type is read from the request: that header cannot be trusted, since it is easily manipulated by an attacker.

The best way to check the MIME type is to inspect the file itself on the server after uploading, and to delete it whenever it does not match the expected values.

</details>
------
- [ ] Verify that the application protects against reflective file download (RFD) by validating or ignoring user-submitted filenames in a JSON, JSONP, or URL parameter, the response Content-Type header should be set to text/plain, and the Content-Disposition header should have a fixed filename.
<details><summary>More information</summary>

RFD and file download injections

Description:

Reflective file download occurs whenever an attacker can "forge" a download through misconfiguration of your Content-Disposition and Content-Type headers. Instead of having to upload a malicious file to the web server, the attacker can force the browser to download a malicious file by abusing these headers and setting the file extension to any type he wants.

Whenever user input is also reflected back into that download, it can be used to craft attacks. The attacker can present a malicious file to victims who trust the domain from which the download appears to originate.

File download injection is a similar type of attack, except that it is made possible whenever user input is reflected into the "filename=" parameter of the Content-Disposition header. The attacker again can force the browser to download a file with his own choice of extension, and set the content of this file by injecting it directly into the response, for example: filename=evil.bat%0A%0D%0A%0DinsertEvilStringHere

Whenever the user now opens the downloaded file the attacker can gain full control over the target’s device.

Solution:

First, never use user input directly in your headers, since an attacker could otherwise take control over them.

Secondly, you should check whether a filename really exists before presenting it to users. You could also create a whitelist of all files which are allowed to be downloaded and terminate requests whenever they do not match.

Also, you should disable the use of "path parameters", since they increase the attack surface and cause a number of other vulnerabilities. Finally, you should sanitize and encode all user input as much as possible. Reflective file downloads depend on user input being reflected in the response headers; whenever this input has been sanitized and encoded, it should not do any harm to any system it is executed on.
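
A minimal sketch of the whitelist approach (assuming a Flask backend; the identifiers, filenames and storage path are hypothetical):

    # Downloads are selected by identifier from a whitelist; the Content-Disposition
    # filename is fixed by the server and never taken from user input.
    from flask import Flask, abort, send_from_directory

    app = Flask(__name__)

    ALLOWED_DOWNLOADS = {"annual-report": "annual-report.pdf"}
    DOWNLOAD_DIR = "/srv/app/downloads"  # outside the document root

    @app.route("/download/<file_id>")
    def download(file_id: str):
        filename = ALLOWED_DOWNLOADS.get(file_id)
        if filename is None:
            abort(404)  # unknown identifier; never reflect it into response headers
        return send_from_directory(DOWNLOAD_DIR, filename, as_attachment=True)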

</details>
------
- [ ] Verify that untrusted file metadata is not used directly with system API or libraries, to protect against OS command injection.
<details><summary>More information</summary>

File IO commands

Description:

I/O commands allow an application to open, read from, write to, and close files and devices, and to direct I/O operations to a device. Whenever user-supplied input (e.g., file names and/or file data) is used directly in these commands, this could lead to path traversal, local file inclusion, MIME type, and OS command injection vulnerabilities.

Solution:

File names and file contents should be sanitized before being used in I/O commands.
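
For the path traversal part, a minimal sketch using only the standard library (the base directory is hypothetical):

    # Reject traversal patterns before using a user-supplied filename in I/O.
    from pathlib import Path

    BASE_DIR = Path("/srv/app/files").resolve()

    def safe_open(user_filename: str):
        # Drop any directory components the client may have supplied.
        candidate = (BASE_DIR / Path(user_filename).name).resolve()
        # Defense in depth: the resolved path must still live inside BASE_DIR.
        if BASE_DIR not in candidate.parents:
            raise ValueError("path traversal attempt rejected")
        return open(candidate, "rb")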

</details>
------
- [ ] Verify that files obtained from untrusted sources are stored outside the web root, with limited permissions, preferably with strong validation.
<details><summary>More information</summary>

File upload outside document root

Description:

Files that are uploaded by users or other untrusted services should always be placed outside of the document root. This prevents malicious files, such as PHP, HTML, or JavaScript files, from being parsed and executed by the web server.

Should an attacker succeed in bypassing the file upload restrictions and upload a malicious file, it would still be impossible to have these files parsed, since they are not located inside of the application's document root.

Solution:

Files should be stored outside of the application's document root. Preferably, files should be stored on a separate file server which serves files back and forth to the application server.

Files should always be stored outside of the scope of the attacker to prevent files from being parsed or executed.

When storing files outside of the document root, take into consideration potential path traversal injections in the supplied file name, such as "../html/backtoroot/file.php". Whenever this filename is used directly in the path that is used to store files, it could be used to manipulate the storage path.

</details>
------
- [ ] Verify that files obtained from untrusted sources are scanned by antivirus scanners to prevent upload of known malicious content.
<details><summary>More information</summary>

File upload anti virus check

Description:

Whenever files from untrusted sources are uploaded to the server, there should be additional checks in place to verify whether these files contain viruses (malware, trojans, ransomware).

Solution:

After uploading, the file should be placed in quarantine and antivirus software has to inspect the file for malicious content. Antivirus software with a command-line interface is required for doing such scans. There are also APIs available from other services, such as "VirusTotal.com".

This site provides a free service in which your file is given as input to numerous antivirus products, and you receive back a detailed report with the evidence resulting from the scanning process.
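
A minimal sketch of such a scan (assuming ClamAV's clamscan command-line tool is installed; the quarantine path comes from your own upload handling):

    # clamscan exits with status 0 when no virus is found and 1 when one is found.
    import subprocess

    def is_clean(quarantined_path: str) -> bool:
        result = subprocess.run(
            ["clamscan", "--no-summary", quarantined_path],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0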

</details>
------
- [ ] Verify that the web tier is configured to serve only files with specific file extensions to prevent unintentional information and source code leakage. For example, backup files (e.g. .bak); temporary working files (e.g. .swp); compressed files (.zip, .tar.gz, etc) and other extensions commonly used by editors should be blocked unless required.
<details><summary>More information</summary>

Serve files whitelist.

Description:

Configuring the web server to only serve files with an expected file extension helps prevent information leakage whenever developers forget to remove backup files or zipped versions of the web application from the webserver.

Solution:

Verify that the web tier is configured to serve only files with specific file extensions to prevent unintentional information and source code leakage. For example, backup files (e.g. .bak), temporary working files (e.g. .swp), compressed files (.zip, .tar.gz, etc) and other extensions commonly used by editors should be blocked unless required.

</details>
------
- [ ] Verify that direct requests to uploaded files will never be executed as HTML/JavaScript content.
<details><summary>More information</summary>

File upload outside document root

Description:

Files that are uploaded by users or other untrusted services should always be placed outside of the document root. This prevents malicious files, such as PHP, HTML, or JavaScript files, from being parsed and executed by the web server.

Should an attacker succeed in bypassing the file upload restrictions and upload a malicious file, it would still be impossible to have these files parsed, since they are not located inside of the application's document root.

Solution:

Files should be stored outside of the application's document root. Preferably, files should be stored on a separate file server which serves files back and forth to the application server.

Files should always be stored outside of the scope of the attacker to prevent files from being parsed or executed.

When storing files outside of the document root, take into consideration potential path traversal injections in the supplied file name, such as "../html/backtoroot/file.php". Whenever this filename is used directly in the path that is used to store files, it could be used to manipulate the storage path.

</details>
------
- [ ] Verify that the web or application server is configured with a whitelist of resources or systems to which the server can send requests or load data/files from.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------

- [ ] **Are you building on an application that has API features?**
------
- [ ] Verify that access to administration and management functions is limited to authorized administrators.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify API URLs do not expose sensitive information, such as the API key, session tokens etc.
<details><summary>More information</summary>

Verify that the sensitive information is never disclosed

Description:

Information exposure through query strings in the URL occurs when sensitive data is passed in parameters in the URL. This allows attackers to obtain sensitive data such as usernames, passwords, authentication/authorization tokens, database details, and any other potentially sensitive data. Simply using HTTPS does not resolve this vulnerability.

Regardless of using encryption, the following URL will expose information in the locations detailed below: https://vulnerablehost.com/authuser?user=bob&authz_token=1234&expire=1500000000

The parameter values for 'user', 'authz_token', and 'expire' will be exposed in the following locations when using HTTP or HTTPS:

* Referer header
* Web logs
* Shared systems
* Browser history
* Browser cache
* Shoulder surfing

When not using an encrypted channel, all of the above apply, plus: man-in-the-middle attacks.

Solution:

Sensitive information should never be included in the URL.

</details>
------
- [ ] Verify that enabled RESTful HTTP methods are a valid choice for the user or action, such as preventing normal users using DELETE or PUT on protected API or resources.
<details><summary>More information</summary>

HTTP request methods

Description:

HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. It is recommended to read about the different available methods, their purposes and limitations.

Available methods are:

GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations."

HEAD The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

POST The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.

PUT The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.

DELETE The DELETE method deletes the specified resource.

TRACE The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.

OPTIONS The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

CONNECT The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.

PATCH The PATCH method applies partial modifications to a resource.

Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause non-trivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as http://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.

By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request (note that idempotence refers to the state of the system after the request has completed, so while the action the server takes (e.g. deleting a record) or the response code it returns may be different on subsequent requests, the system state will be the same every time). Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.

In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may resubmit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration. Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled.

Solution:

Verify that the application accepts only a defined set of HTTP request methods, such as GET and POST and unused methods are explicitly blocked/disabled.

</details>
------
- [ ] Verify that JSON schema validation is in place and verified before accepting input.
<details><summary>More information</summary>

JSON validation schema

Description:

JSON Schema is a vocabulary that allows you to annotate and validate JSON documents.

When adding schemas to your JSON files, you have better control over what type of user input can be supplied to your application. This dramatically decreases an attacker's options when implemented the right way. Nonetheless, you should always apply your own input validation and rejection as an extra layer of defense. This approach is also desirable because you also want to perform monitoring and logging of the user's requests and input.

Solution:

Verify that JSON schema validation takes place to ensure a properly formed JSON request, followed by validation of each input field before any processing of that data takes place.
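
A minimal sketch (assuming the jsonschema package; the schema itself is hypothetical):

    from jsonschema import ValidationError, validate

    ORDER_SCHEMA = {
        "type": "object",
        "properties": {
            "product_id": {"type": "integer", "minimum": 1},
            "quantity": {"type": "integer", "minimum": 1, "maximum": 100},
        },
        "required": ["product_id", "quantity"],
        "additionalProperties": False,
    }

    def parse_order(payload: dict) -> dict:
        try:
            validate(instance=payload, schema=ORDER_SCHEMA)
        except ValidationError as exc:
            # Reject and log; do not process malformed input any further.
            raise ValueError(f"invalid order payload: {exc.message}") from exc
        return payload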

</details>
------
- [ ] Verify that RESTful web services that utilize cookies are protected from Cross-Site Request Forgery via the use of at least one or more of the following: triple or double submit cookie pattern (see [references](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet)); CSRF nonces, or ORIGIN request header checks.
<details><summary>More information</summary>

CSRF on REST

Description:

Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user's web browser to perform an unwanted action on a trusted site for which the user is currently authenticated.

The impact of a successful cross-site request forgery attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or purchasing an item in the user's context. In effect, CSRF attacks are used by an attacker to make a target system perform a function (funds transfer, form submission, etc.) via the target's browser, without the knowledge of the target user, at least until the unauthorized function has been committed.

Solution:

REST (REpresentational State Transfer) is a simple stateless architecture that generally runs over HTTPS/TLS. The REST style emphasizes that interactions between clients and services are enhanced by having a limited number of operations

Since the architecture is stateless, the application would otherwise have to rely on sessions or cookies to associate information with individual visitors. The preferred method for REST services is to utilize tokens for the information interchange between the user and the server.

By sending this information solely by means of headers, the application is no longer susceptible to CSRF attacks, since a CSRF attack relies on the browser's cookie jar for a successful attack.
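
As a client-side illustration (assuming the requests library; the endpoint and token are hypothetical), the token travels in a header rather than in a cookie:

    import requests

    API = "https://api.example.com"
    token = "eyJhbGciOi..."  # obtained earlier from the login endpoint

    response = requests.post(
        f"{API}/transfer",
        json={"to": "acct-42", "amount": 100},
        headers={"Authorization": f"Bearer {token}"},  # not stored in the browser's cookie jar
    )
    response.raise_for_status()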

</details>
------
- [ ] Verify that XSD schema validation takes place to ensure a properly formed XML document, followed by validation of each input field before any processing of that data takes place.
<details><summary>More information</summary>

XML schema (XSD)

Description:

When adding schemas to your XML files, you have better control over what type of user input can be supplied to your application. This dramatically decreases an attacker's options when implemented the right way. Nonetheless, you should always apply your own input validation and rejection as an extra layer of defense. This approach is also desirable because you also want to perform monitoring and logging of the user's requests and input.

Solution:

Verify that XSD schema validation takes place to ensure a properly formed XML document, followed by validation of each input field before any processing of that data takes place.
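
A minimal sketch (assuming the lxml package; the schema and document structure are hypothetical):

    from lxml import etree

    XSD = b"""<?xml version="1.0"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="order">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="product" type="xs:string"/>
            <xs:element name="quantity" type="xs:positiveInteger"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""

    schema = etree.XMLSchema(etree.fromstring(XSD))
    # resolve_entities=False and no_network=True also guard against XXE.
    parser = etree.XMLParser(resolve_entities=False, no_network=True)

    def parse_order(xml_bytes: bytes):
        doc = etree.fromstring(xml_bytes, parser)
        schema.assertValid(doc)  # raises DocumentInvalid when the schema is violated
        return doc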

</details>
------
- [ ] Verify that the message payload is signed using WS-Security to ensure reliable transport between client and service.
<details><summary>More information</summary>

Signed message payloads WS security

Description:

In order to establish trust between two communicating parties, such as servers and clients, their message payload should be signed by means of a public/private key method. This builds trust and makes it harder for attackers to impersonate different users.

Web Services Security (WS-Security, WSS) is an extension to SOAP to apply security to web services. It is a member of the Web service specifications and was published by OASIS.

The protocol specifies how integrity and confidentiality can be enforced on messages and allows the communication of various security token formats, such as Security Assertion Markup Language (SAML), Kerberos, and X.509. Its main focus is the use of XML Signature and XML Encryption to provide end-to-end security.

Solution:

WS-Security describes three main mechanisms:

* How to sign SOAP messages to assure integrity. Signed messages also provide non-repudiation.
* How to encrypt SOAP messages to assure confidentiality.
* How to attach security tokens to ascertain the sender's identity.

The specification allows a variety of signature formats, encryption algorithms and multiple trust domains, and is open to various security token models, such as X.509 certificates, Kerberos tickets, user ID/password credentials, SAML assertions, and custom-defined tokens. The token formats and semantics are defined in the associated profile documents.

WS-Security incorporates security features in the header of a SOAP message, working in the application layer.

These mechanisms by themselves do not provide a complete security solution for web services. Instead, this specification is a building block that can be used in conjunction with other Web service extensions and higher-level application-specific protocols to accommodate a wide variety of security models and security technologies. In general, WSS by itself does not provide any guarantee of security. When implementing and using the framework and syntax, it is up to the implementor to ensure that the result is not vulnerable.

Key management, trust bootstrapping, federation and agreement on the technical details (ciphers, formats, algorithms) are outside the scope of WS-Security.

Use cases:

End-to-end security If a SOAP intermediary is required, and the intermediary is not more or less trusted, messages need to be signed and optionally encrypted. This might be the case of an application-level proxy at a network perimeter that will terminate TCP (Transmission Control Protocol) connections.

Non-repudiation One method for non-repudiation is to write transactions to an audit trail that is subject to specific security safeguards. Digital signatures, which WS-Security supports, provide a more direct and verifiable non-repudiation proof.

Alternative transport bindings Although almost all SOAP services implement HTTP bindings, in theory other bindings such as JMS or SMTP could be used; in this case endtoend security would be required.

Reverse proxy/common security token Even if the web service relies upon transport layer security, it might be required for the service to know about the end user, if the service is relayed by a (HTTP) reverse proxy. A WSS header could be used to convey the end user's token, vouched for by the reverse proxy.

</details>
------

- [ ] **Does the sprint implement changes that affect and change CI/CD?**
------
- [ ] Verify that all unneeded features, documentation, samples, configurations are removed, such as sample applications, platform documentation, and default or example users.
<details><summary>More information</summary>

insecure application defaults

Description:

When default sample applications, default users, etc. are not removed from your production environment, you significantly increase the potential attack surface for an attacker.

Solution:

Verify that all unneeded features, documentation, samples, configurations are removed, such as sample applications, platform documentation, and default or example users.

</details>
------
- [ ] Verify that if application assets, such as JavaScript libraries, CSS stylesheets or web fonts, are hosted externally on a content delivery network (CDN) or external provider, Subresource Integrity (SRI) is used to validate the integrity of the asset.
<details><summary>More information</summary>

Application assets hosted on secure location

Description:

Whenever application assets such as JavaScript libraries or CSS stylesheets are not hosted by the application itself but on an external CDN which is not under your control, that CDN can introduce security vulnerabilities. Whenever such a CDN gets compromised, attackers can include malicious scripts. Also, whenever such a CDN goes out of service, it could affect the operation of the application and even cause a denial of service.

Solution:

Verify that all application assets, such as JavaScript libraries, CSS stylesheets and web fonts, are hosted by the application rather than relying on a CDN or external provider. When relying on an external CDN cannot be avoided, use Subresource Integrity (SRI) so the browser can validate the integrity of the fetched asset.
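
As a small sketch, the integrity value for an SRI attribute can be computed from a known-good copy of the asset (the file path below is hypothetical):

    import base64
    import hashlib

    def sri_hash(path: str) -> str:
        with open(path, "rb") as fh:
            digest = hashlib.sha384(fh.read()).digest()
        return "sha384-" + base64.b64encode(digest).decode("ascii")

    # The returned value goes into the integrity attribute of the script or link tag.
    print(sri_hash("static/vendor/library.min.js"))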

</details>
------

- [ ] **Is the application in need of a review of configurations and settings?**
------
- [ ] Verify that web or application server and application framework debug modes are disabled in production to eliminate debug features, developer consoles, and unintended security disclosures.
<details><summary>More information</summary>

Debug enabling

Description:

Sometimes it is possible, through an "enable debug" parameter, to display technical information/secrets within the application. As a result, the attacker learns more about the operation of the application, increasing the attack surface. Sometimes having a debug flag enabled could even lead to code execution attacks (e.g., in older versions of Werkzeug).

Solution:

Disable the possibility of enabling debug output in a production environment.
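
A minimal sketch (Flask/Werkzeug, as referenced above; the environment variable name is hypothetical) that keeps debug mode off unless explicitly enabled on a development machine:

    import os

    from flask import Flask

    app = Flask(__name__)

    if __name__ == "__main__":
        # Debug mode is derived from a server-side environment variable only,
        # never from request data, and must be off on production deployments.
        app.run(debug=os.environ.get("APP_ENV") == "development")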

</details>
------
- [ ] Verify that the HTTP headers or any part of the HTTP response do not expose detailed version information of system components.
<details><summary>More information</summary>

Verbose version information

Description:

Revealing system data or debugging information helps an adversary learn about the system and form a plan of attack. An information leak occurs when system data or debugging information leaves the program through an output stream or logging function.

Solution:

Verify that the HTTP headers do not expose detailed version information of system components. For each type of server there are hardening guides dedicated to preventing exactly this kind of data leakage. The same applies to any other leak of version information, such as the version of your programming language or of other services required to make your application function.

</details>
------
- [ ] Verify that every HTTP response contains a content type header specifying a safe character set (e.g., UTF-8, ISO 8859-1).
<details><summary>More information</summary>

Content type headers

Description:

Setting the right content type headers is important for hardening your application's security. This reduces exposure to drive-by download attacks, or to sites serving user-uploaded content that, by clever naming, could be treated by MS Internet Explorer as executable or dynamic HTML files and thus lead to security vulnerabilities.

Solution:

An example of a content type header would be:

    Content-Type: text/html; charset=UTF-8

or:

    Content-Type: application/json

Verify that requests containing unexpected or missing content types are rejected with appropriate headers (HTTP response status 406 Not Acceptable or 415 Unsupported Media Type).

</details>
------
- [ ] Verify that all API responses contain Content-Disposition: attachment; filename="api.json" (or other appropriate filename for the content type).
<details><summary>More information</summary>

API responses security headers

Description:

There are some security headers which should be properly configured in order to protect API callbacks against reflective file download and other types of injection.

Also check whether the API response is dynamic, i.e. whether user input is reflected in the response. If so, you must validate and encode the input, in order to prevent XSS and same-origin method execution attacks.

Solution:

Sanitize your API's input (in this case it should just allow alphanumeric characters); escaping is not sufficient.

Verify that all API responses contain X-Content-Type-Options: nosniff, to prevent the browser from interpreting files as something other than what is declared by the content type (this helps prevent XSS if the page is interpreted as HTML or JavaScript).

Add 'Content-Disposition: attachment; filename="filename.extension"', with the extension corresponding to the file extension and content type, on API responses that are not going to be rendered.

</details>
------
- [ ] Verify that a content security policy (CSPv2) is in place that helps mitigate impact for XSS attacks like HTML, DOM, JSON, and JavaScript injection vulnerabilities.
<details><summary>More information</summary>

Content security policy headers

Description:

The main use of the Content-Security-Policy header is to detect, report, and reject XSS attacks. The core issue in relation to XSS attacks is the browser's inability to distinguish between a script that is intended to be part of your application, and a script that has been maliciously injected by a third party. With the use of CSP (Content Security Policy), we can tell the browser which scripts are safe to execute and which scripts have most likely been injected by an attacker.

Solution:

A best practice for implementing CSP in your application is to externalize all JavaScript from the web pages.

So this:

    <script>
      function doSomething() {
        alert('Something!');
      }
    </script>

    <button onclick='doSomething();'>foobar!</button>

Must become this:

    <script src='doSomething.js'></script>
    <button id='somethingToDo'>Let's foobar!</button>

The header for this code could look something like:

    Content-Security-Policy: default-src 'self'; object-src 'none'; script-src https://mycdn.com

Since it is not entirely realistic to implement all JavaScript in external files, we can apply a sort of cross-site request forgery token to your inline JavaScript. This way the browser can again distinguish code which is part of the application from probable maliciously injected code; in CSP this is called the 'nonce'. Of course, this method is also very applicable to your existing code and designs. Now, to use this nonce you have to supply your inline script tags with the nonce attribute. Firstly, it is important that the nonce changes for each response, otherwise it would become guessable, so it should also contain high entropy and be hard to predict. Similar to the operation of CSRF tokens, the nonce becomes impossible for the attacker to predict, making it difficult to execute a successful XSS attack.

Inline JavaScript example containing nonce:

    <script nonce="sfsdf03nceI23wlsgle9h3sdd21">
    <!-- Your JavaScript code -->
    </script>

Matching header example:

    Content-Security-Policy: script-src 'nonce-sfsdf03nceI23wlsgle9h3sdd21'

There is a whole lot more to learn about the CSP header for in-depth implementation in your application. This knowledge base item just scratches the surface, and it is highly recommended to gain more in-depth knowledge about this powerful header.

Very important: even though the CSP header mitigates XSS attacks, your application still remains vulnerable to HTML and other code injections. It is therefore not a substitute for validation, sanitization and encoding of user input.
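
A minimal sketch of generating such a nonce per response (assuming a Flask application; your templates are expected to echo it into the inline script tags):

    import secrets

    from flask import Flask, g

    app = Flask(__name__)

    @app.before_request
    def make_nonce():
        g.csp_nonce = secrets.token_urlsafe(16)  # high entropy, changes on every request

    @app.after_request
    def set_csp(response):
        response.headers["Content-Security-Policy"] = (
            f"default-src 'self'; object-src 'none'; script-src 'nonce-{g.csp_nonce}'"
        )
        return response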

</details>
------
- [ ] Verify that all responses contain X-Content-Type-Options: nosniff.
<details><summary>More information</summary>

API responses security headers

Description:

There are some security headers which should be properly configured in order to protect API callbacks against reflective file download and other types of injection.

Also check whether the API response is dynamic, i.e. whether user input is reflected in the response. If so, you must validate and encode the input, in order to prevent XSS and same-origin method execution attacks.

Solution:

Sanitize your API's input (in this case it should just allow alphanumeric characters); escaping is not sufficient.

Verify that all API responses contain X-Content-Type-Options: nosniff, to prevent the browser from interpreting files as something other than what is declared by the content type (this helps prevent XSS if the page is interpreted as HTML or JavaScript).

Add 'Content-Disposition: attachment; filename="filename.extension"', with the extension corresponding to the file extension and content type, on API responses that are not going to be rendered.
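
A minimal sketch of attaching these headers to every API response (assuming a Flask application that only serves API content):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_security_headers(response):
        response.headers["X-Content-Type-Options"] = "nosniff"
        # Fixed, server-controlled filename; never derived from user input.
        response.headers["Content-Disposition"] = 'attachment; filename="api.json"'
        return response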

</details>
------
- [ ] Verify that HTTP Strict Transport Security headers are included on all responses and for all subdomains, such as Strict-Transport-Security: max-age=15724800; includeSubdomains.
<details><summary>More information</summary>

HTTP strict transport security

Description:

HTTP Strict Transport Security (HSTS) is an opt-in security enhancement that is specified by a web application through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. It also prevents HTTPS click-through prompts in browsers.

HSTS addresses the following threats:

  1. A user bookmarks or manually types http://example.com and is subject to a man-in-the-middle attacker: HSTS automatically redirects HTTP requests to HTTPS for the target domain.
  2. A web application that is intended to be purely HTTPS inadvertently contains HTTP links or serves content over HTTP: HSTS automatically redirects HTTP requests to HTTPS for the target domain.
  3. A man-in-the-middle attacker attempts to intercept traffic from a victim user using an invalid certificate, hoping the user will accept the bad certificate: HSTS does not allow a user to override the invalid certificate message.

Solution:

When users visit the application, it should set the following header. These headers should be set in a base class which always sets the header, no matter what page the users initially visit.

Simple example, using a long (1 year) max-age: Strict-Transport-Security: max-age=31536000

If all present and future subdomains will be HTTPS: Strict-Transport-Security: max-age=31536000; includeSubDomains

CAUTION: Site owners can use HSTS to identify users without cookies. This can lead to a significant privacy leak.

Cookies can be manipulated from subdomains, so omitting the "includeSubDomains" option permits a broad range of cookie-related attacks that HSTS would otherwise prevent by requiring a valid certificate for a subdomain. Ensuring that the "Secure" flag is set on all cookies will also prevent some, but not all, of the same attacks.

</details>
------
- [ ] Verify that a suitable "Referrer-Policy" header is included, such as "no-referrer" or "same-origin".
<details><summary>More information</summary>

Referrer policy header

Description: Requests made from a document, and navigations away from that document, are associated with a Referer header. While the header can be suppressed for links with the noreferrer link type, authors might wish to control the Referer header more directly for a number of reasons:

Privacy A social networking site has a profile page for each of its users, and users add hyperlinks from their profile page to their favorite bands. The social networking site might not wish to leak the user’s profile URL to the band web sites when other users follow those hyperlinks (because the profile URLs might reveal the identity of the owner of the profile).

Some social networking sites, however, might wish to inform the band web sites that the links originated from the social networking site but not reveal which specific user’s profile contained the links.

Security A web application uses HTTPS and a URL-based session identifier. The web application might wish to link to HTTPS resources on other web sites without leaking the user's session identifier in the URL.

Alternatively, a web application may use URLs which themselves grant some capability. Controlling the referrer can help prevent these capability URLs from leaking via referrer headers.

Note that there are other ways for capability URLs to leak, and controlling the referrer is not enough to control all those potential leaks.

Trackback A blog hosted over HTTPS might wish to link to a blog hosted over HTTP and receive trackback links.

Solution:

For more information about the policy and how it should be implemented please visit the following link,

https://www.w3.org/TR/referrerpolicy/referrerpolicies

</details>
------
- [ ] Verify that a suitable X-Frame-Options or Content-Security-Policy: frame-ancestors header is in use for sites where content should not be embedded in a third-party site.
<details><summary>More information</summary>

Include anti clickjacking headers

Description:

Clickjacking, also known as a "UI redress attack", is when an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top level page. Thus, the attacker is "hijacking" clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.

Using a similar technique, keystrokes can also be hijacked. With a carefully crafted combination of stylesheets, iframes, and text boxes, a user can be led to believe they are typing in the password to their email or bank account, but are instead typing into an invisible frame controlled by the attacker.

Solution:

To prevent your application from being clickjacked, you can add the X-Frame-Options header to your application. This header can be configured as:

X-Frame-Options: DENY

The page cannot be displayed in a frame, regardless of the site attempting to do so.

X-Frame-Options: SAMEORIGIN

The page can only be displayed in a frame on the same origin as the page itself.

X-Frame-Options: ALLOW-FROM uri

The page can only be displayed in a frame on the specified origin.

The page can only be displayed in a frame on the specified origin.

You may also want to consider including frame-breaking ("frame-busting") defenses for legacy browsers that do not support the X-Frame-Options header.

Source: https://www.codemagi.com/blog/post/194

</details>
------
- [ ] Verify that the application server only accepts the HTTP methods in use by the application or API, including pre-flight OPTIONS.
<details><summary>More information</summary>

HTTP request methods

Description:

HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. It is recommended to read about the different available methods, their purposes and limitations.

Available methods are:

GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations."

HEAD The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

POST The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.

PUT The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.

DELETE The DELETE method deletes the specified resource.

TRACE The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.

OPTIONS The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

CONNECT The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.

PATCH The PATCH method applies partial modifications to a resource.

Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause non-trivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as http://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.

By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request (note that idempotence refers to the state of the system after the request has completed, so while the action the server takes (e.g. deleting a record) or the response code it returns may be different on subsequent requests, the system state will be the same every time). Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.

In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may resubmit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration. Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled.

Solution:

Verify that the application accepts only a defined set of HTTP request methods, such as GET and POST and unused methods are explicitly blocked/disabled.
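
As a small sketch (assuming a Flask application; the route and handler are hypothetical), declaring the allowed methods per route makes the framework reject everything else with 405 Method Not Allowed:

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/articles", methods=["GET", "POST"])  # PUT, DELETE, TRACE, ... are rejected
    def articles():
        if request.method == "POST":
            return jsonify({"status": "created"}), 201
        return jsonify({"articles": []})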

</details>
------
- [ ] Verify that the supplied Origin header is not used for authentication or access control decisions, as the Origin header can easily be changed by an attacker.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify that the cross-domain resource sharing (CORS) Access-Control-Allow-Origin header uses a strict white-list of trusted domains to match against and does not support the "null" origin.
<details><summary>More information</summary>

Cross origin resource sharing

Description:

Cross-Origin Resource Sharing (CORS) is a mechanism that enables a web browser to perform 'cross-domain' requests using the XMLHttpRequest L2 API in a controlled manner. In the past, the XMLHttpRequest L1 API only allowed requests to be sent within the same origin, as it was restricted by the same-origin policy.

Solution:

Cross-origin requests have an Origin header that identifies the domain initiating the request and is always sent to the server. CORS defines the protocol used between a web browser and a server to determine whether a cross-origin request is allowed. In order to accomplish this goal, there are a few HTTP headers involved in this process, which are supported by all major browsers:

* Origin
* Access-Control-Request-Method
* Access-Control-Request-Headers
* Access-Control-Allow-Origin
* Access-Control-Allow-Credentials
* Access-Control-Allow-Methods
* Access-Control-Allow-Headers

Things you must consider when using CORS:

  1. Validate URLs passed to XMLHttpRequest.open. Current browsers allow these URLs to be cross domain; this behavior can lead to code injection by a remote attacker. Pay extra attention to absolute URLs.

  2. Ensure that URLs responding with Access-Control-Allow-Origin: * do not include any sensitive content or information that might aid an attacker in further attacks. Use the Access-Control-Allow-Origin header only on chosen URLs that need to be accessed cross-domain. Don't use the header for the whole domain.

  3. Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use the * wildcard nor blindly return the Origin header content without any checks).

  4. Keep in mind that CORS does not prevent the requested data from going to an unauthenticated location. It's still important for the server to perform usual CSRF prevention.

  5. While the RFC recommends a preflight request with the OPTIONS verb, current implementations might not perform this request, so it's important that "ordinary" (GET and POST) requests perform any access control necessary.

  6. Discard requests received over plain HTTP with HTTPS origins to prevent mixed content bugs.

  7. Don't rely only on the Origin header for access control checks. Browsers always send this header in CORS requests, but it may be spoofed outside the browser. Application-level protocols should be used to protect sensitive data.

NOTE: Modern application frameworks do dynamically allocation of the origin header, resulting in the browser also allowing to send the "AccessControlAllowCredentials: true" header as well in requests. Whenever JSON web tokens are being send in cookies rather than headers, potential attackers could abuse this behaviour to make unauthenticated XHR get requests on the authenticated users behalf to read sensitive information from the pages.

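A minimal whitelist sketch, assuming a Flask application; the domain names in TRUSTED_ORIGINS are placeholders you would replace with your own:

```python
# Minimal sketch: only echo the Origin header back when it is on a strict
# allow-list; never use "*" and never allow the literal "null" origin.
from flask import Flask, request

app = Flask(__name__)
TRUSTED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

@app.after_request
def add_cors_headers(response):
    origin = request.headers.get("Origin")
    if origin in TRUSTED_ORIGINS:
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Vary"] = "Origin"  # keep caches per-origin
    return response
```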

</details>
------
skf-integration[bot] commented 5 years ago

alt text

Security knowledge framework!


Does The application enforce the use of secure passwords

 Description:

Applications should encourage the use of strong passwords and passphrases. Preferably the
password policy should not put limitations or restrictions on the chosen passwords (for example
on the length of a password). Whenever the application supports strong passwords and
the use of password managers, the likelihood of an attacker performing a successful brute-force
attack drops significantly. It also increases the chance that the application can be used with the
users' password managers.

 Solution:

Verify that password entry fields allow, or encourage, the use of passphrases, and do not prevent
password managers, long passphrases or highly complex passwords from being entered.
A password policy ideally should require or allow the following (a minimal validation sketch follows the list):
* at least 12 characters in length
* passwords even longer than 64 characters are allowed
* every character from the Unicode charset is permitted (including emoji, kanji, multiple whitespaces, etc.)
* no limit on the number of characters of the same type (lowercase characters, uppercase characters, digits, symbols)

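A minimal sketch of such a policy check; the 128-character cap is an arbitrary assumption (well above 64) used only to keep password hashing cheap, not a recommendation from this item:

```python
# Minimal sketch: enforce a minimum length, allow very long passphrases and
# any Unicode characters, and impose no composition rules at all.
def validate_password(password: str, min_length: int = 12, max_length: int = 128) -> bool:
    # Reject only on length; do not restrict character classes or repetition.
    return min_length <= len(password) <= max_length
```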

Permit Password Change

Description:

Users should be able to update their password whenever necessary. For example, consider the scenario in which they tend to use the same password for multiple purposes. If this password is leaked, the users have to immediately update their credentials in every application where they are registered. Therefore, if the application does not provide an accessible password update functionality, there is a risk that a user's account may be taken over.

Solution:

Applications should provide users with a functionality that permits them to change their own password.


Unauthorized credential changes

 Description:

An application which offers user login functionality usually has an administration page
where user data can be modified. When the user wants to change this data, he should
specify his current password.

 Solution:

When changing user credentials or an email address, the user must always enter a valid
password in order to implement the changes. This is also called re-authentication or
step-up / adaptive authentication. Whenever a user re-authenticates himself, the current
session ID value should also be refreshed in order to fend off so-called "session hijackers".


Verify Breached Passwords

 Description:

Multiple databases of leaked credentials have been released in breaches over the years. If users choose passwords that have already been leaked, they are vulnerable to dictionary attacks.

 Solution:

Verify that passwords submitted during account registration, login, and password change are checked against a set of breached passwords. In case the chosen password has already been breached, the application must require the user to re-enter a non-breached password.

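One possible way to implement this check is the public "Pwned Passwords" k-anonymity range API (api.pwnedpasswords.com), where only the first five characters of the SHA-1 hash ever leave your system. A minimal sketch, assuming the third-party `requests` package is available:

```python
# Minimal sketch of a breached-password lookup via the k-anonymity range API.
import hashlib
import requests

def is_breached(password: str) -> bool:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=5)
    resp.raise_for_status()
    # Each response line is "SUFFIX:COUNT"; a matching suffix means a breach.
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())
```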

Provide Password Strength Checker

 Description:

Users may tend to choose easily guessable passwords. Therefore, it is suggested to implement a functionality that encourages them to set passwords of higher complexity.

 Solution:

Applications should provide users with a password strength meter during account registration and password change.


no password rotation policy

Description:
Some policies require users to change passwords periodically, often every 90 or 180 days. 
The benefit of password expiration, however, is debatable. Systems that implement such 
policies sometimes prevent users from picking a password too close to a previous selection.

This policy can often backfire. Some users find it hard to devise "good" passwords that are 
also easy to remember, so if people are required to choose many passwords because they have 
to change them often, they end up using much weaker passwords; the policy also encourages 
users to write passwords down. Also, if the policy prevents a user from repeating a recent password, 
this requires that there is a database in existence of everyone's recent passwords (or their hashes) 
instead of having the old ones erased from memory. Finally, users may change their password repeatedly
within a few minutes, and then change back to the one they really want to use, circumventing the 
password change policy altogether.

Solution:
Only force users to update their passwords when the password strength that is enforced by the application
is no longer sufficient to withstand brute-force attacks due to increases in computing power.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


Forget password functions

 Description:

Whenever the application provides a forgot-password functionality or another
type of recovery method, there are several hardened, proven ways to let
the user recover his password.

 Solution:

The recommended solution is to use TOTP (the Time-based One-Time Password algorithm). This
method is an example of a hash-based message authentication code (HMAC). It combines a
secret key with the current timestamp using a cryptographic hash function to generate
a one-time password. Because network latency and out-of-sync clocks can result in the password
recipient having to try a range of possible times to authenticate against, the timestamp typically
increases in 30-second intervals, which thus cuts the potential search space.

The other option is to use a mathematical-algorithm-based one-time password method. This
type of one-time password uses a complex mathematical algorithm, such as a hash chain, to generate
a series of one-time passwords from a secret shared key. Each password cannot be guessed even when
previous passwords are known. The open-source OATH algorithm is standardized; other algorithms are
covered by U.S. patents. Each password is observably unpredictable and independent of previous ones.
Therefore, an adversary would be unable to guess what the next password may be, even with
knowledge of all previous passwords.

An example of a hard-token implementation of such a mathematical algorithm would be a YubiKey;
an example of a soft-token TOTP implementation would be Google Authenticator.

The last resort would be to send a reset link by email. This mail should contain a reset link with
a token which is valid for a limited amount of time. Additional authentication based on soft tokens
(e.g. SMS token, native mobile applications, etc.) can be required as well before the link is
sent over. Also, make sure that whenever such a recovery cycle is started, the application does not
reveal the user's current password in any way.

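A minimal sketch of the TOTP option, assuming the third-party `pyotp` library is available; the function names are illustrative:

```python
# Minimal sketch: enrol a per-user TOTP secret and verify recovery codes.
import pyotp

def new_recovery_secret() -> str:
    # Store this per user, server side, when TOTP-based recovery is enrolled.
    return pyotp.random_base32()

def verify_recovery_code(secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)
```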

user notification on critical state changing operations

Description:
When a user is informed of critical operations, the user can determine
whether the notification was triggered by his own actions or whether it indicates
a potential compromise of his user account.

Solution:

Verify that secure notifications are sent to users after updates
to authentication details, such as credential resets, email or address changes,
logging in from unknown or risky locations. Users must also be notified when
password policies change or any other important updates that require action from the
user to increase the security of his account.

The use of push notifications (rather than SMS or email) is preferred, but in the
absence of push notifications, SMS or email is acceptable as long as no sensitive information is disclosed
in the notification.


Secrets should be secure random generated

 Description:

Secret keys, API tokens, or passwords must be dynamically generated. Whenever these tokens
are not dynamically generated, they can become predictable and be used by attackers to compromise
user accounts.

 Solution:

When it comes to API tokens and secret keys, these values have to be dynamically generated and valid only once.
The secret token should be cryptographically secure random, with at least 120 bits of effective entropy, salted with a unique and random 32-bit value and hashed with an approved one-way hashing function.

Passwords, on the other hand, should be created by the user himself, rather than assigning
the user a dynamically generated password. The user should be presented a one-time link with a
cryptographically random token by means of an email or SMS, which is used to activate his
account and provide a password of his own.

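A minimal sketch of generating such a token with the standard-library `secrets` module and comparing it in constant time; per the text above, you would normally store only a salted hash of the token rather than the raw value:

```python
# Minimal sketch: single-use activation tokens from a CSPRNG.
import hmac
import secrets

def new_activation_token() -> str:
    # 32 random bytes (roughly 256 bits of entropy), URL-safe for email links.
    return secrets.token_urlsafe(32)

def token_matches(stored: str, presented: str) -> bool:
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(stored, presented)
```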

Password leakage

 Description:

After completing a password recovery flow, the user should not be sent a plain-text
password to his email address. The application should also under no circumstances disclose the old or current password
to the user.

 Solution:

The application should under no circumstances disclose the user's current, old, or new password in plain text.
This behavior makes the application susceptible to side-channel attacks and makes the passwords
lose their confidentiality, since they could be compromised by someone looking over another user's shoulder to
see the password.


No shared knowledge for secret questions

 Description:

Whenever an application asks a user a secret question, e.g. in a forgot-password
functionality, these questions should not rely on shared knowledge an attacker could gather from
the web, in order to prevent him from compromising the account through this function.

 Solution:

Secret questions should never include shared knowledge or predictable, easily
guessable values.

Otherwise the answers to these secret questions can be easily looked up on the internet by means
of social media accounts and the like.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


The login functionality should always generate a new session id

 Description:

Whenever a user is successfully authenticated, the application should generate a
new session cookie.

 Solution:

The login functionality should always generate (and use) a new session ID after a
successful login. This is done to prevent an attacker doing a session fixation attack
on your users.

Some frameworks do not provide the possibility to change the session ID on login, such as
some .NET applications. Whenever this problem occurs, you could set an extra random cookie on
login with a strong token and store this value in a session variable.

You can then compare the cookie value with the session variable in order to prevent
session fixation: the authentication no longer relies solely on the session ID, and
the random cookie cannot be predicted or fixated by the attacker.

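A minimal sketch of the pattern described above, assuming a recent Flask version; the route, cookie name and secret key are placeholders:

```python
# Minimal sketch: on login, store a fresh random marker in both a cookie and
# the server-side session, and compare them on every subsequent request.
import hmac
import secrets
from flask import Flask, abort, make_response, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; load from configuration

@app.route("/login", methods=["POST"])
def login():
    # ... verify the submitted credentials here ...
    session.clear()                              # drop any pre-login state
    marker = secrets.token_urlsafe(32)
    session["login_marker"] = marker
    resp = make_response("logged in")
    resp.set_cookie("login_marker", marker,
                    secure=True, httponly=True, samesite="Lax")
    return resp

@app.before_request
def check_marker():
    if "login_marker" in session:
        cookie = request.cookies.get("login_marker", "")
        if not hmac.compare_digest(session["login_marker"], cookie):
            abort(401)  # marker mismatch: possible fixation or hijack attempt
```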

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


The logout functionality should revoke the complete session

 Description:

When the logout functionality does not revoke the complete session, an attacker could still
impersonate a user if he has access to the session cookie, even after the user has logged out of the application.

 Solution:

The logout functionality should revoke the complete session whenever a user
wants to terminate his session.

Each different framework has its own guide to achieve this revocation.
It is also recommended for you to make test cases which you follow to ensure
session revocation in your application.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


Password change leads to destroying concurrent sessions

 Description:

Whenever a user changes his password, the user should be granted the option
to kill all other concurrent sessions. This countermeasure helps to exclude
potential attackers living on a hijacked session.

Note: Whenever users are granted the possibility to change their passwords,
      do not forget to make them re-authenticate or to use a form of step-up
      or adaptive authentication.

 Solution:

Verify the user is prompted with the option to terminate all other active sessions 
after a successful change password process.


concurrent session handling

 Description:

You should limit and keep track of all the different active concurrent sessions.
Whenever the application discovers concurrent sessions, it should always notify the user
about this and give him the opportunity to end the other sessions.

With this defense in place it becomes harder for attackers to hijack a user's session, since
the user will be notified about concurrent sessions.

 Solution:

The application should keep track of and limit all the granted sessions.
It should store the user's IP address, session ID and user ID. After storing these values,
it should do regular checks to see if there are:

1. Multiple active sessions linked to same user id
2. Multiple active sessions from different locations
3. Multiple active sessions from different devices
4. Limit and destroy sessions when they exceed an accepted threshold.

The more critical the application becomes the lower the accepted threshold for
concurrent sessions should be.


Session cookies without the Secure attribute

 Description:

The Secure flag is an option that can be set when creating a cookie.
This flag instructs the browser never to send the cookie over an unencrypted
connection, which ensures that the session cookie cannot be disclosed over a non-encrypted link.

 Solution:

When creating a session cookie which is sent over an encrypted connection,
you should set the Secure flag. The Secure flag should be set on every Set-Cookie header.
This will instruct the browser to never send the cookie over HTTP.
The purpose of this flag is to prevent the accidental exposure of a cookie value if a user
follows an HTTP link.


Session cookies without the HttpOnly attribute

 Description:

The HttpOnly flag is an option that can be set when creating a cookie. This flag ensures that the cookie cannot be read or edited by JavaScript, so an attacker cannot steal the cookie if a cross-site scripting vulnerability is present in the application.

 Solution:

The HttpOnly flag should be set to deny malicious scripts access to cookie values such as the session ID. Also, disable unnecessary HTTP request methods such as TRACE. Misconfiguration of the allowed HTTP request methods can lead to the session cookie being stolen even though HttpOnly protection is in place.


same site attribute

Description:
SameSite prevents the browser from sending the cookie along with cross-site requests.
The main goal is to mitigate the risk of cross-origin information leakage. It also provides some
protection against cross-site request forgery attacks.

Solution:
The Strict value will prevent the cookie from being sent by the browser to the target site in all
cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this
would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion
forum or email, GitHub will not receive the session cookie and the user will not be able to access the project.

A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external
sites, so the Strict flag would be most appropriate here.

The Lax value provides a reasonable balance between security and usability for websites that want
to maintain the user's logged-in session after the user arrives from an external link. In the above GitHub scenario,
the session cookie would be allowed when following a regular link from an external website while blocking it in
CSRF-prone request methods (e.g. POST).

As of November 2017 the SameSite attribute is implemented in Chrome, Firefox, and Opera.
Since version 12.1, Safari also supports it. Windows 7 with IE 11 lacks support as of December 2018;
see caniuse.com for details.

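A minimal configuration sketch combining the Secure, HttpOnly and SameSite attributes from the sections above, assuming Flask is the framework in use:

```python
# Minimal sketch: harden the framework session cookie in one place.
from flask import Flask

app = Flask(__name__)
app.config.update(
    SESSION_COOKIE_SECURE=True,    # never sent over plain HTTP
    SESSION_COOKIE_HTTPONLY=True,  # not readable from JavaScript
    SESSION_COOKIE_SAMESITE="Lax", # or "Strict" for higher-risk applications
)
```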

host prefix

Description:

The "__Host-" prefix signals to the browser that both the Path=/ and Secure attributes are required,
and, at the same time, that the Domain attribute must not be present.


Cross subdomain cookie attack

 Description:

A quick overview of how it works:

1. A website www.example.com hands out subdomains to untrusted third parties.
2. One such party, Mallory, who now controls evil.example.com, lures Alice to her site.
3. A visit to evil.example.com sets a session cookie with the domain .example.com in Alice's browser.
4. When Alice visits www.example.com, this cookie will be sent with the request, as the cookie specification states, and Alice will have the session specified by Mallory's cookie.
5. Mallory can now use Alice's account.

 Solution:

In this scenario changing the sessionID on login does not make any difference since
Alice is already logged in when she visits Mallory's evil web page.

It is good practice to use a completely different domain for all trusted activity.

For example Google uses google.com for trusted activities and *.googleusercontent.com
for untrusted sites.

Also, when setting your cookies, specify which domains they are allowed to
be sent to. Especially on your trusted domain you do not want to leak cookies to unintended
subdomains. It is highly recommended not to use wildcards when setting this option.


High value transactions

 Description:

Whenever there are high-value transactions, a normal static username/password authentication method does
not suffice to ensure a high level of security. Whenever the application processes high-value transactions, ensure that
risk-based re-authentication, two-factor authentication or transaction signing is in place.

 Solution:

1. Risk-based authentication:
In authentication, risk-based authentication is a non-static authentication
system which takes into account the profile of the agent requesting access to
the system to determine the risk profile associated with that transaction.

The risk profile is then used to determine the complexity of the challenge.
Higher risk profiles lead to stronger challenges, whereas a static username/password may suffice for
lower-risk profiles. A risk-based implementation allows the application to challenge the user for additional
credentials only when the risk level is appropriate.

2. Two-factor authentication:
Multi-factor authentication (MFA) is a method of computer access control in which a user is
granted access only after successfully presenting several separate pieces of evidence to an
authentication mechanism – typically at least two of the following categories: knowledge (something they know),
possession (something they have), and inherence (something they are).

3. Transaction signing:
Transaction signing (or digital transaction signing) is the process of calculating a keyed hash function
to generate a unique string which can be used to verify both the authenticity and integrity of an online transaction.

A keyed hash is a function of the user's private or secret key and the transaction details,
such as the destination account number and the transfer amount.

To provide a high level of assurance of the authenticity and integrity of
the hash, it is essential to calculate the hash on a trusted device, such as a separate smart card reader.
Calculating a hash on an Internet-connected PC or mobile device such as a mobile telephone/PDA would be
counterproductive, as malware and attackers can attack these platforms and potentially subvert the signing process itself.

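A minimal sketch of the keyed-hash idea behind transaction signing, using only the standard library; in practice the key would live on a trusted device or HSM rather than in the web tier, and the field names here are illustrative:

```python
# Minimal sketch: HMAC over the transaction details, verified in constant time.
import hashlib
import hmac

def sign_transaction(secret_key: bytes, account: str, amount_cents: int) -> str:
    message = f"{account}|{amount_cents}".encode("utf-8")
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify_transaction(secret_key: bytes, account: str, amount_cents: int,
                       signature: str) -> bool:
    expected = sign_transaction(secret_key, account, amount_cents)
    return hmac.compare_digest(expected, signature)
```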

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


All authentication controls must fail securely

 Description:

Handling errors securely is a key aspect of secure coding.
There are two types of errors that deserve special attention. The first is exceptions
that occur in the processing of a security control itself. It's important that these
exceptions do not enable behavior that the countermeasure would normally not allow.
As a developer, you should consider that there are generally three possible outcomes
from a security mechanism:

1. Allow the operation
2. Disallow the operation
3. Exception

In general, you should design your security mechanisms so that a failure follows the same execution path
as disallowing the operation.

 Solution:

Make sure all access control systems are thoroughly tested for failing securely before
using them in your application. It is common to create dedicated unit tests especially
for this purpose.


Insecure direct object references

 Description:

Applications frequently use the actual name or key of an object when generating web pages. 
Applications don’t always verify the user is authorized for the target object. 
This results in an insecure direct object reference flaw. Testers can easily manipulate parameter 
values to detect such flaws and code analysis quickly shows whether authorization is properly verified.

The most classic example:
The application uses unverified data in a SQL call that is accessing account information:

String query = "SELECT * FROM accts WHERE account = ?";
PreparedStatement pstmt = connection.prepareStatement(query , ... );
pstmt.setString( 1, request.getParameter("acct"));
ResultSet results = pstmt.executeQuery();

The attacker simply modifies the ‘acct’ parameter in their browser to send whatever 
account number they want. If not verified, the attacker can access any user’s account, instead of 
only the intended customer’s account.

http://example.com/app/accountInfo?acct=notmyacct

 Solution:

Preventing insecure direct object references requires selecting an approach 
for protecting each user accessible object (e.g., object number, filename):

Use per-user or per-session indirect object references. This prevents attackers from directly
targeting unauthorized resources. For example, instead of using the resource's database key,
a drop-down list of six resources authorized for the current user could use the numbers 1 to 6 to
indicate which value the user selected. The application has to map the per-user indirect reference
back to the actual database key on the server.

Check access. Each use of a direct object reference from an untrusted source must include an access control 
check to ensure the user is authorized for the requested object.

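A minimal sketch combining both approaches; `load_account` and the `owner_id` field are hypothetical placeholders for your own data-access layer:

```python
# Minimal sketch: opaque per-session references plus an ownership check.
import secrets

def build_reference_map(session, account_ids):
    # Map opaque per-session tokens to the real database keys.
    refs = {secrets.token_urlsafe(8): acct_id for acct_id in account_ids}
    session["account_refs"] = refs
    return refs

def resolve_account(session, user_id, ref, load_account):
    acct_id = session.get("account_refs", {}).get(ref)
    if acct_id is None:
        raise PermissionError("unknown reference")
    account = load_account(acct_id)       # hypothetical data-access helper
    if account["owner_id"] != user_id:    # check access on every use
        raise PermissionError("not your account")
    return account
```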

Cross site request forgery

 Description:

Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web site,
email, blog, instant message, or program causes a user's web browser to perform an unwanted
action on a trusted site for which the user is currently authenticated.

The impact of a successful cross-site request forgery attack is limited to the
capabilities exposed by the vulnerable application. For example, this attack could result
in a transfer of funds, changing a password, or purchasing an item in the user's context.
In effect, CSRF attacks are used by an attacker to make a target system perform a
function (funds transfer, form submission, etc.) via the target's browser, without the
knowledge of the target user, at least until the unauthorized function has been committed.

 Solution:

To arm an application against automated attacks and tooling you need to use unique tokens
which are included in the forms of an application, API calls or AJAX requests.
Any state-changing operation requires a secure random token (e.g. a CSRF token) to protect
against CSRF attacks. The characteristics of a CSRF token are a unique, large random
value generated by a cryptographically secure random number generator.

The CSRF token is then added as a hidden field to forms and validated on the server side whenever
a user sends a request to the server.

Note:
Whenever the application is a REST service using tokens such as JWTs, and these tokens are sent
in request headers rather than stored in cookies, the application should not be susceptible to CSRF attacks, since a successful CSRF attack depends on the browser's cookie jar.

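A minimal sketch of a per-session CSRF token, assuming Flask; real projects would usually rely on an existing extension such as Flask-WTF instead, and the header name used here is a common convention rather than a standard:

```python
# Minimal sketch: issue a per-session token and verify it on state changes.
import hmac
import secrets
from flask import Flask, abort, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; load from configuration

def csrf_token() -> str:
    # Embed this value as a hidden form field or custom request header.
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

@app.before_request
def verify_csrf():
    if request.method in ("POST", "PUT", "PATCH", "DELETE"):
        expected = session.get("csrf_token", "")
        sent = request.form.get("csrf_token") or request.headers.get("X-CSRF-Token", "")
        if not expected or not hmac.compare_digest(expected, sent):
            abort(403)
```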

Two factor authentication

 Description:

Two-factor authentication must be implemented to protect your application's users against unauthorized use of the application.

Whenever the user's username and password are leaked or disclosed by an application in whatever fashion, the
user's account should still be protected by two-factor authentication mechanisms to prevent attackers
from logging in with the credentials.

 Solution:

Multifactor authentication (MFA) is a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism – typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are)

Examples of two-factor/multi-factor authentication are:

1. Google Authenticator
   Google Authenticator is an application that implements two-step verification services using the Time-based
   One-time Password Algorithm (TOTP) and the HMAC-based One-time Password Algorithm (HOTP).

2. YubiKey

  The YubiKey is a hardware authentication device manufactured by Yubico that supports one-time passwords, public key
  encryption and authentication, and the Universal 2nd Factor (U2F) protocol developed by the FIDO Alliance (FIDO U2F).
  It allows users to securely log into their accounts by emitting one-time passwords or using a FIDO-based public/private
  key pair generated by the device.


Directory listing

 Description:

Whenever directory listing is enabled, an attacker could gain sensitive information about
the system's hierarchical structure and gain knowledge about directories or files which should
possibly not be publicly accessible. An attacker could use this information to
expand his attack surface. In some cases this could even lead to an attacker gaining knowledge about
credentials or old vulnerable demo functions, which might lead to remote code execution.

 Solution:

Different types of servers require a different approach in order to disable
directory listing. For instance, Apache uses a .htaccess file in order to disable directory listing.
As for IIS 7, directory listing is disabled by default.


Step up or adaptive authentication

 Description:

Whenever a user browses a section of a web-based application that contains sensitive information, the user should be challenged to authenticate again using a higher-assurance credential before being granted access to this information.
This is to prevent attackers from reading sensitive information after they have successfully hijacked a user account.

 Solution:

Verify the application has additional authorization (such as step-up or adaptive authentication) so the user is challenged before being granted access to sensitive information. This rule also applies to making critical changes to an account or action.
Segregation of duties should be applied for high-value applications to enforce anti-fraud controls appropriate to the risk of the application and past fraud.


Verify that structured data is strongly typed and validated

 Description:

Whenever structured data is strongly typed and validated against a defined schema, the application
can be developed as a defensible, proactive application. The application can then detect everything
that is outside of its intended operation by means of the defined schemas, and it should
reject the input if the schema checks fail.

 Solution:

Verify that structured data is strongly typed and validated against a defined schema
including allowed characters, length and pattern (e.g. credit card numbers or telephone numbers,
or validating that two related fields are reasonable, such as checking that suburbs and zip or
post codes match).

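A minimal sketch using the third-party `jsonschema` package (assumed to be installed) to enforce types, lengths and patterns on structured input; the schema and field names are illustrative:

```python
# Minimal sketch: reject any payload that does not match the defined schema.
from jsonschema import ValidationError, validate

ORDER_SCHEMA = {
    "type": "object",
    "properties": {
        "card_number": {"type": "string", "pattern": r"^[0-9]{13,19}$"},
        "zip_code": {"type": "string", "maxLength": 10},
    },
    "required": ["card_number", "zip_code"],
    "additionalProperties": False,
}

def validate_order(payload: dict) -> bool:
    try:
        validate(instance=payload, schema=ORDER_SCHEMA)
        return True
    except ValidationError:
        return False  # anything outside the defined schema is rejected
```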

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.


XSS injection

 Description:

Every time the application receives user input, whether it is shown on screen or processed
in the application background, these parameters should be escaped or checked for malicious
content in order to prevent cross-site scripting injections.
When an attacker gains the possibility to perform an XSS injection,
he is given the opportunity to inject HTML and JavaScript code directly into the
application. This could lead to accounts being compromised by stealing session cookies, or it could directly
affect the operation of the target application.

Although templating engines (Razor, Twig, Jinja, etc.) and context-aware frameworks (Angular, React, etc.)
do a lot of auto-escaping for you, these frameworks should always be validated for effectiveness.

 Solution:

In order to prevent XSS injections, all user input should be escaped or encoded.
You could start by validating user input as soon as it enters the application,
preferably using a so-called whitelisting method.
This means you should not check for malicious content such as particular tags,
but only allow the expected input. Any input which is outside of the intended operation
of the application should immediately be detected and rejected.
Do not try to repair or transform the input in any way, because that could introduce a new type of attack by converting characters.

The second step would be encoding all the parameters or user input before putting them into
your HTML, with encoding libraries specially designed for this purpose.

You should take into consideration that there are several contexts for encoding user input when
escaping XSS injections. These contexts include, among others:

* HTML encoding, for whenever your user input is displayed directly in your HTML.
* HTML attribute encoding, the type of encoding/escaping that should be applied
  whenever your user input is displayed in an attribute of your HTML tags.
* HTML URL encoding, the type of encoding/escaping that should be applied whenever you use user input in an HREF attribute.

JavaScript encoding should be used whenever parameters are rendered via JavaScript; your application may catch ordinary injections at first glance, but it still remains vulnerable to JavaScript-encoded payloads which will not be caught by the normal HTML encoding/escaping methods.

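A minimal sketch of context-aware encoding, assuming `markupsafe` (bundled with Jinja2/Flask) is available for HTML and attribute escaping; the example input string is purely illustrative:

```python
# Minimal sketch: encode the same user input for different output contexts.
from urllib.parse import quote

from markupsafe import escape

user_input = '<script>alert(1)</script>" onmouseover="evil()'

html_body_safe = escape(user_input)      # HTML body context
html_attr_safe = escape(user_input)      # attribute context (always quote the attribute)
href_safe = quote(user_input, safe="")   # URL/HREF context
```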

type checking and length checking

 Description:

Type checking, length checking and whitelisting are essential parts of a defense-in-depth strategy to make
your application more resilient against input injection attacks.

Example:

```
$query = "SELECT * FROM pages WHERE id=" . mysql_real_escape_string($_GET['id']);
```

This PHP example does not effectively mitigate the SQL injection, because mysql_real_escape_string() only protects against string-based injection; the unquoted numeric parameter remains injectable.

Now, if this application also had an additional check to validate that the value of the $_GET['id'] parameter was indeed, as expected, an integer, and rejected the request if this condition was false, the attack would have been effectively mitigated.

Solution:

All user-supplied input that falls outside of the intended operation of the application should be rejected by the application.

Syntax and semantic validity: an application should check that data is both syntactically and semantically valid (in that order) before using it in any way (including displaying it back to the user).

Syntactic validity means that the data is in the form that is expected. For example, an application may allow a user to select a four-digit "account ID" to perform some kind of operation. The application should assume the user is entering a SQL injection payload, and should check that the data entered by the user is exactly four digits in length and consists only of numbers (in addition to utilizing proper query parameterization).

Semantic validity includes only accepting input that is within an acceptable range for the given application functionality and context. For example, a start date must be before an end date when choosing date ranges.

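A minimal sketch of the same idea in Python: validate the syntax of the identifier before use, and still pass it as a bound parameter; the table and column names are illustrative:

```python
# Minimal sketch: syntactic validation plus a parameterized query.
import sqlite3

def get_page(conn: sqlite3.Connection, raw_id: str):
    if not raw_id.isdigit() or len(raw_id) > 9:   # syntactic validity check
        raise ValueError("id must be a short positive integer")
    page_id = int(raw_id)
    cur = conn.execute("SELECT * FROM pages WHERE id = ?", (page_id,))
    return cur.fetchone()
```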
</details>
<br/>
------

- [ ] **Does the sprint implement functions that reflect user supplied input on the side of the client?**
------

- [ ] **Does the sprint implement functions that utilize LDAP?**
------

- [ ] **Does the sprint implement functions that utilize OS commands?**
------

- [ ] **Does the sprint implement functions that get/grabs files from the file system?**
------

- [ ] **Does the sprint implement functions that parse or digests XML?**
------

- [ ] **Does the sprint implement functions that deserializes objects (JSON, XML and YAML)**
------
- [ ] Verify that the application correctly restricts XML parsers to only use the most restrictive configuration possible and to ensure that unsafe features such as resolving external entities are disabled to prevent XXE.
<details><summary>More information</summary>

XXE injections

Description:

Processing of an XML eXternal Entity containing tainted data may lead to the disclosure of confidential information and other system impacts. The XML 1.0 standard defines the structure of an XML document. The standard defines a concept called an entity, which is a storage unit of some type.

There exists a specific type of entity, an external general parsed entity, often shortened to an external entity, that can access local or remote content via a declared system identifier, and the XML processor may disclose confidential information normally not accessible by the application. Attacks can include disclosing local files, which may contain sensitive data such as passwords or private user data.

Solution:

Disable the possibility to fetch resources from an external source. This is normally done in the configuration of the used XML parser.

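A minimal sketch of a hardened parser, assuming the third-party `defusedxml` package is available; it refuses external entities and other dangerous constructs by default:

```python
# Minimal sketch: parse untrusted XML without resolving external entities.
from defusedxml import ElementTree as SafeET

def parse_untrusted_xml(xml_bytes: bytes):
    # Raises an exception instead of expanding external entities (XXE).
    return SafeET.fromstring(xml_bytes)
```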
</details>
<br/>
------
- [ ] Verify that deserialization of untrusted data is avoided or is protected in both custom code and third-party libraries (such as JSON, XML and YAML parsers).
<details><summary>More information</summary>

Insecure object deserialization

Description:

Serialization is the process of turning some object into a data format that can be restored later. People often serialize objects in order to save them to storage, or to send as part of communications.

Deserialization is the reverse of that process, taking data structured from some format, and rebuilding it into an object. Today, the most popular data format for serializing data is JSON. Before that, it was XML.

However, many programming languages offer a native capability for serializing objects. These native formats usually offer more features than JSON or XML, including customizability of the serialization process.

Unfortunately, the features of these native deserialization mechanisms can be repurposed for malicious effect when operating on untrusted data. Attacks against deserializers have been found to allow denial-of-service, access control, and remote code execution (RCE) attacks.

Solution:

Verify that serialized objects use integrity checks or are encrypted to prevent hostile object creation or data tampering.

A great reduction of risk is achieved by avoiding native (de)serialization formats. By switching to a pure data format like JSON or XML, you lessen the chance of custom deserialization logic being repurposed towards malicious ends.

Many applications rely on a data-transfer object pattern that involves creating a separate domain of objects for the explicit purpose of data transfer. Of course, it's still possible that the application will make security mistakes after a pure data object is parsed.

If the application knows before deserialization which messages will need to be processed, it can sign them as part of the serialization process. The application could then choose not to deserialize any message which doesn't have an authenticated signature.

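A minimal sketch of the "sign before deserializing" idea, using a plain JSON payload and an HMAC integrity check; the wire format (hex tag, dot, payload) is an illustrative choice:

```python
# Minimal sketch: verify an HMAC over the payload before parsing it.
import hashlib
import hmac
import json

def serialize(obj, key: bytes) -> bytes:
    payload = json.dumps(obj).encode("utf-8")
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest().encode("ascii")
    return tag + b"." + payload

def deserialize(blob: bytes, key: bytes):
    tag, _, payload = blob.partition(b".")
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest().encode("ascii")
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return json.loads(payload)
```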
</details>
<br/>
------
- [ ] Verify that when parsing JSON in browsers or JavaScript-based backends, JSON.parse is used to parse the JSON document. Do not use eval() to parse JSON.
<details><summary>More information</summary>

Parsing JSON with Javascript

Description:

The eval() function evaluates or executes an argument.

If the argument is an expression, eval() evaluates the expression. If the argument is one or more JavaScript statements, eval() executes the statements.

This is exactly the reason why eval() should NEVER be used to parse JSON or other formats of data which could possibly contain malicious code.

Solution:

For the purpose of parsing JSON we recommend the use of JSON.parse. Even though this function is more trustworthy, you should always build your own security checks and encoding routines around JSON.parse before mutating the data or passing it on to a view to be displayed in your HTML.

</details>
<br/>
------

- [ ] **Does the sprint implement functions that process sensitive data?**
------

- [ ] **Does the sprint implement functions that impact logging?**
------
- [ ] Verify that the application does not log other sensitive data as defined under local privacy laws or relevant security policy. ([C9](https://www.owasp.org/index.php/OWASP_Proactive_Controls#tab=Formal_Numbering))
<details><summary>More information</summary>

User credentials in audit logs

Description:

Whenever user credentials are included in an audit log, this becomes a risk as soon as an attacker gains access to one of these log files.

Solution:

Instead of storing user credentials, you may want to use user IDs in order to identify the user in the log files.

</details>
<br/>
------

- [ ] **Does the sprint implement functions that store sensitive information?**
------
- [ ] Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII.
<details><summary>More information</summary>

Client side storage

Description:

Client-side storage is also known as offline storage or web storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage.

Solution:

Verify that authenticated data is cleared from client storage, such as the browser DOM, after the session is terminated. This also goes for other session and local storage information which could assist an attacker in launching a successful attack.

Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII (personal identifiable information).

</details>
<br/>
------
- [ ] Verify that authenticated data is cleared from client storage, such as the browser DOM, after the client or session is terminated.
<details><summary>More information</summary>

Client side storage

Description:

Client-side storage is also known as offline storage or web storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage.

Solution:

Verify that authenticated data is cleared from client storage, such as the browser DOM, after the session is terminated. This also goes for other session and local storage information which could assist an attacker in launching a successful attack.

Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII (personal identifiable information).

</details>
<br/>
------
- [ ] Verify that sensitive data is sent to the server in the HTTP message body or headers, and that query string parameters from any HTTP verb do not contain sensitive data.
<details><summary>More information</summary>

GET POST requests

Description:

Authors of services which use the HTTP protocol SHOULD NOT use GET-based forms for the submission of sensitive data, because this will cause the data to be encoded in the Request-URI. Many existing servers, proxies, and browsers will log the request URL in some place where it might be visible to third parties. Servers can use POST-based form submission instead. GET parameters are also more likely to be vulnerable to XSS. Please refer to the XSS manual in the knowledge base for more information.

Solution:

Whenever transmitting sensitive data, always do this by means of a POST request or a header. Note: avoid user input in your application headers, as this could lead to vulnerabilities. Also make sure you disable all other HTTP request methods which are unnecessary for your application's operation, such as PUT, TRACE, DELETE, OPTIONS, etc., since allowing these request methods could lead to vulnerabilities and injections.

</details>
<br/>
------
- [ ] Verify that users have a method to remove or export their data on demand.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------

- [ ] **Does the sprint implement functions that store sensitive information?**
------

- [ ] **Does the sprint implement/changes TLS configuration?**
------
- [ ] Verify using online or up to date TLS testing tools that only strong algorithms, ciphers, and protocols are enabled, with the strongest algorithms and ciphers set as preferred.
<details><summary>More information</summary>

TLS settings are in line with current leading practice

Description:

TLS settings must always be in line with current leading practice. Whenever TLS settings and ciphers get outdated, the TLS connection can be degraded or broken and used by attackers to eavesdrop on user traffic to the application.

Solution:

There should be structural scans run regularly against the application's TLS settings and configuration to check whether the TLS settings are in line with current leading practice.

This could be achieved by using the SSL Labs API or the OWASP O-Saft project.

O-Saft is an easy-to-use tool that shows information about an SSL certificate and tests the SSL connection against a given list of ciphers and various SSL configurations.

It's designed to be used by penetration testers, security auditors or server administrators. The idea is to show the important information or the special checks with a simple call of the tool. However, it provides a wide range of options so that it can be used for comprehensive and special checks by experienced people.

While doing these tests also take into consideration the following configuration on the server side:

Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.

</details>
<br/>
------
- [ ] Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.
<details><summary>More information</summary>

TLS settings are in line with current leading practice

Description:

TLS settings must always be in line with current leading practice. Whenever TLS settings and ciphers get outdated, the TLS connection can be degraded or broken and used by attackers to eavesdrop on user traffic to the application.

Solution:

There should be structural scans run regularly against the application's TLS settings and configuration to check whether the TLS settings are in line with current leading practice.

This could be achieved by using the SSL Labs API or the OWASP O-Saft project.

O-Saft is an easy-to-use tool that shows information about an SSL certificate and tests the SSL connection against a given list of ciphers and various SSL configurations.

It's designed to be used by penetration testers, security auditors or server administrators. The idea is to show the important information or the special checks with a simple call of the tool. However, it provides a wide range of options so that it can be used for comprehensive and special checks by experienced people.

While doing these tests also take into consideration the following configuration on the server side:

Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.

</details>
<br/>
------

- [ ] **Does the sprint implement changes that affect and change CI/CD?**
------
- [ ] Verify that the application employs integrity protections, such as code signing or sub-resource integrity. The application must not load or execute code from untrusted sources, such as loading includes, modules, plugins, code, or libraries from untrusted sources or the Internet.
<details><summary>More information</summary>

code signing

Description: Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity.

Code signing can provide several valuable features. The most common use of code signing is to provide security when deploying; in some programming languages, it can also be used to help prevent namespace conflicts. Almost every code signing implementation will provide some sort of digital signature mechanism to verify the identity of the author or build system, and a checksum to verify that the object has not been modified. It can also be used to provide versioning information about an object or to store other metadata about an object.

Solution: Sign your code and validate the signatures (checksums) of your code and third-party components to confirm the integrity of the deployed components.

</details>
<br/>
------
- [ ] Verify that the application has protection from sub-domain takeovers if the application relies upon DNS entries or DNS sub-domains, such as expired domain names, out of date DNS pointers or CNAMEs, expired projects at public source code repos, or transient cloud APIs, serverless functions, or storage buckets (autogen-bucket-id.cloud.example.com) or similar. Protections can include ensuring that DNS names used by applications are regularly checked for expiry or change.
<details><summary>More information</summary>

sub domain take over

Description: Subdomain takeover is the process of registering a non-existing domain name to gain control over another domain. The most common scenario of this process is as follows:

A domain name (e.g., sub.example.com) uses a CNAME record to another domain (e.g., sub.example.com CNAME anotherdomain.com). At some point in time, anotherdomain.com expires and is available for registration by anyone. Since the CNAME record is not deleted from the example.com DNS zone, anyone who registers anotherdomain.com has full control over sub.example.com for as long as the DNS record is present.

The implications of a subdomain takeover can be pretty significant. Using a subdomain takeover, attackers can send phishing emails from the legitimate domain, perform cross-site scripting (XSS), or damage the reputation of the brand which is associated with the domain.

Source: https://0xpatrik.com/subdomain-takeover-basics/

</details>
<br/>
------

- [ ] **Does this sprint introduce functions with critical business logic that needs to be reviewed?**
------
- [ ] Verify the application will only process business logic flows with all steps being processed in realistic human time, i.e. transactions are not submitted too quickly.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------
- [ ] Verify the application has appropriate limits for specific business actions or transactions which are correctly enforced on a per user basis.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------
- [ ] Verify the application has sufficient anti-automation controls to detect and protect against data exfiltration, excessive business logic requests, excessive file uploads or denial of service attacks.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------
- [ ] Verify the application has business logic limits or validation to protect against likely business risks or threats, identified using threat modelling or similar methodologies.
<details><summary>More information</summary>

Threat modeling

Description:

Threat modeling is a procedure for optimizing Network/ Application/ Internet Security by identifying objectives and vulnerabilities, and then defining countermeasures to prevent, or mitigate the effects of, threats to the system. A threat is a potential or actual undesirable event that may be malicious (such as DoS attack) or incidental (failure of a Storage Device). Threat modeling is a planned activity for identifying and assessing application threats and vulnerabilities.

Solution:

Threat modeling is best applied continuously throughout a software development project. The process is essentially the same at different levels of abstraction, although the information gets more and more granular throughout the lifecycle. Ideally, a high-level threat model should be defined in the concept or planning phase, and then refined throughout the lifecycle. As more details are added to the system, new attack vectors are created and exposed. The ongoing threat modeling process should examine, diagnose, and address these threats.

Note that it is a natural part of refining a system for new threats to be exposed. For example, when you select a particular technology such as Java for example you take on the responsibility to identify the new threats that are created by that choice. Even implementation choices such as using regular expressions for validation introduce potential new threats to deal with.

More indepth information about threat modeling can be found at: https://www.owasp.org/index.php/Application_Threat_Modeling

</details>
<br/>
------
- [ ] Verify the application does not suffer from "time of check to time of use" (TOCTOU) issues or other race conditions for sensitive operations.
<details><summary>More information</summary>

race conditions

Description: A race condition is a flaw that produces an unexpected result when the timing of actions impacts other actions. An example may be seen in a multi-threaded application where actions are being performed on the same data. Race conditions, by their very nature, are difficult to test for.

Race conditions may occur when a process is critically or unexpectedly dependent on the sequence or timings of other events. In a web application environment, where multiple requests can be processed at a given time, developers may leave concurrency to be handled by the framework, server, or programming language.

Solution:

One common solution to prevent race conditions is known as locking. This ensures that at any given time, at most one thread can modify the database. Many databases provide functionality to lock a given row when a thread is accessing it.
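
As a rough sketch of this locking approach (hypothetical names, not an SKF example): the same idea applies whether you use an in-process lock as below or a database row lock such as SELECT ... FOR UPDATE.

    # Guard a check-then-act sequence with a lock so that two concurrent
    # requests cannot both pass the balance check and overdraw the account.
    import threading

    balance_lock = threading.Lock()
    account = {"balance": 100}

    def withdraw(amount):
        # Without the lock, two threads could both observe a sufficient
        # balance and both withdraw, driving the balance negative.
        with balance_lock:
            if account["balance"] >= amount:
                account["balance"] -= amount
                return True
            return False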

</details>
<br/>
------
- [ ] Verify the application has configurable alerting when automated attacks or unusual activity is detected.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------

- [ ] **Does the sprint implement functions that allow users to upload/download files?**
------
- [ ] Verify that user-submitted filename metadata is not used directly with system or framework file and URL API to protect against path traversal.
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code to the system to be attacked. Then the attack only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish the first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

* Do not accept large files that could fill up storage or cause a denial of service attack.
* Check the user input (filename) for allowed extensions such as .jpg, .png, etc. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply use ".jpg.php" to bypass your validation.
* Check the user input (filename) for possible path traversal patterns in order to prevent uploads outside of the intended directory.
* Check whether the filename already exists before uploading in order to prevent the overwriting of files.

For serving the files back, there needs to be a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also do a MIME type check. This is a good protection, however not when the MIME type is read from the POST request: that header cannot be trusted, since it can easily be manipulated by an attacker.

The best way to check the MIME type is to inspect the file on the server after uploading and determine the type from the file content itself, deleting it whenever it does not comply with the expected values.
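
A minimal sketch of the filename, extension and size checks above, assuming Python; the upload directory, size limit and extension whitelist are placeholders, and the upload directory is taken to live outside the document root:

    import os
    import uuid

    ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}   # hypothetical whitelist
    UPLOAD_DIR = "/srv/uploads"                               # outside the document root
    MAX_UPLOAD_BYTES = 5 * 1024 * 1024                        # reject large files early

    def store_upload(filename, data):
        if len(data) > MAX_UPLOAD_BYTES:
            raise ValueError("file too large")

        # Strip any path components to defeat "../../" traversal attempts.
        filename = os.path.basename(filename)

        # Validate the LAST extension so "shell.jpg.php" is rejected.
        ext = os.path.splitext(filename)[1].lower()
        if ext not in ALLOWED_EXTENSIONS:
            raise ValueError("extension not allowed")

        # Store under a server-generated name so existing files are never overwritten.
        target = os.path.join(UPLOAD_DIR, f"{uuid.uuid4().hex}{ext}")
        with open(target, "wb") as fh:
            fh.write(data)
        return target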

</details>
<br/>
------
- [ ] Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure, creation, updating or removal of local files (LFI).
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code to the system to be attacked. Then the attack only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish the first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

* Do not accept large files that could fill up storage or cause a denial of service attack.
* Check the user input (filename) for allowed extensions such as .jpg, .png, etc. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply use ".jpg.php" to bypass your validation.
* Check the user input (filename) for possible path traversal patterns in order to prevent uploads outside of the intended directory.
* Check whether the filename already exists before uploading in order to prevent the overwriting of files.

For serving the files back, there needs to be a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also do a MIME type check. This is a good protection, however not when the MIME type is read from the POST request: that header cannot be trusted, since it can easily be manipulated by an attacker.

The best way to check the MIME type is to inspect the file on the server after uploading and determine the type from the file content itself, deleting it whenever it does not comply with the expected values.

</details>
<br/>
------
- [ ] Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure or execution of remote files (RFI); which may also lead to SSRF.
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code to the system to be attacked. Then the attack only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish the first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

* Do not accept large files that could fill up storage or cause a denial of service attack.
* Check the user input (filename) for allowed extensions such as .jpg, .png, etc. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply use ".jpg.php" to bypass your validation.
* Check the user input (filename) for possible path traversal patterns in order to prevent uploads outside of the intended directory.
* Check whether the filename already exists before uploading in order to prevent the overwriting of files.

For serving the files back, there needs to be a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also do a MIME type check. This is a good protection, however not when the MIME type is read from the POST request: that header cannot be trusted, since it can easily be manipulated by an attacker.

The best way to check the MIME type is to inspect the file on the server after uploading and determine the type from the file content itself, deleting it whenever it does not comply with the expected values.

</details>
<br/>
------
- [ ] Verify that the application protects against reflective file download (RFD) by validating or ignoring user-submitted filenames in a JSON, JSONP, or URL parameter, the response Content-Type header should be set to text/plain, and the Content-Disposition header should have a fixed filename.
<details><summary>More information</summary>

RFD and file download injections

Description:

Reflective file download occurs whenever an attacker can "forge" a download through a misconfiguration of your "Content-Disposition" and "Content-Type" headers. Instead of having to upload a malicious file to the web server, the attacker can force the browser to download a malicious file by abusing these headers and setting the file extension to any type he wants.

Whenever user input is also reflected back into that download, it can be used to forge attacks. The attacker can present a malicious file to unsuspecting victims who trust the domain from which the download was presented.

File download injection is a similar type of attack, except that it becomes possible whenever user input is reflected into the "filename=" parameter of the "Content-Disposition" header. The attacker again can force the browser to download a file with his own choice of extension and set the content of this file by injecting it directly into the response, like filename=evil.bat%0A%0D%0A%0DinsertEvilStringHere

Whenever the user now opens the downloaded file the attacker can gain full control over the target’s device.

Solution:

First, never use user input directly in your headers, since an attacker can then take control over them.

Secondly, you should check whether a filename really exists before presenting it to users. You could also create a whitelist of all files which are allowed to be downloaded and terminate requests whenever they do not match.

Also, you should disable the use of "path parameters". They increase the attacker's attack surface and also cause a lot of other vulnerabilities. Lastly, you should sanitize and encode all your user input as much as possible. Reflective file downloads depend on user input being reflected in the response headers; whenever this input has been sanitized and encoded it should not do any harm to any system it is executed on.
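
A minimal sketch of the whitelist approach, assuming Flask; the download directory and the allowed filenames are placeholders:

    from flask import Flask, abort, send_from_directory

    app = Flask(__name__)

    DOWNLOAD_DIR = "/srv/downloads"                      # hypothetical path
    ALLOWED_DOWNLOADS = {"report.pdf", "manual.pdf"}     # server-controlled whitelist

    @app.route("/download/<name>")
    def download(name):
        # The filename must match the whitelist exactly; user input never
        # reaches the Content-Disposition header in any other form.
        if name not in ALLOWED_DOWNLOADS:
            abort(404)
        return send_from_directory(DOWNLOAD_DIR, name, as_attachment=True)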

</details>
<br/>
------
- [ ] Verify that untrusted file metadata is not used directly with system API or libraries, to protect against OS command injection.
<details><summary>More information</summary>

File IO commands

Description:

I/O commands allow you to own, use, read from, write to, and close devices, and to direct I/O operations to a device. Whenever user-supplied input (i.e. file names and/or file data) is used directly in these commands, this could lead to path traversal, local file include, file MIME type, and OS command injection vulnerabilities.

Solution:

File names and file contents should be sanitized before being used in I/O commands.

</details>
<br/>
------
- [ ] Verify that files obtained from untrusted sources are stored outside the web root, with limited permissions, preferably with strong validation.
<details><summary>More information</summary>

File upload outside document root

Description:

Files that are uploaded by users or other untrusted services should always be placed outside of the document root. This prevents malicious files such as PHP/HTML/JavaScript files from being parsed or executed by the web server.

Should an attacker succeed in bypassing the file upload restrictions and upload a malicious file, it would be impossible for the attacker to have these files parsed, since they are not located inside of the application's document root.

Solution:

Files should be stored outside of the application's document root. Preferably, files should be stored on a separate file server which serves files back and forth to the application server.

Files should always be stored outside of the scope of the attacker to prevent files from being parsed or executed.

When storing files outside of the document root, take into consideration potential path traversal injections in the application's file name, such as "../html/backtoroot/file.php". Whenever this filename is used directly in the path that is used to store files, it could be used to manipulate the storage path.

</details>
<br/>
------
- [ ] Verify that files obtained from untrusted sources are scanned by antivirus scanners to prevent upload of known malicious content.
<details><summary>More information</summary>

File upload anti virus check

Description:

Whenever files from untrusted sources are uploaded to the server, there should be additional checks in place to verify whether these files contain viruses (malware, trojans, ransomware).

Solution:

After uploading, the file should be placed in quarantine and antivirus software has to inspect the file for malicious content. Antivirus software that has a command-line interface is a prerequisite for doing such scans. There are also APIs available from services such as "VirusTotal.com".

This site provides a free service in which your file is given as input to numerous antivirus products, and you receive back a detailed report with the evidence resulting from the scanning process.
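
A minimal sketch of such a scan, assuming the ClamAV command-line scanner (clamscan) is installed; with clamscan, exit code 0 means the file is clean and 1 means a virus was found:

    import subprocess

    def file_is_clean(path):
        # Scan the quarantined upload before moving it to permanent storage.
        result = subprocess.run(
            ["clamscan", "--no-summary", path],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0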

</details>
<br/>
------
- [ ] Verify that the web tier is configured to serve only files with specific file extensions to prevent unintentional information and source code leakage. For example, backup files (e.g. .bak); temporary working files (e.g. .swp); compressed files (.zip, .tar.gz, etc) and other extensions commonly used by editors should be blocked unless required.
<details><summary>More information</summary>

Serve files whitelist.

Description:

Configuring the web server to only serve files with an expected file extension helps prevent information leakage whenever developers forget to remove backup files or zipped versions of the web application from the webserver.

Solution:

Verify that the web tier is configured to serve only files with specific file extensions to prevent unintentional information and source code leakage. For example, backup files (e.g. .bak), temporary working files (e.g. .swp), compressed files (.zip, .tar.gz, etc) and other extensions commonly used by editors should be blocked unless required.

</details>
<br/>
------
- [ ] Verify that direct requests to uploaded files will never be executed as HTML/JavaScript content.
<details><summary>More information</summary>

File upload outside document root

Description:

Files that are uploaded by users or other untrusted services should always be placed outside of the document root. This prevents malicious files such as PHP/HTML/JavaScript files from being parsed or executed by the web server.

Should an attacker succeed in bypassing the file upload restrictions and upload a malicious file, it would be impossible for the attacker to have these files parsed, since they are not located inside of the application's document root.

Solution:

Files should be stored outside of the application's document root. Preferably, files should be stored on a separate file server which serves files back and forth to the application server.

Files should always be stored outside of the scope of the attacker to prevent files from being parsed or executed.

When storing files outside of the document root, take into consideration potential path traversal injections in the application's file name, such as "../html/backtoroot/file.php". Whenever this filename is used directly in the path that is used to store files, it could be used to manipulate the storage path.

</details>
<br/>
------
- [ ] Verify that the web or application server is configured with a whitelist of resources or systems to which the server can send requests or load data/files from.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------

- [ ] **Are you building on an application that has API features?**
------
- [ ] Verify that access to administration and management functions is limited to authorized administrators.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------
- [ ] Verify API URLs do not expose sensitive information, such as the API key, session tokens etc.
<details><summary>More information</summary>

Verify that the sensitive information is never disclosed

Description:

Information exposure through query strings in URL is when sensitive data is passed to parameters in the URL. This allows attackers to obtain sensitive data such as usernames, passwords, tokens (authX), database details, and any other potentially sensitive data. Simply using HTTPS does not resolve this vulnerability.

Regardless of using encryption, the following URL will expose information in the locations detailed below: https://vulnerablehost.com/authuser?user=bob&authz_token=1234&expire=1500000000

The parameter values for 'user', 'authz_token', and 'expire' will be exposed in the following locations when using HTTP or HTTPS:

* Referer header
* Web logs
* Shared systems
* Browser history
* Browser cache
* Shoulder surfing

When not using an encrypted channel, all of the above apply, plus: Man-in-the-Middle attacks.

Solution:

Sensitive information should never be included in the URL.

</details>
<br/>
------
- [ ] Verify that enabled RESTful HTTP methods are a valid choice for the user or action, such as preventing normal users using DELETE or PUT on protected API or resources.
<details><summary>More information</summary>

HTTP request methods

Description:

HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. It is recommended to read about the different available methods, their purposes and limitations.

The available methods are:

GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect (this is also true of some other HTTP methods). The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations."

HEAD The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

POST The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.

PUT The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.

DELETE The DELETE method deletes the specified resource.

TRACE The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.

OPTIONS The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

CONNECT The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.

PATCH The PATCH method applies partial modifications to a resource.

Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause nontrivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as http://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.

By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request (note that idempotence refers to the state of the system after the request has completed, so while the action the server takes (e.g. deleting a record) or the response code it returns may be different on subsequent requests, the system state will be the same every time). Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.

In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may resubmit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration. Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled.

Solution:

Verify that the application accepts only a defined set of HTTP request methods, such as GET and POST and unused methods are explicitly blocked/disabled.
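
A minimal sketch, assuming Flask: only the methods an endpoint actually needs are enabled, and every other method automatically receives a 405 Method Not Allowed response (the route and identifiers are placeholders):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/articles/<int:article_id>", methods=["GET", "POST"])
    def article(article_id):
        # PUT, DELETE, TRACE, etc. are rejected with 405 by the framework.
        if request.method == "POST":
            return f"updated article {article_id}"
        return f"article {article_id}"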

</details>
<br/>
------
- [ ] Verify that JSON schema validation is in place and verified before accepting input.
<details><summary>More information</summary>

JSON validation schema

Description:

JSON Schema is a vocabulary that allows you to annotate and validate JSON documents.

When adding schemas to your JSON messages you have better control over what type of user input can be supplied to your application. This dramatically decreases an attacker's attack surface when implemented the right way. Nonetheless, you should always apply your own input validation and rejection as an extra layer of defense. This approach is also desirable since you also want to do monitoring and logging of the user's requests and input.

Solution:

Verify that JSON schema validation takes place to ensure a properly formed JSON request, followed by validation of each input field before any processing of that data takes place.
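
A minimal sketch using the third-party jsonschema package (an assumption); the schema itself is a placeholder for your own message format:

    from jsonschema import validate, ValidationError

    ORDER_SCHEMA = {
        "type": "object",
        "properties": {
            "product_id": {"type": "integer", "minimum": 1},
            "quantity": {"type": "integer", "minimum": 1, "maximum": 100},
        },
        "required": ["product_id", "quantity"],
        "additionalProperties": False,
    }

    def parse_order(payload):
        # Reject malformed requests before any business logic touches the data.
        try:
            validate(instance=payload, schema=ORDER_SCHEMA)
        except ValidationError as exc:
            raise ValueError(f"invalid order payload: {exc.message}")
        return payload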

</details>
<br/>
------
- [ ] Verify that RESTful web services that utilize cookies are protected from Cross-Site Request Forgery via the use of at least one or more of the following: triple or double submit cookie pattern (see [references](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet)); CSRF nonces, or ORIGIN request header checks.
<details><summary>More information</summary>

CSRF on REST

Description:

Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user's web browser to perform an unwanted action on a trusted site for which the user is currently authenticated.

The impact of a successful cross-site request forgery attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or purchasing an item in the user's context. In effect, CSRF attacks are used by an attacker to make a target system perform a function (funds transfer, form submission, etc.) via the target's browser without the knowledge of the target user, at least until the unauthorized function has been committed.

Solution:

REST (REpresentational State Transfer) is a simple stateless architecture that generally runs over HTTPS/TLS. The REST style emphasizes that interactions between clients and services are enhanced by having a limited number of operations.

Since the architecture is stateless, the application should not rely on cookies to track the HTTP session. The preferred method for REST services is to utilize tokens for interactive information interchange between the user and the server.

By sending this information solely by means of headers, the application is no longer susceptible to CSRF attacks, since a CSRF attack relies on the browser's cookie jar to succeed.

</details>
<br/>
------
- [ ] Verify that XSD schema validation takes place to ensure a properly formed XML document, followed by validation of each input field before any processing of that data takes place.
<details><summary>More information</summary>

XML schema (XSD)

Description:

When adding schemas to your XML files you have better control over what type of user input can be supplied to your application. This dramatically decreases an attacker's attack surface when implemented the right way. Nonetheless, you should always apply your own input validation and rejection as an extra layer of defense. This approach is also desirable since you also want to do monitoring and logging of the user's requests and input.

Solution:

Verify that XSD schema validation takes place to ensure a properly formed XML document, followed by validation of each input field before any processing of that data takes place.
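
A minimal sketch using the third-party lxml package (an assumption); the schema file name is a placeholder, and entity resolution is disabled as an additional guard against XXE:

    from lxml import etree

    schema = etree.XMLSchema(etree.parse("order.xsd"))   # hypothetical schema file
    parser = etree.XMLParser(resolve_entities=False)

    def parse_order_xml(raw_bytes):
        # Validate the document against the XSD before processing any fields.
        doc = etree.fromstring(raw_bytes, parser=parser)
        if not schema.validate(doc):
            raise ValueError(f"invalid XML: {schema.error_log.last_error}")
        return doc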

</details>
<br/>
------
- [ ] Verify that the message payload is signed using WS-Security to ensure reliable transport between client and service.
<details><summary>More information</summary>

Signed message payloads WS security

Description:

In order to establish trust between two communicating parties, such as servers and clients, the message payload should be signed by means of a public/private key method. This builds trust and makes it harder for attackers to impersonate different users.

Web Services Security (WS-Security, WSS) is an extension to SOAP to apply security to web services. It is a member of the Web service specifications and was published by OASIS.

The protocol specifies how integrity and confidentiality can be enforced on messages and allows the communication of various security token formats, such as Security Assertion Markup Language (SAML), Kerberos, and X.509. Its main focus is the use of XML Signature and XML Encryption to provide end-to-end security.

Solution:

WS-Security describes three main mechanisms:

* How to sign SOAP messages to assure integrity. Signed messages also provide non-repudiation.
* How to encrypt SOAP messages to assure confidentiality.
* How to attach security tokens to ascertain the sender's identity.

The specification allows a variety of signature formats, encryption algorithms and multiple trust domains, and is open to various security token models, such as: X.509 certificates, Kerberos tickets, user ID/password credentials, SAML assertions, and custom-defined tokens. The token formats and semantics are defined in the associated profile documents.

WS-Security incorporates security features in the header of a SOAP message, working in the application layer.

These mechanisms by themselves do not provide a complete security solution for web services. Instead, this specification is a building block that can be used in conjunction with other Web service extensions and higher-level application-specific protocols to accommodate a wide variety of security models and security technologies. In general, WSS by itself does not provide any guarantee of security. When implementing and using the framework and syntax, it is up to the implementor to ensure that the result is not vulnerable.

Key management, trust bootstrapping, federation and agreement on the technical details (ciphers, formats, algorithms) are outside the scope of WS-Security.

Use cases:

End-to-end security: if a SOAP intermediary is required, and the intermediary is not more or less trusted, messages need to be signed and optionally encrypted. This might be the case of an application-level proxy at a network perimeter that will terminate TCP (Transmission Control Protocol) connections.

Non-repudiation: one method for non-repudiation is to write transactions to an audit trail that is subject to specific security safeguards. Digital signatures, which WS-Security supports, provide a more direct and verifiable non-repudiation proof.

Alternative transport bindings: although almost all SOAP services implement HTTP bindings, in theory other bindings such as JMS or SMTP could be used; in this case end-to-end security would be required.

Reverse proxy/common security token: even if the web service relies upon transport layer security, it might be required for the service to know about the end user, if the service is relayed by an (HTTP) reverse proxy. A WSS header could be used to convey the end user's token, vouched for by the reverse proxy.

</details>
<br/>
------

- [ ] **Does the sprint implement changes that affect and change CI/CD?**
------
- [ ] Verify that all unneeded features, documentation, samples, configurations are removed, such as sample applications, platform documentation, and default or example users.
<details><summary>More information</summary>

insecure application defaults

Description:

When default sample applications, default users, etc. are not removed from your production environment, you are significantly increasing an attacker's potential attack surface.

Solution:

Verify that all unneeded features, documentation, samples, configurations are removed, such as sample applications, platform documentation, and default or example users.

</details>
<br/>
------
- [ ] Verify that if application assets, such as JavaScript libraries, CSS stylesheets or web fonts, are hosted externally on a content delivery network (CDN) or external provider, Subresource Integrity (SRI) is used to validate the integrity of the asset.
<details><summary>More information</summary>

Application assets hosted on secure location

Description:

Whenever application assets such as JavaScript libraries or CSS stylesheets are not hosted by the application itself but on an external CDN which is not under your control, these CDNs can introduce security vulnerabilities: whenever one of these CDNs gets compromised, attackers can include malicious scripts. Also, whenever one of these CDNs goes out of service, it could affect the operation of the application and even cause a denial of service.

Solution:

Verify that all application assets, such as JavaScript libraries, CSS stylesheets and web fonts, are hosted by the application rather than relying on a CDN or external provider, or, when an external provider is used, that Subresource Integrity (SRI) is in place to validate the integrity of the asset.

</details>
<br/>
------

- [ ] **Is the application in need of a review of configurations and settings?**
------
- [ ] Verify that web or application server and application framework debug modes are disabled in production to eliminate debug features, developer consoles, and unintended security disclosures.
<details><summary>More information</summary>

Debug enabling

Description:

Sometimes it is possible, through an "enable debug" parameter, to display technical information/secrets within the application. As a result, the attacker learns more about the operation of the application, increasing his attack surface. Sometimes having a debug flag enabled could even lead to code execution attacks (e.g. older versions of Werkzeug).

Solution:

Disable the possibility of enabling debug output in a production environment.

</details>
<br/>
------
- [ ] Verify that the HTTP headers or any part of the HTTP response do not expose detailed version information of system components.
<details><summary>More information</summary>

Verbose version information

Description:

Revealing system data or debugging information helps an adversary learn about the system and form a plan of attack. An information leak occurs when system data or debugging information leaves the program through an output stream or logging function.

Solution:

Verify that the HTTP headers do not expose detailed version information of system components. For each type of server, there are hardening guides dedicated specifically to this type of data leakage. The same applies to any other leak of version information, such as the version of your programming language or of other services running to make your application function.

</details>
<br/>
------
- [ ] Verify that every HTTP response contains a content type header specifying a safe character set (e.g., UTF-8, ISO 8859-1).
<details><summary>More information</summary>

Content type headers

Description:

Setting the right content type headers is important for hardening your application's security. This reduces exposure to drive-by download attacks and to sites serving user-uploaded content that, by clever naming, could be treated by MS Internet Explorer as executable or dynamic HTML files, and thus could lead to security vulnerabilities.

Solution:

An example of a content type header would be:

Content-Type: text/html; charset=UTF-8
or:
Content-Type: application/json

Verify that requests containing unexpected or missing content types are rejected with an appropriate status code (HTTP response status 406 Not Acceptable or 415 Unsupported Media Type).
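
A minimal sketch of such checks, assuming Flask; the expected content type is a placeholder for whatever your endpoints actually accept:

    from flask import Flask, request, abort

    app = Flask(__name__)

    @app.before_request
    def enforce_content_type():
        # Reject bodies whose declared content type is not what we expect.
        if request.method in ("POST", "PUT", "PATCH"):
            if request.mimetype != "application/json":
                abort(415)  # Unsupported Media Type

    @app.after_request
    def declare_charset(response):
        # Always declare a safe character set on HTML responses.
        if response.mimetype == "text/html":
            response.headers["Content-Type"] = "text/html; charset=utf-8"
        return response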

</details>
<br/>
------
- [ ] Verify that all API responses contain Content-Disposition: attachment; filename="api.json" (or other appropriate filename for the content type).
<details><summary>More information</summary>

API responses security headers

Description:

There are some security headers which should be properly configured in order to protect API callbacks against Reflective File Download and other types of injections.

Also check whether the API response is dynamic, i.e. whether user input is reflected in the response. If so, you must validate and encode the input in order to prevent XSS and Same Origin Method Execution attacks.

Solution:

Sanitize your API's input (in this case it should only allow alphanumeric characters); escaping is not sufficient.

Verify that all API responses contain X-Content-Type-Options: nosniff, to prevent the browser from interpreting files as something other than what is declared by the content type (this helps prevent XSS if the page is interpreted as HTML or JavaScript).

Add 'Content-Disposition: attachment; filename="filename.extension"', with the extension corresponding to the file extension and content type, on APIs whose responses are not meant to be rendered.
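
A minimal sketch, assuming Flask: the headers above are added to every API response in one place rather than per endpoint:

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def api_security_headers(response):
        response.headers["X-Content-Type-Options"] = "nosniff"
        if response.mimetype == "application/json":
            # Fixed, server-controlled filename for responses that are not rendered.
            response.headers["Content-Disposition"] = 'attachment; filename="api.json"'
        return response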

</details>
<br/>
------
- [ ] Verify that a content security policy (CSPv2) is in place that helps mitigate impact for XSS attacks like HTML, DOM, JSON, and JavaScript injection vulnerabilities.
<details><summary>More information</summary>

Content security policy headers

Description:

The main use of the Content Security Policy header is to detect, report, and reject XSS attacks. The core issue in relation to XSS attacks is the browser's inability to distinguish between a script that is intended to be part of your application and a script that has been maliciously injected by a third party. With the use of CSP (Content Security Policy), we can tell the browser which scripts are safe to execute and which scripts have most likely been injected by an attacker.

Solution:

A best practice for implementing CSP in your application would be to externalize all JavaScript within the web pages.

So this:

    <script>
      function doSomething() {
        alert('Something!');
      }
    </script>

    <button onclick='doSomething();'>foobar!</button>

Must become this:

    <script src='doSomething.js'></script>
    <button id='somethingToDo'>Let's foobar!</button>

The header for this code could look something like:

    Content-Security-Policy: default-src 'self'; object-src 'none'; script-src https://mycdn.com

Since it is not entirely realistic to implement all JavaScript in external files, we can apply a sort of cross-site request forgery token to your inline JavaScript. This way the browser can again distinguish code which is part of the application from potentially malicious injected code; in CSP this is called the 'nonce'. Of course, this method is also very applicable to your existing code and designs. Now, to use this nonce you have to supply your inline script tags with the nonce attribute. It is important that the nonce changes for each response, otherwise it would become guessable; it should therefore have high entropy and be hard to predict. Similar to the operation of CSRF tokens, the nonce becomes impossible for the attacker to predict, making it difficult to execute a successful XSS attack.

Inline JavaScript example containing nonce:

    <script nonce="sfsdf03nceI23wlsgle9h3sdd21">
    // your JavaScript code here
    </script>

Matching header example:

    Content-Security-Policy: script-src 'nonce-sfsdf03nceI23wlsgle9h3sdd21'

There is a whole lot more to learn about the CSP header for an in-depth implementation in your application. This knowledge base item just scratches the surface, and it is highly recommended to gain more in-depth knowledge about this powerful header.

Very important: although the CSP header mitigates XSS attacks, your application still remains vulnerable to HTML and other code injections. It is therefore not a substitute for validation, sanitization and encoding of user input.
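
A minimal sketch of per-response nonce generation, assuming Flask and Jinja2 (the template and route are placeholders):

    import secrets
    from flask import Flask, g, render_template_string

    app = Flask(__name__)

    TEMPLATE = """
    <script nonce="{{ csp_nonce }}">
      // inline code allowed because it carries the per-response nonce
      console.log('hello');
    </script>
    """

    @app.route("/")
    def index():
        # Fresh, unpredictable nonce for every response.
        g.csp_nonce = secrets.token_urlsafe(16)
        return render_template_string(TEMPLATE, csp_nonce=g.csp_nonce)

    @app.after_request
    def set_csp(response):
        nonce = getattr(g, "csp_nonce", None)
        if nonce:
            response.headers["Content-Security-Policy"] = (
                f"default-src 'self'; script-src 'nonce-{nonce}'"
            )
        return response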

</details>
<br/>
------
- [ ] Verify that all responses contain X-Content-Type-Options: nosniff.
<details><summary>More information</summary>

API responses security headers

Description:

There are some security headers which should be properly configured in order to protect API callbacks against Reflective File Download and other types of injections.

Also check whether the API response is dynamic, i.e. whether user input is reflected in the response. If so, you must validate and encode the input in order to prevent XSS and Same Origin Method Execution attacks.

Solution:

Sanitize your API's input (in this case it should only allow alphanumeric characters); escaping is not sufficient.

Verify that all API responses contain X-Content-Type-Options: nosniff, to prevent the browser from interpreting files as something other than what is declared by the content type (this helps prevent XSS if the page is interpreted as HTML or JavaScript).

Add 'Content-Disposition: attachment; filename="filename.extension"', with the extension corresponding to the file extension and content type, on APIs whose responses are not meant to be rendered.

</details>
<br/>
------
- [ ] Verify that HTTP Strict Transport Security headers are included on all responses and for all subdomains, such as Strict-Transport-Security: max-age=15724800; includeSubdomains.
<details><summary>More information</summary>

HTTP strict transport security

Description:

HTTP Strict Transport Security (HSTS) is an opt-in security enhancement that is specified by a web application through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. It also prevents HTTPS click-through prompts in browsers.

HSTS addresses the following threats:

  1. A user bookmarks or manually types http://example.com and is subject to a man-in-the-middle attacker: HSTS automatically redirects HTTP requests to HTTPS for the target domain.
  2. A web application that is intended to be purely HTTPS inadvertently contains HTTP links or serves content over HTTP: HSTS automatically redirects HTTP requests to HTTPS for the target domain.
  3. A man-in-the-middle attacker attempts to intercept traffic from a victim user using an invalid certificate, hoping the user will accept the bad certificate: HSTS does not allow the user to override the invalid certificate warning.

Solution:

When users visit the application, it should set the following header. This header should be set in a base class which always sets the header, no matter which page the users initially visit.

Simple example, using a long (1 year) max-age: Strict-Transport-Security: max-age=31536000

If all present and future subdomains will be HTTPS: Strict-Transport-Security: max-age=31536000; includeSubDomains

CAUTION: Site owners can use HSTS to identify users without cookies. This can lead to a significant privacy leak.

Cookies can be manipulated from subdomains, so omitting the "includeSubDomains" option permits a broad range of cookie-related attacks that HSTS would otherwise prevent by requiring a valid certificate for a subdomain. Ensuring the "Secure" flag is set on all cookies will also prevent some, but not all, of the same attacks.

</details>
<br/>
------
- [ ] Verify that a suitable "Referrer-Policy" header is included, such as "no-referrer" or "same-origin".
<details><summary>More information</summary>

Referrer policy header

Description: Requests made from a document, and navigations away from that document, are associated with a Referer header. While the header can be suppressed for links with the noreferrer link type, authors might wish to control the Referer header more directly for a number of reasons:

Privacy A social networking site has a profile page for each of its users, and users add hyperlinks from their profile page to their favorite bands. The social networking site might not wish to leak the user’s profile URL to the band web sites when other users follow those hyperlinks (because the profile URLs might reveal the identity of the owner of the profile).

Some social networking sites, however, might wish to inform the band web sites that the links originated from the social networking site but not reveal which specific user’s profile contained the links.

Security A web application uses HTTPS and a URL-based session identifier. The web application might wish to link to HTTPS resources on other web sites without leaking the user's session identifier in the URL.

Alternatively, a web application may use URLs which themselves grant some capability. Controlling the referrer can help prevent these capability URLs from leaking via referrer headers.

Note that there are other ways for capability URLs to leak, and controlling the referrer is not enough to control all those potential leaks.

Trackback A blog hosted over HTTPS might wish to link to a blog hosted over HTTP and receive trackback links.

Solution:

For more information about the policy and how it should be implemented, please visit the following link:

https://www.w3.org/TR/referrer-policy/#referrer-policies

</details>
<br/>
------
- [ ] Verify that a suitable X-Frame-Options or Content-Security-Policy: frame-ancestors header is in use for sites where content should not be embedded in a third-party site.
<details><summary>More information</summary>

Include anti clickjacking headers

Description:

Clickjacking, also known as a "UI redress attack", is when an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top level page. Thus, the attacker is "hijacking" clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.

Using a similar technique, keystrokes can also be hijacked. With a carefully crafted combination of stylesheets, iframes, and text boxes, a user can be led to believe they are typing in the password to their email or bank account, but are instead typing into an invisible frame controlled by the attacker.

Solution:

To prevent your application from being clickjacked, you can add the X-Frame-Options header to your application's responses. This header can be configured as:

X-Frame-Options: DENY

The page cannot be displayed in a frame, regardless of the site attempting to do so.

X-Frame-Options: SAMEORIGIN

The page can only be displayed in a frame on the same origin as the page itself.

X-Frame-Options: ALLOW-FROM uri

The page can only be displayed in a frame on the specified origin.

You may also want to consider including a "frame-breaking/frame-busting" defense for legacy browsers that do not support the X-Frame-Options header.

Source: https://www.codemagi.com/blog/post/194

</details>
<br/>
------
- [ ] Verify that the application server only accepts the HTTP methods in use by the application or API, including pre-flight OPTIONS.
<details><summary>More information</summary>

HTTP request methods

Description:

HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. It is recommended to read about the different available methods, their purposes and limitations.

The available methods are:

GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect (this is also true of some other HTTP methods). The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations."

HEAD The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

POST The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.

PUT The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.

DELETE The DELETE method deletes the specified resource.

TRACE The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.

OPTIONS The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

CONNECT The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.

PATCH The PATCH method applies partial modifications to a resource.

Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause nontrivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as http://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.

By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request (note that idempotence refers to the state of the system after the request has completed, so while the action the server takes (e.g. deleting a record) or the response code it returns may be different on subsequent requests, the system state will be the same every time). Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.

In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may resubmit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration. Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled.

Solution:

Verify that the application accepts only a defined set of HTTP request methods, such as GET and POST and unused methods are explicitly blocked/disabled.

</details>
<br/>
------
- [ ] Verify that the supplied Origin header is not used for authentication or access control decisions, as the Origin header can easily be changed by an attacker.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
<br/>
------
- [ ] Verify that the cross-domain resource sharing (CORS) Access-Control-Allow-Origin header uses a strict white-list of trusted domains to match against and does not support the "null" origin.
<details><summary>More information</summary>

Cross origin resource sharing

Description:

Cross-Origin Resource Sharing, or CORS, is a mechanism that enables a web browser to perform 'cross-domain' requests using the XMLHttpRequest L2 API in a controlled manner. In the past, the XMLHttpRequest L1 API only allowed requests to be sent within the same origin, as it was restricted by the same-origin policy.

Solution:

Cross-origin requests have an Origin header that identifies the domain initiating the request and is always sent to the server. CORS defines the protocol used between a web browser and a server to determine whether a cross-origin request is allowed. In order to accomplish this goal, there are a few HTTP headers involved in this process, which are supported by all major browsers:

* Origin
* Access-Control-Request-Method
* Access-Control-Request-Headers
* Access-Control-Allow-Origin
* Access-Control-Allow-Credentials
* Access-Control-Allow-Methods
* Access-Control-Allow-Headers

Things you must consider when using CORS

  1. Validate URLs passed to XMLHttpRequest.open. Current browsers allow these URLs to be cross domain; this behavior can lead to code injection by a remote attacker. Pay extra attention to absolute URLs.

  2. Ensure that URLs responding with AccessControlAllowOrigin: * do not include any sensitive content or information that might aid an attacker in further attacks. Use the AccessControlAllowOrigin header only on chosen URLs that need to be accessed crossdomain. Don't use the header for the whole domain.

  3. Allow only selected, trusted domains in the AccessControlAllowOrigin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use * wildcard nor blindly return the Origin header content without any checks)

  4. Keep in mind that CORS does not prevent the requested data from going to an unauthenticated location. It's still important for the server to perform usual CSRF prevention.

  5. While the RFC recommends a preflight request with the OPTIONS verb, current implementations might not perform this request, so it's important that "ordinary" (GET and POST) requests perform any access control necessary.

  6. Discard requests received over plain HTTP with HTTPS origins to prevent mixed content bugs.

  7. Don't rely only on the Origin header for access control checks. The browser always sends this header in CORS requests, but it may be spoofed outside the browser. Application-level protocols should be used to protect sensitive data.

NOTE: Some modern application frameworks dynamically reflect the value of the Origin header into Access-Control-Allow-Origin, which also allows the "Access-Control-Allow-Credentials: true" header to be sent in responses. Whenever JSON Web Tokens are sent in cookies rather than headers, potential attackers could abuse this behaviour to make cross-origin XHR requests that ride on the authenticated user's cookies and read sensitive information from the responses.
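
As an illustration of point 3 above, a minimal origin whitelist check could look like the sketch below. It assumes the javax.servlet API (Servlet 4.0 or later, where init/destroy have default implementations); the filter class name and the set of trusted origins are illustrative only and should be adapted to your own framework and deployment.

```java
import java.io.IOException;
import java.util.Set;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter: reflects the Origin header only when it is on a strict whitelist.
public class CorsWhitelistFilter implements Filter {

    // Illustrative whitelist; replace with the trusted origins of your own deployment.
    private static final Set<String> ALLOWED_ORIGINS =
            Set.of("https://app.example.com", "https://admin.example.com");

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String origin = request.getHeader("Origin");
        if (origin != null && ALLOWED_ORIGINS.contains(origin)) {
            // Only whitelisted origins are echoed back; unknown and "null" origins get no CORS headers.
            response.setHeader("Access-Control-Allow-Origin", origin);
            response.setHeader("Vary", "Origin");
            response.setHeader("Access-Control-Allow-Methods", "GET, POST");
        }
        chain.doFilter(req, res);
    }
}
```

Because the Origin value is only ever echoed back when it matches the whitelist, the literal "null" origin and unexpected domains never receive CORS headers, and the Vary: Origin header keeps caches from serving one origin's response to another.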


</details>
<br/>
------
skf-integration[bot] commented 5 years ago

alt text

Security knowledge framework!


Does the application enforce the use of secure passwords

 Description:

Applications should encourage the use of strong passwords and passphrases. Preferably the
password policy should not put limitations or restrictions on the chosen passwords (for example
the length of a password). Whenever the application supports strong passwords and
the use of password managers, the possibility of an attacker performing a successful brute-force
attack drops significantly.
This also increases the likelihood that the application can be used with the users' password managers.

 Solution:

Verify password entry fields allow, or encourage, the use of passphrases, and do not prevent
password managers, long passphrases or highly complex passwords being entered.
A password policy ideally should ensure that:
* passwords are at least 12 characters in length
* passwords even longer than 64 characters are allowed
* every special character from the Unicode charset is permitted (including emoji, kanji, multiple whitespaces, etc.)
* there is no limit on the number of characters allowed of the same type (lowercase characters, uppercase characters, digits, symbols)

Permit Password Change

Description:

Users should be able to update their password whenever necessary. For example, take into consideration the scenario in which they tend to use the same password for multiple purposes. If this password is leaked, the users have to immediately update their credentials in every application in which they are registered. Therefore, if the application does not provide an accessible password update functionality to a user, there is the risk that his account may be taken over.

Solution:

Applications should provide the user with functionality that permits changing his own password.

Unauthorized credential changes

 Description:

An application which offers user login functionality usually has an administration page
where user data can be modified. When the user wants to change this data he should
specify his current password.

 Solution:

When changing user credentials or email address the user must always enter a valid
password in order to implement the changes. This is also called re-authentication or
step-up / adaptive authentication. Whenever a user re-authenticates himself, the current
session ID value should also be refreshed in order to fend off so-called "session hijackers".

Verify Breached Passwords

 Description:

Multiple databases of leaked credentials have been released through breaches over the years. If users choose passwords that have already been leaked, they are vulnerable to dictionary attacks.

 Solution:

Verify that passwords submitted during account registration, login, and password change are checked against a set of breached passwords. In case the chosen password has already been breached, the application must require the user to re-enter a non-breached password.
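
One way to implement such a check is against a public breached-password corpus such as the Pwned Passwords range API, which only ever receives the first five characters of the SHA-1 hash (k-anonymity). The sketch below is a rough illustration assuming Java 11's built-in HttpClient; the class name is hypothetical and error handling, timeouts and caching are omitted.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BreachedPasswordCheck {

    // Returns true when the SHA-1 suffix of the password appears in the breached-password corpus.
    public static boolean isBreached(String password) throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        byte[] digest = sha1.digest(password.getBytes(StandardCharsets.UTF_8));

        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02X", b));
        }
        String hash = hex.toString();
        String prefix = hash.substring(0, 5);   // only this 5-char prefix leaves the application
        String suffix = hash.substring(5);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.pwnedpasswords.com/range/" + prefix))
                .GET()
                .build();
        String body = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();

        // Each response line looks like "SUFFIX:COUNT".
        return body.lines().anyMatch(line -> line.startsWith(suffix));
    }
}
```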

Provide Password Strength Checker

 Description:

Users may tend to choose easily guessable passwords. Therefore, it is suggested to implement functionality that encourages them to set passwords of higher complexity.

 Solution:

Applications should provide users with a password strength meter during account registration and password change.

Does the application enforce the use of secure passwords

 Description:

Applications should encourage the use of strong passwords and passphrases. Preferably the
password policy should not put limitations or restrictions on the chosen passwords (for example
the length of a password). Whenever the application supports strong passwords and
the use of password managers, the possibility of an attacker performing a successful brute-force
attack drops significantly.
This also increases the likelihood that the application can be used with the users' password managers.

 Solution:

Verify password entry fields allow, or encourage, the use of passphrases, and do not prevent
password managers, long passphrases or highly complex passwords being entered.
A password policy ideally should ensure that:
* passwords are at least 12 characters in length
* passwords even longer than 64 characters are allowed
* every special character from the Unicode charset is permitted (including emoji, kanji, multiple whitespaces, etc.)
* there is no limit on the number of characters allowed of the same type (lowercase characters, uppercase characters, digits, symbols)

no password rotation policy

Description:
Some policies require users to change passwords periodically, often every 90 or 180 days. 
The benefit of password expiration, however, is debatable. Systems that implement such 
policies sometimes prevent users from picking a password too close to a previous selection.

This policy can often backfire. Some users find it hard to devise "good" passwords that are 
also easy to remember, so if people are required to choose many passwords because they have 
to change them often, they end up using much weaker passwords; the policy also encourages 
users to write passwords down. Also, if the policy prevents a user from repeating a recent password, 
this requires keeping a database of everyone's recent passwords (or their hashes) 
instead of having the old ones erased from memory. Finally, users may change their password repeatedly
within a few minutes, and then change back to the one they really want to use, circumventing the 
password change policy altogether.

Solution:
Only force users to update their passwords when the password strength that is enforced by the application
is no longer sufficient to withstand brute force attacks due to increases in computing power.

Does the application enforce the use of secure passwords

 Description:

Applications should encourage the use of strong passwords and passphrases. Preferably the
password policy should not put limitations or restrictions on the chosen passwords (for example
the length of a password). Whenever the application supports strong passwords and
the use of password managers, the possibility of an attacker performing a successful brute-force
attack drops significantly.
This also increases the likelihood that the application can be used with the users' password managers.

 Solution:

Verify password entry fields allow, or encourage, the use of passphrases, and do not prevent
password managers, long passphrases or highly complex passwords being entered.
A password policy ideally should ensure that:
* passwords are at least 12 characters in length
* passwords even longer than 64 characters are allowed
* every special character from the Unicode charset is permitted (including emoji, kanji, multiple whitespaces, etc.)
* there is no limit on the number of characters allowed of the same type (lowercase characters, uppercase characters, digits, symbols)

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

Forgot password functions

 Description:

Whenever the application provides a forgot-password functionality or another
type of recovery method, there are several proven, hardened ways to let
the user recover his password.

 Solution:

The recommended solution is to use TOTP (the Time-based One-Time Password algorithm). This
method is an example of a hash-based message authentication code (HMAC). It combines a
secret key with the current timestamp using a cryptographic hash function to generate
a one-time password. Because network latency and out-of-sync clocks can result in the password
recipient having to try a range of possible times to authenticate against, the timestamp typically
increases in 30-second intervals, which cuts down the potential search space.

The other option is to use a mathematical-algorithm-based one-time password method. This
type of one-time password uses a complex mathematical algorithm, such as a hash chain, to generate
a series of one-time passwords from a shared secret key. Each password cannot be guessed even when
previous passwords are known. The open-source OATH algorithms are standardized; other algorithms are
covered by U.S. patents. Each password is observably unpredictable and independent of previous ones.
Therefore, an adversary would be unable to guess what the next password may be, even with
knowledge of all previous passwords.

An example of a hard-token mathematical-algorithm device would be a YubiKey.
An example of a soft-token TOTP application would be Google Authenticator.

The last resort would be to send a password reset by email. This mail should contain a reset link with
a token which is valid for a limited amount of time. Additional authentication based on soft tokens
(e.g. SMS token, native mobile applications, etc.) can be required as well before the link is
sent over. Also, make sure that whenever such a recovery cycle is started, the application does not
reveal the user’s current password in any way.

user notification on critical state changing operations

Description:
When a user is informed of critical operations, the user can determine
whether the notification was triggered by his own actions, or whether it indicates
potential compromise of his user account.

Solution:

Verify that secure notifications are sent to users after updates
to authentication details, such as credential resets, email or address changes,
logging in from unknown or risky locations. Users must also be notified when
password policies change or any other important updates that require action from the
user to increase the security of his account.

The use of push notifications, rather than SMS or email, is preferred, but in the
absence of push notifications, SMS or email is acceptable as long as no sensitive information is disclosed
in the notification.

Secrets should be secure random generated

 Description:

Secret keys, API tokens, or passwords must be dynamically generated. Whenever these tokens
are not dynamically generated they can become predictable and be used by attackers to compromise
user accounts.

 Solution:

When it comes to API tokens and secret keys, these values have to be dynamically generated and valid only once.
The secret token should be cryptographically secure random, with at least 120 bits of effective entropy, salted with a unique and random 32-bit value and hashed with an approved one-way hashing function.

Passwords, on the other hand, should be created by the user himself, rather than assigning
the user a dynamically generated password. The user should be presented with a one-time link containing a
cryptographically random token, by means of an email or SMS, which is used to activate his
account and provide a password of his own.
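
A minimal sketch of such token generation in Java, assuming java.security.SecureRandom as the CSPRNG; the class and method names are illustrative. The token itself carries 128 bits of entropy (above the 120-bit minimum mentioned above), and only a salted one-way hash of it would be stored server side.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class TokenGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generates a one-time token with 128 bits of entropy.
    public static String newToken() {
        byte[] bytes = new byte[16];            // 16 bytes = 128 bits
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Stores only a salted, one-way hash of the token so a database leak does not expose usable tokens.
    public static String hashForStorage(String token, byte[] salt) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.update(salt);
        digest.update(token.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest.digest());
    }
}
```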

Password leakage

 Description:

After completing a password recovery flow, the user should not be sent a plain-text
password to his email address. The application should also under no circumstances disclose the old or current password
to the user.

 Solution:

The application should under no circumstances disclose the user's current, old or new password in plain text.
This behavior makes the application susceptible to side-channel attacks and makes the passwords
lose their confidentiality, since they could be compromised by someone looking over another user's shoulder to
see the password.

No shared knowledge for secret questions

 Description:

Whenever an application asks a user a secret question, e.g. in a forgot-password
functionality, these questions should not rely on shared knowledge an attacker could gather from
the web, in order to prevent him from compromising the account through this function.

 Solution:

Secret questions should never include shared knowledge, predictable or easily
guessable values.

Otherwise the answers to these secret questions can easily be looked up on the internet by means
of social media accounts and the like.

Password leakage

 Description:

After completing a password recovery flow, the user should not be sent a plain-text
password to his email address. The application should also under no circumstances disclose the old or current password
to the user.

 Solution:

The application should under no circumstances disclose the user's current, old or new password in plain text.
This behavior makes the application susceptible to side-channel attacks and makes the passwords
lose their confidentiality, since they could be compromised by someone looking over another user's shoulder to
see the password.

Forgot password functions

 Description:

Whenever the application provides a forgot-password functionality or another
type of recovery method, there are several proven, hardened ways to let
the user recover his password.

 Solution:

The recommended solution is to use TOTP (the Time-based One-Time Password algorithm). This
method is an example of a hash-based message authentication code (HMAC). It combines a
secret key with the current timestamp using a cryptographic hash function to generate
a one-time password. Because network latency and out-of-sync clocks can result in the password
recipient having to try a range of possible times to authenticate against, the timestamp typically
increases in 30-second intervals, which cuts down the potential search space.

The other option is to use a mathematical-algorithm-based one-time password method. This
type of one-time password uses a complex mathematical algorithm, such as a hash chain, to generate
a series of one-time passwords from a shared secret key. Each password cannot be guessed even when
previous passwords are known. The open-source OATH algorithms are standardized; other algorithms are
covered by U.S. patents. Each password is observably unpredictable and independent of previous ones.
Therefore, an adversary would be unable to guess what the next password may be, even with
knowledge of all previous passwords.

An example of a hard-token mathematical-algorithm device would be a YubiKey.
An example of a soft-token TOTP application would be Google Authenticator.

The last resort would be to send a password reset by email. This mail should contain a reset link with
a token which is valid for a limited amount of time. Additional authentication based on soft tokens
(e.g. SMS token, native mobile applications, etc.) can be required as well before the link is
sent over. Also, make sure that whenever such a recovery cycle is started, the application does not
reveal the user’s current password in any way.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

The login functionality should always generate a new session id

 Description:

Whenever a user is successfully authenticated, the application should generate a
new session cookie.

 Solution:

The login functionality should always generate (and use) a new session ID after a
successful login. This is done to prevent an attacker doing a session fixation attack
on your users.

Some frameworks do not provide the possibility to change the session ID on login, such as
.NET applications. Whenever this problem occurs you could set an extra random cookie on
login, with a strong token, and store this value in a session variable.

Now you can compare the cookie value with the session variable in order to prevent
session fixation: the authentication no longer relies solely on the session ID, and
the random cookie cannot be predicted or fixated by the attacker.
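
As a sketch of this recommendation in a Java servlet environment (class and attribute names are illustrative): invalidate any pre-authentication session and issue a fresh one after the credentials have been verified. On Servlet 3.1 or later, HttpServletRequest.changeSessionId() can achieve a similar effect while keeping existing session attributes.

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class LoginSessionHandling {

    // Call after the credentials have been verified; "userId" is an illustrative attribute name.
    public static void onSuccessfulLogin(HttpServletRequest request, String userId) {
        // Drop any pre-authentication session an attacker may have fixated.
        HttpSession oldSession = request.getSession(false);
        if (oldSession != null) {
            oldSession.invalidate();
        }
        // A fresh session (and therefore a fresh session ID) is issued for the authenticated user.
        HttpSession newSession = request.getSession(true);
        newSession.setAttribute("userId", userId);
    }
}
```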

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

The logout functionality should revoke the complete session

 Description:

When the logout functionality does not revoke the complete session, an attacker could still
impersonate a user if he has access to the session cookie, even after the user has logged out of the application.

 Solution:

The logout functionality should revoke the complete session whenever a user
wants to terminate his session.

Each framework has its own way to achieve this revocation.
It is also recommended to create test cases that verify
session revocation in your application.
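
A minimal logout sketch for a Java servlet application, assuming the container's default JSESSIONID cookie name; adapt the cookie name if your container or framework uses a different one.

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class Logout {

    // Invalidates the server-side session and expires the session cookie on the client.
    public static void logout(HttpServletRequest request, HttpServletResponse response) {
        HttpSession session = request.getSession(false);
        if (session != null) {
            session.invalidate();           // revokes the complete server-side session state
        }
        Cookie cookie = new Cookie("JSESSIONID", "");
        cookie.setPath("/");
        cookie.setMaxAge(0);                // instructs the browser to delete the cookie
        response.addCookie(cookie);
    }
}
```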

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

Password change leads to destroying concurrent sessions

 Description:

Whenever a user changes his password, the user should be granted the option
to kill all other concurrent sessions. This countermeasure helps to evict
potential attackers riding on a hijacked session.

Note: Whenever users are granted the possibility to change their passwords,
      do not forget to make them re-authenticate or to use a form of step-up
      or adaptive authentication.

 Solution:

Verify the user is prompted with the option to terminate all other active sessions 
after a successful change password process.

concurrent session handling

 Description:

You should limit and keep track of all the different active concurrent sessions.
Whenever the application discovers concurrent sessions it should always notify the user
about this and should give him the opportunity to end the other sessions.

With this defense in place it becomes harder for attackers to hijack a user's session, since
users will be notified about concurrent sessions.

 Solution:

The application should keep track of and limit all granted sessions.
It should store the user's IP address, session ID and user ID. After storing these values
it should do regular checks to see if there are:

1. Multiple active sessions linked to same user id
2. Multiple active sessions from different locations
3. Multiple active sessions from different devices
4. Limit and destroy sessions when they exceed an accepted threshold.

The more critical the application becomes the lower the accepted threshold for
concurrent sessions should be.

Session cookies without the Secure attribute

 Description:

The Secure flag is an option that can be set when creating a cookie.
This flag ensures that the cookie will not be sent over an unencrypted
connection by the browser, so the session cookie cannot leak over a non-encrypted link.

 Solution:

When creating a session cookie which is sent over an encrypted connection
you should set the Secure flag. The Secure flag should be set on every Set-Cookie.
This will instruct the browser to never send the cookie over plain HTTP.
The purpose of this flag is to prevent the accidental exposure of a cookie value if a user
follows an HTTP link.

Session cookies without the HttpOnly attribute

 Description:

The HttpOnly flag is an option that can be set when creating a cookie. This flag ensures that the cookie cannot be read or edited by JavaScript, which ensures an attacker cannot steal this cookie if a cross-site scripting vulnerability is present in the application.

 Solution:

The HttpOnly flag should be set to deny malicious scripts access to cookie values such as the session ID. Also, disable unnecessary HTTP request methods such as TRACE: misconfiguration of the allowed HTTP request methods can lead to stealing the session cookie even though HttpOnly protection is in place.

same site attribute

Description:
SameSite prevents the browser from sending this cookie along with cross-site requests.
The main goal is to mitigate the risk of cross-origin information leakage. It also provides some
protection against cross-site request forgery attacks.

Solution:
The Strict value will prevent the cookie from being sent by the browser to the target site in all
cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this
would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion
forum or email, GitHub will not receive the session cookie and the user will not be able to access the project.

A bank website however most likely doesn't want to allow any transactional pages to be linked from external
sites, so the Strict flag would be most appropriate here.

The default Lax value provides a reasonable balance between security and usability for websites that want
to maintain a user's logged-in session after the user arrives from an external link. In the above GitHub scenario,
the session cookie would be allowed when following a regular link from an external website while blocking it in
CSRF-prone request methods (e.g. POST).

As of November 2017 the SameSite attribute was implemented in Chrome, Firefox, and Opera.
Since version 12.1 Safari also supports it. Windows 7 with IE 11 lacks support as of December 2018;
see caniuse.com for up-to-date browser support.
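
As a sketch, the three attributes discussed above can be set in a single Set-Cookie header from a Java servlet. The cookie name is illustrative; the raw header is used here because older javax.servlet Cookie APIs do not expose a SameSite setter.

```java
import javax.servlet.http.HttpServletResponse;

public class SessionCookie {

    // Sets a session cookie with the Secure, HttpOnly and SameSite attributes in one Set-Cookie header.
    public static void setSessionCookie(HttpServletResponse response, String sessionId) {
        response.addHeader("Set-Cookie",
                "SESSIONID=" + sessionId + "; Path=/; Secure; HttpOnly; SameSite=Lax");
    }
}
```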

host prefix

Description:

the '__Host" prefix signals to the browser that both the Path=/ and Secure attributes are required, 
and at the same time, that the Domain attribute may not be present.

Cross subdomain cookie attack

 Description:

A quick overview of how it works:

1. A website www.example.com hands out subdomains to untrusted third parties
2. One such party, Mallory, who now controls evil.example.com, lures Alice to her site
3. A visit to evil.example.com sets a session cookie with the domain .example.com on Alice's browser
4. When Alice visits www.example.com, this cookie will be sent with the request, as the specs for cookies states, and Alice will have the session specified by Mallory's cookie.
5. Mallory can now use Alice's account.

 Solution:

In this scenario changing the sessionID on login does not make any difference since
Alice is already logged in when she visits Mallory's evil web page.

It is good practice to use a completely different domain for all trusted activity.

For example Google uses google.com for trusted activities and *.googleusercontent.com
for untrusted sites.

Also, when setting your cookies, specify which domains they are allowed to
be sent to. Especially on your trusted domain you do not want to leak cookies to unintended
subdomains; it is highly recommended not to use wildcards when setting this option.

High value transactions

 Description:

Whenever there are high-value transactions, a normal static username/password authentication method does
not suffice to ensure a high level of security. Whenever the application processes high-value transactions, ensure that
risk-based re-authentication, two-factor authentication or transaction signing is in place.

 Solution:

1. Risk-based authentication:
Risk-based authentication is a non-static authentication
system which takes into account the profile of the agent requesting access to
the system to determine the risk profile associated with that transaction.

The risk profile is then used to determine the complexity of the challenge.
Higher risk profiles lead to stronger challenges, whereas a static username/password may suffice for
lower-risk profiles. A risk-based implementation allows the application to challenge the user for additional
credentials only when the risk level is appropriate.

2. Two-factor authentication:
Multi-factor authentication (MFA) is a method of computer access control in which a user is
granted access only after successfully presenting several separate pieces of evidence to an
authentication mechanism – typically at least two of the following categories: knowledge (something they know),
possession (something they have), and inherence (something they are).

3. Transaction signing:
Transaction signing (or digital transaction signing) is the process of calculating a keyed hash function
to generate a unique string which can be used to verify both the authenticity and integrity of an online transaction.

A keyed hash is a function of the user's private or secret key and the transaction details,
such as the destination account number and the transfer amount.

To provide a high level of assurance of the authenticity and integrity of
the hash it is essential to calculate the hash on a trusted device, such as a separate smart card reader.
Calculating a hash on an Internet-connected PC or mobile device such as a mobile telephone/PDA would be
counterproductive, as malware and attackers can attack these platforms and potentially subvert the signing process itself.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

All authentication controls must fail securely

 Description:

Handling errors securely is a key aspect of secure coding.
There are two types of errors that deserve special attention. The first is exceptions
that occur in the processing of a security control itself. It's important that these
exceptions do not enable behavior that the countermeasure would normally not allow.
As a developer, you should consider that there are generally three possible outcomes
from a security mechanism:

1. Allow the operation
2. Disallow the operation
3. Exception

In general, you should design your security mechanism so that a failure will follow the same execution path
as disallowing the operation.

 Solution:

Make sure all access control systems are thoroughly tested for failing securely before
using them in your application. It is common to create dedicated unit tests especially
for this purpose.

Insecure direct object references

 Description:

Applications frequently use the actual name or key of an object when generating web pages. 
Applications don’t always verify the user is authorized for the target object. 
This results in an insecure direct object reference flaw. Testers can easily manipulate parameter 
values to detect such flaws and code analysis quickly shows whether authorization is properly verified.

The most classic example:
The application uses unverified data in a SQL call that is accessing account information:

String query = "SELECT * FROM accts WHERE account = ?";
PreparedStatement pstmt = connection.prepareStatement(query , ... );
pstmt.setString( 1, request.getParameter("acct"));
ResultSet results = pstmt.executeQuery();

The attacker simply modifies the ‘acct’ parameter in their browser to send whatever 
account number they want. If not verified, the attacker can access any user’s account, instead of 
only the intended customer’s account.

http://example.com/app/accountInfo?acct=notmyacct

 Solution:

Preventing insecure direct object references requires selecting an approach 
for protecting each user-accessible object (e.g., object number, filename):

Use per-user or per-session indirect object references. This prevents attackers from directly 
targeting unauthorized resources. For example, instead of using the resource’s database key, 
a drop down list of six resources authorized for the current user could use the numbers 1 to 6 to 
indicate which value the user selected. The application has to map the per-user indirect reference 
back to the actual database key on the server.

Check access. Each use of a direct object reference from an untrusted source must include an access control 
check to ensure the user is authorized for the requested object.

Cross site request forgery

 Description:

Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web site,
email, blog, instant message, or program causes a user's web browser to perform an unwanted
action on a trusted site for which the user is currently authenticated.

The impact of a successful cross-site request forgery attack is limited to the
capabilities exposed by the vulnerable application. For example, this attack could result
in a transfer of funds, changing a password, or purchasing an item in the user's context.
In effect, CSRF attacks are used by an attacker to make a target system perform a
function (funds transfer, form submission, etc.) via the target's browser, without
the knowledge of the target user, at least until the unauthorized function has been committed.

 Solution:

To arm an application against automated attacks and tooling you need to use unique tokens
which are included in the forms of an application, API calls or AJAX requests.
Any state-changing operation requires a secure random token (e.g. a CSRF token) to protect
against CSRF attacks. Characteristics of a CSRF token are: a unique, large random
value generated by a cryptographically secure random number generator.

The CSRF token is then added as a hidden field to forms and validated on the server side whenever
a user is sending a request to the server.

Note:
Whenever the application is a REST service that uses tokens such as JWTs, and these tokens are sent
in the request headers rather than stored in cookies, the application should not be susceptible to CSRF attacks, because a successful CSRF attack depends on the browser's cookie jar.
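
A minimal sketch of per-session CSRF token issuance and validation in a Java servlet application; the session attribute and request parameter names are illustrative, and many frameworks ship equivalent functionality out of the box.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class CsrfTokens {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generates a per-session CSRF token and stores it in the session ("csrfToken" is an illustrative key).
    public static String issueToken(HttpSession session) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        session.setAttribute("csrfToken", token);
        return token; // embed as a hidden form field, e.g. <input type="hidden" name="csrf_token" value="...">
    }

    // Validates the token sent with a state-changing request against the one stored in the session.
    public static boolean isValid(HttpServletRequest request) {
        HttpSession session = request.getSession(false);
        if (session == null) {
            return false;
        }
        String expected = (String) session.getAttribute("csrfToken");
        String submitted = request.getParameter("csrf_token");
        return expected != null && submitted != null
                && MessageDigest.isEqual(expected.getBytes(), submitted.getBytes()); // constant-time compare
    }
}
```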

Two factor authentication

 Description:

Two-factor authentication must be implemented to protect your application's users against unauthorized use of the application.

Whenever the user's username and password are leaked or disclosed by an application in whatever fashion, the
user's account should still be protected by two-factor authentication mechanisms to prevent attackers
from logging in with the credentials.

 Solution:

Multi-factor authentication (MFA) is a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism – typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are).

Examples of two-factor/multi-factor authentication can be:

1. Google Authenticator
   Google Authenticator is an application that implements two-step verification services using the Time-based
   One-time Password algorithm (TOTP) and the HMAC-based One-time Password algorithm (HOTP).

2. YubiKey

  The YubiKey is a hardware authentication device manufactured by Yubico that supports one-time passwords, public key
  encryption and authentication, and the Universal 2nd Factor (U2F) protocol developed by the FIDO Alliance (FIDO U2F).
  It allows users to securely log into their accounts by emitting one-time passwords or using a FIDO-based public/private
  key pair generated by the device.

Directory listing

 Description:

Whenever directory listing is enabled, an attacker could gain sensitive information about
the system's hierarchical structure and gain knowledge about directories or files which should
possibly not be publicly accessible. An attacker could use this information to
expand his attack surface. In some cases this could even lead to an attacker gaining knowledge about
credentials or old vulnerable demo functions, which might lead to remote code execution.

 Solution:

Different types of servers require a different approach to disable
directory listing. For instance, Apache uses a .htaccess file in order to disable directory listing.
As for IIS 7, directory listing is disabled by default.

Step up or adaptive authentication

 Description:

Whenever a user browses a section of a web-based application that contains sensitive information, the user should be challenged to authenticate again using a higher assurance credential before being granted access to this information.
This is to prevent attackers from reading sensitive information after they successfully hijacked a user account.

 Solution:

Verify the application has additional authorization (such as step-up or adaptive authentication) so the user is challenged before being granted access to sensitive information. This rule also applies to making critical changes to an account or performing a critical action.
Segregation of duties should be applied for high-value applications to enforce anti-fraud controls as per the risk of the application and past fraud.

Verify that structured data is strongly typed and validated

 Description:

Whenever structured data is strongly typed and validated against a defined schema, the application
can be developed as a defensible, proactive application. The application can now detect everything
that is outside of its intended operation by means of the defined schemas, and it should
reject the input if the schema checks fail.

 Solution:

Verify that structured data is strongly typed and validated against a defined schema,
including allowed characters, length and pattern (e.g. credit card or telephone numbers),
or validate that two related fields are reasonable, such as checking that suburb and zip or
post code match.

not available item

 Description:

This item is currently not available.

 Solution:

This item is currently not available.

XSS injection

 Description:

Every time the application gets user input, whether it is shown on screen or processed
in the application background, these parameters should be escaped or encoded
in order to prevent cross-site scripting injections.
When an attacker gains the possibility to perform an XSS injection,
he is given the opportunity to inject HTML and JavaScript code directly into the
application. This could lead to accounts being compromised by stealing session cookies, or directly
affect the operation of the target application.

Although templating engines (Razor, Twig, Jinja, etc.) and context-aware frameworks (Angular, React, etc.)
do a lot of auto-escaping for you, these frameworks should always be validated for effectiveness.

 Solution:

In order to prevent XSS injections, all user input should be escaped or encoded.
You could start by sanitizing user input as soon as it is inserted into the application,
preferably using a so-called whitelisting method.
This means you should not check for malicious content like certain tags,
but only allow the expected input. Every input which is outside of the intended operation
of the application should immediately be detected and the input rejected.
Do not try to 'fix' the input in any way, because converting characters could introduce a new type of attack.

The second step would be encoding all the parameters or user input before putting this in
your HTML, with encoding libraries specially designed for this purpose.

You should take into consideration that there are several contexts for encoding user input when
escaping XSS injections. These contexts are, amongst others:

* HTML encoding is for whenever your user input is displayed directly in your HTML.
* HTML attribute encoding is the type of encoding/escaping that should be applied
  whenever your user input is displayed in the attributes of your HTML tags.
* HTML URL encoding should be applied whenever you are using user input in an HREF tag.

JavaScript encoding should be used whenever parameters are rendered via JavaScript; your application may catch plain injections at first glance, but it can still remain vulnerable to JavaScript-encoded payloads which are not detected by the normal encoding/escaping methods.
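
The sketch below shows context-specific output encoding, assuming the OWASP Java Encoder library (org.owasp.encoder.Encode) is available on the classpath; the surrounding markup is illustrative only.

```java
import org.owasp.encoder.Encode;

public class OutputEncoding {

    // Encodes the same untrusted value differently depending on the output context.
    public static String htmlBody(String userInput) {
        return "<p>Hello, " + Encode.forHtml(userInput) + "</p>";
    }

    public static String htmlAttribute(String userInput) {
        return "<input type=\"text\" value=\"" + Encode.forHtmlAttribute(userInput) + "\">";
    }

    public static String hrefParameter(String userInput) {
        return "<a href=\"/profile?name=" + Encode.forUriComponent(userInput) + "\">profile</a>";
    }

    public static String javaScriptValue(String userInput) {
        return "<script>var name = '" + Encode.forJavaScript(userInput) + "';</script>";
    }
}
```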

type checking and length checking

 Description:

Type checking, length checking and whitelisting are essential parts of a defense-in-depth strategy to make
your application more resilient against input injection attacks.

Example:

```php
SELECT * FROM pages WHERE id=mysql_real_escape_string($_GET['id'])
```

This PHP example did not effectively mitigate the SQL injection. This was due to the fact that it only escapes string-based SQL injection, while the id value is used in a numeric context without quotes.

Now, if this application had also had additional checks to validate that the value of the $_GET['id'] parameter was indeed, as expected, an integer, and had rejected the request if this condition was false, the attack would effectively have been mitigated.

 Solution:

All user-supplied input that falls outside of the intended operation of the application should be rejected by the application.

Syntax and semantic validity: an application should check that data is both syntactically and semantically valid (in that order) before using it in any way (including displaying it back to the user).

Syntax validity means that the data is in the form that is expected. For example, an application may allow a user to select a four-digit “account ID” to perform some kind of operation. The application should assume the user is entering a SQL injection payload, and should check that the data entered by the user is exactly four digits in length, and consists only of numbers (in addition to utilizing proper query parameterization).

Semantic validity means only accepting input that is within an acceptable range for the given application functionality and context. For example, a start date must be before an end date when choosing date ranges.
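
A minimal sketch of the four-digit account ID example above in Java, combining syntactic validation with a parameterized query; the table and column names are illustrative.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {

    // Rejects anything that is not exactly four digits, then still uses a parameterized query.
    public static boolean accountExists(Connection connection, String accountIdParam) throws SQLException {
        // Syntactic validation: whitelist of exactly four digits.
        if (accountIdParam == null || !accountIdParam.matches("\\d{4}")) {
            throw new IllegalArgumentException("account ID must be exactly four digits");
        }
        int accountId = Integer.parseInt(accountIdParam);   // type check: the value really is an integer

        // Parameterized query as an additional layer of defense.
        try (PreparedStatement stmt = connection.prepareStatement(
                "SELECT 1 FROM accounts WHERE account_id = ?")) {
            stmt.setInt(1, accountId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```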

</details>
------

- [ ] **Does the sprint implement functions that reflect user supplied input on the side of the client?**
------

- [ ] **Does the sprint implement functions that utilize LDAP?**
------

- [ ] **Does the sprint implement functions that utilize OS commands?**
------

- [ ] **Does the sprint implement functions that get/grabs files from the file system?**
------

- [ ] **Does the sprint implement functions that parse or digests XML?**
------

- [ ] **Does the sprint implement functions that deserializes objects (JSON, XML and YAML)**
------
- [ ] Verify that the application correctly restricts XML parsers to only use the most restrictive configuration possible and to ensure that unsafe features such as resolving external entities are disabled to prevent XXE.
<details><summary>More information</summary>

XXE injections

Description:

Processing of an XML eXternal Entity (XXE) containing tainted data may lead to the disclosure of confidential information and other system impacts. The XML 1.0 standard defines the structure of an XML document. The standard defines a concept called an entity, which is a storage unit of some type.

There exists a specific type of entity, an external general parsed entity often shortened to an external entity, that can access local or remote content via a declared system identifier and the XML processor may disclose confidential information normally not accessible by the application. Attacks can include disclosing local files, which may contain sensitive data such as passwords or private user data.

Solution:

Disable the possibility to fetch resources from an external source. This is normally done in the configuration of the used XML parser.
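
A commonly recommended hardening of a Java DOM parser is sketched below; which features are honoured depends on the underlying XML implementation, so treat this as a starting point and test it against your parser.

```java
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

public class SafeXmlParser {

    // Builds a DOM parser with DTDs and external entities disabled to prevent XXE.
    public static DocumentBuilder newDocumentBuilder() throws ParserConfigurationException {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Disallow DOCTYPE declarations entirely (the most effective single setting).
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: also disable external general and parameter entities.
        factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
        factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        factory.setXIncludeAware(false);
        factory.setExpandEntityReferences(false);
        factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, "");
        factory.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, "");
        return factory.newDocumentBuilder();
    }
}
```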

</details>
------
- [ ] Verify that deserialization of untrusted data is avoided or is protected in both custom code and third-party libraries (such as JSON, XML and YAML parsers).
<details><summary>More information</summary>

Insecure object deserialization

Description:

Serialization is the process of turning some object into a data format that can be restored later. People often serialize objects in order to save them to storage, or to send as part of communications.

Deserialization is the reverse of that process, taking data structured from some format, and rebuilding it into an object. Today, the most popular data format for serializing data is JSON. Before that, it was XML.

However, many programming languages offer a native capability for serializing objects. These native formats usually offer more features than JSON or XML, including customizability of the serialization process.

Unfortunately, the features of these native deserialization mechanisms can be repurposed for malicious effect when operating on untrusted data. Attacks against deserializers have been found to allow denial-of-service, access control, and remote code execution (RCE) attacks.

Solution:

Verify that serialized objects use integrity checks or are encrypted to prevent hostile object creation or data tampering.

A great reduction of risk is achieved by avoiding native (de)serialization formats. By switching to a pure data format like JSON or XML, you lessen the chance of custom deserialization logic being repurposed towards malicious ends.

Many applications rely on a data-transfer object pattern that involves creating a separate domain of objects for the explicit purpose of data transfer. Of course, it's still possible that the application will make security mistakes after a pure data object is parsed.

If the application knows before deserialization which messages will need to be processed, it could sign them as part of the serialization process. The application could then choose not to deserialize any message which didn't have an authenticated signature.
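
A sketch of the signature check described above, using HMAC-SHA256 over the raw serialized bytes before they reach any parser; the key management and encoding choices are assumptions you would adapt to your application.

```java
import java.security.MessageDigest;
import java.util.Base64;

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SignedPayloads {

    // Verifies an HMAC-SHA256 tag over the serialized bytes before they are handed to any deserializer.
    public static boolean hasValidSignature(byte[] payload, String expectedTagBase64, byte[] secretKey)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] computed = mac.doFinal(payload);
        byte[] expected = Base64.getDecoder().decode(expectedTagBase64);
        // Constant-time comparison avoids timing side channels.
        return MessageDigest.isEqual(computed, expected);
    }
}
```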

</details>
------
- [ ] Verify that when parsing JSON in browsers or JavaScript-based backends, JSON.parse is used to parse the JSON document. Do not use eval() to parse JSON.
<details><summary>More information</summary>

Parsing JSON with Javascript

Description:

The eval() function evaluates or executes an argument.

If the argument is an expression, eval() evaluates the expression. If the argument is one or more JavaScript statements, eval() executes the statements.

This is exactly the reason why eval() should NEVER be used to parse JSON or other formats of data which could possibly contain malicious code.

Solution:

For the purpose of parsing JSON we would recommend the use of the JSON.parse functionality. Even though this function is more trustworthy, you should always build your own security checks and encoding routines around JSON.parse before mutating the data or passing it on to a view to be displayed in your HTML.

</details>
------

- [ ] **Does the sprint implement functions that process sensitive data?**
------

- [ ] **Does the sprint implement functions that impact logging?**
------
- [ ] Verify that the application does not log other sensitive data as defined under local privacy laws or relevant security policy. ([C9](https://www.owasp.org/index.php/OWASP_Proactive_Controls#tab=Formal_Numbering))
<details><summary>More information</summary>

User credentials in audit logs

Description:

Whenever user credentials are written to an audit log, this becomes a risk whenever an attacker gains access to one of these log files.

Solution:

Instead of storing user credentials, you may want to use user IDs in order to identify the user in the log files.

</details>
------

- [ ] **Does the sprint implement functions that store sensitive information?**
------
- [ ] Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII.
<details><summary>More information</summary>

Client side storage

Description:

Client-side storage is also known as Offline Storage or Web Storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage.

Solution:

Verify that authenticated data is cleared from client storage, such as the browser DOM, after the session is terminated. This also goes for other session and local storage information which could assist an attacker in launching a successful attack.

Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII (personal identifiable information).

</details>
------
- [ ] Verify that authenticated data is cleared from client storage, such as the browser DOM, after the client or session is terminated.
<details><summary>More information</summary>

Client side storage

Description:

Client-side storage is also known as Offline Storage or Web Storage. The underlying storage mechanism may vary from one user agent to the next. In other words, any authentication your application requires can be bypassed by a user with local privileges to the machine on which the data is stored. Therefore, it's recommended not to store any sensitive information in local storage.

Solution:

Verify that authenticated data is cleared from client storage, such as the browser DOM, after the session is terminated. This also goes for other session and local storage information which could assist an attacker in launching a successful attack.

Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII (personal identifiable information).

</details>
------
- [ ] Verify that sensitive data is sent to the server in the HTTP message body or headers, and that query string parameters from any HTTP verb do not contain sensitive data.
<details><summary>More information</summary>

GET POST requests

Description:

Authors of services which use the HTTP protocol SHOULD NOT use GET-based forms for the submission of sensitive data, because this will cause the data to be encoded in the Request-URI. Many existing servers, proxies, and browsers will log the request URL in some place where it might be visible to third parties. Servers can use POST-based form submission instead. GET parameters are also more likely to be vulnerable to XSS. Please refer to the XSS manual in the knowledge base for more information.

Solution:

Whenever transmitting sensitive data, always do this by means of a POST request or a header. Note: avoid user input in your application headers, as this could lead to vulnerabilities. Also make sure you disable all other HTTP request methods which are unnecessary for your application's operation, such as PUT, TRACE, DELETE, OPTIONS, etc., since allowing these request methods could lead to vulnerabilities and injections.

</details>
------
- [ ] Verify that users have a method to remove or export their data on demand.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------

- [ ] **Does the sprint implement functions that store sensitive information?**
------

- [ ] **Does the sprint implement/changes TLS configuration?**
------
- [ ] Verify using online or up to date TLS testing tools that only strong algorithms, ciphers, and protocols are enabled, with the strongest algorithms and ciphers set as preferred.
<details><summary>More information</summary>

TLS settings are in line with current leading practice

Description:

TLS settings must always be in line with current leading practice. Whenever TLS settings and ciphers get outdated, the TLS connection can be degraded/broken and used by attackers to eavesdrop on users' traffic to the application.

Solution:

There should be structural scans that are run regularly against the application's TLS settings and configuration to check whether the TLS settings are in line with current leading practice.

This could be achieved by using the SSL Labs API or the OWASP O-Saft project.

O-Saft is an easy-to-use tool that shows information about the SSL certificate and tests the SSL connection according to a given list of ciphers and various SSL configurations.

It's designed to be used by penetration testers, security auditors or server administrators. The idea is to show the important information or the special checks with a simple call of the tool. However, it provides a wide range of options so that it can be used for comprehensive and special checks by experienced people.

While doing these tests also take into consideration the following configuration on the server side:

Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.

</details>
------
- [ ] Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.
<details><summary>More information</summary>

TLS settings are in line with current leading practice

Description:

TLS settings must always be in line with current leading practice. Whenever TLS settings and ciphers get outdated, the TLS connection can be degraded/broken and used by attackers to eavesdrop on users' traffic to the application.

Solution:

There should be structural scans that are run regularly against the application's TLS settings and configuration to check whether the TLS settings are in line with current leading practice.

This could be achieved by using the SSL Labs API or the OWASP O-Saft project.

O-Saft is an easy-to-use tool that shows information about the SSL certificate and tests the SSL connection according to a given list of ciphers and various SSL configurations.

It's designed to be used by penetration testers, security auditors or server administrators. The idea is to show the important information or the special checks with a simple call of the tool. However, it provides a wide range of options so that it can be used for comprehensive and special checks by experienced people.

While doing these tests also take into consideration the following configuration on the server side:

Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite.

</details>
------

- [ ] **Does the sprint implement changes that affect and change CI/CD?**
------
- [ ] Verify that the application employs integrity protections, such as code signing or sub-resource integrity. The application must not load or execute code from untrusted sources, such as loading includes, modules, plugins, code, or libraries from untrusted sources or the Internet.
<details><summary>More information</summary>

code signing

Description: Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity.

Code signing can provide several valuable features. The most common use of code signing is to provide security when deploying; in some programming languages, it can also be used to help prevent namespace conflicts. Almost every code signing implementation will provide some sort of digital signature mechanism to verify the identity of the author or build system, and a checksum to verify that the object has not been modified. It can also be used to provide versioning information about an object or to store other metadata about an object.

Solution: Sign your code and validate the signatures (checksums) of your code and third-party components to confirm the integrity of the deployed components.

</details>
------
- [ ] Verify that the application has protection from sub-domain takeovers if the application relies upon DNS entries or DNS sub-domains, such as expired domain names, out of date DNS pointers or CNAMEs, expired projects at public source code repos, or transient cloud APIs, serverless functions, or storage buckets (autogen-bucket-id.cloud.example.com) or similar. Protections can include ensuring that DNS names used by applications are regularly checked for expiry or change.
<details><summary>More information</summary>

sub domain take over

Description: Subdomain takeover is the process of registering a non-existing domain name to gain control over another domain. The most common scenario of this process is as follows:

Domain name (e.g., sub.example.com) uses a CNAME record to another domain (e.g., sub.example.com CNAME anotherdomain.com). At some point in time, anotherdomain.com expires and is available for registration by anyone. Since the CNAME record is not deleted from the example.com DNS zone, anyone who registers anotherdomain.com has full control over sub.example.com for as long as the DNS record is present.

The implications of a subdomain takeover can be pretty significant. Using a subdomain takeover, attackers can send phishing emails from the legitimate domain, perform cross-site scripting (XSS), or damage the reputation of the brand which is associated with the domain.

Source: https://0xpatrik.com/subdomaintakeoverbasics/

</details>
------

- [ ] **Does this sprint introduce functions with critical business logic that needs to be reviewed?**
------
- [ ] Verify the application will only process business logic flows with all steps being processed in realistic human time, i.e. transactions are not submitted too quickly.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify the application has appropriate limits for specific business actions or transactions which are correctly enforced on a per user basis.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify the application has sufficient anti-automation controls to detect and protect against data exfiltration, excessive business logic requests, excessive file uploads or denial of service attacks.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify the application has business logic limits or validation to protect against likely business risks or threats, identified using threat modelling or similar methodologies.
<details><summary>More information</summary>

Threat modeling

Description:

Threat modeling is a procedure for optimizing Network/ Application/ Internet Security by identifying objectives and vulnerabilities, and then defining countermeasures to prevent, or mitigate the effects of, threats to the system. A threat is a potential or actual undesirable event that may be malicious (such as DoS attack) or incidental (failure of a Storage Device). Threat modeling is a planned activity for identifying and assessing application threats and vulnerabilities.

Solution:

Threat modeling is best applied continuously throughout a software development project. The process is essentially the same at different levels of abstraction, although the information gets more and more granular throughout the lifecycle. Ideally, a high-level threat model should be defined in the concept or planning phase, and then refined throughout the lifecycle. As more details are added to the system, new attack vectors are created and exposed. The ongoing threat modeling process should examine, diagnose, and address these threats.

Note that it is a natural part of refining a system for new threats to be exposed. For example, when you select a particular technology such as Java, you take on the responsibility to identify the new threats that are created by that choice. Even implementation choices such as using regular expressions for validation introduce potential new threats to deal with.

More in-depth information about threat modeling can be found at: https://www.owasp.org/index.php/Application_Threat_Modeling

</details>
------
- [ ] Verify the application does not suffer from "time of check to time of use" (TOCTOU) issues or other race conditions for sensitive operations.
<details><summary>More information</summary>

race conditions

Description: A race condition is a flaw that produces an unexpected result when the timing of actions impacts other actions. An example may be seen in a multi-threaded application where actions are being performed on the same data. Race conditions, by their very nature, are difficult to test for.

Race conditions may occur when a process is critically or unexpectedly dependent on the sequence or timing of other events. In a web application environment, where multiple requests can be processed at a given time, developers may leave concurrency to be handled by the framework, server, or programming language.

Solution:

One common solution to prevent race conditions is known as locking. This ensures that at any given time, at most one thread can modify the database. Many databases provide functionality to lock a given row when a thread is accessing it.
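
As a hedged illustration of this locking idea, here is a minimal Python sketch: a check-then-act sequence on a hypothetical in-memory account store is guarded by a single lock, so the time of check and the time of use can no longer be interleaved by another thread. In a real application the equivalent would typically be a database row lock or transaction.

    import threading

    accounts = {"alice": 100}          # illustrative in-memory data store
    account_lock = threading.Lock()    # one lock guarding the check-then-act sequence

    def withdraw(user, amount):
        # Both the balance check and the update happen while holding the lock,
        # so two concurrent withdrawals cannot both pass the check.
        with account_lock:
            if accounts[user] >= amount:
                accounts[user] -= amount
                return True
            return False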

</details>
------
- [ ] Verify the application has configurable alerting when automated attacks or unusual activity is detected.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------

- [ ] **Does the sprint implement functions that allow users to upload/download files?**
------
- [ ] Verify that user-submitted filename metadata is not used directly with system or framework file and URL API to protect against path traversal.
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code onto the system to be attacked. The attacker then only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish that first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

Do not accept large files that could fill up storage or cause a denial of service attack.

Check the user input (the filename) for the right allowed extensions, such as .jpg, .png, etc. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply inject ".jpg.php" and bypass your validation.

Check the user input (the filename) for possible path traversal patterns in order to prevent the attacker from uploading outside of the intended directory.

You may also want to check whether the filename already exists before uploading, in order to prevent files from being overwritten.

For serving the files back, there needs to be a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also do a MIME type check. This is a good protection, but not when the MIME type is taken from the POST request: that header cannot be trusted since it can easily be manipulated by an attacker.

The best way to check the MIME type is to inspect the file itself on the server after uploading, and delete it whenever it does not comply with the expected values.
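
As a hedged illustration of the checks above, the sketch below shows a hypothetical Flask upload handler: it validates the last extension against a whitelist, strips path traversal patterns from the filename, refuses to overwrite existing files, and stores the result in a directory outside the document root. The directory path and extension list are assumptions, not prescriptions.

    import os
    from flask import Flask, request, abort
    from werkzeug.utils import secure_filename

    app = Flask(__name__)

    UPLOAD_DIR = "/srv/uploads"                      # assumed location outside the document root
    ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "gif"}

    def allowed(filename):
        # Validate the *last* extension so "evil.jpg.php" is rejected.
        return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

    @app.route("/upload", methods=["POST"])
    def upload():
        uploaded = request.files.get("file")
        if uploaded is None or not allowed(uploaded.filename):
            abort(400)
        name = secure_filename(uploaded.filename)    # strips "../" style traversal patterns
        target = os.path.join(UPLOAD_DIR, name)
        if os.path.exists(target):                   # do not silently overwrite existing files
            abort(409)
        uploaded.save(target)
        return "ok", 201

A MIME type check on the stored file itself (rather than on the request header) could then be added as a separate step after saving.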

</details>
------
- [ ] Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure, creation, updating or removal of local files (LFI).
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code onto the system to be attacked. The attacker then only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish that first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

Do not accept large files that could fill up storage or cause a denial of service attack.

Check the user input (the filename) for the right allowed extensions, such as .jpg, .png, etc. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply inject ".jpg.php" and bypass your validation.

Check the user input (the filename) for possible path traversal patterns in order to prevent the attacker from uploading outside of the intended directory.

You may also want to check whether the filename already exists before uploading, in order to prevent files from being overwritten.

For serving the files back, there needs to be a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also do a MIME type check. This is a good protection, but not when the MIME type is taken from the POST request: that header cannot be trusted since it can easily be manipulated by an attacker.

The best way to check the MIME type is to inspect the file itself on the server after uploading, and delete it whenever it does not comply with the expected values.

</details>
------
- [ ] Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure or execution of remote files (RFI); which may also lead to SSRF.
<details><summary>More information</summary>

File upload injections

Description:

Uploaded files represent a significant risk to applications. The first step in many attacks is to get some code onto the system to be attacked. The attacker then only needs to find a way to get the code executed. Using a file upload helps the attacker accomplish that first step.

The consequences of unrestricted file upload can vary, including complete system takeover, an overloaded file system or database, forwarding attacks to backend systems, and simple defacement.

There are really two classes of problems here. The first is with the file metadata, like the path and file name. These are generally provided by the transport, such as HTTP multipart encoding. This data may trick the application into overwriting a critical file or storing the file in a bad location. You must validate the metadata extremely carefully before using it.

The other class of problem is with the file size or content. An attacker can easily craft a valid image file with PHP code inside.

Solution:

Uploaded files always need to be placed outside the document root of the webserver.

Do not accept large files that could fill up storage or cause a denial of service attack.

Check the user input (the filename) for the right allowed extensions, such as .jpg, .png, etc. Note: when checking these extensions, always make sure your application validates the last possible extension, so an attacker cannot simply inject ".jpg.php" and bypass your validation.

Check the user input (the filename) for possible path traversal patterns in order to prevent the attacker from uploading outside of the intended directory.

You may also want to check whether the filename already exists before uploading, in order to prevent files from being overwritten.

For serving the files back, there needs to be a file handler function that selects the file based on an identifier and serves it back to the user.

Most developers also do a MIME type check. This is a good protection, but not when the MIME type is taken from the POST request: that header cannot be trusted since it can easily be manipulated by an attacker.

The best way to check the MIME type is to inspect the file itself on the server after uploading, and delete it whenever it does not comply with the expected values.

</details>
------
- [ ] Verify that the application protects against reflective file download (RFD) by validating or ignoring user-submitted filenames in a JSON, JSONP, or URL parameter, the response Content-Type header should be set to text/plain, and the Content-Disposition header should have a fixed filename.
<details><summary>More information</summary>

RFD and file download injections

Description:

Reflective file download occurs whenever an attacker can "forge" a download through a misconfiguration of your "Content-Disposition" and "Content-Type" headers. Instead of having to upload a malicious file to the web server, the attacker can force the browser to download a malicious file by abusing these headers and setting the file extension to any type he wants.

Whenever there is also user input being reflected back into that download, it can be used to forge attacks. The attacker can present a malicious file to unsuspecting victims who trust the domain from which the download was presented.

File download injection is a similar type of attack, except this attack is possible whenever user input is reflected into the "filename=" parameter of the "Content-Disposition" header. The attacker again can force the browser to download a file with his own choice of extension and set the content of this file by injecting it directly into the response, like filename=evil.bat%0A%0D%0A%0DinsertEvilStringHere

Whenever the user now opens the downloaded file the attacker can gain full control over the target’s device.

Solution:

First, never use user input directly in your headers, since an attacker could take control over them.

Secondly, you should check whether a filename really exists before presenting it to users. You could also create a whitelist of all files which are allowed to be downloaded and terminate requests whenever they do not match.

Also, you should disable the use of "path parameters". They increase the attack surface and also cause a lot of other vulnerabilities. Lastly, you should sanitize and encode all your user input as much as possible. Reflective file downloads depend on user input being reflected in the response headers; whenever this input has been sanitized and encoded, it should not do any harm to any system it is executed on.
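
The following sketch, a hypothetical Flask route with an illustrative whitelist and storage path, shows the idea of never reflecting user input into the download: the requested key is looked up in a server-side whitelist, and the Content-Disposition header always carries a fixed, server-controlled filename.

    from flask import Flask, Response, abort

    app = Flask(__name__)

    # Server-side whitelist: key -> (fixed filename, content type)
    ALLOWED_DOWNLOADS = {"report": ("report.txt", "text/plain")}

    @app.route("/download/<key>")
    def download(key):
        if key not in ALLOWED_DOWNLOADS:          # unknown keys are rejected outright
            abort(404)
        filename, content_type = ALLOWED_DOWNLOADS[key]
        with open("/srv/downloads/" + filename, "rb") as f:
            body = f.read()
        resp = Response(body, content_type=content_type)
        # Fixed filename, never taken from user input.
        resp.headers["Content-Disposition"] = 'attachment; filename="' + filename + '"'
        return resp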

</details>
------
- [ ] Verify that untrusted file metadata is not used directly with system API or libraries, to protect against OS command injection.
<details><summary>More information</summary>

File IO commands

Description:

I/O commands allow you to own, use, read from, write to and close devices, and to direct I/O operations to a device. Whenever user-supplied input (i.e. file names and/or file data) is used directly in these commands, this could lead to path traversal, local file inclusion, MIME type and OS command injection vulnerabilities.

Solution:

File names and file contents should be sanitized before being used in I/O commands.

</details>
------
- [ ] Verify that files obtained from untrusted sources are stored outside the web root, with limited permissions, preferably with strong validation.
<details><summary>More information</summary>

File upload outside document root

Description:

Files that are uploaded by users or other untrusted services should always be placed outside of the document root. This is to prevent malicious files, such as PHP/HTML/JavaScript files, from being parsed or executed.

Should an attacker succeed in bypassing the file upload restrictions and upload a malicious file, it would still be impossible to have these files parsed, since they are not located inside of the application's document root.

Solution:

Files should be stored outside of the application's document root. Preferably, files should be stored on a separate file server which serves them back and forth to the application server.

Files should always be stored outside of the scope of the attacker to prevent files from being parsed or executed.

When storing files outside of the document root, take into consideration potential path traversal injections in the supplied file name, such as "../html/backtoroot/file.php". Whenever this filename is used directly in the path that is used to store files, it could be used to manipulate the storage path.

</details>
------
- [ ] Verify that files obtained from untrusted sources are scanned by antivirus scanners to prevent upload of known malicious content.
<details><summary>More information</summary>

File upload anti virus check

Description:

Whenever files from untrusted services are uploaded to the server, there should be additional checks in place to verify whether these files contain viruses (malware, trojans, ransomware).

Solution:

After uploading, the file should be placed in quarantine and an antivirus scanner has to inspect the file for malicious content. Antivirus software with a command-line interface is required for such scans. There are also APIs available from services such as "VirusTotal.com".

This site provides a free service in which your file is given as input to numerous antivirus products and you receive back a detailed report with the evidence resulting from the scanning process.
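
A minimal sketch of such a scan, assuming ClamAV's clamscan command-line scanner is installed on the server; the quarantine path is illustrative.

    import subprocess

    def file_is_clean(path):
        # clamscan exits with 0 when no virus is found and 1 when a virus is detected.
        result = subprocess.run(["clamscan", "--no-summary", path])
        return result.returncode == 0

    # Only release the file from quarantine when the scan comes back clean.
    if file_is_clean("/srv/quarantine/upload.bin"):
        pass  # hand the file over for further processing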

</details>
------
- [ ] Verify that the web tier is configured to serve only files with specific file extensions to prevent unintentional information and source code leakage. For example, backup files (e.g. .bak); temporary working files (e.g. .swp); compressed files (.zip, .tar.gz, etc) and other extensions commonly used by editors should be blocked unless required.
<details><summary>More information</summary>

Serve files whitelist.

Description:

Configuring the web server to only serve files with an expected file extension helps prevent information leakage whenever developers forget to remove backup files or zipped versions of the web application from the webserver.

Solution:

Verify that the web tier is configured to serve only files with specific file extensions to prevent unintentional information and source code leakage. For example, backup files (e.g. .bak), temporary working files (e.g. .swp), compressed files (.zip, .tar.gz, etc) and other extensions commonly used by editors should be blocked unless required.

</details>
------
- [ ] Verify that direct requests to uploaded files will never be executed as HTML/JavaScript content.
<details><summary>More information</summary>

File upload outside document root

Description:

Files that are uploaded by users or other untrusted services should always be placed outside of the document root. This is to prevent malicious files, such as PHP/HTML/JavaScript files, from being parsed or executed.

Should an attacker succeed in bypassing the file upload restrictions and upload a malicious file, it would still be impossible to have these files parsed, since they are not located inside of the application's document root.

Solution:

Files should be stored outside of the application's document root. Preferably, files should be stored on a separate file server which serves them back and forth to the application server.

Files should always be stored outside of the scope of the attacker to prevent files from being parsed or executed.

When storing files outside of the document root, take into consideration potential path traversal injections in the supplied file name, such as "../html/backtoroot/file.php". Whenever this filename is used directly in the path that is used to store files, it could be used to manipulate the storage path.

</details>
------
- [ ] Verify that the web or application server is configured with a whitelist of resources or systems to which the server can send requests or load data/files from.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------

- [ ] **Are you building on an application that has API features?**
------
- [ ] Verify that access to administration and management functions is limited to authorized administrators.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify API URLs do not expose sensitive information, such as the API key, session tokens etc.
<details><summary>More information</summary>

Verify that sensitive information is never disclosed

Description:

Information exposure through query strings in URL is when sensitive data is passed to parameters in the URL. This allows attackers to obtain sensitive data such as usernames, passwords, tokens (authX), database details, and any other potentially sensitive data. Simply using HTTPS does not resolve this vulnerability.

Regardless of using encryption, the following URL will expose information in the locations detailed below: https://vulnerablehost.com/authuser?user=bob&authz_token=1234&expire=1500000000

The parameter values for 'user', 'authz_token', and 'expire' will be exposed in the following locations when using HTTP or HTTPS:

* Referer header
* Web logs
* Shared systems
* Browser history
* Browser cache
* Shoulder surfing

When not using an encrypted channel, all of the above apply, as well as man-in-the-middle attacks.

Solution:

Sensitive information should never be included in the URL.
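
As a small illustration (hypothetical URL and token, using the third-party requests library), the token is sent in a request header instead of the query string, so it does not end up in web logs, browser history or the Referer header:

    import requests

    # The token travels in a header, not in the URL.
    response = requests.get(
        "https://example.com/authuser",
        headers={"Authorization": "Bearer 1234"},
    )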

</details>
------
- [ ] Verify that enabled RESTful HTTP methods are a valid choice for the user or action, such as preventing normal users using DELETE or PUT on protected API or resources.
<details><summary>More information</summary>

HTTP request methods

Description:

HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. It is recommended to read about the different available methods, their purposes and limitations.

The available methods are:

GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.)[1] The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations.

HEAD The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

POST The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.

PUT The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.

DELETE The DELETE method deletes the specified resource.

TRACE The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.

OPTIONS The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

CONNECT The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.

PATCH The PATCH method applies partial modifications to a resource.

Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause non-trivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as http://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.

By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request (note that idempotence refers to the state of the system after the request has completed, so while the action the server takes (e.g. deleting a record) or the response code it returns may be different on subsequent requests, the system state will be the same every time). Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.

In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may resubmit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration. Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled.

Solution:

Verify that the application accepts only a defined set of HTTP request methods, such as GET and POST and unused methods are explicitly blocked/disabled.
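
A minimal sketch of this, assuming a Flask application; the allowed set is an assumption and should match whatever the application actually uses.

    from flask import Flask, request, abort

    app = Flask(__name__)

    ALLOWED_METHODS = {"GET", "POST", "HEAD", "OPTIONS"}

    @app.before_request
    def block_unexpected_methods():
        # Explicitly reject anything outside the defined set, e.g. TRACE or PUT.
        if request.method not in ALLOWED_METHODS:
            abort(405)

    @app.route("/articles", methods=["GET", "POST"])
    def articles():
        # Per-route restriction: only GET and POST are routed for this resource.
        return "ok"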

</details>
------
- [ ] Verify that JSON schema validation is in place and verified before accepting input.
<details><summary>More information</summary>

JSON validation schema

Description:

JSON Schema is a vocabulary that allows you to annotate and validate JSON documents.

When adding schemas to your JSON payloads you have better control over what type of user input can be supplied to your application. This dramatically decreases an attacker's attack surface when implemented the right way. Nonetheless, you should always apply your own input validation and rejection as an extra layer of defense. This approach is also desirable since you also want to apply countermeasures and logging on the user's requests and input.

Solution:

Verify that JSON schema validation takes place to ensure a properly formed JSON request, followed by validation of each input field before any processing of that data takes place.
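
A minimal sketch using the third-party jsonschema package; the schema itself is a hypothetical example for a "create user" request.

    from jsonschema import validate, ValidationError

    USER_SCHEMA = {
        "type": "object",
        "properties": {
            "username": {"type": "string", "maxLength": 64},
            "age": {"type": "integer", "minimum": 0},
        },
        "required": ["username"],
        "additionalProperties": False,   # reject unexpected fields outright
    }

    def payload_is_valid(payload):
        try:
            validate(instance=payload, schema=USER_SCHEMA)
            return True
        except ValidationError:
            return False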

</details>
------
- [ ] Verify that RESTful web services that utilize cookies are protected from Cross-Site Request Forgery via the use of at least one or more of the following: triple or double submit cookie pattern (see [references](https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet)); CSRF nonces, or ORIGIN request header checks.
<details><summary>More information</summary>

CSRF on REST

Description:

Cross-Site Request Forgery (CSRF) is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user's web browser to perform an unwanted action on a trusted site for which the user is currently authenticated.

The impact of a successful cross-site request forgery attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or purchasing an item in the user's context. In effect, CSRF attacks are used by an attacker to make a target system perform a function (funds transfer, form submission, etc.) via the target's browser, without the knowledge of the target user, at least until the unauthorized function has been committed.

Solution:

REST (REpresentational State Transfer) is a simple stateless architecture that generally runs over HTTPS/TLS. The REST style emphasizes that interactions between clients and services are enhanced by having a limited number of operations.

Because the architecture is stateless, applications often fall back on sessions or cookies to associate information with individual visitors. The preferred method for REST services, however, is to utilize tokens for interactive information interchange between the user and the server.

By sending this information solely by means of headers, the application is no longer susceptible to CSRF attacks, since a CSRF attack relies on the browser's cookie jar for successful attacks.

</details>
------
- [ ] Verify that XSD schema validation takes place to ensure a properly formed XML document, followed by validation of each input field before any processing of that data takes place.
<details><summary>More information</summary>

XML schema (XSD)

Description:

When adding schemas to your XML files you have better control over what type of user input can be supplied to your application. This dramatically decreases an attacker's attack surface when implemented the right way. Nonetheless, you should always apply your own input validation and rejection as an extra layer of defense. This approach is also desirable since you also want to apply countermeasures and logging on the user's requests and input.

Solution:

Verify that XSD schema validation takes place to ensure a properly formed XML document, followed by validation of each input field before any processing of that data takes place.
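
A minimal sketch using the third-party lxml library; the schema path is an assumption, and external entity resolution is switched off as an extra precaution.

    from lxml import etree

    def xml_is_valid(xml_bytes, xsd_path):
        schema = etree.XMLSchema(etree.parse(xsd_path))
        # Disable entity resolution and network access to reduce XXE exposure.
        parser = etree.XMLParser(resolve_entities=False, no_network=True)
        try:
            doc = etree.fromstring(xml_bytes, parser)
        except etree.XMLSyntaxError:
            return False
        return schema.validate(doc)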

</details>
------
- [ ] Verify that the message payload is signed using WS-Security to ensure reliable transport between client and service.
<details><summary>More information</summary>

Signed message payloads WS security

Description:

In order to establish trust between two communicating parties, such as servers and clients, the message payload should be signed by means of a public/private key method. This builds trust and makes it harder for attackers to impersonate different users.

Web Services Security (WS-Security, WSS) is an extension to SOAP to apply security to web services. It is a member of the Web service specifications and was published by OASIS.

The protocol specifies how integrity and confidentiality can be enforced on messages and allows the communication of various security token formats, such as Security Assertion Markup Language (SAML), Kerberos, and X.509. Its main focus is the use of XML Signature and XML Encryption to provide end-to-end security.

Solution:

WS-Security describes three main mechanisms:

* How to sign SOAP messages to assure integrity. Signed messages also provide non-repudiation.
* How to encrypt SOAP messages to assure confidentiality.
* How to attach security tokens to ascertain the sender's identity.

The specification allows a variety of signature formats, encryption algorithms and multiple trust domains, and is open to various security token models, such as:

X.509 certificates, Kerberos tickets, User ID/Password credentials, SAML Assertions, and custom-defined tokens. The token formats and semantics are defined in the associated profile documents.

WS-Security incorporates security features in the header of a SOAP message, working in the application layer.

These mechanisms by themselves do not provide a complete security solution for web services. Instead, this specification is a building block that can be used in conjunction with other web service extensions and higher-level application-specific protocols to accommodate a wide variety of security models and security technologies. In general, WSS by itself does not provide any guarantee of security. When implementing and using the framework and syntax, it is up to the implementor to ensure that the result is not vulnerable.

Key management, trust bootstrapping, federation and agreement on the technical details (ciphers, formats, algorithms) are outside the scope of WS-Security.

Use cases:

End-to-end security: If a SOAP intermediary is required, and the intermediary is not more or less trusted, messages need to be signed and optionally encrypted. This might be the case of an application-level proxy at a network perimeter that will terminate TCP (Transmission Control Protocol) connections.

Non-repudiation: One method for non-repudiation is to write transactions to an audit trail that is subject to specific security safeguards. Digital signatures, which WS-Security supports, provide a more direct and verifiable non-repudiation proof.

Alternative transport bindings: Although almost all SOAP services implement HTTP bindings, in theory other bindings such as JMS or SMTP could be used; in this case end-to-end security would be required.

Reverse proxy/common security token: Even if the web service relies upon transport layer security, it might be required for the service to know about the end user, if the service is relayed by an (HTTP) reverse proxy. A WSS header could be used to convey the end user's token, vouched for by the reverse proxy.

</details>
------

- [ ] **Does the sprint implement changes that affect and change CI/CD?**
------
- [ ] Verify that all unneeded features, documentation, samples, configurations are removed, such as sample applications, platform documentation, and default or example users.
<details><summary>More information</summary>

insecure application defaults

Description:

When default sample applications, default users, etc. are not removed from your production environment, you are increasing an attacker's potential attack surface significantly.

Solution:

Verify that all unneeded features, documentation, samples, configurations are removed, such as sample applications, platform documentation, and default or example users.

</details>
------
- [ ] Verify that if application assets, such as JavaScript libraries, CSS stylesheets or web fonts, are hosted externally on a content delivery network (CDN) or external provider, Subresource Integrity (SRI) is used to validate the integrity of the asset.
<details><summary>More information</summary>

Application assets hosted on secure location

Description:

Whenever application assets such as JavaScript libraries or CSS stylesheets are not hosted by the application itself but on an external CDN which is not under your control, these CDNs can introduce security vulnerabilities. Whenever one of these CDNs gets compromised, attackers can include malicious scripts. Also, whenever one of these CDNs goes out of service, it could affect the operation of the application and even cause a denial of service.

Solution:

Verify that application assets, such as JavaScript libraries, CSS stylesheets and web fonts, are preferably hosted by the application itself rather than relying on a CDN or external provider. When an external CDN must be used, apply Subresource Integrity (SRI) so the browser validates the fetched asset against a known hash.
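
The SRI integrity value is the base64-encoded digest of the asset, prefixed with the hash algorithm. A small sketch of computing it for a local copy of the asset (the path is illustrative); the result goes into the integrity attribute of the script or link tag that references the CDN copy.

    import base64
    import hashlib

    def sri_hash(path):
        # Value for an integrity="sha384-..." attribute.
        with open(path, "rb") as f:
            digest = hashlib.sha384(f.read()).digest()
        return "sha384-" + base64.b64encode(digest).decode()

    print(sri_hash("static/vendor/library.min.js"))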

</details>
------

- [ ] **Is the application in need of a review of configurations and settings?**
------
- [ ] Verify that web or application server and application framework debug modes are disabled in production to eliminate debug features, developer consoles, and unintended security disclosures.
<details><summary>More information</summary>

Debug enabling

Description:

Sometimes it is possible, through an "enable debug" parameter, to display technical information/secrets within the application. As a result, the attacker learns more about the operation of the application, increasing his attack surface. Sometimes having a debug flag enabled could even lead to code execution attacks (e.g. older versions of Werkzeug).

Solution:

Disable the possibility to enable debug information on a live environment.

</details>
------
- [ ] Verify that the HTTP headers or any part of the HTTP response do not expose detailed version information of system components.
<details><summary>More information</summary>

Verbose version information

Description:

Revealing system data or debugging information helps an adversary learn about the system and form a plan of attack. An information leak occurs when system data or debugging information leaves the program through an output stream or logging function.

Solution:

Verify that the HTTP headers do not expose detailed version information of system components. For each type of server, there are hardening guides dedicated to this kind of information leakage. The same applies to any other leak of version information, such as the version of your programming language or of other services required to make your application function.

</details>
------
- [ ] Verify that every HTTP response contains a content type header specifying a safe character set (e.g., UTF-8, ISO 8859-1).
<details><summary>More information</summary>

Content type headers

Description:

Setting the right content headers is important for hardening your application's security. This reduces exposure to drive-by download attacks and to sites serving user-uploaded content that, by clever naming, could be treated by MS Internet Explorer as executable or dynamic HTML files, and thus can lead to security vulnerabilities.

Solution:

An example of a content type header would be:

Content-Type: text/html; charset=UTF-8
or:
Content-Type: application/json

Verify that requests containing unexpected or missing content types are rejected with appropriate headers (HTTP response status 406 Not Acceptable or 415 Unsupported Media Type).

</details>
------
- [ ] Verify that all API responses contain Content-Disposition: attachment; filename="api.json" (or other appropriate filename for the content type).
<details><summary>More information</summary>

API responses security headers

Description:

There are some security headers which should be properly configured in order to protect API callbacks against reflective file download and other types of injections.

Also check whether the API response is dynamic, i.e. whether user input is reflected in the response. If so, you must validate and encode the input, in order to prevent XSS and same-origin method execution attacks.

Solution:

Sanitize your API's input (in this case it should only allow alphanumeric characters); escaping is not sufficient.

Verify that all API responses contain X-Content-Type-Options: nosniff, to prevent the browser from interpreting files as something other than what is declared by the content type (this helps prevent XSS if the page is interpreted as HTML or JS).

Add 'Content-Disposition: attachment; filename="filename.extension"', with the extension corresponding to the file extension and content type, on APIs whose responses are not going to be rendered.

</details>
------
- [ ] Verify that a content security policy (CSPv2) is in place that helps mitigate impact for XSS attacks like HTML, DOM, JSON, and JavaScript injection vulnerabilities.
<details><summary>More information</summary>

Content security policy headers

Description:

The main use of the content security policy header is to detect, report, and reject XSS attacks. The core issue in relation to XSS attacks is the browser's inability to distinguish between a script that is intended to be part of your application and a script that has been maliciously injected by a third party. With the use of CSP (Content Security Policy), we can tell the browser which scripts are safe to execute and which scripts have most likely been injected by an attacker.

Solution:

A best practice for implementing CSP in your application would be to externalize all JavaScript within the web pages.

So this:

    <script>
      function doSomething() {
        alert('Something!');
      }
    </script>

    <button onclick='doSomething();'>foobar!</button>

Must become this:

    <script src='doSomething.js'></script>
    <button id='somethingToDo'>Let's foobar!</button>

The header for this code could look something like:

    Content-Security-Policy: default-src 'self'; object-src 'none'; script-src https://mycdn.com

Since it is not entirely realistic to move all JavaScript to external files, we can apply something similar to a cross-site request forgery token to your inline JavaScript. This way the browser can again distinguish code which is part of the application from probably malicious injected code; in CSP this is called the 'nonce'. Of course, this method is also very applicable to your existing code and designs. To use this nonce you have to supply your inline script tags with the nonce attribute. It is important that the nonce changes for each response, otherwise it would become guessable, so it should contain high entropy and be hard to predict. Similar to the operation of CSRF tokens, the nonce becomes impossible for the attacker to predict, making it difficult to execute a successful XSS attack.

Inline JavaScript example containing a nonce:

    <script nonce="sfsdf03nceI23wlsgle9h3sdd21">
    <!-- Your JavaScript code -->
    </script>

Matching header example:

    Content-Security-Policy: script-src 'nonce-sfsdf03nceI23wlsgle9h3sdd21'
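
A minimal sketch of generating such a nonce per response, assuming a Flask application; the template would then echo g.csp_nonce into the script tag's nonce attribute.

    import secrets
    from flask import Flask, g

    app = Flask(__name__)

    @app.before_request
    def generate_csp_nonce():
        # A fresh, high-entropy nonce for every response.
        g.csp_nonce = secrets.token_urlsafe(16)

    @app.after_request
    def set_csp_header(response):
        response.headers["Content-Security-Policy"] = (
            "script-src 'nonce-" + g.csp_nonce + "'"
        )
        return response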

There is a whole lot more to learn about the CSP header for an in-depth implementation in your application. This knowledge base item just scratches the surface and it is highly recommended to gain more in-depth knowledge about this powerful header.

Very important: although the CSP header blocks XSS attacks, your application still remains vulnerable to HTML and other code injections. It is therefore not a substitute for validation, sanitization and encoding of user input.

</details>
------
- [ ] Verify that all responses contain X-Content-Type-Options: nosniff.
<details><summary>More information</summary>

API responses security headers

Description:

There are some security headers which should be properly configured in order to protect API callbacks against reflective file download and other types of injections.

Also check whether the API response is dynamic, i.e. whether user input is reflected in the response. If so, you must validate and encode the input, in order to prevent XSS and same-origin method execution attacks.

Solution:

Sanitize your API's input (in this case it should only allow alphanumeric characters); escaping is not sufficient.

Verify that all API responses contain X-Content-Type-Options: nosniff, to prevent the browser from interpreting files as something other than what is declared by the content type (this helps prevent XSS if the page is interpreted as HTML or JS).

Add 'Content-Disposition: attachment; filename="filename.extension"', with the extension corresponding to the file extension and content type, on APIs whose responses are not going to be rendered.

</details>
------
- [ ] Verify that HTTP Strict Transport Security headers are included on all responses and for all subdomains, such as Strict-Transport-Security: max-age=15724800; includeSubdomains.
<details><summary>More information</summary>

HTTP strict transport security

Description:

HTTP Strict Transport Security (HSTS) is an opt-in security enhancement that is specified by a web application through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. It also prevents HTTPS click-through prompts in browsers.

HSTS addresses the following threats:

  1. A user bookmarks or manually types http://example.com and is subject to a man-in-the-middle attacker - HSTS automatically redirects HTTP requests to HTTPS for the target domain.
  2. A web application that is intended to be purely HTTPS inadvertently contains HTTP links or serves content over HTTP - HSTS automatically redirects HTTP requests to HTTPS for the target domain.
  3. A man-in-the-middle attacker attempts to intercept traffic from a victim user using an invalid certificate and hopes the user will accept the bad certificate - HSTS does not allow the user to override the invalid certificate message.

Solution:

When users visit the application it should set the following header. The header should be set in a base class which always sets it, no matter what page the users initially visit.

Simple example, using a long (1 year) max-age: Strict-Transport-Security: max-age=31536000

If all present and future subdomains will be HTTPS: Strict-Transport-Security: max-age=31536000; includeSubDomains

CAUTION: Site owners can use HSTS to identify users without cookies. This can lead to a significant privacy leak.

Cookies can be manipulated from subdomains, so omitting the "includeSubDomains" option permits a broad range of cookie-related attacks that HSTS would otherwise prevent by requiring a valid certificate for a subdomain. Ensuring the "Secure" flag is set on all cookies will also prevent some, but not all, of the same attacks.
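
A minimal sketch of setting the header on every response, assuming a Flask application; the one-year max-age mirrors the example above.

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def add_hsts_header(response):
        # Set on every response so the policy applies no matter which page is visited first.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        return response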

</details>
------
- [ ] Verify that a suitable "Referrer-Policy" header is included, such as "no-referrer" or "same-origin".
<details><summary>More information</summary>

Referrer policy header

Description: Requests made from a document, and navigations away from that document, are associated with a Referer header. While the header can be suppressed for links with the noreferrer link type, authors might wish to control the Referer header more directly for a number of reasons:

Privacy: A social networking site has a profile page for each of its users, and users add hyperlinks from their profile page to their favorite bands. The social networking site might not wish to leak the user's profile URL to the band web sites when other users follow those hyperlinks (because the profile URLs might reveal the identity of the owner of the profile).

Some social networking sites, however, might wish to inform the band web sites that the links originated from the social networking site but not reveal which specific user’s profile contained the links.

Security: A web application uses HTTPS and a URL-based session identifier. The web application might wish to link to HTTPS resources on other web sites without leaking the user's session identifier in the URL.

Alternatively, a web application may use URLs which themselves grant some capability. Controlling the referrer can help prevent these capability URLs from leaking via referrer headers.

Note that there are other ways for capability URLs to leak, and controlling the referrer is not enough to control all those potential leaks.

Trackback: A blog hosted over HTTPS might wish to link to a blog hosted over HTTP and receive trackback links.

Solution:

For more information about the policy and how it should be implemented, please visit the following link:

https://www.w3.org/TR/referrerpolicy/referrerpolicies

</details>
------
- [ ] Verify that a suitable X-Frame-Options or Content-Security-Policy: frame-ancestors header is in use for sites where content should not be embedded in a third-party site.
<details><summary>More information</summary>

Include anti clickjacking headers

Description:

Clickjacking, also known as a "UI redress attack", is when an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top level page. Thus, the attacker is "hijacking" clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.

Using a similar technique, keystrokes can also be hijacked. With a carefully crafted combination of stylesheets, iframes, and text boxes, a user can be led to believe they are typing in the password to their email or bank account, but are instead typing into an invisible frame controlled by the attacker.

Solution:

To prevent your application from being clickjacked, you can add the X-Frame-Options header to your application. This header can be configured as:

X-Frame-Options: DENY

The page cannot be displayed in a frame, regardless of the site attempting to do so.

X-Frame-Options: SAMEORIGIN

The page can only be displayed in a frame on the same origin as the page itself.

X-Frame-Options: ALLOW-FROM uri

The page can only be displayed in a frame on the specified origin.

You may also want to consider including a "frame-breaking/frame-busting" defense for legacy browsers that do not support the X-Frame-Options header.

Source: https://www.codemagi.com/blog/post/194

</details>
------
- [ ] Verify that the application server only accepts the HTTP methods in use by the application or API, including pre-flight OPTIONS.
<details><summary>More information</summary>

HTTP request methods

Description:

HTTP offers a number of methods that can be used to perform actions on the web server. Many of these methods are designed to aid developers in deploying and testing HTTP applications. These HTTP methods can be used for nefarious purposes if the web server is misconfigured. It is recommended to read about the different available methods, their purposes and limitations.

The available methods are:

GET The GET method requests a representation of the specified resource. Requests using GET should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.)[1] The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations.

HEAD The HEAD method asks for a response identical to that of a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.

POST The POST method requests that the server accept the entity enclosed in the request as a new subordinate of the web resource identified by the URI. The data POSTed might be, for example, an annotation for existing resources; a message for a bulletin board, newsgroup, mailing list, or comment thread; a block of data that is the result of submitting a web form to a data-handling process; or an item to add to a database.

PUT The PUT method requests that the enclosed entity be stored under the supplied URI. If the URI refers to an already existing resource, it is modified; if the URI does not point to an existing resource, then the server can create the resource with that URI.

DELETE The DELETE method deletes the specified resource.

TRACE The TRACE method echoes the received request so that a client can see what (if any) changes or additions have been made by intermediate servers.

OPTIONS The OPTIONS method returns the HTTP methods that the server supports for the specified URL. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource.

CONNECT The CONNECT method converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.

PATCH The PATCH method applies partial modifications to a resource.

Some of the methods (for example, GET, HEAD, OPTIONS and TRACE) are, by convention, defined as safe, which means they are intended only for information retrieval and should not change the state of the server. In other words, they should not have side effects, beyond relatively harmless effects such as logging, web caching, the serving of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to the context of the application's state should therefore be considered safe. However, this is not mandated by the standard, and it is explicitly acknowledged that it cannot be guaranteed.

Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Therefore, careless or deliberate programming can cause non-trivial changes on the server. This is discouraged, because it can cause problems for web caching, search engines and other automated agents, which can make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as http://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article.

By contrast, methods such as POST, PUT, DELETE and PATCH are intended for actions that may cause side effects either on the server, or external side effects such as financial transactions or transmission of email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences.

Methods PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request (note that idempotence refers to the state of the system after the request has completed, so while the action the server takes (e.g. deleting a record) or the response code it returns may be different on subsequent requests, the system state will be the same every time). Methods GET, HEAD, OPTIONS and TRACE, being prescribed as safe, should also be idempotent, as HTTP is a stateless protocol.

In contrast, the POST method is not necessarily idempotent, and therefore sending an identical POST request multiple times may further affect state or cause further side effects (such as financial transactions). In some cases this may be desirable, but in other cases this could be due to an accident, such as when a user does not realize that their action will result in sending another request, or they did not receive adequate feedback that their first request was successful. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may resubmit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once.

Note that whether a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. Ignoring this recommendation, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not.

The TRACE method can be used as part of a class of attacks known as cross-site tracing; for that reason, common security advice is for it to be disabled in the server configuration. Microsoft IIS supports a proprietary "TRACK" method, which behaves similarly, and which is likewise recommended to be disabled.

Solution:

Verify that the application accepts only a defined set of HTTP request methods, such as GET and POST and unused methods are explicitly blocked/disabled.

</details>
------
- [ ] Verify that the supplied Origin header is not used for authentication or access control decisions, as the Origin header can easily be changed by an attacker.
<details><summary>More information</summary>

not available item

Description:

This item is currently not available.

Solution:

This item is currently not available.

</details>
------
- [ ] Verify that the cross-domain resource sharing (CORS) Access-Control-Allow-Origin header uses a strict white-list of trusted domains to match against and does not support the "null" origin.
<details><summary>More information</summary>

Cross origin resource sharing

Description:

Cross-Origin Resource Sharing (CORS) is a mechanism that enables a web browser to perform 'cross-domain' requests using the XMLHttpRequest L2 API in a controlled manner. In the past, the XMLHttpRequest L1 API only allowed requests to be sent within the same origin, as it was restricted by the same-origin policy.

Solution:

Cross-origin requests have an Origin header that identifies the domain initiating the request and is always sent to the server. CORS defines the protocol used between a web browser and a server to determine whether a cross-origin request is allowed. In order to accomplish this goal, there are a few HTTP headers involved in this process, which are supported by all major browsers:

Origin, Access-Control-Request-Method, Access-Control-Request-Headers, Access-Control-Allow-Origin, Access-Control-Allow-Credentials, Access-Control-Allow-Methods, Access-Control-Allow-Headers

Things you must consider when using CORS

  1. Validate URLs passed to XMLHttpRequest.open. Current browsers allow these URLs to be cross domain; this behavior can lead to code injection by a remote attacker. Pay extra attention to absolute URLs.

  2. Ensure that URLs responding with Access-Control-Allow-Origin: * do not include any sensitive content or information that might aid an attacker in further attacks. Use the Access-Control-Allow-Origin header only on chosen URLs that need to be accessed cross-domain. Don't use the header for the whole domain.

  3. Allow only selected, trusted domains in the Access-Control-Allow-Origin header. Prefer whitelisting domains over blacklisting or allowing any domain (do not use the * wildcard nor blindly return the Origin header content without any checks).

  4. Keep in mind that CORS does not prevent the requested data from going to an unauthenticated location. It's still important for the server to perform usual CSRF prevention.

  5. While the RFC recommends a preflight request with the OPTIONS verb, current implementations might not perform this request, so it's important that "ordinary" (GET and POST) requests perform any access control necessary.

  6. Discard requests received over plain HTTP with HTTPS origins to prevent mixed content bugs.

  7. Don't rely only on the Origin header for access control checks. The browser always sends this header in CORS requests, but it may be spoofed outside the browser. Application-level protocols should be used to protect sensitive data.

NOTE: Modern application frameworks often allocate the Origin header dynamically (echoing it back), which also allows the server to send the "Access-Control-Allow-Credentials: true" header in responses. Whenever JSON web tokens are sent in cookies rather than headers, potential attackers could abuse this behaviour to make cross-origin XHR GET requests on the authenticated user's behalf and read sensitive information from the pages.
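
A minimal sketch of a strict origin whitelist, assuming a Flask application (the trusted origins are illustrative): only exact matches are echoed back, never "*" and never the "null" origin.

    from flask import Flask, request

    app = Flask(__name__)

    TRUSTED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

    @app.after_request
    def apply_cors(response):
        origin = request.headers.get("Origin")
        if origin in TRUSTED_ORIGINS:
            # Echo back only exact whitelist matches; "null" and unknown origins get no CORS headers.
            response.headers["Access-Control-Allow-Origin"] = origin
            response.headers["Vary"] = "Origin"
        return response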


</details>
------