input-output-hk / Certification-working-group


Certification of a Dapp Checklist #7

Open MIxAxIM opened 1 year ago

MIxAxIM commented 1 year ago

Given the mind-boggling complexities of the fast-evolving Cardano infrastructure (new platforms, new languages, new tools and new protocols) and the risks associated with deploying smart contracts managing millions of dollars, there are so many things to get right with smart contracts that it is easy to miss a few checks, make incorrect assumptions or fail to consider potential situations. Therefore, smart contract experts need checklists, and in this issue we'll build the checklist for certification of a Dapp together.

Checklist

  1. Specification analysis (Manual)
  2. Documentation analysis (Manual)
  3. Testing (Automated)
  4. Static analysis (Automated)
  5. Fuzzing (Automated)
  6. Symbolic checking (Automated)
  7. Formal verification (Automated)
  8. Manual analysis (Manual)

The Checklist in Detail

Specification Analysis (Manual)

Specification describes in detail what (and sometimes why) the project and its various components are supposed to do functionally as part of their design and architecture.

  1. From a security perspective, it specifies what the assets are, where they are held, who the actors are, the privileges of the actors, who is allowed to access what and when, trust relationships, the threat model, potential attack vectors, scenarios and mitigations.

  2. Analyzing the specification of a project provides auditors with the above details and lets them evaluate the assumptions made and indicate any shortcomings.

  3. Very few smart contract projects have detailed specifications at their first audit stage. At best, they have some documentation about what is implemented. Auditors spend a lot of time inferring the specification from the documentation/implementation, which leaves them less time for vulnerability assessment.

Documentation Analysis (Manual)

Documentation is a description of what has been implemented based on the design and architectural requirements.

  1. Documentation answers ‘how’ something has been designed/architected/implemented without necessarily addressing the ‘why’ and the design/requirement goals.

  2. Documentation is typically in the form of Readme files in the GitHub repository describing individual contract functionality, combined with functional Haddock documentation and individual code comments (a small Haddock sketch follows this list).

  3. Documentation in many cases serves as a substitute for the specification and provides critical insights into the assumptions, requirements and goals of the project team.

  4. Understanding the documentation before looking at the code helps auditors save time in inferring the architecture of the project, contract interactions, program constraints, asset flow, actors, threat model and risk mitigation measures.

  5. Mismatches between the documentation and the code could indicate stale/poor documentation, software defects or security vulnerabilities.

  6. Auditors are expected to encourage the project team to document thoroughly, so that auditors do not have to waste time inferring this by reading code.
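To make point 2 concrete, here is a minimal sketch of the Haddock style referred to above; the module Vault and the functions maxWithdrawal and validateWithdrawal are invented for illustration, not taken from any real project:

```haskell
module Vault
  ( -- * Withdrawal checks
    validateWithdrawal
  ) where

-- | Maximum amount (in lovelace) that a single transaction may withdraw.
--   (Hypothetical constant, for illustration only.)
maxWithdrawal :: Integer
maxWithdrawal = 100000000

-- | Returns 'True' when the requested amount is positive and does not
--   exceed 'maxWithdrawal'. Haddock comments like these document the
--   contract's constraints right next to the code that enforces them.
validateWithdrawal
  :: Integer  -- ^ Amount requested, in lovelace
  -> Bool     -- ^ Whether the withdrawal may proceed
validateWithdrawal amount = amount > 0 && amount <= maxWithdrawal
```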

Testing (Automated)

Software testing or validation is a well-known fundamental software engineering primitive to determine if software produces expected outputs when executed with different chosen inputs.

  1. Smart contract testing has a similar motivation but is arguably more complicated, despite contracts' relatively small size (in lines of code) compared to Web2 software.

  2. Since smart contract development on Cardano is done largely in Haskell, the current tools are in fact Haskell testing tools, but Plutus-specific tools with different levels of testing support are under development (a minimal property-based sketch follows this list).

  3. At the moment, testing the integration and composability of mainnet contracts, including state and state changes, is non-trivial without the PAB or a front-end.

  4. Test coverage and test cases give a good indication of project maturity and also provide valuable insights to auditors into assumptions/edge-cases for vulnerability assessments.

  5. Auditors should expect a high level of testing and test coverage, because this is a must-have software-engineering discipline, especially when smart contracts that are by design exposed to everyone on the blockchain end up holding assets worth tens of millions of dollars.

  6. "Program testing can be used to show the presence of bugs, but never to show their absence!” - E.W. Dijkstra

Static Analysis (Automated)

Static analysis is a technique of analyzing program properties without actually executing the program.

  1. This is in contrast to software testing where programs are actually executed/run with different inputs.

  2. Static analysis is typically a combination of control-flow and data-flow analyses; a small example of the kind of issue such a tool can flag follows.
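As a small illustration, HLint (an off-the-shelf Haskell static analyser) reports the pattern below from the program text alone, without ever running the code; isEmpty is an invented example:

```haskell
-- HLint flags this definition with "Use null": 'length' needlessly
-- forces the spine of the whole (possibly huge) list.
isEmpty :: [a] -> Bool
isEmpty xs = length xs == 0

-- After applying the suggestion, the check is O(1):
isEmpty' :: [a] -> Bool
isEmpty' = null
```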

Fuzzing (Automated)

Fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes or failing built-in code assertions.

  1. Fuzzing is especially relevant to smart contracts because anyone can interact with them on the blockchain with random inputs, without necessarily having a valid reason or expectation (arbitrary Byzantine behavior). A minimal fuzzing harness is sketched below.
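A minimal fuzzing harness in Haskell might look as follows; decodeRedeemer is a deliberately buggy, invented example, and QuickCheck's random generation stands in for a dedicated fuzzer:

```haskell
import Test.QuickCheck

-- Invented decoder with a planted bug: positive tags index the payload
-- without a bounds check, so short inputs crash -- exactly the kind of
-- defect a fuzzer should surface.
decodeRedeemer :: [Int] -> Either String Int
decodeRedeemer []       = Left "empty input"
decodeRedeemer (tag:rest)
  | tag == 0  = Right (sum rest)
  | tag > 0   = Right (rest !! (tag - 1))  -- crashes when rest is too short
  | otherwise = Left "negative tag"

-- Robustness property: the decoder must be total on arbitrary input.
-- QuickCheck reports the thrown exception as a failing case.
prop_totalDecoder :: [Int] -> Bool
prop_totalDecoder input =
  case decodeRedeemer input of
    Left _  -> True
    Right n -> n `seq` True

main :: IO ()
main = quickCheck prop_totalDecoder
```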

Symbolic Checking (Automated)

Symbolic checking is a technique for checking program correctness, i.e. proving/verifying it, by using symbolic inputs (Datum or Redeemer) to represent sets of states and transitions instead of enumerating individual states/transitions separately.
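As a sketch of the idea using SBV (a Haskell front-end to SMT solvers; the balance/amount property is invented): instead of enumerating concrete datum values, one symbolic integer stands for all of them at once.

```haskell
import Data.SBV

-- One symbolic 'balance' and 'amount' cover every concrete pair:
-- the solver proves the implication for all of them in one query.
main :: IO ()
main = do
  result <- prove $ \balance amount ->
    (amount .> 0 .&& amount .<= balance)
      .=> (balance - amount .>= (0 :: SInteger))
  print result  -- prints "Q.E.D." when the property holds universally
```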

Formal Verification (Automated)

Formal verification is the act of proving or disproving the correctness of the intended algorithms underlying a system with respect to a certain formal specification or property, using formal methods of mathematics.

  1. Formal verification is effective at detecting complex bugs which are hard to detect manually or using simpler automated tools.

  2. Formal verification needs a specification of the program being verified, and techniques to translate/compare the specification with the actual implementation (see the sketch after this list).
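For Haskell code, LiquidHaskell is one example of such a tool: the refinement type acts as the formal specification, and an SMT solver checks the implementation against it for all inputs, not just tested ones. The module and the withdraw function below are invented for illustration:

```haskell
module VaultSpec where

-- The refinement type is the specification: given a non-negative
-- balance and a positive amount no larger than it, the result is
-- proven (not merely tested) to be non-negative.
{-@ withdraw :: bal:{v:Int | v >= 0}
             -> amt:{v:Int | 0 < v && v <= bal}
             -> {v:Int | v >= 0} @-}
withdraw :: Int -> Int -> Int
withdraw bal amt = bal - amt
```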

Manual Analysis (manual)

Manual analysis is complementary to automated analysis using tools and serves a critical need in smart contract audits.

  1. Automated analysis using tools is cheap (typically open-source free software), fast, deterministic and scalable (varying with how fully automated the tool is), but it is only as good as the properties it is made aware of, which are typically limited to Haskell-related constraints.

  2. Manual analysis with humans, in contrast, is expensive, slow, non-deterministic and not scalable, because human expertise in smart contract security is a rare/expensive skill set today, and humans are slower, more error-prone and less consistent.

  3. Manual analysis is, however, the only way today to infer and evaluate business logic and application-level constraints, which is where the majority of serious vulnerabilities are found.


RSoulatIOHK commented 1 year ago

Thanks for setting up this list!

I have a few comments that we can discuss either here or during one of the meetings:

Auditors spend a lot of time inferring the specification from the documentation/implementation, which leaves them less time for vulnerability assessment.

In my opinion, an auditor who writes the specification cannot be the one that certifies the DApp. We need a proper second look at the whole set of documentation, implementation and verification artifacts in order to issue a certificate.

Testing (Automated)

I think auditors should also perform a manual analysis of the testing activities beyond what is automated. Critical thinking should be applied to what has been tested. Is the test campaign "complete"? Does it cover all cases?

Static Analysis (Automated)

This should be driven by the adoption of a standard common vulnerabilities and weaknesses enumeration. We should be able, as a community, to provide tools that check for these kinds of issues in a completely automated way. If we manage to get to this point, it would make a fine addition to L1 certification.

Symbolic Checking (Automated) / Formal Verification (Automated)

For both points, there needs to be a corresponding manual analysis of:

I also think that a complete list of the tools used for automated verification should be added somewhere in the certificate. We could also think of having tools that are qualified for certification. If a tool is found to be buggy, we could always re-evaluate the certificate.

Another idea would be to use mutation testing to evaluate the quality of the specification, or of test oracles. Mutants should not be able to go through the whole test campaign without being killed. Once we have a set of executable specifications, it's usually not a lot to ask for (a small sketch follows).
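To illustrate the mutation testing idea with a hypothetical example (withinLimit is invented): a mutant that survives the whole test campaign exposes a missing test case, typically at a boundary.

```haskell
import Test.QuickCheck

-- Original rule and a typical mutant (the comparison operator swapped).
withinLimit, withinLimitMutant :: Integer -> Integer -> Bool
withinLimit       limit amount = amount <= limit
withinLimitMutant limit amount = amount < limit  -- mutation: <= became <

-- A test campaign strong enough to kill this mutant must hit the
-- boundary amount == limit, the only point where the two disagree.
prop_killsMutant :: Integer -> Bool
prop_killsMutant limit =
  withinLimit limit limit /= withinLimitMutant limit limit

main :: IO ()
main = quickCheck prop_killsMutant
```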