w3c / miniapp

MiniApps Standardization

Testing #182

Open xfq opened 2 years ago

xfq commented 2 years ago

We need to have a cross-vendor test suite for MiniApp specs. The tests can be used as proof that the MiniApp user agents have implemented the W3C specs. As a result, MiniApp developers can also write standard MiniApps with greater confidence.

If possible, we can design a framework to run tests automatically. If that's not possible, we need to write documentation for running tests manually.

xfq commented 2 years ago

As an example, browsers use the web-platform-tests project, maintained by the Interop team, as the test suite for the Web platform stack.

web-platform-tests.org contains an introduction to the test suite.

wpt.fyi is an archive of test results collected from a few web browsers on a regular basis.

xfq commented 2 years ago

And here are some examples of test suites for non-browser standards:

  • Verifiable Credentials
  • ARIA in HTML
  • Publication Manifest and Audiobooks
  • JSON-LD
  • IMSC

espinr commented 2 years ago

    I've been checking how similar specifications have dealt with tests and I like how the EPUB group does it. We cannot reuse the web-platform-tests directly because of the different nature of the user agents, so I propose to define something like the EPUB tests.

    It would be a dedicated GitHub repository (e.g., w3c/miniapp-tests/; see the EPUB tests repo) where we would include the tests. Basically, this would be the structure:
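    As a rough, hypothetical sketch (modelled on the EPUB tests repository; the actual layout may differ), the repository could be organized with one directory per test case plus the scripts that build the reports:

    tests/
        mnf-window-fullscreen-true/    one folder per test case, containing the miniapp under test
            test.jsonld                metadata describing the test case (see the example below)
    reports/                           one JSON file of results per MiniApp platform
    docs/                              the auto-generated documentation and result tables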

    The scripts auto-generate the documentation and the reports in a human-readable format, including information about the test suite. Everything is maintained in the repository.

    Something important is a clear methodology for how to contribute (i.e., prerequisites, a workflow based on issues, templates to use, etc.).

    Of course, we don't require an automatic process (we could just use tables or a spreadsheet), but this could help to maintain the tests in the mid to long term. Comments? If you like the approach, I can draft a first proposal based on it.

    xfq commented 2 years ago

    Thank you for the proposal. Sounds like a good plan to me.

    espinr commented 2 years ago

    I've worked on a proof of concept to show and explain what this approach would be like. As mentioned in my previous comment, the methodology and system are based on the EPUB tests. The methodology and tooling are open to any contributor, so anyone can create tests for specific parts of the specifications.

    All the maintenance would be done on GitHub, and the documentation is updated using GitHub Actions (CI), which are already included in the example repository.

    The final result is something like this: https://espinr.github.io/miniapp-tests/

    How does it work?

    Every test case:

    For instance, a simple test for the MiniApp Manifest's window.fullscreen member:
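    As an illustration (a manifest fragment sketched here, not taken from the actual proof of concept), the manifest of the test miniapp would set the member under test:

    {
        "window": {
            "fullscreen": true
        }
    }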

    The definition of the test (see test.jsonld) would be something like this:

    {
        "@context": { },
        "dcterms:rights": "https://www.w3.org/Consortium/Legal/2015/copyright-software-and-document",
        "dcterms:rightsHolder": "https://www.w3.org",
        "@type": "earl:TestCase",
        "dc:coverage": "Manifest",
        "dc:creator": ["Martin Alvarez"],
        "dc:date": "2022-05-25",
        "dc:title": "Fullscreen enabled in manifest",
        "dc:identifier": "mnf-window-fullscreen-true",
        "dc:description": "The window's fullscreen member is set to true in the manifest. The app must be shown in fullscreen.",
        "dcterms:isReferencedBy": [
          "https://www.w3.org/TR/miniapp-manifest/#dfn-process-the-window-s-fullscreen-member"
        ],
        "dcterms:modified": "2022-05-25T00:00:00Z"
    }

    This definition uses JSON-LD but we can simplify it.
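    For example, a simplified plain-JSON equivalent (hypothetical field names, keeping only what is strictly needed) could look like this:

    {
        "id": "mnf-window-fullscreen-true",
        "title": "Fullscreen enabled in manifest",
        "coverage": "Manifest",
        "description": "The window's fullscreen member is set to true in the manifest. The app must be shown in fullscreen.",
        "specReference": "https://www.w3.org/TR/miniapp-manifest/#dfn-process-the-window-s-fullscreen-member",
        "creator": "Martin Alvarez",
        "date": "2022-05-25"
    }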

    After updating the repository, the GitHub CI action will generate the documentation, resulting in something like this: https://espinr.github.io/miniapp-tests/#sec-manifest-data. As you can see, I've only included examples for three sections: packaging, content, and manifest. The documentation organizes the content accordingly.

    In the generated documentation, each test case is represented as a row, linked to the code itself (including the metadata that describes the use case), the specification feature being tested, and the results of the tests.

    How to perform tests?

    Each test should be run on every MiniApp platform, one by one: for instance, running the miniapp from the previous example and noting whether the result is the expected one. Results can be pass, fail, or N/A.

    The testing results for each platform are specified in a simple JSON file like this:

    {
        "name": "Mini Program #2",
        "ref": "https://example.org/",
        "variant" : "Cross Platform",
        "tests": {
            "cnt-css-scoped-support": true,
            "mnf-window-fullscreen-default": true,
            "mnf-window-fullscreen-true": true,
            "mnf-window-orientation-default": true,
            "mnf-window-orientation-landscape": true,
            "mnf-window-orientation-portrait": true,
            "pkg-pages-same-filenames": false,
            "pkg-root-app-css-empty": true        
        }
    }

    This sample platform (called Mini Program #2) passes all the tests except one. The results, linked to the documentation, are represented visually in a table.
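    To make the aggregation concrete, here is a minimal sketch (not the actual generator used in the proof of concept; file names and paths are assumptions) of how the per-platform JSON reports could be turned into such a table:

    # Minimal sketch of a results aggregator (assumed layout:
    # one JSON report per platform under a reports/ directory).
    import json
    from pathlib import Path

    def load_reports(reports_dir: str = "reports") -> list[dict]:
        """Read every per-platform results file (like the JSON example above)."""
        return [json.loads(p.read_text(encoding="utf-8"))
                for p in sorted(Path(reports_dir).glob("*.json"))]

    def results_table(reports: list[dict]) -> str:
        """Build a plain-text table: one row per test id, one column per platform."""
        test_ids = sorted({t for r in reports for t in r["tests"]})
        header = "test id".ljust(40) + " | " + " | ".join(r["name"] for r in reports)
        rows = [header, "-" * len(header)]
        for test_id in test_ids:
            cells = []
            for r in reports:
                value = r["tests"].get(test_id)  # True (pass), False (fail), or missing (N/A)
                cells.append({True: "pass", False: "fail"}.get(value, "n/a"))
            rows.append(test_id.ljust(40) + " | " + " | ".join(cells))
        return "\n".join(rows)

    if __name__ == "__main__":
        print(results_table(load_reports()))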

    The testing results for two different miniapp vendors (see all the sample reports) are in this document https://espinr.github.io/miniapp-tests/results.html

    I'll be happy to present this idea at the next meeting. If you have suggestions, I'll be glad to update this proposal.

    Please note that this testing methodology is complementary to a MiniApp validator, as proposed in the previous meeting.

    EDIT: I've created an example that shows how to link the tests from the specifications (see the links to the tests in this section of the packaging spec).

    espinr commented 2 years ago

    This proposal was presented during the last CG and WG meetings. No objections were raised, so I suggest we move forward with it so we can start testing as soon as possible and detect the weakest points in the specs.

    I think the best way is to organize all the miniapp tests under the same repository. We can use something like w3c/miniapp-tests/. In the documentation we will be able to define a taxonomy to classify the tests by topic or specification (Content, Packaging, Lifecycle...), as sketched below. @xfq, do you think we could have this repository? Other suggestions?
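    As a hypothetical illustration of such a taxonomy, the identifier prefixes already used in the proof of concept could be mapped to topics, for example:

    {
        "cnt": "Content (e.g., cnt-css-scoped-support)",
        "mnf": "Manifest (e.g., mnf-window-fullscreen-true)",
        "pkg": "Packaging (e.g., pkg-pages-same-filenames)"
    }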

    xfq commented 2 years ago

    Sounds good to me. Do you want me to create the repo?

    espinr commented 2 years ago

    > Sounds good to me. Do you want me to create the repo?

    Yes, please.

    xfq commented 2 years ago

    Done: https://github.com/w3c/miniapp-tests

    espinr commented 2 years ago

    Great, thank you! We can leave this issue open to collect and discuss ideas for the MiniApp validator mentioned in previous meetings.

    MichaelWangzitao commented 1 year ago

    Through my discussions with some vendors (e.g., Alibaba, Baidu, Huawei), I've found that they would prefer to set up a formal open-source project supervised by a professional open-source community. Such a community can coordinate more resources to participate and can facilitate the organization, supervision, and management of testing; developers in particular would then have test references to guide their practice.

    Therefore, at the last WG meeting we discussed some proposals for setting up this project. Two points may need further discussion (attaching some food for thought). First, which open-source community or foundation could host the project:

    1. Mulan, an open-source community in China, which is good at incubating small and medium-sized projects and uses GitHub/Gitee to host code.
    2. Open Atom Foundation
    3. OW2

    Second, a name for the project:

    1. Mustard, from the Buddhist sutra phrase "纳须弥于芥子" (containing Mount Sumeru within a mustard seed); the meaning is similar to "Little Things Make Big Things Happen".
    2. MAPT (MiniApp Platform Test); simplicity is beauty.

    Others?