KhronosGroup / glTF

glTF – Runtime 3D Asset Delivery

How do we organize the list of glTF projects? #1288

Closed pjcozzi closed 3 years ago

pjcozzi commented 6 years ago

The glTF community is lucky to have an amazing problem: the current list of glTF projects is getting so long that we are considering how to best organize it.

Currently, we put quick notes the first time we see a project into #1058 and then update the above list as people open pull requests or as someone has time to curate #1058 to find "finished" projects.

Perhaps we need to separate the list of projects into (1) prominent and well-maintained projects in the main README.md and then (2) a separate .md file that is comprehensive.

No rush...and open to any and all ideas. Thanks!

pjcozzi commented 5 years ago

@ryanoshea-arm started this in https://github.com/KhronosGroup/glTF/pull/1423.

javagl commented 5 years ago

In the previous call we talked about the Late-breaking glTF projects issue. One way of tackling this issue could be to first collect all the information in a somewhat machine-processable form. So I went through the issue and collected the information from these posts in a spreadsheet. This refers ONLY to the posts that did not yet have a [X] checkmark, and a table with these new entries (converted from CSV to MD) is at

https://gist.github.com/javagl/4e3e42db09f01a8ddb9f041a7898d81a

(This is rather intended as a "preview". I tried to add information about the Type (Application, Library...), Task (View, Load, ...) and file formats, but I wouldn't claim that this is really complete or consistent. Further cleanups will be necessary here, and a few entries are still missing.)

In view of the more general question about organizing the tables in the README, a few thoughts:

  1. Any sort of classification of the tools raises questions about where to draw the lines between...

    • an "application" and a "library"
    • a "loader" and an "importer" and a "plugin"
    • a "viewer" and a "debugger" and an "engine"
    • something that can "import+export" and a "converter"

    Of course, there's infinite potential for arguing, nitpicking and bikeshedding here, but IF we add some classification, it should be as consistent and reasonable as possible...

  2. Which of the tools will be presented in the README, and how? The comment at https://github.com/KhronosGroup/glTF/pull/1423#issuecomment-422793469 made a suggestion, but: who will decide which tools are shown "prominently" in the README? Will the tables/sections be organized by target groups (Designers, Developers...), by type (Library, Application...), or by tasks (Convert, Optimize...)?

  3. A functionality for searching+filtering ONLINE would be nice. This could also alleviate the problem of how to group the entries. Google sheets could be an option, and of course, the first step could be to import the whole data there. But one might have to go an extra mile to make search+filter available conveniently for end-users.

  4. In case we want to create one "master list", I also extracted the existing tables from the README, so adding these tables should be simple. In any case: once we have the information in CSV or a similar form, exporting it into Markdown tables is trivial.

  5. Regardless of all these detail questions: Somebody will have to maintain these tables. ("Any volunteers?" ;-) ). I think the approach of PRs is a bit clumsy. Maybe some Wiki-style approach could work?

weegeekps commented 5 years ago

I posted this in #1058 rather than this issue at first, by accident. My apologies. @pjcozzi please let me know if I should delete my response in that issue.

Here are some examples of the "Are we...?" pattern sites that the Rust community uses.

I'm particularly fond of the designs for "Are we game yet?" and "Are we GUI yet?" Both provide an ecosystem matrix allowing you to drill down into specific areas in order to find what you need.

javagl commented 5 years ago

After GoogleDocs+Forms was mentioned in one of the calls, I searched a bit and found things like https://sites.google.com/site/scriptsexamples/available-web-apps/awesome-tables and https://sites.google.com/site/scriptsexamples/available-web-apps/awesome-tables/using-a-form (these do not seem to be affiliated with Google directly, and I have not yet taken a closer look at them). The basic functionality of quickly filtering and searching, as well as editing the contents via forms, seems close to something that could be useful for end users and easy to maintain collaboratively. Has anybody used this before, or does anyone know of similar alternatives (or better solutions)?

Otherwise, we could probably ask one of the "arewe..." authors for permission to adapt one of their templates (or just copy it if it is CC-...) and give it a shot. But admittedly, I'm not so familiar with certain parts of web development, and am not sure whether I understand the workflow behind these pages (in terms of the data source and how it is maintained) ...

weegeekps commented 5 years ago

@javagl I know that "arewegameyet" uses Zola, a static site generator written in Rust. It's similar to Jekyll; pages are written in Markdown with front matter that provides metadata. You can also do taxonomies in TOML, which may be perfect for our needs. I've used it before (as well as Jekyll and Hugo, two alternatives) and would be glad to help.

javagl commented 5 years ago

@weegeekps So do you know which version of Zola they have been using? I just gave Zola 0.8.0 a try, and can only vaguely say that "it does not work" (without diving into details about which error message appears for which command from the user guide, and at which points the guide is simply incomplete, at least for someone like me, who is a software developer and not a web developer). Specifically, it cannot be run over the AreWeGameYet repo in its current form, and I'm not sure how much time I should spend trying to figure out why...

In any case, I wonder what the actual "repository of data" could look like. The AreWeGameYet site spreads the information over dozens of .TOML files in several directories, which looks like a maintenance nightmare to me. But I assume that we'd just throw all the information into one .MD file and apply filters and such to generate the desired page...? A solution that is not specific to one templating engine or site generator would be preferable, though...

weegeekps commented 5 years ago

@javagl I didn't get a chance to try until this morning, but I can confirm that building arewegameyet locally is failing with Zola on Rust stable. I haven't tried nightly.

I agree about spreading the information over several different TOML files: it does look difficult to manage, and I don't think a solution we come up with would need to do that. I also agree that we should avoid locking ourselves into a single site generator. I don't think using TOML itself would create that risk, as it generally works across many different site generators, and if we ditched a generator completely we could still easily transform the data into the shape we want.

javagl commented 5 years ago

One of the "issues" with spreading the information is that we can not (yet) be sure how to structure the information. As mentioned above, there are certainly overlaps between an "importer+exporter library" and a "converter" library. The differentiation could then be a bit tricky, and might depend on whether the glTF is imported into an in-memory data structure that allows to be (and is supposed to be) modified programmatically by the client code.

(If we end up with a structure that essentially shows each "tool" on a dedicated page, we could even go so far as to insert sample code snippets or even screenshots, but that would be the next step...)

When I try to imagine the goals (or "workflows") of people looking at these projects, they probably fall into different categories:

The largest number of degrees of freedom is for actual developers. They may be using a certain engine, with a certain programming language, and have a certain task that may be "loading", "viewing", "converting" or "exporting". So there should be some option to search for something specific like "c++ importer for SomeEngine with MIT license".

I'll also try to have another look at Zola (but maybe also the generators used for the other arewe-sites). I assume that some of them should allow importing data from one central data source (maybe some CSV-like file) and offer some sort of search+filter functionality.

weegeekps commented 5 years ago

Hugo or Jekyll may be a better bet for importing a CSV. I found out that Hugo has built-in support for taking a CSV and using a template to render it.

I think you’re correct about those workflows. One downside of the “arewe*” sites is that they’re clearly aimed at only the developer themselves, rather than other users.

A structured approach of first mapping out the degrees of freedom for developers and then progressively adding effectively "filtered" views for the other needs may be a good angle to take with this.

@javagl Are you going to be at SIGGRAPH this week? If you have some time maybe we can sit down together for an hour or two?

javagl commented 5 years ago

@weegeekps I won't be at SIGGRAPH, but I hope I can carve out some time during the weekend to try out the other approaches (and I thought about trying to get some draft/preview onto a GitHub page or so).

javagl commented 5 years ago

Only a small update here: I gave Hugo a try and managed to create a site that shows the table contained in a CSV file. Details like being able to add some

<td>{{ index $r  0 | markdownify }}</td>

and have the (markdown'ed) link contained in the CSV appear properly on the page are nice. But beyond that, the complexity of Hugo is daunting, to say the least.

So I now have a large CSV file (with a structure whose details could still be discussed) and a very basic skeleton to display this as a table on a website. But after reading lots about templates, taxonomies, themes, HTML, CSS, shortcodes, pipes, partials, archetypes, front matter and markdown, I'm afraid that I'm lacking too much background knowledge and cannot produce reasonable results here within a reasonable amount of time. If someone has the bandwidth to bring this content into an appealing shape, maybe using https://themes.gohugo.io/ , and add some sort of interactivity like sorting and filtering, that would be great.

weegeekps commented 5 years ago

Marco and I have been discussing via email over the past week or so and trying to come up with a better idea on how to present the data to users. It's clear from working with the example Hugo site that Marco started that a large tabular list is not going to be the best way to present this data, so I've come up with this proposed design:

(Image: glTF projects mockup)

Please keep in mind this is just a rough sketch, but it should convey the direction we're looking to go in. Any and all input is welcome.

With this proposal we're beginning to lean towards a more custom solution that allows dynamic searching and filtering of the data using a wide card design. Alternatively, using narrower cards with multiple columns on wider screens and collapsing down on smaller screens would also work well. This can all be done inexpensively and easily in a modern web app against a static data set (either JSON or CSV), and then served up using GitHub project pages or some other static web server. This likely fits our needs better and prevents us from getting completely locked into a static site generator, while keeping our data in a more universal format that we can easily maintain and move to a different solution if necessary in the future.
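
To make that direction a bit more concrete, here is a minimal sketch of what such a static-data-driven page could look like. The file name projects.json, the element id, and the card markup are purely illustrative assumptions, not a description of any actual implementation.

```ts
// Minimal sketch only: fetch a static JSON data set and render one card per project.
// "projects.json", the element id "projects", and the markup are illustrative assumptions.
interface ProjectCardData {
  title: string;
  description: string;
  link: string;
  task: string[];
}

async function renderCards(container: HTMLElement): Promise<void> {
  const response = await fetch("projects.json");
  const projects: ProjectCardData[] = await response.json();

  for (const project of projects) {
    const card = document.createElement("div");
    card.className = "project-card";
    card.innerHTML = `
      <h3><a href="${project.link}">${project.title}</a></h3>
      <p>${project.description}</p>
      <p>Tasks: ${project.task.join(", ")}</p>`;
    container.appendChild(card);
  }
}

renderCards(document.getElementById("projects")!).catch(console.error);
```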

pjcozzi commented 5 years ago

@weegeekps wow so it looks like you and @javagl are solving both the searching/filtering (currently static in the README.md) and the database/curation issue. Nice!

@outofcontrol may have some input on how this would be deployed on https://www.khronos.org/gltf/ or a separate dedicated page. Currently, this page duplicates the GitHub README.md list.

javagl commented 5 years ago

Regarding the data that is backing the proposed site, we're currently working with a structure that weegeekps proposed:

[
    {
        "title": "Some viewer",
        "description": "Some longer text with markdown",
        "link": "https://example.com",
        "task": ["view", "load"],
        "license": ["Apache-2.0", "MIT"],
        "type": ["library", "application"],
        "language": ["C++"],
        "inputs": ["glTF 2.0"],
        "outputs": []
    }
]

Some properties are arrays of strings, to make searching+filtering easier.
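
Purely for illustration, the same structure expressed as a TypeScript interface (a sketch; the field names simply mirror the JSON example above and nothing here is final):

```ts
// Sketch of the proposed entry structure; fields mirror the JSON example above.
interface ProjectEntry {
  title: string;
  description: string;   // may contain markdown
  link: string;
  task: string[];        // e.g. ["view", "load"]
  license: string[];     // e.g. ["Apache-2.0", "MIT"]
  type: string[];        // e.g. ["library", "application"]
  language: string[];    // e.g. ["C++"]
  inputs: string[];      // e.g. ["glTF 2.0"]
  outputs: string[];
}
```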

For some of the properties, the possible contents may be a bit vague, but in the current state of the list, most of them can be boiled down to a reasonable set of values that they may have. Roughly:

Regarding the input and output fields, there are some degrees of freedom: some converters have literally dozens of possible input file formats. One could list the file extensions there, but that is hard to maintain. (In the current state, these usually just say that the input is Multiple...) Additionally, we might want to make clear whether these are supposed to be file extensions; currently, they are strings like "glTF 2.0". Maybe we should consider promoting the glTF version to a dedicated field anyway?

Beyond that, based on the discussion from the working group, there are some further fields that should probably be added. Summarizing from the call, with some first thoughts:

emackey commented 5 years ago

Sounds good. Remember to keep this simple at the start. There are tons of projects and packages that interact with glTF, and each of these fields gets multiplied by that number of projects. For many of these, very little information is known unless the project's maintainer decides to step in and fill out all of these different fields directly.

weegeekps commented 5 years ago

@emackey I agree. @javagl and I have discussed making all fields optional except for the name and perhaps the link. At a minimum, that should be enough to add a project to the list, and perhaps we could add a special filter for "Needs more info?"

For dates, I think adding a createdDate field (using ISO 8601 formatting) would be a decent start. We may be able to also correlate the existing information in #1058 to dates for the initial entries. For lastUpdatedDate, if a project uses a GitHub repository, we may be able to fetch that information from GitHub if the API limits allow it.

javagl commented 5 years ago

> For many of these, very little information is known unless the project's maintainer decides to step in and fill out all of these different fields directly.

That's an important point. I tried to retain the information that was contained in the tables of the README, and while going through the LateBreaking issue, also tried to gather further information (e.g. the license). But in any case, there will still be some curation necessary when it goes online for the first time.

@weegeekps How could the lastUpdatedDate be combined with the given information in the (otherwise quite static) JSON file? Or did you consider pulling the information "on the fly", when the content is displayed, based on the link? A "middle ground" could be to store a lastReleaseDate, but we'd still have to think about a proper update mechanism for that.

weegeekps commented 5 years ago

My thought was to grab the data on the fly as the cards load, but we may be able to grab the information wholesale when we load the data. I looked deeper today into whether or not this is feasible given the rate limits, and looking at rate limits alone it can be done with GitHub's v4 API which uses GraphQL. Take the following example:

{
  rateLimit {
    cost
    remaining
    resetAt
  }
  glTF: repository(owner: "KhronosGroup", name: "glTF") {
    name
    url
    ref(qualifiedName: "master") {
      target {
        ... on Commit {
          id
          pushedDate
          message
        }
      }
    }
  }
  sampleViewer: repository(owner: "KhronosGroup", name: "glTF-Sample-Viewer") {
    name
    url
    ref(qualifiedName: "master") {
      target {
        ... on Commit {
          id
          pushedDate
          message
        }
      }
    }
  }
  validator: repository(owner: "KhronosGroup", name: "glTF-Validator") {
    name
    url
    ref(qualifiedName: "master") {
      target {
        ... on Commit {
          id
          pushedDate
          message
        }
      }
    }
  }
  blenderExporter: repository(owner: "KhronosGroup", name: "glTF-Blender-IO") {
    name
    url
    ref(qualifiedName: "master") {
      target {
        ... on Commit {
          id
          pushedDate
          message
        }
      }
    }
  }
}

This is just a rough, hand-written query against the v4 GitHub API. The easiest way to test this out is to use the GitHub API Explorer and copy and paste all of that into the left column. This fetches the last commit for several Khronos glTF related repositories.

Note that the first block of the response includes the rate-limiting information. For me, querying the last commit for each of those repositories has taken a single request. I'm thinking of writing up something small to test whether this remains true when requesting all of the repositories at once.

A significant issue that could prevent us from easily getting this information on the fly is that the user has to be authenticated with OAuth. I'm not sure how this would work right now. For starters, this would be entirely client-side, which adds all sorts of security complexities that I don't think we need to get into the business of solving, at least not right now.

We could maybe also use GitHub Actions to get around the OAuth issue, but then we would need to store secrets and other things in order to actually update the file on a cron job, and at that point I think we're getting too complex for a first version of this. To really get the lastUpdatedDate properly, we would need a backing web server, which I believe is something we want to explicitly avoid right now.
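
To illustrate the kind of batch update such a scheduled job could perform, here is a rough sketch, not an actual implementation. The data file name, the GITHUB_TOKEN environment variable, and the lastUpdatedDate field written back are assumptions, and for simplicity it issues one query per repository rather than batching them like the hand-written query above.

```ts
// Rough sketch only: a script that a scheduled job (e.g. a GitHub Actions cron)
// could run to refresh "lastUpdatedDate" in the data file.
// Assumptions: Node 18+ (built-in fetch), a token in GITHUB_TOKEN, and a
// "projects.json" whose entries have a "link" field pointing at a GitHub repository.
import { readFileSync, writeFileSync } from "fs";

const query = `
  query LastPush($owner: String!, $name: String!) {
    repository(owner: $owner, name: $name) {
      ref(qualifiedName: "master") {
        target { ... on Commit { pushedDate } }
      }
    }
  }`;

async function lastPushedDate(owner: string, name: string): Promise<string | null> {
  const response = await fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: {
      Authorization: `bearer ${process.env.GITHUB_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, variables: { owner, name } }),
  });
  const result = await response.json();
  return result.data?.repository?.ref?.target?.pushedDate ?? null;
}

async function updateDates(): Promise<void> {
  const projects = JSON.parse(readFileSync("projects.json", "utf8"));
  for (const project of projects) {
    // Only GitHub-hosted projects can be refreshed this way; others keep their old value.
    const match = /github\.com\/([^/]+)\/([^/]+)/.exec(project.link ?? "");
    if (!match) continue;
    const date = await lastPushedDate(match[1], match[2]);
    if (date) project.lastUpdatedDate = date;
  }
  writeFileSync("projects.json", JSON.stringify(projects, null, 2));
}

updateDates().catch(console.error);
```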

weegeekps commented 4 years ago

On account of my tardiness this morning, I didn't manage to demo the current progress, so here are two screenshots showing the current state:

(Screenshot: glTF Project Explorer)

(Screenshot: glTF Project Explorer filter bar)

There are still things missing:

Regarding the filtering logic: it currently runs an OR across all selected filter values. I'm not a fan of this, so I'd like to propose an alternative algorithm:

For each dimension (tasks, types, licenses, etc.) we OR multiple selected values, and between dimensions we AND. An example:

User selects export and load from the tasks filters, and C from the language filters. This turns into effectively the logic: (export || load) && C. This provides the user with a result that includes all projects written in C that support export or load.
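
A small sketch of that logic in TypeScript (hypothetical names; it assumes each project carries its tags as arrays of strings per dimension, as in the JSON structure proposed above):

```ts
// Sketch of the proposed filtering: OR within a dimension, AND across dimensions.
type Tagged = { [dimension: string]: string[] };

function matches(project: Tagged, selectedFilters: Tagged): boolean {
  // AND across dimensions: every dimension that has selected values must match.
  return Object.entries(selectedFilters).every(([dimension, selected]) => {
    if (selected.length === 0) return true; // nothing selected in this dimension: no constraint
    // OR within a dimension: any one selected value is enough.
    return selected.some((value) => (project[dimension] ?? []).includes(value));
  });
}

// Example from above: (export || load) && C
const filters: Tagged = { task: ["export", "load"], language: ["C"] };
const someProject: Tagged = { task: ["load"], language: ["C"], license: ["MIT"] };
console.log(matches(someProject, filters)); // true
```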

Is this sufficient? Or do others envision other cases that may not be satisfied by this logic?

pjcozzi commented 4 years ago

Wow @weegeekps, that is looking sharp!!! We'll get input from the community and the working group, but the general direction is AWESOME.

donmccurdy commented 3 years ago

Closing — https://github.com/KhronosGroup/glTF-Project-Explorer is available now, thanks @weegeekps!