mc-cip / spec
Index specification

Tooling Requirements #2

ShadowOTE opened this issue 4 years ago (status: Open)

ShadowOTE commented 4 years ago

This issue is to discuss potential tooling for the spec. This is not a discussion of detailed implementation, but rather an effort to gather the types of tooling needed up front, as well as tooling desirable in the longer term, in order to facilitate spec build-out and drive adoption.

Recap of discussion from Issue #1

Initial comment from @Wtoll:

Have we found a good system for automatically generating or managing the creation and release of new versions? Honestly, as a developer myself, having to manually add new versions and enumerate each of them in a specific file is kind of a deal breaker.

I think this system looks really great from an index and pack development perspective because it has all of the information in one place, but from a developer perspective it’s kind of a mess and would be really hard to maintain.

Also, if we're defining Forge and Fabric in the dependencies section, then why do we need to specify which loader it runs on in a separate section? I feel like it can easily be inferred.

Reply @ShadowOTE:

@Wtoll I think that's an excellent question. Looking at the gist @stairman06 kindly put together, I see three categories of information (see the sketch after this list):

  1. Project Metadata - data that is typically entered once and only occasionally updated. Tooling could easily be created for client tools, or incorporated into the upload process, to register and update these fields.
  2. Version Metadata - data that can be generated by tooling on the client side, or during upload, by scanning files and using checksums to find matches in the index.
  3. Download paths/CDN info - the primary download link(s) can probably be populated automatically during upload. If secondary lists are available separately, those could be periodically updated by content-scraping bots.
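To make the split concrete, here is a rough sketch of how these three categories might be modeled. Every field name here is an illustrative assumption, not something defined by the spec:

```kotlin
// Illustrative sketch only; all field names are assumptions, not spec-defined.
data class ProjectMetadata(          // 1. entered once, occasionally updated
    val id: String,
    val name: String,
    val authors: List<String>,
    val license: String
)

data class VersionMetadata(          // 2. generated by tooling per release
    val version: String,
    val gameVersions: List<String>,
    val sha256: String,              // checksum used to match files to index entries
    val dependencies: List<String>
)

data class DownloadInfo(             // 3. populated during upload
    val primaryUrl: String,
    val mirrors: List<String> = emptyList()
)
```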

Overall, I think the real trick here is going to be setting this up so it works as a streamlined part of the author's workflow. That will likely mean developing a suite of plugins, templates, bots, and other tooling. For example (and keep in mind, I'm not a minecraft mod dev, so hopefully I'm not too far off the mark here!):

  1. Dev installs a plugin for their IDE of choice
  2. Dev creates a new project using the plugin/template. This prompts them to fill in the project level info, and provides tooling to update it later.
  3. Dev adds a reference to another mod using package manager tooling installed as part of step 1. The tooling downloads the mod/resource and unpacks it in the appropriate folder, then registers it in their IDE.
  4. At some point, the dev chooses to publish. Ideally this would be part of the tooling added to their IDE in step 1. During upload, the references are validated and the dev is warned about any references with missing or invalid metadata (see the sketch after this list); ideally the upload registrar would assist in correcting them. The dev could also be given the chance to update metadata, and may or may not need to provide a primary download link plus additional sources/CDNs.
  5. After publishing, the dev is then provided an updated version of the file to import back into their project; if the IDE tooling is involved, it should import and overlay automatically on successful publish.
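As a minimal sketch of the validation pass in step 4 (the function and its inputs are hypothetical stand-ins for whatever lookup the upload registrar would actually expose):

```kotlin
// Hypothetical sketch of the step-4 validation pass; `knownProjects`
// stands in for whatever lookup the upload registrar exposes.
fun validateReferences(dependencies: List<String>, knownProjects: Set<String>): List<String> =
    dependencies
        .filterNot { it in knownProjects }
        .map { "Warning: reference '$it' has missing or invalid metadata in the index" }
```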

This is obviously very rough, and I suspect it may be optimistic depending on what tools mod/resource devs are using, but the core concept is that there should be tooling that abstracts the spec into a "behind the scenes" file that is built and maintained automatically as much as possible. At the same time, it needs to make authors' lives easier in order to drive adoption. Also keep in mind that we need to be able to create tooling to scan existing mods and pregenerate these files so they have a starting point; otherwise the chances of this standard being adopted by dev teams are very low! Ideally devs will be able to drop this into their projects and hit the ground running, instead of painstakingly building it from the ground up (particularly important for mod pack devs!).

I'll defer to others in this discussion with the relevant domain expertise regarding how all of this will work with loader-specific dependencies and such. However, I suspect the tooling could be made smart enough to let the dev configure this as part of adding references.

Reply @Stairman06

@ShadowOTE IDE-based tooling is an interesting idea. My original idea was to create a web-based tool that allows authors to manipulate manifest and version files visually. However, taking advantage of the IDEs that developers already use seems like a more integrated and seamless solution.

One issue that arises has to do with publishing. When an author decides they wish to publish their mod, where does that request get stored? GitHub Pull Requests seem like a simple solution: they're built into GitHub, with helpful tools like diff viewing and the ability to merge straight from the website. However, if authors are required to fork the repository, commit changes, and create a PR, it could be too large of a hindrance.

With that being said, perhaps the web-based tool or IDE integration could come in handy here? It might be possible, using the GitHub API, to automatically fork the repository, commit changes, and create a PR, all with the author only having to click Publish.
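For illustration, here is roughly what that flow could look like against the GitHub REST API. This is a sketch assuming a personal access token in GITHUB_TOKEN; the repository name mc-cip/index, the manifest path, and the AUTHOR account are placeholders:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse
import java.util.Base64

// Sketch of a "one-click publish" flow over the GitHub REST API.
// mc-cip/index, the manifest path, and AUTHOR are placeholder names.
val client: HttpClient = HttpClient.newHttpClient()
val token: String = System.getenv("GITHUB_TOKEN")

fun github(method: String, path: String, body: String = ""): HttpResponse<String> {
    val builder = HttpRequest.newBuilder()
        .uri(URI.create("https://api.github.com$path"))
        .header("Authorization", "token $token")
        .header("Accept", "application/vnd.github.v3+json")
    when (method) {
        "POST" -> builder.POST(HttpRequest.BodyPublishers.ofString(body))
        "PUT"  -> builder.PUT(HttpRequest.BodyPublishers.ofString(body))
    }
    return client.send(builder.build(), HttpResponse.BodyHandlers.ofString())
}

fun publishManifest(manifestJson: String) {
    // 1. Fork the index repository into the author's account.
    github("POST", "/repos/mc-cip/index/forks")

    // 2. Commit the manifest to the fork; the contents API wants base64.
    val encoded = Base64.getEncoder().encodeToString(manifestJson.toByteArray())
    github("PUT", "/repos/AUTHOR/index/contents/manifests/examplemod.json",
        """{"message": "Add examplemod manifest", "content": "$encoded"}""")

    // 3. Open a pull request back against the upstream repository.
    github("POST", "/repos/mc-cip/index/pulls",
        """{"title": "Add examplemod", "head": "AUTHOR:main", "base": "main"}""")
}
```

One caveat: fork creation is asynchronous on GitHub's side, so real tooling would need to poll until the fork exists before committing to it.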

I'm not sure what the right answer is. The optimal solution will be extremely seamless for developers, require little effort, and let the moderation team easily review and merge submissions.

Reply @Minenash

Since this issue is for the spec, unless the tooling would affect the spec itself, it should probably be in a separate issue.

Reply @ShadowOTE

I like the idea of having publishing hooks integrated with Git, so that PRs could be configured to trigger a publication workflow. I'm not sure how easy or feasible that would be to accomplish, but it could potentially address one of the thornier areas (and it also avoids having to replicate tooling across multiple IDEs!).

@Minenash agreed - this is dragging us off topic. I'll create a new issue.

comp500 commented 4 years ago

A Gradle plugin, like CurseGradle, would work quite well, I think. It would allow you to keep all the metadata in the same place: Gradle versions are already used for dependency resolution and artifact naming, and can be substituted into mod code using ${version} placeholders, and much of the existing metadata, such as dependencies, can be reused when submitting. This should also work quite well with the existing tooling for mods, such as Loom and ForgeGradle, and there are plenty of other publishing methods that integrate with Gradle (e.g. Maven repositories, GitHub Releases).

In terms of the existing listed types of metadata, project metadata can be read from existing Gradle metadata as well as from fabric.mod.json/mods.toml files, and any extra information needed can be specified in the build.gradle file. Version metadata can easily be obtained from Gradle, and download paths/CDN info can be generated during the upload process.
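To illustrate, a configuration block in a build.gradle.kts might look something like this. The `indexPublish` extension and all of its fields are purely hypothetical, loosely modeled on CurseGradle's DSL; no such plugin exists yet:

```kotlin
// build.gradle.kts: hypothetical configuration block, loosely modeled on
// CurseGradle. The `indexPublish` extension and all field names are assumptions.
indexPublish {
    projectId = "examplemod"                  // placeholder project identifier
    changelog = file("CHANGELOG.md")

    // Reuse Gradle's own version string for the submitted version metadata.
    versionName = project.version.toString()

    // Map existing dependency metadata onto index relations.
    relations {
        requires("fabric-api")
        optional("modmenu")
    }
}
```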

This plugin could handle downloading dependencies as well; however, it would be easier to implement the Maven repository structure on the server hosting the mod files, since it's fairly simple and already used by existing tooling. The actual publishing process shouldn't be too complicated: just make a Gradle task that does the upload, with an API key stored in a file external to the repository.
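A minimal sketch of such a task, assuming the java plugin's jar task produces the artifact; the task name, key location, and eventual endpoint are all assumptions:

```kotlin
// build.gradle.kts: minimal sketch of an upload task. The task name and
// key location are assumptions; assumes the java plugin's `jar` task.
tasks.register("publishToIndex") {
    dependsOn("build")
    doLast {
        // The API key lives in the user's home directory, outside the
        // repository, so it is never committed.
        val apiKey = file(System.getProperty("user.home"))
            .resolve(".mc-cip/api-key").readText().trim()
        val artifact = tasks.named<Jar>("jar").get().archiveFile.get().asFile
        println("Uploading ${artifact.name} (${artifact.length()} bytes)...")
        // The actual HTTP upload would go here once a submission API exists.
    }
}
```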

A Gradle plugin only works for mods, but it wouldn't be hard to implement your own CLI or GUI tool to upload modpacks, and I would be happy to implement a publishing process for modpacks in packwiz. One thing I am not so sure about is how the actual submission process would work in terms of an API; it likely depends on how the files and metadata themselves are hosted.

I've had a cursory glance over issue #1 as well; I'll give it a more thorough read shortly.

stairman06 commented 4 years ago

> One thing I am not so sure about is how the actual submission process would work in terms of an API; it likely depends on how the files and metadata themselves are hosted.

This is one of the largest problems for the index. Keeping everything on GitHub is possible; I've been looking through the GitHub API, and it's possible to create Pull Requests with very little input from the user.

But I'm not sure how feasible storing everything on GitHub is. Tens of thousands of Pull Requests could become disorganized. Perhaps metadata/manifests could be stored in a Git repository, and submissions handled externally. That leads to the question of where submissions should be stored: who would fund the servers, how do we prevent spam, who will write the server code, etc. Organizing a large number of submissions and updates is hard to do.

This is one of the most important issues for this index project, and I'm not sure of the solution. The end goal should be that submitting to the Index is as easy as CurseForge for developers, and that the Index is easily organized for moderators to review. We're clearly not at that point yet and have a long way to go.

ShadowOTE commented 4 years ago

It was mentioned on the Discord that there is always the temptation to delete rival/competitor mods from the index. One potential solution would be to store updates in a blockchain. I'm not normally a fan of this approach (all too often it's just used to create a stir), but this is actually a scenario where it would be a legitimate solution:

- multitenant by nature
- public, transparent, append-only database
- results in a single, shared database (no need for multiple copies hosted by rivals; everyone can work off the same set of data)
- easily verifiable
- publication and acceptance follow a known pattern, where publishing is easy (just submit onto the chain, à la Ethereum) and everyone is incentivized to cross-check and validate

immibis commented 4 years ago

@ShadowOTE that should be an entirely separate issue. And Ethereum is much more complicated than you think.

ShadowOTE commented 4 years ago

Agreed. I think for now the focus should be on identifying the core tooling requirements for a minimum viable product.

immibis commented 4 years ago

Surely the minimum viable product includes zero tooling, and just aggregates mods that modders have already uploaded elsewhere (e.g. CurseForge)?