Currently, missions are uploaded to the server's local disk so that the PBO can be extracted and checked for errors. If there are no errors, the mission files (PBO and ZIP) are deployed to the cloud disk (whatever that might be).
This approach works well because it leaves open both where the extraction and validation happen and where the mission files are ultimately stored. A downside right now is that downloading a mission always calls out to Google Cloud; downloads should be handled in a way that lets local disk storage still work. Signing the download URL when it comes from GCS should also be scrapped; it's over the top for this use case.
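A minimal sketch of a storage-agnostic download, assuming a Laravel controller; the `Mission` model, its `pbo_path` attribute, and the `missions.disk` config key are assumptions for illustration:

```php
<?php

namespace App\Http\Controllers;

use App\Mission;
use Illuminate\Support\Facades\Storage;

class MissionDownloadController extends Controller
{
    public function download(Mission $mission)
    {
        // Read from whichever disk is configured (local, gcs, s3, ...)
        // instead of hard-coding Google Cloud, and stream the file back
        // directly rather than redirecting to a signed URL.
        return Storage::disk(config('missions.disk', 'local'))
            ->download($mission->pbo_path, "{$mission->name}.pbo");
    }
}
```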
The upload/update process is monolithic. It needs to be simplified down to three core parts:

1. Receive the upload and store the PBO on the server's local disk.
2. Extract the PBO and validate the mission files.
3. Deploy the PBO and ZIP to the configured storage disk.
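A minimal sketch of that decomposition; every class name below is an assumption for illustration, not existing code:

```php
<?php

class Mission {}

interface Step
{
    public function handle(Mission $mission): void;
}

class ReceiveUpload implements Step
{
    public function handle(Mission $mission): void { /* store the PBO on local disk */ }
}

class ValidateMission implements Step
{
    public function handle(Mission $mission): void { /* extract the PBO, run the checks */ }
}

class DeployMission implements Step
{
    public function handle(Mission $mission): void { /* push PBO/ZIP to the storage disk */ }
}

class ProcessMissionUpload
{
    // Run the three core parts in order; each one stays independently testable.
    public function handle(Mission $mission): void
    {
        foreach ([new ReceiveUpload, new ValidateMission, new DeployMission] as $step) {
            $step->handle($mission);
        }
    }
}
```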
The config parser is also not 100% reliable: it breaks on unquoted strings in the mission.sqm and doesn't handle all pre-processor commands correctly. The loadout file checks should also look for any reference to ACRE.
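For illustration, a minimal sketch of tolerating the unquoted case, assuming simple `key = value;` lines; the real mission.sqm grammar (nested classes, arrays, pre-processor commands) is more involved, and the function name is made up:

```php
<?php

// Accept quoted strings, numbers, and bare unquoted strings, the case
// the current parser reportedly breaks on.
function parseScalar(string $raw)
{
    $raw = trim($raw);

    if (preg_match('/^"(.*)"$/s', $raw, $m)) {
        return str_replace('""', '"', $m[1]); // quoted string; "" escapes a quote
    }

    if (is_numeric($raw)) {
        return $raw + 0; // int or float
    }

    return $raw; // unquoted string: keep it instead of failing
}

var_dump(parseScalar('"Operation Example"')); // string(17) "Operation Example"
var_dump(parseScalar('3.5'));                 // float(3.5)
var_dump(parseScalar('ColorYellow'));         // string(11) "ColorYellow"
```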
For mission validation there should be a class for each type of check. For example, once the files are on the local disk ready for validation, the system should attempt to decode the config.hpp, mission.sqm, and description.ext. If decoding succeeds, it should run through a set list of validation classes, passing in the decoded objects; each class then performs its own validation, throwing an error if it fails.
Some validation classes would be needed to cover the individual checks, the ACRE loadout check above being one example. It would also be useful if each validation class could choose to be strict: a strict check that fails would fail the upload, while a non-strict one would let the upload continue but raise a warning for the mission testers.
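A minimal sketch of the check interface and the strict/non-strict runner; all names are assumptions, and the decoded files are represented as plain arrays:

```php
<?php

class CheckFailed extends \RuntimeException {}

interface MissionCheck
{
    /** Strict checks fail the upload; non-strict ones only warn the testers. */
    public function strict(): bool;

    /** @throws CheckFailed when the check does not pass */
    public function validate(array $config, array $missionSqm, array $description): void;
}

// Example: the ACRE loadout check mentioned above, as a non-strict check.
class AcreLoadoutCheck implements MissionCheck
{
    public function strict(): bool
    {
        return false;
    }

    public function validate(array $config, array $missionSqm, array $description): void
    {
        // Naive scan of the decoded config for any ACRE reference.
        if (stripos(json_encode($config), 'acre') !== false) {
            throw new CheckFailed('A loadout references ACRE.');
        }
    }
}

/** @param MissionCheck[] $checks @return string[] warnings for the testers */
function runChecks(array $checks, array $config, array $missionSqm, array $description): array
{
    $warnings = [];

    foreach ($checks as $check) {
        try {
            $check->validate($config, $missionSqm, $description);
        } catch (CheckFailed $e) {
            if ($check->strict()) {
                throw $e; // strict: abort the upload
            }
            $warnings[] = $e->getMessage(); // non-strict: continue, but warn
        }
    }

    return $warnings;
}
```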
Discord notifications should be condensed a bit to avoid the long URLs. Web notifications are non-existent at the moment but should also come back. Discord notifications should be handled through Laravel's notification system.
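A minimal sketch of a `MissionUploaded` notification with a custom Discord webhook channel; every class name and the `services.discord.webhook` config key are assumptions (a community Discord channel package could be used instead), and routing the web notification through the `database` channel is likewise an assumption:

```php
<?php

namespace App\Notifications;

use GuzzleHttp\Client;
use Illuminate\Notifications\Notification;

// Hypothetical custom channel that posts the payload to a Discord webhook.
class DiscordChannel
{
    public function send($notifiable, Notification $notification): void
    {
        (new Client)->post(config('services.discord.webhook'), [
            'json' => $notification->toDiscord($notifiable),
        ]);
    }
}

class MissionUploaded extends Notification
{
    private $missionName;
    private $url;

    public function __construct(string $missionName, string $url)
    {
        $this->missionName = $missionName;
        $this->url = $url;
    }

    public function via($notifiable): array
    {
        // Web notifications via the database channel, Discord via the
        // custom channel above.
        return ['database', DiscordChannel::class];
    }

    public function toDatabase($notifiable): array
    {
        return ['mission' => $this->missionName, 'url' => $this->url];
    }

    public function toDiscord($notifiable): array
    {
        // An embed keeps the Discord message short: the long download URL
        // sits behind the embed title instead of being pasted inline.
        return ['embeds' => [[
            'title' => "Mission uploaded: {$this->missionName}",
            'url'   => $this->url,
        ]]];
    }
}
```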
At some point I think this whole codebase could do with a big cleanup.