Distinguish between three types of requests: creating a new problem submission (POST), getting a problem submission's status (GET), and creating an execution request for a single test case, as ide.usaco.guide does (POST).
When a new problem submission is created, create a new DynamoDB row and return the row ID. Then use one execution lambda request to compile the code. Fetch all the test cases from S3 (exact mechanism TBD). For each test case, invoke another execution lambda. Then compare the hash of the execution lambda's output with the hash of the expected output to determine whether the answer is correct, and also check memory usage. Update the DynamoDB row as needed.
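The flow above can be sketched as a small orchestrator. This is a hedged illustration, not the actual lambda code: `compile_fn`, `run_fn`, and `update_row` are hypothetical stand-ins for the execute-lambda invocations and DynamoDB writes, and the result-dict shapes are assumptions.

```python
import hashlib

def grade_submission(compile_fn, run_fn, test_cases, update_row):
    """Compile once, run every test case, record verdicts.

    compile_fn/run_fn/update_row are placeholders for the execute
    lambda and DynamoDB calls described above (hypothetical names).
    """
    compiled = compile_fn()
    if not compiled["success"]:
        # store the compilation error and stop early
        update_row({"status": "compile_error", "message": compiled["message"]})
        return
    verdicts = []
    for case in test_cases:
        result = run_fn(compiled["binary"], case["input"])
        # compare hashes instead of full outputs (see the 6 MB limit below)
        output_hash = hashlib.sha256(result["stdout"].encode()).hexdigest()
        ok = (output_hash == case["expected_hash"]
              and result["memory_kb"] <= case["memory_limit_kb"])
        verdicts.append("AC" if ok else "WA")
    update_row({"status": "done", "verdicts": verdicts})
```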
For get-problem-submission-status requests, just fetch the data from DynamoDB and return it.
Things to keep in mind
AWS Lambda has a 6 MB response payload limit, and some USACO problems' outputs exceed that, so we'll need to use hashing to check whether an output is correct.
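A minimal sketch of the hash check: normalize trailing whitespace before hashing so cosmetically different outputs still match. The normalization rule here is an assumption; the real judge may compare more strictly.

```python
import hashlib

def output_hash(raw: str) -> str:
    """Hash an output for comparison against the expected-output hash.

    Strips trailing whitespace per line and a trailing newline first
    (an assumed normalization, not confirmed judge behavior).
    """
    normalized = "\n".join(line.rstrip() for line in raw.rstrip().split("\n"))
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Only the fixed-size hex digest ever crosses the Lambda boundary, so the 6 MB limit no longer matters for large outputs.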
DynamoDB charges by item size and is relatively expensive. We should avoid storing large text (inputs, outputs, etc.); if we ever need to, we should use S3 instead.
However, we do want to store source code, compilation errors, and small test case outputs for USACO Guide groups (to help users debug their code on small cases more easily). We should put a limit on how big the source code can be (64 KB is what Codeforces uses), and use gzip to compress any text we store.
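The size cap and gzip compression could look like this. The 64 KB figure comes from the note above; the function names are illustrative only.

```python
import gzip

MAX_SOURCE_BYTES = 64 * 1024  # the 64 KB cap mentioned above

def pack_text(text: str) -> bytes:
    """Enforce the size limit, then gzip before storing in DynamoDB."""
    raw = text.encode("utf-8")
    if len(raw) > MAX_SOURCE_BYTES:
        raise ValueError("source exceeds the 64 KB limit")
    return gzip.compress(raw)

def unpack_text(blob: bytes) -> str:
    """Reverse of pack_text, for serving stored text back to users."""
    return gzip.decompress(blob).decode("utf-8")
```

Source code and compiler output compress well (lots of repeated tokens), so this should meaningfully shrink the stored item size.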
Remaining Tasks
Goal
Add support for submitting and grading code for entire problems (e.g. USACO problems), to be used by features like USACO Guide Groups.
Rough guidelines
Execute Lambda
https://github.com/cpinitiative/online-judge/tree/serverless/execute
Submit Lambda
https://github.com/cpinitiative/online-judge/tree/serverless/submit