eisbuk / EisBuk

Manage bookings for ice skating lessons
GNU Affero General Public License v3.0

Debug setup 1 #946

Open ikusteu opened 5 months ago

ikusteu commented 5 months ago

This is a draft as it's not a real PR, but rather a dummy one to make changes and comments easier. I will push some debug setups and logs of the debugging process here.

ikusteu commented 5 months ago

Before the debug findings, a little context: I've set up a simple server program listening on localhost:3001, logging every incoming request and responding with a preset status code (default: 200, but it can be changed between requests), more here

note: I wrote it in Go as I've never learned to use Express.js and didn't want to bother with Node.js's (boilerplate-heavy) native http handling. However, the server is rather simple, and I've commented the code through and through so it's understandable to anybody not familiar with Go's standard library.
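
To make the sink's behaviour concrete: the actual program is the Go one linked above; the following is only a rough TypeScript equivalent of what's described (log every request, respond with a preset status code on localhost:3001). The `POST /__status/<code>` control endpoint is an assumption for illustration, not necessarily how the Go sink switches codes between requests.

```ts
import * as http from "http";

// Status code the sink answers with; 200 by default, switchable at runtime.
let responseStatus = 200;

const server = http.createServer((req, res) => {
  // Hypothetical control endpoint: POST /__status/<code> changes the preset status.
  const match = req.url?.match(/^\/__status\/(\d{3})$/);
  if (req.method === "POST" && match) {
    responseStatus = Number(match[1]);
    res.writeHead(200).end(`status set to ${responseStatus}\n`);
    return;
  }

  // Log the request line, then the body (Sentry envelopes arrive as POST bodies).
  console.log(`${new Date().toISOString()} ${req.method} ${req.url}`);
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    if (body) console.log(body);
    res.writeHead(responseStatus).end();
  });
});

server.listen(3001, () => console.log("web-sink listening on localhost:3001"));
```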

Furthermore, I've set up the Sentry client (in the Eisbuk app) to use localhost:3001 as __sentryDsn__, as that makes it easier to inspect the requests sent out, and using the web-sink we can simulate different response codes from the "sentry" service.
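
For reference, pointing the Sentry client at the sink only requires swapping the DSN. A minimal sketch, assuming the Node SDK and a placeholder key/project id (a DSN has the shape `protocol://publicKey@host/projectId`; only the host:port part matters to the sink):

```ts
import * as Sentry from "@sentry/node";

// "public" and "/1" are placeholders; the sink ignores them and simply logs the
// request before answering with its preset status code.
Sentry.init({
  dsn: "http://public@localhost:3001/1",
});
```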

ikusteu commented 5 months ago

2nd round of debugging: data trigger

I've created a data trigger (dataTriggerWithFailingSentry) to test whether failing Sentry communication results in the data trigger not being executed, or being rolled back on error.
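
The trigger itself isn't shown in this comment, so here's only an illustrative sketch of the shape such a trigger can take: a Firestore onCreate trigger that copies the new document to a destination collection and then talks to "Sentry" (i.e. the web-sink). The collection names and the exact Sentry calls are assumptions, not the PR's actual dataTriggerWithFailingSentry.

```ts
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";
import * as Sentry from "@sentry/node";

admin.initializeApp();

// Illustrative only: collection paths and Sentry usage are assumptions.
export const dataTriggerWithFailingSentry = functions.firestore
  .document("debugSource/{docId}")
  .onCreate(async (snapshot, context) => {
    // The data-trigger logic: copy the newly created document to a destination doc.
    await admin
      .firestore()
      .doc(`debugDest/${context.params.docId}`)
      .set(snapshot.data());

    // Communication with "Sentry" (here, the local web-sink). If the sink is off
    // or the DSN is bogus, this is where the failure surfaces.
    Sentry.captureMessage(`copied ${context.params.docId}`);
    await Sentry.flush(2000);
  });
```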

I've tested this with:

Web-sink on, res: 200

Nothing to report here: the trigger ran as expected, the new document was created, and the logs + web-sink output are consistent with expectations.

Web-sink on, res: 403

The trigger ran as expected, the document was copied to the destination, and the web-sink output was as expected. The functions logs did record the 403 response, but only as a warning:

Screenshot 2024-03-27 at 16 30 29

Web-sink off

With the web-sink off, we expect ECONNREFUSED (or a similar error) to be logged to the functions logs and the function to fail. The function logs do show the error, after the main part of the function (the data trigger logic) had executed, and they report the function being killed due to an unexpected error (all as expected):

Screenshot 2024-03-27 at 16 34 14

However, looking at the documents, it appears that no rollback took place: the trigger did write the document as expected (consistent with data-trigger writes not being transactional with the function's success, so a write that has already completed is not undone when the function later fails):

Screenshot 2024-03-27 at 16 32 58
ikusteu commented 5 months ago

Final setup: staging

I've created a setup so that the function can be deployed to production, allowing us to observe the same scenarios (errors thrown before/after the Sentry flush, and a non-existing DSN standing in for the Sentry server) in a real environment.

The function was deployed, and all the observed behaviour was the same as in the local env. An error thrown before/after the Sentry flush was reported by the Google Cloud console, and using a non-existing DSN, posing as the Sentry server, threw an error in the expected manner. However: none of the errors thrown prevented the result documents from being written by the data trigger.
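
For clarity, "before/after the Sentry flush" refers to where the test error is thrown relative to Sentry.flush(); roughly, the two variants look like this (the messages and function shape are illustrative, not the deployed code):

```ts
import * as Sentry from "@sentry/node";

// Variant A: the test error is thrown before Sentry.flush() ever runs.
export const throwBeforeFlush = async (): Promise<void> => {
  throw new Error("test error thrown before sentry flush");
  // await Sentry.flush(2000); // never reached
};

// Variant B: flush first (this is where a non-existing DSN surfaces as ENOTFOUND),
// then throw; the function is reported as failed either way, yet the documents
// written earlier by the data trigger stay in place.
export const throwAfterFlush = async (): Promise<void> => {
  Sentry.captureMessage("about to throw");
  await Sentry.flush(2000);
  throw new Error("test error thrown after sentry flush");
};
```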

GC console error reports:

Screenshot 2024-03-29 at 21 23 04

Screenshot of data trigger results (despite the ENOTFOUND error):

Screenshot 2024-03-29 at 21 22 36
ikusteu commented 5 months ago

With all of the tests performed, I don't see any relationship between Sentry errors and the data triggers producing DB mismatches.