ChorusOne / solview


solview: improvements from september / october 2021 #1

Closed · kucharskim closed this 3 years ago

kucharskim commented 3 years ago

This doesn't cover feedback from @ruuda yet. I need to go through his post.

ruuda commented 3 years ago

Also if you want to handle the exceptions instead of crashing the app, try this one weird trick to make your application more reliable.
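
A minimal sketch of catching at the top level and retrying, assuming a `main()` entry point (hypothetical name, not solview's actual structure):

    import logging
    import time

    logger = logging.getLogger(__name__)

    def main() -> None:
        ...  # placeholder for the real solview entry point (assumption)

    def run_forever() -> None:
        # Catch everything at the top level so a transient failure logs a
        # traceback and retries instead of killing the process.
        while True:
            try:
                main()
            except Exception:
                logger.exception("unhandled exception, retrying in 10 seconds")
                time.sleep(10)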

kucharskim commented 3 years ago

Just to be clear. Commit early, commit often. Long feedback from Slack is probably multiple commits, multiple pull requests. One step at a time.

kucharskim commented 3 years ago

@ruuda, is there an easy way to convert this to a nice-looking f-string:

    logger.info("Solview addresses: {}".format(", ".join(addresses)))
    logger.info("SPL addresses: {}".format(", ".join(spl_addresses)))
ruuda commented 3 years ago

> @ruuda, is there an easy way to convert this to a nice-looking f-string:
>
>     logger.info("Solview addresses: {}".format(", ".join(addresses)))
>     logger.info("SPL addresses: {}".format(", ".join(spl_addresses)))

logger.info(f"Solview addresses: {', '.join(addresses)}")
logger.info(f"SPL addresses: {', '.join(spl_addresses)}")
ruuda commented 3 years ago

Or with just concatenation:

logger.info("Solview addresses: " + ", ".join(addresses))
logger.info("SPL addresses: " + ", ".join(spl_addresses))
kucharskim commented 3 years ago

Feedback from @ruuda incorporated.

kucharskim commented 3 years ago

> Also if you want to handle the exceptions instead of crashing the app, try this one weird trick to make your application more reliable.

I didn't look into this yet.

ruuda commented 3 years ago

> > Also if you want to handle the exceptions instead of crashing the app, try this one weird trick to make your application more reliable.
>
> I didn't look into this yet.

What I would do for now: hook it up to Sentry.io and see how often it actually fails. Maybe it fails once a day but restarts and it's not an issue at all; in that case, don't waste any time on improving error handling. Probably at some point an error will pop up that does make sense to handle, and we can add logic for it at that time.
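
A minimal sketch of that hookup, assuming the sentry-sdk package; the DSN shown is a placeholder for the value from the Sentry project settings:

    import sentry_sdk

    # The DSN is a placeholder; the real value comes from the Sentry project.
    sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")

    # With the default integrations, any unhandled exception is now reported
    # to Sentry before the process exits, so failure frequency becomes visible.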

kucharskim commented 3 years ago

> What I would do for now: hook it up to Sentry.io and see how often it actually fails. Maybe it fails once a day but restarts and it's not an issue at all; in that case, don't waste any time on improving error handling. Probably at some point an error will pop up that does make sense to handle, and we can add logic for it at that time.

I think having an open issue for improvements, as a low-priority task, would be good enough for me. I don't have time to look into sentry.io and I don't like relying on third-party sites. Kubernetes has a restart counter, and looking at that counter would be good enough for me. We should have alarms for high restart counts of our services anyway, and getting logs is easy enough.
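
For reference, a minimal sketch of reading that restart counter with the official kubernetes Python client; the "default" namespace is an assumption:

    from kubernetes import client, config

    # Use config.load_kube_config() when running outside the cluster.
    config.load_incluster_config()

    v1 = client.CoreV1Api()
    # The "default" namespace is an assumption; solview's namespace may differ.
    for pod in v1.list_namespaced_pod("default").items:
        for status in pod.status.container_statuses or []:
            print(pod.metadata.name, status.name, status.restart_count)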