Closed acka47 closed 7 months ago
Won't fix this for API 1.0, I hope we have this covered for 2.0, though... Closing.
@dr0i As @fsteeg just said in the standup, this might be a thing we should consider for lobid 2.0, especially as more and more users and applications depend on it and we should guarantee a stable production service. What do you think?
We decided to create a status page. We need one for lobid.org (at status.lobid.org) with an overview of all services; I adjusted the issue title accordingly.
I don't think we need separate status pages for each service (status.lobid.org/resources etc.), but we will have a section for each service (status.lobid.org#resources).
I think it would make sense to hold a daily updated status covering about one week. I think these status infos make sense:

- alephXmlClobs
- API adjustments (if needed, e.g. JSON LD context, HTML, NWbib ...)

Wouldn't that be enough? @acka47, I don't know what this could look like, though.
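For illustration, a one-week status table like the one proposed above could be sketched as follows. The step names come from this thread; the plain-text rendering and the `empty_week`/`render` helpers are hypothetical, not an existing lobid component:

```python
from datetime import date, timedelta

# Step names taken from this thread; the storage format
# (step -> ISO date -> status string) is an assumption.
STEPS = ["alephXmlClobs", "API adjustments"]

def empty_week(today=None, days=7):
    """Build an empty one-week status table: step -> ISO date -> status."""
    today = today or date.today()
    dates = [(today - timedelta(days=i)).isoformat() for i in range(days)]
    return {step: {d: "n/a" for d in dates} for step in STEPS}

def render(table):
    """Render the status table as plain text, one row per step."""
    dates = sorted(next(iter(table.values())))
    lines = ["step:            " + "  ".join(dates)]
    for step, statuses in table.items():
        row = "  ".join(statuses[d].ljust(10) for d in dates)
        lines.append(f"{step:<16} {row}")
    return "\n".join(lines)

week = empty_week(date(2016, 5, 1))
week["alephXmlClobs"]["2016-05-01"] = "OK"
print(render(week))
```

A daily cron job could fill in one column per day and publish the rendered table.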
Of course we also have to monitor the other lobid services.
Furthermore, as the hbz's internet ~collection~ connection went down twice in recent weeks, we should also have a status for this if possible. This would mean that the status page needs to be hosted on a non-hbz server, which makes things more complicated. But I think this is how one should set up a status server anyway.
> collection went don't

"connection went down" I guess, you fuzzy ;)
I think when our connection breaks we don't need a status info page anyway? I mean, if you see the status page is gone, you know something's wrong, right? As we already have mails informing us about what we want to see on the status page, the todo is to just gather those mails, make a simple table and publish that as a website.
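Gathering the mails into a simple table could be sketched like this, assuming the status mails are collected in an mbox file (the path, the `mail_rows`/`to_html_table` helpers, and the two-column layout are all assumptions for illustration):

```python
import mailbox  # stdlib; assumes the status mails are exported as an mbox file

def mail_rows(mbox_path):
    """Extract (date, subject) pairs from the gathered status mails."""
    return [(msg["Date"], msg["Subject"]) for msg in mailbox.mbox(mbox_path)]

def to_html_table(rows):
    """Turn the rows into a simple HTML table that can be published as-is."""
    cells = "".join(f"<tr><td>{d}</td><td>{s}</td></tr>" for d, s in rows)
    return ("<table><tr><th>Date</th><th>Status mail</th></tr>"
            f"{cells}</table>")

# Example: to_html_table(mail_rows("status-mails.mbox")) -> HTML for the page
```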
The status page is meant for external users, isn't it? Otherwise we could simply read our mails ;)
We have this https://lobid.org/apihealth/ . Is it enough - what's missing?
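A minimal external probe of that endpoint could look like this sketch; the URL is the one mentioned above, while the `classify`/`check` helpers and the simple up/down mapping are assumptions:

```python
from urllib.request import urlopen
from urllib.error import URLError

HEALTH_URL = "https://lobid.org/apihealth/"  # endpoint mentioned in this thread

def classify(status_code):
    """Map an HTTP status code to a coarse service state."""
    return "up" if 200 <= status_code < 300 else "down"

def check(url=HEALTH_URL, timeout=10):
    """Probe the health endpoint; network failures also count as 'down'."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except URLError:
        return "down"
```

Run from a non-hbz host (per the hosting concern above), this would also catch the hbz connection going down.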
I think this should be enough for now. Let's close this ten-year-old issue.
The current workflow being:
changes/additions of records via Aleph client ->
daily generation of AlephXML clobs -> files to webserver -> copy into file system -> transformation flux -> Mysql_DB -> dump into HDFS -> conversion to JSON & indexing into Elasticsearch -> API adjustments (if needed, e.g. JSON LD context, HTML, NWbib ...)
We have to set up monitoring for (all? some?) of these steps.
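Since most steps in that workflow run daily and leave an artifact behind, one simple monitoring approach is a freshness check on each step's output. This is a sketch under assumptions: the artifact paths, the 24-hour threshold, and the `is_fresh`/`check_steps` helpers are all hypothetical placeholders for the real setup:

```python
import os
import time

# Hypothetical artifact paths for some of the pipeline steps above;
# these would have to be adjusted to the real file locations.
ARTIFACTS = {
    "alephXmlClobs": "/data/aleph/clobs/latest.xml",
    "HDFS dump": "/data/hdfs/dump.done",
}

def is_fresh(mtime, now, max_age_seconds=24 * 3600):
    """A daily step counts as healthy if its artifact changed within a day."""
    return (now - mtime) <= max_age_seconds

def check_steps(artifacts=ARTIFACTS, now=None):
    """Return step -> 'OK' / 'STALE' / 'MISSING' from file modification times."""
    now = now if now is not None else time.time()
    result = {}
    for step, path in artifacts.items():
        if not os.path.exists(path):
            result[step] = "MISSING"
        elif is_fresh(os.path.getmtime(path), now):
            result[step] = "OK"
        else:
            result[step] = "STALE"
    return result
```

The resulting dict could feed directly into the status table above; steps without a file artifact (e.g. the Elasticsearch indexing) would need a different probe, such as comparing document counts.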