Open pbonte opened 1 year ago
Please provide a status update about this challenge. Every ongoing challenge needs at least one status update every 2 weeks. Thanks!
Main work on the LDES Semantic Web Thing is completed. Now integrating the Aggregator implementation as a data source.
Please provide a status update about this challenge. Every ongoing challenge needs at least one status update every 2 weeks. Thanks!
Quick update. What we have so far:

- DAHCC data in multiple Solid Pods
- Kush's Aggregator analyses the data and publishes the results as an LDES in a new Solid Pod
- A SPARQL query engine over HTTP answers SPARQL queries, using the LDES data from the Aggregator Pod

Todo:

- Check/debug the Semantic Web Thing implementation for LDES
- Integrate this Semantic Web Thing with a demo Dynamic Dashboard
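As a rough sketch of the query step above: the client talks to the engine over HTTP following the SPARQL 1.1 Protocol. The endpoint URL and the vocabulary IRIs below are illustrative assumptions, not the actual deployment.

```typescript
// Sketch: building a SPARQL 1.1 Protocol GET request for the HTTP query
// engine. Endpoint and predicate IRIs are placeholders (assumptions),
// not the real challenge deployment.

function buildSparqlGetUrl(endpoint: string, query: string): string {
  const url = new URL(endpoint);
  url.searchParams.set("query", query); // protocol's `query` parameter
  return url.toString();
}

const query = `
SELECT ?time ?activityIndex WHERE {
  ?obs <https://example.org/hasTimestamp> ?time ;
       <https://example.org/hasActivityIndex> ?activityIndex .
} ORDER BY ?time`;

const requestUrl = buildSparqlGetUrl("http://localhost:8080/sparql", query);
console.log(requestUrl);
// A client would then issue:
//   fetch(requestUrl, { headers: { Accept: "application/sparql-results+json" } })
```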
Please provide a status update about this challenge. Every ongoing challenge needs at least one status update every 2 weeks. Thanks!
Getting the reverse proxy to work, as the Dynamic Dashboard only allows HTTPS connections. Some configuration issues still remain.
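For reference, a TLS-terminating reverse proxy in front of the HTTP service could look like the following minimal nginx sketch. All hostnames, ports, and certificate paths are placeholders, not the actual configuration from this challenge.

```nginx
# Minimal sketch: terminate HTTPS (required by the Dynamic Dashboard)
# and forward to the local HTTP service. Paths/ports are assumptions.
server {
    listen 443 ssl;
    server_name dashboard.example.org;

    ssl_certificate     /etc/ssl/certs/example.crt;
    ssl_certificate_key /etc/ssl/private/example.key;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```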
As stated in the instructions, a challenge is only closed after the solution has been approved and the report has been written. So I assume that you want your solution to be reviewed?
Sorry, must've pressed the wrong button. ...
Is the required screencast available somewhere?
There is one now :-D https://github.com/SolidLabResearch/LDES-Semantic-Web-Thing/blob/main/README.md#screencast
Please provide a status update about this challenge. Every ongoing challenge needs at least one status update every 2 weeks. Thanks!
Technical decisions:
Further actions:
Lessons Learned:
@svrstich Thanks! For the technical decisions, can you be more specific? "similar to" is too vague. And what does "Enhanced integration with Challenge 84 (Aggregator)" mean in terms of lessons learned?
Bugger ... "Enhanced integration with Challenge 84 (Aggregator)" should have been under the header: "Further actions".
As for the technical decisions:
@svrstich Can you provide clear copy-paste instructions to get the demo running? Something similar to this. At the moment the instructions are too vague. For example "A running Aggregator service, defined by Challenge #84", it's not clear what exactly needs to happen.
@pheyvaer Do you really want me to copy the installation instructions from Challenge 84 here as well? I thought it would be more appropriate to refer to Challenge 84, so that when something changes on that side, the documentation remains up-to-date. The same applies to the Comunica SPARQL Link Traversal Engine, as far as I'm concerned.
I'll add the running instructions after the compiling instructions, as I see that those are missing.
Yes, the instructions have to be concrete.
A running Aggregator service, defined by Challenge #84: https://github.com/argahsuknesib/solid-stream-aggregator
This is not concrete. For example, what settings are required for the aggregator?
when something changes on that side, documentation remains up-to-date?
Every challenge should refer to a specific version or commit. That way we always refer to a working version. Even if the code or documentation changes at later time.
The same applies to the Comunica SPARQL Link Traversal Engine, as far as I'm concerned.
The README says
Example usage ...
Does that mean that we can just copy-paste this command or do we need to change it? If we need to change it what do we need to change? And why isn't the exact command already in the README in that case?
Pitch
Aggregations and data summaries provide a succinct view on larger datasets. However, to be useful for non-technical users, visualisation is needed to easily interpret the summarised data. Aggregations and summaries from the DAHCC dataset, which contains data streams describing the behaviour of various patients, will be used for this purpose. Specifically, a visualisation of the patients' activity index will be used. This visualisation connects directly to the streaming aggregator defined in Challenge #84, while the Semantic Dashboard will be used for the visualisation itself.
Desired solution
The generic visualisation components should be able to connect to any aggregator service and allow visualising the results in the semantic dashboard. The latter allows specifying in a generic manner how to present the visualisation itself. To enable this, a mapping of the aggregator's internal data to the data format used by the semantic dashboard (which is built around Web Thing) is necessary.
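The mapping described above could be sketched as follows. The shapes of both the aggregator result and the Web Thing-style property update are illustrative assumptions; the actual interfaces of the Aggregator and the semantic dashboard may differ.

```typescript
// Hypothetical sketch: mapping an aggregator result (assumed here to be a
// timestamped activity index) onto a Web Thing-style property update for
// the dashboard. All field names and shapes are assumptions for
// illustration, not the real interfaces.

interface AggregationResult {
  timestamp: string;     // ISO 8601 timestamp of the aggregation window
  activityIndex: number; // the aggregated activity index value
}

interface ThingPropertyUpdate {
  messageType: "propertyStatus";   // WebThings-style property message
  data: Record<string, number>;    // property name -> new value
  time: string;
}

function toThingUpdate(result: AggregationResult): ThingPropertyUpdate {
  return {
    messageType: "propertyStatus",
    data: { activityIndex: result.activityIndex },
    time: result.timestamp,
  };
}

console.log(toThingUpdate({ timestamp: "2023-05-01T12:00:00Z", activityIndex: 0.73 }));
```

A mapping like this keeps the aggregator and the dashboard decoupled: only this adapter needs to change when either side's data format evolves.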
Acceptance criteria
The visualisation should be able to:
Scenarios
This is part of a larger scenario.