w3c / sdw

Repository for the Spatial Data on the Web Working Group
https://www.w3.org/2020/sdw/

BP 10 http://www.bbc.co.uk/ontologies gone #1328

Closed PeterParslow closed 2 years ago

PeterParslow commented 2 years ago

http://www.bbc.co.uk/ontologies has been "temporarily unavailable" for a long time now. Sadly, their useful ontologies seem to have gone away. See unanswered user questions from December 2021 at https://www.bbc.co.uk/blogs/internet/entries/5d371e1b-54be-491f-b8ee-9e354bafb168

I suggest we remove the whole example from Best Practice 10. Unless we want to be a bit cheeky and say something like:

"The BBC recognised the 'overly strong nature' of owl:SameAs and defined their own predicate to indicate 'that something is the same as something else, but in a way that is slightly weaker than owl:sameAs. Its purpose is to connect separate identities of the same thing, whilst keeping separation between the original statements of each'

PeterParslow commented 2 years ago

With examples from Dutch Government, OS Ireland, and BBC all disappearing in the past few years, are we still confident that the things they illustrated are still "best practices"?

FransKnibbe commented 2 years ago

> With examples from Dutch Government, OS Ireland, and BBC all disappearing in the past few years, are we still confident that the things they illustrated are still "best practices"?

That is a very good question. It would be enlightening to know why services have been discontinued. Were they considered failed experiments? Are they replaced with even better practices?

PeterParslow commented 2 years ago

I am pushing for a clear statement from my own employer, but generally it seems that IT & web service colleagues, and the (less technical) people in "products" no longer see the "Linked Data" pattern as current practice.

There is some enthusiasm still for "linked data" approaches to controlled vocabularies, and some desire to find a maintainable way to assert that some "bit of data" is linked somehow to "some other bit" - which to me is exactly what "linked data" was designed to do! I personally suspect that it was the strong identification with SPARQL and RDF that has put people off the whole package. There is some hope that the OGC API Features will provide a 'better practice' if people actually adopt the 'one web page per spatial thing' approach, with reasonably persistent addresses and structured metadata in the HTML and / or GeoJSON.

But I have no information about the BBC, let alone the Dutch & Irish reasons.

In particular, we have not found any customer sections actually wanting to use our linked data, and find it difficult to recruit people with the skills to maintain it.

oldskeptic commented 2 years ago

I concur with @PeterParslow's assessment of the state of things. I would add that these projects were often run by champions with a finite tenure, and there is often a dichotomy between the published best practice and the sometimes ad-hoc nature of core business operations.

If the examples are no longer maintained, it is best to remove them from the BP, since they evidently gained no traction with the host institution.

However, I do think this work is still valuable. LOD / LD is the only framework that provides a path forward for data management at scale.

PeterParslow commented 2 years ago

This from the relevant strategic product manager:

Alongside that I would say:

(I reckon people would guess "oldskeptic" might be me!)

prushforth commented 2 years ago

Semantics on the web are undeniably important, especially for standardized meaning / processing of HTML elements. However, I've always felt a little bit uncomfortable with the notion of the Semantic Web, and I have never invested enough time in learning its precepts to actually identify contexts in which it's both operational and useful. Do we really want AI to be able to read and deduce stuff on our behalf? Maybe, but perhaps that is a few social and technology leaps beyond where we actually are. So including semantic web technology in spatial data on the Web best practices seems at best to be anticipating something that may never happen. Maybe at this stage we can improve by not adding things, but by taking things away, so +1 to the original suggestion of this issue.

On the other hand, linking Web pages together via <a href>, <img src> and so on has been demonstrably successful (cf. the Web). We definitely need to encourage this type of approach when implementing spatial data on the Web. I would offer our implementation of linking Web map resources together according to the "culture" of <a href>, <img src> and so on as a potentially beneficial direction to travel in.

situx commented 2 years ago

I think several things have hindered the application and implementation of semantic web technologies in GIScience so far:

With that being said, I can also see more institutions trying out linked data technologies recently. People often see the adoption of linked data technologies as synonymous with having to deal with SPARQL and RDF, whereas the same does not seem to be true of (geospatial-aware) SQL databases. People happily use SQL databases, but these databases are often masked by middleware web services. To be successful, I believe the same needs to be done for SPARQL endpoints. If the important data (e.g. feature collections), i.e. the data that would normally be provided by the mapping agency, can be exposed using the OGC API Features service, but people can still access more (related) data using a SPARQL endpoint, we can find a middle ground between traditional GIS people who "just want to download their data" and more sophisticated apps which make use of possibly even more than one SPARQL endpoint to solve a task.
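To sketch the "more (related) data" side: the GeoSPARQL vocabulary is standard, but the endpoint and the ex:Building class below are placeholders; this is the kind of query such a middleware could hide behind an OGC API Features facade:

prefix geo: <http://www.opengis.net/ont/geosparql#>
prefix ex: <http://example.org/def/>

# fetch features of some (hypothetical) type together with their WKT
# geometries - what a "features" request would translate to
select ?feature ?wkt where {
    ?feature a ex:Building ;
             geo:hasGeometry ?geom .
    ?geom geo:asWKT ?wkt .
}
limit 100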

FransKnibbe commented 2 years ago

With partial knowledge of the Dutch situation, my impression is that the power of plain Linked Open Data, without embellishments like SPARQL or reasoning, is underestimated. Just assigning persistent IRIs to resources can be immensely fruitful. I think the trick is to publish datasets in such a way that they are manageable for consumers that are following their noses (a trail of hyperlinks). Big LOD datasets become consumable if they exist in data catalogs, if they have extensive metadata and if they are partitioned in a smart way.

As a starting point, one should be able to easily find a dataset in a dataset catalog. Data catalogs of course can cascade. A data catalog of an organisation could be included in a national catalog, which in turn could be included in an international catalog.

Each dataset that can be found in a catalog should have helpful and up-to-date metadata, providing enough information on how to use and access the dataset. National spatial datasets can be quite big, so smart partitioning is often required for consumers to be able to make the jump from metadata to the dataset itself. Thankfully, spatial datasets should always be able to have at least some kind of spatial partitioning (e.g. a hierarchy of provinces, municipalities and postal codes to access manageable chunks of data), perhaps next to other kinds of partitioning.
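For illustration, the follow-your-nose walk from catalog to dataset to download is short in DCAT terms (the catalog IRI below is a placeholder):

prefix dcat: <http://www.w3.org/ns/dcat#>
prefix dct: <http://purl.org/dc/terms/>

# walk from a (hypothetical) catalog to its datasets and on to the
# download link of each distribution
select ?dataset ?title ?downloadURL where {
    <https://example.org/catalog> dcat:dataset ?dataset .
    ?dataset dct:title ?title ;
             dcat:distribution ?dist .
    ?dist dcat:downloadURL ?downloadURL .
}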

I think good hierarchies in catalogs, metadata and datasets could make LOD both simple and powerful. But it is not something I have seen happening in the Netherlands.

rob-metalinkage commented 2 years ago

Linked Data as a new technology and architecture was always only an augmentation, not a disruptor, I think. SPARQL endpoints, like SQL or WFS, are too hard to use and make robust, so they are a flawed approach. We are left with the same problem regardless of transport and query platform: how do you describe your data and queries?

Linked Data allows "follow your nose" exploration of data - but it still needs to be an adjunct to properly described data, described in the same way across datasets, data packages, APIs, queries and follow-your-nose navigation. So we are still left with the IRI-based approach as the only option on the table, though probably not yet "good enough practice".

The way forward we should be focussing on, IMHO, is making the OGC APIs (as the data access architecture du jour) support links to descriptions. That is not yet established, but the technology platform is there in JSON-LD, and it is definitely gaining ground in spite of poor support - i.e. there is no reusable JSON-LD context for most component data models, e.g. DCAT, GeoSPARQL, SKOS. Schema.org, FIWARE, etc. publish big JSON-LD contexts - probably unstable due to scope. If and when this becomes an ecosystem of stable reusable modules (following typical technology-at-scale evolution), it is probably "good enough" practice.

So perhaps we should consider holding off the "best practice" refresh until OGC APIs with JSON-LD have established usage patterns - there might not be a good enough option until then?

PeterParslow commented 2 years ago

Don't underestimate the work involved in keeping URIs persistently dereferenceable. The challenges of "web site migrations" and "domain migrations" are quite well understood, but eventually there are major technological shifts, such as the one we're trying to manage at OS: from a triple-store-based "linked data platform" to the use of OGC API Features as our "master" source of data & information about the data.

Redirects and proxies can work well, but I'm not convinced they provide a wise approach when handling national sets of millions of feature instances (data items, spatial things). The UK government used to advise that when designing URI sets one should consider persistence in the sense of "for at least ten years" - so perhaps it's no coincidence that after data.ordnancesurvey.co.uk has been pretty stable for twelve years, it's likely to move in the next couple. The "data." pattern common in linked data 10-15 years ago is used rather differently in OGC API Features, where it is sometimes given as an example: "data.example.org" is a root, but the URI for an individual feature instance would continue with /collections/{collectionId}/items/{itemID}. In our case, we're moving from data.os.uk to api.os.uk/features (with a version number) & from /id/{itemID} to the OGC pattern.

PeterParslow commented 2 years ago

In over ten years of having data published in this way, we have very little evidence of anyone traversing the links. The one customer department that we believe has been using the links took a full download of our triple store in order to traverse them locally - crossing the web for each object-property link or object-object link in datasets of millions is still not an attractive architectural option. At most it's useful in the exploratory stage of a project, when 'seeing what's out there'. At least, that's my opinion (hence my comment about the 'old skeptic' label!)
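To make the architectural point concrete: in SPARQL 1.1 terms, remote traversal means a federated SERVICE call per hop, as in this sketch (endpoint and predicates hypothetical), and it is that per-binding round trip that does not scale to millions of links:

prefix ex: <http://example.org/def/>

# each binding of ?region can trigger a round trip to the remote
# endpoint - fine for exploration, prohibitive across millions of links
select ?feature ?regionName where {
    ?feature ex:withinRegion ?region .
    service <https://other.example.org/sparql> {
        ?region ex:officialName ?regionName .
    }
}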

namedgraph commented 2 years ago

@PeterParslow just because we've solved Linked Data publishing doesn't mean we've solved consumption just yet.

PeterParslow commented 2 years ago

@namedgraph : I hardly think "we've solved Linked Data publishing", speaking as a data publisher: we have struggled for years to find any individuals who knew how to do it, after the company that set it up for us decided that "linked data publishing" wasn't for them any more.

And the scale & sustainability issues I mention are all publisher issues. Of course, if there was strong customer demand, then the skills & resources to publish would (eventually) increase to fill the gap. Or perhaps the preferred design patterns have changed?

namedgraph commented 2 years ago

Domain changes are not really Linked Data problems.

Otherwise I think it's pretty straightforward: put RDF data in a triplestore and a server in front of it that translates REST access to SPARQL. For example, Processor (disclaimer: I'm the author).
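The simplest form of that translation maps a dereference onto a DESCRIBE; a minimal sketch (the URI is illustrative, and this is not necessarily how Processor maps requests):

# a GET on https://data.example.org/id/feature/123 is answered by running
# this against the triplestore and returning the resulting graph in the
# content-negotiated format (Turtle, JSON-LD, ...)
describe <https://data.example.org/id/feature/123>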

namedgraph commented 2 years ago

@PeterParslow does https://environment.data.gov.uk/ actually serve Linked Data? E.g. https://environment.data.gov.uk/catchment-planning/WaterBody/GB30535397 only dereferences as HTML. The SPARQL endpoint is there, but no query examples are available.

KoalaGeo commented 2 years ago

@namedgraph we serve linked data (https://data.bgs.ac.uk/) but don't yet offer a public SPARQL endpoint, although we do use one internally. No one has asked us for a SPARQL endpoint yet....

We're looking at implementing https://github.com/surroundaustralia/ogcldapi to offer OGCAPI-Features + SPARQL

Geoscience Australia have a couple of test instances up:

  1. http://floods.surroundaustralia.com/
  2. http://asgs.surroundaustralia.com/
PeterParslow commented 2 years ago

"does https://environment.data.gov.uk/ actually serve Linked Data? " - I know who provides the metadata service behind the Defra Data Portal, but I'm not sure who runs the data platform, so I wouldn't know who to ask.

The front page says "APIs built on Linked Data", which could imply they don't serve anything that they see as "Linked data".

oldskeptic commented 2 years ago

TL;DR - If the institution can't follow through on the best practice, let's remove it.

I'm getting on my soap-box this morning.

The semantic web, in its many incarnations, is a hard sell because it deals with what-it-is instead of what-does-it-look-like. I'm aggressively paraphrasing comments by @philarcher at ldg14 from memory, but there has been a large influx of web programmers in the past decade. Their frame of reference is what is on the screen, with a focus on the immediate problem. Interoperability isn't their concern; you just fix that by buying a service API somewhere.

This shouldn't be a problem, but our IT organizations are focused on desktop support and have lost their systems-oriented[?] staff, which makes it hard for a champion to push systemic changes. It also results in bizarre conversations where public query[discovery] mechanisms like SPARQL/GraphQL will be installed "over my dead body" because of security concerns, while we push EDI transactions over plaintext FTP because "it's always been that way"[1].

The product manager quoted by @PeterParslow is entirely right: when ESRI/QGIS/Spreadsheets are the only tools available, you are going to download the whole thing just to find what you are looking for. I have also been told by one national library representative that if I expect to use their data, I need to download their entire 2TB catalog (since it isn't that big?!). There is a bit of a chicken-and-egg problem here.

I've built and sold a few services using SemWeb GeoSPARQL technologies. From personal experience, the customers care about the product, not that it uses linked open data. The truth is that most of the work here appears as hair-splitting to non-specialists, including web programmers; however, it is important and needed:

A limited analogy is email. Every institution used to have a sendmail guru. Now we've outsourced that to a limited number of providers that deal with a twine-ball of standards that are madness to normal people. Yet, most of the time, mail just works. End consumers may want GeoJSON and spreadsheets, but we need complex semantic web specs to keep the lights on and serve them the data.

The URI endpoints make sense as identifiers and as a way to synchronize specific resources you care about. They also allow me to reference you directly, to agree and disagree with you. "Traversing the links" is really shorthand for "I'm browsing Wikipedia", which was a great demo at the time but doesn't work for search or at scale. A lot of places still run with a few ESRI licenses saving to a shared drive, but I don't see this as a sustainable solution.

Edit: [1] The specific institution that I had in mind for the previous paragraph is not @PeterParslow's. Apologies for the confusion.

namedgraph commented 2 years ago

@oldskeptic why is the Knowledge Graph (aka Semantic Web, Linked Data) a hard sell in the public sector if it's a growing enterprise market?

oldskeptic commented 2 years ago

@namedgraph A Knowledge Graph does not necessarily mean the Semantic Web or Linked [Open] Data.

There's a lot of term abuse out there; when people say "Knowledge Graph" I assume some ad-hoc NoSQL until I read the fine print.

namedgraph commented 2 years ago

OK fine if you want to nitpick, but even "RDF Knowledge Graph" specifically is growing as a market.

KoalaGeo commented 2 years ago

It's not, according to users - https://www.ordnancesurvey.co.uk/documents/geospatial-standards-report.pdf , pp. 59-60 give the results of a user survey of standards in use or being considered for future use.

Until the big software vendors are on board, IMO it's going to be niche for most GIS professionals for a long time.

namedgraph commented 2 years ago

I'm talking mostly about Fortune 500 companies, pharma and life science companies, banks, fintech startups and the like. See who's using RDF/SPARQL: http://sparql.club

If there's something in the public/geospatial sector that makes RDF tech a hard sell, it does not mean the problem is in the tech.

PeterParslow commented 2 years ago

I have kicked off a rich thread of discussion. I agree with @oldskeptic's conclusions, but not with the way he's characterising my input or my working environment (apologies, friend!):

"public query[discovery] mechanisms like SPARQL/GraphQL will be installed "over my dead body" because of security concerns while we push EDI transactions over plaintext FTP because "it's always been that way"." - this is definitely not what's happening in the publicly owned company where I work. We are flat out implementing OGC APIs, including query capability, because our understanding of our customers is that they are far more likely to employ Javascript programmers who can handle Open APIs than they are to find SPARQL or GraphQL writers.

"when ESRI/QGIS/Spreadsheets are the only tools available, you are going to download the whole thing just to find what you are looking for": again, not the environment our "download the whole thing" customers work in - those are/were major government departments crunching a lot of their own data (national statistics, national environment & agriculture). They have their own "mainframe" data processing facility, which may well embed Knowledge Graph tech inside it - but that doesn't mean they want to pause & call out to the web everytime they want to find our contribution (e.g. the official name of a voting area, or the boundary between two such things).

@namedgraph : I wouldn't see SPARQL Club's analysis of job ads as strong evidence of a growing tech. Here's this from an advocate https://www.ontotext.com/blog/the-semantic-web-20-years-later/, seeing "knowledge graph" as an evolution of the semantic web of 20 years ago.

Perhaps "linked data" is analogous to Linux; for decades it was going to be the "next big thing", and people asked "what happened to it" - and then realised it was powering about half of the web, just invisibly behind the scenes. Not sure how that works when we're talking about (among other things) a publishing format / model....

rob-metalinkage commented 2 years ago

clients and data sources both need to be present - the "Linked Data Web" is not operationally useful and the clients aren't there - so it's hard to really say if it will work or not... we need to build client capabilities into potentially useful tools that can work with temporary data, and wait for the next cycle of "oh dear, we made a lot more fragile API-based identifiers".

Perhaps the future will be ubiquitous AI based "sameAs" sniffers in clients able to join the dots for the growing cloud of different versions of things, able to cope with different levels of LinkedData-ness.

prushforth commented 2 years ago

@oldskeptic:

> it is one of many interpretations of the inter-provincial boundaries in a dispute that dates back to before the BNA Act. Costly business and policy decisions have already been made based on different maps of the boundaries. What happens when a geocoder determines sales tax or residency using this geometry?

Not all maps are general purpose, and if you have a specific application in mind you are wise to identify data that fits the purpose, for which search engines are the usual tool. For example, we created an application that details the evolution of the territory of Canada, so perhaps that is a more suitable search result, depending on the use case.

> • What you see isn't what you get. Looking at the representation of Newfoundland and Labrador in @prushforth's app,

This application was offered as an example of link relations, BP 10 being the subject of this thread. The Web is founded on links, and the application makes use of what we interpret HTML-standard link relations and targets to mean in the context of linked geospatial data (maps).

PeterParslow commented 2 years ago

See also BP 12, which includes

"overly small pieces of data are inconvenient to use, too. Data should be packaged in lumps that are convenient to work with. An approach where very small, fine-grained units of information are published that require further HTTP requests to get the related information sufficient to determine context is not useful;"

chris-little commented 2 years ago

@PeterParslow But the other way around - doing some general searches to narrow down to an appropriate context and then retrieving just the data that you require, probably in a small lump - is a good approach, I hope.

PeterParslow commented 2 years ago

In which case, Chris, we may want to revise that paragraph at BP 12!

PeterParslow commented 2 years ago

Ordnance Survey have announced that their (our) ontologies at data.os.uk will be turned off this year.

PeterParslow commented 2 years ago

Analysis of missing examples, for discussion in the group:

BBC I believe the BBC has moved away from the system they were using, built on semantic web technologies, and all the relevant staff have left. This is only second-hand information.

We only use the BBC ontology as one of three examples of alternatives to owl:sameAs; suggest simply removing this example.

However, it is an example of a publisher dropping linked data technology and losing the skills for it.

OS Ireland Their site has been "under maintenance" for many months. Trawling their blogs & web, I guess that it’s gone: replaced by https://www.geohive.ie/ (a more ‘traditional’ portal) which seems to be built on ArcGIS.com. Again, an example of a publisher dropping linked data technology, in this case because their system supplier does not easily support it.

We only use it in EXAMPLE 59 in 13.7. Suggest just dropping this example, leaving only the CSIRO one.

Ordnance Survey, GB Linked data for features. OS's intention is to move from this to an OGC API Features service, which won't necessarily deliver any "linked data", and almost certainly won't maintain the URI identifiers. These URIs are not likely to work beyond 2022.

This is another example of a publisher moving away from publishing linked data, in this case due to lack of skills within the organisation & its customers.

However, at present, only the redirect from data.os.uk to data.ordnancesurvey.co.uk has gone, so it's only necessary to make that one change.

Note: data.ordnancesurvey.co.uk is used 15 times as an identifier (and once discussing the linked data API). At present, the feature identifiers still work, but in each case (two features) the geometry identifiers have changed - because the geometry of the features has changed:

North Devon (https://data.ordnancesurvey.co.uk/id/7000000000022933): from 22933-4 to 22933-338
City of Edinburgh (https://data.ordnancesurvey.co.uk/id/7000000000030505): from -10 to -117

This may be worth mentioning - I guess it's not unusual for features to be more persistent than their properties!
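It also suggests consumers should dereference the feature to find its current geometry rather than pinning a versioned geometry URI; a hedged sketch, using the generic GeoSPARQL property rather than OS's own geometry ontology:

prefix geo: <http://www.opengis.net/ont/geosparql#>

# resolve the *current* geometry of the persistent feature URI instead of
# hard-coding a geometry identifier that may be retired
select ?geometry where {
    <https://data.ordnancesurvey.co.uk/id/7000000000022933> geo:hasGeometry ?geometry .
}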

ontologies We also reference two of OS's four ontologies: admingeo (six references) and the geometry ontology (two references).

Regarding the four ontologies, OS has different ideas for the future:

Netherlands This is the 'success story', sticking with linked data.

However, one of the examples we use (https://brt.basisregistraties.overheid.nl/top10nl/id/gebouw/102625209, used three times) returns an HTTP 404. @lvdbrink @jvanulde: perhaps you can tell us why?

The most used example, http://bag.basisregistraties.overheid.nl/bag/id/pand/0363100012169587, still works (and is hopefully still Anne Frank’s house!), but the WFS that we use (EXAMPLE 16) has moved (from http://geodata.nationaalgeoregister.nl/bag/wfs to https://service.pdok.nl/lv/bag/wfs/v2_0) & now returns XML conforming to a different schema.

Suggest: fix the WFS link & returned XML

Also, https://monumentenregister.cultureelerfgoed.nl/monuments?MonumentId=4296, given as another URI for Anne Frank's house (BP 1), doesn't work - they've changed pattern, so now it's https://monumentenregister.cultureelerfgoed.nl/monumenten/4296

(An example of changing 'good practice' in API / URI design?)

Suggest: fix this URI

To discuss

situx commented 2 years ago

Thanks @PeterParslow, I will incorporate this into the presentation for the discussion.

rob-metalinkage commented 2 years ago

Interesting - I doubt any non-Linked-Data systems will have persisted in this timeframe either - and in the one case the LD stayed stable and the API approach broke - so we need to be careful about what we think we are measuring and what conclusions can be reached.

There are quite a few more recent moves towards JSON-LD in various fora - which implies URI referencing at least as an option. I think RDF/XML and SPARQL (the old-school semantic web idea) can be declared non-viable - but that was obvious to some of us from the start - any general query interface puts the burdens in the wrong place for any scalability. The JSON world is still quite a mess, however - JSON-LD framing, schemas etc. are not well supported by infrastructure. (At this point I think of Maven, which led to a resurgence in Java viability, for better or worse.)

IMHO the most we can say is that the technology for LD is likely to shift to become more compatible with mainstream APIs, including OGC APIs, and it's probably too early to say what impact this will have on persistent URI management and value.

rob-metalinkage commented 2 years ago

Musing further - RESTful APIs provide dereferenceable URIs - and exposing searchable items means Google can be a link engine. So in a general sense both have some sense of being Linked Data, if a data representation is available and a client chooses to retain links for some temporal scope. It probably comes down to FAIR principles whether a client has any interoperable means of discovering the scope of an identifier - global or within some system "closure". SPARQL doesn't actually help because it's still based on hidden system closure scopes - so it's extra tech with no architectural benefit.

JSON has its identifiable benefit in parseability, and it proliferates. I think we can hypothesise that self-documenting APIs and self-documenting data, with JSON-LD and resolvable identifiers, are the best option in the near future - but relying on stable global identifiers (as in a Linked Data Cloud) is one option, and "system scoped" identifiers (e.g. REST) are a lower bar in terms of the temporal scope of systems.

namedgraph commented 2 years ago

If anything, these examples tell me that it is hard to implement processes in an organization that survive employee turnover and management change. These are not technology problems.

PeterParslow commented 2 years ago

While there is some truth to that, Martynas, and to Rob's "I doubt any non-Linked-Data systems will have persisted in this timeframe either", the fact that it has hit three of the four public agencies that we cite may be significant. In the case of Ordnance Survey, we had already 'taken in house' the service when the key 'triple store support company' in the UK (Talis) decided it didn't want to support other people's triple stores anymore.

Perhaps more importantly, 'linked data' almost uniquely requires/depends on persistent URIs, and therefore changes to the external interfaces, which may result from technology change or staff turnover, impact some of the core tenets of the approach. I don't think this is/was the case with other changes of fashion in programming: the user experience of a website powered by Linux or IIS, and using JavaScript or VBScript or whatever, changes less significantly.

ldesousa commented 2 years ago

> @namedgraph we serve linked data (https://data.bgs.ac.uk/) but don't yet offer a public SPARQL endpoint, although we do use one internally. No one has asked us for a SPARQL endpoint yet....
>
> We're looking at implementing https://github.com/surroundaustralia/ogcldapi to offer OGCAPI-Features + SPARQL
>
> Geoscience Australia have a couple of test instances up:
>
>   1. http://floods.surroundaustralia.com/
>   2. http://asgs.surroundaustralia.com/

ogcldapi was just a pilot project, strongly intertwined with Fuseki. It has recently been superseded by Prez, which, if successful, might be adopted by various public agencies in Australia. The company also has an on-line demo.

Last year I contributed a GeoSPARQL provider to the pygeoapi project. Albeit simple, it was a starting point, but it was never accepted. From my viewpoint, scepticism is a more concerning hurdle than tools. The SW certainly needs to mature further, but many tools are already there.

mirzov commented 2 years ago

Hello! I have been forwarded a request by Linda van den Brink to contribute to this issue with production examples of geospatial data published as Linked Data. We are a research infrastructure for greenhouse gas measurements in Europe, called ICOS. We have had Linked Data in production for 7 years, and intend to keep it running for many years to come. Our SPARQL endpoint is https://meta.icos-cp.eu/sparql , and it can be conveniently used from https://meta.icos-cp.eu/sparqlclient/ . An example query to start discovering the geospatial parts of our Linked Data could be the following:

prefix cpmeta: <http://meta.icos-cp.eu/ontologies/cpmeta/>
# list features together with the geometries of their spatial coverage
select * where {
    ?feature cpmeta:hasSpatialCoverage ?geometry .
}

All the URLs are usable and have been pretty stable. Our main OWL ontology is accessible via the endpoint too, but is also available here: https://github.com/ICOS-Carbon-Portal/meta/blob/master/src/main/resources/owl/cpmeta.owl . Standard ontologies are often used indirectly, by inheriting from their classes and properties.
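A generic way to start exploring an endpoint like ours is to enumerate the ontology's classes; nothing ICOS-specific is assumed here beyond the standard OWL and RDFS vocabularies:

prefix owl: <http://www.w3.org/2002/07/owl#>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# list OWL classes, with labels where present
select ?class ?label where {
    ?class a owl:Class .
    optional { ?class rdfs:label ?label }
}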

More details can be provided, if there is interest.

rob-metalinkage commented 2 years ago

We may be hung up on the semantics of "best" here - common practice is data silos and ineffective data integration strategies - does that "median" qualify as "best"?

"I do think this work is still valuable. LOD / LD is the only framework that provides a path forward for data management at scale." - whilst I also havent identified another viable candidate - can we say "only". Is "only" sufficiently close to "best" ?

I note the EU data strategy talks about "data spaces" and the "international data spaces" reference model states "the Declarative Representation (IDS Ontology) is the only normative specification of the Information Model." It also is agnostic about physical representation but it does state "A (Digital) Resource in the context of the International Data Spaces is a uniquely identifiable, valuable, digital (i.e. non-physical) commodity that can be traded and exchanged between remote participants using the IDS infrastructure."

Maybe we can reflect on this and make a stronger statement about the underlying principle, one less dogmatic about the expression, and state something like: "best practice for data on the web is to use unambiguous identifiers for resources, such as the use of HTTP identifiers in Linked Data. Alternative ubiquitous identifier resolution infrastructure has yet to emerge, so this remains a candidate best practice in spite of the preponderance of disconnected data with locally scoped identifiers".

lvdbrink commented 2 years ago

Another example which is currently available is the Linked Data Service of the Swiss Federal SDI.

You’ll find the available datasets here: https://geo.ld.admin.ch/.well-known/void

In each dataset description you’ll find links to example resources and to the SPARQL Endpoint.

Submitter known to @lvdbrink