Daniel-Mietchen opened this issue 7 years ago
Giving this 1h to get going by reviewing the links above.
See also https://meta.wikimedia.org/wiki/Research:Understanding_Wikidata_Queries, which states:

> **Create a query-feature database**
>
> To enable further processing, the queries and (selected) features will be precomputed and stored in a common database. This should serve as a flexible foundation for further analysis. An RDF representation that can be queried with SPARQL may be a good solution. This would also enable exploring the dataset interactively (e.g., to query for the total number of queries that use certain features in combination) and thus support hypothesis generation for further analysis. It will depend on the actual size of the dataset if this solution will be feasible. Alternatively, an offline processing solution for storing and retrieving queries with their features will be considered.
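As a rough illustration of the quoted idea, here is a minimal sketch of a query-feature store. It uses SQLite and naive keyword matching; the feature list, schema, and function names are my own assumptions for illustration, not the project's actual design (a real implementation would use a proper SPARQL parser):

```python
import sqlite3

# Illustrative SPARQL features to detect (an assumption for this sketch;
# the real project would derive features from parsed query structure).
FEATURES = ["OPTIONAL", "FILTER", "GROUP BY", "ORDER BY", "SERVICE", "UNION"]

def extract_features(query: str) -> list[str]:
    """Naive keyword-based feature extraction from a SPARQL query string."""
    upper = query.upper()
    return [f for f in FEATURES if f in upper]

def build_store(queries, conn):
    """Store each query together with its detected features."""
    conn.execute("CREATE TABLE IF NOT EXISTS query (id INTEGER PRIMARY KEY, text TEXT)")
    conn.execute("CREATE TABLE IF NOT EXISTS feature (query_id INTEGER, name TEXT)")
    for q in queries:
        cur = conn.execute("INSERT INTO query (text) VALUES (?)", (q,))
        qid = cur.lastrowid
        for f in extract_features(q):
            conn.execute("INSERT INTO feature (query_id, name) VALUES (?, ?)", (qid, f))
    conn.commit()

def count_with_features(conn, names):
    """Total number of queries that use the given features in combination."""
    placeholders = ",".join("?" for _ in names)
    row = conn.execute(
        f"SELECT COUNT(*) FROM ("
        f"SELECT query_id FROM feature WHERE name IN ({placeholders}) "
        f"GROUP BY query_id HAVING COUNT(DISTINCT name) = ?)",
        (*names, len(names)),
    ).fetchone()
    return row[0]
```

With such a store, the interactive exploration mentioned in the quote, e.g. "the total number of queries that use certain features in combination", becomes a call like `count_with_features(conn, ["OPTIONAL", "FILTER"])`.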
Here's another nice query:
While there is https://www.wikidata.org/wiki/Wikidata:SPARQL_query_service/queries/examples , lots of useful Wikidata queries are being collected elsewhere, e.g. at
It would be useful to collect them more systematically, as Quarry does for SQL queries; a Phabricator ticket has been opened for that at https://phabricator.wikimedia.org/T104762.