johnbeve opened 1 year ago
I'm unfamiliar with the workflow re adding languages. Is there anything I can do to help have SPARQL added? You all seem very busy; I'm happy to help, just not sure the best way to do so.
Can we get some traction on this? I'm looking forward to it.
Would love to see this being implemented. We are lacking in resources to teach SPARQL to people and this would become an essential teaching tool for knowledge graph engineering and ontology testing.
Please complete the following information about the language:
Name: SPARQL
Website: https://www.w3.org/TR/sparql11-query/
Language Version: 1.1
The following are optional, but will help us add the language:
Test Frameworks: Testing in SPARQL often involves running queries against a known RDF dataset and checking if the results match expected outputs. You can create test cases where specific SPARQL queries should return known results or perform certain transformations.
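As a sketch of this style of testing (the prefix and triple below are invented for illustration): given a dataset containing the triple `ex:alice ex:knows ex:bob`, a test harness could run an `ASK` query and compare the returned boolean against the expected value.

```sparql
# Hypothetical test query: against a dataset containing
#   ex:alice ex:knows ex:bob
# this ASK query should evaluate to true, and the harness
# would assert on that boolean.
PREFIX ex: <http://example.org/>
ASK { ex:alice ex:knows ex:bob }
```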
How to install: SPARQL itself is a query language and does not require installation. However, to run SPARQL queries, you need a SPARQL endpoint or a triplestore (RDF database). Common triplestores include Apache Jena Fuseki, OpenLink Virtuoso, and RDF4J. These can be installed on a server or used as part of a local development environment. Installation usually involves downloading the software package and following the specific setup instructions for the chosen system.
How to compile/run: To run a SPARQL query, you generally need to connect to a SPARQL endpoint through a web interface, command line tool, or programmatically using libraries available in various programming languages (e.g., Python, Java). The query is then executed by the triplestore, which processes the query and returns the results.
Any comments: (e.g., what's interesting about this language): SPARQL is unique in its ability to query data across diverse and distributed data sources as long as they conform to the RDF standard. It's particularly powerful for querying complex relationships and can perform sophisticated queries over graph data. SPARQL supports a variety of functions, including aggregation, subqueries, and federated queries, allowing for versatile and powerful data manipulation and analysis. It's widely used in semantic web technologies, linked data projects, and for integrating data from different domains and sources.
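To illustrate the aggregation support mentioned above (the vocabulary here is invented, not from any real dataset): a `GROUP BY` query with `COUNT` can summarize graph relationships in one statement.

```sparql
# Illustrative only: for each person, count how many
# resources they are connected to via ex:knows.
PREFIX ex: <http://example.org/>
SELECT ?person (COUNT(?friend) AS ?friendCount)
WHERE { ?person ex:knows ?friend }
GROUP BY ?person
```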
This issue is 7 months old; there is clear interest in having the language added, but no word on whether it will be.
Can someone at least confirm that there is no intent to add SPARQL to Codewars?
No, it cannot be confirmed that there is no intent to add SPARQL to Codewars.
I am not a platform maintainer, but considering the traffic in this thread, I would suppose that the Codewars maintainers have good motivation to add SPARQL. I suspect they simply don't have enough time, though.
Hello,
Just wanted to follow-up on this. Can someone from Codewars please reply with an update?
Thank you
Ping again
Is there anything we can do to make progress on this issue?
Also interested in seeing updates on this
I would be willing to try to create a setup which hopefully could be used to add SPARQL to Codewars. I cannot add the language myself, but maybe I could create something that would make adding SPARQL easy for the admins. The problem is, I know nothing about SPARQL. I more or less know how the Codewars runner works w.r.t. running tests, but I would need some support with SPARQL-specific things. If anyone is interested in sharing some knowledge, reach out to me on the Codewars Discord, and maybe we can figure something out together.
@avsculley would you mind giving @hobovsky a hand?
Hi @johnbeve ,
Thank you for bringing this to our attention! We prioritize issues based on the date they are reported and the number of reactions they receive from the users. This helps us address the most critical concerns first based on our team's capacity.
We will review your request and will get back to you with an answer as soon as possible. Thank you, @hobovsky, as always, for jumping on this topic so quickly to help us; we will reach out to you as well.
Please let us know if you have any further questions in the meantime!
Best regards, Claudia Korzeniewski
I skimmed this issue and the Discord thread. Is a SPARQL endpoint really necessary for our purpose? Can we just use rdflib?
The graph data can be written in the test file and parsed with `Graph().parse(data=test_data)`. For large data used in multiple kata, we may consider including files in the container image. Also, authors can create/update the graph data programmatically.
For testing, read the solution query from a file (`solution.rq`), run it against the test data, and check the result.
```python
from rdflib import Graph

solution = get_solution()  # read the query from the solution file

graph_data = """
"""  # the kata's RDF data goes here

# format="turtle" assuming the data is written in Turtle syntax
g = Graph().parse(data=graph_data, format="turtle")
result = list(g.query(solution))
# check the result with assertions
# can test the query against multiple graph data
```
We can provide test helpers to reduce boilerplate if necessary. For a test framework, we currently support Python `unittest`, but we are open to using a different Python test framework. Hopefully, we can produce a decent failure message without too much work.
If this sounds good, the next step is creating a container image project with example kata. Something like https://github.com/codewars/riscv.
> I skimmed this issue and the Discord thread. Is a SPARQL endpoint really necessary for our purpose? Can we just use rdflib?
My work on this got stuck due to out-of-CW activities, but my current idea was, like you said (and as I described in this message), to use in-memory stores which would be built by authors programmatically from scratch, or loaded from some library store if possible. Initially I thought that some kind of server was necessary, but it turned out it's not.
My initial attempt used Java + Jena because I thought the Java ecosystem was more natural for SPARQL, but it has its downsides, and if you prefer something else for Codewars, like Ruby (by analogy to the SQL setup) or Python, then that should also work.
I don't really have a strong preference. Just saw the rdflib example on Discord, and wondered what's missing.
We can compare different languages if you'd like, but I doubt using Python for tests will be a problem. The main concern for me is what the failure message looks like and whether we can show something easy to understand. For SQL, we tried to minimize Ruby syntax in the output.
> My initial attempt used Java + Jena because I thought that Java ecosystem is more natural for SPARQL, but it has its downsides
Is there anything we want/need only supported in Java+Jena? What are the downsides?
A :+1: reaction might help get this request prioritized.