A project that aims to sentenize all the open data from Riksdagen and other sources, creating an easily linkable dataset of sentences that can be referred to from Wikidata lexemes and other resources.
The Riksdagen open data often contains hyphenated words that end up as fragments in our rawtoken table. These fragments are mostly garbage and should be handled somehow. Perhaps we could train a model to recognize the good ones based on lexemes?
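Before training anything, a simple baseline worth trying is a lexeme lookup: rejoin each hyphen-final token with its successor and keep the pair only if the rejoined form is an attested word. The sketch below illustrates the idea; the `known_lemmas` set stands in for a real lexeme list (e.g. exported from Wikidata) and is an assumption, not part of the project's current code.

```python
def looks_like_fragment(token: str) -> bool:
    """True if the token looks like a line-break hyphenation fragment,
    e.g. 'riks-' left behind when 'riksdagen' was split."""
    return token.endswith("-") and len(token) > 1


def rejoin(first: str, second: str) -> str:
    """Join a fragment pair: 'riks-' + 'dagen' -> 'riksdagen'."""
    return first.rstrip("-") + second


def is_probably_good(first: str, second: str, known_lemmas: set[str]) -> bool:
    """Keep the pair only if the rejoined form is an attested lemma.
    `known_lemmas` is a placeholder for a real lexeme list."""
    return rejoin(first, second).lower() in known_lemmas


# Toy example with a stand-in lexeme set.
known = {"riksdagen"}
print(is_probably_good("riks-", "dagen", known))  # attested -> True
print(is_probably_good("xyz-", "qqq", known))     # not attested -> False
```

A lookup like this cannot catch hyphenated compounds that are legitimately written with a hyphen, so it would at best serve as a filter or as labeling input for the model idea above.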