kiryph opened this issue 4 years ago
Duplicated records are found by comparing their sort.key, so you have the full power of the sort-key construction to describe which records count as identical: it is up to you to combine the relevant fields so that duplicates end up with the same key.
If you have an example where this is not enough, I can think about it.
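For illustration, a minimal resource-file sketch of that approach (the resource names `check.double`, `check.double.delete`, `sort` and `sort.format` and the `%`-specifiers are recalled from the BibTool manual, so please double-check them against your version; the file name is made up):

```
% dups.rsc -- sketch: flag duplicates via a combined sort key
check.double        = on    % compare sort keys of sorted entries, warn on collisions
check.double.delete = off   % only warn, do not silently drop entries
sort                = on
sort.format         = {%N(author)%d(year)%t(title)}  % comparison key built from several fields
```

Such a file would then be used with something like `bibtool -r dups.rsc refs.bib -o cleaned.bib`.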
As far as I know, you can have only one sort key per bibtool run. This means that if you want to define several unique field combinations, you need several invocations of bibtool with different rsc files. That makes using bibtool as a linter more expensive and more complicated, whereas running a linter on a file usually needs only a single configuration and a single run.
Constructing a single sort key that covers all situations is error-prone, harder to write and less readable.
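Concretely, linting with several uniqueness rules currently means something like the following (the resource file names are hypothetical; `-r` loads a resource file and `-o` sets the output file, which is discarded here because only the warnings matter):

```
# one bibtool pass per uniqueness rule, each driven by its own resource file
bibtool -r doi-dups.rsc   refs.bib -o /dev/null
bibtool -r issn-dups.rsc  refs.bib -o /dev/null
bibtool -r title-dups.rsc refs.bib -o /dev/null
```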
It would be nice to spot duplicate records by a combination of field values. An obvious example is journal articles: if there is no DOI, a duplicate record cannot easily be identified by a single field such as author(s) or title, but the combination of ISSN, volume/year, issue and page range does identify it.
The notation could be