lingpy / lingrex

Linguistic Reconstruction with LingPy
https://github.com/lingpy/lingrex
MIT License

Cleaning data prior to correspondence pattern analysis #33

Open LinguList opened 1 year ago

LinguList commented 1 year ago

We might need some basic checks whether a correspondence pattern analysis is useful, since I detected one pattern that causes huge problems:

    {'ID': [365, 371, 370, 367, 364, 369, 368, 366, 362],
     'taxa': ['Hachijo',
              'Hachijo',
              'Kagoshima',
              'Kochi',
              'Kyoto',
              'Oki',
              'Sado',
              'Shuri',
              'Tokyo'],
     'seqs': [['k', 'iː', '-', '-', '-', '-'],
              ['k', 'e', 'b', 'u', 'ɕ', 'o'],
              ['k', 'e', '-', '-', '-', 'i'],
              ['k', 'e', '-', '-', '-', '-'],
              ['k', 'eː', '-', '-', '-', '-'],
              ['k', 'e', '-', '-', '-', '-'],
              ['k', 'e', '-', '-', '-', '-'],
              ['k', 'iː', '-', '-', '-', '-'],
              ['k', 'e', '-', '-', '-', '-']],
     'alignment': [['k', 'iː', '-', '-', '-', '-'],
                   ['k', 'e', 'b', 'u', 'ɕ', 'o'],
                   ['k', 'e', '-', '-', '-', 'i'],
                   ['k', 'e', '-', '-', '-', '-'],
                   ['k', 'eː', '-', '-', '-', '-'],
                   ['k', 'e', '-', '-', '-', '-'],
                   ['k', 'e', '-', '-', '-', '-'],
                   ['k', 'iː', '-', '-', '-', '-'],
                   ['k', 'e', '-', '-', '-', '-']],
     'dataset': 'japonic',
     'seq_id': '449 ("hair")'}

Here we have two words from Hachijo in the same cognate set, but they differ (!). For correspondence pattern analysis, we can argue that strictly cognate words from the same doculect cannot differ, so a preprocessing step can in fact arbitrarily decide for one of them.
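Such a check could be run before the analysis. The following is a minimal sketch (not part of lingrex; the function name and the dict layout follow the example above) that flags cognate sets in which one doculect contributes more than one word:

```python
from collections import defaultdict

def find_duplicate_taxa(cogset):
    """Return taxa occurring more than once in a cognate-set dict
    with parallel 'taxa' and 'alignment' lists, as in the example
    above. Hypothetical helper, not part of lingrex."""
    by_taxon = defaultdict(list)
    for taxon, alm in zip(cogset["taxa"], cogset["alignment"]):
        by_taxon[taxon].append(alm)
    # keep only taxa with two or more reflexes in the same set
    return {t: alms for t, alms in by_taxon.items() if len(alms) > 1}

# reduced version of the Japonic example above
cogset = {
    "taxa": ["Hachijo", "Hachijo", "Tokyo"],
    "alignment": [["k", "iː", "-"], ["k", "e", "b"], ["k", "e", "-"]],
}
print(find_duplicate_taxa(cogset))
```

A preprocessing routine could then either drop all but one of the flagged words or exclude the whole cognate set from the pattern analysis.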

LinguList commented 1 year ago

This pattern is difficult to detect. In CoPaR, only one of the two words is used and the other is ignored, but ignoring it should also shrink the alignment, since that one word alone causes all the gaps.
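The shrinking step can be sketched as follows: after one of the duplicate words is dropped, columns that now consist only of gaps have to be removed, otherwise the alignment keeps the gap columns that word introduced (a hypothetical helper, not CoPaR's actual implementation):

```python
def remove_gap_columns(alignment, gap="-"):
    """Drop columns that contain only gap symbols from a list-of-lists
    alignment. Sketch of the cleanup described above."""
    columns = zip(*alignment)
    keep = [col for col in columns if any(seg != gap for seg in col)]
    # transpose the kept columns back into rows
    return [list(row) for row in zip(*keep)]

# the Hachijo reflex 'kebuɕo' dropped; its gap columns remain
alignment = [
    ["k", "iː", "-", "-", "-", "-"],
    ["k", "e", "-", "-", "-", "-"],
]
print(remove_gap_columns(alignment))  # → [['k', 'iː'], ['k', 'e']]
```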