I think we should make it clear that the main innovation and strength of this system is not the feature definitions themselves, but its generative nature (instead of being just another look-up table that doesn't know what to do with sounds outside its finite alphabet). Therefore, I think it would make sense to draw the reader's/user's attention away from the feature definitions themselves and towards the generative infrastructure.
If we allow users to define their own features and redefine the standard feature mappings, I think we could make a strong point in favor of our architecture. People can (and will) criticize the feature inventory we define - but instead of complaining about it, they should simply be able to define their own system while still making use of our flexible architecture.
I therefore propose that we provide an API for users to define their own systems by
- editing the feature inventory (adding new features and deleting existing ones)
- redefining and adjusting the feature mappings
- defining custom feature sets for sounds directly, which take precedence over the generated vectors
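To make the proposal concrete, here is a minimal sketch of what such an API could look like. All names (`FeatureSystem`, `add_feature`, `map`, `override`, etc.) are illustrative assumptions for discussion, not an existing interface:

```python
class FeatureSystem:
    """Sketch of a user-definable feature system (hypothetical API)."""

    def __init__(self, features=None, mappings=None, overrides=None):
        self.features = set(features or [])     # the feature inventory
        self.mappings = dict(mappings or {})    # symbol -> {feature: value}
        self.overrides = dict(overrides or {})  # direct per-sound vectors

    def add_feature(self, name):
        """Extend the feature inventory with a new feature."""
        self.features.add(name)

    def remove_feature(self, name):
        """Delete a feature from the inventory and all mappings."""
        self.features.discard(name)
        for vec in self.mappings.values():
            vec.pop(name, None)

    def map(self, symbol, **values):
        """Redefine or adjust the feature mapping for a symbol."""
        self.mappings.setdefault(symbol, {}).update(values)

    def override(self, sound, vector):
        """Define a custom vector for a sound directly; this takes
        precedence over anything the system would generate."""
        self.overrides[sound] = dict(vector)

    def vector(self, sound):
        """Return a feature vector for a sound: overrides first,
        otherwise a generated vector (here: defaults plus mappings)."""
        if sound in self.overrides:
            return self.overrides[sound]
        vec = {f: 0 for f in self.features}  # toy default: all features 0
        vec.update(self.mappings.get(sound, {}))
        return vec
```

Usage would then be as simple as `fs = FeatureSystem(features=["voiced", "nasal"]); fs.map("b", voiced=1)`, after which `fs.vector("b")` yields a complete vector even though only one feature was specified explicitly.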
Furthermore, I think it would be worthwhile to implement some established systems (e.g. Chomsky-Halle and Phoible). Based on those systems, we still want to be able to return a feature vector for every sound, even those that are not explicitly defined. The rough idea would be to map any such sound to its closest explicitly defined neighbor, e.g. [pʲ] -> [p] or [ʁ] -> [r].
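One cheap way to approximate the closest-neighbor idea is to strip modifier letters and diacritics to recover a defined base symbol, with a hand-curated fallback table for primary symbols like [ʁ] that have no defined base. This is only a sketch; `KNOWN` and `FALLBACKS` are toy data standing in for whatever inventory the chosen feature system actually defines:

```python
import unicodedata

KNOWN = {"p", "t", "k", "r"}   # toy inventory of explicitly defined sounds
FALLBACKS = {"ʁ": "r"}         # curated nearest neighbors for base symbols

def closest_known(sound):
    """Map a sound to its closest explicitly defined neighbor (sketch)."""
    if sound in KNOWN:
        return sound
    # Step 1: drop combining marks and modifier letters, e.g. [pʲ] -> [p].
    base = "".join(
        ch for ch in unicodedata.normalize("NFD", sound)
        if not unicodedata.combining(ch)
        and unicodedata.category(ch) != "Lm"  # modifier letters like ʲ ʰ ʷ
    )
    if base in KNOWN:
        return base
    # Step 2: fall back to the curated neighbor table, e.g. [ʁ] -> [r].
    return FALLBACKS.get(base)

print(closest_known("pʲ"))  # -> p
print(closest_known("ʁ"))   # -> r
```

A more principled variant would pick the defined sound at minimal feature distance instead of relying on a curated table, but the diacritic-stripping step already covers the bulk of complex segments.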