jsbean / PitchSpelling

Repository containing musings related to the eighth-tone pitch spelling algorithm that shan't die.

Implement a simple version of the spelling algorithm for verticalities. #6

Open · mossheim opened this issue 8 years ago

mossheim commented 8 years ago

I was going to start on the second triad spelling chart, but I feel like most of the rules that we could use have already been discovered, and that making charts (especially with 3 or 4 PCs) is too labor-intensive. It also isn't a good way to organize information, because there are so many garbage possibilities at that point.

I think it's more effective now to actually start playing around with a basic implementation of the algorithm--that way, we can get a list of possible contenders and see spots where tweaking or refining the rules a bit might improve the search process.

I think I could do this in SuperCollider in about two days if you agree.

jsbean commented 8 years ago

Agreed.

How about putting the algorithm together piece by piece in a series of pull requests?

That can provide a platform for conversation.

Perhaps we could start at the local level and work up? For example, starting with the rule definitions, edge evaluations, and rule applications, then working up to graph construction.
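
Roughly, the kind of layering I have in mind looks like this (just a Swift-flavored sketch for discussion; all of these names are placeholders, not anything in the repo yet):

```swift
// Hypothetical sketch of the layered building blocks (placeholder names).

/// A candidate spelling for a single pitch class, e.g. C#, Db, or an
/// eighth-tone inflection.
struct Spelling: Hashable {
    let letterName: String   // "C", "D", ...
    let alteration: Double   // -1.0 = flat, 0.25 = eighth-tone sharp, etc.
}

/// A rule assigns a penalty when a pair of candidate spellings violates it.
struct Rule {
    let name: String
    let cost: Int
    let applies: (Spelling, Spelling) -> Bool
}

/// Edge evaluation: total penalty for spelling two pitch classes together.
func edgeCost(_ a: Spelling, _ b: Spelling, rules: [Rule]) -> Int {
    rules.filter { $0.applies(a, b) }.map { $0.cost }.reduce(0, +)
}

// Graph construction would then make a node for every candidate spelling of
// every pitch class in the verticality, connect nodes of different pitch
// classes with edges weighted by edgeCost, and search for the cheapest
// selection of one node per pitch class.
```

The point is just that rules, edge evaluation, and rule application can each be reviewed on their own before we wire them into a graph.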

Lemme know if you wanna try a different strategy, but I think this could be neat.

mossheim commented 8 years ago

Glad you agree, and yes, I think that's a great idea! Issue #5 definitely needs to be solved first, and I also have questions about the other rules that I'll post as separate issues.

Not sure how pull requests work, but if you wanna get started or assign me something, go ahead!

jsbean commented 8 years ago

Here's what we'll do:

jsbean commented 8 years ago

I took an initial stab at #5. It requires either a restructuring of the properties dealt with in the definitions or a more verbose definition, though.

mossheim commented 8 years ago

I've finished a version now! It's very exciting and works properly as far as I can tell. For verticalities with 6 pitch classes it usually runs within 0.01 sec, and the maximum run time I've recorded so far is 0.06 sec. I tweaked the rule costs so that the no-double-sharps/flats rule and the avoid-unisons rule each cost 2, and the avoid-augmented/diminished-intervals rule is weighted x4 and is sensitive to eighth-tone augmentation; all other rules cost 1. Very interested to see whether further rule-cost alterations improve run time!
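
To put the weights in one place, the cost table amounts to something like this (a Swift-style sketch for readability only; the actual code is SuperCollider, and these names are made up):

```swift
// Sketch of the rule weights described above (illustrative names only;
// the actual implementation is in SuperCollider).
enum SpellingRule {
    case noDoubleSharpsOrFlats
    case avoidUnisons
    case avoidAugDimIntervals   // sensitive to eighth-tone augmentation
    case other                  // every remaining rule

    var cost: Int {
        switch self {
        case .noDoubleSharpsOrFlats: return 2
        case .avoidUnisons:          return 2
        case .avoidAugDimIntervals:  return 4   // the x4 weighting
        case .other:                 return 1
        }
    }
}
```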

mossheim commented 8 years ago

Based on my observations so far (I've tried giving it the entire 48-tone cluster chord as an input), I think we will probably need to add considerations for large pitch-class-set inputs. Perhaps we could block off higher branches even when their current cost is below the maximum cost, since certain rules are guaranteed to be applied with a high number of chord components (for instance, because there are only seven letter names, the unison rule will always apply to a chord with more than 7 elements). That could be an interesting theoretical point to discuss!
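
As a rough sketch of what I mean by blocking off branches (purely illustrative Swift, not the code I actually have):

```swift
// Illustrative pruning test: besides the usual "cost exceeds the maximum"
// check, cut a branch early when the size of the chord guarantees that
// certain rules will fire anyway. All names and values are hypothetical.
func shouldPrune(currentCost: Int, maximumCost: Int, pitchClassCount: Int) -> Bool {
    // Standard bound: this branch is already too expensive.
    if currentCost > maximumCost { return true }

    // With more than 7 pitch classes at least one letter name must repeat,
    // so the unison rule is guaranteed to apply; fold that unavoidable
    // minimum into the bound before descending any further.
    let unavoidableUnisons = max(0, pitchClassCount - 7)
    let guaranteedExtraCost = unavoidableUnisons * 2   // unison rule cost
    return currentCost + guaranteedExtraCost > maximumCost
}
```

The exact bound would need tuning, but folding guaranteed rule applications into the pruning test is the general idea.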

mossheim commented 8 years ago

For the input [0,1,2,3,4,5,6,7,8,9,10,11]:

best graphs (cost 94):
1 - [ C (0), Db (1), D (2), Eb (3), E (4), F (5), Gb (6), G (7), Ab (8), A (9), Bb (10), B (11) ]
2 - [ C (0), C# (1), D (2), D# (3), E (4), F (5), F# (6), G (7), G# (8), A (9), A# (10), B (11) ]
time to run: 0.15837852000004 seconds.

jsbean commented 8 years ago

Looking great! I'll merge the building blocks; I'd love to see what you're working with now. Is it up on your fork?

jsbean commented 8 years ago

Perhaps tonight I'll do a quick Swift port!

mossheim commented 8 years ago

I haven't put up the new code yet but I will when I have time today!

jsbean commented 8 years ago

Excellent! Looking forward!

mossheim commented 8 years ago

Yeah, looking forward to hearing your thoughts! I just made a pull request that includes a variant of the new algorithm, which happens to live in my Dropbox. The "standard" version is saved as a .sc file that has to live outside Dropbox, so I can't access it at the moment. The variant might be somewhat out of date, but from what I remember it works fine. It tries to optimize calculation time by pre-calculating the node cost for every possible node (a similar thing could be done for edge costs, at a much higher cost in memory).
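
For reference, the pre-calculation amounts to something like this (a Swift-flavored sketch with made-up names; the real code is SuperCollider):

```swift
// Sketch of pre-calculating node costs: evaluate the cost of every possible
// node once, up front, and let the search only read from the table.
// Names are made up for illustration.
struct Node: Hashable {
    let pitchClass: Double   // eighth-tone pitch class, e.g. 1.25
    let spelling: String     // candidate spelling, e.g. "C#", "Db"
}

func precomputeNodeCosts(for nodes: [Node], cost: (Node) -> Int) -> [Node: Int] {
    var table: [Node: Int] = [:]
    for node in nodes {
        table[node] = cost(node)   // evaluated once, reused during the search
    }
    return table
}

// Doing the same for edges would mean caching a cost for every *pair* of
// nodes, which is why the memory cost grows much faster.
```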