saltzmanj opened this issue 7 years ago
I implemented a backpropagation neural net here in Python, and I'm currently testing it on the MNIST dataset of handwritten digits.
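The repo's actual implementation isn't shown in this thread, but the setup described (one hidden layer, backprop, an L2 penalty called lambda) can be sketched roughly like this; the class name, learning rate, and initialization are my assumptions, not the author's code:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TwoLayerNet:
    """Minimal one-hidden-layer net trained with backpropagation.

    `hidden` mirrors the hidden-layer size discussed in the thread;
    `lam` is the L2 regularization strength referred to as lambda.
    This is a sketch, not the repo's implementation.
    """
    def __init__(self, n_in, hidden=25, n_out=10, lam=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, hidden))
        self.W2 = rng.normal(0, 0.1, (hidden, n_out))
        self.lam = lam

    def forward(self, X):
        self.A1 = sigmoid(X @ self.W1)        # hidden activations
        self.A2 = sigmoid(self.A1 @ self.W2)  # output activations
        return self.A2

    def train_step(self, X, Y, lr=0.5):
        """One gradient-descent step; returns the mean squared error."""
        m = X.shape[0]
        out = self.forward(X)
        # Backpropagate the error through both layers.
        d2 = (out - Y) * out * (1 - out)
        d1 = (d2 @ self.W2.T) * self.A1 * (1 - self.A1)
        self.W2 -= lr * (self.A1.T @ d2 / m + self.lam * self.W2)
        self.W1 -= lr * (X.T @ d1 / m + self.lam * self.W1)
        return float(np.mean((out - Y) ** 2))
```

Lowering `lam` in a sketch like this weakens the regularization penalty, which is consistent with the high-bias observation later in the thread.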
Results for hidden layer = 25, max_optim = 30
Here are the learning curves with max_iter = 50
This is with max_iter = 100... not looking any better
Decreasing lambda to 0.01 seemed to clean up the high-bias issue a bit
I tested my script on some random matrices in a text file and it works fine. When I use PNG images that I convert to BMPs, the matrices are being formed, but the OPTICS analyses are timing out for some reason. If we can figure out why it's timing out, though, determining note types should be straightforward.
optics-script-easy-implementation.zip
Found an easy implementation of OPTICS. It works with Python 2.4 and 2.5. The only thing that needs to be changed is the distance calculation, which is currently done through the hcluster package.
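The zipped script itself isn't reproduced here, but for reference, a modern equivalent of the same clustering step can be sketched with scikit-learn's OPTICS, where the `metric` argument stands in for the hcluster-based distance calculation; the toy data below is made up for illustration:

```python
import numpy as np
from sklearn.cluster import OPTICS

# Toy 2-D points: two well-separated blobs standing in for the
# pixel coordinates of two note heads (invented data, not from the repo).
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                 rng.normal(5.0, 0.3, (20, 2))])

# metric='euclidean' replaces the hcluster-based distance computation
# mentioned above; min_samples is a guess at a sensible density floor.
labels = OPTICS(min_samples=5, metric='euclidean').fit_predict(pts)
```

Points labeled -1 are treated as noise; everything else is assigned to a density-based cluster.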
I used the hand-drawn notes that @saltzmanj posted under 'Segmentation' Issues for test cases. So far, the clustering script I wrote is able to distinguish between quarter notes and half notes. It also works on computer generated whole notes, but I have not tested it on hand-drawn ones.
Nice! What threshold does it use for black/white? Is it totally binary?
Hand-Drawn-Tests.zip: https://github.com/saltzmanj/keyboardguys/files/554112/Hand-Drawn-Tests.zip
note_recognition_test1.pdf: https://github.com/saltzmanj/keyboardguys/files/554116/note_recognition_test1.pdf
It takes an RGB image and compares each pixel tuple directly, with the threshold at >= (50, 50, 50).
I cleaned up the note recognition and pushed it to the repo. It now recognizes blank spaces and bar lines, and the script has been restructured for easier translation into LabVIEW. I also posted the test cases I used, so hopefully we can get similar images after we apply the segmentation.
The date is flexible. The task on the Gantt chart says "Implement Algorithm"; I suppose this would be the first step.