Oh, I just found that there is already a "Bugs in global optimiser" topic, so my report is very likely a related issue.
Thanks for the feedback. Yes, this is obviously broken.
I re-ran the optimiser:
```
EQ for JBL 104 computed from ASR data
Preference Score 1.66 with EQ 3.67
Generated from http://github.com/pierreaubert/spinorama/generate_peqs.py v0.24
Dated: 2023-09-13-12:39:57

Preamp: -2.1 dB

Filter 1: ON LS Fc    71 Hz Gain +1.84 dB Q 0.98
Filter 2: ON PK Fc    71 Hz Gain +2.16 dB Q 2.66
Filter 3: ON PK Fc   169 Hz Gain -2.85 dB Q 0.61
Filter 4: ON PK Fc   391 Hz Gain +2.78 dB Q 0.90
Filter 5: ON PK Fc  1442 Hz Gain +0.98 dB Q 2.88
Filter 6: ON PK Fc  4966 Hz Gain -2.98 dB Q 2.97
Filter 7: ON PK Fc 14473 Hz Gain -3.00 dB Q 0.28
```
which looks a lot more reasonable.
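As a quick sanity check, the combined magnitude response of such a PEQ can be evaluated with the standard Audio EQ Cookbook biquad formulas. Here is a minimal Python sketch, not the project's actual code, assuming a 48 kHz sample rate (the EQ file does not state one):

```python
import numpy as np
from scipy.signal import freqz

FS = 48000  # assumed sample rate, not stated in the EQ file

def peaking(fc, gain_db, q, fs=FS):
    """RBJ peaking-EQ biquad coefficients (b, a)."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * amp, -2 * np.cos(w0), 1 - alpha * amp]
    a = [1 + alpha / amp, -2 * np.cos(w0), 1 - alpha / amp]
    return b, a

def low_shelf(fc, gain_db, q, fs=FS):
    """RBJ low-shelf biquad coefficients (b, a)."""
    amp = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * fc / fs
    alpha = np.sin(w0) / (2 * q)
    cosw, sq = np.cos(w0), np.sqrt(amp)
    b = [amp * ((amp + 1) - (amp - 1) * cosw + 2 * sq * alpha),
         2 * amp * ((amp - 1) - (amp + 1) * cosw),
         amp * ((amp + 1) - (amp - 1) * cosw - 2 * sq * alpha)]
    a = [(amp + 1) + (amp - 1) * cosw + 2 * sq * alpha,
         -2 * ((amp - 1) + (amp + 1) * cosw),
         (amp + 1) + (amp - 1) * cosw - 2 * sq * alpha]
    return b, a

# The EQ above, translated filter by filter.
peq = [
    low_shelf(71, 1.84, 0.98),
    peaking(71, 2.16, 2.66),
    peaking(169, -2.85, 0.61),
    peaking(391, 2.78, 0.90),
    peaking(1442, 0.98, 2.88),
    peaking(4966, -2.98, 2.97),
    peaking(14473, -3.00, 0.28),
]

freqs = np.logspace(np.log10(20), np.log10(20000), 200)
total_db = np.full_like(freqs, -2.1)  # start from the preamp gain
for b, a in peq:
    _, h = freqz(b, a, worN=freqs, fs=FS)
    total_db += 20 * np.log10(np.abs(h))

# A broken EQ like the one reported shows a steep roll-off above 1 kHz;
# a response dipping far below 0 dB anywhere in band is a red flag.
assert total_db.min() > -12, "suspicious dip: possible runaway low-pass"
```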
I am also creating an issue to track at least the obviously incorrect EQs.
Thank you!
This 1 kHz low-pass behavior also seems to occur for the MoFi SourcePoint 10: https://www.spinorama.org/speakers/MoFi%20SourcePoint%2010/ErinsAudioCorner/index_eac-v1-0.html Maybe this behavior is systematic, which could help to debug the optimizer?
Yes! I found quite a lot of them. I am currently regenerating them.
Nice that you fix things at such a fast pace. An automatic comparison of the predicted preference scores of the original and equalized versions might filter out bad optimizations in an automated manner. With some optimization algorithms it makes a lot of sense to do a meta-optimization over different parameters of the algorithm and keep the best result in terms of the predicted preference score. Most of the time it takes almost no time to write such a script. If the core optimization isn't too computationally intensive, this might be a great improvement to consider.
> Nice that you fix things at such a fast pace. An automatic comparison of the predicted preference scores of the original and equalized versions might filter out bad optimizations in an automated manner.
The software went out of control some time ago; it was meant to be a weekend pet project. With more than 1000 speakers, it needs much higher QA and a lot of automated tests.
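The suggested score comparison would make a good automated test. A minimal sketch of the idea, where `score` and `apply_peq` are hypothetical stand-ins for the project's preference-rating and EQ-application functions:

```python
# Minimal sketch of the suggested regression check; `score` and `apply_peq`
# are hypothetical stand-ins, only the comparison logic is shown.
def check_eq(speaker, score, apply_peq):
    """Return True when the EQ improves the predicted preference score."""
    base = score(speaker.spin)                             # without EQ
    with_eq = score(apply_peq(speaker.spin, speaker.peq))  # with EQ
    if with_eq < base:
        print(f"{speaker.name}: EQ degrades score {base:.2f} -> {with_eq:.2f}")
    return with_eq >= base
```

Run across the whole catalogue, anything that fails the check gets flagged for regeneration.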
> With some optimization algorithms it makes a lot of sense to do a meta-optimization over different parameters of the algorithm and keep the best result in terms of the predicted preference score. Most of the time it takes almost no time to write such a script. If the core optimization isn't too computationally intensive, this might be a great improvement to consider.
I have been trying. The core optimisation for one EQ takes about 1 minute per core, and I have enough computing power to search for better hyperparameters. I also have a version somewhere that runs on PyTorch on a GPU.
Before I invest too much time in finding better overall parameters, I need more tests to be sure I am solving the correct issue.
At the end of the day, you are correct.
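For what it's worth, the meta-optimization could be as simple as this sketch, where `optimize` is a hypothetical stand-in for the core per-speaker optimizer (assumed to return `(peq, predicted_score)`) and the grid values are made up:

```python
# Minimal sketch of the proposed meta-optimization: run the core optimizer
# over a small hyperparameter grid and keep the best predicted score.
from itertools import product

def meta_optimize(spin, optimize):
    grid = {
        "n_filters": [5, 7, 9],       # made-up hyperparameter values
        "max_q": [3.0, 6.0],
        "max_gain_db": [3.0, 6.0],
    }
    best_peq, best_score, best_params = None, float("-inf"), None
    for values in product(*grid.values()):
        params = dict(zip(grid, values))
        peq, score = optimize(spin, **params)
        if score > best_score:
            best_peq, best_score, best_params = peq, score, params
    return best_peq, best_score, best_params
```

Since the runs are independent, the grid parallelizes trivially across cores.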
Hello,
Very nice project and website, I like it a lot.
The EQ for both measurements of the JBL 104 doesn't work: https://www.spinorama.org/speakers/JBL%20104/ASR/index_asr.html https://www.spinorama.org/speakers/JBL%20104/Misc/index_misc-napilopez.html You get a low-pass at 1 kHz with a slope of about 24 dB per octave, so something went wrong there.