Closed scott-huberty closed 1 year ago
@scott-huberty OK. Getting back at this as a target to have this PR fixed and merged before our Friday call. Sorry for the lag.
Awesome thanks for getting to that before me : )
BTW/FYI: sorry I don't have screenshots on hand, but I viewed a topoplot of a clear blink component in an ICA object, first in MNE and then in the qc-dashboard (generated by our Plotly code). There is definitely something weird going on. A beautiful blink component (when you plot the component in MNE) looks like mumbo jumbo in our Plotly topoplot!
Yes... I had little doubt that there was an issue of that sort. The topomaps we were seeing were not what we would expect... I'll debug this along with the rest.
I started by adding some tests. You might find the last commit interesting because it has a Dash test. These three tests work on my computer, but the last one (the Dash one) won't work for now on CI. It requires a chromedriver to be installed (and findable): the download, unzip, and move to a directory on PATH would need to be added to the CI script. Also, this requires Chrome to be installed and runnable... I don't know if the CI can do that (i.e., I am not sure it has an X server; if it is just a command-line OS, I don't see how pytest will be able to launch Chrome to run the test). To be tested...
Nevermind. It looks like CI with the Dash test will not be complicated (https://community.plotly.com/t/dash-integration-testing-with-selenium-and-github-actions/43602/2). Just pushed a commit with that modification.
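For reference, the approach in that link boils down to installing Chrome (with a matching chromedriver) in the workflow before pytest runs. A minimal sketch of the CI steps, assuming the `browser-actions/setup-chrome` action (the step names and version pin are assumptions, not the contents of the actual commit):

```yaml
# Hypothetical GitHub Actions steps; not the exact commit that was pushed.
- name: Set up Chrome and chromedriver
  uses: browser-actions/setup-chrome@v1
  with:
    install-chromedriver: true

- name: Run dashboard tests
  run: pytest pylossless/dash/tests/
```

Dash's `dash[testing]` extra provides the `dash_duo` pytest fixture, which launches the app in headless Chrome, so no X server is needed.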
Go @christian-oreilly go 🚀
Thanks for tackling this and breaking the ice and the dash tests 🙏
Note that we could make a new GitHub Actions YAML file, name it something like test_dashboard, and run the tests from there, so that this shows up as its own GitHub Action when opening PRs.
Yeah... I guess we could do that. I have no objections, but I am not sure I see a strong advantage to it. The current tests work for the topoplot but fail for the pipeline:
collected 8 items
pylossless/dash/tests/test_topo_viz.py ... [ 37%]
tests/test_pipeline.py ..FF [ 87%]
tests/test_simulated.py F [100%]
Do I need to rebase against some branch for these to work? I am not sure when these started to break. Do you think they were broken by this specific PR?
Nevermind! We broke them I think with the changes in "manual" flags. I'll fix these.
Yeah - we just have to remove this line now that the 'manual' key is removed:
Also CC @Andesha: this PR is API-changing.
After it's merged I'll try to add a function for converting flags to mne-bads so that people don't have to worry about custom scripting it.
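Such a conversion helper might look something like the sketch below. The name `flags_to_bads` and the dict-of-lists flag format are assumptions for illustration, not the actual pylossless API:

```python
def flags_to_bads(flags, existing_bads=()):
    """Hypothetical sketch: flatten a {flag_type: [ch_names]} dict into a
    sorted, de-duplicated list suitable for assigning to raw.info['bads'].

    Both the function name and the flag structure are assumptions, not
    the pylossless API.
    """
    bads = set(existing_bads)
    for channels in flags.values():
        bads.update(channels)
    return sorted(bads)


# Example: merge two flag types with channels already marked bad.
flags = {"noisy": ["E1", "E7"], "bridged": ["E7", "E12"]}
print(flags_to_bads(flags, existing_bads=["E3"]))
# → ['E1', 'E12', 'E3', 'E7']
```

Returning a plain sorted list keeps the helper independent of MNE itself; the caller would just do `raw.info['bads'] = flags_to_bads(...)`.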
thanks for letting me know!
is this going to be pushed as a release?
Not immediately, but we will cut a new release relatively soon, I believe.
The latest IC topomaps seem more reasonable to me (more like patterns I would expect from IC topomaps):
I don't know what the hell is wrong with GitHub, but it keeps saying that there is one pending reviewer (me) even though I already approved this PR! Anyway, @scott-huberty, do you want to review this before I merge, once I've fixed the linting?
Sure, I'll try to review before today's session! The find_breaks test is failing and I want to find out why.
> The latest IC topomaps seem more reasonable to me
They still seem a little fishy to me (I'd expect at least one clearly ocular-looking topo from this very nice data). Have you checked any of these topos against what would be generated by mne.preprocessing.ica.plot_topomap?
Codecov Report
- 83.69% <91.17%> (ø)
- 100.00% <100.00%> (+73.33%)
- 100.00% <100.00%> (ø)
- 100.00% <100.00%> (ø)
- 64.28% <100.00%> (+33.73%)
- 72.25% <100.00%> (+53.66%)
- ... and 4 files with indirect coverage changes