labstreaminglayer / App-emotiv


emotiv app #1

Open cboulay opened 5 years ago

cboulay commented 5 years ago

From @dmedine on May 16, 2016 21:51

I just pushed a new branch called 'emotiv_app'. This is based on some work that cboulay had sent me earlier. Everything works but there are a couple of details to address. Most of these are just cosmetic and I will get around to them before doing a PR on this. There are two main concerns, however.

The first is that Emotiv seems to be increasingly disinclined to play nice with the research community. Among other things, this means that in order to harvest raw EEG data from their headsets, you need to purchase the 'Premium' edition of their SDK. Basically, we can't redistribute the binary that holds the functions that read the EEG from the headset.

The other issue is that it seems to be impossible to grab one frame of data off the headset at a time. Frames buffer up and every so often you get a bunch all in a chunk. Worse yet, the number of frames per buffer differs from chunk to chunk. The function that pulls data off the headset doesn't appear to be blocking, so there is no straightforward way to clock off the device. I also tried writing a callback-style function to block until the next frame of data was ready. This did not appear to work very well. I think that the engine in the headset is not equipped to work this way.

One datum in each frame is the Emotiv timestamp. What I've done is to find an initial offset (with local_clock()) and then, when I get a bunch of frames, I simply add their timestamps to this and push the frames through an LSL outlet frame by frame, with offset+timestamp as the timestamp data. This is all well and good, but there is certainly going to be some drift between the Emotiv clock and the CPU clock. One thing to do would be simply to update the offset value every so often. If anyone has any suggestions as to the 'right' way to do this, please chime in.
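A minimal sketch of this offset-plus-device-timestamp scheme (pure Python for illustration; the actual app is C++ against liblsl, the `Frame` class and function names here are hypothetical, and the local time is passed in explicitly rather than read from `local_clock()` so the arithmetic is easy to follow):

```python
class Frame:
    """One frame as the (hypothetical) device API might deliver it."""
    def __init__(self, device_time, samples):
        self.device_time = device_time  # seconds on the headset's clock
        self.samples = samples          # one value per channel

def make_offset(local_now, device_now):
    """Initial offset mapping the device clock onto the local clock.
    In the real app, local_now would come from lsl::local_clock()."""
    return local_now - device_now

def stamp_frames(frames, offset):
    """Convert each buffered frame's device timestamp to local time,
    ready to be pushed through the outlet frame by frame."""
    return [(f.device_time + offset, f.samples) for f in frames]
```

Re-estimating the offset every so often (for example, taking the minimum of recent `local_now - device_now` differences, so transient delivery delays don't bias it) would be one way to bound the drift the paragraph above worries about.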

Copied from original issue: sccn/lsl_archived#118

cboulay commented 5 years ago

From @chkothe on May 16, 2016 21:58

Hi David, cool stuff! So if you have the premium SDK you can use the app to get raw EEG?

Re timestamps, I would definitely not use the device clock and push that into LSL as LSL timestamps (you'd have to keep that clock synced carefully, which is pretty hard to do right). Without looking at the code, can you detect when you got a chunk from the driver? Because then you could assume that the last sample in that chunk was recorded "now" (and prev samples successively older based on sampling rate as usual), and you could just call push_chunk without any time-stamp arguments at all.

Christian
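A sketch of what this suggestion amounts to (pure Python for illustration; in the real C++ app one would simply call `push_chunk` with no timestamp argument and let liblsl stamp the chunk): treat the last sample of each received chunk as acquired "now" and back-date the earlier samples by the nominal sampling interval.

```python
def chunk_timestamps(n_samples, arrival_time, srate):
    """Timestamps for one chunk: the last sample is assumed to have been
    acquired at arrival_time; earlier samples are back-dated by 1/srate."""
    dt = 1.0 / srate
    return [arrival_time - (n_samples - 1 - i) * dt for i in range(n_samples)]
```

For example, a 4-frame chunk arriving at local time 10.0 s from a nominal 128 Hz headset gets stamps spaced 1/128 s apart, ending exactly at 10.0.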

cboulay commented 5 years ago

From @chkothe on May 16, 2016 22:01

PS: of course in reality the sample is older than "now" due to various driver latencies, but that'd be a matter of a constant offset on average as usual, and any jitters from that average could be removed by liblsl.

Christian

cboulay commented 5 years ago

From @dmedine on May 16, 2016 22:02

Yes, if you have the premium SDK you get raw EEG. I would have thought it would be the other way, that you'd have to pay for the higher-level stuff, but there you go.

Yes (also) I can get 'now' for each chunk in. It never occurred to me to push chunks instead of single samples, for some reason.

cboulay commented 5 years ago

From @dmedine on May 16, 2016 22:07

Also, the chunk size is varying. Should I use a vector of vectors with pushthrough=true?

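In the C++ API the question above maps onto `outlet.push_chunk(std::vector<std::vector<float>>, ...)`, where each inner vector is one frame. A pure-Python stand-in for the polling logic (the names `get_buffered_frames` and `push_chunk` are hypothetical) shows the point: each poll may hand back a different number of frames, and each batch goes out as one chunk.

```python
def drain_once(get_buffered_frames, push_chunk):
    """Poll the driver once: take however many frames have buffered up
    since the last poll and push them through the outlet as one chunk.
    The chunk length varies from poll to poll; only the channel count
    per frame must stay constant."""
    frames = get_buffered_frames()   # 0, 1, or many frames
    if frames:
        assert all(len(f) == len(frames[0]) for f in frames), \
            "channel count must be constant even when chunk length varies"
        push_chunk(frames)
    return len(frames)
```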

cboulay commented 5 years ago

From @chkothe on May 16, 2016 22:23

That would certainly work! I think push through is true by default.

Christian

cboulay commented 5 years ago

From @dmedine on May 16, 2016 22:42

OK. This is done and pushed. I will evaluate and beautify it this week and pull it in when it is done and tested.


cboulay commented 5 years ago

I think it used to be the case with the older EPOC that you could buy the normal (consumer) version or the 'research edition', and raw EEG was only available with the more expensive research edition (though I read there were some hacks around that). It seems with the new Emotiv Epoc+ that it's SDK-dependent.

And to David's point that the 'higher level' stuff should be the premium part, I think their model is to get ~~exploitative~~ innovative app developers to make ~~peasant~~ consumer-facing apps that Emotiv can sell on their store and take a chunk off of, and neither these app developers nor the consumers using the apps really need to know where the signals are coming from. I'm only half-joking of course; Emotiv EPOC is the only consumer-grade EEG device that gives reasonable EEG (that I know of) and I'm grateful for that.

cboulay commented 5 years ago

From @chkothe on May 17, 2016 1:25

Yes, it's certainly one of the best headsets for a consumer price point. My standpoint is to stay neutral w.r.t. vendors' business models.


cboulay commented 5 years ago

On my computer, the SDK that was installed, despite being called "Premium", had a different name and different folder structure than the SDK used by this vcproj. I tried to download the new "Premium SDK" only to find that you had to have a commercial application and send a written request. Then I logged into my Emotiv account and found under "Download Legacy SDK and Apps" a link to the previously-named SDK. I downloaded anyway and it installed with the new name and folder structure. So maybe the "Legacy" SDK is actually the new Premium SDK? I'm confused.

Anyway, I just wanted to confirm that I was able to build and run this app on my system, though I haven't tested functionality yet.

cboulay commented 5 years ago

From @norbert7 on May 25, 2016 14:28

Hi David, could you kindly upload the .exe file to capture the Emotiv data with LSL? Basically, we intend to capture EEG data from the Emotiv into Matlab in real time. Do you think this is possible with the code you provided?

Cheers, Norbert

cboulay commented 5 years ago

From @dmedine on May 25, 2016 17:34

Yes, I believe it is possible. This is definitely in a beta stage right now, so any feedback you can provide is most welcome. I can't promise that it is perfect yet since I haven't been able to test it. However, a lot of the code is copied from an earlier version by Chadwick Boulay. I believe he had it working pretty well, so hopefully this version is already fairly bug-free.

I do intend to add a number of features to the GUI, and also extend it to interface with headsets other than the Epoc.

ftp://sccn.ucsd.edu/pub/software/LSL/Apps/Emotiv-beta.zip

Cheers, David


cboulay commented 5 years ago

From @svsobh on March 9, 2017 23:56

Hi all,

While trying to acquire data using the emotiv_app code, I could not build the solution and faced the following errors. I used Visual Studio 2013, running Windows 10 in Parallels on a Mac. I am a total beginner with Windows:

  1. The system could not find the Boost files. What might be the reason for this?
  2. I got an error that the system could not find the .exe file it needed.

What am I doing wrong here?

cboulay commented 5 years ago

Why are you building? Does the zip on the ftp not work for you? Is there something that should be changed?

The build instructions should be pretty similar to the general app build instructions https://github.com/sccn/labstreaminglayer/blob/master/Apps/APP%20BUILD%20ENVIRONMENT.txt .

By the way, I have an Emotiv Epoc and a Mac. If you have the Mac SDK and you're interested in Mac support then let me know.

cboulay commented 5 years ago

From @svsobh on March 10, 2017 0:52

Dear Chadwick,

Thanks for the note. Yes, I am interested in the Mac version. I was trying the Windows version just because of the .sln files I saw.

All I want to do right now is get data from the EPOC+, apply a band-pass filter for 2-43 Hz, and see the raw signal and the filtered signal side by side, in real time. I have no clue how to accomplish that.

I'll appreciate any help you could offer. Thank you so much!

Shashwat

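On the 2-43 Hz band-pass request above: in Python this would usually be done with `scipy.signal` (e.g. a Butterworth via `butter`/`lfilter`), but as a dependency-free sketch, a windowed-sinc FIR band-pass takes only a few lines. The band edges and the EPOC's nominal 128 Hz rate used below are just example parameters.

```python
import math

def bandpass_fir(lo_hz, hi_hz, srate, n_taps=257):
    """Windowed-sinc band-pass FIR: the difference of two ideal low-pass
    sinc kernels, Hamming-windowed. Returns the filter taps."""
    m = n_taps - 1
    fl, fh = lo_hz / srate, hi_hz / srate
    taps = []
    for n in range(n_taps):
        k = n - m / 2
        if k == 0:
            h = 2.0 * (fh - fl)
        else:
            h = (math.sin(2 * math.pi * fh * k)
                 - math.sin(2 * math.pi * fl * k)) / (math.pi * k)
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h)
    return taps

def gain_at(taps, f_hz, srate):
    """Magnitude of the filter's frequency response at one frequency."""
    w = 2 * math.pi * f_hz / srate
    re = sum(h * math.cos(w * n) for n, h in enumerate(taps))
    im = sum(h * math.sin(w * n) for n, h in enumerate(taps))
    return math.hypot(re, im)
```

Convolving each channel with these taps (at the cost of (n_taps-1)/2 samples of delay) attenuates DC drift and high-frequency noise while passing the 2-43 Hz band, so the raw and filtered streams can be plotted side by side.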

cboulay commented 5 years ago

From @svsobh on March 10, 2017 1:01

Also, I want to do all this in Matlab or Python, as I'd later want to extract spectral features and do fractal analysis on the features. Which, in your view, is the better tool for this purpose?


cboulay commented 5 years ago

Matlab or Python: Whichever one has the libraries you want to use. I use Python for my online signal processing because I mostly use Neuropype and because I can plot in Python much faster than I can plot in Matlab.

The zip file linked above (ftp://sccn.ucsd.edu/pub/software/LSL/Apps/Emotiv-beta.zip) can be downloaded and extracted and in there you should find an .exe file that launches the Emotiv app. You don't have to build anything if you're willing to work in Windows. But I don't know if it will work in Parallels. (Bootcamp should be fine). Do the Emotiv Windows tools to connect and visualize the data coming from the Epoc work in Parallels?

cboulay commented 5 years ago

@svsobh , send me an e-mail (find my address on my profile page) if you want to talk about getting the Mac version working.

cboulay commented 5 years ago

From @svsobh on March 10, 2017 1:16

I haven't tried the emotiv windows tools yet. I'll do that tonight and get back to you. I have also sent a request to try the Neuropype beta version. Thanks a ton for the help !


cboulay commented 5 years ago

From @hkn1304 on March 26, 2017 19:21

When I try to link Emotiv to LabRecorder, it gives an error stating that EmoEngineEventCreate can not be found in the (LSL) folder where I extracted your Emotiv zip. Do you have any idea how I can fix this? Thanks.

cboulay commented 5 years ago

@hkn1304 I'm afraid that description isn't enough to go on. Can you maybe provide some screenshots of the error, tell us whether or not the Xavier control panel is running and you're able to visualize real signals, and confirm you have the premium SDK installed? Note that the Emotiv business model has changed and it might be impossible to access the premium SDK unless you had it already.

cboulay commented 5 years ago

From @hkn1304 on March 29, 2017 18:29

I have the following installed in my system:

  1. Emotiv Research SDK v2.0.0.20
  2. LabRecorder 12.2b
  3. Emotiv Control Panel 2.0.0.20-PREMIUM

First I start LabRecorder and then Emotiv-beta.exe (not starting the Control Panel), and it gives me the following error: [screenshot] which says: IEE_EmoEngineEventCreate entry point, C:........\Emotiv\Emotiv.exe dll not found! (Meanwhile, I don't start OpenViBE or anything.)

Is there any standard procedure to follow? Please share.

cboulay commented 5 years ago

I just pulled in a bunch of changes ( #195 ) into the emotiv_app branch. Can someone else pull that branch, follow the instructions in the README and let me know if it works? If it does then I'll merge it into master.

The API seems to have changed a lot. You'll need exactly Premium SDK v3.3.3.