Automatic Behavior Recognition System (ABRS)
https://github.com/AutomaticBehaviorRecognitionSystem/ABRS
BSD 2-Clause "Simplified" License

Filters/SVdictTrainingSet_ST_Gr33a_dust_th0_10_averSubT2_binVar not found #1

Open cgutierrez-Ibanez opened 4 years ago

cgutierrez-Ibanez commented 4 years ago

Hi

I started trying to use your code to analyze some behavior (fish, to start), but I ran into a problem. I managed to run video_to_ST_image_batch.py and got some ST-images for my video. But then, when I run ST_image_to_ST_feature_batch.py, I get the error below:

Traceback (most recent call last):
  File "C:\deeplabcut\ABRS\ST_image_to_ST_feature_batch.py", line 125, in <module>
    STF_30_posXY_dict = project_to_basis_fun (dirPathInputSTfolder,dirPathU,dirPathOutput,numbFiles)
  File "C:\deeplabcut\ABRS\ST_image_to_ST_feature_batch.py", line 23, in project_to_basis_fun
    with open(UDirPathFileName, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\deeplabcut\ABRS\Filters\USVdictTrainingSet_ST_Gr33a_dust_th0_10_averSubT2_binVar'

Which makes sense because there is no folder named "Filters", nor a file with that name anywhere. What is that file and how do I get it/generate it?

Thank you for your time

Cristian Gutierrez-Ibanez

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Cristian,

Sorry for the belated response. I recommend using video_to_ST3C_image_batch instead. This batch produces the 3-color ST-images (ST3C) directly from the video. You can then produce ethograms (behavior identification records) by running batch_3C_to_etho.ipynb (using the ST3C images as inputs). batch_3C_to_etho.ipynb recognizes behaviors directly from ST-images using convolutional neural networks (CNNs). Several trained CNNs are provided in the Model folder. I recommend using modelConv2ABRS_3C_train_with_descendingcombinedwithothers_avi_10.

What you tried to use was the code that uses "filters" for dimensionality reduction (described in the paper) to produce 30 features from the video. These features can then be used by LDA to classify behavior. That works, but it is already a bit outdated, so I would suggest using the CNN-based ABRS as described above.

Please let me know whether you can run it and report any issues you encounter. Thank you for trying ABRS!

Primoz

auesro commented 4 years ago

Dear Primoz, I am interested in using your code; it looks very promising. However, I want to use it with mice and to detect a different behavior... do you think that is possible? To do that, I wonder how the available CNNs were trained... do you provide a way to train our own networks for use with ABRS? What are the differences between the CNNs provided in the Model folder?

Thanks,

A

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

Yes, I think it may be possible to use ABRS with mice. At present the system is optimized for flies, but there is nothing in particular about it that would limit it to just this animal model. In fact, my hope is to extend ABRS to diverse animal models and behaviors.

Yes, the code (an example) for CNN training is provided: ConvNet_training.ipynb. I am happy to help you with it. (We are working on documentation to make the whole system more accessible.) When you load a model, run model.summary() to see the architecture of the neural net implementing the model. The models in the Model folder differ either in architecture or, perhaps more importantly, in the training data used. The most recent model is trained with a highly diverse set of fly movies and it performs pretty well.

To adapt ABRS for mice, you would first produce the 3-color ST-images ("ST3C" images or "3CST" images - sorry, our vocabulary is still evolving) from the raw videos using video_to_ST3C_image_batch. You'll need some behavioral labels and the corresponding training data (the ST-images). Next, you would run ConvNet_training.ipynb (load the labels and the training set and train the CNN). The current (default) ST-images are in 80x80x3 format; I suggest starting with all the default settings. When the model is trained it will be saved in your ABRS main folder.

Shortly I will update ConvNet_training.ipynb. The current version is just a skeleton (the simplest version that can be run), and I strongly encourage adding label balancing (to ensure similar amounts of each behavior in the training data) and data augmentation (which may help with generalization). (Both will be added in the next version shortly.) But even before adding those, you can play with the CNN architecture (kernel sizes and adding layers).

Please feel free to contact me with more questions! We can also have a Skype conference to go over the practicalities of implementation. Thank you for using ABRS!
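For anyone following along, inspecting a provided model might look roughly like the sketch below, assuming the files in the Model folder load as Keras models (the exact loading call used in the ABRS scripts may differ):

    from tensorflow.keras.models import load_model

    # Assumption: the files in the Model folder are saved Keras models.
    model = load_model('Model/modelConv2ABRS_3C_train_with_descendingcombinedwithothers_avi_10')

    # Print the architecture: layers, output shapes and parameter counts.
    model.summary()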

Warm regards,

Primoz

cgutierrez-Ibanez commented 4 years ago

Hi Primoz

Thank you for your answer. My next question was pretty much what Augusto asked, as I want to implement this in fish and birds, so thank you for that detailed answer too. I'm also glad to hear you are working on documentation to make the whole system more accessible.

For the moment, and if you don't mind, I have some basic questions: 1) How can you view the ST3C images? How would you produce one like the .png examples you provide? 2) How would you create behavioral labels? What should that file look like, and what format is it?

Thank you for your time.

best,

Cristian

auesro commented 4 years ago

Thanks a lot for your answer, Primoz; I join Cristian in looking forward to the extended documentation. I think this system is very promising.

I have pretty much the same questions as Cristian to get started. What type of data (images or videos) does ConvNet_training.ipynb require for training? How do you label that data? Also, a fundamental one: do you train a network to identify only one specific behavior (for example, walking), or can you train the same network to recognize several different ones (for example, walking, drinking and grooming in mice)?

Thanks for your work and help here!

Cheers,

A

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Cristian,

If you have any example videos of fish and birds, I'd be happy to take a look.

1) After you run video_to_ST3C_image_batch, the ST3C images are stored as files in the output folder you have defined (dirPathOutput). When bufferSize = 50 (the default), each file in the output folder contains 50 ST3C images. The files are dictionary structures that also contain other information (speed of body displacement, signal strength - the maximum amount of movement - and other data). To visualize the images you can simply read them from the dictionary files. I have just uploaded visualize_ST3C_images.ipynb, which can do that for you (just select the name of the folder containing the ST3C image files and the number of the file you want to visualize).
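In case it helps, a minimal sketch of such a visualization, assuming the dictionary files are pickled and that the image stack sits under a key like 'im3C' (the path and key names here are assumptions; visualize_ST3C_images.ipynb is the authoritative reference):

    import pickle
    import matplotlib.pyplot as plt

    # Load one dictionary file produced by video_to_ST3C_image_batch (illustrative path).
    with open('/path/to/dirPathOutput/ST3C_file_0', 'rb') as f:
        d = pickle.load(f)

    st3c_stack = d['im3C']        # assumed key holding the 50 buffered 80x80x3 ST3C images
    plt.imshow(st3c_stack[10])    # show one image from the buffer
    plt.axis('off')
    plt.show()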

2) Behavioral labels are stored as a vector (a single-row np array). There are many ways of making the labels. We used VCode (Hagedorn, Joey, Joshua Hailpern, and Karrie G. Karahalios. "VCode and VData: illustrating a new framework for supporting the video annotation workflow." Proceedings of the Working Conference on Advanced Visual Interfaces. ACM, 2008), but I personally prefer labeling frame by frame using cv2. Any method, no matter how "primitive", that lets a human assign a behavior name to each frame will work. Just select a diverse enough set of behaviors and videos for the labels (to obtain good CNN model generalization). We are now developing a "label maker" Python program that will use the CNN models to predict behavior in real time; the user will just correct the predictions where they fail, which should speed up the labeling process.
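To make the frame-by-frame idea concrete, here is a hedged sketch of a minimal cv2 labeler that records the digit key pressed for each frame into a single-row label vector (the video path and output file are placeholders, not part of ABRS):

    import cv2
    import numpy as np

    cap = cv2.VideoCapture('my_video.avi')
    labels = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow('frame', frame)
        key = cv2.waitKey(0) & 0xFF        # wait for a key press on every frame
        if key == ord('q'):                # quit early
            break
        if ord('0') <= key <= ord('9'):
            labels.append(key - ord('0'))  # the pressed digit is the behavior label
        else:
            labels.append(0)               # default "no behavior" label

    cap.release()
    cv2.destroyAllWindows()

    labels = np.array(labels).reshape(1, -1)   # single-row array, as described above
    np.save('labels.npy', labels)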

Feel free to start a branch here and upload any code that could improve the ABRS and/or add new functionality, including solutions for label-making. Thanks!

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

Please see my response to Cristian (above) for answers to some of your questions. Regarding the training labels, as I mentioned to Cristian, there is no specific method that I can (highly) recommend at this time. You can try VCode, but my issue with it is that the behavioral labels only contain the time stamp, not the frame number (or I don't know how to get it). Also, it only runs on Macs.

The ABRS can create ST3C images from AVI or MOV movie formats. In video_to_ST3C_image_batch you can modify the line:

if (ext == 'avi' or ext == 'mov') == True:

to add new formats. We have been using 1024x1024 and 2048x2048 pixel frames at 30 Hz or 60 Hz sampling rates. By default the ST3C images will be 80x80x3. You train the CNN with the ST3C images, not with the raw videos. (Fortunately, ABRS is fast enough to produce the images in near real time.) Please note that we use ABRS on videos that contain 4 animals per frame (each animal in its own chamber). In typical videos you will probably have just one individual per frame, so change fbList = [1,2,3,4] in video_to_ST3C_image_batch to fbList = [1] (otherwise ABRS will split each frame into 4 separate areas).
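For example, to also accept MP4 files and process a single animal per frame, the relevant settings could be changed along these lines (adding 'mp4' is an illustration, not something already in the code):

    import os

    fileName = 'my_movie.mp4'
    ext = os.path.splitext(fileName)[1].lstrip('.').lower()

    # Extension check extended with 'mp4' (illustrative):
    if (ext == 'avi' or ext == 'mov' or ext == 'mp4') == True:
        print('this file would be processed by the batch')

    # Single animal per frame: process the whole frame rather than 4 quadrants.
    fbList = [1]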

There is no upper limit to the number of behavioral classes to recognize (other than the total number of frames :) ). We use ABRS to classify about 10 different behaviors in flies. All you need to do is expand the CNN output layer to the number of behaviors you are training it with - and, of course, make the labels with all the representative behaviors.
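As an illustration of what "expanding the output layer" means, here is a minimal Keras-style sketch; the real ABRS architecture lives in ConvNet_training.ipynb (and can be inspected with model.summary()), so the layer sizes below are assumptions - only the final Dense layer is the point:

    from tensorflow.keras import layers, models

    num_classes = 10  # set this to the number of behaviors in your labels

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(80, 80, 3)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation='relu'),
        layers.Dense(num_classes, activation='softmax'),  # one output unit per behavior
    ])

    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()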

As I said to Cristian, you too are very welcome to start a new branch here.

Please let me know if you are making progress in implementing the ABRS and what other issues you encounter. Thanks!

auesro commented 4 years ago

Hi Primoz, so I started to adapt the code to my videos (they are not square, there is only one animal inside, etc.). Regarding fbList=[1], it doesn't work, because in the function getting_frame_record there is:

if fb == 1:
    rf = gray[0:200,0:200];

With fbList=[1], the code will still cut your 400x400 frame and keep just the upper-left corner, so I needed to set fbList=[0] and create another condition there, but that's solved now. Then I started converting one of my videos to ST3C images, with clipEnd=99 and clipsNumberMax=2, just to start figuring out the code, but I am getting something weird for the first 15 frames of the movie: First_frames

The next batch looks good: Second_frames

As you can see, the frames starting from the 16th look very similar to yours, except that here I have a mouse recorded from the top (the thing on the right is the food tray), so that looks good. Now, the frames before the 16th... what is going on there? I think it might be related to a warning I am getting in the console:


/home/auesro/anaconda3/envs/ABRS/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:523: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/auesro/anaconda3/envs/ABRS/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:524: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/auesro/anaconda3/envs/ABRS/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/auesro/anaconda3/envs/ABRS/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/auesro/anaconda3/envs/ABRS/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  _np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/auesro/anaconda3/envs/ABRS/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
  np_resource = np.dtype([("resource", np.ubyte, 1)])
0
/home/auesro/Desktop/ABRS Test/hour120.mp4
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:57: RuntimeWarning: invalid value encountered in true_divide
  cG = sFA/sF
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:139: RuntimeWarning: invalid value encountered in true_divide
  imDiffClNorm = imDiffCl/maxImDiffAbs
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:143: RuntimeWarning: invalid value encountered in true_divide
  imVarNorm = imVar/np.max(np.max(imVar))
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:145: RuntimeWarning: invalid value encountered in greater
  imVarBin[imVarNorm > 0.10] = 1;
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:165: RuntimeWarning: invalid value encountered in less
  imDiffClNeg[imDiffClNorm<0] = np.absolute(imDiffClNorm[imDiffClNorm<0])
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:166: RuntimeWarning: invalid value encountered in greater
  imDiffClPos[imDiffClNorm>0] = imDiffClNorm[imDiffClNorm>0]
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:168: RuntimeWarning: invalid value encountered in true_divide
  imDiffClNormNeg = imDiffClNeg/np.max(np.max(imDiffClNeg))
/home/auesro/Desktop/ABRS-master/ABRS_modules.py:169: RuntimeWarning: invalid value encountered in true_divide
  imDiffClNormPos = imDiffClPos/np.max(np.max(imDiffClPos))
/home/auesro/Desktop/ABRS Test/hour120.mp4
1

Any idea how to fix this?

Thanks a lot!
auesro commented 4 years ago

P.S.: I tried with 4 different videos (recorded with different units of the camera) and it looks the same: the first 15 frames of the video always look like the above.

auesro commented 4 years ago

OK, I figured out why the first 15 frames look bad. It has to do with windowST=16, the number of frames across which things are calculated. Is this a sliding window with the current frame in the center, or something else? When I set windowST=5, it is my first 4 frames that look bad... I would expect only the first 2 frames to look bad in that case (and only the first 8 when windowST=16)... I guess I am missing something.

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

Nice progress! Yes, the first window (15 frames in this case) should be discarded. There is not enough data in the window at the beginning of each new batch. (If you want to process several clips in a single batch, just put them in the same input folder and set clipsNumberMax = numberOfClips. In that case the beginnings and endings of the clips will be connected by the buffer, so this will not be an issue.)

Yes, for rf = gray[0:200,0:200], replace the magic 200 numbers with np.shape(gray)[0] and np.shape(gray)[1] to get the entire frame. I will update the line after testing it. (And, of course, you can do it in your branch.) Thanks a lot for implementing ABRS!
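In other words, the generalized line would look something like the sketch below (a stand-in frame is used here only so the snippet runs on its own):

    import numpy as np

    gray = np.zeros((400, 480))   # stand-in for a grayscale video frame of any size

    # Use the actual frame dimensions instead of the hard-coded 200s:
    rf = gray[0:np.shape(gray)[0], 0:np.shape(gray)[1]]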

AutomaticBehaviorRecognitionSystem commented 4 years ago

PS: the error message (just a warning) is not related to the issue above. There is a division by zero somewhere, but it doesn't matter (the NaN values are converted to zeros later).
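If the warnings are distracting, they can be silenced locally in NumPy; a small sketch, under the assumption (stated above) that the NaNs really are replaced downstream:

    import numpy as np

    a = np.zeros((3, 3))
    with np.errstate(invalid='ignore', divide='ignore'):   # suppress the RuntimeWarnings locally
        norm = a / np.max(a)                               # 0/0 produces NaNs here
    norm = np.nan_to_num(norm)                             # convert NaNs to zeros, as ABRS does later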

cgutierrez-Ibanez commented 4 years ago

Thank you for the answer, Primoz; I can now see the ST3C frames. Like Augusto, by setting fb=1 I am getting only the upper-left corner. I tried to fix it with what you mentioned, but I keep getting an error:

cannot reshape array of size 160000 into shape (200,200)

I am probably doing something wrong. Augusto, if you could post the full code, that would be great. If not, I'll just wait until Primoz posts it.

cheers

auesro commented 4 years ago

Hi Cristian,

Of course - I just uploaded my latest version of Primoz's files to my fork. I made them Python files instead of Jupyter notebooks, which I am more familiar with. I have tried to make the code as platform-independent as possible (given my skills), but I am on Linux, so you might want to check for platform-specific bugs.

The specific code to avoid the cropping is located in a couple of places. For me (videos with a single animal in the frame), the easiest fix was to set fbList = [0] and then create a new condition here.

Another issue is the shape of your videos; mine are not square, so I decided to pad my frames with black to make them square by adding a piece of code here and here.
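For anyone curious, such padding could look roughly like this (a hypothetical helper, not the exact code in the fork):

    import cv2

    def pad_to_square(gray):
        # Pad a non-square grayscale frame with black so it becomes square.
        h, w = gray.shape[:2]
        size = max(h, w)
        top = (size - h) // 2
        bottom = size - h - top
        left = (size - w) // 2
        right = size - w - left
        return cv2.copyMakeBorder(gray, top, bottom, left, right,
                                  cv2.BORDER_CONSTANT, value=0)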

And then, as Primoz said, you need to be careful with all the 200 and 400 values out there. I made newSize a parameter the user can set at the beginning, which meant changing every place where 200 or 400 appeared to something like newSize[0], newSize[1] or int(newSize[0])*int(newSize[1]), depending on the function.

If I can help, just ask!

Cheers,

A

auesro commented 4 years ago

Hi Primoz,

Thanks - it took me a whole weekend to understand the few things I have touched so far!

More questions: The 80x80 pixel ROI is a bit too small for my mice. I have tried the following: define a bigger ROI of 100x100 or 192x192 (the exact number doesn't matter) and then resize the frame with cv2.resize just before reshaping here. The issue is that I get a very ugly blue channel (im3C2), in which you cannot see anything. The original frame is imRAW; the red and green channels (im3C0, im3C1) look good. So I don't know why or where this is happening... If you could shed some light on it, that would be great - I could get more of the mouse in the frame while still keeping the 80x80 size and not modifying a whole lot more of the code.

And another question, now that you mention the clips: is there a way to make the code start reading the movie at a defined frame rather than at the beginning? I tried setting clipStart to something other than 0, but the code still started from 0. It is time- and space-consuming to run the code through the whole movie file (108022 frames, 1 hour) just to select the frames where the interesting behaviour happens for the CNN training.

Right now the only thing stopping me from using the code fully is the lack of a GUI or code (with my skills it would take me forever to write one from scratch) that would make labelling frames and saving them ready for training easy.

Thanks!

A

cgutierrez-Ibanez commented 4 years ago

Thank you very much - that works! I am not well versed in coding (I'm trying to learn), so I won't produce fixed files any time soon. I also have some of the same concerns, such as whether you can start the video later or make the frame bigger. And I agree that labeling is the thing stopping me from going further.

Below are some examples of what I'm getting with an open-field video of a fish. There are some interesting problems. One is that in this video there is more movement than just the fish. These are old videos, and at the beginning the fish is in an enclosure that is lifted. This creates some lasting waves/movement in the water that seems to be detected by the algorithm. Notice how it jumps between the fish and the water movement - and this is about 2 min into the video, when the movement of the water is barely perceptible. These are not the videos I really want to analyze; I just thought they would be the simplest, but the others will probably have the same problems.

nofish

Also, for example, the tank's glass walls act as a mirror, and that is detected too

mirror

but if the fish moves enough, it seems to work well

Untitled-3lfish

It would be important to see whether, after training, these non-fish movements create a problem.

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

So, as you can see, the whole ABRS code uses an 80x80 ROI, and the ST3C images are also 80x80. This needs to be made more flexible (of course). So if you can get it to work for a 100x100 ROI (or other sizes), that would be half of the work done. Note that 80x80 is the final size of the square cropped around the animal.

To change it, the lines before cfr = rs[topEdge:bottomEdge,leftEdge:rightEdge] have to be edited as follows: all the magic numbers 40 should be replaced by the desired ROI size (e.g. 100) * 0.5 (= 50). This would produce 100x100 ROIs. (The cropping is done 50 pixels from the position of maximal movement - where the animal is probably located - in each direction.)

Next, in the lines following the cfr = rs[topEdge:bottomEdge,leftEdge:rightEdge] the magic numbers 80 should be replaced by the desired ROI size (e.g. 100).

Finally, in the line cfrVect = cfr.reshape(1,80*80), the 80s should be replaced by the desired ROI size (e.g. 100). Of course, 100 is just an example; the ROI size should be specified as an input argument of the function.

This is just for the getting_frame_record function. Other functions that use the 80x80 ROI must also be updated. I will proceed with that on my end and update the code on the master branch. Of course, if you have already done so in your fork, we can merge it.

"The issue is that I get a very ugly blue channel:"

I'm not sure where the issue is, but my best guess is that the create_3C_image function is not working properly due to the same magic-number bug. The updated function should take the ROI size as an argument, and then all the 80 magic numbers in it should be replaced by that ROI size. Or better, the current input argument's shape ("cfrVectRec") already contains this information. So:

SizeROI = np.shape(cfrVectRec)[1] should be the new shape (to replace the 80 magic number in create_3C_image).
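To tie the pieces together, here is a hedged sketch of the parameterized cropping described above; the variable names follow the discussion, but the real getting_frame_record code may differ in details such as edge handling:

    import numpy as np

    def crop_roi(rs, maxRow, maxCol, roiSize=80):
        # Crop a roiSize x roiSize square centered on the pixel of maximal movement.
        # Sketch only: the frame is padded so crops near the border still return a full
        # square, mimicking how ABRS fills the area over the edge with zeros.
        half = int(roiSize * 0.5)
        padded = np.pad(rs, half, mode='constant')
        cfr = padded[maxRow:maxRow + 2 * half, maxCol:maxCol + 2 * half]
        cfrVect = cfr.reshape(1, roiSize * roiSize)   # flattened vector, as in the original code
        return cfr, cfrVect

    frame = np.random.rand(400, 400)
    cfr, cfrVect = crop_roi(frame, maxRow=120, maxCol=310, roiSize=100)   # 100x100 ROI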

I hope this helps. I will address your other questions/comments shortly.

auesro commented 4 years ago

Hi Primoz,

Thanks for the explanation. Looking forward to your other replies.

Yes, I know where to change those 80 values and where they come from. The thing is, if I do that, I can get bigger ROIs, but processing will take longer too, right? (By the way, I am getting dict3 files that are 7.7 MB for just 50 80x80 frames... is that normal?)

And then, I guess the CNN training is designed to work with 80x80 frames? That's why I think it would be better if we could just resize from whatever ROI the user prefers down to 80x80. The user loses some resolution going from ROIxROI down to 80x80, but it increases the field of view, so the analysis can be adapted to the size of each user's animal model.
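The resize step itself is a one-liner; a sketch of the idea with a stand-in ROI (the interpolation choice is just a reasonable default, not something prescribed by ABRS):

    import cv2
    import numpy as np

    roi = np.random.rand(192, 192).astype(np.float32)   # stand-in for a 192x192 crop around the animal
    roi_80 = cv2.resize(roi, (80, 80), interpolation=cv2.INTER_AREA)   # downscale to what the CNNs expect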

A

CG66 commented 4 years ago

Hi Primoz,

I tried to run your model using my own video, but the result seems to be different from your example.

Why are the colors of my test result the opposite of yours - or is my test result simply wrong?

I would appreciate it if you could spare time to answer me.

1 2

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

Sorry it took a while. Yes, you can start from any resolution or ROI size and then reshape it to 80x80. I do realize that in the current implementation you can only read a clip from frame 0. The batch was originally designed for processing many short clips rather than a few long ones. I will try to update it soon to make it more flexible. In the meantime, I have uploaded the ABRS_labelMaker script, which is essentially a condensed version of ABRS minus the training. It reads movies from any given frame and converts them to 3CST images in real time. It can also be used to make new labels, but that part is still in development, so you can comment those lines out. I suggest using this script to create optimal 3CST images for your purposes. Once the images look good, you can start the CNN training with them and adjust the batch with the new parameters.

Yes, the 3CST image files are unfortunately huge. The images are not compressed the way the original movies are, so it is not surprising that they are big. I suggest discarding the 3CST images once satisfactory ethograms have been produced (the 3CST images are just an intermediate product anyway).

I am now also writing a much-improved batch to convert 3CST images to ethograms and will upload it when it is ready (1-2 weeks). This batch is more in line with the old ABRS version described in the paper when it comes to post-processing the probability outputs of the CNN.

Thanks for your patience and for your efforts to make ABRS usable with other animals.

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi CG66,

Your images actually look pretty good! The colors are reversed only in the visual display when you use cv2.imshow(). The 0th channel should be RED and the 2nd channel should be BLUE (so the background should be blue and movements should be red); however, cv2.imshow() interprets images as BGR, so the blue and red channels appear swapped. This has no effect on further processing and training. It also looks like the right side of your images has been cut off. That's probably okay too: ABRS crops images around the pixel with the maximal signal (maximal movement), and when that pixel is too close to the edge of the original frame, ABRS fills the area beyond the edge with zeros.
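If you just want the display to match the intended colors, reversing the channel order for cv2.imshow() is enough; the data itself does not need to change for training (a stand-in image is used below):

    import cv2
    import numpy as np

    img = np.random.rand(80, 80, 3).astype(np.float32)   # stand-in for an ST3C image in RGB order

    cv2.imshow('ST3C', img[:, :, ::-1])   # reverse channels for display only, since imshow expects BGR
    cv2.waitKey(0)
    cv2.destroyAllWindows()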

I hope this helps. Thanks for using ABRS!

Primoz Ravbar

AutomaticBehaviorRecognitionSystem commented 4 years ago

CG66,

The issue arises from the initial cropping of the frame. (Please see the discussion above.) The ROI should be centered around the maximal movement. Make sure the frames are not cut into 4 areas, which is the default setting; that may be why the edge ends up too close to the interesting points.

AutomaticBehaviorRecognitionSystem commented 4 years ago

@cgutierrez-Ibanez,

The images look good. Yes, the training will need a good sample size to generalize across different backgrounds. You can pre-process the movies to remove some of those artifacts, but even if you don't, the CNN will learn to ignore them given enough training samples. Please let me know how it goes.

Thanks for using ABRS!

auesro commented 4 years ago

@CG66 follow the thread from here, especially the part regarding the fb values.

auesro commented 4 years ago

Hi @AutomaticBehaviorRecognitionSystem ,

Thanks for your answer. I am looking forward to that labeler!

CG66 commented 4 years ago

@AutomaticBehaviorRecognitionSystem, thank you for your help. I am trying again; if I get any results, I will reply to you.

CG66 commented 4 years ago

@auesro ,I'll have a try, thank you

CG66 commented 4 years ago

Thanks for your help. My previous problems have been solved.

But I still don't understand how to make my own training set. At present I just want to detect the insects cleaning their heads.

I don't quite understand the labeling method you mentioned above, and my computer is not a Mac. Could you tell me more about it? Otherwise, I'm really looking forward to your update.

auesro commented 4 years ago

Hi @CG66 ,

Yes, that's where I am also stuck at the moment. I decided to wait for Primoz to release a way to label frames appropriately.

Cheers,

A

auesro commented 4 years ago

Hi @AutomaticBehaviorRecognitionSystem

Any news regarding that labelling code? I think that would give you a boost in exposure and usability.

In my opinion, the main obstacle to running ABRS is the lack of an easy way to visualize the original frame, give it a label, and transform all labelled frames into 3CST images. So far we need to transform whole videos to 3CST (huge file sizes) and then manually go through them, select good frames for training and label them...

Cheers,

A

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

Yes, that's exactly the idea. I am now close to being done with a "label maker" GUI. It will let you run through the movie frames in both directions, at any speed (which makes it easy to explore behaviors by eye). For each position in the movie you'll be able to see the raw frame, the corresponding 3CST image, the automatically labeled behavior, the ethogram and the probability of the behavior. The user then just corrects the machine label wherever a mismatch occurs. At present the basic GUI is already working up to the stage where the label needs to be corrected, so I would say it's about 80% ready. I will publish it here as soon as a workable full application version is done and let you know.

auesro commented 4 years ago

Sounds great, Primoz, looking forward!

In the meantime, I have made the code compatible with any video frame size and shape. I have also separated the size of the ROI around the animal from the size of the final image fed to the CNN. Now you can specify any size of area around the subject and any size of final image (in case someone wants to modify the CNN training code).

Cheers,

A

AutomaticBehaviorRecognitionSystem commented 4 years ago

I have updated the ABRS_modules file with some important changes.

In the getting_frame_record(frRec, startWin, endWin, fb) function, when you set the fb argument to 0, the 3CST images are made from the entire arena around the animal. If you record from split arenas (4 quarters), select the quarter 1-4 clockwise, with 1 being the upper-left quarter. For all other videos keep fb=0.

The GUI for semi-automatic label-making is almost ready. The first version is going to be a bit messy, but I hope other users will help develop it.

Primoz Ravbar

auesro commented 4 years ago

Ready to help!

Happy New Year,

A

CG66 commented 4 years ago

Happy New Year

AutomaticBehaviorRecognitionSystem commented 4 years ago

I have just uploaded the prototype of a GUI for making labels: ABRS_label_maker_GUI.py

It's basically working, but please set the values at the bottom of the code file (the paths where your labels and training data will be written, the path to the model, and the first video source). In the code file, under class BufferRecord (in the constructor), set quadrant = 0 if you don't want to divide your video into quadrants. Then run the code; the GUI window should open.

1) Go to File --> Open to select a video you want to label.

2) Select the first frame and the last frame (right panel of the GUI); I suggest starting with frames 1-50. Click ENTER and then click LOAD frames.

3) Wait for the frames to load. During this process the machine ethograms will also be created. In the green panel you will see 2 ethograms: the top one is the current working area; the bottom one is the entire ethogram of all frames labeled so far. The ethograms are only suggested predictions of behaviors.

4) To add your labels, select the area of the ethogram you'd like to label by clicking on the top ethogram. The first click draws a red line to indicate the beginning of the area to be labeled; the second click draws a blue line to indicate the end.

5) Since you don't have a good model yet, select the entire ethogram and label it "0" by entering "0" in the field to the right of the ethograms and then clicking Correct It! You'll notice that the lines on the ethogram shift to the zeroth row (all the way to the top of the ethograms).

6) Now you're ready for labeling. Scroll through the frames with the commands in the blue panel below the ethograms. You can also zoom in and out by clicking on the top ethogram, selecting the area you wish to zoom in on, and clicking Zoom In. While you scroll through the frames (back and forth), the images of the animal appear at the top of the GUI: left is the raw image, right is the corresponding 3CST image (ST-image). Don't worry about new ethogram suggestions being created as you explore the movie; only the corrected labels will be saved.

7) Find the behavior you wish to label. Scroll back and forth over the behavior to find its start and end. Mark those on the top ethogram by clicking once for the starting point and again for the ending point (red and blue lines will be drawn). You can do this on the zoomed-in ethogram too. Now label the behavior with a numerical name, e.g. "8", and click Correct it! The ethogram will be updated accordingly.

8) When you are done with this section of the movie, click Save Labels. Both the labels (the ethogram) and the ST-images (the training data) will be saved into the folders you selected (at the bottom of the code file).

This prototype works, but feel free to modify it right away in your forks. I will upload a slightly more polished version soon. I posted this version because I wanted you to be able to start working with it ASAP, so please forgive its appearance and incomplete menu. In the next version the menu will be fully functional and hopefully no values will need to be set in the code file (everything will be done in the GUI). Thank you for your patience and for using the ABRS! Primoz Ravbar

auesro commented 4 years ago

Dear Primoz, This is fantastic news! I hope to have some time to play with it next week! Will report back on findings! Cheers

auesro commented 4 years ago

Dear Primoz, I started playing with it but did not get very far. On pushing the Load frames button I get the following error:

Exception in Tkinter callback
Traceback (most recent call last):
  File "/home/augustoer/anaconda3/envs/ABRS/lib/python3.6/tkinter/__init__.py", line 1705, in __call__
    return self.func(*args)
  File "/home/augustoer/ABRS/ABRS_label_maker_GUI.py", line 213, in create_buffer_obj
    predLabel, predProb = self.bufferObj.get_predictions(i)
AttributeError: 'tuple' object has no attribute 'get_predictions'
Exception in Tkinter callback
Traceback (most recent call last):
  File "/home/augustoer/anaconda3/envs/ABRS/lib/python3.6/tkinter/__init__.py", line 1705, in __call__
    return self.func(*args)
  File "/home/augustoer/anaconda3/envs/ABRS/lib/python3.6/tkinter/__init__.py", line 749, in callit
    func(*args)
  File "/home/augustoer/ABRS/ABRS_label_maker_GUI.py", line 342, in update
    rawFrame = self.bufferObj.get_raw_frame(self.frameInd + self.startZoom)
AttributeError: 'tuple' object has no attribute 'get_raw_frame'

And I don't really know what to do here. Also, the variable modelPathName should be empty when starting for the first time, right? We don't have a model yet.

Cheers,

A

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

For now just use any model - don't leave it empty. When you have the labels you will be able to train the CNN with them to create your own model. See if that helps with the error you're getting.

Cheers,

auesro commented 4 years ago

Hi Primoz, I finally got some time to test the Label Maker GUI. It works for me, meaning it outputs the ST3C images and labels in two separate files. You were right about not leaving the model path empty. However, if possible, I would change one thing: as it is right now, the GUI saves the images and labels for the whole stretch of video selected (from First Frame to Last Frame); it would be much more useful if it saved only the frames we label as something other than 0. Otherwise we need to label all of the loaded frames correctly, even if we don't know the exact frame numbers where our behavior of interest happens - which I guess is the usual case, since knowing the exact frame numbers in a video is tricky.

Cheers

A

PS: are you still developing ABRS?

AutomaticBehaviorRecognitionSystem commented 4 years ago

Hi Augusto,

I'm glad to hear the GUI is working for you. I am still developing the ABRS and will update it soon.

The Label Maker GUI saves all the labels, including those with value 0. The frame number is currently the index of the label. The zero labels are not included during CNN training, so in effect this is the same as saving only the non-zero labels. I could make it save just the non-zero labels (and the corresponding ST3C images); in that case we would need to add the frame index to the saved data.

Let me think about it and I'll design a single file containing the images, the labels and the frame index. Will post it soon.
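A hedged sketch of what such filtering could look like, assuming the labels are a single-row array aligned with the ST3C image stack (file names and formats below are purely illustrative):

    import numpy as np

    labels = np.load('labels.npy').ravel()    # assumed single-row label vector
    images = np.load('st3c_images.npy')       # assumed stack of shape (nFrames, 80, 80, 3)

    frameIdx = np.nonzero(labels)[0]           # frame numbers of the labeled behaviors
    np.savez_compressed('training_set.npz',    # one file holding images, labels and frame indices
                        frameIdx=frameIdx,
                        labels=labels[frameIdx],
                        images=images[frameIdx])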

auesro commented 4 years ago

Hi Primoz, awesome to hear you are working on it! Good to know that the CNN training ignores the 0 labels. In that case, yes, it is just a waste of processing time and drive space to store all those images with label 0, right?

Looking forward!

auesro commented 2 years ago

Hi Primoz, Any plans to update the GUI any time soon? Found your recent bioRxiv paper, so I guess you are still working with ABRS. Cheers, A

AutomaticBehaviorRecognitionSystem commented 2 years ago

Hi Augusto,

Yes, indeed we are working on the GUI. It's now in pretty good shape and we are finishing it up - finally. (I'm already using it to create semi-automatic labels, which then serve as training data to improve the model.) Much of this GitHub will be revised too, with easier installation instructions, improved documentation and, of course, the GUI.

Primoz

auesro commented 2 years ago

That sounds great, Primoz! Do you guys have any ETA for public release?

A

AutomaticBehaviorRecognitionSystem commented 2 years ago

The new GUI is finally released. Please see the updated readme file. We added a user manual for the GUI as well. The training is now much improved by the implementation of ResNet-50 deep learning architecture.

Authus1234 commented 1 year ago

Dear Primoz, I got this error report, and the relevant raw video has been sent to you by email. Do you know how to solve it? (Sorry for taking so long; the VPN has not been working well these days, so I couldn't get onto GitHub.) Error reporting

AutomaticBehaviorRecognitionSystem commented 1 year ago

It looks like you're using the GUI, the purpose of which is to produce training labels for creating a new model. We noticed the same error when attempting to run the GUI without quadrant selection (choosing "all" when setting up the protocol). It seems the code still separates the movie frame into the four quadrants even when the user selects the "all" option, which is why the sizes don't match. This is clearly a bug (we have tested the GUI with quadrants selected, but apparently not thoroughly enough with the "all" option). Note: whenever you run the GUI afresh, you need to delete the ABRS_cust.dat file created by the previous run.

Unfortunately we are currently NOT actively supporting the ABRS GitHub. This is because of the lack of resources (the person who developed the GUI is no longer with us and we do not have any other developers to actively work on the ABRS). However, we encourage you to proceed with the following:

1) Create a new "issue" here and copy/paste your post there. This way other users in the GitHub community could help with the bug.

2) Try running the GUI with a quadrant selected, e.g. select "1" instead of "all" when the dialog opens.

2a) If you can see the movie frames after loading them, then you know the bug was indeed in the quadrant selection.

3) Go through the GUI code and try to find the lines specifying the quadrant and make sure that when no quadrant is selected by the user (the "all" option) the movie frames are not cut into quadrants.

4) If you succeed in fixing this bug, please start your own fork here and update the GUI with the corrected version, in accordance with the GitHub rules and the software license.

Alternatively, you can still use the ABRS, following the instructions, with the models already provided here (so skip the GUI altogether).

Best of luck!

Authus1234 commented 1 year ago

Thank you for your reply. I'll create an issue later. I selected the top-left quadrant, but it seems nothing changed. And I've been using the model you provided from the beginning, but I couldn't skip the GUI. So are there any other possible problems here? Looking forward!