Hi Eric,
For the OpenCV code, we want to run it on Ubuntu. Amruta is working on configuring the PCs in the observation room. (BTW Amruta, how's your progress?)
You need to set up an OpenCV and Qt environment. I also have a compile error telling me there are multiple definitions of main. I suspect the GLOB_RECURSE in the CMake file caused the problem? It's strange because the code was running last week but started throwing this compile error on Saturday... Anirudh, I wonder if you have any idea why.
thanks, nuo
On Tue, Oct 13, 2015 at 10:58 AM, Eric Williamson notifications@github.com wrote:
Despite my attempts, I am unable to set up a correct build environment to test the networking code from the networking branch against the OpenCV code.
I was wondering if someone with a correct build environment could verify that it builds for them.
In addition, I would like more details on the build environment we will be using (if Windows, we need to include different socket files than in a Linux environment) so that we can get the networking running as fast as possible.
The interface specified in the code should not change, provided my brain still functioned when writing it, so we can also begin getting the data into the proper struct and calling the sendX functions appropriately without worrying about those interfaces changing.
You will need OpenCV 3 (not any 2.4.x version or earlier) to compile the code. Install Qt as well. Right now the code isn't using Qt specifically, but OpenCV uses it in the background. Ideally install Qt 5 (not Qt 4); right now OpenCV will work with either, but the newer version is always preferred.
Not sure about the multiple definition of main issue; can you paste it here? The GLOB_RECURSE is there to compile all the *.cpp files in the src/ directory; it seems to be working for me, so I'm not sure what the problem is.
For networking, are we going with TCP or WebSockets? Either way could someone post the syntax for the JSON we are sending (or post a link to where I can find it)? Incidentally, Qt provides modules for both TCP and WebSockets communication, so since Qt is already being used by OpenCV we can use it in the future for the networking, unless we want to pull in some other library for WebSockets or continue using tcp_client.
Also, I see you already wrote the blob struct for use in the networking code - right now all the blob/track information is in the Track class. It doesn't have some things like age right now because I'm still working on implementing the Kalman filter for tracking blobs, but it will eventually. It does have the ID number and bounding box implemented. With this in mind, do you think we still need the blob struct?
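For reference, the Track class currently looks roughly like this (a sketch from memory, so names and details may not match the repo exactly):

```cpp
// Rough sketch of the current Track class (illustrative only; the real
// class lives in the repo and may differ in naming and detail).
#pragma once

#include <opencv2/core.hpp>

class Track {
public:
    Track(int id, const cv::Rect &bbox) : id(id), bbox(bbox) {}

    int getId() const { return id; }
    cv::Rect getBoundingBox() const { return bbox; }

    // Fields like age/updateTime will be added once the Kalman-filter-based
    // tracking is in place.

private:
    int id;
    cv::Rect bbox;
};
```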
I did use the latest OpenCV 3. However, it complains that it cannot find <opencv2/bgsegm.hpp>, so what I did was include <opencv2/video.hpp> and <opencv2/video/background_segm.hpp> and use cv::createBackgroundSubtractorMOG2 instead.
The problem is:

CMakeFiles/mw-tracking.dir/CMakeFiles/2.8.12.2/CompilerIdCXX/CMakeCXXCompilerId.cpp.o: In function `main':
CMakeCXXCompilerId.cpp:(.text+0x0): multiple definition of `main'
CMakeFiles/mw-tracking.dir/main.cpp.o:main.cpp:(.text+0x46): first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [mw-tracking] Error 1
make[1]: *** [CMakeFiles/mw-tracking.dir/all] Error 2
make: *** [all] Error 2
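For reference, the MOG2 fallback I mentioned is roughly this (a minimal sketch, not my exact local code; the capture source is just a placeholder):

```cpp
// Minimal sketch of the MOG2-based background subtraction workaround
// (uses only built-in OpenCV 3 modules, no opencv_contrib needed).
#include <opencv2/core.hpp>
#include <opencv2/video.hpp>
#include <opencv2/videoio.hpp>

int main() {
    cv::VideoCapture cap(0);  // placeholder: camera index or a video file path
    cv::Ptr<cv::BackgroundSubtractorMOG2> subtractor =
        cv::createBackgroundSubtractorMOG2();

    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        subtractor->apply(frame, fgMask);  // foreground mask for blob detection
        // ... contour extraction / tracking would consume fgMask here
    }
    return 0;
}
```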
Okay, so the include error is because OpenCV has two implementations of the Gaussian-based background subtraction we are using. There is MOG2, which is part of the built-in video module, and MOG, which is part of the additional bgsegm module. Apparently bgsegm is part of opencv_contrib, a collection of OpenCV modules that aren't included in the default configuration. You need to download opencv_contrib separately and tell OpenCV the location of the downloaded modules when compiling OpenCV. I don't think there is a big difference between MOG and MOG2, so for now we can just use MOG2 and avoid installing the contrib modules.
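To make the difference concrete, here is roughly how the two variants are created (a sketch; the bgsegm line only compiles if OpenCV was built with opencv_contrib):

```cpp
// The two Gaussian-mixture background subtractors in OpenCV 3 (sketch).
#include <opencv2/video.hpp>    // built-in: MOG2
#include <opencv2/bgsegm.hpp>   // opencv_contrib: MOG (needs the contrib modules)

void makeSubtractors() {
    // Built-in, always available:
    cv::Ptr<cv::BackgroundSubtractor> mog2 = cv::createBackgroundSubtractorMOG2();

    // From the bgsegm contrib module, only available if OpenCV was compiled
    // with -DOPENCV_EXTRA_MODULES_PATH pointing at opencv_contrib/modules:
    cv::Ptr<cv::BackgroundSubtractor> mog = cv::bgsegm::createBackgroundSubtractorMOG();
}
```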
I'm not sure why the multiple definition issue is happening. It should only be pulling in files in the src directory to compile. Are you compiling the code in the build directory, separate from the src directory? Is your build directory within the src directory? If that's the issue, I'll update the readme to make it more clear.
Also do you think I should include instructions for compiling OpenCV in the readme? It was a little complicated to figure out so if enough people are going to be compiling OpenCV then I'll include instructions.
I think I'm compiling in the src directory, and the build directory is not within the src directory... I'll retry that later. And yes, please write more detailed instructions for compiling OpenCV; I think it's always good to document as much as possible.
All right, in that case I'll also include instructions for compiling the bgsegm module. I just checked MOG and MOG2, and it looks like MOG (in bgsegm) is a little bit faster.
Eric, what build issues were you having?
@nuoma I also had the same problem with the bgsegm header initially. I fixed it by adding a separate header file to the source files. Try building the branch I just added to the repository, which has the required changes.

@ewmson I have written the TCP socket code and integrated it with the tracking code. Currently it is sending a test structure to the Node.js server. For testing with your code, you will have to change the server URL from "localhost" to the real server URL. I have checked the communication with the Node.js server and it is working on localhost; please check the same with your code. A rough sketch of the test send is below.

@anidev If we can meet on Saturday, we can get all the blob information sent to the server. I tried to extract the blob details from Tracks, but it was giving me some errors; I was not able to get all the details like "age" and "update time". On Saturday we can sit down and fix these issues. For now I have added the branch "serverOpenCV" so that @ewmson can continue his testing.

@nuoma We can do the installation of Ubuntu and OpenCV on our machines on Friday, if possible.
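Conceptually, the test send is just this (a minimal sketch using a plain POSIX socket; the host, port, and message format are placeholders, not the actual serverOpenCV code):

```cpp
// Minimal sketch of sending one test message to the server over TCP.
// Host, port, and message contents here are placeholders only.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

bool sendTestMessage(const std::string &host, int port, const std::string &json) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host.c_str(), &addr.sin_addr);  // e.g. "127.0.0.1" for localhost

    if (connect(sock, reinterpret_cast<sockaddr *>(&addr), sizeof(addr)) < 0) {
        close(sock);
        return false;
    }
    ssize_t sent = send(sock, json.c_str(), json.size(), 0);
    close(sock);
    return sent == static_cast<ssize_t>(json.size());
}
```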
I fixed the bgsegm issue in master. Feel free to rebase your respective branches. The MOG class that bgsegm provides appears to be better than the built-in MOG2 class, but for now we can use MOG2 until I put up instructions for compiling bgsegm.
@bharambe77 I won't be here this weekend because of fall break, but I can still work on the code this week and while I am away. Right now, the Track class does not have "age" or "update time" or any fields other than "id" and "bounding box", because I am still implementing the Kalman filter. Once that is done and we get proper blob tracking, the Track class will have those fields. I think I can get it done within a few days.
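For context, a common way to set this up is a constant-velocity Kalman filter on each blob's centroid; a minimal sketch of that (not the actual kalman-branch code, and the noise values are just placeholders) looks like this:

```cpp
// Sketch of a constant-velocity Kalman filter for one blob centroid.
// State: [x, y, vx, vy]; measurement: [x, y]. Not the kalman-branch code.
#include <opencv2/video/tracking.hpp>

cv::KalmanFilter makeCentroidFilter(float x, float y) {
    cv::KalmanFilter kf(4, 2, 0, CV_32F);
    kf.transitionMatrix = (cv::Mat_<float>(4, 4) <<
        1, 0, 1, 0,
        0, 1, 0, 1,
        0, 0, 1, 0,
        0, 0, 0, 1);
    cv::setIdentity(kf.measurementMatrix);                          // measure x, y only
    cv::setIdentity(kf.processNoiseCov, cv::Scalar::all(1e-2));     // placeholder tuning
    cv::setIdentity(kf.measurementNoiseCov, cv::Scalar::all(1e-1)); // placeholder tuning
    kf.statePost = (cv::Mat_<float>(4, 1) << x, y, 0, 0);
    return kf;
}

// Per frame: predict, then correct with the measured centroid, e.g.
//   cv::Mat prediction = kf.predict();
//   kf.correct((cv::Mat_<float>(2, 1) << measuredX, measuredY));
```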
Also off topic, but I think we should have a few guidelines for code formatting, to keep things consistent. For example, the code right now uses four spaces for indents, braces on the same line, #pragma once, etc. I don't really care if we change those or not, but we should try to be consistent, so I'll post something about how the code is formatted so far.
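As a quick illustration of those conventions (this snippet exists only to show the formatting, not any real code):

```cpp
// Formatting illustration: #pragma once, four-space indents, braces on the same line.
#pragma once

class Example {
public:
    void doSomething() {
        if (ready) {
            count += 1;
        }
    }

private:
    bool ready = false;
    int count = 0;
};
```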
We can use either TCP or WebSockets. If we envision the server managing the blobs (as we were talking about at the meeting), then I do not see a need for the server to send data back to the client, so TCP will be fine; the server-side code for receiving TCP has been thoroughly tested and I do not think it will need modifications.
We do not strictly need a blob struct; I just created one so that whatever data we have can go into a single struct.
The "age" field is set internally depending on whether the blob is a new/update/remove blob, so it is not required to be set by the track (I just left in all the fields Matlab was sending, as I assumed those would be filled in at some point). Some fields like "updateTime" were always set to 0 from Matlab, and "age", as I previously stated, is set depending on which type of blob it is. ("Age" may not be a good term for it; I would prefer "type", but it is left over from before I was on the project and I do not want to break any legacy code.) @bharambe77 I will attempt to merge my code with the branch you posted so we can get this tested as soon as possible and meet the goal Nuo wants for this week.
@anidev I will look into using the Qt socket implementations; I will also adjust the style of my changes to adhere to the format the vision team defines once those guidelines are written up.
I will post an update when I push to my branch again, and will also update the Confluence page with the correct JSON data and what each field means (https://webapps.es.vt.edu/confluence/display/ICATVT/API+Documentation will be updated with the correct JSON when I have the time).
All right, TCP is fine with me. I was envisioning a structure where we have an abstract class for the network communications and two implementing classes, one for TCP and one for WebSockets, so we can easily switch between them. If we aren't ever going to use WebSockets then we won't need that complexity.
Thanks for the confluence link, that was what I was looking for. However I get a "page not found" error; is that something I have to fix or is everyone having the same issue?
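For reference, the structure I was picturing is roughly the following (class and method names are placeholders, and it assumes Qt's QtNetwork and QtWebSockets modules):

```cpp
// Sketch of the abstract sender idea with Qt-based TCP and WebSocket
// implementations. Class and method names here are placeholders.
#pragma once

#include <QByteArray>
#include <QString>
#include <QTcpSocket>
#include <QUrl>
#include <QWebSocket>

class BlobSender {
public:
    virtual ~BlobSender() {}
    virtual void connectToServer(const QString &host, quint16 port) = 0;
    virtual void sendMessage(const QByteArray &json) = 0;
};

class TcpBlobSender : public BlobSender {
public:
    void connectToServer(const QString &host, quint16 port) override {
        socket.connectToHost(host, port);
    }
    void sendMessage(const QByteArray &json) override {
        socket.write(json);
        socket.flush();
    }
private:
    QTcpSocket socket;
};

class WebSocketBlobSender : public BlobSender {
public:
    void connectToServer(const QString &host, quint16 port) override {
        socket.open(QUrl(QStringLiteral("ws://%1:%2").arg(host).arg(port)));
    }
    void sendMessage(const QByteArray &json) override {
        socket.sendTextMessage(QString::fromUtf8(json));
    }
private:
    QWebSocket socket;
};
```

The tracking code would hold a BlobSender pointer, so swapping TCP for WebSockets later would not touch the tracking side at all.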
I updated the serverComm branch with blob sending. Could somebody test it and see how it works? Right now I'm assuming that for blob deletion the server doesn't actually care about the bounding box, only the id, so it's sending 0s for the bounding box (see main.cpp). If that's wrong and the server does care about blob data for deletion, we can do that, but we'll have to restructure the code slightly.
If the networking code works then we can merge it into the kalman branch and continue development there until the kalman filter is fully implemented.
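Concretely, the removal message ends up being built along these lines (a sketch only; the JSON key names and the use of "age" as the type field are illustrative, not necessarily the exact keys the server expects):

```cpp
// Sketch of building a blob-removal message with the bounding box zeroed,
// since only the id matters for deletion. Key names and value types are
// illustrative, not necessarily what the server actually parses.
#include <QByteArray>
#include <QJsonDocument>
#include <QJsonObject>

QByteArray makeRemovalMessage(int blobId) {
    QJsonObject bbox;
    bbox["x"] = 0;
    bbox["y"] = 0;
    bbox["width"] = 0;
    bbox["height"] = 0;

    QJsonObject msg;
    msg["id"] = blobId;
    msg["boundingBox"] = bbox;
    msg["age"] = QStringLiteral("remove");  // "age" doubles as the message type, per the thread

    return QJsonDocument(msg).toJson(QJsonDocument::Compact);
}
```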
For removal, Matlab was sending the blob bounding box, but I do not know if the clients are using it (I know that for the server logic I do not care either way).
I am unable to test it as I do not have access to a camera or the "../walk-cut.mov" file that it is currently looking for.
If I could have access to that file I would be able to verify that we are sending correct blobs to the server.
I'll upload the video file I was using.
I did a basic test with Wireshark to inspect the network traffic; it looks like the data is being sent to the server fine, and also apparently the server is sending the same data back every time.
Yes, the server is currently broadcasting all blobs back to the blobsenders (this was from when we had decided to request that feature so that id handling could be done on the blobsender side), but I believe the current plan is to have that done on the server. I can turn off the broadcasting back to the blobsenders if that is what we want.