uzh-rpg / rpg_emvs

Code for the paper "EMVS: Event-based Multi-View Stereo" (IJCV, 2018)

Undergraduate Computer Science student using a Prophesee Gen 4 HD event camera and a UR5 robot arm #15

Open · Sturok opened this issue 3 years ago

Sturok commented 3 years ago

Hi, I was wondering if there is any reason the EMVS algorithm wouldn't work with a 1280x720 event camera?

I am an undergraduate computer science student, and I have outlined my setup below. I am not using a system that has ROS, so I rewrote the data loading code so that my data goes straight into the message data structures, vectors, and maps that the EMVS code uses.

I have calibrated my event camera twice using the tool provided by Prophesee and get similar numbers each time; I hardcode these into a CameraInfo msg.
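For illustration, what I hardcode looks roughly like this (the numbers below are placeholders, not my actual calibration values):

```python
import numpy as np

# Placeholder intrinsics for a 1280x720 sensor; not my real calibration.
width, height = 1280, 720
fx, fy = 1000.0, 1000.0   # focal lengths in pixels (placeholder)
cx, cy = 640.0, 360.0     # principal point (placeholder)

# 3x3 pinhole matrix K, laid out as in a CameraInfo msg
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# plumb_bob distortion coefficients [k1, k2, p1, p2, k3] (placeholder zeros)
D = np.zeros(5)
```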

I have an RGB camera and an event camera attached to a robot arm. A set path and orientation of the arm's tool is programmed into the robot, and I record video and event data as the robot moves the cameras along the path.

My event data is x, y, polarity, time in comma-separated format. My pose data comes from a VSLAM algorithm.
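For reference, a minimal sketch of how I load the events (the file name is made up, and I assume one event per line in that column order):

```python
import csv

events = []
with open("events.csv") as f:            # hypothetical file name
    for row in csv.reader(f):
        x, y = int(row[0]), int(row[1])  # pixel coordinates
        polarity = int(row[2])           # typically 0 or 1
        t = float(row[3])                # timestamp
        events.append((t, x, y, polarity))

events.sort(key=lambda e: e[0])          # process events in time order
```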

To sanity-check my changes, I converted one of the examples in the README into the same format as my event and pose data, fed it through my altered code, and it produces the expected output.

With my own data and pose information, I get unexpected results: the point cloud looks like straight lines diverging from a central point. I will try to post some pictures.

Hmm, when I recorded it, I was watching on a small screen, and there is a LOT of noise across the entire video. It might be a simple situation of "put crap in"... well, it's not going to make gold out of crap, is it?

guillermogb commented 3 years ago

Hi @Sturok. No, there is no reason why it should not work due to the sensor resolution (provided you adjust some parameters of the DSI). Is there any light source (e.g., a flickering light) causing more events than the ones due to the camera motion? Do you have hot pixels in your data? You should remove those if possible. Did you check the suggestions here?

No, it's not going to make gold out of crap :P. I wish it did.
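If it helps, one rough way to find hot pixels is to count events per pixel and drop the extreme outliers. A quick sketch (not from our codebase; the threshold is just a heuristic):

```python
import numpy as np

def remove_hot_pixels(events, width=1280, height=720, sigma=5.0):
    """Drop events from pixels whose event count is an extreme outlier.

    events: iterable of (t, x, y, polarity) tuples.
    Heuristic sketch: the mean + sigma*std threshold is a tunable guess.
    """
    counts = np.zeros((height, width), dtype=np.int64)
    for _, x, y, _ in events:
        counts[y, x] += 1
    hot = counts > counts.mean() + sigma * counts.std()
    return [e for e in events if not hot[e[2], e[1]]]
```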

Sturok commented 3 years ago

Thanks for your reply, I appreciate it. I will look into the hot pixels. I added a filter called an "activity filter" from the Prophesee API into the pipeline that converts the RAW file to CSV, and I ran it on only 2 seconds from the middle of my path; I get more "stuff" now. The path half-orbits the hanging satellite, moving from the end shown in the pictures above to looking at the other end, and along the way the camera rotates quite a lot around the Z axis to look back at the other side of the satellite.
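For anyone reading this later: as I understand it, the activity filter keeps an event only if there was recent activity in its neighbourhood. A rough stand-in I used to understand the idea (not the Prophesee implementation; the time window is a guess):

```python
import numpy as np

def activity_filter(events, width=1280, height=720, dt=2e4):
    """Keep an event only if some pixel in its 3x3 neighbourhood fired
    within the last dt time units. Rough stand-in, not Prophesee's filter."""
    last = np.full((height, width), -np.inf)  # last event time per pixel
    kept = []
    for t, x, y, p in events:                 # events sorted by time t
        y0, y1 = max(y - 1, 0), min(y + 2, height)
        x0, x1 = max(x - 1, 0), min(x + 2, width)
        if t - last[y0:y1, x0:x1].max() <= dt:
            kept.append((t, x, y, p))
        last[y, x] = t
    return kept
```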

[images: DSI and point cloud]

Sturok commented 3 years ago

I think I might also have the translations and orientations in my path wrong compared to the axes in the imagery. I have come across both conventions: X/Y ground plane with Z up/down, and X/Z ground plane with Y up/down. I believe I am using the former, not the latter. And as a computer vision algorithm, the camera axes are X left/right, Y up/down, and Z depth, so I think I am inputting my trajectory wrong, since I have Y as depth.
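If that diagnosis is right, here is a sketch of the conversion I think I need, assuming my poses are in a Z-up world frame and the camera convention is X right, Y down, Z forward (this is my guess, not something prescribed by EMVS):

```python
import numpy as np

# Change of basis from a Z-up world frame (X/Y ground plane) to the usual
# vision camera frame (X right, Y down, Z = depth). My guess at the fix.
R = np.array([[1.0, 0.0,  0.0],   # cam x <-  world x
              [0.0, 0.0, -1.0],   # cam y <- -world z (up becomes down)
              [0.0, 1.0,  0.0]])  # cam z <-  world y (forward becomes depth)

def to_camera_frame(p_world):
    return R @ np.asarray(p_world, dtype=float)

# e.g. a point 2 m ahead on the ground plane and 0.5 m up:
print(to_camera_frame([0.0, 2.0, 0.5]))   # -> [ 0.  -0.5  2. ]
```

The same rotation would also have to be applied to the orientations, not just the translations.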

guillermogb commented 3 years ago

Yeah, maybe the coordinates are wrong, as you mention. BTW, did you properly set the min and max depth values used in the DSI according to the expected depth of your scene? I do not know... it looks like somewhere between 0.3 meters (min) and 3 meters (max)? @Sturok
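For intuition, the DSI is a grid of depth planes between those two values. A rough sketch of sampling them, assuming uniform spacing in inverse depth (one common choice for this kind of space sweep; an illustration, not copied from our code):

```python
import numpy as np

def depth_planes(min_depth, max_depth, num_planes):
    """Depths sampled uniformly in inverse depth between the two bounds."""
    inv = np.linspace(1.0 / max_depth, 1.0 / min_depth, num_planes)
    return 1.0 / inv[::-1]      # ascending: planes packed densely near min_depth

print(depth_planes(0.3, 3.0, 5))   # -> [0.3, ~0.39, ~0.55, ~0.92, 3.0]
```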

Sturok commented 3 years ago

[image: depth-colored reconstruction]

I swapped Y and Z (and their parts of the quaternion) and I get something that at least starts to look a bit like the satellite on the left side of the picture. I am a bad computer scientist right now at 11pm my local time... I definitely have set those depth values in the past, but on these more recent "successes" I haven't looked strictly at what config settings I am using.

guillermogb commented 3 years ago

+1. Good luck

Sturok commented 3 years ago

--min_depth=0.3 --max_depth=1 --dimZ=100 --dimX=1280 --dimY=720 --start_time_s=4 --stop_time_s=5

--adaptive_threshold_c=9 --median_filter_size=3 --radius_search=0.05 --min_num_neighbors=3

These are my settings if I limit processing to one second.

Sturok commented 3 years ago

[images: DSI and point cloud for the one-second window]

Sturok commented 3 years ago

[video attachment]

Sturok commented 3 years ago

The video above is the majority of the event file I am working with (the screen recording cut it a little short). Is the flickering of the target in and out a bad thing? (I imagine it is.)

guillermogb commented 3 years ago

The events look a bit "weak". [image] Could you try a more advantageous motion (a "wax-on" motion) to validate the setup before trying others?

Sturok commented 3 years ago

Hey, thanks a lot for your replies! Yeah, I have some lab time with the camera in the coming days... Is "wax on" a more circular back-and-forth motion or a linear back-and-forth motion? The Karate Kid was a long time ago for me.

I am trying to think about how to get the robot arm to do little circles; it might be easier once I am staring at it and the control pendant. I control the path by setting way-points. Do you think a motion like the one below would work (but probably with a lot more little loops in it)? Each triangle would be a way-point; while semicircle moves are possible, I can do smoother moves between the way-points ("blending" the corners is easier), but at the end of the day I will probably test multiple things. [image: sketched path]
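For concreteness, a hypothetical helper for generating the way-points of one small circle (the centre, radius, and point count are made-up values):

```python
import math

def circle_waypoints(cx, cy, z, radius=0.05, n=12):
    """Way-points approximating a circle of the given radius around
    (cx, cy) at height z. All numbers here are illustrative guesses."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n),
             z)
            for k in range(n)]

for wp in circle_waypoints(0.4, 0.0, 0.3):
    print(wp)
```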

guillermogb commented 3 years ago

Dear @Sturok. Please take a look at the five examples at the end of the EMVS video for the circular type of motion that I mention. I would try with only 1-2 circles first, as in the examples, to understand how good the reconstructions can be and to adjust the parameters (median filter, adaptive_threshold_c, etc.). Then, I would try more complicated motions.