NorthStarUAS / ImageAnalysis

Aerial imagery analysis, processing, and presentation scripts.
MIT License

Unable to test on new aerial images dataset #15

Open · alishan2040 opened this issue 2 years ago

alishan2040 commented 2 years ago

Hello @clolsonus,

Thanks for providing the implementation.

I have a few questions about the work. I installed all of the project's dependencies and ran it on some images with GPS information to create a map. However, I ran into the issues you can see in the attached screenshot. Also, the getNode() method from the props package is not working: it returns no children.

I've also attached one of the input images in case you want to verify the geo information embedded in the images. Can you please tell me what I am doing wrong?

[screenshots attached]
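For reference, a quick way to sanity-check the props tree is to query a node and list its children. This is an illustrative sketch only; the getNode(path, create) / PropertyNode accessor names are assumed from the props package this project uses, and the paths are made up:

```python
# Illustrative sketch: poke at the props tree to see whether a node was ever
# populated. getNode(path, create) and the accessor names are assumptions
# about the props package's API, not verified against it.
from props import getNode

config = getNode("/config", True)       # True = create the node if missing
print(config.getChildren())             # an empty list means nothing was ever set here

matcher = getNode("/config/matcher", True)
matcher.setInt("min_chain_len", 3)      # hypothetical write ...
print(matcher.getInt("min_chain_len"))  # ... and read-back: 3
```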

clolsonus commented 2 years ago

Hi @alishan2040,

I haven't worked with senseFly images before. Would you be able to share your data set with me (privately is fine)? I might need to do some digging/tracing to figure out what's happening here. Happy to help if I can.

If you don't have a better way to share the images you can send them to me directly through my filemail account: https://curtolson.filemail.com

Best regards,

Curt.

clolsonus commented 2 years ago

I was able to run the sample data you sent me. I first created a preliminary camera config, which I have pushed to the repository. Then I ran the process.py script with --match-strategy bestratio. This is a new option/strategy I just added, and I really like how well it works in the general case.
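For context, a strategy named bestratio presumably builds on the classic Lowe ratio test. A generic OpenCV sketch of that test follows; it is illustrative only, not the repository's implementation, though the SIFT detector and 0.75 ratio mirror defaults visible in the logs later in this thread:

```python
# Generic Lowe ratio-test feature matching with OpenCV. A sketch of the idea
# that ratio-based match strategies build on, not this repo's actual code.
import cv2

def ratio_test_matches(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    good = []
    for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2):
        # keep a match only if it clearly beats the runner-up candidate
        if m.distance < ratio * n.distance:
            good.append(m)
    return kp1, kp2, good
```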

I also pushed a few other minor code tweaks that I had sitting here at home but had been slow to commit to the repository.

If you are still running into glitches: the script writes a detailed log of everything it does to image_directory/ImageAnalysis/messages-hostname. Maybe you could share that with me and I can spot something there.

One potential issue didn't seem to cause a problem here, but I'm still a little worried about it: DJI image metadata reports the orientation of the camera, while the senseFly appears to report the orientation of the aircraft. So there may be a need to add a 90-degree pitch-down offset to all the image orientations; the system didn't seem to need that for the 7-image sample set you sent me. (Just a heads up that this may be something we'll need to circle back to and not forget about.)
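A sketch of composing the aircraft attitude with such a mount offset is below. It is illustrative, using scipy rather than the repository's own transform code, and the frame conventions are assumptions; the pitch_deg=-90.0 mount default is visible in the Namespace dumps later in this thread:

```python
# Compose an aircraft (body) attitude with a fixed camera mount offset,
# e.g. a 90-degree pitch-down for a nadir-pointing camera. Sketch only.
from scipy.spatial.transform import Rotation as R

def camera_attitude(yaw_deg, pitch_deg, roll_deg, mount_pitch_deg=-90.0):
    body = R.from_euler("ZYX", [yaw_deg, pitch_deg, roll_deg], degrees=True)
    mount = R.from_euler("ZYX", [0.0, mount_pitch_deg, 0.0], degrees=True)
    cam = body * mount  # apply the mount rotation in the body frame
    return cam.as_euler("ZYX", degrees=True)  # camera [yaw, pitch, roll]
```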

[Screenshot from 2021-10-24 06-36-27]

alishan2040 commented 2 years ago

@clolsonus Thanks for the prompt reply. Yes, it worked on my side too, but I have a few questions about the process.

Thanks for your help. Looking forward to hearing from you!

clolsonus commented 2 years ago

@alishan2040 Great to hear you got the code to work on your side too!

Let me know if you run into any issues or questions running the explorer.py script. At some point I would like to work through the code to generate a single GeoTIFF for the entire stitched area, but that has been a lower priority because other existing tools already do that task pretty well for most people.
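For reference, writing a single georeferenced GeoTIFF is mostly bookkeeping once the mosaic exists. A minimal sketch with rasterio follows; it is not the repository's code, and the mosaic array, corner coordinates, and pixel size are placeholders:

```python
# Sketch: write an in-memory RGB mosaic as one georeferenced GeoTIFF.
# All values below are placeholders, not outputs of this project.
import numpy as np
import rasterio
from rasterio.transform import from_origin

mosaic = np.zeros((3, 1000, 1000), dtype=np.uint8)  # (bands, rows, cols)
transform = from_origin(west=74.48, north=31.38, xsize=1e-6, ysize=1e-6)

with rasterio.open("mosaic.tif", "w", driver="GTiff",
                   height=mosaic.shape[1], width=mosaic.shape[2],
                   count=3, dtype="uint8", crs="EPSG:4326",
                   transform=transform) as dst:
    dst.write(mosaic)
```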

alishan2040 commented 2 years ago

@clolsonus Thanks for answering my questions. I had the opportunity to capture data using a Parrot Sequoia, but when I ran process.py on this new dataset I faced a lot of new issues. One of the main issues is that it fails to form groups of images: when I debugged the code, I found that compute(image_list, matches) from ImageAnalysis/scripts/lib/groups.py always returned an empty list of groups. I also confirmed that the images have all the necessary information, such as (lat, lon). How can we tackle these issues on new datasets?

Thanks!

[screenshots attached]

clolsonus commented 2 years ago

You really should have a data set of at least 10 pictures if you want the stitching process to work effectively.


alishan2040 commented 2 years ago

@clolsonus There are 54 images in my dataset in total, and I faced the same issue with it. You can see the logs below:

/content/ImageAnalysis/scripts
Project processed on host: 06d6b5f3e0b6
Project processed with arguments: Namespace(cam_calibration=False, camera=None, detector='SIFT', filter='gms', force_altitude=None, grid_detect=1, ground=None, group=0, match_ratio=0.75, match_strategy='bestratio', max_angle=25.0, max_dist=None, min_chain_length=3, min_dist=None, min_pairs=25, orb_max_features=20000, pitch_deg=-90.0, project='/content/parrot', refine=False, reject_margin=0, roll_deg=0.0, scale=0.4, star_line_threshold_binarized=8, star_line_threshold_projected=10, star_max_size=16, star_response_threshold=30, star_suppress_nonmax_size=5, surf_hessian_threshold=600, surf_noctaves=4, yaw_deg=0.0)
Step 1: setup the project
Creating analysis directory: /content/parrot/ImageAnalysis
project: creating meta directory: /content/parrot/ImageAnalysis/meta
project: creating cache directory: /content/parrot/ImageAnalysis/cache
project: creating state directory: /content/parrot/ImageAnalysis/state
project: project configuration doesn't exist: /content/parrot/ImageAnalysis/config.json
Continuing with an empty project configuration
Created project: /content/parrot
Camera auto-detected: Parrot_Sequoia Parrot Sequoia None
Camera file: ../cameras/Parrot_Sequoia.json
Unknown child type: mount <class 'props.PropertyNode'>
Step 2: configure camera poses and per-image meta data files
Configuring images
Creating pix4d image pose file: /content/parrot/pix4d.csv
images: 54
Setting aircraft poses
pose: IMG_211028_073944_0000_RGB.JPG yaw=290.8 pitch=17.2 roll=-2.0
extreme attitude: IMG_211028_074000_0001_RGB.JPG roll: -44.01 pitch: 75.26
extreme attitude: IMG_211028_074004_0002_RGB.JPG roll: -24.4 pitch: 71.85
extreme attitude: IMG_211028_074006_0003_RGB.JPG roll: -54.81 pitch: 33.47
extreme attitude: IMG_211028_074015_0004_RGB.JPG roll: -94.38 pitch: 30.14
extreme attitude: IMG_211028_074018_0005_RGB.JPG roll: -95.94 pitch: 57.13
extreme attitude: IMG_211028_074020_0006_RGB.JPG roll: -50.6 pitch: 37.72
extreme attitude: IMG_211028_074024_0007_RGB.JPG roll: -27.08 pitch: 28.93
extreme attitude: IMG_211028_074032_0008_RGB.JPG roll: -53.54 pitch: 35.58
extreme attitude: IMG_211028_074035_0009_RGB.JPG roll: -20.09 pitch: 30.81
pose: IMG_211028_074036_0010_RGB.JPG yaw=325.9 pitch=4.1 roll=11.7
pose: IMG_211028_074039_0011_RGB.JPG yaw=327.1 pitch=8.1 roll=24.7
pose: IMG_211028_074053_0012_RGB.JPG yaw=312.2 pitch=-18.6 roll=15.6
extreme attitude: IMG_211028_074054_0013_RGB.JPG roll: 25.49 pitch: 5.14
extreme attitude: IMG_211028_074056_0014_RGB.JPG roll: 31.74 pitch: 17.21
extreme attitude: IMG_211028_074057_0015_RGB.JPG roll: 29.32 pitch: 20.97
extreme attitude: IMG_211028_074058_0016_RGB.JPG roll: -13.8 pitch: 44.85
extreme attitude: IMG_211028_074100_0017_RGB.JPG roll: -28.77 pitch: 25.85
extreme attitude: IMG_211028_074103_0018_RGB.JPG roll: -35.34 pitch: 40.13
extreme attitude: IMG_211028_074123_0019_RGB.JPG roll: -76.94 pitch: 13.7
extreme attitude: IMG_211028_074125_0020_RGB.JPG roll: -66.56 pitch: 12.87
extreme attitude: IMG_211028_074127_0021_RGB.JPG roll: -52.98 pitch: 14.57
extreme attitude: IMG_211028_074128_0022_RGB.JPG roll: -42.93 pitch: 8.13
extreme attitude: IMG_211028_074130_0023_RGB.JPG roll: -40.46 pitch: 2.8
extreme attitude: IMG_211028_074131_0024_RGB.JPG roll: -91.71 pitch: 0.9
extreme attitude: IMG_211028_074133_0025_RGB.JPG roll: -30.39 pitch: 56.51
extreme attitude: IMG_211028_074137_0026_RGB.JPG roll: -42.87 pitch: 23.1
extreme attitude: IMG_211028_074154_0027_RGB.JPG roll: -7.91 pitch: -36.74
pose: IMG_211028_074157_0028_RGB.JPG yaw=296.5 pitch=-8.7 roll=1.6
pose: IMG_211028_074158_0029_RGB.JPG yaw=305.2 pitch=-22.1 roll=-3.0
pose: IMG_211028_074200_0030_RGB.JPG yaw=315.4 pitch=-23.6 roll=-11.3
extreme attitude: IMG_211028_074201_0031_RGB.JPG roll: -3.67 pitch: -32.95
pose: IMG_211028_074202_0032_RGB.JPG yaw=319.9 pitch=7.4 roll=-16.1
extreme attitude: IMG_211028_074204_0033_RGB.JPG roll: -37.96 pitch: 12.6
extreme attitude: IMG_211028_074206_0034_RGB.JPG roll: -25.81 pitch: 11.02
pose: IMG_211028_074216_0035_RGB.JPG yaw=306.7 pitch=-15.4 roll=-24.3
extreme attitude: IMG_211028_074227_0036_RGB.JPG roll: -25.52 pitch: 3.11
extreme attitude: IMG_211028_074229_0037_RGB.JPG roll: 1.05 pitch: 25.65
extreme attitude: IMG_211028_074231_0038_RGB.JPG roll: 27.99 pitch: 16.81
pose: IMG_211028_074234_0039_RGB.JPG yaw=331.8 pitch=-0.0 roll=17.0
pose: IMG_211028_074254_0040_RGB.JPG yaw=308.4 pitch=-0.8 roll=-9.8
extreme attitude: IMG_211028_074256_0041_RGB.JPG roll: -24.15 pitch: 27.64
pose: IMG_211028_074258_0042_RGB.JPG yaw=311.1 pitch=-1.4 roll=-17.9
pose: IMG_211028_074300_0043_RGB.JPG yaw=295.2 pitch=24.8 roll=-11.0
extreme attitude: IMG_211028_074313_0044_RGB.JPG roll: -47.58 pitch: -8.08
extreme attitude: IMG_211028_074315_0045_RGB.JPG roll: -38.91 pitch: -0.09
pose: IMG_211028_074317_0046_RGB.JPG yaw=275.1 pitch=-8.5 roll=-5.5
extreme attitude: IMG_211028_074341_0047_RGB.JPG roll: 70.76 pitch: 66.67
extreme attitude: IMG_211028_074342_0048_RGB.JPG roll: 67.98 pitch: 62.2
extreme attitude: IMG_211028_074344_0049_RGB.JPG roll: -3.13 pitch: 80.27
extreme attitude: IMG_211028_074346_0050_RGB.JPG roll: -83.98 pitch: 45.35
extreme attitude: IMG_211028_074349_0051_RGB.JPG roll: -34.34 pitch: 52.96
pose: IMG_211028_074416_0052_RGB.JPG yaw=308.7 pitch=-15.9 roll=6.4
pose: IMG_211028_074446_0053_RGB.JPG yaw=317.7 pitch=20.1 roll=-21.9
NED reference location: [31.377038072643746, 74.48203398324375, 0.0]
Setting camera poses (offset from aircraft pose.)
camera pose: IMG_211028_073944_0000_RGB
camera pose: IMG_211028_074036_0010_RGB
camera pose: IMG_211028_074039_0011_RGB
camera pose: IMG_211028_074053_0012_RGB
camera pose: IMG_211028_074157_0028_RGB
camera pose: IMG_211028_074158_0029_RGB
camera pose: IMG_211028_074200_0030_RGB
camera pose: IMG_211028_074202_0032_RGB
camera pose: IMG_211028_074216_0035_RGB
camera pose: IMG_211028_074234_0039_RGB
camera pose: IMG_211028_074254_0040_RGB
camera pose: IMG_211028_074258_0042_RGB
camera pose: IMG_211028_074300_0043_RGB
camera pose: IMG_211028_074317_0046_RGB
camera pose: IMG_211028_074416_0052_RGB
camera pose: IMG_211028_074446_0053_RGB
Initializing the SRTM interpolator
SRTM: loading DEM tiles
switched to http://bailu.ch so no remapping dictionary needed.
SRTM: parsing .hgt file: /var/tmp/N31E074.hgt.zip
SRTM: constructing LLA interpolator
SRTM: constructing NED area interpolator
/content/parrot/ImageAnalysis/smart.json: json load error: [Errno 2] No such file or directory: '/content/parrot/ImageAnalysis/smart.json'
Step 3: feature matching
Loading keypoint (pair) matches: 100% 16/16 [00:00<00:00, 19588.11it/s]
detector: SIFT
image scale for fearture detection/matching: 0.4
Matching features
0 217.6066808728646
1 24.76323144280852
2 30.925487356971164
3 39.760079922760276
4 23.90717596411241
5 24.14614529686835
6 49.50855833407613
7 72.87859016313135
8 100.13484550448987
9 27.702309778426116
10 48.75721095607442
11 24.33096398057288
12 76.4068379586795
13 135.6527595052686
14 24.95158363516944
Median pair interval: 39.8 m
Generating work list for range: 0 - 160
100% 16/16 [00:00<00:00, 2513.72it/s]
100% 93/93 [11:19<00:00, 7.30s/it]
saving matches and image meta data ...
Pair-wise matches successfully saved.
Average # of features per image found = 53645
Loading feature keypoints: 100% 16/16 [00:02<00:00, 6.70it/s]
Loading keypoint (pair) matches: 100% 16/16 [00:00<00:00, 3166.86it/s]
[orig] Determining feature usage in matching pairs...
Indexing features by unique uv coordinates: 100% 16/16 [00:00<00:00, 115.49it/s]
Merging keypoints with duplicate uv coordinates: 100% 16/16 [00:00<00:00, 396.70it/s]
Checking for pair duplicates (there never should be any): 100% 16/16 [00:00<00:00, 2312.58it/s]
Testing for 1 vs. n keypoint duplicates (there never should be any): 100% 16/16 [00:00<00:00, 5683.82it/s]
Constructing unified match structure: 100% 16/16 [00:00<00:00, 235.11it/s]
Total feature pairs in image set: 6037
Keypoint average instances = 2.0 (should be 2.0 here)
Linking common matches together into chains:
Iteration 0 (6037): 100% 6037/6037 [00:00<00:00, 355702.15it/s]
Iteration 1 (6008): 100% 6008/6008 [00:00<00:00, 328125.45it/s]
Replacing keypoint indices with uv coordinates: 100% 6008/6008 [00:00<00:00, 438646.75it/s]
Sorting matches by longest chain first.
Total unique features in image set: 6008
Keypoint average instances: 2.00
Writing full group chain file: /content/parrot/ImageAnalysis/matches_grouped
Loading source matches: /content/parrot/ImageAnalysis/matches_grouped
Looking up [smart] base elevation for each image location...
Estimating initial projection for each feature... 100% 6008/6008 [00:01<00:00, 5200.51it/s]
Writing triangulated group file: /content/parrot/ImageAnalysis/matches_grouped
Loading source matches: /content/parrot/ImageAnalysis/matches_grouped
matched features: 6008
Start of grouping algorithm...
/config/matcher/min_chain_len: 3
max features desired per image: 2000
Notice: I should really work on this formula ...
Start of new group level: 0
Seed index: 0 connections: 4
Seeding group with: IMG_211028_074053_0012_RGB
Iteration: 0
Iteration: 1
Total images: 16
Group sizes:
Counting allocated features...
Features: 18/6008
Writing grouped tagged matches: /content/parrot/ImageAnalysis/matches_grouped
Step 4: Optimization (fit)
Loading source matches: /content/parrot/ImageAnalysis/matches_grouped
matched features: 6008
Setting up optimizer data structures...
Traceback (most recent call last):
  File "process.py", line 382, in <module>
    cam_calib=args.cam_calibration)
  File "/content/ImageAnalysis/scripts/lib/optimizer.py", line 299, in setup
    for name in groups[group_index]:
IndexError: list index out of range
CPU times: user 5.58 s, sys: 724 ms, total: 6.3 s
Wall time: 11min 26s
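The crash at the end is the grouping result coming back effectively empty: only 16 of the 54 images passed the attitude check, the grouping step allocated just 18 of 6008 features, and groups[group_index] then indexes past the end of the list. A defensive check (a sketch with illustrative names, not the repository's code) would make the failure mode explicit:

```python
# Sketch: fail with a clear message instead of an IndexError when the
# grouping step produced no usable group. Names are illustrative.
groups = []       # e.g. what compute(image_list, matches) returned here
group_index = 0

if group_index >= len(groups):
    raise SystemExit(
        f"No usable image group (got {len(groups)} groups); "
        "most images were probably rejected, e.g. for extreme attitude.")
```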

clolsonus commented 2 years ago

If you are able to share your image set with me as before, I can take a closer look, but from the log it appears that my system thinks many (most) of your images are looking too close to the horizon to process (versus looking straight down). It skips those images because when the horizon (or features far away) is in view, it can lead to a lot of issues downstream in the processing that are hard to deal with. But if that's not the situation with your pictures, maybe my system is extracting the pose information incorrectly? I'm happy to take a closer look if the image set is something you are allowed to share with me.
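The gate implied by the "extreme attitude" lines (and the max_angle=25.0 option visible in the Namespace dump above) is roughly the following, in sketch form; the function name and exact comparison are assumptions, not the repository's code:

```python
# Sketch of an attitude gate: skip images whose camera is tilted too far
# from straight down. The 25-degree default matches the max_angle option
# in the logs; everything else here is illustrative.
def usable_pose(roll_deg, pitch_deg, max_angle_deg=25.0):
    return abs(roll_deg) <= max_angle_deg and abs(pitch_deg) <= max_angle_deg
```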


alishan2040 commented 2 years ago

Dear @clolsonus, yes, I have sent you the images taken with a GoPro HERO 9 Black as well. I first created the camera JSON using the 99-new-camera.py script. I found that the images do not have yaw information, so I manually set the flag images_have_yaw = True in scripts/lib/pose.py to avoid the rcUAS installation. After that, the images were processed, but the generated map was not very accurate. I've also attached screenshots for review. Kindly test and let me know if you need further information.

Looking forward to hearing from you! Thanks, Shan

clolsonus commented 2 years ago

Hi Shan,

The challenges with this data set include (a) a camera with significant (but unknown) fisheye distortion, and (b) very inaccurate initial pose estimation.

We can manually estimate the distortion parameters and lens calibration for your camera; that might be a good first step, since I can't find good information for the HERO 9 Black online. If you look in the 3rd_party directory under ltseez-opencv, there is a file called "camera-calibration-checker-board_9x7.pdf". Print it out and lay it as flat as possible on a table (so the paper isn't rippled or curled); I sometimes press the paper between two large books for a day or two to help flatten it after printing. Then, with your camera, take maybe 30-50 images of the checkerboard pattern from a variety of angles and distances, and at different places within the frame. This step can be a bit tedious, sorry! Then you (or I) can run the calibrate.py script in the same directory to generate a preliminary camera calibration. The script should estimate the camera intrinsic matrix (K) and the lens distortion parameters. It won't be perfect, but it should be close.

Here is a basic explanation (with a few specific different details) of the calibration process: https://www.theeminentcodfish.com/gopro-calibration/
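In sketch form, the standard OpenCV checkerboard calibration that a script like calibrate.py automates looks like the following. This is the generic algorithm, not the script itself; a 9x7-square board has 8x6 interior corners, and the image directory is hypothetical:

```python
# Generic OpenCV checkerboard calibration: detect interior corners in many
# views, then solve for the intrinsic matrix K and distortion coefficients.
import glob
import cv2
import numpy as np

pattern = (8, 6)  # interior corners of a 9x7-square checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.JPG"):  # hypothetical folder of checkerboard shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error:", rms)  # K = intrinsics, dist = lens distortion
```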

Side note: recently most of my work has been with DJI systems, which give a fairly accurate pose estimate (gimbal roll, pitch, yaw) and have close to zero lens distortion. But the system should work with fisheye lenses as long as we can get a reasonable estimate of the lens calibration from the checkerboard pattern.

Also keep in mind that a general principle of image stitching is that redundancy and overlap are good. I don't know anything about the system you are using to capture the images, but if you are able to capture a set of images in a fairly consistent grid (like 10x10, for example) with 70-80% end-lap and side-lap (overlap), that might work better as a first test to figure out whether we can make the system work with your setup.
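As a back-of-the-envelope check, the trigger spacing for a given overlap falls out of the ground footprint. The numbers below are illustrative placeholders, not your camera's specs:

```python
# Sketch: distance between exposures for a target overlap fraction.
def trigger_spacing_m(agl_m, sensor_width_mm, focal_mm, overlap=0.75):
    footprint_m = agl_m * sensor_width_mm / focal_mm  # ground footprint across the frame
    return footprint_m * (1.0 - overlap)

# e.g. 100 m AGL, 6.17 mm sensor width, 4.0 mm lens:
# ~154 m footprint, so ~39 m between exposures for 75% overlap
print(trigger_spacing_m(100.0, 6.17, 4.0))
```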

Best regards,

Curt.


alishan2040 commented 2 years ago

Dear @clolsonus, thanks for the detailed answer. Suppose I just want to calculate a (lat, lon) for each pixel of the source JPG images by orienting each image to true north and computing the bearing angle from a centre reference location, i.e. the one given by the compute_ned_reference_lla(self) method in your script, which computes a center reference location (lon, lat) for the group of images. Is this approach workable? Thanks
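In sketch form, the flat-ground version of that idea projects each pixel's ray to the ground and converts the resulting NED offset back to lat/lon around the reference point. Everything below is illustrative (an assumed calibrated intrinsic matrix K, a known camera pose, and a small-offset spherical-earth approximation), not the repository's code:

```python
# Sketch: pixel (u, v) -> (lat, lon) on flat ground at NED down = 0.
import numpy as np

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius, fine for small offsets

def pixel_to_lla(u, v, K, R_cam2ned, cam_pos_ned, ref_lat_deg, ref_lon_deg):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel -> camera-frame ray
    ray_ned = R_cam2ned @ ray_cam                       # rotate the ray into NED
    t = -cam_pos_ned[2] / ray_ned[2]                    # scale ray to hit the ground plane
    north = cam_pos_ned[0] + t * ray_ned[0]
    east = cam_pos_ned[1] + t * ray_ned[1]
    lat = ref_lat_deg + np.degrees(north / EARTH_RADIUS_M)
    lon = ref_lon_deg + np.degrees(east / (EARTH_RADIUS_M * np.cos(np.radians(ref_lat_deg))))
    return lat, lon
```

This only holds for level terrain near the reference point; over varied terrain you would intersect the ray with a DEM (the project already loads SRTM tiles) rather than a flat plane.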