brycefrank / pyfor

Tools for analyzing aerial point clouds of forest data.
MIT License

Add avenues for creating bare earth models for already-classified points #50

Closed · brycefrank closed this issue 5 years ago

brycefrank commented 5 years ago

This could easily be added directly to Zhang2003 and KrausPfeifer1998, and should be accompanied by additions to the user manual.

bw4sz commented 5 years ago

I was just about to ask for this kind of thing. In general, it wasn't clear to me whether normalize reclassifies the ground or uses the previously classified ground. If you give me a sense of the general structure you'd like, I can give it a go. Right now I'm getting some funny results in a handful of cases.

[two image attachments]

Just FYI, green are ground-truth trees, blue are predictions. Top is RGB, bottom is a lidar height mask (any points above 2 m normalized). I'm getting a lot of artifacts in areas with a lot of slope. This was done on a full tile; we are just looking at a portion. My plan is to emulate something from lidR. Ideas appreciated, and I'll PR.
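For context, here is a minimal sketch of how such a binary height mask can be built from height-normalized points. This is illustrative only, not the DeepLidar code linked further down, and the cell size is an assumption:

import numpy as np

def height_mask(x, y, z, cell=0.5, threshold=2.0):
    # Bin normalized points onto a grid and flag any cell that
    # contains a point above the height threshold.
    xi = ((x - x.min()) / cell).astype(int)
    yi = ((y - y.min()) / cell).astype(int)
    mask = np.zeros((yi.max() + 1, xi.max() + 1), dtype=bool)
    canopy = z > threshold
    mask[yi[canopy], xi[canopy]] = True
    return mask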

brycefrank commented 5 years ago

The default Cloud.normalize is a wrapper for Zhang2003, the same algorithm JR uses:

https://github.com/brycefrank/pyfor/blob/master/pyfor/ground_filter.py

The normalization does the following (lines 137-148; a rough sketch follows the list):

  1. Computes a bare earth model
  2. For each point, finds the underlying elevation of the bare earth model
  3. Subtracts this elevation from each point
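A conceptual sketch of those three steps, not pyfor's actual code; griddata and the linear method here are stand-ins for the Grid/Raster machinery:

import numpy as np
from scipy.interpolate import griddata

def normalize_against_bem(points_xyz, ground_xyz):
    # 1. Compute a bare earth model by interpolating ground elevations.
    # 2. Look up the bare earth elevation underneath each point.
    bem_z = griddata(ground_xyz[:, :2], ground_xyz[:, 2],
                     points_xyz[:, :2], method='linear')
    # 3. Subtract that elevation from each point's z.
    points_xyz[:, 2] -= bem_z
    return points_xyz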

Another method in Cloud, Cloud.subtract, can use already-computed bare earth models (as .tif) to normalize heights. It is untested.

bw4sz commented 5 years ago

Okay, let me make a fork and poke around. Can we agree on some goals?

  1. Read in a ground model from either disk or an in-memory Raster object.

  2. Compute a new raster from the current ground classification, emulating

    dtm <- lidR::grid_terrain(las, res = 1, algorithm = lidR::knnidw(k = 10, p = 2))

    from lidR (see the sketch after this list).

  3. Use the same routine as the ground_filter classes to perform steps 2 and 3 of the normalization above.
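For reference, here is a rough Python sketch of what that lidR call computes: a k-nearest-neighbor inverse-distance-weighted (IDW) terrain grid. The function and parameter names are mine, not pyfor's or lidR's:

import numpy as np
from scipy.spatial import cKDTree

def knnidw_terrain(ground_xyz, res=1.0, k=10, p=2.0):
    # Build a KD-tree over the x/y coordinates of classified ground points.
    tree = cKDTree(ground_xyz[:, :2])
    xmin, ymin = ground_xyz[:, :2].min(axis=0)
    xmax, ymax = ground_xyz[:, :2].max(axis=0)
    gx, gy = np.meshgrid(np.arange(xmin, xmax, res), np.arange(ymin, ymax, res))
    cells = np.column_stack([gx.ravel(), gy.ravel()])
    # Inverse-distance weighting over the k nearest ground points per cell.
    dist, idx = tree.query(cells, k=k)
    dist = np.maximum(dist, 1e-9)  # guard against division by zero
    w = 1.0 / dist ** p
    z = (w * ground_xyz[idx, 2]).sum(axis=1) / w.sum(axis=1)
    return z.reshape(gx.shape)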

brycefrank commented 5 years ago

Here is what I am seeing:

Cloud.subtract already accomplishes (1), but if I recall correctly it reads a .tif from file rather than using a Raster object (a rough sketch of that idea is at the end of this comment).

(2) is straightforward: just subset the points dataframe to classification == 2 (that is the LAS spec standard for classified ground points), then proceed with Grid.interpolate. Where to put the wrapper is the question.

What's missing I think is a standard way to use already existing rasters. The Raster object is really only created as a derivative of a Cloud (or Grid). We could add functionality to create a Raster from file, but this conflicts with the goal of converting Raster to in-memory rasterio objects.

It is not so much the challenge of writing the new functions as how to organize them in a consistent way, I think.
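As an aside, here is roughly what subtracting an on-disk bare earth .tif could look like using rasterio's sampling API. A free-hand sketch, not pyfor's implementation; the function name is hypothetical:

import numpy as np
import rasterio

def subtract_bem(x, y, z, bem_path):
    # Sample the bare earth elevation underneath each point, then subtract.
    with rasterio.open(bem_path) as bem:
        ground = np.array([vals[0] for vals in bem.sample(zip(x, y))])
    return z - ground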

brycefrank commented 5 years ago

As for the original normalization weirdness: have you inspected the point cloud? There seems to be a cliff or some other oddity around that location.

bw4sz commented 5 years ago

I will definitely follow up on the current results; I just haven't had time to sit and look. It's pretty common, and I haven't noticed this in the lidR normalization, but I've been using the already-classified ground points there.

I'm resubmitting a paper this week, but I have a feeling that normalization is definitely a problem, so I'll commit some time to a PR and we can discuss.

bw4sz commented 5 years ago

Here is an example from a standalone small file (the other was from a full tile, done all at once). I have not investigated this yet, but this is where I'll start.

[two image attachments]

Just for completeness, that .laz and .tif are here (054): https://github.com/weecology/NeonTreeEvaluation/blob/master/SJER/plots/SJER_054.laz

and the code to generate the binary height mask is here:

https://github.com/weecology/DeepLidar/blob/10421f0ec9532e09e47f60f994b0f969ed3af5b5/DeepForest/Lidar.py#L160

brycefrank commented 5 years ago

Here is my normalization attempt:

import pyfor

pc = pyfor.cloud.Cloud('SJER_054.laz')
pc.normalize(0.33)  # ground filter cell size of 0.33 m

# Note: pc_2m is an alias of pc, not a copy; both now hold the filtered points.
pc_2m = pc
pc_2m.data.points = pc.data.points[pc.data.points['z'] > 2]
pc_2m.write('test.las')

And the output (no mask, just displaying points > 2 m)

[image: points above 2 m after normalization]

Perhaps adjusting the parameters is what you need; here I just adjusted the normalization cell size. Other parameters can be adjusted as well. See the Zhang2003 docstrings and the sample in the pyfor samples repository.

bw4sz commented 5 years ago

I've noticed the effect of cell size as well. Trying to generalize has been a bit tricky; I've got millions of crops. Let me try 0.33 on a bunch and see. A cell size of 1 had been working well until this recent site. What was your intuition for making it bigger/smaller?

I don't know how NEON produces their ground class, but my hope was that it would be pretty robust.


brycefrank commented 5 years ago

I am sure it is robust. It is not too hard to write the ground points to a raster file to reconstruct the bare earth model. You could try to use it with Cloud.subtract, but that function is untested:

pc = pyfor.cloud.Cloud('your_cloud.laz')

# Note: pc_ground is an alias of pc, so the filter below also drops
# non-ground points from pc; reload the full cloud before subtracting.
pc_ground = pc
pc_ground.data.points = pc.data.points[pc.data.points['classification'] == 2]

# Rasterize the classified ground points to a bare earth model (.tif).
pc_ground.grid(0.5).interpolate("min", "z").write('your_bem.tif')

# Reload the full cloud and subtract the bare earth model from it.
pc = pyfor.cloud.Cloud('your_cloud.laz')
pc.subtract('your_bem.tif')

That is a free-hand attempt, but I ran something similar for the last post.

Of course, coordinate references and such should be added to pc.crs for best results.


The Zhang filter basically operates on the neighboring "steepness" of a rasterized point cloud: if a point/cell is too steep compared to a reference point, it is removed from consideration as ground. This process iterates a few times to produce the BEM. Larger cell sizes wash out this "steepness" and leave artifacts like those you observed, usually in flatter landscapes. A rough sketch of the idea follows.
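For intuition, here is a sketch of that iterative idea in the spirit of Zhang et al. (2003)'s progressive morphological filter. This is not pyfor's implementation; the window sizes, slope, and thresholds are illustrative:

import numpy as np
from scipy import ndimage

def zhang_like_ground_mask(surface, cell_size, windows=(1, 2, 4, 8),
                           slope=0.5, dh0=0.3, dh_max=2.5):
    # surface: 2D array of minimum elevations per raster cell.
    ground = np.ones(surface.shape, dtype=bool)
    current = surface.copy()
    for w in windows:
        # A grayscale morphological opening flattens features
        # smaller than the window (e.g. trees, buildings).
        opened = ndimage.grey_opening(current, size=2 * w + 1)
        # Cells that drop by more than the slope-based threshold are
        # too "steep" relative to their neighborhood: not ground.
        dh = min(slope * w * cell_size + dh0, dh_max)
        ground &= (current - opened) <= dh
        current = opened
    return ground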

brycefrank commented 5 years ago

Added in https://github.com/brycefrank/pyfor/commit/7ae2458af90fa7528e4369fb73f7d0adf113dca0