Open · tiger-bug opened 4 years ago
Hello @tiger-bug, I have a similar case with my custom dataset: my evaluation loss shoots up very badly even though the training loss and accuracy behave as expected. Let me know if you have found a feasible solution.
Good afternoon,
First of all, thank you for your work in point clouds and for this great architecture you made.
I don't know if anyone else has run into this, but I am seeing low accuracy in my testing code and am not sure why. I have modified the code for LiDAR classification: instead of using XYZRGBX'Y'Z', I use XYZ, intensity, curvature, height minus the minimum height per 3 m grid, and X'Y'Z'. I split the point cloud into 5 m grids and divide those grids into 80 percent training, 10 percent validation, and 10 percent testing. Training and validation accuracy is approximately 80 to 90 percent (depending on the number of epochs). When I test on the held-out 'testing' grids, however, I get results around 45 to 50 percent. I'm using 4 classes: ground, building, vegetation, and other.
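For reference, the grid-based split described above can be sketched roughly as below. This is only a minimal illustration with NumPy, not your actual code: the array layout, function name, and cell size are assumptions (points with X, Y in the first two columns, 5 m cells, cells shuffled and partitioned 80/10/10).

```python
import numpy as np

def grid_split(points, cell=5.0, seed=0):
    """Hypothetical sketch: assign each point to a 5 m XY grid cell
    ('hash'), then partition whole cells 80/10/10 into train/val/test.
    points: (N, F) array whose first two columns are X and Y."""
    # Integer cell index per point from its XY position
    keys = np.floor(points[:, :2] / cell).astype(np.int64)
    # One id per unique (ix, iy) cell; cell_ids maps each point to its cell
    _, cell_ids = np.unique(keys, axis=0, return_inverse=True)
    n_cells = cell_ids.max() + 1

    # Shuffle cells, then cut at 80% and 90%
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_cells)
    n_train = int(0.8 * n_cells)
    n_val = int(0.1 * n_cells)
    train_cells = set(order[:n_train])
    val_cells = set(order[n_train:n_train + n_val])

    # 0 = train, 1 = val, 2 = test, assigned per cell (not per point),
    # so all points of one grid cell land in the same partition
    split = np.array([0 if c in train_cells else 1 if c in val_cells else 2
                      for c in cell_ids])
    return points[split == 0], points[split == 1], points[split == 2]

# Toy usage: 10,000 random points in a 100 m x 100 m area
pts = np.random.default_rng(1).uniform(0, 100, size=(10000, 3))
train, val, test = grid_split(pts)
```

Splitting by cell rather than by point matters here: a per-point random split would leak nearly identical neighboring points between train and test and inflate test accuracy, whereas a per-cell split keeps spatial regions disjoint.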
Here is my testing code:
Note: I use the term 'hashes' for grids (not originally my idea; some of this is based on work found here). Also, I know this is a bit sloppy and incomplete; I am just trying to get it to work right now.
Note: I'm running this in Jupyter with TF 1.14 and Docker. If I need to clear anything up, I certainly will. I realize this isn't the most clear; I've been trying to solve this all day. I can't see any error in the code, so maybe it's my method? Again, if I need to be more clear, I will. Thank you!