@solmazhajmohammadi could you please provide an example of a bad file, what makes it bad, and how you could tell when it is fixed?
This is one sample image of full-field data. Example: 2016-06-08 03-27-36-711. The position of the point clouds with respect to each other is off. This is a second sample; the data is still tilted. Example: 2016-06-07 21-58-13-417
@smarshall-bmr can you please run the ply worker on the datasets from 2016-06-07 and 2016-06-08, so we can make sure that calibration is working? Then we can re-run it over all datasets and transfer the files again to NCSA. And if there is a bug in the calibration file, we can fix it.
@solmazhajmohammadi The ply worker is defaulting to a start date in November of last year and I'm unable to change it. If it's allowed to run long enough it may loop back around to earlier dates but I have no way of knowing.
You can copy the data to another folder and set the ply worker's input folder to that path. Set the output path to the same folder. (Not in the gantry/MovingSensor, though.) I can copy the files through ftp and check it. Thanks.
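A minimal sketch of the staging workflow described above, in Python, with hypothetical paths (the actual cache layout and worker configuration are not shown in this thread):

```python
import shutil
from pathlib import Path

# Hypothetical paths -- the real cache layout is not specified in this thread.
SOURCE = Path("/gantry_data/MovingSensor/scanner3DTop/2016-06-07")
STAGING = Path("/gantry_data/reproc/scanner3DTop/2016-06-07")

# Copy the day's raw data to a staging folder outside MovingSensor so the
# worker does not touch the live acquisition tree.
shutil.copytree(SOURCE, STAGING, dirs_exist_ok=True)

# The ply worker would then be pointed at STAGING for both its input and
# output paths (via whatever config mechanism the worker provides).
```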
@jdmaloney want to tag you on this thread as it might be relevant to our discussion yesterday.
@solmazhajmohammadi @smarshall-bmr I'm guessing the default start date is Nov. 1 of last year? We've been purging data (from all sensors) off the gantry-cache machine to ensure we maintain sufficient space on that server in case (when) we experience network trouble getting data back to NCSA. We just had an issue where, for a couple of weeks, we were getting terrible transfer performance (we couldn't even keep up with the non-hyperspectral data), which we traced back to a change made in the University of Arizona's data center. The problem was resolved early this week and the logjam is just now clearing. Issues like that, and times when we have to take the ROGER system here down for regular maintenance and security patching, are why we put that cache server in place. We've been lax in keeping it purged, especially since there was a good chunk of time last year when we didn't have the VNIR or SWIR sensors running.
This latest network issue spurred us to finish development on our automated data transfer validation and purge code, which is in the final phase of testing now; I hope to put it into full operation next week. I had already validated data through Oct 31 when we started having the network issue, so that data was purged to ensure we had space for incoming data (we had filled to 93%, with only 5 TB free). Going forward, the plan is to look at directories from two days prior: data from sensors in 2017-03-05 directories, for example, will be inspected on March 7th to ensure all files also reside on the ROGER file system; if files are missing they are transferred, and then the date directory is purged from the cache.
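A sketch of the two-day validate-then-purge cycle described here, assuming hypothetical mount points and a simple size comparison; the production code may well differ:

```python
import shutil
from datetime import date, timedelta
from pathlib import Path

# Hypothetical mount points; the real cache/ROGER layout is not given here.
CACHE_ROOT = Path("/gantry_cache/raw_data")
ROGER_ROOT = Path("/roger/raw_data")

def validate_and_purge(sensor: str, day: date) -> None:
    """Verify a sensor's day directory against ROGER, transfer anything
    missing, then purge the cache copy."""
    cache_dir = CACHE_ROOT / sensor / day.isoformat()   # e.g. .../2017-03-05
    roger_dir = ROGER_ROOT / sensor / day.isoformat()

    for src in cache_dir.rglob("*"):
        if src.is_dir():
            continue
        dst = roger_dir / src.relative_to(cache_dir)
        # Re-transfer any file that is missing or has a size mismatch.
        if not dst.exists() or dst.stat().st_size != src.stat().st_size:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)

    # Only after every file is verified does the date directory get purged.
    shutil.rmtree(cache_dir)

# Directories are inspected two days after acquisition, per the schedule above.
validate_and_purge("scanner3DTop", date.today() - timedelta(days=2))
```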
If there are some dates you want me to push back down to the gantry, I'd be happy to do so, so a determination can be made about calibration. However, even now, and especially as time goes on, it is not feasible to have all the reprocessed data reside on the cache; there just isn't room for it. It looks like you need the June 7th and 8th data; if someone can confirm that, I can start getting data from those dates put back on the gantry-cache.
Thanks @jdmaloney for the clarification. It seems that data from last season has been deleted from the cache server, so I am not sure whether the issue described above is caused by the new calibration or by a data transfer problem. I am testing data from different days during the season; I'll keep you updated on whether the issue is related to data transfer.
@jdmaloney I checked a couple of recent datasets; it seems that with the new calibration we don't have this issue. @smarshall-bmr were you able to re-run the plyclowder on a sample set? If the calibration works for a sample set, we can re-generate the ply files from season one.
@solmazhajmohammadi I'm not sure what you mean by plyclowder. Do we have a way to run the PLY worker without moving the data back to the local server?
I meant plyworker... You can copy two consecutive days' data back to the local server for checking.
@solmazhajmohammadi what is the next step?
@dlebauer These are the season one datasets. As we discussed in last week's meeting, the data has been deleted from the gantry cache server (by NCSA). Part of the data can be transferred back to the server so we can re-run the plyworker on the dataset and make sure the calibration is correct. Eventually, we need to transfer all the data back and run it through the plyworker again.
Data needs to be run through the ply worker again. @jdmaloney and @max-zilla will transfer all png files back to the cache server so that @solmazhajmohammadi can re-run it.
Can you please transfer back the data from 2016-06-08 and 2016-06-07? Thanks
@solmazhajmohammadi This has been done
@solmazhajmohammadi The PlyWorker has been modified to start with dates in 2017. I don't actually know how to change this to hit the older files!
If the files are in the same path that the plyworker is pointed to, you should be fine. When the worker starts, it will generate the ply files if they don't exist already.
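A sketch of that skip-if-exists behavior, assuming a hypothetical day/scan directory layout; the worker's actual check may differ:

```python
from pathlib import Path

# Hypothetical input root; the worker's real configuration is not shown here.
INPUT_ROOT = Path("/gantry_data/reproc/scanner3DTop")

for scan_dir in sorted(INPUT_ROOT.glob("*/*")):  # <day>/<timestamp> folders
    if any(scan_dir.glob("*.ply")):
        continue  # a .ply already exists, so the worker skips this scan
    # placeholder for the worker's actual raw-to-ply conversion step
    print(f"would generate ply for {scan_dir}")
```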
@solmazhajmohammadi It looks like the PlyWorker is restarting in January and never getting back to older files?
@solmazhajmohammadi identified the issue and @smarshall-bmr needs to move these files from output to input
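A sketch of that move, with hypothetical output/input directories (the worker's real paths are not given in this thread):

```python
import shutil
from pathlib import Path

# Hypothetical directories; the actual worker paths are not in this thread.
OUTPUT = Path("/gantry_data/reproc/output")
INPUT = Path("/gantry_data/reproc/input")

# Move each processed day directory back to the input tree so the worker
# picks the old files up on its next pass.
for day_dir in OUTPUT.iterdir():
    if day_dir.is_dir():
        shutil.move(str(day_dir), str(INPUT / day_dir.name))
```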
@smarshall-bmr is this done?
@smarshall-bmr update please
@max-zilla I'm not finding the old files for reprocessing. Are we sure they haven't been processed and deleted?
It appears that the files were processed already and JD's script cleaned them, so I believe it's done.
@smarshall-bmr can you please confirm?
I just noticed that the calibrated data has not been transferred properly to Globus. Part of the data from June still has the old calibration.