A subtle bug was limiting the number of files the crawler could successfully crawl before it abruptly stopped (without raising an error). Investigation revealed that the cause was the "depth" variable in the `THREDDSLoader` class, which was originally put in place to limit how deep the crawler descends into a catalog. That original logic was largely retained, but it contained an error that only surfaced when crawling deeply nested data structures such as the datasets in the UofT CMIP6 data catalog: the depth variable only ever decreased over the course of the crawl, and once it reached 0 the crawler stopped entirely. This PR fixes that issue.
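For illustration, here is a minimal sketch of the failure mode, assuming the depth counter was held as shared state and decremented on every catalog visited rather than per recursion branch. This is not the actual `THREDDSLoader` code; the `Catalog` class and crawler names are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Catalog:
    name: str
    children: list = field(default_factory=list)


class BuggyCrawler:
    """Decrements a shared counter on every catalog visited, so `depth`
    effectively counts total catalogs seen, not nesting depth."""

    def __init__(self, depth: int = 3):
        self.depth = depth

    def crawl(self, catalog: Catalog):
        if self.depth <= 0:
            return  # stops silently once the shared counter hits 0
        self.depth -= 1
        print("visited", catalog.name)
        for child in catalog.children:
            self.crawl(child)


class FixedCrawler:
    """Passes a per-branch depth budget down the recursion, so sibling
    branches do not consume each other's remaining depth."""

    def __init__(self, depth: int = 3):
        self.max_depth = depth

    def crawl(self, catalog: Catalog, depth: int | None = None):
        depth = self.max_depth if depth is None else depth
        if depth < 0:
            return  # prunes only branches nested deeper than max_depth
        print("visited", catalog.name)
        for child in catalog.children:
            self.crawl(child, depth - 1)


if __name__ == "__main__":
    # A wide but shallow tree: one root with five leaf children.
    root = Catalog("root", [Catalog(f"leaf{i}") for i in range(5)])
    BuggyCrawler(depth=3).crawl(root)  # visits root + 2 leaves, then stops
    FixedCrawler(depth=3).crawl(root)  # visits all six catalogs
```

With a shared counter, a wide catalog exhausts the budget long before any real depth limit is hit, which matches the observed behavior of the crawl halting partway through with no error.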