[Closed] WouterDX closed this issue 3 years ago
Fairly similar problem to https://github.com/PDAL/python/issues/23:

```python
import json
import logging

import pdal

def read_lidarfile(lidarfile, grouplabel='Intensity'):
    '''Read lidarfile into a python array object'''
    reader = {
        "pipeline": [
            lidarfile,
            {
                "type": "filters.groupby",
                "dimension": grouplabel
            }
        ]
    }
    pipeline = pdal.Pipeline(json.dumps(reader))
    pipeline.validate()
    n_points = pipeline.execute()
    logging.info('Pipeline selected {} points'.format(n_points))
    return pipeline.arrays

for lidarfile in lidarfiles:
    resultgroups = read_lidarfile(lidarfile)
```

Memory keeps growing in this loop. Setting `resultgroups = None` changes nothing, so the issue seems to be in the function itself.
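One way to narrow down a leak like this is to check whether the growth is visible to Python's own allocator. A minimal diagnostic sketch (not from the issue; `leaky` is a hypothetical stand-in for `read_lidarfile`): if `tracemalloc` shows flat usage while the process RSS keeps climbing, the memory is being held on the native (C++) side, which `tracemalloc` cannot see.

```python
import tracemalloc

def leaky(buf=[]):
    # Hypothetical stand-in for read_lidarfile: the mutable default
    # argument deliberately retains memory across calls.
    buf.append(bytearray(10_000))
    return len(buf)

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()
for _ in range(100):
    leaky()
after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Growth here is Python-level, so tracemalloc sees it; a native leak
# would show little or no growth in these numbers.
grew = (after - before) > 100 * 10_000
```

If the loop over `read_lidarfile` shows no Python-level growth under the same measurement, that points at the underlying pipeline object rather than the returned arrays.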
Fixed by #71
The same `read_lidarfile` function was also instrumented with a reference-count check (the log line references `pointgroups`, which presumably holds `pipeline.arrays`):

```python
def read_lidarfile(lidarfile, grouplabel='Intensity'):
    '''Read lidarfile into a python array object'''
    # ... same reader pipeline, validate() and execute() as above ...
    pointgroups = pipeline.arrays
    logging.info('Reference count of arrays is: %d' % sys.getrefcount(pointgroups))
    return pointgroups

for lidarfile in lidarfiles:
    resultgroups = read_lidarfile(lidarfile)
    # pseudo: results = list(map(dosomething, resultgroups))
```

Memory keeps growing in this loop. Setting `resultgroups = None` changes nothing, so the issue seems to be in the function itself.
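A note on interpreting that log line: `sys.getrefcount` includes the temporary reference created by the call itself, so the reported number is one higher than the count held elsewhere, and a stable refcount on the returned arrays does not rule out memory retained by the underlying C++ pipeline. A small sketch (the list is a stand-in for `pipeline.arrays`):

```python
import sys

arrays = [bytearray(8)]        # stand-in for pipeline.arrays
base = sys.getrefcount(arrays) # counts `arrays` plus the call's temporary reference

alias = arrays                 # one additional Python-level reference
# The count rises by exactly one per new Python reference; native-side
# allocations tied to the object are invisible to this counter.
delta = sys.getrefcount(arrays) - base
```

So even a refcount that stays constant across loop iterations only tells you about Python references, not about native buffers the pipeline may still hold.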