PDAL / python

PDAL's Python Support

Cannot assign values to structured array returned from PDAL pipeline #125

Closed chambbj closed 1 year ago

chambbj commented 1 year ago

I was experimenting with a workflow recently that looked something like this. The basic workflow is easy: read some LAS, do something to the classifications (we'll just say zero them out in this example), and write the updated point cloud to LAS.

```python
import numpy as np
import pdal

reader = pdal.Reader("/path/to/input.las").pipeline()
reader.execute()

reader.arrays[0]['Classification'] = np.zeros_like(reader.arrays[0]['Classification'])

writer = pdal.Writer("/path/to/output.las").pipeline(reader.arrays[0])
writer.execute()
```

I was a little surprised to find that I could not assign to the structured array returned by the PDAL Python bindings. I'd expect to be able to overwrite the existing classifications with zeros. Despite the array being marked as WRITEABLE (see `reader.arrays[0]['Classification'].flags`), it seems I cannot actually change these values.
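As background on why this is surprising: field access on a NumPy structured array normally returns a writable view that shares memory with the parent array, so assignment through it should propagate. A minimal sketch with a stand-in array (hypothetical dtype, no PDAL involved):

```python
import numpy as np

# Stand-in for a PDAL point buffer (hypothetical dtype; real PDAL
# arrays carry many more dimensions such as X, Y, Z, Intensity, ...).
pts = np.zeros(3, dtype=[('X', 'f8'), ('Classification', 'u1')])

view = pts['Classification']        # field access returns a view, not a copy
print(view.flags.writeable)         # True
print(np.shares_memory(view, pts))  # True: the view aliases pts

view[:] = 2                         # writing through the view updates pts
print(pts['Classification'])        # [2 2 2]
```

That is the behavior I expected from `reader.arrays[0]['Classification']` as well.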

My current workaround is to make a copy of the returned structured array, update the values there, and pass that to the writer.

```python
import copy
import numpy as np
import pdal

reader = pdal.Reader("/path/to/input.las").pipeline()
reader.execute()

data = copy.deepcopy(reader.arrays[0])
data['Classification'] = np.zeros_like(data['Classification'])

writer = pdal.Writer.las("/path/to/output.las").pipeline(data)
writer.execute()
```
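For what it's worth, the `copy.deepcopy` can likely be replaced by the array's own `.copy()` method, which also produces a fresh, independent, writeable array with the same structured dtype. A sketch using a stand-in array rather than a real PDAL buffer (the dtype is hypothetical):

```python
import numpy as np

# Stand-in for reader.arrays[0]; the dtype here is hypothetical.
original = np.zeros(4, dtype=[('X', 'f8'), ('Classification', 'u1')])
original['Classification'] = 1

data = original.copy()      # independent, writeable copy of the buffer
data['Classification'] = 0  # zero the copy; broadcasting fills every point

print(data['Classification'])      # [0 0 0 0]
print(original['Classification'])  # [1 1 1 1]  (original untouched)
```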
hobu commented 1 year ago

When I run your failing example on my OSX build, I don't get the same failure you do; instead, I end up with an invalid pipeline being constructed for the writer. Your workaround works fine for me, though.

```json
{
  "pipeline":
  [
    {
      "order": "column",
      "shape": "0, 0, 0",
      "tag": "readers_memoryview1",
      "type": "readers.memoryview"
    },
    {
      "filename": "foo.las",
      "inputs":
      [
        "readers_memoryview1"
      ],
      "tag": "writers_las1",
      "type": "readers.las"
    }
  ]
}
```

I think the issue is how the stage type is being guessed/inferred from the filename (note that the writer stage above ends up with type `readers.las`), and there's some default that's not correct.