Open mo-vic opened 11 months ago
Yeah it would be nice to make them serializable! It would be nice if it was defined on the original object though (vs a separate fcl2.py + inheritance). What if you defined an __iter__ and then passed them around as dict objects for keyword arguments?
class Thing:
    def __init__(self, a=10, b=20):
        self.a = a
        self.b = b

    def __iter__(self):
        return iter([('a', self.a),
                     ('b', self.b)])

if __name__ == '__main__':
    t = Thing(a=5, b=2)
    # picklable, json-dumpable, etc.
    thing_serial = dict(t)
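Continuing the snippet above, the resulting dict can be fed straight back in as keyword arguments, and it is JSON-serializable (a small usage sketch):

import json

t = Thing(a=5, b=2)
thing_serial = dict(t)              # {'a': 5, 'b': 2}
as_json = json.dumps(thing_serial)  # plain JSON, easy to ship around
t2 = Thing(**json.loads(as_json))   # rebuilt from the serialized form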
But you still need to create the FCL object in the Process, right? Then again, this recurs every time you construct the FCL object from the serialized dict. For example, to use FCL in PyTorch's DataLoader with multiprocessing support, I think it is still necessary to define a __reduce__ method.
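For illustration, here is a minimal sketch of what such a __reduce__ looks like on a plain Python class (reusing the Thing example from above); pickle then recreates the object in whichever process unpickles it, without the caller touching dicts at all:

import pickle

class Thing:
    def __init__(self, a=10, b=20):
        self.a = a
        self.b = b

    def __reduce__(self):
        # Tell pickle how to rebuild this object: call Thing(a, b)
        # again in the process that unpickles it.
        return (self.__class__, (self.a, self.b))

t = Thing(a=5, b=2)
t2 = pickle.loads(pickle.dumps(t))  # reconstructed via Thing(5, 2)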
Hi mikedh, I tried adding an __iter__ method to the original Cython class, like below:

def __iter__(self):
    return iter([("a", 1), ("b", 2)])
but the object is not directly picklable:
In [5]: import fcl
In [6]: tf_to_pickle = fcl.Transform(R, T)
In [7]: pickled_tf = pickle.dumps(tf_to_pickle)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[7], line 1
----> 1 pickled_tf = pickle.dumps(tf_to_pickle)
File stringsource:2, in fcl.fcl.Transform.__reduce_cython__()
TypeError: no default __reduce__ due to non-trivial __cinit__
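Pickle never looks at __iter__: it goes through __reduce_ex__, and for an extension type with a non-trivial __cinit__ Cython's generated fallback refuses with exactly this TypeError. One possible workaround that avoids touching the Cython source is to register an external reducer with copyreg; a sketch, assuming fcl.Transform exposes getRotation() and getTranslation():

import copyreg
import fcl

def _reduce_transform(tf):
    # Rebuild the Transform from its rotation matrix and translation vector
    # (assumes these accessors exist on fcl.Transform).
    return (fcl.Transform, (tf.getRotation(), tf.getTranslation()))

# A reducer registered with copyreg takes precedence over the object's own
# __reduce_ex__, so the "non-trivial __cinit__" error is never reached.
copyreg.pickle(fcl.Transform, _reduce_transform)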
Also, when I try to cache the input data in the __cinit__ method, like this:

def __cinit__(self, *args):
    self.args = args

an AttributeError is thrown when creating the object:
In [5]: tf_to_pickle = fcl.Transform(R, T)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[5], line 1
----> 1 tf_to_pickle = fcl.Transform(R, T)
File /mnt/e/Users/movic/python-fcl/src/fcl/fcl.pyx:56, in fcl.fcl.Transform.__cinit__()
     54
     55     def __cinit__(self, *args):
---> 56         self.args = args
     57         if len(args) == 0:
     58             self.thisptr = new defs.Transform3d()
AttributeError: 'fcl.fcl.Transform' object has no attribute 'args'
So I guess the inheritance solution is still worth considering.
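For what it's worth, the AttributeError is expected for a cdef class: instances have no per-instance __dict__ unless the attribute (or a cdef dict __dict__) is declared on the class, so arbitrary assignments inside __cinit__ fail much as they would on a __slots__ class. A rough pure-Python analogue:

class SlottedTransform:
    # Only the declared slots exist; there is no instance __dict__,
    # mirroring how a Cython cdef class behaves by default.
    __slots__ = ("thisptr",)

    def __init__(self, *args):
        self.args = args  # AttributeError, just like the cdef class above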
Picklable objects for multithreading support.
Sometimes we have a list of queries to run where each query is independent of the others. In such cases, we can create a thread pool and copy the same environment into each thread to run the queries in parallel. The following code gives an example of doing this:
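A minimal sketch of the kind of setup meant here (the helper run_query and the geometry are hypothetical, and a process pool is used since that is where pickling matters):

import numpy as np
import fcl
from multiprocessing import Pool

def run_query(pair):
    # One independent query: check whether the two objects collide.
    obj, env = pair
    request = fcl.CollisionRequest()
    result = fcl.CollisionResult()
    return fcl.collide(obj, env, request, result)

if __name__ == '__main__':
    # A shared "environment" object plus a list of independent query objects.
    env = fcl.CollisionObject(fcl.Box(1.0, 1.0, 1.0), fcl.Transform())
    queries = [
        fcl.CollisionObject(fcl.Box(0.5, 0.5, 0.5),
                            fcl.Transform(np.array([0.1 * i, 0.0, 0.0])))
        for i in range(8)
    ]
    with Pool(4) as pool:
        # With stock python-fcl this fails: the collision objects cannot be
        # pickled, so they cannot be sent to the worker processes.
        hits = pool.map(run_query, [(q, env) for q in queries])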
However, objects like fcl.BVHModel and fcl.CollisionObject are unpicklable, which makes it impossible to serialize them and deserialize them in the worker threads. My solution to this issue is to derive subclasses from those Cython classes, adding an __init__ method that caches the input arguments and a __reduce__ method that returns the cached data for pickling. Currently only fcl.Transform, fcl.BVHModel, and fcl.CollisionObject are supported. With this commit and a simple magic import, from fcl import fcl2 as fcl, the above example script can run in parallel.
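To make the pattern concrete, a rough sketch of what such a wrapper could look like for fcl.Transform (an illustration of the idea, not the code from the commit; BVHModel and CollisionObject would follow the same recipe):

import fcl

class Transform(fcl.Transform):
    """Picklable wrapper: cache the constructor arguments and hand them
    back to pickle via __reduce__."""

    def __init__(self, *args):
        # The heavy lifting already happened in the base class's __cinit__,
        # which Cython calls with the same arguments; here we only cache them.
        self._init_args = args

    def __reduce__(self):
        # Rebuild by calling Transform(*args) again in the unpickling process.
        return (self.__class__, self._init_args)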