def observation(self, observation):
# careful! This undoes the memory optimization, use
# with smaller replay buffers only.
return np.array(observation).astype(np.float32) / 255.0
In the ScaledFloatFrame wrapper, there is a comment like the one above.

I think the "memory optimization" in the comment refers to LazyFrames, but the optimization via reference sharing of duplicated frames still seems valid, because we stack the frames after the scaling.

Am I missing something? Is there any other memory-optimization technique that ScaledFloatFrame breaks?
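For context, here is a minimal sketch of the memory effect the comment warns about. The `LazyFrames` class below is a simplified stand-in (not the real baselines implementation) that stacks frames only on demand; the sketch compares storing two overlapping stacks by reference as uint8 versus materializing them as scaled float32 arrays, which is effectively what `np.array(observation).astype(np.float32)` does:

```python
import numpy as np

class LazyFrames:
    """Simplified stand-in for a lazy frame stack: keeps references to
    the individual frames and only concatenates when converted to an array."""
    def __init__(self, frames):
        self._frames = frames  # list of shared uint8 frame arrays

    def __array__(self, dtype=None, copy=None):
        out = np.stack(self._frames, axis=-1)  # (84, 84, 4)
        if dtype is not None:
            out = out.astype(dtype)
        return out

# Five consecutive 84x84 uint8 frames; adjacent stacks share four slots
# except for the newest frame, so three frames overlap between the stacks.
frames = [np.zeros((84, 84), dtype=np.uint8) for _ in range(5)]
stack_a = LazyFrames(frames[0:4])
stack_b = LazyFrames(frames[1:5])  # shares frames 1-3 with stack_a by reference

# Stored lazily: only the 5 unique uint8 frames are kept (~35 KB total).
lazy_bytes = sum(f.nbytes for f in frames)

# What ScaledFloatFrame effectively does on each observation:
# materialize the lazy stack and convert it to float32.
scaled_a = np.array(stack_a).astype(np.float32) / 255.0
scaled_b = np.array(stack_b).astype(np.float32) / 255.0

# Stored eagerly: two independent float32 stacks (~226 KB total),
# with no sharing between the overlapping frames and 4x the bytes per pixel.
eager_bytes = scaled_a.nbytes + scaled_b.nbytes

print(lazy_bytes, eager_bytes)  # 35280 225792
```

So if the scaled float32 arrays are what end up in the replay buffer, both the uint8 compactness and the reference sharing are lost, which is presumably why the comment recommends using the wrapper only with smaller replay buffers.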