karlwessel / mplopengl

OpenGL based backend for matplotlib
MIT License

Markers not displaying properly on small intervals #16

Open · JakkuSakura opened this issue 4 years ago

JakkuSakura commented 4 years ago

Markers are not displayed properly on small intervals when using mplopengl. When n = 1000, the markers are slightly offset [screenshot]. When n = 100000, the markers are completely broken [screenshot]. On the Qt5Agg backend there is no such issue [screenshot].

import matplotlib
# matplotlib.use("Qt5Agg")  # reference backend, renders the markers correctly
matplotlib.use('module://mplopengl.backend_qtgl')
import numpy as np
from matplotlib import pyplot as plt

# Small interval (11 units) at a large offset (~1e5); with sample_size = 1000
# the markers are slightly offset, with 100000 they break completely.
sample_size = 1000
X = np.linspace(100079, 100090, sample_size)
Y = np.random.random(sample_size)

fig, (ax1) = plt.subplots(1, 1, sharex=True, sharey=True)

ax1.plot(X, Y, linestyle='solid', marker='*')

plt.pause(30)  # keep the window open long enough to inspect the markers
JakkuSakura commented 4 years ago

It turns out to be due to precision loss in draw_markers(). Here's a temporary fix that does the position transform in float64 on the CPU instead of float32 on the GPU.

    def draw_markers(self, gc, marker_path, marker_trans, path,
                     trans, rgbFace=None):
        positions = path.vertices
        if len(positions) == 1:  # a single marker doesn't need the particle shader
            positions = trans.transform(positions)
            translation = Affine2D().translate(*positions[0])
            return self.draw_path(gc, marker_path,
                                  marker_trans + translation, rgbFace)

        marker_path = marker_trans.transform_path(marker_path)
        polygons = self.path_to_poly(marker_path, rgbFace is not None)

        # For precision: apply the data-to-display transform in float64 on the
        # CPU, so only the (small) pixel coordinates are cast to float32 and
        # uploaded to the GPU; the shader is then given an identity transform
        # instead of operating on the raw data coordinates.
        positions = trans.transform(positions)
        arr_data = numpy.array(positions).astype(numpy.float32).tobytes()
        pos_vbo = self._gpu_cache(self.context, hash(arr_data), VBO, arr_data)

        with ObjectContext(self.particle_shader) as program, ClippingContext(gc):
            program.bind_attr_vbo("shift", pos_vbo)
            program.set_uniform3m("trans", Affine2D().get_matrix(), transpose=True)
            program.set_attr_divisor("pos", 0)
            program.set_attr_divisor("shift", 1)

            for polygon in polygons:
                arr_data = numpy.array(polygon).astype(numpy.float32).tobytes()
                poly_vbo = self._gpu_cache(self.context, hash(arr_data), VBO, arr_data)
                program.bind_attr_vbo("pos", poly_vbo)

                if rgbFace is not None and len(polygon) >= 3:
                    col = get_fill_color(gc, rgbFace)
                    program.set_uniform4f("color", *col)
                    glDrawArraysInstanced(GL_POLYGON, 0, len(polygon) - 1, len(positions))

                if gc.get_linewidth() > 0:
                    with StrokedContext(gc, self):
                        col = get_stroke_color(gc, self)
                        program.set_uniform4f("color", *col)
                        glDrawArraysInstanced(GL_LINE_STRIP, 0, len(polygon), len(positions))
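
For context, a minimal standalone numpy sketch (independent of mplopengl) of why float32 breaks down at these coordinate magnitudes: near x ≈ 100079 adjacent representable float32 values are about 0.008 apart, which is on the same order as the 0.011 sample spacing in the report.

import numpy as np

# Data coordinates from the report: 1000 samples over an 11-unit interval
# starting at ~1e5.
x64 = np.linspace(100079, 100090, 1000)
x32 = x64.astype(np.float32)

print("sample spacing:        ", x64[1] - x64[0])                   # ~0.011
print("float32 ULP near 1e5:  ", np.spacing(np.float32(100079.0)))  # ~0.0078
print("max float32 round-off: ", np.abs(x32.astype(np.float64) - x64).max())

With sample_size = 100000 the sample spacing drops to roughly 0.00011, well below the float32 resolution at that offset, so many neighbouring markers quantize onto the same position.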
karlwessel commented 1 month ago

Yes, when the zoom or translation gets too large you quickly see the difference between 32-bit and 64-bit precision. However, fixing this correctly is complicated (you can't just do the transform on the CPU every time), and I am not sure how often that kind of accuracy is really needed.
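
For reference, one common GPU-side workaround (just a sketch of the general idea, not something mplopengl currently implements) is to subtract a float64 reference offset on the CPU and upload only small float32 residuals, folding the offset back into the transform:

import numpy as np

def split_positions(positions):
    # Hypothetical helper: split float64 positions into a float64 reference
    # offset (kept on the CPU / folded into the transform matrix) and small
    # float32 residuals that survive the cast with negligible rounding error.
    positions = np.asarray(positions, dtype=np.float64)
    offset = positions.mean(axis=0)
    residuals = (positions - offset).astype(np.float32)
    return offset, residuals

# Example with the coordinates from this issue:
offset, residuals = split_positions(np.c_[np.linspace(100079, 100090, 1000),
                                          np.random.random(1000)])
print(offset)                      # ~[100084.5, 0.5], stays in float64
print(np.abs(residuals).max())     # residuals stay within ~5.5 units,
                                   # so float32 rounding error is < 1e-6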

JakkuSakura commented 1 month ago

I use nanosecond precision, so when I zoom in it gets inaccurate (and this time the number of points is small). This PR is a potential fix for this.