soft-matter / trackpy

Python particle tracking toolkit
http://soft-matter.github.io/trackpy

Draw trajectories onto video #167

Open danielballan opened 10 years ago

danielballan commented 10 years ago

The new pims.export and pims.play (for notebooks) make this pretty straightforward. The missing ingredient (note to self) is: http://stackoverflow.com/questions/8598673/how-to-save-a-pylab-figure-into-in-memory-file-which-can-be-read-into-pil-image
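For reference, the trick in that Stack Overflow answer amounts to rendering the figure into an in-memory buffer that PIL can read; a minimal sketch:

import io

import matplotlib.pyplot as plt
from PIL import Image

fig, ax = plt.subplots()
ax.plot([0, 1], [1, 0])

buf = io.BytesIO()
fig.savefig(buf, format='png')   # write the figure into an in-memory PNG...
buf.seek(0)
img = Image.open(buf)            # ...which PIL reads without touching the disk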

AlexejStukov commented 10 years ago

Hi. Could you please post your code for drawing the trajectories onto a video? It would be very helpful for me, because I'm trying to do the same. Thanks, Norrec

danielballan commented 10 years ago

This is what I am working with. It works for me, but it can be improved. If you improve it, please share back.

import matplotlib.pyplot as plt
import numpy as np
import pims
import trackpy as tp

# v is the video (a pims sequence), t the linked trajectory DataFrame.
drawings = []
fig, ax = plt.subplots()
for frame in np.arange(0, len(v), 50):  # show every 50th frame to keep file size low
    tp.annotate(t.query('frame<={0}'.format(frame)), v[frame],
                plot_style=dict(marker=',', mfc='b', color='b'), ax=ax)
    ax.set(yticks=[], xticks=[])
    fig.tight_layout(pad=0)
    fig.canvas.draw()
    # Grab the rendered canvas as an RGB array.
    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    drawings.append(data)

fig.clf()
# Bitrate = height * width * 8 bits * 3 channels * 30 fps, i.e. the uncompressed data rate.
pims.play(drawings, bitrate=v.frame_shape[0]*v.frame_shape[1]*8*3*30)

N.B. pims.play and pims.export rely on the optional dependency PyAV, which itself relies on FFmpeg or libav. PyAV works on Linux and Mac but definitely not on Windows (yet).
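A quick way to check whether PyAV is importable in a given environment (its import name is av):

try:
    import av
    print("PyAV", av.__version__, "is available")
except ImportError:
    print("PyAV is not installed; pims.play and pims.export will not work")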

AlexejStukov commented 10 years ago

Thanks for the fast reply, sure I'll share my code back! I'll try your code on Monday, as soon as I get back to university. I think you could replace drawings = [] with something like np.zeros((int(len(v)/50), v.frame_shape[0], v.frame_shape[1])) and drawings.append(data) with drawings[i] = data to speed things up a bit.
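A self-contained sketch of that preallocation pattern; the sizes are placeholders, and note that the rendered RGB frames also need a trailing colour axis and a uint8 dtype:

import numpy as np

n_frames, step = 1000, 50            # placeholders for len(v) and the frame stride
height, width = 480, 640             # placeholders for the rendered canvas size
frame_numbers = np.arange(0, n_frames, step)

# One uint8 RGB slot per output frame, filled in place instead of appending to a list.
drawings = np.zeros((len(frame_numbers), height, width, 3), dtype=np.uint8)
for i, frame in enumerate(frame_numbers):
    rendered = np.zeros((height, width, 3), dtype=np.uint8)   # stand-in for the rendered figure
    drawings[i] = rendered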

AlexejStukov commented 9 years ago

Here is my tested code:

import math

import matplotlib.pyplot as plt
import numpy as np
import pims
import trackpy

# frames is the video (a pims sequence), traj the linked trajectory DataFrame.
framerange = np.arange(0, len(frames), OutputFrameStep)
drawing = np.zeros((len(framerange), frames.frame_shape[0], frames.frame_shape[1], 3),
                   dtype=np.uint8)
fig, ax = plt.subplots()
for i, frame in enumerate(framerange):
    trackpy.annotate(traj.query('frame<={0}'.format(frame)), frames[frame],
                     plot_style=dict(marker='o', mfc='r', color='r'), ax=ax)
    ax.set(yticks=[], xticks=[])
    fig.set_dpi(1)                              # keep the original resolution:
    fig.set_figwidth(frames.frame_shape[1])     # 1 dpi * size in "inches"
    fig.set_figheight(frames.frame_shape[0])    # = size in pixels
    fig.tight_layout(pad=0)
    fig.canvas.draw()
    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    drawing[i] = data
    print(i, "frames calculated for film-output")
    ax.clear()                                  # clear the axes for the next frame
framerate = int(math.ceil(20.0/OutputFrameStep))
pims.export(drawing, "0_video.avi", rate=framerate)

OutputFrameStep sets the stride between output frames, i.e. only every OutputFrameStep-th frame is written to the video. I hope it helps somebody.
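For example, assuming a 20 fps source clip (which is what the 20.0 in the framerate formula suggests), OutputFrameStep = 4 keeps every 4th frame and exports at ceil(20/4) = 5 fps, so the resulting video plays back at roughly the original speed.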

mluerig commented 6 years ago

Hey, I was just wondering if 4 years later there is another solution for drawing trajectories into videos. I am interested in writing what comes out of plot_traj frame by frame into a video, to i) get a more interactive experience when selecting the link parameters, and ii) visualize the results for control and demonstration purposes. I am not sure which way to pursue here, also because I think I might be using trackpy a bit unorthodoxly: I collect framewise blobs using a background subtractor in OpenCV that dumps all coordinates to a dataframe, and in the end I send that dataframe through the link function. All my code and some description is here: https://github.com/mluerig/iso-track

Maybe it is possible to go frame by frame (e.g. always the last two frames), identify trajectories, and draw them onto the new frame, to see them "grow". Is there a way to extract coordinates from trackpy's plot function?

Thanks!
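One way to see the trajectories "grow" without going through matplotlib at all is to draw the linked coordinates straight onto each frame with OpenCV. A rough sketch, assuming 8-bit RGB frames, that traj is the DataFrame produced by trackpy's link step with the usual x, y, frame and particle columns, and that the frame column counts frames from 0; the function name, output path and MJPG/fps settings are made up for illustration:

import cv2
import numpy as np

def draw_growing_trajectories(frames, traj, out_path="trajectories.avi", fps=20):
    """Draw each particle's path up to the current frame directly onto the video."""
    writer = None
    for i, frame in enumerate(frames):
        img = cv2.cvtColor(np.asarray(frame), cv2.COLOR_RGB2BGR)  # OpenCV wants BGR
        past = traj[traj['frame'] <= i]                 # everything linked so far
        for _, track in past.groupby('particle'):       # one polyline per particle
            pts = track.sort_values('frame')[['x', 'y']].values
            pts = pts.astype(np.int32).reshape(-1, 1, 2)
            cv2.polylines(img, [pts], isClosed=False, color=(0, 0, 255), thickness=1)
        if writer is None:                              # open the writer lazily
            h, w = img.shape[:2]
            writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*'MJPG'), fps, (w, h))
        writer.write(img)
    if writer is not None:
        writer.release()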

AlexejStukov commented 6 years ago

Hi @mluerig

I have not used trackpy for a long time, so my knowledge might be outdated. Couldn't you just use my code from above and plot more than one trajectory frame on the same picture frame (assuming that your trajectory dataframe can be used by trackpy.annotate):

framerange = np.arange(0, len(frames), OutputFrameStep)
drawing = np.zeros((len(framerange), frames.frame_shape[0], frames.frame_shape[1], 3),
                   dtype=np.uint8)
fig, ax = plt.subplots()
for i, frame in enumerate(framerange):
    # Overlay the positions up to each of the last 5 frames on the same image.
    for j in range(5):
        trackpy.annotate(traj.query('frame<={0}'.format(max(frame - j, 0))), frames[frame],
                         plot_style=dict(marker='o', mfc='r', color='r'), ax=ax)
    ax.set(yticks=[], xticks=[])
    fig.set_dpi(1)                              # keep the original resolution:
    fig.set_figwidth(frames.frame_shape[1])     # 1 dpi * size in "inches"
    fig.set_figheight(frames.frame_shape[0])    # = size in pixels
    fig.tight_layout(pad=0)
    fig.canvas.draw()
    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    drawing[i] = data
    print(i, "frames calculated for film-output")
    ax.clear()                                  # clear the axes for the next frame
framerate = int(math.ceil(20.0/OutputFrameStep))
pims.export(drawing, "0_video.avi", rate=framerate)

younesba commented 6 years ago

Hi @AlexejStukov @danielballan, I'm trying to use your script to draw trajectories onto a video, but I don't know which package traj.query() comes from so that I can install it.
Thanks

AlexejStukov commented 6 years ago

Hi @younesba. traj is the pandas DataFrame you get when you link the located features across frames with trackpy.link_df, so .query() is just a pandas method and nothing extra needs installing. It is called t in the tutorial.
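For concreteness, a rough sketch of where traj comes from; diameter=11 is just a placeholder, and search_range=30 with memory=20 are the values used earlier in this thread:

import trackpy as tp

# features: one row per detected blob, with at least 'x', 'y' and 'frame' columns
features = tp.batch(frames, diameter=11)       # or the output of your own detector
traj = tp.link_df(features, 30, memory=20)     # linking adds a 'particle' column
subset = traj.query('frame<=10')               # the pandas query used in the snippets above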

g2-bernotas commented 5 years ago

Just in case somebody needs it: I slightly modified AlexejStukov's answer to suit my needs (the proposed answer did not work straight away for me). I stored my features in a pkl file, but they were extracted as demonstrated in the tutorial. imgs in this case are the original grayscale images, while the features were collected from segmented images.

from __future__ import division, unicode_literals, print_function  # for compatibility with Python 2 and 3

import glob

import cv2
import imageio
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import trackpy as tp
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas

# Optionally, tweak styles.
mpl.rc('figure', figsize=(10, 5))
mpl.rc('image', cmap='gray')

def cvtFig2Numpy(fig):
    """Render a matplotlib figure and return it as an RGB numpy array."""
    canvas = FigureCanvas(fig)
    canvas.draw()

    width, height = canvas.get_width_height()
    image = np.frombuffer(canvas.tostring_rgb(), dtype='uint8').reshape(height, width, 3)
    return image

def makevideoFromArray(movieName, array, fps=25):
    imageio.mimwrite(movieName, array, fps=fps)

imgs = glob.glob("path\\to\\your\\images\\*.png")
features = pd.read_pickle("path\\to\\your\\features\\features.pkl")

# Link with a velocity predictor and drop short trajectories.
pred = tp.predict.NearestVelocityPredict(span=10)
t = pred.link_df(features, 30, memory=20)
t1 = tp.filter_stubs(t, 100)

arr = []
for i, img_path in enumerate(imgs):
    frame = cv2.imread(img_path)
    fig = plt.figure(figsize=(16, 8))
    plt.imshow(frame)
    # Overlay all trajectories seen up to the current frame.
    axes = tp.plot_traj(t1.query('frame<={0}'.format(i)))
    axes.set_yticklabels([])
    axes.set_xticklabels([])
    axes.get_xaxis().set_ticks([])
    axes.get_yaxis().set_ticks([])
    arr.append(cvtFig2Numpy(fig))
    plt.close('all')

makevideoFromArray("yourName.mp4", arr, 10)