balthazarneveu / interactive_pipe

Create interactive image processing pipelines with friendly sliders.
The Unlicense
| ![Interactive pipe](/static/interact-pipe-logo-horizontal-rgb.svg) |
|:--:|
| Quick setup: `pip install interactive-pipe` |
| [Project website](https://balthazarneveu.github.io/interactive_pipe/) |


Concept

:heart: You do not need to learn anything about making a graphical user interface (GUI) :heart:

Examples

| Science notebook | Toddler DIY Jukebox on a Raspberry Pi |
|:--:|:--:|
| Sliders are added automatically in your Jupyter notebook. This works on Google Colab and takes about 40 lines of code. No widget, event handler or matplotlib knowledge required. | Plays some music when you touch an icon. Images can be generated using the OpenAI DALL-E API helpers, captions are added through the title mechanism, and music samples were generated by prompting MusicGen. |
| Demo notebook on Colab | jukebox.py demo code |

Local setup

```bash
git clone git@github.com:balthazarneveu/interactive_pipe.git
cd interactive_pipe
pip install -e ".[full]"
```
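A quick smoke test for the editable install (a minimal sketch, just importing the two decorators used throughout this README):

```python
# smoke test: these imports should succeed after `pip install -e ".[full]"`
from interactive_pipe import interactive, interactive_pipeline
print("interactive_pipe is ready")
```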

Who is this for?

:mortar_board: Scientific education


:scroll: Terminology

*Diagram: the interactive_pipe concept.*

:scroll: Features

Version 0.8.6

:keyboard: Keyboard shortcuts

Shortcuts while using the GUI (Qt & matplotlib backends)

Status

| | PyQt / PySide | Matplotlib | Jupyter notebooks (incl. Google Colab) | Gradio |
|:--|:--:|:--:|:--:|:--:|
| Backend name | `qt` | `mpl` | `nb` | `gradio` |
| Plot curves | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Auto-refreshed layout | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Keyboard shortcuts / fullscreen | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Audio support | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: |
| Image buttons | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Circular slider | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
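Switching backend is just a matter of changing the `gui` argument of `@interactive_pipeline` (a minimal sketch; `my_pipeline` is a placeholder):

```python
from interactive_pipe import interactive_pipeline

# pick one of the backend names from the table: "qt", "mpl", "nb" or "gradio"
@interactive_pipeline(gui="nb")
def my_pipeline(img):
    ...
```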

Tutorials

Main tutorial


Tutorial on Hugging Face space

Tutorial in a Colab notebook

Learn by examples

Basic image processing (Python code sample for a PyQt GUI)


Speech exploration notebook (Colab, signal processing)


:rocket: Ultra short code

Since ipywidgets are supported in notebooks, the tutorial is also available as a Google Colab notebook.

Let's define 3 very basic image processing filters: exposure, black_and_white & blend.

By design:

We use the @interactive() wrapper, which turns each keyword parameter initialized to a tuple/list into a graphical interactive widget (slider, tick box, dropdown menu).

The syntax to turn keyword arguments into widgets is pretty simple: (default, [min, max], name) will turn into a float slider, for instance.
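Here is a sketch of the widget mapping, mirroring the examples in this README (`demo` and its parameters are placeholders, and the first list element is presumably the dropdown default):

```python
from interactive_pipe import interactive

@interactive()
def demo(img, gain=(1., [0., 2.]), invert=(False,), mode=["dark", "bright"]):
    # gain   -> float slider (default 1., range [0., 2.])
    # invert -> tick box (unchecked by default)
    # mode   -> dropdown menu (choices are the list elements)
    out = img * gain * (0.5 if mode == "dark" else 1.)
    return 1. - out if invert else out
```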

Finally, we need some glue to combine these filters. This is where the sample_pipeline function comes in.

By decorating it with @interactive_pipeline(gui="qt"), calling this function will magically launch a GUI-powered image processing pipeline.

```python
from interactive_pipe import interactive, interactive_pipeline
import numpy as np

@interactive()
def exposure(img, coeff=(1., [0.5, 2.], "exposure"), bias=(0., [-0.2, 0.2])):
    '''Applies a multiplication by coeff & adds a constant bias to the image'''
    # In the GUI, coeff will be labelled "exposure".
    # As the default tuple provided to bias does not end with a string,
    # the widget label will be "bias", simply named after the keyword arg.
    return img * coeff + bias

@interactive()
def black_and_white(img, bnw=(True, "black and white")):
    '''Averages the 3 color channels (Black & White) if bnw=True'''
    # Special mention for booleans: using a tuple like (True,) creates the tick box.
    return np.repeat(np.expand_dims(np.average(img, axis=-1), -1), img.shape[-1], axis=-1) if bnw else img

@interactive()
def blend(img0, img1, blend_coeff=(0.5, [0., 1.])):
    '''Blends between two images.
    - when blend_coeff=0 -> image 0  [slider to the left ]
    - when blend_coeff=1 -> image 1  [slider to the right]
    '''
    return (1 - blend_coeff) * img0 + blend_coeff * img1

# you can change the backend to "mpl" instead of "qt" here.
@interactive_pipeline(gui="qt", size="fullscreen")
def sample_pipeline(input_image):
    exposed = exposure(input_image)
    bnw_image = black_and_white(input_image)
    blended = blend(exposed, bnw_image)
    return exposed, blended, bnw_image

if __name__ == '__main__':
    input_image = np.array([0., 0.5, 0.8]) * np.ones((256, 512, 3))
    sample_pipeline(input_image)
```

:heart: This code displays a GUI with three images. The middle one is the result of the blend.

Notes:


:bulb: Some more tips

```python
from interactive_pipe import interactive, interactive_pipeline
import numpy as np

COLOR_DICT = {"red": [1., 0., 0.], "green": [0., 1., 0.], "blue": [0., 0., 1.], "gray": [0.5, 0.5, 0.5]}

@interactive()
def generate_flat_colored_image(color_choice=["red", "green", "blue", "gray"], context={}):
    '''Generate a constant colorful image'''
    flat_array = np.array(COLOR_DICT.get(color_choice)) * np.ones((64, 64, 3))
    context["avg"] = np.average(flat_array)
    return flat_array
```

:bulb: Can filters communicate with each other? Yes, through the special keyword argument `context={}`.

```python
@interactive()
def special_image_slice(img, context={}):
    # reads the "avg" value stored in context by generate_flat_colored_image
    out_img = img.copy()
    if context["avg"] > 0.4:
        out_img[out_img.shape[0]//2:, ...] = 0.
    return out_img
```

```python
@interactive()
def switch_image(img1, img2, img3, image_index=(0, [0, 2], None, ["pagedown", "pageup", True])):
    '''Switch between 3 images'''
    return [img1, img2, img3][image_index]
```

Note that you can create a filter to switch between several images. In ["pagedown", "pageup", True], True means that image_index will wrap around (it returns to 0 as soon as it goes above the maximum value of 2).
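To spell out the widget definition above (my reading of each tuple element, based on the other examples in this README):

```python
# image_index=(0,            default index
#              [0, 2],       valid range: three images
#              None,         no explicit label -> widget named after the keyword "image_index"
#              ["pagedown", "pageup", True])
#                            pagedown/pageup decrement/increment; True enables wrap-around
```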

```python
@interactive()
def black_top_image_slice(img, top_slice_black=(True, "special", "k"), context={}):
    # tick box labelled "special"; "k" presumably binds a keyboard shortcut to toggle it
    out_img = img.copy()
    if top_slice_black:
        out_img[:out_img.shape[0]//2, ...] = 0.
    return out_img
```

```python
@interactive_pipeline(gui="qt", size="fullscreen")
def sample_pipeline_generated_image():
    flat_img = generate_flat_colored_image()
    top_slice_modified = black_top_image_slice(flat_img)
    bottom_slice_modified_image = special_image_slice(flat_img)
    chosen = switch_image(flat_img, top_slice_modified, bottom_slice_modified_image)
    return chosen

if __name__ == '__main__':
    sample_pipeline_generated_image()
```

History

FAQ

An alternative is to decorate the processing block elsewhere, in a file dedicated to interactivity for instance:

```python
# core_filter.py
def processing_block(angle=0.):
    ...

# graphical.py
from interactive_pipe import interactive
from core_filter import processing_block

def add_interactivity():
    interactive(angle=(0., [-360., 360.]))(processing_block)
```
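A hypothetical way to tie the two files together (a sketch; `my_pipeline` is a placeholder and we assume `processing_block` produces an image once fully implemented):

```python
# main.py (hypothetical glue for the two files above)
from core_filter import processing_block
from graphical import add_interactivity
from interactive_pipe import interactive_pipeline

add_interactivity()  # turns the `angle` keyword argument into a slider

@interactive_pipeline(gui="qt")
def my_pipeline():
    # assumption: processing_block returns an image
    return processing_block()

if __name__ == '__main__':
    my_pipeline()
```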

:gift: Want to dig into the code? Take a look at code_architecture.md

Short term roadmap

Long term roadmap

Further examples

Minimalistic PyTorch-based ISP

ISP means image signal processor

:warning: Work in progress (no proper demosaicking, no denoiser, no tone mapping).

Ultra simplistic ISP

:test_tube: Experimental features