:heart: You do not need to learn anything about making a graphical user interface (GUI) :heart:
| Science notebook | Toddler DIY jukebox on a Raspberry Pi |
|---|---|
| Sliders are added automatically in your Jupyter notebook. This works on Google Colab and takes about 40 lines of code. No widgets, event handlers or matplotlib knowledge required. | Plays some music when you touch the icon. Images can be generated using the OpenAI DALL-E API helpers. Captions are added through the title mechanism. Music samples are generated by prompting MusicGen. |
| Demo notebook on Colab | jukebox.py demo code |
```shell
git clone git@github.com:balthazarneveu/interactive_pipe.git
cd interactive_pipe
pip install -e ".[full]"
```
Version 0.8.6
`KeyboardControl`: no slider on the UI but exactly the same internal mechanism, updated on key press.

Shortcuts while using the GUI (QT & matplotlib backends)
- `F1` to show the help shortcuts in the terminal
- `F11` to toggle fullscreen mode
- `W` to write the full resolution image to disk
- `R` to reset parameters
- `I` to print the parameters dictionary in the command line
- `E` to export the parameters dictionary to a yaml file
- `O` to import a parameters dictionary from a yaml file (sliders will update)
- `G` to export a pipeline diagram for your interactive pipe (requires graphviz)

Backends:
- `gui='qt'` : PyQt / PySide
- `gui='mpl'` : Matplotlib
- `gui='nb'` : ipywidgets for Jupyter notebooks
- `gui='gradio'` : Gradio wrapping (use `share_gradio_app=True` to share your app with others)
| :star: | PyQt / PySide | Matplotlib | Jupyter notebooks (incl. Google Colab) | Gradio |
|---|---|---|---|---|
| Backend name | `qt` | `mpl` | `nb` | `gradio` |
| Plot curves | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Auto refreshed layout | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Keyboard shortcuts / fullscreen | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: |
| Audio support | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: |
| Image buttons | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
| Circular slider | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: |
Tutorial on Hugging Face space
Since ipywidgets are supported in notebooks, the tutorial is also available as a Google Colab notebook.
Let's define 3 very basic image processing filters: `exposure`, `black_and_white` & `blend`.
By design:
We use the `@interactive()` wrapper, which turns each keyword parameter initialized to a tuple/list into a graphical interactive widget (slider, tick box, dropdown menu).
The syntax to turn keyword arguments into sliders is pretty simple: `(default, [min, max], name)` will turn into a float slider, for instance.
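Conceptually, such a tuple simply carries the slider's default value, range and label. The sketch below is an illustration of how it could be unpacked, not interactive_pipe's actual internals:

```python
# Illustration only: how a (default, [min, max], name) tuple can be read
# as slider settings. This is NOT interactive_pipe's real parsing code.
def parse_slider_spec(spec, keyword_name):
    """Return (default, min, max, label) from a parameter tuple."""
    default = spec[0]
    vmin, vmax = spec[1]
    # When no explicit name is given, fall back to the keyword argument's name.
    label = spec[2] if len(spec) > 2 and spec[2] is not None else keyword_name
    return default, vmin, vmax, label

print(parse_slider_spec((1., [0.5, 2.], "exposure"), "coeff"))  # (1.0, 0.5, 2.0, 'exposure')
print(parse_slider_spec((0., [-0.2, 0.2]), "bias"))             # (0.0, -0.2, 0.2, 'bias')
```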
Finally, we need some glue to combine these filters. This is where the `sample_pipeline` function comes in.
By decorating it with `@interactive_pipeline(gui="qt")`, calling this function will magically turn it into a GUI-powered image processing pipeline.
```python
from interactive_pipe import interactive, interactive_pipeline
import numpy as np


@interactive()
def exposure(img, coeff=(1., [0.5, 2.], "exposure"), bias=(0., [-0.2, 0.2])):
    '''Applies a multiplication by coeff & adds a constant bias to the image'''
    # In the GUI, coeff will be labelled as "exposure".
    # As the default tuple provided to bias does not end with a string,
    # the widget label will be "bias", simply named after the keyword arg.
    return img * coeff + bias


@interactive()
def black_and_white(img, bnw=(True, "black and white")):
    '''Averages the 3 color channels (Black & White) if bnw=True'''
    # Special mention for booleans: using a tuple like (True,) allows creating the tick box.
    return np.repeat(np.expand_dims(np.average(img, axis=-1), -1), img.shape[-1], axis=-1) if bnw else img


@interactive()
def blend(img0, img1, blend_coeff=(0.5, [0., 1.])):
    '''Blends between two images.
    - when blend_coeff=0 -> image 0 [slider to the left ]
    - when blend_coeff=1 -> image 1 [slider to the right]
    '''
    return (1 - blend_coeff) * img0 + blend_coeff * img1


# You can change the backend to mpl instead of qt here.
@interactive_pipeline(gui="qt", size="fullscreen")
def sample_pipeline(input_image):
    exposed = exposure(input_image)
    bnw_image = black_and_white(input_image)
    blended = blend(exposed, bnw_image)
    return exposed, blended, bnw_image


if __name__ == '__main__':
    input_image = np.array([0., 0.5, 0.8]) * np.ones((256, 512, 3))
    sample_pipeline(input_image)
```
:heart: This code will display a GUI with three images. The middle one is the result of the `blend` filter.
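The blend itself is plain numpy linear interpolation and can be checked outside any GUI (illustration only):

```python
import numpy as np

def blend(img0, img1, blend_coeff=0.5):
    # Linear interpolation between the two images.
    return (1 - blend_coeff) * img0 + blend_coeff * img1

img0 = np.zeros((2, 2, 3))
img1 = np.ones((2, 2, 3))
print(blend(img0, img1, 0.0).mean())  # 0.0 -> pure img0 (slider to the left)
print(blend(img0, img1, 1.0).mean())  # 1.0 -> pure img1 (slider to the right)
print(blend(img0, img1, 0.5).mean())  # 0.5 -> halfway blend
```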
Notes:
- If you write `def blend(img0, img1, blend_coeff=0.5):`, blend_coeff will simply not be a slider on the GUI anymore.
- If you write `blend_coeff=[0., 1.]`, blend_coeff will be a slider initialized to 0.5, the middle of the range.
- If you write `bnw=(True, "black and white", "k")`, the checkbox will disappear and be replaced by a keypress event (press `k` to enable/disable black & white).

```python
from interactive_pipe import interactive, interactive_pipeline
import numpy as np

COLOR_DICT = {"red": [1., 0., 0.], "green": [0., 1., 0.], "blue": [0., 0., 1.], "gray": [0.5, 0.5, 0.5]}


@interactive()
def generate_flat_colored_image(color_choice=["red", "green", "blue", "gray"], context={}):
    '''Generate a constant colorful image'''
    flat_array = np.array(COLOR_DICT.get(color_choice)) * np.ones((64, 64, 3))
    context["avg"] = np.average(flat_array)
    return flat_array
```
The `color_choice` list will be turned into a nice dropdown menu. The default value here will be red, as it is the first element of the list!

:bulb: Can filters communicate together?
Yes, using the special keyword argument `context={}`.
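Conceptually, `context` is just a shared dictionary handed to every filter: one filter writes a value, a downstream filter reads it. A plain-Python sketch of the idea (illustration only, not the library's internals):

```python
import numpy as np

# A shared dict plays the role of interactive_pipe's context.
context = {}

def producer(img, context={}):
    # Stores a statistic that downstream filters can read.
    context["avg"] = float(np.average(img))
    return img

def consumer(img, context={}):
    # Reads the statistic written by the producer.
    out = img.copy()
    if context["avg"] > 0.4:
        out[out.shape[0] // 2:, ...] = 0.
    return out

img = 0.5 * np.ones((4, 4, 3))
consumed = consumer(producer(img, context=context), context=context)
print(context["avg"])      # 0.5
print(consumed[-1].max())  # 0.0 -> bottom half was darkened
```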
`special_image_slice` is going to use that value to darken the bottom half of the image in case the average is high.

```python
@interactive()
def special_image_slice(img, context={}):
    out_img = img.copy()
    if context["avg"] > 0.4:
        out_img[out_img.shape[0] // 2:, ...] = 0.
    return out_img
```
```python
@interactive()
def switch_image(img1, img2, img3, image_index=(0, [0, 2], None, ["pagedown", "pageup", True])):
    '''Switch between 3 images'''
    return [img1, img2, img3][image_index]
```
Note that you can create a filter to switch between several images. In `["pagedown", "pageup", True]`, `True` means that image_index will wrap around (it will return to 0 as soon as it goes above the maximum value of 2).
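The wrap-around behaviour is ordinary modular arithmetic; a small illustration (not the library's code):

```python
def next_index(index, maximum, wrap=True):
    # Increment, wrapping back to 0 past the maximum,
    # as with the ["pagedown", "pageup", True] binding.
    if wrap:
        return (index + 1) % (maximum + 1)
    return min(index + 1, maximum)

indices = []
idx = 0
for _ in range(4):
    indices.append(idx)
    idx = next_index(idx, maximum=2)
print(indices)  # [0, 1, 2, 0]
```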
```python
@interactive()
def black_top_image_slice(img, top_slice_black=(True, "special", "k"), context={}):
    out_img = img.copy()
    if top_slice_black:
        out_img[:out_img.shape[0] // 2, ...] = 0.
    return out_img


@interactive_pipeline(gui="qt", size="fullscreen")
def sample_pipeline_generated_image():
    flat_img = generate_flat_colored_image()
    top_slice_modified = black_top_image_slice(flat_img)
    bottom_slice_modified_image = special_image_slice(flat_img)
    chosen = switch_image(flat_img, top_slice_modified, bottom_slice_modified_image)
    return chosen


if __name__ == '__main__':
    sample_pipeline_generated_image()
```
:question: Is there any difference between `global_params` and `context`?

No. `global_params`, `global_parameters`, `global_state`, `global_context`, `context` and `state` all mean the same thing and are all supported for legacy reasons. `context` is the preferred wording.
:question: Will anything break if I use a `KeyboardSlider` with the gradio or notebook backends?

No, don't worry, these will be mapped back to regular sliders!
:sound: Inside a processing block, write the audio file to disk and use `context["__set_audio"](audio_file)`.
`@interactive`

If you use the `@` decoration style, your function won't be usable in a regular manner (which may be problematic in a serious development environment):

```python
@interactive(angle=(0., [-360., 360.]))
def processing_block(angle=0.):
    ...
```

An alternative is to decorate the processing block outside, in a file dedicated to interactivity for instance:

```python
# core_filter.py
def processing_block(angle=0.):
    ...
```

```python
# graphical.py
from interactive_pipe import interactive
from core_filter import processing_block

def add_interactivity():
    interactive(angle=(0., [-360., 360.]))(processing_block)
```
:question: Can I call the pipeline in a command line/batch fashion?
Yes, headless mode is supported. :soon: documentation needed.
:question: Can I use inplace operations?
Better to avoid these in general. To avoid making extra copies, computing hashes everywhere and losing precious computation time, there are no checks that inputs are not modified in place.

```python
# Don't do that!
def bad_processing_block(inp):
    inp += 1
```
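A safe pattern is to copy the input before modifying it, so the caller's array (possibly cached or fed to other branches of the pipeline) stays untouched. A small numpy illustration:

```python
import numpy as np

def good_processing_block(inp):
    out = inp.copy()  # work on a copy so the caller's array stays untouched
    out += 1
    return out

img = np.zeros((2, 2))
result = good_processing_block(img)
print(img.sum())     # 0.0 -> original input unchanged
print(result.sum())  # 4.0
```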
Roadmap and todos
:bug: Want to contribute or interested in adding new features? Open a new GitHub issue
:gift: Want to dig into the code? Take a look at code_architecture.md
ISP means image signal processor.
:warning: Work in progress (no proper demosaicking, no denoiser, no tone mapping).