JJassonn69 opened 3 months ago
To address your question about video vs frames: I think it is desirable to return frames using a VideoResponse, as we currently don't have a precedent for returning files through go-livepeer, and work is being done to support returning a URL to cloud storage for client retrieval.
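For context, a minimal sketch of what such a VideoResponse could look like, assuming pydantic models and one Media entry per frame (the actual models in this PR may be shaped differently):

```python
# Rough sketch of a VideoResponse that returns frames rather than a file.
# Field names here are assumptions, not necessarily the PR's actual models.
from typing import List

from pydantic import BaseModel


class Media(BaseModel):
    url: str            # base64 data URL now; a cloud-storage URL later
    seed: int = 0
    nsfw: bool = False


class VideoResponse(BaseModel):
    # One inner list of Media per output video, ordered frame by frame.
    frames: List[List[Media]]
```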
Changed an optional argument in the routes to resolve issues with the OpenAPI and Go API bindings.
Added the frame interpolation pipeline and its corresponding routes; tested in the uvicorn server.
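As a rough illustration of the route shape, a minimal sketch; the path and form fields here are hypothetical, not necessarily the exact code in this PR:

```python
# Minimal sketch of the route shape; the path and form fields are
# assumptions, not necessarily the exact code in this PR.
from fastapi import APIRouter, File, Form, UploadFile

router = APIRouter()


@router.post("/frame-interpolation")
async def frame_interpolation(
    video: UploadFile = File(...),  # input video (the new default input)
    model_id: str = Form(""),
    inter_frames: int = Form(2),    # frames to synthesize between each pair
):
    raw = await video.read()
    # Decode the video into frames, run the FILM model, and return the
    # interpolated frames base64-encoded (see the encoding sketch below).
    return {"frames": []}
```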
Added utility functions for a directory reader and writer, since the pipeline expects frames in a very specific layout (the model expects the images in the directory to be indexed so that sorting the filenames lines them up frame by frame).
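For illustration, a minimal sketch of such a writer/reader pair; the helper names are hypothetical, not the utilities actually added in this PR:

```python
# Sketch of directory frame I/O with zero-padded, sort-stable filenames.
# Helper names are hypothetical, not the utilities added in this PR.
import os
from typing import List

from PIL import Image


def write_frames(frames: List[Image.Image], out_dir: str) -> None:
    """Write frames as 000000.png, 000001.png, ... so that sorting the
    filenames reproduces the temporal order."""
    os.makedirs(out_dir, exist_ok=True)
    for i, frame in enumerate(frames):
        frame.save(os.path.join(out_dir, f"{i:06d}.png"))


def read_frames(in_dir: str) -> List[Image.Image]:
    """Read the frames back in temporal order by sorting the filenames."""
    names = sorted(n for n in os.listdir(in_dir) if n.endswith(".png"))
    return [Image.open(os.path.join(in_dir, n)) for n in names]
```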
Minor correction to upscaling.py, where the naming in the info was incorrect.
Added a line in the dl_checkpoints.sh script to download the model from the djes GitHub repo; this might need updating later to make it dynamic.
The input is changed to a video by default, which makes for a cleaner implementation, and the endpoint returns the frames of the interpolated video encoded in base64.
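A minimal sketch of that encoding step, assuming PIL frames and a simple dict response (the actual response model may differ):

```python
# Sketch of the base64 encoding step for the returned frames.
# The exact response model may differ from this simple dict.
import base64
import io
from typing import List

from PIL import Image


def frame_to_data_url(frame: Image.Image) -> str:
    """Encode one frame as a base64 JPEG data URL."""
    buf = io.BytesIO()
    frame.convert("RGB").save(buf, format="JPEG")
    return "data:image/jpeg;base64," + base64.b64encode(buf.getvalue()).decode()


def frames_to_response(frames: List[Image.Image]) -> dict:
    """Package all interpolated frames in order for the client to decode."""
    return {"frames": [{"url": frame_to_data_url(f)} for f in frames]}
```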
Building the Docker image is similar to segment_anything_2, using a separate Dockerfile to isolate the dependencies:
```bash
docker buildx build -f Dockerfile.frame_interpolation -t livepeer/ai-runner:frame_interpolation .
```
The Docker image can be run in a similar way to the other pipelines:
```bash
docker run --name frame_interpolation_runner -e MODEL_DIR=/models -e PIPELINE=frame-interpolation -e MODEL_ID=film_net_fp16.pt --gpus 0 -p 8002:8000 -v ~/.lpData/models:/models livepeer/ai-runner:frame_interpolation
```
The new frame-interpolation image is built in isolation from the other general pipelines, so there won't be any conflicts while building it. However, generating the OpenAPI bindings requires some dependencies that are only used by the frame-interpolation pipeline, so the requirements.txt file will have to be updated with those missing dependencies.