rharter opened this issue 9 years ago
Issue #506 (Save to file API) should handle this use case. It's still in the pipeline though.
If you have a transformation, we would have to decode the bitmap in order to run it.
load(url).save(file); // Bytes streamed to disk
load(url).resize(50, 50).save(file); // Bytes streamed to BitmapFactory, written to disk.
load(url).transform(circle()).save(file); // Bytes streamed to BitmapFactory, written to disk.
However, there are all kinds of problems when you start dealing with alternate actions.
What happens when I do the following at the same time?
load(url).save(file);
load(url).into(imageView);
Here we want to stream bytes into the file as they're read, but we also want BitmapFactory to be streaming bytes off the network call and performing the decode.
This becomes very challenging to do without Okio behind the scenes.
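For what it's worth, here's a rough sketch of how that dual consumption could look with Okio behind the scenes: a tee that copies bytes into a file sink as BitmapFactory pulls them off the network source. None of this is Picasso code, and it assumes the decoder reads the stream to the end.

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import java.io.File
import okio.Buffer
import okio.BufferedSink
import okio.ForwardingSource
import okio.Source
import okio.buffer
import okio.sink

// Copies every byte read from the upstream source into a side-channel sink.
class TeeSource(upstream: Source, private val sideChannel: BufferedSink) : ForwardingSource(upstream) {
    override fun read(sink: Buffer, byteCount: Long): Long {
        val read = super.read(sink, byteCount)
        if (read > 0L) {
            // Copy only the bytes appended by this read, without consuming them from `sink`.
            sink.copyTo(sideChannel.buffer, sink.size - read, read)
            sideChannel.emitCompleteSegments()
        }
        return read
    }
}

// Decode the bytes with BitmapFactory while the very same bytes are streamed to disk.
fun decodeAndSave(network: Source, file: File): Bitmap? =
    file.sink().buffer().use { fileSink ->
        TeeSource(network, fileSink).buffer().inputStream().use { stream ->
            BitmapFactory.decodeStream(stream)
        }
    }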
Yeah, I'll admit, I had the Okio architecture in mind when I thought this up. The alternate action is why I was thinking of having a Decoder object in there. So it would be something like this:
load(url).decode(new FileDecoder()).save(file);
load(url).decode(new BitmapDecoder()).into(imageView);
To take that a step further, my initial thought was not to have a save and an into, but just into, with the imageView case basically being a convenience method that provides a default Target&lt;Bitmap&gt;. So here, the target would be a generic type, and take whatever type the decoder outputs.
load(url).decode(new FileDecoder()).into(Target<File>() {...});
load(url).decode(new BitmapDecoder()).into(Target<Bitmap>() {...});
load(url).decode(new MetadataDecoder()).into(Target<Metadata>() {...});
In the case of into(imageView), that convenience method would just call into(Target&lt;? extends Decoder&gt; target), passing it a Target&lt;Bitmap&gt; that simply sets the bitmap on the image view.
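A minimal sketch of that delegation, with a hypothetical single-result Target and creator just to show the shape:

import android.graphics.Bitmap
import android.widget.ImageView

interface Target<T> {
    fun onResult(result: T)
}

interface RequestCreator<T> {
    fun into(target: Target<T>)
}

// into(imageView) is just sugar over the generic into(Target<Bitmap>).
fun RequestCreator<Bitmap>.into(imageView: ImageView) = into(object : Target<Bitmap> {
    override fun onResult(result: Bitmap) = imageView.setImageBitmap(result)
})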
Shortly after the debate that led to the v2 API, it became obvious that we really needed a low-level API and a high-level API, with the latter built on the former. This would provide the power and granularity of the underlying mechanisms (dispatcher, request handler, downloader) without the burden of the simple API for those who wanted to do more advanced things.

What you describe sounds very similar to what this low-level API could be. Maybe the distinction between the two doesn't have to be so binary, and could rather be a progressive-enhancement API. One thing I'm sure of is that we want to isolate users of the simple API from the more advanced methods. Perhaps just a different load method (like buildRequest or something) that returned a superset.
Anyways, I think it's worth exploring with the caveat that no one is really focusing on Picasso work. I'm more than happy to participate in driving the discussion around use cases, historical lessons, and the API, but I don't have much bandwidth for implementation.
I'll try to capture my thoughts on what the data pipeline is later tonight and post them here.
It would be nice to get this architecture change in, where decoding the bitmap is a target for bytes that is separate from obtaining those bytes.
NB: This grew much larger than anticipated and should probably be broken up.
This is super old, but I'd like to revisit it to discuss some of the ideas in here and how they might fit into a high-level/low-level API.
To begin, I see the pipeline of loading an image with Picasso in 4 steps:

1. The load(uri) call, which creates a request and opens an InputStream from wherever.
2. The decode, via BitmapFactory/ImageDecoder. A pluggable version of this is implemented in #1890.
3. The transformations, which are applied to the decoded image.
4. The into(ImageView) step, which is responsible for loading the actual image into the view.

Currently, there are several assumptions made: that the source will always be a Uri (or Uri-like value), that the image will always be a bitmap, that the target is always an image view, etc. While these likely fit most simple use cases, there is an opportunity to make this a much more feature-rich, general-purpose API without complicating things for the simple use case.
When breaking out these steps it becomes clear that this is really a pipeline. The current pipeline combines some steps and makes assumptions about types, but I believe it can be opened up using generics to be much more flexible, while still being able to retain the simplicity of the API for most users.
Here is a diagram depicting the pipeline as I see it. The matching sources will need to be paired with matching sinks, but I believe that can be done with type specific pipeline steps.
If we consider each step in this pipeline, we can allow this type of custom functionality by letting the user customize the input/output type of each step. I'm not sure of the best way to handle the chaining, but I think it can be done in a way that keeps the API very clean. I assume there's already a design pattern for this type of chaining. 😄
Some assumptions of input/output can still be made. For instance, the fetch step can always return a BufferedSource, which the Decoder will consume. Assuming this illustration includes parameters for each type's Input (I) and Output (O), the types might look like this:
Request<I, BufferedSource>
Decoder<BufferedSource, O>
Transformation<I, O>
Target<I, O>
Here are some examples of what the API might look like.
// Simple request: No Api updates required
picasso.load(uri) // returns a RequestCreator<Uri, *>
.into(imageView) // a Target<*, ImageView> is used to set the image on the ImageView
// With transformations: This requires the user to add a single `asBitmap()` line to their request so we know what kind of transformations to support.
picasso.load(uri) // returns RequestCreator<Uri, *>
.asBitmap() // refines into BitmapRequestCreator : RequestCreator<Uri, Bitmap>
.resize(50, 50) // BitmapRequestCreator adds all of the Bitmap specific transformations
.into(file) // A Target<Bitmap, File> is created to save the Bitmap to the file, and the pipeline is executed.
// With Animated Gifs
picasso.load(uri) // returns a RequestCreator<Uri, *>
.asGif() // refines into AnimatedRequestCreator : RequestCreator<Uri, AnimatedDrawable>
.autoPlay() // AnimatedRequestCreator adds all of the AnimatedDrawable specific transformations
.into(imageView) // A Target<AnimatedDrawable, ImageView> is created to set the image
// With custom types
picasso.load(myProject) // returns a RequestCreator<Project, *>
.asDrawable() // refines into a DrawableRequestCreator : RequestCreator<Project, Drawable>
.opacity(.9f) // DrawableRequestCreator adds all of the Drawable specific transformations
.into(imageView) // A Target<Drawable, ImageView> is created to set the image
When used in Kotlin, extension functions could be leveraged to make the API extremely clean, by adding things like toFoo() methods to the RequestCreator class so users can avoid the parameterized versions altogether.
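Purely as a sketch (the RequestCreator here is a stand-in, and asType is an illustrative name, not proposed API), the idea could look like this:

import android.graphics.Bitmap
import android.graphics.drawable.Drawable

// Stand-in for a source/output parameterized request creator.
class RequestCreator<Source, Output>(val source: Source, val outputType: Class<Output>)

// One reified refinement does the generic work...
inline fun <Source, reified Output> RequestCreator<Source, *>.asType(): RequestCreator<Source, Output> =
    RequestCreator(source, Output::class.java)

// ...and small extensions give callers a clean, parameter-free vocabulary.
fun <S> RequestCreator<S, *>.asBitmap(): RequestCreator<S, Bitmap> = asType()
fun <S> RequestCreator<S, *>.asDrawable(): RequestCreator<S, Drawable> = asType()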
The fetch is always the starting point of the pipeline, used to retrieve images as a BufferedSource from any location. By making the RequestCreator generic, this step can be opened up to a wide range of image sources, including network sources, file system sources, image fonts, custom file types, etc.
The current API has 4 load methods, which are effectively some predefined special cases, but limit the user to exactly those 4 cases.
fun load(file: File): RequestCreator
fun load(resourceId: Int): RequestCreator
fun load(path: String): RequestCreator
fun load(uri: Uri): RequestCreator
By adding a generic parameter to the RequestCreator, these can all still be provided for the simple cases, and a single new method can be added to address more advanced cases.
fun load(file: File): RequestCreator<File> = load<File>(file)
fun load(resourceId: Int): RequestCreator<Int> = load<Int>(resourceId)
fun load(path: String): RequestCreator<String> = load<String>(path)
fun load(uri: Uri): RequestCreator<Uri> = load<Uri>(uri)
fun <T> load(source: T): RequestCreator<T>
This will result in the RequestHandler class also being parameterized to accept any type in its load() method, and convert that type into a BufferedSource.
interface RequestHandler<T> {
    fun canHandleSource(source: T): Boolean
    fun load(source: T): BufferedSource
}
The generic version of this RequestCreator factory method will use a RequestHandler factory populated with RequestHandlers that can receive the specified type.
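A hedged sketch of what that factory lookup could look like, built on the RequestHandler interface above (none of this is existing Picasso API):

// Handlers are registered with the source type they accept; handlerFor() returns the
// first registered handler whose canHandleSource() accepts the value.
class RequestHandlerFactory {
    private val entries = mutableListOf<Pair<Class<*>, RequestHandler<*>>>()

    fun <T : Any> register(type: Class<T>, handler: RequestHandler<T>) {
        entries += type to handler
    }

    @Suppress("UNCHECKED_CAST")
    fun <T : Any> handlerFor(source: T): RequestHandler<T>? =
        entries.asSequence()
            .filter { (type, _) -> type.isInstance(source) }
            .map { (_, handler) -> handler as RequestHandler<T> }
            .firstOrNull { it.canHandleSource(source) }
}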
In my app I have project files for which I need to load project thumbnails. A project is currently stored as a directory on disk, but is imported/exported as a zip file. As the internal structure could change between versions, I'd like to create a RequestHandler that can simply take a project file and find the thumbnail.
class V1ProjectRequestHandler : RequestHandler<Project> {
    override fun canHandleSource(source: Project) = source.version == 1
    override fun load(source: Project): BufferedSource {
        // thumbnailSource is a BufferedSource of the project thumbnail
        // pageSource is a BufferedSource of the project page SVG, used
        // if a thumbnail hasn't been generated yet
        return source.thumbnailSource ?: source.pageSource
    }
}
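Hypothetical wiring for that handler, reusing the RequestHandlerFactory sketch above (myProject stands in for an actual project instance):

val handlers = RequestHandlerFactory().apply {
    register(Project::class.java, V1ProjectRequestHandler())
}

// The generic load() would resolve the handler through the factory; the result is
// the thumbnail bytes as a BufferedSource, ready for the decode step.
val source = handlers.handlerFor(myProject)?.load(myProject)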
It might make sense to allow this step to be repeated. For instance, if my project file in the example above contains a Uri to an image, it would be ideal to defer converting the Uri into a BufferedSource instead of having to reimplement that logic. This might be accomplished by simply passing the RequestHandlerFactory into the RequestHandler methods so they can defer that portion to another RequestHandler.
interface RequestHandler<T> {
    fun canHandleSource(handlers: RequestHandlerFactory, source: T): Boolean
    fun load(handlers: RequestHandlerFactory, source: T): BufferedSource
}
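Continuing the project example, a sketch of what that deferral could look like; Project.thumbnailUri and the factory's load() convenience are assumptions for this illustration:

import okio.BufferedSource

// A newer project format that only stores a thumbnail Uri defers the actual fetch
// to whatever handler is registered for Uri sources.
class V2ProjectRequestHandler : RequestHandler<Project> {
    override fun canHandleSource(handlers: RequestHandlerFactory, source: Project) =
        source.version == 2

    override fun load(handlers: RequestHandlerFactory, source: Project): BufferedSource =
        // handlers.load(...) is an assumed convenience: look up a handler for the
        // value and call its load(), passing the factory along.
        handlers.load(source.thumbnailUri)
}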
The decode step is largely implemented in #1890, so I won't detail it too much here. The idea is that a RequestHandler no longer handles decoding the image, instead deferring that to an ImageDecoder.

In the implementation in #1890, the ImageDecoder decodes a BufferedSource into a Bitmap or a Drawable (contained in the Image class). This is another assumption, and could maybe be parameterized.
Bitmaps and Drawables make sense to support things like animated GIFs, AVDs, etc. It might be too broad to try to cover more than that, because it also has an impact on the following steps in the pipeline, but this could be accomplished by adding a parameter for the decoder's return type.
interface Decoder<T> {
    fun decode(source: BufferedSource): T
}
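For example, a hedged sketch of a Decoder<Bitmap> built on BitmapFactory; a real implementation would plumb through resize and config options from the request:

import android.graphics.Bitmap
import android.graphics.BitmapFactory
import java.io.IOException
import okio.BufferedSource

class BitmapFactoryDecoder : Decoder<Bitmap> {
    override fun decode(source: BufferedSource): Bitmap =
        BitmapFactory.decodeStream(source.inputStream())
            ?: throw IOException("Stream could not be decoded as a bitmap")
}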
If this were the case, to ease the burden on the API, a method could be added that indicates the type to be returned, which users would call in order to use transformations.
class RequestCreator<Source> {
    fun asBitmap(): Request<Source, Bitmap> = `as`()
    fun asDrawable(): Request<Source, Drawable> = `as`()
    inline fun <reified T> `as`(): Request<Source, T> = Request<Source, T>(this)
}
picasso
.load(project)
.asBitmap()
Transformations allow the user to specify how an image is to be transformed, and they are dependent on the type of image being decoded. They parameterize both their input and output since the type could change as a result of the transformation. For instance, if you want to fade in a Bitmap, it might be converted into an AnimatedDrawable, which would result in a class looking like FadeTransformation<Bitmap, Drawable>.
interface Transformation<Input, Output> {
    fun transform(source: Input): Output
}
Ideally, adding a transformation would return a builder of some sort that understands the Output type and only exposes the relevant transformations.
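As a concrete example, a hedged sketch of a Bitmap-to-Bitmap transformation (a circular crop, like the circle() mentioned earlier in the thread), assuming transform() receives the decoded value and produces the Output directly:

import android.graphics.Bitmap
import android.graphics.BitmapShader
import android.graphics.Canvas
import android.graphics.Paint
import android.graphics.Shader

class CircleTransformation : Transformation<Bitmap, Bitmap> {
    override fun transform(source: Bitmap): Bitmap {
        val size = minOf(source.width, source.height)
        val output = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888)
        val paint = Paint(Paint.ANTI_ALIAS_FLAG).apply {
            shader = BitmapShader(source, Shader.TileMode.CLAMP, Shader.TileMode.CLAMP)
        }
        // Draw the source through a circular mask; whether to recycle `source` is left to the caller.
        Canvas(output).drawCircle(size / 2f, size / 2f, size / 2f, paint)
        return output
    }
}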
Finally, we need to deliver the downloaded, decoded, and transformed image to its destination. Currently that's assumed to be an ImageView in most cases, but it could be anything, like the Project example I used above. When the user calls this method the request is initiated and the pipeline is processed.
interface Target<Input, Destination> {
    fun deliver(image: Input, destination: Destination)
}
This is fairly simple, with the implementation simply delivering the image to its destination, whether that's an ImageView, a File, a custom type, or anything else.
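Two hedged sketches of what such targets could look like, one for the common ImageView case and one for the save-to-file case from the original issue (the names are illustrative):

import android.graphics.Bitmap
import android.widget.ImageView
import java.io.File
import okio.BufferedSource
import okio.buffer
import okio.sink

class ImageViewTarget : Target<Bitmap, ImageView> {
    override fun deliver(image: Bitmap, destination: ImageView) {
        destination.setImageBitmap(image)
    }
}

// Paired with a pass-through decoder, this streams the raw bytes straight to disk
// without ever decoding a Bitmap.
class FileTarget : Target<BufferedSource, File> {
    override fun deliver(image: BufferedSource, destination: File) {
        destination.sink().buffer().use { sink -> sink.writeAll(image) }
    }
}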
This may be related to #916, but I'll let you decide.
Picasso's RequestHandler setup is very powerful and would be great in many more scenarios than just filling an ImageView.
In my case, I want to copy the image at the URL returned from the Documents Provider into my file cache for future use. The Document Provider is a bit of a mess, since the result can be a ContentProvider Uri, or an HTTP Url, or a File Uri, or who knows what.
I'd like to save the file at full resolution, but that can be problematic if I have to do it by loading the entire bitmap into memory and then writing it out to a file. It would be much more convenient to get the InputStream from the URL and do what I want with it.
After some preliminary investigation, I think this can be accomplished by separating the BitmapHunter/RequestHandler's bitmap decoding functionality into a separate step in the process. So the RequestHandler would return just an InputStream, and that would get passed to a Decoder, which could be a BitmapDecoder or a FileDecoder. The Target could also be updated to be generic and handle whatever the result of the decoder is.

Here's what I see the API looking like:
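In rough pseudocode, with FileDecoder, BitmapDecoder, and the generic Target all being hypothetical types:

picasso.load(url).decode(FileDecoder()).into(fileTarget)   // fileTarget: a Target<File> that writes the stream to disk
picasso.load(url).decode(BitmapDecoder()).into(imageView)  // the familiar ImageView case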
Some questions I'd have here: what happens when a request has transformations, which require a BitmapDecoder, but also specifies a different decoder, so the stream will be decoded by that decoder in addition to a BitmapDecoder? Sounds messy.