Closed: marcaube closed this issue 12 years ago
The first problem is interesting, and it does appear to make sense; however, I'm not sure it's that simple. What you say is true, but in order to do that check, PHP would have to:
1) Receive the image request
2) Check whether the file exists in the cache (in your scenario it doesn't, but we'd still have to check)
3) Read the source image into memory
4) Create the new rescaled and re-sharpened image
5) Write that to disc
6) Read the new file's size
7) Read the original file's size
8) Compare the two, and if the new image is larger, delete it from disc
9) Copy the original image into the cache folder so the next request doesn't re-run steps 3-9
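For what it's worth, those steps could be sketched roughly like this (the function and parameter names are illustrative, not the actual Adaptive Images code, and it assumes JPEG input):

```php
<?php
// Rough sketch of steps 2-9 above, just to gauge the cost.
// Names and defaults are illustrative, not from the AI source.
function resizeWithSizeCheck(string $source, string $cache, int $newWidth, int $quality = 80): void
{
    if (file_exists($cache)) {           // step 2: already cached?
        return;
    }
    list($w, $h) = getimagesize($source);
    $newHeight = (int) round($h * $newWidth / $w);

    $src = imagecreatefromjpeg($source); // step 3: read source into memory
    $dst = imagecreatetruecolor($newWidth, $newHeight);
    imagecopyresampled($dst, $src, 0, 0, 0, 0, $newWidth, $newHeight, $w, $h);
    imagejpeg($dst, $cache, $quality);   // steps 4-5: rescale and write to disc
    imagedestroy($src);
    imagedestroy($dst);

    clearstatcache();                    // steps 6-8: compare the two file sizes
    if (filesize($cache) >= filesize($source)) {
        copy($source, $cache);           // step 9: cache the original instead
    }
}
```

After the fallback copy, the cached file is never larger than the original, so subsequent requests are served straight from the cache without repeating the work.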
I think that's overkill and stands to hurt server performance even more than the normal process does. I may run some tests to see if it's as bad as I expect, but right now I'm not sure it's worth it for what is likely to be a few dozen KB on generally large images.
Removing the metadata would be great; unfortunately, GDlib doesn't seem able to do that, and I can't make AI depend on non-standard PHP libraries. From what I've seen, GD doesn't add any metadata, nor does it seem to have any function for working with metadata or EXIF data.
I've been able to implement the progressive JPEG idea though, which does indeed shave off some file size. Thanks, that's in 1.5.1 :)
OK, that could be a big overhead for a website serving a lot of images. It's kind of an edge case, so I guess it's not that much of a problem.
As for the metadata, I looked with jhead, and GD adds a comment like this:
`Comment : CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 80`
If GD can add it, there must be a way to remove it without external dependencies.
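One dependency-free option (my own sketch, not anything in AI or GD) would be to strip the JPEG COM segments from the saved file by hand: a JPEG is a sequence of marker segments, and comments live in 0xFFFE segments, so you can copy everything except those. This assumes a straightforward segment layout like the files GD writes:

```php
<?php
// Hypothetical helper: remove JPEG COM (comment, 0xFFFE) segments by
// rewriting the marker stream up to SOS, then copying the scan data verbatim.
function strip_jpeg_comments(string $src, string $dst): bool
{
    $data = file_get_contents($src);
    if ($data === false || substr($data, 0, 2) !== "\xFF\xD8") {
        return false; // not a JPEG (no SOI marker)
    }
    $out = "\xFF\xD8";
    $pos = 2;
    $len = strlen($data);
    while ($pos + 4 <= $len && $data[$pos] === "\xFF") {
        $marker = ord($data[$pos + 1]);
        if ($marker === 0xDA) { // SOS: entropy-coded data follows, copy the rest
            $out .= substr($data, $pos);
            return file_put_contents($dst, $out) !== false;
        }
        // segment length is big-endian and includes its own two bytes
        $segLen = (ord($data[$pos + 2]) << 8) | ord($data[$pos + 3]);
        if ($marker !== 0xFE) { // keep every segment except COM
            $out .= substr($data, $pos, 2 + $segLen);
        }
        $pos += 2 + $segLen;
    }
    return false; // never reached SOS: malformed or unsupported layout
}
```

It's only a sketch (it doesn't handle fill bytes or standalone markers between segments), but it shows the comment can be dropped without external tools.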
Thanks for the progressive JPEG!
I might give it a re-think and see if I can get it optimised enough to be worth it :)
According to the GD documentation on php.net, there's nothing to be done about EXIF and metadata :(
No worries, cheers for the feedback and ideas :)
Sometimes the generated image is bigger than the original; since the goal is to save bandwidth, this is a problem. The script could check whether the new file size is smaller than the original's. This happens most often when the original image's width is close to a break-point: say, a 1000px-wide image with the default 992px break-point.
Also, I think the file size could be brought down further by removing metadata (GPS, camera info, etc.) from the resized pictures, like the EXIF data in JPEGs. GD even tends to add a comment when resizing.
I read somewhere that past 10 KB in file size, JPEG images should be saved as progressive; the greater the file size, the more progressive encoding helps. imageinterlace() does exactly that. Maybe we can save a tiny bit more there.
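For reference, enabling it is a one-liner before the save; the output path here is just an example:

```php
<?php
// Save a JPEG as progressive: call imageinterlace() before imagejpeg().
$img = imagecreatetruecolor(300, 200);
imagefilledrectangle($img, 0, 0, 299, 199, imagecolorallocate($img, 60, 120, 200));
imageinterlace($img, 1);                     // 1 = interlaced/progressive output
imagejpeg($img, '/tmp/progressive.jpg', 80); // progressive files use an SOF2 (0xFFC2) frame marker
imagedestroy($img);
```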