Best workflow for assets of widely differing technical quality?

In e-commerce we have to deal with a mixture of manufacturer-provided imagery and images we produce ourselves. Manufacturer assets are frequently heavily compressed already; ideally, Gumlet would only convert them to WebP or AVIF.

Photos we took ourselves have large pixel dimensions and are uncompressed, so they could use resizing, compression and file-format conversion.

Is Gumlet's compressor smart enough to decide what is best for such diverse images? As a layman I would expect there to be typical compression markers inside the image data that a compressor could detect, or even some AI that "visually" spots nasty compression artefacts :grinning:.

Or is segmenting our assets into "untreated" and "precompressed" the best thing to do?
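
In case a concrete example helps to show what I mean by "compression markers": here is a rough sketch of how we could pre-sort our assets ourselves. The Pillow library is assumed, and the folder name and the quantization threshold are just made-up examples, nothing Gumlet-specific:

```python
# Rough sketch: bucket assets into "precompressed" vs "untreated" by
# looking at the file format and, for JPEGs, the quantization tables.
# Higher quantization values mean stronger (lossier) prior compression.
from pathlib import Path
from PIL import Image

def looks_precompressed(path: Path) -> bool:
    with Image.open(path) as img:
        if img.format == "JPEG":
            tables = img.quantization  # dict: table id -> 64 coefficients
            avg = sum(sum(t) for t in tables.values()) / (64 * len(tables))
            return avg > 10  # threshold is a guess, tune it on your own assets
        # PNG/TIFF are lossless containers, so we treat them as "untreated"
        # here, even though they may hide previously compressed pixels.
        return False

for p in Path("assets").glob("**/*"):  # "assets" is a placeholder folder
    if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".tif", ".tiff"}:
        bucket = "precompressed" if looks_precompressed(p) else "untreated"
        print(bucket, p)
```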

Hi,
Our compressor automatically detects whether an image has already been compressed. If the image is already of low quality, we will not apply any further compression and it will look the same as before. If you upload your own very high-resolution images, Gumlet will compress them.

In short, it's smart enough to detect the original quality and adapt the compression.

Thank you, Aditya,
that sounds like the way it should be. Does this compression detection also work for already-compressed images that have been re-edited?

Concrete case: a manufacturer has released compressed material, but it all uses a background colour that doesn't match our shop design. Or the manufacturer has placed logos in all of the images (which we already show elsewhere on the product page).

So we remove what doesn't fit. The result of our edits is a PNG that claims to be losslessly saved, but it obviously still contains the heavily compressed pixel information.
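
To make the point concrete, this is roughly how one could check whether a "lossless" PNG still carries the 8×8 block grid of an earlier JPEG compression (numpy and Pillow assumed; the file name and the interpretation of the ratio are just illustrative):

```python
# Rough sketch: a PNG can be saved losslessly and still show JPEG block
# artefacts from an earlier compression step. Compare pixel jumps at
# 8x8 block boundaries with jumps elsewhere in the image.
import numpy as np
from PIL import Image

def blockiness_ratio(path: str) -> float:
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    col_diff = np.abs(np.diff(gray, axis=1))      # jumps between neighbouring columns
    cols = np.arange(col_diff.shape[1])
    boundary = col_diff[:, cols % 8 == 7].mean()  # jumps across the 8px grid lines
    interior = col_diff[:, cols % 8 != 7].mean()  # jumps inside the blocks
    return boundary / max(interior, 1e-6)

ratio = blockiness_ratio("reedited_product_shot.png")  # placeholder file name
# Close to 1.0: no visible block grid. Clearly above 1.0: the pixels were
# probably JPEG-compressed before the lossless re-save.
print(f"blockiness ratio: {ratio:.2f}")
```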

Is your algorithm smart enough for this case as well?

Pixel information inside a losslessly saved PNG should also work fine, and we won't degrade the quality. You can give it a try by checking a few images processed through our platform. Please let us know in support if you still face any issues.
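
If you prefer to compare programmatically rather than by eye, a minimal sketch like this works for spot checks. The two URLs are placeholders for your origin image and the version served through your Gumlet source; requests and Pillow are assumed:

```python
# Rough sketch: fetch the original and the Gumlet-served image and
# measure how much the pixels actually differ after processing.
import io
import requests
from PIL import Image, ImageChops, ImageStat

ORIGINAL_URL = "https://origin.example.com/products/chair.png"          # placeholder
PROCESSED_URL = "https://your-subdomain.gumlet.io/products/chair.png"   # placeholder

def fetch(url: str) -> Image.Image:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return Image.open(io.BytesIO(resp.content)).convert("RGB")

original = fetch(ORIGINAL_URL)
processed = fetch(PROCESSED_URL).resize(original.size)  # align dimensions

# Mean absolute per-channel difference: 0 means identical pixels,
# small values mean the optimiser did not visibly degrade the image.
diff = ImageChops.difference(original, processed)
mean_diff = sum(ImageStat.Stat(diff).mean) / 3
print(f"mean pixel difference: {mean_diff:.2f} / 255")
```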