Our technique learns different retouching styles from a single example before-after image pair; the example pairs can be found in Additional Results (left: top row; right: bottom row).

Abstract

Photo retouching is a difficult task for novice users as it requires expert knowledge and advanced tools. Photographers often spend a great deal of time generating high-quality retouched photos with intricate details. In this paper, we introduce a one-shot learning technique that automatically retouches the details of an input image based on just a single pair of before and after example images. Our approach provides accurate and generalizable detail edit transfer to new images. We achieve this by proposing a new representation for image-to-image maps. Specifically, we propose neural-field-based transformation blending in patch space to define patch-to-patch transformations for each frequency band. This parametrization of the map via anchor transformations with associated weights, operating on spatio-spectrally localized patches, allows us to capture details well while remaining generalizable. We evaluate our technique on both known ground-truth filters and artist retouching edits. Our method accurately transfers complex detail retouching edits.


Transformation Blending

We introduce a learning-based image mapping technique in patch space designed to capture highly detailed retouching styles. Inspired by artistic retouching pipelines, our approach accurately represents intricate details while preserving the semantics of the input image. This prior knowledge of the retouching pipeline also allows our map representation to generalize well to new images with different structures.
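
As a rough illustration (not the authors' implementation), the Python sketch below decomposes a grayscale image into frequency bands and gathers localized patches per band, giving the kind of spatio-spectral patch space the map operates in; the function names and parameters (`num_bands`, `sigma`, `patch_size`, `stride`) are assumptions made for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_bands(image, num_bands=3, sigma=2.0):
    """Split a grayscale image into detail bands plus a low-frequency residual
    (a Laplacian-pyramid-style decomposition without downsampling)."""
    bands, current = [], image.astype(np.float32)
    for _ in range(num_bands - 1):
        low = gaussian_filter(current, sigma=sigma)
        bands.append(current - low)  # high-frequency detail band
        current = low
    bands.append(current)            # remaining low-frequency content
    return bands

def extract_patches(band, patch_size=8, stride=8):
    """Collect patches from one band, flattened to vectors, with their locations."""
    h, w = band.shape
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(band[y:y + patch_size, x:x + patch_size].reshape(-1))
            coords.append((y, x))
    return np.stack(patches), coords
```

A per-band map can then be defined directly on these patch vectors, and the retouched image recovered by transforming the patches of each band and summing the reconstructed bands.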

Patch-adaptive Weights

Transformations are tuned to patch values and hence achieve local adjustments: weights predicted by the MLP block blend several anchor transformation matrices. This lets us mimic professionals' local editing tools, such as brushes or local filters.
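
Below is a minimal PyTorch sketch of this idea, assuming flattened patches of one frequency band as input; the class name `BlendedPatchTransform` and hyperparameters such as `num_anchors` and `hidden` are illustrative choices, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class BlendedPatchTransform(nn.Module):
    """Apply a patch-adaptive blend of K anchor transformations to flattened patches."""

    def __init__(self, patch_dim: int, num_anchors: int = 8, hidden: int = 64):
        super().__init__()
        # K learnable D x D anchor matrices, initialized near the identity
        # (small noise breaks the symmetry between anchors).
        init = torch.eye(patch_dim).unsqueeze(0).repeat(num_anchors, 1, 1)
        self.anchors = nn.Parameter(init + 0.01 * torch.randn_like(init))
        # Small MLP that maps a patch vector to K blending weights.
        self.weight_mlp = nn.Sequential(
            nn.Linear(patch_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_anchors),
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (N, D) flattened input patches from one frequency band.
        weights = torch.softmax(self.weight_mlp(patches), dim=-1)      # (N, K)
        blended = torch.einsum('nk,kij->nij', weights, self.anchors)   # (N, D, D)
        return torch.einsum('nij,nj->ni', blended, patches)            # (N, D)
```

In a one-shot setting, such a module could be fit per frequency band by minimizing a reconstruction loss between the transformed "before" patches and the corresponding "after" patches of the single example pair.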

Additional Results

Our technique automatically retouches photos by learning the desired edits from one example before-after pair (indicated in the images or shown in the small boxes). The learned retouches accurately capture edits to intricate details, such as skin textures, as shown in the input-retouched pairs.

Acknowledgements

The website template was borrowed from Ref-NeRF.