Online Data Augmentation and Subject Enhancement for Context-Aware Image Inpainting

  • Conference paper
  • First Online:

Pattern Recognition and Computer Vision (PRCV 2025)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 16279)

Included in the following conference series: Pattern Recognition and Computer Vision (PRCV)

Abstract

In the field of image editing, inpainting tasks that aim to integrate customized elements into a given image context pose significant challenges. Existing methods often fail to adequately incorporate contextual features from the source image and exhibit limitations in maintaining subject consistency. This paper presents a novel approach to subject-driven image inpainting, leveraging a context-aware framework to achieve superior semantic integration of subjects. Our method introduces two key innovations. First, we employ an online augmented dataset constructed through image stitching, enabling the diffusion inpainting model to learn contextual embeddings of subject images. Second, we enhance the model’s semantic understanding by integrating subject image encoding into the diffusion UNet via cross-attention mechanisms and incorporating a prompt identifier. Through extensive experimentation, we demonstrate state-of-the-art performance across overall quality score metrics, surpassing recent methods in the field.
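
Both ingredients admit a compact illustration. The sketch below is a hypothetical reading of the abstract, not the authors' code: `stitch_pair` builds an online training pair by pasting a subject crop into a context image and recording the pasted region as the inpainting mask, and `SubjectCrossAttention` shows one way flattened UNet features could attend to subject-image tokens (e.g. CLIP patch embeddings). The function names, placement policy, and dimensions are all illustrative assumptions.

```python
# Hypothetical sketch of the two components described in the abstract;
# names, placement policy, and dimensions are assumptions for illustration.
import random

import torch
import torch.nn as nn
from PIL import Image


def stitch_pair(context_img: Image.Image, subject_img: Image.Image,
                canvas_size: int = 512):
    """Paste a resized subject into a context image; return the stitched
    image and a binary mask marking the pasted (to-be-inpainted) region."""
    canvas = context_img.convert("RGB").resize((canvas_size, canvas_size))
    mask = Image.new("L", (canvas_size, canvas_size), 0)

    # Assumed placement policy: a random box spanning 1/4 to 1/2 of each side.
    w = random.randint(canvas_size // 4, canvas_size // 2)
    h = random.randint(canvas_size // 4, canvas_size // 2)
    x = random.randint(0, canvas_size - w)
    y = random.randint(0, canvas_size - h)

    canvas.paste(subject_img.convert("RGB").resize((w, h)), (x, y))
    mask.paste(Image.new("L", (w, h), 255), (x, y))
    return canvas, mask


class SubjectCrossAttention(nn.Module):
    """Toy cross-attention block: flattened UNet features (queries) attend
    to subject-image tokens (keys/values) and are updated residually."""

    def __init__(self, dim: int = 320, subj_dim: int = 768, heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True,
                                          kdim=subj_dim, vdim=subj_dim)

    def forward(self, x: torch.Tensor, subj_tokens: torch.Tensor):
        # x: (B, N, dim) flattened feature map; subj_tokens: (B, M, subj_dim).
        out, _ = self.attn(self.norm(x), subj_tokens, subj_tokens)
        return x + out  # residual injection of subject information
```

During training, the mask from `stitch_pair` would drive the inpainting objective while the subject tokens condition the denoiser; where such a block sits in the UNet, and how the prompt identifier is handled, follows the paper rather than this sketch.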



Acknowledgements

This research was supported by the National Natural Science Foundation of China (NSFC, Grant No. 62021002).

Author information


Corresponding author

Correspondence to Bin Wang.



Copyright information

© 2026 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Yin, H., Zhu, J., Yong, J., Wang, B. (2026). Online Data Augmentation and Subject Enhancement for Context-Aware Image Inpainting. In: Kittler, J., et al. (eds.) Pattern Recognition and Computer Vision. PRCV 2025. Lecture Notes in Computer Science, vol 16279. Springer, Singapore. https://doi.org/10.1007/978-981-95-5682-3_33
