DrawNet: a neural painting network with attention module
Fachao Zhang, Xiaoman Liang, Hongtao Mou, Yuan Qin, Yue Chen, Huihuang Zhao
Abstract

We propose an image-to-painting translation method that generates paintings on a stroke-by-stroke basis. Unlike previous pixel-to-pixel or sequential optimization methods, our method produces a set of physically meaningful stroke parameters, which is closer to the way humans draw; these parameters can then be rendered by a neural renderer. We add an attention mechanism to the proposed renderer to improve the quality of the painted images, and we use a smooth L1 loss when training the renderer so that the model converges faster. Our method can also be combined with neural style transfer, where we use a Visual Geometry Group (VGG) perceptual loss to obtain more realistic results. Experimental results show that the renderer used in our method outperforms other renderers, improving the peak signal-to-noise ratio by 4.9% compared with previous renderers.
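As a minimal, illustrative sketch (not the authors' released code), the snippet below shows how the two losses named in the abstract could be set up in PyTorch: a smooth L1 loss for supervising the stroke renderer, and a VGG-16 perceptual loss for the style transfer stage. The choice of VGG-16, the feature layers compared, and all variable names here are assumptions for illustration only.

import torch
import torch.nn as nn
import torchvision.models as models

# Smooth L1 (Huber) loss between the rendered canvas and the ground-truth
# rendering; the abstract credits this choice with faster renderer convergence.
smooth_l1 = nn.SmoothL1Loss()

class VGGPerceptualLoss(nn.Module):
    """Perceptual loss comparing VGG-16 feature maps of two images.
    The layers used (relu1_2, relu2_2, relu3_3) are an assumed choice."""
    def __init__(self):
        super().__init__()
        # Pretrained VGG-16 feature extractor, frozen (weights are downloaded).
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.slices = nn.ModuleList([vgg[:4], vgg[4:9], vgg[9:16]])
        self.criterion = nn.L1Loss()

    def forward(self, x, y):
        loss = 0.0
        for block in self.slices:
            x, y = block(x), block(y)
            loss = loss + self.criterion(x, y)
        return loss

# Usage sketch: renderer_out and target are (N, 3, H, W) image batches.
renderer_out = torch.rand(2, 3, 128, 128)
target = torch.rand(2, 3, 128, 128)
render_loss = smooth_l1(renderer_out, target)          # renderer training loss
perc_loss = VGGPerceptualLoss()(renderer_out, target)  # style-transfer stage loss
print(render_loss.item(), perc_loss.item())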

© 2024 SPIE and IS&T
Fachao Zhang, Xiaoman Liang, Hongtao Mou, Yuan Qin, Yue Chen, and Huihuang Zhao "DrawNet: a neural painting network with attention module," Journal of Electronic Imaging 33(6), 063037 (29 November 2024). https://doi.org/10.1117/1.JEI.33.6.063037
Received: 15 April 2024; Accepted: 14 November 2024; Published: 29 November 2024