This implementation is based on the work of Gatys, Johnson, and Ulyanov. The style transfer network initially follows Gatys' approach; Johnson's fast neural style then made it much faster, and it incorporates Ulyanov's instance normalization in place of batch normalization.
The intuition behind style transfer is well described in the paper. Essentially, we extract content features from the content image and style features from the style image. We compute a total loss that is a linear combination of the content loss and the style loss defined in the paper, then iteratively update the output image to minimize that total loss so that the output matches both the content and style features.
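The loss described above can be sketched roughly as follows. This is a minimal PyTorch sketch, not the actual implementation: the function names (`gram_matrix`, `total_loss`) and the default weights are my own illustrative choices, and it assumes the usual Gatys-style formulation with MSE content loss and Gram-matrix style loss.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (batch, channels, height, width) feature map from a network layer.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    # The Gram matrix captures correlations between channels,
    # which is what the style loss compares.
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(out_content, target_content, out_styles, target_styles,
               content_weight=1.0, style_weight=1e5):
    # Content loss: MSE between the output's and content image's
    # feature maps at a chosen layer.
    content_loss = F.mse_loss(out_content, target_content)
    # Style loss: MSE between Gram matrices at each chosen style layer.
    style_loss = sum(
        F.mse_loss(gram_matrix(o), gram_matrix(t))
        for o, t in zip(out_styles, target_styles)
    )
    # Linear combination of the two, as in the paper.
    return content_weight * content_loss + style_weight * style_loss
```

The weights control the trade-off: a larger style weight pushes the output toward the style image's textures at the expense of content fidelity.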
I tried five styles: Udnie, The Great Wave off Kanagawa, The Scream, The Shipwreck, and a random child's crayon drawing. I then applied the styles to a photo I took from Arthur's Seat looking toward Edinburgh. These are the results.