We have shown that it is possible to achieve artistic style transfer within a purely image processing paradigm. This is in contrast to previous work that utilized deep neural networks to learn the difference between "style" and "content" in a painting. We leverage the work by Kwatra et al. on texture synthesis to accomplish "style synthesis" from our given style images, building off the work of Elad and Milanfar. We have also introduced a novel "style fusion" concept that guides the algorithm to follow broader structures of style at a higher level while giving it the freedom to make its own artistic decisions at a smaller scale. Our results are comparable to the neural network approach, while improving speed and maintaining robustness to different styles and contents.