An artificial intelligence system designed to render pictures in the styles of famous children’s book illustrators did not turn out quite as planned, a recent report revealed. According to the report, the resulting images were a far cry from child-friendly pictures, instead showing “apocalyptic nightmare” and “hellish” versions of children’s books.
The concept was inspired by a young child’s ability to recognize the artistic style of children’s book illustrator Korky Paul across various books. A team of researchers from Hacettepe University and Middle East Technical University in Turkey tested whether a computer could do the same. The researchers first trained a deep-learning algorithm to recognize patterns in a large dataset in order to distinguish the artistic styles of renowned children’s book illustrators such as Dr. Seuss, Beatrix Potter, Marc Brown, and Maurice Sendak. The training data comprised 6,438 illustrations from 223 children’s books, created by 24 illustrators.
The research team used a technique known as “style transfer” to produce the images. The technique works by allowing a neural network to learn the defining characteristics of a certain artist and apply them to a new image; apps like Prisma use the same technique. “Style transfer model combines the appearance of a style image, e.g. an artwork, with the content of another image, e.g. an arbitrary photograph, by minimizing the loss of content and style. In our case, style is learned from an illustration of a particular illustrator, and transferred to another image. The target image could be a cartoon, a natural photograph, or another illustration from another artist. We expect the resulting image to contain the content of the target image drawn with the style of source illustration,” the researchers told DailyMail.co.uk.
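The loss-minimization idea the researchers describe follows the standard neural style transfer formulation: a content loss compares network activations directly, while a style loss compares Gram matrices of activations, which capture texture and color correlations independent of layout. Below is a minimal numpy sketch of those two losses; the layer shapes and weighting constants are illustrative assumptions, and a real system would compute the features with a pretrained convolutional network and optimize the generated image against this combined objective.

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height*width) activations from one network layer.
    # The Gram matrix records correlations between channels, which is what
    # encodes an artist's "style" in this formulation.
    return features @ features.T

def style_loss(gen_feats, style_feats):
    # Mean squared difference between Gram matrices, with the usual
    # normalization by channel count and spatial size.
    c, n = gen_feats.shape
    diff = gram_matrix(gen_feats) - gram_matrix(style_feats)
    return np.sum(diff ** 2) / (4.0 * c**2 * n**2)

def content_loss(gen_feats, content_feats):
    # Direct squared difference between activations preserves layout/content.
    return 0.5 * np.sum((gen_feats - content_feats) ** 2)

def total_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    # alpha/beta trade off content fidelity against style match
    # (values here are illustrative, not from the paper).
    return alpha * content_loss(gen_feats, content_feats) \
         + beta * style_loss(gen_feats, style_feats)
```

In practice the generated image is initialized (often from the content image) and updated by gradient descent on `total_loss` until the content of the target survives in the style of the source illustration.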
The research team used three training models — AlexNet, VGG-19 and GoogLeNet — for the learning algorithm. According to the researchers, the AI system was able to detect certain parts and objects in the illustrations, such as eyes, fish, cars, wheels, houses, plants, people, and clothes. It was even capable of discriminating poses, such as side views of humans and animals, and picked up features such as hair, fur, and ears.
According to the experts, the AI system was able to distinguish the illustrators’ styles with a surprising 94 percent accuracy. However, it turned out that the AI system was mostly transferring colors from one image to another. This suggested that color patterns were the main feature the AI system had learned to use in order to distinguish one illustrator from another, the researchers said.
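The finding that color dominated the learned representation is plausible, because color statistics alone can separate many illustrators quite well. As a hedged illustration (not the researchers’ actual method), the sketch below classifies an image by comparing its color histogram to per-illustrator average histograms; all names and the nearest-centroid rule are assumptions for demonstration.

```python
import numpy as np

def color_histogram(image, bins=8):
    # image: (H, W, 3) uint8 RGB array.
    # Quantize each channel into `bins` levels and build a joint
    # color histogram, normalized to sum to 1.
    quantized = (image // (256 // bins)).reshape(-1, 3)
    idx = (quantized[:, 0] * bins * bins
           + quantized[:, 1] * bins
           + quantized[:, 2])
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def nearest_style(query_hist, centroids):
    # centroids: dict mapping illustrator name -> mean color histogram.
    # Return the illustrator whose average palette is closest.
    dists = {name: np.linalg.norm(query_hist - c)
             for name, c in centroids.items()}
    return min(dists, key=dists.get)
```

A classifier this crude would confuse illustrators with similar palettes, which is exactly why a style-transfer system that leans on color alone produces the lurid recolorings described next.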
With color being the only defining factor in the transferred illustrations, the resulting images were rather horrifying. A scene from Dr. Seuss’ The Cat in the Hat was recreated as an image resembling a blood splatter. Another illustration, featuring two children standing, was rendered as a rather creepy image. Illustrations of a smiling turtle and an otherwise snow-capped landscape both came out looking as if they were on fire.
On a related note, Google recently produced its own hellish rendering. A company engineer filtered an episode of the late Bob Ross’ television show The Joy of Painting through the artificial neural network called DeepDream, which is known for detecting objects and animals that are not actually present in a video. The episode originally featured Ross painting his iconic happy trees; the rendering, however, featured numerous bug-eyed creatures throughout the video.