PizzaGAN (Generative Adversarial Network) is a neural network that can figure out which ingredients are on a pizza, how much of each, and how they are layered, just by looking at a single image. It was developed by a research team from MIT CSAIL and QCRI, who trained it to visualize what a pizza looks like with or without certain ingredients. Want it to add pepperoni but hold the olives? It can do that, at least virtually.
“In order to teach a machine to 'understand' food and its preparation, a natural approach is to teach it the conversion of raw ingredients to a complete dish, following the step-by-step instructions of a recipe,” the team said in the results they recently published on MIT’s website.
While PizzaGAN isn’t making real pizzas yet, it has been taught to generate the image of a layer that needs to be added (like that pepperoni). The challenges faced by this artificial brain included using separate GAN modules to reconstruct the steps of a recipe in the correct order.
It also had to learn how to partially block out ingredients from a picture of a pizza with all its different layers—needing to know the difference between pepperoni, olives, cheese and other toppings just by looking at the finished pizza so it would be able to add or remove them.
Each adding module was trained to make the added topping appear and, according to the team, generate “a mask that indicates the pixels of the new layer that are visible in the image after adding the layer.” Each removing module predicted what the layers underneath a removed topping would look like, along with an inverse mask that hides the removed topping’s pixels. That’s what would happen if it looked at a picture of a pizza with pepperoni and olives, but you decided you just wanted pepperoni.
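To make the mask idea concrete, here is a minimal sketch (not the team’s actual code) of how layer-based compositing with a visibility mask works: the generated topping replaces the pizza only where the mask is on, and the original image shows through everywhere else. The function names and toy arrays are illustrative assumptions.

```python
import numpy as np

def add_layer(pizza, topping, mask):
    """Blend a generated topping into the pizza where mask == 1."""
    return mask * topping + (1.0 - mask) * pizza

def remove_layer(pizza, underlying, mask):
    """Inverse operation: where the mask marks the removed topping,
    reveal the predicted underlying layer instead."""
    return mask * underlying + (1.0 - mask) * pizza

# Toy 2x2 grayscale "images" standing in for generator outputs.
pizza = np.full((2, 2), 0.5)       # plain pizza
topping = np.full((2, 2), 0.9)     # generated pepperoni layer
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])      # topping visible only top-left

with_pepperoni = add_layer(pizza, topping, mask)
```

Removing the topping with the same mask and a predicted underlying layer recovers the original image, which is the symmetry the adding and removing modules exploit.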
The team experimented on both real and synthetic pizza images to teach PizzaGAN how to add and remove ingredients and even cook or uncook the pizza. What resulted is a program that doesn't need much human supervision to put together a decent pie (at least on a digital screen) with 88 percent accuracy. Robots may have already been taught to deliver pizza, but it’s going to take at least a little while before robot-made pizza will be showing up at anyone’s door.
This type of AI doesn't stop at pizza. The team believes it can be useful for other layered foods such as sandwiches and salads, and even digital shopping assistants that can instantly show you what you look like with or without different layers of clothes.
Don’t be surprised if a robot is soon putting together your outfits as if you were a pizza.