
These are what the Google Artificial Intelligence’s dreams look like

Spoiler alert: they’re as weird as electric sheep


Google’s artificial neural network has some explaining to do.
Michael Tyka/ Google

Google’s servers process much of the world’s data, and apparently they dream as well, according to a Google blog post by two Google software engineers and an intern.

Google’s artificial neural networks (ANNs) are stacked layers of artificial neurons (run on computers) used to process Google Images. To understand how computers dream, we first need to understand how they learn. In basic terms, Google’s programmers teach an ANN what a fork is by showing it millions of pictures of forks, each labeled as a fork. Each of the network’s 10-30 layers extracts progressively more complex information from the picture, from edges to shapes to, finally, the idea of a fork. Eventually, the neural network understands that a fork has a handle and two to four tines, and if there are any errors, the team corrects what the computer is misreading and tries again.
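The layer-by-layer idea above can be sketched in a few lines. This is a toy illustration, not Google’s system: the "first layer" is a hand-written edge filter and the "deeper layer" is a max-pooling step that summarizes those edges more abstractly, whereas a real network learns its filters from millions of labeled examples.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An 8x8 "image" of a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Layer 1: a vertical-edge filter responds where brightness changes
# from left to right.
edge_filter = np.array([[-1.0, 1.0]])
edges = convolve2d(image, edge_filter)

# Layer 2: 2x2 max pooling keeps only the strongest responses -- a
# coarser, more abstract summary of where the edges are.
trimmed = edges[:edges.shape[0] // 2 * 2, :edges.shape[1] // 2 * 2]
pooled = trimmed.reshape(trimmed.shape[0] // 2, 2,
                         trimmed.shape[1] // 2, 2).max(axis=(1, 3))

print(edges.shape, pooled.shape)  # each layer is smaller and more abstract
```

Deeper layers in a real network repeat this pattern many times, which is why they end up responding to whole shapes and objects rather than pixels.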

The Google team realized that the same process used to discern images could be used to generate images as well. The logic holds: if you know what a fork looks like, you can ostensibly draw a fork.
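Running the process in reverse can be sketched as gradient ascent on the image itself: instead of adjusting the network’s weights to fit an image, you adjust the image to raise the network’s score for a class. The "network" below is a hypothetical toy linear scorer whose weights stand in for the learned idea of a fork; DeepDream does the same ascent through a deep convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.standard_normal(64)   # stand-in for learned "fork" weights
image = np.zeros(64)                 # start from a blank image

for _ in range(100):
    score = template @ image         # class score for the current image
    grad = template                  # d(score)/d(image) for a linear scorer
    image += 0.1 * grad              # gradient ascent on the image, not the weights

# After ascent, the image is perfectly correlated with the class template:
cosine = (template @ image) / (np.linalg.norm(template) * np.linalg.norm(image))
print(round(cosine, 3))  # → 1.0
```

In the toy linear case the ascent converges straight to the template; in a deep network the same procedure produces the strange, composite images shown below.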


This is what Google’s neural network thinks animals and objects look like.
Michael Tyka/ Google

The experiment showed that even after seeing millions of photos, the computer couldn’t come up with a perfect Platonic form of an object. For instance, when asked to create a dumbbell, the computer depicted long, stringy arm-things stretching from the dumbbell shapes. Arms were often found in pictures of dumbbells, so the computer thought that sometimes dumbbells had arms.


Google’s artificial neural network’s take on what dumbbells look like.
Michael Tyka/ Google

This helped refine the company’s image-processing capabilities, but the Google team took it further, using the ANN to amplify patterns it saw in pictures. Each artificial neural layer works at a different level of abstraction: some pick up edges from tiny levels of contrast, while others find shapes and colors. The team ran this process to accentuate color and form, then told the network to go buck wild and keep accentuating anything it recognized. So if a cloud looked similar to a bird, the network would keep applying its idea of a bird in small iterations, over and over again.
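The amplification step can be sketched as follows: pick a layer, define the objective as that layer’s activation energy, and nudge the image up the objective’s gradient, so whatever the layer faintly detects gets exaggerated. This is a toy 1-D version with a hand-written edge "layer"; DeepDream backpropagates the same kind of objective through a trained convolutional network.

```python
import numpy as np

def layer(image):
    # First-difference "edge detector" serving as the chosen layer.
    return image[1:] - image[:-1]

def grad_of_energy(image):
    # Gradient of 0.5 * sum(layer(image)**2) with respect to the image.
    a = layer(image)
    g = np.zeros_like(image)
    g[1:] += a     # d a_i / d image_{i+1} = +1
    g[:-1] -= a    # d a_i / d image_i    = -1
    return g

image = np.zeros(16)
image[8] = 0.1     # a faint blip the layer barely "sees"

before = np.abs(layer(image)).max()
for _ in range(20):
    image += 0.5 * grad_of_energy(image)   # amplify what the layer responds to
after = np.abs(layer(image)).max()

print(before, "->", after)   # the faint pattern grows much stronger
```

Note that nothing caps the growth: each step strengthens the very features the layer already responds to, which is exactly the bird-in-a-cloud feedback the article describes.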


Oddly enough, the Google team found patterns. Rocks and trees often became buildings. Leaves often became birds and insects.


Google’s artificial neural network often found similar patterns in images of rocks or trees.
Michael Tyka/ Google

Researchers then fed the picture the network produced back in as the new picture to process, creating an iterative loop with a small zoom each time, and soon the network began to create an “endless stream of new impressions.” When started from white noise, the network would produce images purely of its own design. The team calls these images the neural network’s “dreams”: completely original representations of a computer’s mind, derived from real-world objects.
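The feedback loop can be sketched as: amplify, zoom in slightly, feed the result back in, starting from pure noise. Here `amplify()` is a hypothetical placeholder that just exaggerates contrast; in DeepDream it would be a full gradient-ascent pass through the network, and the zoom would act on a 2-D image rather than this 1-D signal.

```python
import numpy as np

rng = np.random.default_rng(42)

def amplify(img):
    # Placeholder enhancement: exaggerate deviations from the mean.
    return img + 0.3 * (img - img.mean())

def zoom(img, factor=1.05):
    # Resample the central portion back to full size (1-D for brevity).
    n = img.size
    coords = np.linspace(0, n - 1, n) / factor + (n - 1) * (1 - 1 / factor) / 2
    return np.interp(coords, np.arange(n), img)

frames = [rng.standard_normal(64)]       # start from white noise
for _ in range(30):
    frames.append(zoom(amplify(frames[-1])))

# Each frame is a new "impression" derived from the previous one.
print(len(frames), frames[-1].shape)
```

Stringing the frames together is what produces the endlessly zooming dream videos Google published.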


The product of an artificial neural network asked to amplify and pull patterns out of white noise.
Michael Tyka/ Google




Source: https://www.popsci.com
