Looks like we have a definitive yes, yes they do:
Here's a nice photo gallery. Very trippy.
I haven't found a video showing the endless stream of visuals produced by repeatedly feeding the DNN's output back into itself, but I'm sure one will be posted in a few days. I'm also curious whether a DNN trained on speech recognition could "hear" voices or even entire phrases in white noise.
From an engineering point of view, it's fascinating that DNN implementations "see" details in images that aren't there, like their own flavor of optical illusions or pareidolia :)
Quote:
Originally Posted by author
We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the output layer is reached. The network's answer comes from this final output layer.
One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer might look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations: these neurons activate in response to very complex things such as entire buildings or trees. One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in "Banana". Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana (see related work in [1], [2], [3], [4]). By itself, that doesn't work very well, but it does if we impose a prior constraint that the image should have similar statistics to natural images, such as neighboring pixels needing to be correlated. [...] If we apply the algorithm iteratively on its own outputs and apply some zooming after each iteration, we get an endless stream of new impressions, exploring the set of things the network knows about. We can even start this process from a random-noise image, so that the result becomes purely the result of the neural network, as seen in the following images.
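For anyone who wants to tinker with this, here's a minimal sketch of the technique the quote describes: gradient ascent on the input image to excite a chosen layer, with a small crop-and-resize "zoom" between iterations. I'm assuming PyTorch and a pretrained GoogLeNet from torchvision; the layer choice (inception4c), step size, and zoom factor are my own guesses, not whatever settings the authors actually used.

Code:
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

# Pretrained GoogLeNet stands in for the blog's classifier (an assumption).
model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # we only want gradients w.r.t. the image

# Grab the activations of one intermediate layer with a forward hook.
acts = {}
model.inception4c.register_forward_hook(
    lambda mod, inp, out: acts.update(feat=out))

def dream_step(img, n_iters=20, lr=0.05):
    """Gradient ascent on the image to excite the hooked layer."""
    img = img.clone().requires_grad_(True)
    for _ in range(n_iters):
        model(img)
        acts["feat"].norm().backward()  # "enhance whatever you see"
        with torch.no_grad():
            # Normalized gradient step, then keep pixels in a valid range.
            img += lr * img.grad / (img.grad.abs().mean() + 1e-8)
            img.clamp_(0, 1)
            img.grad.zero_()
    return img.detach()

# Start from pure noise and iterate: enhance, zoom a little, repeat.
img = torch.rand(1, 3, 224, 224)
frames = []
for _ in range(10):
    img = dream_step(img)
    h, w = img.shape[-2:]
    img = TF.resized_crop(img, top=h // 20, left=w // 20,
                          height=int(h * 0.9), width=int(w * 0.9),
                          size=[h, w])  # roughly 10% zoom per iteration
    frames.append(img)

Feeding each output back in as the next input is exactly the "endless stream" loop from the quote; dumping the collected frames to a video encoder would produce the sort of animation I was hoping to find.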
via International Skeptics Forum http://ift.tt/1SvDZTV