Friday, November 20, 2015

Without dodging the question by giving links, what's the math of how minds work?

I'm an AI researcher and would like to talk about the math of how minds work, but this rarely happens, because most people who know something about it just give links to books or large systems and can't explain how they work.

The basics of how minds work are similar to a neural net, the simplest kind of which is a Hopfield net, so I'll start there. A Hopfield net is a bunch of nodes/variables, each on or off at any one time, and certain pairs of them try to be on (or not be on) at the same time, in proportion to the weight between them. For example, if nodes x and y have a weight of -1 between them, then x can be on with y off, y can be on with x off, or both can be off, without paying any cost; but if both are on, that cost is paid, so that outcome becomes less likely unless other combinations of nodes make up for it, combinations that can only happen when certain costs are accepted.

This generalizes to continuous numbers and is most closely related to the Subset Sum kind of NP-complete math: whether a node is on or off (or somewhere in between, with sigmoid math) depends on the sum of a subset of the other nodes being on or off. Neural nets can learn combinations of permutations because a sum does not depend on the order in which things are summed. In layered neural nets, each layer is another recursive depth of permutation that can be learned. Such permutations can apply to some of the nodes or to all of them together. A real-world example of such a permutation is seeing a tree as you walk by on the sidewalk: where the tree appears in your vision, its angle, and its size change continuously, yet from one moment to the next you still recognize it as the same tree.
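To make the x/y example concrete, here is a minimal sketch of a two-node Hopfield-style net in Python. It is only an illustration of the idea above, not anyone's reference implementation: the 0/1 states, the zero threshold, and the random asynchronous update order are my own assumptions for the sketch, and it assumes NumPy is available.

import numpy as np

rng = np.random.default_rng(0)

def cost(state, weights):
    # Cost paid by the current on/off pattern: for each connected pair that is
    # on at the same time, the (negated) weight between them contributes.
    return -0.5 * state @ weights @ state

def settle(state, weights, steps=100):
    # Asynchronous updates: a node turns on only when the summed weights from
    # the nodes currently on is non-negative -- the Subset-Sum-like decision.
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        signal = weights[i] @ state - weights[i, i] * state[i]  # exclude self
        state[i] = 1 if signal >= 0 else 0
    return state

# The x/y example from the text: two nodes with a weight of -1 between them.
w = np.array([[0.0, -1.0],
              [-1.0, 0.0]])
s = np.array([1, 1])            # both on
print(cost(s, w))               # 1.0 -> the cost is paid
s = settle(s, w)
print(s, cost(s, w))            # one node turns off, cost drops to 0.0

Running this, the net settles into one of the zero-cost patterns ([1, 0] or [0, 1]), which is the "both on is discouraged" behavior described above; replacing the hard threshold with sigmoid math on the same sum-of-a-subset gives the continuous version mentioned in the text.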


via International Skeptics Forum http://ift.tt/1laedZH
