How does learning take place in a connectionist account? Consider the knowledge that “George Washington was president.” To know this fact is to have a pattern of connections among the many nodes that together represent “Washington” and the many nodes that together represent “president.” Once these connections are in place, activation of either of these patterns will lead to activation of the other. Notice that knowledge of this sort is a potential rather than a state.
In network terms, “knowing” something corresponds to how activation will flow. This is different from “thinking about” something, which corresponds to which nodes are active at a particular moment, with no commitment about where that activation will spread next. “Learning,” then, must involve some sort of adjustment of the connections among nodes, so that after learning, activation will flow in a fashion that represents the newly gained knowledge or the newly acquired skill.

These adjustments must be governed entirely at the local level. In other words, the adjustment of connection weights (the strengths of the individual connections) must be controlled entirely by mechanisms in the immediate neighborhood of each connection and influenced only by information about what is happening in that neighborhood. Yet any bit of learning requires the adjustment of a great many connection weights: We need to adjust the connections, for example, so that the thousands of nodes representing “Washington” manage, together, to activate the thousands of nodes representing “president.” For learning to succeed, these many local adjustments must somehow end up coordinated with one another; there must be some coherence in the changes made across the network. How is this possible? Connectionists have offered a number of powerful computing schemes, called “learning algorithms,” that seek to accomplish learning within this setup.
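To make the knowing/thinking distinction concrete, here is a minimal sketch in Python. The network, node labels, and weight values are all invented for illustration; the point is only that the weight matrix (the “knowledge”) is a standing potential, while the activation vector (the “thought”) is a momentary state:

```python
import numpy as np

# "Knowing" is a pattern of connection weights: weights[i][j] is the
# strength of the link from node j into node i. These numbers are
# invented purely for illustration.
weights = np.array([
    [0.0, 0.8, 0.1],
    [0.8, 0.0, 0.7],
    [0.1, 0.7, 0.0],
])

# "Thinking about" is the current activation: at this moment, only
# node 0 is active.
activation = np.array([1.0, 0.0, 0.0])

# One step of spreading activation: the weights determine where the
# activation flows next.
next_activation = weights @ activation
print(next_activation)   # [0.  0.8 0.1] -- nodes 1 and 2 light up
```

Nothing in the weight matrix is “active”; it simply fixes how activation will spread whenever some pattern of nodes does become active.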
One type of learning algorithm, for example, is governed by whether neighboring nodes are activated at the same time. If two connected nodes are activated at the same time, the connection between them is strengthened. This gives the network a means of learning what goes with what in experience. Another type of learning relies on feedback: In this algorithm, nodes that led to an inappropriate response receive an error signal from some external source, and this signal causes a weakening of the connections from the nodes that led to the error. In addition, the node transmits the error signal to those same input nodes, so that they can make their own adjustments. (It is as if the node were saying to its inputs, “You misled me, so I will put less faith in you; and you, in turn, should put less faith in whoever misled you.”) In this fashion, the error signal is transmitted backward through the network, starting with the nodes that immediately triggered the incorrect response, with each node then passing the error signal back to the nodes that had caused it to fire. This process is called backpropagation.
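A minimal sketch of the first kind of rule, the coactivation-based (Hebbian) one, might look like the following. The learning rate and activation values are assumptions chosen for illustration; note that the update is purely local, as the text requires:

```python
import numpy as np

def hebbian_update(weights, activation, learning_rate=0.1):
    """Strengthen the connection between any pair of nodes that are
    active at the same time. Each weight change depends only on the
    activations of the two nodes that the connection joins."""
    change = learning_rate * np.outer(activation, activation)
    np.fill_diagonal(change, 0.0)   # no self-connections
    return weights + change

# Suppose nodes 0 and 2 fire together (invented activations):
weights = np.zeros((3, 3))
activation = np.array([1.0, 0.0, 1.0])
weights = hebbian_update(weights, activation)
print(weights[0, 2])   # 0.1 -- the link between nodes 0 and 2 grew
```

And here is a bare-bones sketch of the second kind of rule, backpropagation, for a one-hidden-layer network. The architecture, input, target, and learning rate are all invented for illustration; the key lines are the two “delta” computations, where each layer’s error signal is derived from the error signal of the layer it fed into:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(4, 2))  # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(1, 4))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.0])   # an input pattern (invented)
target = np.array([1.0])   # the response we wanted
lr = 0.5

for step in range(1000):
    # Forward pass: activation flows input -> hidden -> output.
    h = sigmoid(W1 @ x)
    y = sigmoid(W2 @ h)

    # Error signal at the output: "you had it wrong by this much."
    delta_out = (y - target) * y * (1 - y)

    # Pass the error signal backward: each hidden node's share of
    # the blame comes from the nodes it fed, via its connections.
    delta_hidden = (W2.T @ delta_out) * h * (1 - h)

    # Each node weakens the connections that misled it.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hidden, x)

print(float(y[0]))  # close to the target after training
```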
