Summary form only given, as follows. The authors propose that the back-propagation algorithm for supervised learning can be generalized, put on a satisfactory conceptual footing, and very likely made more efficient by defining the values of the output and input neurons as probabilities and by varying the synaptic weights in the gradient direction of the log likelihood rather than of the error.
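To make the contrast concrete, consider a single sigmoid output unit whose activation is read as a probability; this is a minimal sketch of the idea under standard assumptions, not the authors' exact formulation. With output $\hat{y} = \sigma(w^\top x)$ interpreted as $P(y = 1 \mid x)$, the log likelihood of a target $y \in \{0, 1\}$ is

$$\log L(w) = y \log \hat{y} + (1 - y) \log(1 - \hat{y}),$$

and its gradient with respect to the weights is

$$\frac{\partial \log L}{\partial w} = (y - \hat{y})\, x,$$

whereas the gradient of the negated squared error, $\frac{\partial}{\partial w}\left[-\tfrac{1}{2}(y - \hat{y})^2\right] = (y - \hat{y})\,\sigma'(w^\top x)\, x$, carries an extra $\sigma'$ factor that vanishes when the unit saturates. This is one plausible sense in which following the likelihood gradient rather than the error gradient can be more efficient.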
| Field | Value |
| --- | --- |
| Original language | English (US) |
| Number of pages | 1 |
| State | Published - 1987 |