Complexity of Connectionist Learning with Various Node Functions

We formalize a notion of learning in connectionist networks that characterizes the training of feed-forward networks. Considering different families of node functions, we prove the learning problem NP-complete and thus demonstrate that, assuming P ≠ NP, it has no efficient general solution. One family of node functions studied is the set of logistic-linear functions, as used by the popular back-propagation algorithm. Several implications of the theorem are discussed, including why this result is actually helpful for connectionist learning research.