Neural Control Within the BMFT-Project NERES

Whereas the identification and control of linear systems are well understood, the same does not hold in general for nonlinear systems. Here, neural nets open up new paths for the treatment of multidimensional nonlinear systems, as well as the possibility of adaptive readjustment to changes in the environment and in the system parameters. The advantages of neural control are of particular value for robotics. On the subsymbolic level, the goal is a symbiosis between sensors and actuators on the one hand and neural signal processing and control on the other. However, we do intend to use traditional AI techniques in cases where a robust knowledge representation beyond the subsymbolic level is required, e.g. for spatial representation. In many applications, the problem is to extract significant control parameters from visual sensor data in a robust and efficient manner, a task for which neural nets are particularly well suited. Mathematical models of machine learning as well as unifying dynamical concepts will be used to obtain quantitative, generalizable results on the efficiency of neural nets, taking into account the real-world requirements of control tasks with respect to performance, reliability, and fault tolerance. Speech is of special significance for the dialogue with autonomous systems. Since neural nets have led to encouraging results in speech processing, corresponding techniques will also be applied in robotics.
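
To make the identification task mentioned above concrete, the following minimal Python sketch trains a small feedforward net to identify a nonlinear plant from input-output data in a series-parallel fashion. The plant function, network size, learning rate, and number of training epochs are illustrative assumptions and are not taken from the project itself.

# Minimal sketch (assumptions noted above): neural-net identification of a
# nonlinear plant y(k+1) = f(y(k), u(k)) from observed input-output data.
import numpy as np

rng = np.random.default_rng(0)

def plant(y, u):
    # hypothetical nonlinear plant, unknown to the identifier
    return y / (1.0 + y**2) + u**3

# generate input-output data by exciting the plant with random inputs
N = 2000
u = rng.uniform(-1.0, 1.0, N)
y = np.zeros(N + 1)
for k in range(N):
    y[k + 1] = plant(y[k], u[k])

X = np.stack([y[:N], u], axis=1)        # network input: (y(k), u(k))
T = y[1:N + 1].reshape(-1, 1)           # target: y(k+1)

# one-hidden-layer net, trained by plain gradient descent on squared error
H, lr = 20, 0.05
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

for epoch in range(500):
    A = np.tanh(X @ W1 + b1)            # hidden-layer activations
    Y = A @ W2 + b2                     # predicted y(k+1)
    E = Y - T
    # backpropagation of the mean squared identification error
    dW2 = A.T @ E / N; db2 = E.mean(0)
    dA = (E @ W2.T) * (1 - A**2)
    dW1 = X.T @ dA / N; db1 = dA.mean(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final identification error (MSE):", float(np.mean(E**2)))

The trained net can then serve as a forward model of the plant, e.g. inside a neural controller or for gradient-based control parameter tuning; such uses go beyond this sketch.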
