Efficient simulation of finite automata by neural nets

Let <italic>K</italic>(<italic>m</italic>) denote the smallest number such that every <italic>m</italic>-state finite automaton can be realized as a neural net with <italic>K</italic>(<italic>m</italic>) or fewer neurons. A counting argument shows that <italic>K</italic>(<italic>m</italic>) is at least Ω((<italic>m</italic> log <italic>m</italic>)<supscrpt>1/3</supscrpt>), and an explicit construction shows that <italic>K</italic>(<italic>m</italic>) is at most <italic>O</italic>(<italic>m</italic><supscrpt>3/4</supscrpt>). Both the counting argument and the construction allow neural nets with arbitrarily complex local structure, and may therefore require neurons that are themselves complicated networks. Under mild constraints on the local structure of the network, constraints that are all but unavoidable in practice, a counting argument and a construction again give lower and upper bounds for <italic>K</italic>(<italic>m</italic>), and both bounds are linear in <italic>m</italic>.
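To make the shape of the lower bound concrete, here is a hedged back-of-the-envelope version of such a counting argument, assuming a two-letter input alphabet and neurons computing linear threshold functions; this illustrates the style of argument only, not the paper's exact proof:

```latex
% Machines to be simulated: each of the m states has two outgoing
% transitions, so there are on the order of
\[
  m^{2m} \;=\; 2^{\Theta(m \log m)}
\]
% distinct m-state automata. Nets available: one threshold neuron on K
% inputs realizes one of 2^{\Theta(K^2)} functions, so K such neurons
% realize at most
\[
  \bigl(2^{\Theta(K^2)}\bigr)^{K} \;=\; 2^{\Theta(K^3)}
\]
% distinct behaviors. Covering every machine therefore forces
\[
  2^{\Theta(K^3)} \;\ge\; 2^{\Theta(m \log m)}
  \quad\Longrightarrow\quad
  K \;=\; \Omega\bigl((m \log m)^{1/3}\bigr).
\]
```

The exponent 1/3 thus traces back to the cube in the count of K-neuron threshold nets.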
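For the linear bounds under local-structure constraints, a natural baseline to keep in mind is the one-neuron-per-state simulation: the current state is held in one-hot form, and a single layer of threshold gates computes the transition. The sketch below is a hypothetical illustration of that baseline (the function names and the two-letter alphabet are assumptions for the example, not the paper's construction):

```python
def make_threshold_net(delta, m):
    """Build 0/1 weights for an m-neuron threshold net simulating a DFA.

    delta[q][a] is the next state from state q on input symbol a in {0, 1}.
    W[a][j][q] = 1 iff delta[q][a] == j, so with a one-hot state vector,
    neuron j receives weighted input >= 1 exactly when the machine moves
    to state j on symbol a.
    """
    W = [[[0] * m for _ in range(m)] for _ in range(2)]
    for q in range(m):
        for a in range(2):
            W[a][delta[q][a]][q] = 1
    return W

def step(W, state, a):
    # Each neuron is a threshold gate: fire iff weighted sum >= 1.
    return [1 if sum(w * s for w, s in zip(W[a][j], state)) >= 1 else 0
            for j in range(len(state))]

def run(W, m, start, word):
    # One-hot encoding of the start state, then one net update per symbol.
    state = [1 if i == start else 0 for i in range(m)]
    for a in word:
        state = step(W, state, a)
    return state.index(1)
```

For example, with `delta = [[0, 1], [1, 0]]` (parity of 1s), `run` tracks the automaton exactly while using one neuron per state, i.e. exactly <italic>m</italic> neurons; the paper's point is that this linear count is essentially optimal once the network's local structure is constrained.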