Bayesian modeling of substantive biases for word order in an artificial language learning paradigm

A central hypothesis of generative linguistics is that typological universals arise from constraints on the grammars people can learn (e.g. Chomsky 1965; Baker 2001). Recent work suggests that artificial language learning (ALL) experiments with adults can provide direct behavioral evidence for substantive biases paralleling typological tendencies (e.g. Wilson 2006; Finley & Badecker 2008). In this paper, we develop a Bayesian model that formalizes and quantifies biases hypothesized to affect the learning of word order patterns in the nominal domain. Using data from an ALL experiment in which learners exposed to mixtures of grammars tended to shift those mixtures in certain directions rather than others, we show that learners' inferences are systematically affected by a set of prior biases. These biases are in line with a typological generalization, Greenberg's Universal 18. This test case illustrates how learners' internal biases impose properties on the grammars they learn, resulting in the kinds of crosslinguistic regularities known as typological universals.
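
To give a concrete sense of how a prior bias can reshape a learned mixture, consider a minimal sketch, not the full model developed here: assume a single binary word-order choice (e.g. whether the modifier precedes the noun), with theta the probability of one variant, k uses of that variant in n exposures, and a conjugate Beta(alpha, beta) prior standing in for the learner's bias. All symbols in this sketch are illustrative assumptions.

\[
p(\theta \mid k, n) \;\propto\; \underbrace{\theta^{k}(1-\theta)^{n-k}}_{\text{likelihood of the exposure data}} \;\times\; \underbrace{\theta^{\alpha-1}(1-\theta)^{\beta-1}}_{\mathrm{Beta}(\alpha,\beta)\ \text{prior}},
\qquad
\mathbb{E}[\theta \mid k, n] \;=\; \frac{k+\alpha}{n+\alpha+\beta}.
\]

When \(\alpha \neq \beta\), the posterior mean is pulled away from the training proportion \(k/n\) toward the variant the prior favors, so the learner's output mixture shifts in one direction rather than the other. Asymmetric priors of this general kind are one way to formalize and quantify the substantive biases at issue in this paper.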