Linguistic Nativism and the Poverty of the Stimulus
Clark, Alexander
Lappin, Shalom
Contents:
- Preface
- 1 Introduction: Nativism in Linguistic Theory. 1.1 Historical Development. 1.2 The Rationalist-Empiricist Debate. 1.3 Nativism and Cognitive Modularity. 1.4 Connectionism, Nonmodularity, and Antinativism. 1.5 Adaptation and the Evolution of Natural Language. 1.6 Summary and Conclusions.
- 2 Clarifying the Argument from the Poverty of the Stimulus. 2.1 Formulating the APS. 2.2 Empiricist Learning versus Nativist Learning. 2.3 Our Version of the APS. 2.4 A Theory-Internal APS. 2.5 Evidence for the APS: Auxiliary Inversion as a Paradigm Case. 2.6 Debate on the PLD. 2.7 Learning Theory and Indispensable Data. 2.8 A Second Empirical Case: Anaphoric One. 2.9 Summary and Conclusions.
- 3 The Stimulus: Determining the Nature of Primary Linguistic Data. 3.1 Primary Linguistic Data. 3.2 Negative Evidence. 3.3 Semantic, Contextual, and Extralinguistic Evidence. 3.4 Prosodic Information. 3.5 Summary and Conclusions.
- 4 Learning in the Limit: The Gold Paradigm. 4.1 Formal Models of Language Acquisition. 4.2 Mathematical Models of Learnability. 4.3 The Gold Paradigm of Learnability. 4.4 Critique of the Positive-Evidence-Only APS in IIL. 4.5 Proper Positive Results. 4.6 Variants of the Gold Model. 4.7 Implications of Gold's Results for Linguistic Nativism. 4.8 Summary and Conclusions.
- 5 Probabilistic Learning Theory for Language Acquisition. 5.1 Chomsky's View of Statistical Learning. 5.2 Basic Assumptions of Statistical Learning Theory. 5.3 Learning Distributions. 5.4 Probabilistic Versions of the IIL Framework. 5.5 PAC Learning. 5.6 Consequences of PAC Learnability. 5.7 Problems with the Standard Model. 5.8 Summary and Conclusions.
- 6 A Formal Model of Indirect Negative Evidence. 6.1 Introduction. 6.2 From Low Probability to Ungrammaticality. 6.3 Modeling the DDA. 6.4 Applying the Functional Lower Bound. 6.5 Summary and Conclusions.
- 7 Computational Complexity and Efficient Learning. 7.1 Basic Concepts of Complexity. 7.2 Efficient Learning. 7.3 Negative Results. 7.4 Interpreting Hardness Results. 7.5 Summary and Conclusions.
- 8 Positive Results in Efficient Learning. 8.1 Regular Languages. 8.2 Distributional Methods. 8.3 Distributional Learning of Context-Free Languages. 8.4 Lattice-Based Formalisms. 8.5 Arguments against Distributional Learning. 8.6 Summary and Conclusions.
- 9 Grammar Induction through Implemented Machine Learning. 9.1 Supervised Learning. 9.2 Unsupervised Learning. 9.3 Summary and Conclusions.
- 10 Parameters in Linguistic Theory and Probabilistic Language Models. 10.1 Learnability of Parametric Models of Syntax. 10.2 UG Parameters and Language Variation. 10.3 Parameters in Probabilistic Language Models. 10.4 Inferring Constraints on Hypothesis Spaces with Hierarchical Bayesian Models. 10.5 Summary and Conclusions.
- 11 A Brief Look at Some Biological and Psychological Evidence. 11.1 Developmental Arguments
- ISBN: 978-1-4051-8784-8
- Publisher: Wiley-Blackwell
- Binding: Hardcover
- Pages: 264
- Publication Date: 07/01/2011
- Number of Volumes: 1
- Language: English