Handbook of Automated Essay Evaluation
Routledge (Publisher)
978-0-415-81096-8 (ISBN)
Highlights of the book’s coverage include:
The latest research on automated essay evaluation.
Descriptions of the major scoring engines including the E-rater®, the Intelligent Essay Assessor, the Intellimetric™ Engine, c-rater™, and LightSIDE.
Applications of the technology, including a large-scale system used in West Virginia.
A systematic framework for evaluating research and technological results.
Descriptions of AEE methods that can be replicated for languages other than English, as illustrated by an example from China.
Chapters from key researchers in the field.
The book opens with an introduction to AEE and a review of "best practices" in the teaching of writing, along with tips on using automated analysis in the classroom. It then highlights the capabilities and applications of several scoring engines, including E-rater®, the Intelligent Essay Assessor, the Intellimetric™ engine, c-rater™, and LightSIDE. Readers will find an account of an actual AEE deployment in West Virginia; discussions of psychometric issues related to AEE, such as validity, reliability, and scaling; and coverage of automated scoring as applied to detecting reader drift, grammatical errors, and discourse coherence quality, as well as the impact of human ratings on AEE systems. A review of the cognitive foundations underlying the methods used in AEE is also provided. The book concludes with a comparison of the various AEE systems and speculation about the future of the field in light of current educational policy.
Ideal for educators, professionals, curriculum specialists, and administrators responsible for developing writing programs or distance learning curricula, those who teach using AEE technologies, policy makers, and researchers in education, writing, psychometrics, cognitive psychology, and computational linguistics, this book also serves as a reference for graduate courses on automated essay evaluation taught in education, computer science, language, linguistics, and cognitive psychology.
Mark D. Shermis, Ph.D. is a professor at the University of Akron and the principal investigator of the Hewlett Foundation-funded Automated Scoring Assessment Prize (ASAP) program. He has published extensively on machine scoring and recently co-authored the textbook Classroom Assessment in Action with Francis DiVesta. Shermis is a fellow of the American Psychological Association (Division 5) and the American Educational Research Association. Jill Burstein, Ph.D. is a managing principal research scientist in Educational Testing Service's Research and Development Division. Her research interests include natural language processing, automated essay scoring and evaluation, educational technology, discourse and sentiment analysis, English language learning, and writing research. She holds 13 patents for natural language processing educational technology applications. Two of her inventions are e-rater®, an automated essay evaluation application, and Language Muse℠, an instructional authoring tool for teachers of English learners.
Carl Whithaus, Foreword.
M. D. Shermis, J. Burstein, S. A. Bursky, Introduction to Automated Essay Evaluation.
N. Elliot, A. Klobucar, Automated Essay Evaluation and the Teaching of Writing.
S. C. Weigle, ESL Writing and Automated Essay Evaluation.
J. Burstein, J. Tetreault, N. Madnani, The E-rater® Automated Essay Scoring System.
P. W. Foltz, L. A. Streeter, K. E. Lochbaum, T. K. Landauer, Implementation and Applications of the Intelligent Essay Assessor.
M. T. Schultz, The Intellimetric™ Automated Essay Scoring Engine – A Review and an Application to Chinese Essay Scoring.
C. S. Rich, M. C. Schneider, J. M. D'Brot, Applications of Automated Essay Evaluation in West Virginia.
E. Mayfield, C. Penstein Rosé, LightSIDE: Open Source Machine Learning for Text.
C. Brew, C. Leacock, c-rater: Automated Short Answer Scoring at Educational Testing Service.
D. M. Williamson, Probable Cause: Developing Warrants for Automated Scoring of Essays.
Y. Attali, Validity and Reliability of Automated Essay Scoring.
K. L. K. Koskey, M. D. Shermis, Scaling and Norming for Automated Essay Scoring.
B. Bridgeman, Human Ratings and AEE.
S. M. Lottridge, E. M. Schulz, H. C. Mitzel, Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring.
M. Gamon, M. Chodorow, C. Leacock, J. Tetreault, Grammatical Error Detection in Automatic Essay Scoring and Feedback.
J. Burstein, J. Tetreault, M. Chodorow, D. Blanchard, S. Andreyev, Automated Evaluation of Discourse Coherence Quality in Essay Writing.
J. Burstein, B. Beigman-Klebanov, N. Madnani, A. Faulkner, Automated Sentiment Analysis for Essay Evaluation.
P. Deane, Covering the Construct: An Approach to Automated Essay Scoring Motivated by a Socio-cognitive Framework for Defining Literacy Skills.
M. D. Shermis, B. Hamner, Contrasting State-of-the-Art Automated Scoring of Essays.
K. Hakuta, The Policy Turn in Current Education Reform: The Common Core State Standards and Its Linguistic Challenges and Opportunities.
| Additional information | 62 tables, black and white; 86 line drawings, black and white; 28 halftones, black and white; 54 illustrations, black and white |
|---|---|
| Place of publication | London |
| Language | English |
| Dimensions | 178 x 254 mm |
| Weight | 710 g |
| Subject areas | Textbook / Dictionary ► Dictionary / Foreign languages; Humanities ► Psychology ► General psychology; Humanities ► Psychology ► Psychological testing; Humanities ► Language / Literature ► Linguistics; Computer science ► Theory / Studies ► Artificial intelligence / Robotics; Law / Taxes ► Private law / Civil law ► IT law; Social sciences ► Education |
| ISBN-10 | 0-415-81096-5 / 0415810965 |
| ISBN-13 | 978-0-415-81096-8 / 9780415810968 |
| Condition | New |