Date of Award
Summer 8-11-2017
Level of Access Assigned by Author
Campus-Only Thesis
Degree Name
Master of Science in Teaching (MST)
Department
Teaching
Advisor
Daniel Capps
Second Committee Member
Jonathon Shemwell
Third Committee Member
Christopher Gerbi
Abstract
Models are central to the development of scientific knowledge and, because of this, have become important components of recent reform movements in science education. Although models are commonly used in science classrooms, it is much less common for students to learn about what models are. This may be in part because the science education literature has tended to focus on teaching students what models are not, leaving a gap in ideas for teaching what they are. One potential idea to teach about models is that they are abstractions. By this, I mean that models are ideas that are pulled away, or abstracted, from the particulars of what they represent. Even though the literature recognizes that models are abstractions, this idea has not been positioned as a learning objective. In this study, I investigated the degree to which a written assessment could accurately measure student understanding that models are abstractions. To do so, I used think-aloud interviews in which students talked out loud as they answered two written assessment items related to abstraction. I compared inferences about student knowledge made from the written assessment items to the cognitive output from the think-aloud interviews to see how accurately the items measured what students knew. The results showed that the items accurately gauged understanding for most students. Across the two items, there were four instances in which the items did not correctly measure student model knowledge. In three of these instances, the coding of items underestimated what students knew. The remaining instance of a mismatch between the coded item and the student's inferred understanding was more troublesome, as it overestimated the student's knowledge. This was most likely because the response was particularly difficult to code, so it was hard to know how often, if at all, this kind of coding error would occur in a larger sample. Overall, the items adequately assessed student knowledge of abstraction, though some improvements are possible. I conclude with a discussion of how the items could be improved to measure student knowledge of abstraction.
Recommended Citation
LaBarron, Derek C., "Investigating the Assessment of Abstraction as a Key Component of the Nature of Models" (2017). Electronic Theses and Dissertations. 2758.
https://digitalcommons.library.umaine.edu/etd/2758