Wednesday, March 3 • 12:00pm - 12:15pm
Talk Session 5: Comparing accuracy of automated scoring models for constructed response assessment across institutional types


Constructed response (CR) questions on core concepts can elicit multiple student ideas and reveal mixed thinking; however, they are infrequently used due to the time and effort required for evaluation. Computer scoring models (CSMs) have been used across disciplines, including biology, chemistry, physics, and statistics, to examine CRs and overcome these burdens, making CR usage more accessible. Previous research revealed that CSMs for CRs can perform unequally across different institutions. As most responses used to develop CSMs (the training set) come from research-intensive colleges and universities (RICUs), we examined the effectiveness of seven CSMs on CRs from RICUs, two-year colleges (TYCs), and primarily undergraduate institutions (PUIs). A human scorer and the CSMs categorized 444 new CRs (the testing set) for seven ideas relating to the transformation of matter and energy during human weight loss. Five CSMs maintained high agreement with human scores (Cohen's Kappa > 0.80), while two had reduced agreement (Cohen's Kappa < 0.65). Qualitative examination of miscodes from these two CSMs revealed that rare and vague language contributed to the majority of incorrectly categorized CRs. Comparing among institutional types, CRs from RICUs had more miscodes overall; however, these CRs also contained more ideas, predisposing them to increased error. Accounting for this, there were no significant differences in the number of miscodes for individual CSMs or overall (p > 0.23). These data support the utility of CSMs across institutional types and diverse responses. Further, they provide insight into potential sources of reduced performance in CSMs, which will facilitate development of CSMs for complex topics across STEM core concepts.
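The agreement statistic used above, Cohen's Kappa, corrects raw percent agreement between two raters for the agreement expected by chance. A minimal sketch of the computation in plain Python, using made-up binary presence/absence codes (not the study's data), where each rating marks whether a given idea was detected in a response:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the marginal label rates."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of responses coded identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b[label] for label in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes: 1 = idea present, 0 = absent
human = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
model = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(cohen_kappa(human, model), 2))  # → 0.58
```

Here 8 of 10 codes match (p_o = 0.80), but because both raters mark "present" often, chance alone predicts p_e = 0.52, so kappa lands well below the raw agreement. Thresholds like the abstract's 0.80 are conventionally read as "almost perfect" agreement.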

Speakers
Megan Shiroda

Post-Doctoral Researcher, Michigan State University
I am a post-doctoral researcher at Michigan State University. I completed my PhD in Microbiology in 2019. My work in discipline-based education research (DBER) has focused on understanding student thinking through constructed response assessments.


Wednesday March 3, 2021 12:00pm - 12:15pm CST
Zoom