Cracking the code: Building an assessment plan with student discussion boards
Abstract
Library instruction sessions offer students a chance to learn a variety of information literacy
skills and often give them a chance to apply these abilities with a librarian close by for
assistance. But how can the librarian be sure the tips and tricks being taught are retained beyond
the classroom? In the Fall of 2019, librarians at the University of Missouri-Kansas City
recognized an assessment gap in their library instruction program. Undergraduate students'
source evaluations were assessed after they completed the program's flipped-classroom
educational module but not after in-person instruction sessions: a pre-test without a post-test. In
an effort to measure the effectiveness of classroom instruction, librarians created an assessment
plan and tool to capture results post-instruction. Students were asked to respond to information
literacy questions about the sources they had found in a Canvas discussion board within 24
hours of receiving instruction.
regarding sources found. A total of 231 students reported 411 sources on the discussion board.
The posts were extracted from Canvas and imported into OpenRefine, where the data was
anonymized, organized, and cleaned. The data was then coded by the librarians using
Google Forms, replicating the assessment process for each source presented by the students,
both scholarly and popular. With this new data set, the librarians were able to create
visualizations and identify trends in the student responses. After analyzing the coded
information, librarians were then able to revise lesson plans to better meet the student
learning outcomes for undergraduate library instruction.