Rebecca Mogg (Cardiff University)
The Pathway to Success: Using Research Trails for Summative Assessment
Assumes IL is embedded in the curriculum.
Design assessments that evaluate both IL and what students are actually doing in their courses: the RESEARCH TRAIL.
Coursework submitted with the essay that documents the research process; a required component of the assignment. Includes evaluative reflection on why students included their sources. Can also be used to assess referencing (citations). How is this different from an annotated bibliography? Could it be used as an essay component of the bib?
Why do it?
Draws on higher order skills. Instructors choose which criteria from the IL outcomes students will address. Can also be used to assess writing and critical thinking outcomes, not just IL outcomes.
Supports both formative and summative assessment.
Suitable for all levels of learners. It depends what you ask students to evaluate (remember, we set the criteria).
Promotes problem solving: learn from what works and what doesn’t.
Strong emphasis on the assignment and rubric (see handouts). Students saw the rubric ahead of time. Built on what students already knew how to do. Her three sessions with students: the first on what they knew already (Google, library catalog), the second on more advanced database searching, the third on mirroring the process students would go through.
Learning objectives: finding and evaluating.
Students shared everything they did to find their sources:
What tool they used to find each source.
What keywords they used; whether they used Boolean operators or truncation.
Reasons for selection (relevance, objectivity, reliability, currency).
IDEA: Increase the emphasis on currency. Might be an interesting way to connect more to primary source materials; some things don't go out of date while other things go out of date almost immediately.
Provided extensive feedback in order to make the assessment summative AND formative, giving students something to take away from the process as well.
Her three evaluation questions:
Validity: how well did the assessment measure what it was meant to?
Reliability: did the marking consistently connect to the criteria?
Sufficiency: how much time did students spend on the assignment, and how much time did she spend assessing it, compared with what was learned from it?
Validity: concerns that students might falsify their approach in order to meet the criteria rather than document a genuine process.
Reliability: students did what was required and the rubric worked well. Reliability decreased as more people participated in the marking of papers; different assessors have different ideas about what meets or does not meet the criteria. (Also a concern when faculty take over the assessment.)
Sufficiency: an enormous amount of work on her part.
She does think it is successful. It assessed higher order skills and process rather than strictly product. Student feedback was not as positive. It helped her identify gaps in student understanding and use that information to develop her teaching strategies. The relevance has to be made clear to students.
By taking a journal approach, could you address the issue of students falsifying their process? Isn't this where faculty could be partners, creating benchmarks?
Very little discussion about faculty as partners or about benchmarks. Is that an essential component of success, especially for students who are not as advanced?
If you do use graduate assistants or professors, you must collaborate in order to make significant connections between the final work (the essay) and the research trail.
More on promoting problem solving: students continue applying what they've learned, but on their own, independent of the instruction.
At Cardiff, both a bottom-up and a top-down approach: IL is recognized at the university level, with a directive from the top, as well as within departments/schools.
To make the activity easier and equally sufficient: have students list a set number of references rather than everything they looked at.
How much help was provided: email and reference support, but no additional class time.