Brown, Gary, Tamara Smith, and Tom Henderson. "Student Perceptions of Assessment Efficacy in Online and Blended Learning Classes." Blended Learning: Research Perspectives. Eds. Anthony G. Picciano and Charles D. Dziuban. Needham, MA: Sloan-C, 2007. 145-160. Print.

In brief, this study surveyed students of varying ages and levels of learning experience who took online and blended courses, in order to compare their perceptions of whether the ways they were assessed (i.e., "graded") in a course were a meaningful measure of what they actually learned.
The authors follow the relevant literature on "social" learning in their presumption that collaborative, social learning is deeper and more meaningful because it enables students to participate in academic "conversations" in a subject area, rather than just regurgitate material on a test or in a perfunctory paper. The study distinguishes between "school" activities, those involving interaction between the individual student and instructor (think multiple-choice exams and traditional term papers), and "community" activities, those involving peer interaction and assessment (think peer discussions and collaborative activities in which peers evaluate each other's work). Predictably, perhaps, the study confirms that younger and less experienced students prefer traditional "school" work, while older and more experienced students are more likely to see "community" activity assessment as more reflective of learning.
Given the relatively homogeneous student population at Augie, perhaps a predictable implication is that younger students are less ready for online collaboration in blended classes than older students. On the other hand, the authors argue that their findings aren't just reflective of developmental learning theories, but also imply that younger students are less receptive because they have been less exposed to richer learning activities beyond conventional classroom pedagogies. They argue that if more students are exposed to collaborative "community" pedagogies sooner and more frequently, they'll be more receptive to them. Testing that assertion will require more research, but it's provocative, for sure.
Less provocative research summary after the jump.
Chap. 7: “Student Perceptions of Assessment Efficacy in Online and Blended Learning Classes” – Brown, Gary, Tamara Smith, and Tom Henderson
• Changing undergraduate student demographics (especially their older age, comfort with technology, and “place-bound or time-constrained” circumstances) are a factor leading them to “blended or asynchronous courses” (145)
• “the growing complexity of global society” makes “the need for lifelong learning” imperative, and thus we need to consider how students learn and pursue pedagogies that “provide students with deep, authentic, and responsive learning opportunities” (145)
• This study “examines students’ perceptions of the efficacy in different learning contexts, focusing particularly on different ways students are assessed or graded in online and in blended courses” (145), as students’ understanding of assessment shapes how they perceive learning activities and how those activities should be performed; Purpose: to determine how assessment in online and blended learning environments may be conducted so as to help students understand and learn from their expectations and experiences in learning
Review of Literature
• Students’ perception of assessment efficacy (i.e., whether it “accurately reflects what they understand”, 146) “assumes some degree of self-knowledge or metacognition as well as some understanding of the way knowledge is structured within a domain” (146); the more experience a student has, the more sophisticated their thinking in these areas; ultimately, students need to be able to “effectively monitor and assess their own and each others’ learning” (146) and become more independent learners
• Engaged, meaningful learning is “social”; it takes place in a “community” beyond the student/teacher relationship (146)
• So, we need to examine both “student perception of and engagement in their own and each others’ learning, and the efficacy of the assessments we use” (147)
• “social presence” is “a sense of belonging to a group related to online interaction”; students’ perception of social presence “associates with significantly better performance on essay examinations”, but not with improvement in a “multiple choice examination” (146), suggesting that different kinds of assessment capture, and value, different kinds of learning
• If students perceive value incentives to improve, they perform better – which suggests that perceptions of relevant value affect perceptions of assessment efficacy [I’m not sure this leap is entirely warranted]
• “social interaction” is increasingly being viewed as “an essential requisite for effective online learning” (147), vs. “pseudocommunicative” or “school” tasks that are solely between student and teacher (147); the latter kinds of tasks may not always associate with the “essential [learning] attributes . . . gained from collaborative or ‘community’ interactions” (147) that are actually deemed important learning outcomes
• “The findings suggest, first, that traditional tests and student evaluations are not particularly sensitive measures for assessing what educators value most and, second and not incidentally, that even novice learners in socially rich collaborative . . . learning contexts are capable of deeper, more meaningful learning when the measures used to assess that learning are appropriately calibrated” (148)… and “traditional entry level students . . . may not have been adequately prepared to recognize” the distinctions between pedagogies and their implications (148)
• Hypothesis: “[S]tudents’ views of socially isolated forced-choice assessment measures . . . usually associated with ‘school’ tasks will be perceived with diminishing value as learners gain cognitive maturity” (148); the hypothesis is tested for “students in predominantly online environments and . . . across different populations” (148)
Method
• Survey data over a two-year period from Washington State University’s Center for Teaching, Learning, and Technology; surveys examined “faculty and student learning goals, activities, and practices (GAPs)” (148)
- Instructor survey: course goals, activities/practices, and assessment of student work
- Student survey: questions based on Chickering’s Seven Principles for Good Practice in Undergraduate Education, and questions on “the alignment of goals, perceived efficacy of teaching and learning practices, and teaching goals” (149)
- Data from a convenience sample of blended and fully online courses at WSU
- GAPs survey is a mid-term formative assessment tool for faculty
• “Analysis of the qualitative responses was correlated with students’ age, sex and race, as well as the number of college courses students had taken since high school” (149); responses coded into assessment tool subcategories (e.g., exam types, projects, term papers, discussions, etc.), then into two broader categories: “school tasks [instructor is sole audience for work] or community learning. . . . [which] reflects some degree of participation or review of the student work . . . by peers or other professionals or paraprofessionals” (150); “Percentages in each category were compared with results for age [18-20, 21-23, 24 and over], sex, and number of college course [sic] taken” (150); analysis of sex results not included in the present report – a toy version of this coding-and-comparison step is sketched below
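To make that coding step concrete, here is a minimal, purely illustrative Python sketch of how responses might be rolled up from assessment subcategories into the two broad categories and compared by age band. The chapter doesn't publish its analysis code; the DataFrame, column names, and category lists below are all invented for illustration.

```python
import pandas as pd

# Toy stand-in for the coded survey responses (the real data is described on pp. 149-150)
responses = pd.DataFrame({
    "age_band": ["18-20", "18-20", "21-23", "21-23", "24 and over", "24 and over"],
    "assessment": ["multiple choice", "term paper", "peer discussion",
                   "multiple choice", "peer assessment", "peer discussion"],
})

# Subcategories whose sole audience is the instructor count as "school";
# anything reviewed by peers or other professionals counts as "community" (150)
school_tasks = {"multiple choice", "term paper", "essay exam", "homework"}
responses["category"] = responses["assessment"].map(
    lambda a: "school" if a in school_tasks else "community"
)

# Percentage in each broad category per age band, mirroring the comparison on p. 150
print(pd.crosstab(responses["age_band"], responses["category"], normalize="index"))
```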
Results
Response Rates
• Most student respondents were female, and there was significant participation from distance learning students, “who tend to be older, with more web access, [and therefore] are more likely to respond” (151)
Perceptions of Assessment Efficacy
1. Less experienced learners were more confident in all methods of assessment in both blended and online courses
2. “Novice learners . . . reported that ‘school’ activities, predominantly multiple choice exams and customary term papers, better reflected their learning than did ‘community’ assessment activities” (151)
3. Novice learners more likely to prefer multiple-choice questions as the best reflection of learning
4. Novice learners also more confident in other individualized “school” activities – “essays . . . simulations, homework, and term papers” (151-152)
5. “[M]ore experienced or older students reported that ‘community’ assessment activities better reflected their learning than did ‘school’ assessment activities” (152)
6. More experienced students reported “activities such as peer assessments and peer (threaded) discussions . . . as significantly more efficacious assessments” (152)
7. The “relationship between age and the ‘community/school’ variable” is significant; “For every increase in age category, there is an approximately 50% chance reduction in preference for ‘school’” (152)
8. The “relationship between ‘course experiences’ and the ‘community/school’ variable” is significant; “For every categorical increase . . . there is an approximate 40% increase in preference for ‘community’” (152) – see the worked reading of points 7 and 8 after this list
9. More experience leads to greater likelihood to “report that peer assessment was a viable assessment technique” (152)
10. These findings are consistent over time, “though . . . slightly less pronounced [for the course experiences variable]” (152)
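Points 7 and 8 read like odds-ratio effects from a logistic-style model, though the chapter doesn't spell the model out. As a purely hypothetical illustration, here's what a ~50% reduction in the odds of preferring "school" per age category would do to a made-up 70% baseline preference (point 8's ~40% "community" increase would be the same computation with a ratio of roughly 1.4):

```python
def shift_odds(p, odds_ratio):
    """Convert probability p to odds, apply a multiplicative effect, convert back."""
    odds = p / (1 - p)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# Hypothetical baseline: 70% of the youngest band prefers "school" assessment
p = 0.70
for age_band in ["18-20", "21-23", "24 and over"]:
    print(f"{age_band}: {p:.0%} prefer 'school' tasks")
    p = shift_odds(p, 0.5)  # ~50% cut in the odds at each age step (152)
```

On those made-up numbers, preference would fall from 70% to roughly 54% and then 37% across the three age bands – the same direction, though not necessarily the same magnitudes, as the study reports.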
Discussion
• Combining the data for blended and online populations might be a limitation, but the purpose here was to focus on “underlying learning constructs necessary for the effective implementation of efficacious learning experiences” (155) – here, how students perceive “different grading strategies” (155)
• The primary hypothesis is supported – as students age and/or gain experience, preference for “school” activities (which involve only individual student/instructor interaction) declines in favor of “community” activities (which involve peer interaction and assessment)
• “‘[S]chool’ is not synonymous with learning” (156)
• Traditional assessment tools like “[m]ultiple-choice tests, term papers and even essay writing . . . [are] often viewed as pseudocommunicative” (156), but learning understood as the capacity to participate in a social “conversation” in a knowledge area improves significantly with learner age and experience… and “school” assessments don’t get at that level of learning; “Mature learners recognize a difference between grading and learning”, and don’t get caught up in matters such as “objective evaluation” (156)
• These “findings are consistent with developmental theories”, but the authors caution that the key is not just age, but “exposure to more sophisticated learning opportunities” (157)
• As humans, especially young ones, are social beings, they may well be “predisposed to have greater appreciation of ‘community learning’ opportunities than the results of this study suggest”; the problem may be that “students have been habituated in ways that have delayed their maturity as learners” (157) – so, “capitalizing on technologies creates a potent path for [providing opportunities for community learning as a required component of student education, and] for engaging even novice learners into the larger conversation” (158)
• The observation from students (and skeptics) that peer assessment is problematic and needs instructor guidance is a fair point, and it underscores the need for instructors to develop collaborative environments effectively in order to leverage technology in ways that deepen genuine, sophisticated learning, especially as online and blended learning opportunities escalate