Category Archives: Assessment

Assessing student contributions to a wiki

The article Towards a Process for K-12 Students as Content Producers by John Concilus has some great ideas. I really like the way he refers back to well-founded research from an earlier era (e.g. on process writing and writing for an authentic audience) while discussing the impact of new technology. John’s in-depth article raises lots of interesting issues and explores how tools such as a wiki can be used in student learning without lapsing into over-simplistic promotion of the tool.

One innovation he describes that raises some fascinating issues is the WikiDashboard, which provides a way to analyse individual contributions to a collaborative wiki. It shows a list of the users who have contributed to a page, along with quantitative data about how much each user has contributed and when they did so.

[Image: WikiDashboard screenshot]

While this tool provides fascinating information on who has edited a wiki article, I have some strong reservations about its use as an assessment tool. My main concern is that it makes contributions easier to quantify but offers no real insight into their quality. The danger here is related to the assessment dilemma – we tend to assess the things that are easy to measure, but these are often less important than the things that are harder to measure.

If we want to assess educational outcomes such as higher order thinking, analysis and critical thinking, we need to assess qualitative evidence. While the WikiDashboard is a great tool that can help an assessor find qualitative evidence, the data it provides is not in itself such evidence. It can help us find who wrote what content on a collaborative wiki, but we still need to assess each person’s contribution qualitatively and avoid any tendency to use its percentages to allocate grades.
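
To make the distinction concrete, here is a minimal sketch in Python (using a made-up list of revision records, not WikiDashboard’s actual data or API) of how cheaply such contribution percentages can be tallied:

    from collections import Counter

    # Hypothetical revision records for one wiki page: (author, characters added).
    # In practice these would come from the wiki's revision history.
    revisions = [
        ("alice", 1200),
        ("bob", 300),
        ("alice", 450),
        ("carol", 50),
    ]

    # Tally the characters each author has added.
    totals = Counter()
    for author, chars_added in revisions:
        totals[author] += chars_added

    grand_total = sum(totals.values())

    # The percentages are trivial to compute, but they say nothing about
    # whether a contribution showed analysis, synthesis or critical thinking.
    for author, chars in totals.most_common():
        print(f"{author}: {chars / grand_total:.0%} of characters added")

The numbers come cheaply; judging whether those characters represent genuine analysis or critical thinking still means reading them.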

Assessing student collaboration using a wiki

[Image: wiki matrix]

In Wiki experiences in the classroom, Megan Poore provides an interesting anecdotal summary of one teacher’s experience with using a wiki as a tool for student collaboration on an assessment task.

One problem experienced was that students didn’t really collaborate: they tended to work in isolation from each other even though they were instructed to work in pairs. As the article puts it: ‘The idea was to have students use the medium as it’s meant to be used: as a collaboration space for students to build their understanding of the topic. But the fact that students did their work separately from each other, in Word, annulled the value of the wiki as a collaborative tool.’

Admittedly, we don’t know the full story behind this report. But I believe this issue sometimes arises because of an assumption that the wiki is by nature collaborative. In fact, collaboration is an affordance of wikis rather than an inherent attribute: an affordance provides the potential and possibility for collaboration, but it does not occur automatically.

Students will choose their own ways of learning whatever the chosen tool. So if how they complete an assessment task is important, we will need to set requirements around this. It is relatively easy to establish a requirement to use a wiki: we just need to specify the wiki as the format in which the task is submitted. However, if we want to set a requirement for collaboration, we will need to include that in the assessment criteria. And since collaboration is part of the learning process, we will also need to be clear about what evidence will be available so that we can assess it.

I enjoy using wikis as a learning activity and I like to incorporate their use into assessment where appropriate. But assessing student collaboration using a wiki can be complex and requires some major rethinking of the assessment processes.

Assessing what’s important in online discussion

One of the problems with assessment is common to all forms of measurement: “what is easy to measure may not be important, what is important is often hard to measure.” This is true of online assessment as well as more traditional forms.

[Image: The assessment dilemma]

This dilemma can cause assessment developers to assess what is less important: they focus on easily quantifiable criteria or standards but neglect to include more important criteria because they are not so easily quantified or measured.

In assessment of online discussion, we see the effects of this dilemma when assessment criteria call for a quantifiable contribution from learners: ‘At least 3 contributions to the forum’ or ‘One forum thread started and responses to at least three others’. This can lead to lightweight discussion as learners post superficial responses to achieve their tally. And this of course is contagious – every lightweight posting says to everyone else: ‘Look, this is all you need to do’. Even worse, genuinely thoughtful postings can get lost in the tide of mediocre ones.

To avoid this, we need to focus on what really is important: usually, what we’d like to see each learner achieve is something like ‘Make a significant contribution to the development of ideas through the discussion forum’. But since that is clearly more subjective and harder to measure, we need to spell out how this might occur and what it might look like:

  • Contributes original ideas which are relevant and well-developed
  • Provides significant insights into the existing body of literature
  • Helps synthesise ideas and concepts
  • Contributes detailed and relevant examples from their own practice
  • Helps others to explore ideas in depth

Of course, such a list is still relatively subjective – but it does start to establish what is important to me as an educator. And since I’d rather see one ‘significant’ posting than ten which don’t really add anything substantial, I prefer to avoid the use of quantitative measures when developing rubrics and other assessment tools.