Tag Archives: learning

Evaluating online community activities

The purpose of evaluation

A key question for anyone managing or facilitating an online community is how to make it sustainable. Sustainable communities need to maintain (and grow) an active and engaged membership. Structured online events or activities can play a very important role in engaging community members and ensuring their regular and active involvement. However, these events must be effective – badly planned and/or facilitated events can turn members off and lead to the failure of the community.

The facilitator’s overall impression of whether an event is effective is useful. But without a more rigorous evaluation, we can miss underlying issues that have the potential to damage members’ ongoing engagement in the community. So some kind of evaluation process is vital to the community’s ongoing success and sustainability.

This article focuses on evaluating community events: not to check that they meet a minimum standard, but as a way of engaging in a process of ‘continuous improvement’.

The evaluation process

Many educational and management approaches employ cyclical models involving reflection or evaluation – eg Kolb’s experiential learning cycle and the action research cycle. The PDSA Cycle shown here represents the stages of Deming’s approach to quality improvement in business, but can be adapted to provide a model useful in relation to online community events and activities:

  1. PLAN: plan the community event
  2. DO: facilitate the community event
  3. STUDY: evaluate the event
  4. ACT: feed the evaluation results back into further community development

Carrying out the evaluation

Most of the evaluation will take place as part of the STUDY phase. But during the event (the DO phase) the facilitator should keep notes on what’s going well and what isn’t. If it’s not too big a group, it can also be really helpful to keep notes on the level of engagement of each member – not as a form of assessment, but as data that may be useful later.

During the STUDY phase:

Decide how you will gather feedback

Without its members, the community does not exist. So feedback from participants in an event is an essential component of evaluation:

  • Online surveys: tools such as SurveyMonkey or Polldaddy make it quick to set up a survey that members can fill in within a few minutes, so they are usually happy to take part. But keep the survey brief, and tell them up front how long it will take.
  • Individual interviews: you may get much more meaningful information about the event’s effectiveness if you approach participants personally. This may be a phone or Skype call, or an asynchronous method such as email. Ideally, a neutral third party will gather the data, since participants may be unwilling to open up to the facilitator. You may be able to set up a reciprocal arrangement with another facilitator to gather data from each other’s participants.
  • Collaborative feedback: you could set up a wiki that participants can use to record their feedback. Or you could set up a forum or a synchronous discussion space that participants use to discuss the event. Ideally this would be anonymous, so you may appoint one member to gather the raw data and provide you with a summary. Again, you may be able to arrange with another facilitator to manage this and gather feedback.

Develop evaluation questions and tools

  • Communicate clearly what you are evaluating – point the participants back to the activity if possible so they can reflect on it. Be specific – make sure the questions clearly identify aspects of the activity that you want feedback on.
  • Focus on how well the activity met their needs, not just on how much they enjoyed the process. If possible, ask for feedback at the higher levels of Kirkpatrick’s model – eg has the activity had a positive result, has it made a difference, have they been able to apply what they learned during the activity?
  • Focus separately on the design of the activity and its facilitation: they are distinct, and the effectiveness of each is essential.
  • Include questions that are open and qualitative so you can find out why things happened the way they did.

Gather feedback from community members

  • Tell participants how you will make use of the feedback: if you focus on improvement and they believe you are sincerely interested in making things better next time, they are more likely to engage in the evaluation process.
  • Gather feedback not just from those that took part in the event, but also from those who chose not to. Asking those who didn’t take part why they didn’t can tell you a lot about the design of the event and the way it was communicated to community members!

Reflect and evaluate
Feedback from participants is just one form of information on which to base the evaluation. Your own reflection is another essential component.

  • You may want to reflect on the activity before you gather data from participants – that way your own thoughts won’t be overly influenced by feedback. But once you have gathered the data, that’s a chance to reflect on and learn from the feedback from participants.
  • In your reflection, avoid placing blame on participants. Assume you can do better and learn from mistakes. If participants went astray, ask yourself: how could I have communicated more clearly?
  • Use your own recollections and notes from during the activity to triangulate – compare them with what participants have told you about the activity, what happened and why.
  • Focus on improvement – even where you think the activity was effective, try to identify specific things you could do to make it better next time.
  • Share your evaluation with the participants – it’s a community, right? Sharing the evaluation means it becomes part of the larger community collaboration and conversation, and can enhance member commitment. But… if there are comments or conclusions that you feel are private or could offend members, leave them out of the published version.

Image: A Midnight Modern Conversation by William Hogarth.

Bibliography

Davies, C. (n.d.). Kolb Learning Cycle Tutorial. Retrieved 30 May 2010 from http://www.ldu.leeds.ac.uk/ldu/sddu_multimedia/kolb/static_version.php

Business Performance Ltd (n.d.). Why Measure Training Effectiveness? Retrieved 30 May 2010 from http://www.businessperform.com/workplace-training/evaluating_training_effectiven.html

Carpenter Group (n.d.). The Deming Cycle. Retrieved 30 May 2010 from http://www.quality-improvement-matters.com/deming-cycle.html

Warwick University (2008). Action Research. Retrieved 30 May 2010 from http://www2.warwick.ac.uk/services/ldc/resource/evaluation/tools/action/

Tweetdeck problem with Greek text

The latest version of Tweetdeck (0.30.3) for the Mac does not appear to work properly with Greek characters. I have a simple ‘Greek verb of the day’ service set up using the Twitter API: it automatically posts an entry each day from a database of 1400 Greek verbs, giving the three main tenses along with an English translation.
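By way of illustration, a service like this can be a very simple script run once a day. The sketch below is hypothetical (the real service’s database schema, data and posting code are not shown in this post): it picks an entry deterministically from the day’s date and formats it to fit within a tweet.

```python
# Hypothetical sketch of a 'verb of the day' composer. The schema and
# sample data are illustrative; the real service's internals differ.
import sqlite3
import datetime

def demo_db():
    """Build an in-memory stand-in for the verb database."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE verbs (
                      id INTEGER PRIMARY KEY,
                      present TEXT, future TEXT, aorist TEXT,
                      english TEXT)""")
    db.execute("INSERT INTO verbs VALUES (1, 'λύω', 'λύσω', 'ἔλυσα', 'I loose')")
    return db

def verb_of_the_day(db, day):
    """Pick a row from the day's ordinal and format it as a tweet."""
    count = db.execute("SELECT COUNT(*) FROM verbs").fetchone()[0]
    row_id = day.toordinal() % count + 1
    present, future, aorist, english = db.execute(
        "SELECT present, future, aorist, english FROM verbs WHERE id = ?",
        (row_id,)).fetchone()
    # Three principal parts plus an English gloss, within 140 characters.
    return f"{present}, {future}, {aorist}: {english}"[:140]

tweet = verb_of_the_day(demo_db(), datetime.date(2010, 5, 30))
print(tweet)
```

The formatted string would then be sent through the Twitter API (at the time, an authenticated POST to the statuses/update endpoint).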

In Seesmic Desktop or a browser it appears correctly:

Image: the tweet as it appears in Seesmic Desktop

But in Tweetdeck all Greek words are replaced with a ‘pi’ symbol (∏) like this:

Image: the same tweet as it appears in Tweetdeck

Trying to diagnose the problem, I found that when using Tweetdeck to compose an update, Greek characters are not recognised at all. That is, you can’t type anything when the keyboard is in Greek text entry mode: most unusual for a Mac application. It’s not a problem with Adobe AIR itself, as Seesmic Desktop handles Greek characters perfectly well.
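A quick diagnostic (my own sketch, not anything from Tweetdeck) is to confirm that the tweet text really is ordinary Unicode Greek, ruling out a data problem before blaming the client. Greek letters sit in the Greek and Coptic block (U+0370–U+03FF), whereas the ‘∏’ that Tweetdeck displays is U+220F (N-ARY PRODUCT), from a different block entirely:

```python
# Check which Unicode block each character of a tweet falls in.
GREEK_AND_COPTIC = range(0x0370, 0x0400)

def is_greek(ch):
    """True if the character is in the Greek and Coptic block."""
    return ord(ch) in GREEK_AND_COPTIC

sample = "λύω"  # first principal part of a 'verb of the day' entry
print([f"U+{ord(ch):04X}" for ch in sample])  # codepoints of each letter
print(all(is_greek(ch) for ch in sample))     # genuine Greek text
print(f"U+{ord('∏'):04X}")                    # U+220F: not Greek at all
```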

Whether this is a problem with all non-English character sets I don’t know: it needs further investigation. But anyone wanting to use Twitter for language teaching and learning will need to check whether their language works with Tweetdeck. If you use the Twitter API to send automated updates, check that these are readable in Tweetdeck, and check that students can use Tweetdeck to post updates. I’ll be posting a note on my rimata homepage advising users that the ‘verb of the day’ may not be accessible using Tweetdeck on the Mac.

Collaborative learning: It’s how you use a wiki that counts

Mary Bennet at eScholars says:

Wikis are excellent tools for collaboration. When wikis are used students learn to collect and share information as well as publish and negotiate.

I agree with the first statement, but the second appears to confuse the tool with how it is used. It seems to suggest that the learning identified will happen because a wiki is used. That’s not the case – it’s the collaborative activities the teacher sets up based around a wiki that will (if successful) enable students to learn to ‘collect and share information as well as publish and negotiate’.

Use of a wiki does not automatically lead to learning to collaborate. Likewise, not using a wiki does not prevent such learning from occurring: other tools such as Google Docs can be used in learning to collaborate. So it’s how the tool is used that leads to the desired learning.

As I’ve described elsewhere, wikis are useful for more than just collaboration: they are also a very useful tool for non-collaborative learning such as personal reflection. So the thoughtful application of the tool is crucial to achieving success. Those of us working in professional development in education have seen poorly-planned incorporation of wikis and other technology tools lead to disappointment and disillusionment.

The use of a wiki as a learning tool within a course is more complex than a straightforward collaboration between a small group of co-workers working on a shared project. So how the wiki is applied in the learning context requires more careful planning. If this were not so, learning design would cease to be a productive activity, and solving the technical issues of incorporating a wiki and training teachers and students in its use would be all that’s required. The limited uptake of tools such as wikis in education suggests this is not the case.

Games and learning: are they incompatible?

John Bohannon recently wrote in Science magazine about the review by a group of scientists of the game Spore from a scientific point of view (Flunking Spore). Given the comments of the reviewers, it’s clear that many aspects of the game do not provide an accurate model of evolutionary science. As the article states, ‘Spore clearly has little in common with science’.

The writer goes on to say ‘with very minor tweaks, the game could live up to its promise’. But the tweaks required to make it good science might easily make it a less engaging game. This is not to say that learning shouldn’t be enjoyable and engaging: just that what makes a game enjoyable and engaging might not be quite the same thing as what makes a learning experience effective. And what makes a game engaging might be precisely that it is nothing like reality.

In his critique of the article, John Hawks counters the observation that Spore is not good science by saying ‘Dude, it’s a game‘. Exactly: the design imperatives for a successful game do not necessarily match the design imperatives for a learning experience. He even goes on to point out how some of the ‘tweaks’ that Bohannon suggests for improving the science of Spore would diminish its value as a game.

Games and learning are not intrinsically incompatible. But because games do not necessarily represent reality well, how we incorporate games into learning experiences can be all-important. If I were to incorporate Spore into a science class, I’d be trying to engage learners in a critical analysis of the science implicit in the game, not relying on the game to impart scientific principles.

Image: Giardia

Wikis: more than just collaboration

Definitions of wikis, especially in education, often state that wikis are ‘collaborative’. Most wiki software does support collaboration, but not all applications of wikis need to be collaborative. In fact, collaborative features can be detrimental if we want to publish our own writing and not have it changed or deleted… the reflective thinker may not want to be disturbed!

For example, I maintain a MediaWiki site for my own articles and other resources that I don’t want changed. It used to be an open wiki, but dealing with the spam became too time-consuming. So now the wiki is open only for reading by visitors, not for writing. I no longer feel the need to apologise for this – I don’t let others browse the documents on my hard disk, but I do let others browse (but not edit) the writing on my wiki. And there are plenty of other channels for collaboration out there.

Some wiki purists might say that using a wiki solely for your own writing, without allowing for input of others, is against the wiki philosophy. But I’d argue we should be able to use the tools in ways that best meet our needs, and the best tools provide flexibility in how we use them. And most of us have the need to write in different read-write modes: sometimes it’s private, sometimes it’s public, and sometimes it’s collaborative. The best wiki tools should let us easily write and manage documents in a range of read-write modes.

Ideally, MediaWiki would allow me to easily manage the read-write mode of any article and its associated discussion page. Unfortunately, it’s not that straightforward: permissions are set in the MediaWiki configuration file, and the documentation warns against relying on the available plugins for finer-grained permission management. So it hasn’t been feasible for me to effectively manage the read-write mode of individual pages on my main wiki.
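For the record, locking a MediaWiki site down to read-only is done site-wide in LocalSettings.php, along these lines (a minimal sketch using the standard group permission settings):

```php
// LocalSettings.php: anonymous visitors may read but not edit or
// register; logged-in users (here, just the site owner) may still edit.
$wgGroupPermissions['*']['edit'] = false;
$wgGroupPermissions['*']['createaccount'] = false;
$wgGroupPermissions['user']['edit'] = true;
```

Note that these settings apply to the entire wiki: that lack of per-page granularity is exactly the limitation described above.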

Recently I’ve been trying out PMWiki, which makes much better allowance for controlling access to the wiki site and to individual pages within it. Any page can have a password for reading and a password for writing, and these can be set relatively simply. That means you have fine-grained control over the read-write mode of any specific page.
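In PMWiki, page passwords can be set from a page’s attributes form (via ?action=attr) or given site-wide defaults in local/config.php, along these lines (passwords here are placeholders):

```php
// local/config.php: default passwords, applied to any page that
// doesn't set its own via ?action=attr.
$DefaultPasswords['read'] = pmcrypt('readsecret');   // password to view
$DefaultPasswords['edit'] = pmcrypt('editsecret');   // password to edit
```

Individual pages can then override these defaults, which is what gives the per-page control that MediaWiki lacks.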

That’s just what I need – and I believe that’s what learners need too. Not all learning happens collaboratively: successful learners need to be reflective as well as collaborative. Effective Web 2.0 tools provide for personal reflection as well as more social approaches to learning.

Photo by Matan: Copy of Rodin’s ‘Thinker’ at Columbia University