As a professional developer, I make extensive use of collaborative groups to generate and share knowledge. Stephen Downes says some pejorative things about groups, but I’ve recently seen some positive spin-offs from collaboration involving groups of educators from different institutions.
As part of the DFE project, I’ve seen how a small group of educators from a cluster of institutions can share and compare good practice from within their respective institutions and work together on a synthesis. For example, drawing together systems and tools for managing flexible learning development – individually these have some big gaps, but they can be synthesised to create something much more comprehensive and useful.
Rather than groups being a force which automatically homogenises everything into a bland conformity (Stephen’s ‘metal ingot’), they can build on the diversity of members and their shared goals to produce work of value which respects diversity. The results can be far more effective than the individuals could develop on their own.
I suspect one key factor is the effectiveness with which the group self-manages the tension between the diversity of its members and the shared goals to which they are working. In other words, the institutional capability is partly reliant on the personal capabilities of the individuals within the collaborative group. This illustrates an issue I raised in an earlier post: what is the relationship between individual and institutional capability? Systems and resources alone are not enough, and developing institutional capability must incorporate professional development which helps develop individuals’ capability as well as their skills and knowledge.
One thing I really like about the eMM model is its emphasis on explicitly informing learners about the learning process. For example, two practices incorporated in the model are:
- Students are provided with course documentation describing all of the communication channels used.
- Students are provided with course documentation describing how different communication channels will support their learning.
In other words, good practice requires not just that learners are told how the course will be delivered, but given some justification for this in terms of how this will benefit their learning.
For some time, this has struck me as something that educators don’t always do very well – too often, the learning activities that teachers choose can seem arbitrary to learners. From my own experience as a learner, I know that knowing why I am going to be involved in a certain activity increases my motivation and engagement. Conversely, if the educational value of a learning activity is not clear, my motivation and engagement decrease.
This is true of both face-to-face (kanohi-ki-te-kanohi) and online learning – and given the lower motivation some online learners report, perhaps it is even more crucial in that context?
Some years ago, an incident in my professional development work highlighted this issue: Open teaching
I’ve had lecturers say to me, “What is good for students, they often don’t like.” But I know that when I’ve been a learner myself, I don’t like not knowing where we’re headed and how, and I do like knowing why – that is, if I understand why something is good for me, I’m much more inclined to like it!
The assessment of institutional capability for e-learning, distance or flexible learning is a process which results in a generalised statement of overall capability based on evidence from a sample of the institution’s programmes.
I’m currently involved in a major project to assess the institutional capability for e-learning of 20 institutes of technology and polytechnics throughout New Zealand. The project is co-ordinated by Terry Neal and aims to report back early in 2008.
The capability model used is the eLearning Maturity Model (eMM) developed by Stephen Marshall of Victoria University.
The eMM model assesses capability in five broad process areas…