Thursday, November 12, 2015

Keynes: the return of the master

I've been a Keynesian since before I had labels to put on these things, but I've been slow catching up with Robert Skidelsky's commentary on the current financial crisis.

Robert Skidelsky's (2010) Keynes: the return of the master is divided into three sections. The first is scene-setting history and biography. The second is likely to be rather heavy going for non-economists, but it was the third section that really grabbed me. Here, Skidelsky goes beyond analysis of the current crisis to propose solutions for our woes (and also has a pop at a few neo-Keynesians such as Stiglitz). But Skidelsky also goes way beyond economics with a discussion of religion, duty, ethics and post-Utilitarianism, particularly G.E. Moore's influence on Keynes.

Skidelsky also has a shot at asking 'How much is enough?', suggesting that Keynes didn't quite hit that nail on the head. How Much is Enough? Money and the Good Life (2012) is on my C-word reading list now.

Monday, November 09, 2015

Chalk and Cheese

In a week when all was doom and gloom over #HEgreenpaper, something good happened: Paul Orsmond and Stephen Merry published a paper.

One day last week I was ranting at one of my project students about contrasts - as a piece of statistical jargon and as a vehicle for constructing hypotheses (a sketch of the statistical kind follows the abstract below). So let's have some contrasts. Orsmond and Merry have let the cat out of the bag - it's not all about teachers, it's about the students too. I misread one sentence of their paper and temporarily thought they were calling for "non-constructive alignment" - then I was disappointed that they hadn't. Anyhow, the contrast between the reductive approach of #HEgreenpaper and the constructive approach in this paper could not be greater. You've read one, now read the other:

Orsmond, P. and Merry, S. (2015) Tutors’ assessment practices and students’ situated learning in higher education: chalk and cheese. Assessment & Evaluation in Higher Education, 28 Oct 2015. doi: 10.1080/02602938.2015.1103366
This article uses situated learning theory to consider current tutor assessment and feedback practices in relation to learning practices employed by students outside the overt curriculum. The case is made that an emphasis on constructive alignment and explicitly articulating assessment requirements within curricula may be misplaced. Outside of the overt curriculum students appear to be interdependent learners, participating in communities of practice and learning networks, where sense-making occurs through negotiation and there is identity development. Such negotiation may translate curriculum requirements articulated by tutors into unexpected meanings. Hence, tutors’ efforts might be better placed on developing students’ ability to self-assess and to effectively evaluate and negotiate information, rather than primarily on their own delivery of the curriculum content and feedback. Tutors cannot be fully effective if they fail to consider students’ learning outside the overt curriculum, and ways to facilitate such learning processes are suggested together with future research directions.
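
Back to contrasts in the statistical sense: for anyone who hasn't met the jargon, a contrast is just a weighted comparison of group means whose weights sum to zero. Here's a minimal sketch in Python (the groups, marks and weights are invented for illustration; they don't come from either paper):

```python
# A planned contrast in a one-way design: does the 'workshop' group
# differ from the average of the other two conditions?
import numpy as np
from scipy import stats

# Invented marks for three hypothetical teaching conditions
groups = {
    "lecture":  np.array([52.0, 58, 61, 55, 60]),
    "seminar":  np.array([57.0, 63, 59, 65, 62]),
    "workshop": np.array([66.0, 70, 64, 72, 68]),
}

weights = np.array([-0.5, -0.5, 1.0])  # contrast weights sum to zero
means = np.array([g.mean() for g in groups.values()])
ns = np.array([len(g) for g in groups.values()])

# Pooled within-group variance (MS_within), as in a one-way ANOVA
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
df_within = int(ns.sum()) - len(groups)
ms_within = ss_within / df_within

estimate = weights @ means                          # value of the contrast
se = np.sqrt(ms_within * np.sum(weights**2 / ns))   # its standard error
t = estimate / se
p = 2 * stats.t.sf(abs(t), df_within)               # two-sided p-value
print(f"contrast = {estimate:.2f}, t({df_within}) = {t:.2f}, p = {p:.4f}")
```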

Friday, November 06, 2015

How to fix feedback

Yet another paper telling us how to (start to) fix the feedback problem. This one contains some very sensible recommendations, which I have highlighted below. Those of you who have been playing along for many years may feel that this manuscript is rather similar to this and this (not referenced, but perhaps that's understandable given that the HEA have binned all their publicly-funded open-access journals). And so we reinvent the wheel. Again. Anyway, here's how to fix feedback:

Clear Transferability - Programme-level assessment. Yes please. Can't see it happening.

Feedback On Draft Work - Yes please, it's feedback, not assessment. Lift and separate.

Directly Linked To Criteria - Rubrics. Hmm... maybe...

Wasted Opportunities - Separate feedback and assessment. It's simple, isn't it? I'll say it again: separate feedback and assessment. Want me to say it again? OK, separate feedback and assessment.

Making connections: technological interventions to support students in using, and tutors in creating, assessment feedback. (2015) Research in Learning Technology, 23: 27078.
This paper explores the potential of technology to enhance the assessment and feedback process for both staff and students. The ‘Making Connections’ project aimed to better understand the connections that students make between the feedback that they receive and future assignments, and explored whether technology can help them in this activity. The project interviewed 10 tutors and 20 students, using a semi-structured approach. Data were analysed using a thematic approach, and the findings have identified a number of areas in which improvements could be made to the assessment and feedback process through the use of technology. The findings of the study cover each stage of the assessment process from the perspective of both staff and students. The findings are discussed in the context of current literature, and special attention is given to projects from the UK higher education sector intended to address the same issues.

Thursday, November 05, 2015

Let's be honest

Bloxham, S., den-Outer, B., Hudson, J., & Price, M. (2015) Let’s stop the pretence of consistent marking: exploring the multiple limitations of assessment criteria. Assessment & Evaluation in Higher Education, 23 Mar 2015, 1-16
Unreliability in marking is well documented, yet we lack studies that have investigated assessors’ detailed use of assessment criteria. This project used a form of Kelly’s repertory grid method to examine the characteristics that 24 experienced UK assessors notice in distinguishing between students’ performance in four contrasting subject disciplines: that is their implicit assessment criteria. Variation in the choice, ranking and scoring of criteria was evident. Inspection of the individual construct scores in a sub-sample of academic historians revealed five factors in the use of criteria that contribute to marking inconsistency. The results imply that, whilst more effective and social marking processes that encourage sharing of standards in institutions and disciplinary communities may help align standards, assessment decisions at this level are so complex, intuitive and tacit that variability is inevitable. We conclude that universities should be more honest with themselves and with students, and actively help students to understand that application of assessment criteria is a complex judgement and there is rarely an incontestable interpretation of their meaning.

"Accepting the inevitability of grading variation means that we should review whether current efforts to moderate are addressing the sources of variation. This study does add some support to the comparison of grade distributions across markers to tackle differences in the range of marks awarded. However, the real issue is not about artificial manipulation of marks without reference to evidence. It is more that we should recognise the impossibility of a ‘right’ mark in the case of complex assignments, and avoid overextensive, detailed, internal or external moderation. Perhaps, a better approach is to recognise that a profile made up of multiple assessors’ judgements is a more accurate, and therefore fairer, way to determine the final degree outcome for an individual. Such a profile can identify the consistent patterns in students’ work and provide a fair representation of their performance, without disingenuously claiming that every single mark is ‘right’. It would significantly reduce the staff resource devoted to internal and external moderation, reserving detailed, dialogic moderation for the borderline cases where it has the power to make a difference. This is not to gainsay the importance of moderation which is aimed at developing shared disciplinary norms, as opposed to superficial procedures or the mechanical resolution of marks."

It's quite easy to criticize this paper: small-scale study (n=24), no attempt at statistical analysis or validation. But there's still an inescapable feeling that, as the stakes have escalated, HE is kidding itself about assessment practices.
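
As an aside, the 'comparison of grade distributions across markers' that the quoted passage endorses is easy enough to do. A minimal sketch in Python (marker names and marks are invented; in practice you'd pull them from your student records system):

```python
# Hypothetical sketch: compare grade distributions across three markers.
import numpy as np
from scipy import stats

marks = {
    "marker_A": np.array([48, 55, 62, 58, 70, 51, 66, 59]),
    "marker_B": np.array([60, 68, 72, 65, 74, 63, 69, 71]),
    "marker_C": np.array([50, 57, 54, 61, 56, 52, 59, 55]),
}

# Kruskal-Wallis test: do the markers' mark distributions differ overall?
h, p = stats.kruskal(*marks.values())
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Descriptive comparison of location and spread for each marker
for name, m in marks.items():
    iqr = np.percentile(m, 75) - np.percentile(m, 25)
    print(f"{name}: median = {np.median(m):.1f}, IQR = {iqr:.1f}")
```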

Thursday, October 22, 2015

Peer-assessment in higher education

After my mini-rant yesterday, here's a new review article on peer-assessment in higher education. In my view the author is possibly too circumspect in his conclusions - if a student wrote this for me I'd tell them to get off the fence. However, he does us a service by pointing out that peer assessment isn't necessarily a magic bullet that will save staff time. Too darn right. Nevertheless, this is a very useful bibliography for those of us who are convinced that peer assessment has to be the future of higher education but are struggling to make it work.

Peer-assessment in higher education – twenty-first century practices, challenges and the way forward. Assessment & Evaluation in Higher Education 19 Oct 2015 doi: 10.1080/02602938.2015.1100711
Peer assessment in higher education has been studied for decades. Despite the substantial amount of research carried out, peer assessment has yet to make significant advances. This review identifies themes of recent research and highlights the challenges that have hampered its advance. Most of these challenges arise from the manual nature of peer assessment practices, which prove intractable as the number of students involved increases. Practitioners of the discipline are urged to forge affiliations with closely related fields and other disciplines, such as computer science, in order to overcome these challenges.

Wednesday, October 21, 2015

PeerMark Frustration


Frustration, because PeerMark is so close to being a brilliant tool, apart from being mind-blowingly complex to set up (which means that few will use it).

Sadly, as reported, it is buggy as hell, generating dozens of emails from students :-(

Tuesday, September 29, 2015

How to write an essay

Problem 1: Students write descriptive essays which do not demonstrate critical thinking.
Solution: Title as question.

Problem 2: Massive over-reliance on essays as an assessment format in higher education.
Solution: ???

Henri, D., Morrell, L. and Scott, G. (2015) Ask a clearer question, get a better answer. F1000Research, 4: 901. doi: 10.12688/f1000research.7066.1
Many undergraduate students struggle to engage with higher order skills such as evaluation and synthesis in written assignments, either because they do not understand that these are the aim of written assessment or because these critical thinking skills require more effort than writing a descriptive essay. Here, we report that students who attended a freely available workshop, in which they were coached to pose a question in the title of their assignment and then use their essay to answer that question, obtained higher marks for their essay than those who did not attend. We demonstrate that this is not a result of latent academic ability amongst students who chose to attend our workshops and suggest this increase in marks was a result of greater engagement with ‘critical thinking’ skills, which are essential for upper 2:1 and 1st class grades. The tutoring method we used holds two particular advantages: First, we allow students to pick their own topics of interest, which increases ownership of learning, which is associated with motivation and engagement in ‘difficult’ tasks. Second, this method integrates the development of ‘inquisitiveness’ and critical thinking into subject specific learning, which is thought to be more productive than trying to develop these skills in isolation.

I'm not quite sure how peer review works on the F1000 education channel. I have been asked to peer review articles in the past and have done so, but I'm not sure how open the process is. So here's my open peer review of this paper.

This is an interesting and potentially valuable study of a method to improve the quality of student writing. The sample size is relatively small and the major weaknesses are pointed out by the authors in the Discussion:
"For the purpose of this study we assumed that students who posed a question in the title of their essay had attended the workshop and understood the underlying concepts of the workshop, and this has been used as the independent factor in our analysis. We acknowledge that this lack of certainty in the allocation of students to the did/did not attend category does need to be borne in mind when interpreting our results. Another possible confounding factor is that voluntary workshop attendance may be skewed towards individuals who are more engaged or motivated with the module; and these individuals are more likely to obtain higher grades because of this higher engagement with the module content"
To counteract these factors, the authors should report an effect size alongside the p-values quoted in the results.
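
By 'effect size' I mean something as simple as Cohen's d for the attended versus did-not-attend comparison. A minimal sketch in Python (the marks below are invented; the authors would substitute their own data):

```python
# Hypothetical sketch: Cohen's d for the attended / did-not-attend groups.
import numpy as np

attended = np.array([68.0, 72, 65, 70, 74, 66, 69])
did_not = np.array([60.0, 63, 58, 65, 61, 59, 64])

def cohens_d(a, b):
    """Cohen's d using the pooled (Bessel-corrected) standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(f"Cohen's d = {cohens_d(attended, did_not):.2f}")
```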