Thursday, May 26, 2016

The extent of students’ feedback use has a large impact on subsequent academic performance

... which you'd kinda hope it would! However, it's important to get empirical evidence that it does, and this well-conducted study provides exactly that (marred only by the absence of effect sizes!). But since correlation does not equal causation, does feedback use improve academic performance, or is it just a proxy for engagement?
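For what it's worth, effect sizes are cheap to report. A minimal sketch of computing Cohen's d for two groups of marks, using only the standard library (the numbers here are invented, purely for illustration):

```python
# Minimal sketch: Cohen's d for two independent groups (hypothetical data).
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardised mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

high_use = [68, 72, 75, 70, 74]   # marks for students who used feedback a lot
low_use  = [62, 65, 60, 66, 63]   # marks for students who barely looked at it
print(round(cohens_d(high_use, low_use), 2))
```

A single number like this would let readers judge whether a "significant" relationship is actually a large one.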

Are they using my feedback? The extent of students’ feedback use has a large impact on subsequent academic performance. Assessment & Evaluation in Higher Education 19 May 2016 doi: 10.1080/02602938.2016.1174187
Feedback is known to have a large influence on student learning gains, and the emergence of online tools has greatly enhanced the opportunity for delivering timely, expressive, digital feedback and for investigating its learning impacts. However, to date there have been no large quantitative investigations of the feedback provided by large teams of markers, feedback use by large cohorts of students, nor its impact on students’ academic performance across successive assessment tasks. We have developed an innovative online system to collect large-scale data on digital feedback provision and use. Our markers (n = 38) used both audio and typed feedback modalities extensively, providing 388 ± 4 and 1126 ± 37 words per report for first- and second-year students, respectively. Furthermore, 92% of first year and 85% of second-year students accessed their feedback, with 58% accessing their feedback for over an hour. Lastly, the amount of time students spent interacting with feedback is significantly related to the rate of improvement in subsequent assessment tasks. This study challenges assertions that many students do not collect, or use, their feedback. More importantly, we offer novel insights into the relationships between feedback provision, feedback use and successful academic outcomes.

Friday, May 06, 2016

Heads of University Biosciences Annual Meeting

Heads of University Biosciences Annual Meeting
4-5 May 2016 College Court, University of Leicester
Special Interest Group of the Royal Society of Biology

Organised by Professor Jon Scott (University of Leicester) and Professor Judith Smith (University of Salford)


HE Bioscience Teacher of the Year: Finalist Case Studies
a) Dr Kevin Coward (University of Oxford) Problem-based teaching for the development of laboratory skills.
In a non-assessed activity, postgraduate students devise an experimental protocol based on a scenario, which is then applied in the laboratory. Students take turns acting as students and teachers. This links the syllabus to the "real world", in both the science and the delivery of teaching.
b) Dr Lesley Morrell (University of Hull) Increasing feedback, reducing marking
In an undergraduate research skills module, staff explain their published papers to students in a seminar programme. Students write eight weekly 500-word "News and Views" summary articles, each on one of the published papers. Feedback is given weekly, leading to feed-forward within a single module, together with a rubric-generated mark. To make the module sustainable, feedback is tapered as the module continues, and anonymized feedback is made available to all students. Summative assessment is performed on two student-selected articles from the course. There is statistical evidence of mark improvement during the course. Weaker students improve more than stronger students.
c) Dr Katharine Hubbard (University of Hull) Building partnerships with students WINNER
Students in practical classes suffer information overload. Levels of confidence vary considerably. Because the lab environment is stressful, student-produced pre-lab video tutorials and post-lab online revision quizzes were added. Students are involved at all stages - design, execution, evaluation and dissemination.

Dr Anna Zecharia (British Pharmacological Society) The Pharmacological Core Curriculum
A Delphi process was used to build a consensus curriculum covering subject knowledge, research and practical skills, and transferable skills. The process is ongoing.

Session One Academic Integrity
Professor Jon Scott (University of Leicester) Introduction & Institutional Strategies
Spectrum from poor academic practice to cheating. Strategies range from deterrence through detection, education and assessment design.
Dr Phil Newton (Swansea University) Ghostwriting - Essay Mills
Essay mills now specialize in custom writing driven by an auction process. The average price for a standard essay starts from 100, with a turnaround time of 1-5 days. It is a buyer's market (cf. Mechanical Turk). Sites claim to be providing model answers; if a student submits the work provided, it is the student who commits the offence. This is a well-established business run by umbrella companies under many different names. The most important response is assessment design, although increasing student numbers are a challenge.
Dr Irene Glendinning (Coventry University) European Perspectives on Academic Integrity
Findings of the IPPHEAE Erasmus project: 27 EU countries, 5000 survey and interview responses. Wide variation in attitudes and responses across Europe, but statistics are very difficult to compare. Views on acceptable academic practice are inconsistent across Europe. An "Academic Maturity Model" suggests the UK is doing better than most of the EU due to its emphasis on training.

Session Two - Designing out Plagiarism
Dr Erica Morris (Anglia Ruskin University) Designing out Plagiarism
Gill Rowell (Turnitin)
Dr Heather McQueen (University of Edinburgh) Plagiarism: The Student View
Session Three Teaching Excellence Framework (TEF)
Professor Sean Ryan (Higher Education Academy-STEM) Achieving and demonstrating teaching excellence
Discussion Workshop - What does the TEF mean for us?
Unfortunately I was called away on departmental duties and was not able to attend this session.

Session Four Wider Outreach
Professor Andy Miah (University of Salford) The Pathway to Impact
Professor Miah talked about science communication.
Professor Adam Hart (University of Gloucestershire) Citizen Science
Awareness raising may be more important than the scientific output. Data generation is a bonus.

Friday, April 29, 2016

Who needs anonymity? The role of anonymity in peer assessment

Can students reliably and fairly assess the work of peers who are known to them? A solution to this problem (if it exists) is anonymity, but what about open learning situations where anonymity is not possible? And is anonymity desirable anyway? This well-conducted study shows that anonymity improves the reliability of peer marking - but it's not essential - training markers improves outcomes to the same extent.

The role of anonymity in peer assessment. Assessment & Evaluation in Higher Education 22 Apr 2016 doi: 10.1080/02602938.2016.1174766
This quasi-experimental study aimed to examine the impact of anonymity and training (an alternative strategy when anonymity was unattainable) on students’ performance and perceptions in formative peer assessment. The training in this study focused on educating students to understand and appreciate formative peer assessment. A sample of 77 students participated in a peer assessment activity in three conditions: a group with participants’ identities revealed (Identity Group), a group with anonymity provided (Anonymity Group) and a group with identities revealed but training provided (Training Group). Data analysis indicated that both the Anonymity Group and Training Group outperformed the Identity Group on projects. In terms of perceptions, however, the Training Group appreciated the value of peer assessment more and experienced less pressure in the process than the other two groups.

Wednesday, April 20, 2016

Student motivation in low-stakes assessment

Although densely written in stats-speak, this is an interesting paper for all those who, like me, have failed to get many student cohorts to engage with formative assessment. The major finding of interest here is that cohort effects trump other factors, including prior mathematical knowledge. It works in some groups, not in others. What this paper is not able to sort out is whether these differences are due to the quality of teaching groups experience, or to unknown (and possibly unmeasurable) stochastic factors.

Student motivation in low-stakes assessment contexts: an exploratory analysis in engineering mechanics. Assessment & Evaluation in Higher Education 19 Apr 2016 doi: 10.1080/02602938.2016.1167164
The goal of this paper is to examine the relationship of student motivation and achievement in low-stakes assessment contexts. Using Pearson product-moment correlations and hierarchical linear regression modelling to analyse data on 794 tertiary students who undertook a low-stakes engineering mechanics assessment (along with the questionnaire of current motivation and the ‘Effort Thermometer’), we find that different measures of student motivation (effort, interest, sense of challenge, expectancy of success and anxiety to fail) showed atypical correlation patterns. The nature of the correlations further varies depending on the type of test booklet used by students. The difficulty of the early items in the assessment was positively correlated with ‘anxiety’ and ‘success’, but negatively correlated with ‘interest’. In the light of our findings, we suggest that future research should systematically explore (taking into account testing conditions like test booklet design and test-item format) the implications of student motivation for achievement in low-stakes assessment contexts. Until the consequences of these processes are better understood, the validity of assessment data generated in low-stake conditions in the higher education sector will continue to be questioned. With a greater understanding of these processes, steps could be taken to correct for student motivation in such settings, thus increasing the utility of such assessments.

Friday, January 08, 2016

Reconsidering the role of recorded audio in learning

"the educational use of the recorded voice needs to be reconsidered and reconceptualised so that audio is valued as a manageable, immediate, flexible, potent and engaging medium."

Yes, it does. Audio remains the greatest under-utilized technical resource in education - potentiated by the fact that it is well suited to mobile devices. But on reading this paper, the question I ask myself is "Why?".

"Technically speaking, podcasting is the serial distribution of locally generated downloadable digital media episodes, usually audio, via RSS (Really Simple Syndication) feeds to niche audiences of subscribers. RSS incorporates structured information about the podcast channel and the appended items (‘episodes’). In this way the RSS feed file can be automatically and regularly checked by the end-user’s aggregation software (e.g. iTunes), which triggers the downloading of new episodes whenever they become available."
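The mechanism described in that quote is simple enough to sketch. A feed is just XML; an aggregator parses it and picks out any episode "enclosures" it hasn't already downloaded. A minimal sketch using Python's standard library (the feed content below is an invented, hypothetical example, not a real podcast):

```python
# Minimal sketch of what a podcast aggregator does: parse an RSS feed
# and pull out the downloadable audio "enclosures" for new episodes.
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Hypothetical Lecture Podcast</title>
    <item>
      <title>Episode 1: Feedback</title>
      <enclosure url="http://example.org/ep1.mp3" type="audio/mpeg" length="12345"/>
    </item>
    <item>
      <title>Episode 2: Assessment</title>
      <enclosure url="http://example.org/ep2.mp3" type="audio/mpeg" length="23456"/>
    </item>
  </channel>
</rss>"""

def new_episodes(feed_xml, already_downloaded):
    """Return (title, url) pairs for enclosures not seen before."""
    root = ET.fromstring(feed_xml)
    episodes = []
    for item in root.iter("item"):
        title = item.findtext("title")
        enclosure = item.find("enclosure")
        if enclosure is not None:
            url = enclosure.get("url")
            if url not in already_downloaded:
                episodes.append((title, url))
    return episodes

print(new_episodes(FEED, {"http://example.org/ep1.mp3"}))
```

An aggregator like iTunes simply re-runs this check on a schedule and downloads whatever is new - which is exactly the "automatically and regularly checked" behaviour the quote describes.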

And there's the rub. The death of RSS and the iTunes Walled Garden almost killed true (subscription channel) podcasting. But there are other reasons. We still have a ridiculous over-reliance on keyboard input - laughable when mobile phone keyboards are considered. Captain Kirk is laughing himself silly, and I suspect the USS Enterprise computer is having a chuckle too.

So why am I typing this rather than speaking it? Permanence is one reason. Editability is another. I can edit my writing as I go but not my spoken words. I can speak from a script, but that level of production takes longer than I have for this communication, and it robs the medium of immediacy and engagement unless I am a professional actor.

So yes, let's reconsider the role of recorded audio in learning. But let's not kid ourselves that it's easy.

Reconsidering the role of recorded audio as a rich, flexible and engaging learning space. (2016) Research in Learning Technology 24: 28035
Audio needs to be recognised as an integral medium capable of extending education’s formal and informal, virtual and physical learning spaces. This paper reconsiders the value of educational podcasting through a review of literature and a module case study. It argues that a pedagogical understanding is needed and challenges technology-centred or teacher-centred understandings of podcasting. It considers the diverse methods being used that enhance and redefine podcasting as a medium for student-centred active learning. The case study shows how audio created a rich learning space by meaningfully connecting tutors, students and those beyond the existing formal study space. The approaches used can be categorised as new types of learning activity, extended connected activity, relocated activity, and recorded ‘captured’ activity which promote learner replay and re-engagement. The paper concludes that the educational use of the recorded voice needs to be reconsidered and reconceptualised so that audio is valued as a manageable, immediate, flexible, potent and engaging medium.

Friday, December 11, 2015

More Than a Literature Review: Final Year Projects for Bioscience Students

Final year projects are very much the capstone of the undergraduate bioscience experience. Yet many academics cling doggedly to the belief that they are training Mini-Mes and that projects are only an entry into PhD programmes. But alternative (non-laboratory) types of project remain very much second-class options. This is because, in my personal opinion (which does not represent that of my employer), activity is valued over thinking. And in the weird world we live in, scholarship does not count as activity. But the vast and increasing majority of our graduates will never don a lab coat or set foot in a laboratory again after they graduate. They will, however, be required to think. And to make judgements and have opinions. In other words, we live in hope that our graduates will live their lives in a scholarly way. The least we can do is to try to train them to do that.

Julia Lodge (2015) More Than a Literature Review: An Alternative Final Year Project for Bioscience Students. Education in Practice, Vol. 2 No. 1.
With increasing pressures on staff time and increased diversity in the student population it is important that universities explore different ways of providing final year projects. This case study describes a successful format developed in the School of Biosciences at the University of Birmingham. The project consists of a literature review followed by an in-depth critical analysis of five key papers and the subsequent development of a research proposal. In addition to written reports and an oral presentation students are asked to write both a lay and a technical abstract for their research proposal. This challenges the student to explain the importance and also the scientific approach to different audiences. The analysis of five key papers has been found to be an excellent tool to encourage a deep understanding and critical analysis of research papers, and the skills demonstrated in this section are distinct from those demonstrated in a traditional literature review. The research proposal allows students to develop many of the skills usually associated with a practical project including identification of gaps in current knowledge and developing a hypothesis. Overall this format allows students not choosing a lab or field based final year project to apply the skills and knowledge accumulated during their degree in a discipline related context and despite not having a practical element it remains true to its roots in experimental science.

Education in Practice - December 2015

Wednesday, December 02, 2015

Assessing Teamwork

I'm in the middle of a big teamwork evaluation right now, so the title of this paper immediately grabbed my attention. The Team-Q tool described looks very good, although sadly it remains the case that the procedures involved are still too cumbersome for easy widespread adoption.

Oh, and then there's the whole business of student "satisfaction". Team assessment? Students hate it.

Assessing teamwork in undergraduate education: a measurement tool to evaluate individual teamwork skills. (2015): Assessment & Evaluation in Higher Education, doi: 10.1080/02602938.2015.1116497
Effective teamwork skills are essential for success in an increasingly team-based workplace. However, research suggests that there is often confusion concerning how teamwork is measured and assessed, making it difficult to develop these skills in undergraduate curricula. The goal of the present study was to develop a sustainable tool for assessing individual teamwork skills, with the intention of refining and measuring these skills over time. The TeamUp rubric was selected as the preliminary standardised measure of teamwork and tested in a second year undergraduate course (Phase One). Although the tool displayed acceptable psychometric properties, there was concern that it was too lengthy, compromising student completion. This prompted refinement and modification leading to the development of the Team-Q, which was again tested in the same undergraduate course (Phase Two). The new tool had high internal consistency, as well as conceptual similarity to other measures of teamwork. Estimates of inter-rater reliability were within a satisfactory range, although it was determined that logistical issues limited the feasibility of external evaluations. Preliminary evidence suggests that teamwork skills improve over time when taught and assessed, providing support for the continued application of the Team-Q as a tool for developing teamwork skills in undergraduate education.