The aim of this contribution is to analyse the role of the type of assessment criteria and of criteria engagement strategies in evaluative judgement (Boud, Ajjawi, Dawson, & Tai, 2018) as a strategy to foster Self-Regulated Learning (SRL) (Nicol, Thomson, & Breslin, 2014; Panadero, Jonsson, & Strijbos, 2016). A group task with two feedback loops, which met Panadero, Jonsson, and Strijbos's (2016) criteria, was designed so that students could peer-assess two versions of the task before the final submission. Students also had to reflect on the feedback received and state how they had integrated it (Winstone & Boud, 2020). An initial activity was carried out to develop students’ engagement with the assessment criteria: the written criteria (focused on the task and the process) were presented and discussed (Winstone, Nash, Parker, & Rowntree, 2017; Carless & Boud, 2018). The role of assessment criteria and criteria engagement strategies in evaluative judgement was studied by analysing the quality of student teachers’ feedback. The content of students’ feedback was analysed with an ad hoc guide focused on the type, focus, tone, content, and direction of the feedback. The results presented in this contribution come from a first-year subject in the Bachelor’s Degree in Primary Education at the University of Barcelona, in which 59 students were enrolled. The analysis of the type of peer feedback between loops 1 and 2 shows an improvement in feedback quality, which was progressively oriented to the process (Ajjawi & Boud, 2018; Hattie & Timperley, 2007); the feedback was more didactic and suggestive (rather than corrective) and addressed to the peer. The components of evaluative judgement (Panadero, Broadbent, Boud, & Lodge, 2019) seem to be fostered through the improvement in the quality of feedback and the reflections on how it is integrated into the next task.
REFERENCES:
Ajjawi, R., & Boud, D. (2018). Examining the nature and effects of feedback dialogue. Assessment & Evaluation in Higher Education, 1-14. doi:10.1080/02602938.2018.1434128
Boud, D., Ajjawi, R., Dawson, P., & Tai, J. (Eds.). (2018). Developing evaluative judgement in higher education: Assessment for knowing and producing quality work. New York: Routledge.
Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. doi:10.1080/02602938.2018.1463354
…
Providing high-quality, affordable, and enjoyable online learning opportunities for adults has never been more competitive or important. Effectively designed courses provide evidence of learning outcome mastery, efficient designs lead to quick results, and appealing designs are enjoyed by the learner. One challenge of addressing these tensions is that most higher education institutions lack the kinds of metrics that would allow leaders to make timely decisions related to curriculum and instruction. What if real-time data could provide proactive insight into a student’s experiences? Are there online-learner behaviors (rather than algorithms, which can be biased) that leaders could observe as early-warning signs of problems with the effectiveness of the instruction, the efficiency of the design, or the overall appeal of the courses? What are the key ingredients of learning, and could they be measured and monitored on a large scale?
Feedback is a powerful construct in the design of effective instruction, so it seems logical that feedback-delivery technology could be leveraged to increase efficiency by delivering immediate feedback, improve quality by delivering accurate feedback, and maintain appeal by being user-friendly. Many of these points of data are at least partially tracked by today’s learning management systems (LMSs) and adaptive learning courseware technologies.
I tested this hypothesis in a correlational study. First, I compared learners’ feedback experiences with their achievement on standardized exams. Second, I compared learners’ feedback experiences with their satisfaction as reported on end-of-course surveys. To evaluate the learners’ feedback experiences, I gathered data from the last three online courses they took before completing their academic degree programs. I wanted to learn about the cumulative effects on a student who received, for example, great feedback from Professor A but less effective feedback from Professors B and C. At the same time, would learners who had three great experiences with feedback in a row be more likely to learn and to enjoy their learning? The research question guiding my work is this: Are there correlations between learner achievement, learner satisfaction, and several measurable dimensions of the learner’s experience with feedback?
Learners’ feedback literacy, i.e., their capacity to seek, understand, and take action on feedback to enhance the quality of their future work (Carless & Boud, 2018), is unlikely to be developed without feedback-literate teachers. One aspect of teachers’ feedback literacy is their willingness to adapt, reflect upon, and refine the feedback strategies they use with students (Winstone & Carless, 2020). When educators notice that feedback does not promote student uptake or has little or no impact on student learning, they need to be willing to change their entrenched feedback practices in favour of experimenting with new pedagogic approaches. However, as teacher feedback literacy is a relatively new research area, the current literature provides little insight into how these processes may occur. There is a need, therefore, to explore what motivates practitioners to enhance their own feedback practices and how the growth of teachers’ own feedback literacy may subsequently impact students as well as fellow teachers.
The aim of this lightning talk is to recount the speaker’s journey as a teacher, feedback researcher, and feedback intervention designer. The talk will demonstrate how self-reflective enquiry into the feedback practices used with direct-entry students at a major Australian university stimulated the speaker first to undertake classroom action research and then to pursue PhD study in student feedback literacy. The speaker’s reflections, supported by a review of the relevant literature, subsequently informed the design of an ipsative feedback intervention, implemented with three groups of students between February and May 2020. The intervention focused on placing students’ individual progress at the centre of feedback practice and on providing opportunities for individual goal-formation and uptake of feedback. The talk will highlight how the process of designing and implementing the intervention has increased the speaker’s interest in students’ judgements and emotional responses to feedback, thus strengthening the student-teacher partnership.
References:
Carless, D., & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315-1325. doi:10.1080/02602938.2018.1463354
Winstone, N. E., & Carless, D. (2020). Designing Effective Feedback Processes in Higher Education: A Learning-Focused Approach. London: Routledge.
Students are not alone in needing to develop feedback literacies, but they can feel alone if teaching is something that is done to them by educators. This paper will explore how thinking about connections and relationality leads us towards new ways of thinking about how students’ and teachers’ experiences can be interconnected, both with one another and with a wider context. Sharing our own experiences as academics developing feedback literacies can be powerful. Normalising failures, expressing vulnerability, and being open about our continued engagement in learning processes can be transformative for both student and teacher, meaning that teaching and learning become entangled and that teacher and learner become co-learners. During this paper, I will explore theory to discuss how pedagogy can become a matter of relations and lead us towards a ‘pedagogy of response-ability’ (Bozalek et al. 2018) where we can share learning and teaching in new ways. I will also draw upon recent research (Gravett et al. 2020) to disrupt the binary between learner and teacher, and I will explore practical strategies for how we might enact relational pedagogies in the classroom, using storytelling, feedback exemplars, and artefacts. Ultimately, I will consider how we can experiment with new ways of thinking about feedback literacies, leading us to new ways of thinking about relationships in learning and teaching.
References
Bozalek, V., Bayat, A., Gachago, D., Motala, S. and Mitchell, V. (2018). ‘A pedagogy of response-ability’. In Bozalek, V., Braidotti, R., Shefer, T. and Zembylas, M. (Eds.), Socially just pedagogies: Posthumanist, feminist and materialist perspectives in higher education, pp. 81-97. London: Bloomsbury.
Gravett, K. Kinchin, I. M., Winstone, N. E., Balloo, K., Heron, M., Hosein, A., Lygo-Baker, S. and Medland, E. (2020). The development of academics’ feedback literacy: Experiences of learning from critical feedback via scholarly peer review. Assessment & Evaluation in Higher Education, 45 (5), 651-665.
Feedback literacy is important not only to students and teachers but also to academics. In particular, academics and researchers who are actively involved as peer reviewers for journals need to develop the capacity, ability, and disposition to provide constructive feedback to authors. In this presentation, I argue that it is especially crucial to develop the feedback literacy of peer reviewers because they face more constraints than feedback givers in other contexts (e.g., education). For instance, the identity of the authors is usually unknown to peer reviewers, making it difficult to construct feedback dialogues; other hurdles include restrictions on the mode of feedback and the power (im)balance between reviewers and authors.
Despite the above, not much formal training is available to equip peer reviewers to be feedback literate; the rather mystified scholarly peer-review process, which is usually carried out individually and “in the dark”, also discourages learning from observation. To demystify the feedback process of scholarly peer review and to share first-hand experiences, this presentation reports a collaborative autoethnographic study of two early-career researchers (ECRs) who are active journal peer reviewers. Since 2017, these two peer reviewers have reviewed for 22 international journals in various disciplines and completed 67 reviews. Recently, they were awarded the Reviewer of the Year Award by Routledge and Higher Education Research & Development, a top-tier journal in higher education. Informed by conceptual frameworks of feedback literacy (Carless & Boud, 2018; Carless & Winstone, 2020; Chong, 2020) and networked ecological systems theory (Neal & Neal, 2013), personal narratives and reflections of the two peer reviewers will be shared. Implications for supporting less experienced peer reviewers (especially ECRs and doctoral students) to become feedback-literate peer reviewers will be discussed.
Assessment and feedback approaches can strongly influence students’ learning and engagement throughout their university experience. However, the assessment and feedback practices used across higher education often take a procedural focus that maintains the status quo. There is a continued, overwhelming emphasis on summative assessment, which also translates into a dominance of one-way feedback practices across academic disciplines and institutions. Dialogic feedback is making inroads into current practice but is not yet widespread, and practice often overlooks the research suggesting the positive impact it can have on learning, student attainment, engagement, and attendance. Higher education’s focus on graduate attributes is spreading through the curriculum, with authentic and integrative assessment becoming more and more prominent in course design and implementation. With the increased emphasis on digital skills and the recent Covid-19 global pandemic, this has undoubtedly risen up the agenda and will play an ever-increasing role in the future construction of the curriculum, but this research highlights the need for synthesis between these elements. Assessment and feedback practices are often disjointed, limiting their possible impact on student attainment and engagement, whilst also being summatively focused and weighted at the end of a module/programme.
Observing people who use computers at work can be difficult. A person working with physical objects and physical technologies behaves in ways that an observer can readily track. For example, in early motion and time studies, the Gilbreths devised a system of 18 elemental movements (e.g., select, grasp, move, inspect) to analyze what workers did. A person working with digital objects and digital technologies poses a greater challenge for the observer because small, nearly indiscernible actions (such as typing a single letter) may initiate a series of work actions on the computer. Worse still, a person may be hard at work when away from the computer while software programs run “in the background.” In this talk, I discuss the methods that I developed with my colleagues to combat these issues in our multi-year field study of engineering work and technology. Our methods blend the industrial engineer’s eye for detail with the ethnographic tradition of observation and interpretation. I discuss in particular methods for collecting and analyzing digital objects and for understanding the array of digital technologies in a workplace.