After attending the fantastic EFYE 2019 Conference in beautiful Cork this summer, one of our project members, Pete Crowson, reflects on his experience.
There is this fabulous moment at a conference when you see one delegate listening to another explain their unique approach to supporting students, and their eyes light up because they had never considered it before. Conferences bring us out of our bubble and give us new ideas to work with. We came to EFYE to learn, right?
In my opinion, there is one thing better than finding out about a new approach or that exciting innovative idea, and that is having your own thoughts and opinions challenged. It’s good to learn about what others are doing, but debate is great. At this year’s EFYE, there were lots of fantastic ideas being shared, but we are all (mostly) pulling in the same direction.
We heard about the brilliant work from John and Betsy that we are all now trying to build upon, we were inspired by the rallying cry from Sally, and we all nodded along with Gemma as she said “if you were to ask me what is the most important thing in the world? It’s people, it’s people, it’s people.” We all agree that the transition into University shouldn’t be crammed into a week like it often is, we all want to build on peer support and student mentoring, and we all think Tinto was pretty important.
There was, however, one topic raised during this year’s conference that consistently caused some disagreement: collecting and using student data. We all know data is important, but the disagreements come when we make claims about how much it lets us understand the student experience. We all know that collecting student data is a part of University, but some feel we’re already too much like Big Brother and have gone too far. We want to include student feedback at all points of education, but we also know that students get over-surveyed and often don’t give completely honest or useful comments when we ask for them anyway.
On the second day of the EFYE conference, I was sat in a workshop entitled “Identifying Students at Risk – The Problems Your Predictive Analytics Can’t See”. To my left were three young, confident delegates from Birmingham City University, describing one of their unique and innovative technical solutions for gathering data to identify students at risk: a feedback system called ‘Backloop’. This involves contacting students every two weeks to ‘check in’ on them. They are prompted to complete their ‘check in’, and this allows the University to gather a massive amount of feedback on its students. There is a belief in a duty of care for students, and frequently contacting them is a quick and powerful way to act on it. It is a really impressive system.
To my right were three experienced (and, as I found later that day at the conference dinner, absolutely delightful) Study Psychologists from various Finnish universities, who were left (literally) with mouths agape. To these delegates, supporting students requires intensive face-to-face interaction, peeling back the layers of the student’s psyche in order to understand the heart of their issues. How can this be measured by responding to a survey with an emoji? They have seen first-hand that it takes a trained specialist a great deal of time and effort to understand and change behaviour. What’s more, surely contacting students so frequently will just end up pushing them away?
Of course, this is EFYE, so these different perspectives led to a really interesting discussion rather than judgement and picking a side (we’ve had far too much of that already here in the UK), but the point is, it was the first time I had really seen some disagreement of approach. During that same workshop, there was a suggestion that technology can allow students’ locations to be tracked when they log onto the University VLE platform. There was a sharp intake of breath. Some clearly felt we are going too far and crossing a line here. There was a discussion about how accurately we can predict student outcomes based entirely on demographic details, entry grades, and how the student presents themselves during their first week. Now I don’t know about you, but I came out of Uni a completely different person to the one I was when I came in.
This was my favourite session of the conference, in part because of the wonderful people I met and the personal stories from Lee Richardson, but also because I felt a debate brewing in the room. The questions raised are all ones that we grapple with at Nottingham Trent with our learning analytics platform. When we use data points as proxies for engagement, is this enough information for us to base an intervention on? Why should our staff trust data over their own observations in the classroom? We don’t use students’ backgrounds in our algorithm, for ethical reasons, but it could arguably be more accurate if we did.
I wrote this not because I have the answers, but because it excited me that this was where we were asking the questions. Thinking about where we as a sector seem to be going with data, I was reminded of a quote from a famous movie: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Next year in Aarhus, I would like to see more sessions where we spend time challenging each other on what we are doing, because debate is great. Perhaps you’ll see me there with a session entitled “Why We Are Wasting Our Time Focusing on First Year Experience”; that might sound a little contentious, but I mean well. You can challenge my preconceived ideas, as well as inspire me with your brilliant ones.