AI diary, part three
In this article, we’ll discuss:
- Will generative AI mean that a whole generation of university students will ‘cheat’ or cut corners rather than doing the hard work of learning?
- What does this mean for employers who are recruiting new talent?
- It’s known that bias can be baked into AI. Does this mean that minority groups will be even more marginalised?
In our first and second articles, we discussed the positives and negatives of recent developments in AI. On the whole, we concluded that the positives outweigh the negatives — and that AI research should continue.
But another potential pitfall is how university students are ‘writing’ essays using ChatGPT and Google’s Bard… and how that affects their learning.
If students use ChatGPT, are they learning anything at all?
Liopa’s Senior Machine Learning Scientist, Alex Cowan, said: “AI should make our lives easier and more efficient. However, the more you can assist, the less the person has to do themselves.
“The whole point of education is learning. You’re not learning if you’re getting something to do it for you. If all students start using ChatGPT to write essays, all the work is going to be at the same standard. That is a real concern.”
One of the main issues with AI is that it can have “hallucinations.” This is when the model goes off-piste: instead of faithfully drawing on its sources, it invents data and outputs information that’s entirely made up. There was a well-known case of a lawyer in New York who submitted a legal brief citing a whole slew of precedent cases that were entirely false – they were hallucinations from ChatGPT, which the lawyer admitted to using.
Alex says this limits how useful AI can be for education.
“Yes, you can use AI and it can give you some correct answers, but it’s still not there yet. You don’t have the transparency that you do with a human. It’s important to know what’s in the models. Sometimes it’s just generating stuff.”
Liopa’s CTO, Fabian Campbell-West, noted that cheating is as old as academia itself. He said, “People find ways to work around anything that’s measured – it’s been that way since the beginning of time. It’s not a new thing, it’s just a new vector – it’s an old problem with a new variant.
“The cheesy saying is that ‘you’re only cheating yourself.’”
Weeding out the bluffers in recruitment
How does that impact the recruitment of new talent at a company like Liopa? Will you have to suss out AI fakers during the vetting and interview process, and how hard might that be?
Fabian is well-positioned to discuss the talent search, as the leader of a team of developers.
“Recruitment should improve to weed out bluffers. The more sophisticated someone is at bluffing… if they’re very good at it, then maybe they are doing the job.”
Will interviews require a practical test more often?
Fabian said: “It depends on the role. Mostly, I’m looking for people who are problem solvers. I try to ask questions to uncover someone’s character as much as I can. Does this person self-motivate? Do they figure out how to do things that they don’t already know how to do? In software, this is hugely important. You’re constantly having to evolve. So I’m looking for a mindset rather than a skillset. I use tools every day now that didn’t exist when I was studying for my degree.”
Alex says she’s not overly concerned. “People who are fakers – at a shallow level, they could look like they know what they’re doing. But if you dive a bit deeper – and it doesn’t have to be that much deeper – they show themselves.”
Our R&D intern, Matthew Blair, is still in the middle of his university degree, so it was interesting to hear his thoughts on using ChatGPT. He said:
“It’s not something I think you’d go about solving – people will just need to know more. Online exams are much harder now because people will use ChatGPT to help them. Is there even much point in discouraging people from using generative AI, because if you graduate and get a job, you’ll be using it anyway? Surely, it’s good to know how to use it well.”
Baked-in bias
With regard to bias, the experts underlined that it comes from human minds, not from the AI itself.
“A few years ago, there was uproar about AI being racist or homophobic – but it’s not the models themselves, it’s the information inputted to the model,” Alex said.
Fabian added: “These AI models have been trained to take a certain input and produce a certain output. They don’t have feelings, understanding or emotions, so the algorithm doesn’t have any sort of inherent bias. If you train an algorithm with biased data, then it will be biased, but it’s not from the algorithm itself.”
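Fabian’s point — that bias lives in the training data, not in the algorithm — can be sketched with a toy classifier. Everything here is hypothetical for illustration: the group names, the “approve”/“deny” labels, and the invented history. The learning rule itself is completely neutral; it simply memorises label frequencies, yet it reproduces whatever skew its data contains.

```python
from collections import Counter, defaultdict

def train_per_group(examples):
    """A deliberately simple 'model': learn the most frequent label
    seen for each group in the training data, then always predict it."""
    by_group = defaultdict(Counter)
    for features, label in examples:
        by_group[features["group"]][label] += 1
    return lambda features: by_group[features["group"]].most_common(1)[0][0]

# Hypothetical historical decisions in which group B was denied far more
# often than group A. The bias lives in these labels, not in the algorithm.
biased_history = [
    ({"group": "A"}, "approve"),
    ({"group": "A"}, "approve"),
    ({"group": "A"}, "approve"),
    ({"group": "B"}, "deny"),
    ({"group": "B"}, "deny"),
    ({"group": "B"}, "approve"),
]

model = train_per_group(biased_history)
# The trained model faithfully reproduces the skew it was shown:
print(model({"group": "A"}))  # approve
print(model({"group": "B"}))  # deny
```

Retraining the same neutral rule on balanced labels would remove the disparity — which is exactly the experts’ argument that the data, not the model, is the source of bias.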
Global collaboration is needed to regulate AI
All of the experts agree that collaboration is needed when it comes to regulating AI.
Alex said: “A lot of people from a lot of different backgrounds need to come together and discuss it. The people who build it are generally given a project and they just need to get it done. You don’t so much consider the ethics when you’re hired to build something. Collaboration is crucial for setting regulations.
“Everyone gets frightened by change, but we always have to keep moving forward and focusing on the positives that AI can bring.”
How Liopa uses AI technology
Liopa is one of the few companies in the world focused purely on AI-based lip reading. The award-winning technology is based on decades of research that began at Queen’s University Belfast. The company sits at the intersection of three popular fields in computer science: AI, Computer Vision and Speech Recognition. For more about our healthcare-related lip reading app, SRAVI, visit www.sravi.ai. For more about our other R&D applications, visit www.liopa.ai.