In Part One of our AI Diary, we discussed whether generative AI tools like ChatGPT could ruin the quality of the internet. If AI erodes the monetisation models that fund new content, could it send the quality of online information into a rapid downward spiral? Read Part One of our report here.
Now, in Part Two, we’re looking at the following:
- Is there too much scaremongering about AI?
- AI and the hype cycle
- What are the good things that have come from AI?
- What is the worst thing to come from it? (Disinformation)
In early July we’ll release Part Three, which will examine how AI is affecting the next generation of learners. As university students rely on ChatGPT to write essays, how can we ensure people really understand their field of study? And how does that affect companies as they recruit new talent?
Scaremongering and AI
The scaremongering around AI reached fever pitch this month, as industry leaders issued a series of warnings about the pace of change and the need for regulation to catch up.
The Center for AI Safety released a statement saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
This sparked a fresh tide of hyped statements comparing AI to nuclear weapons.
The three categories of AI fears
In The New York Times’ On Tech column, Adam Pasick summed up the three categories of fears around AI:
- Joblessness
- Existential threat to humanity
- The Singularity – or, as he puts it, “the long-prophesied moment when technology changes everything”
The article went on to explain why these fears should be tempered – for example, technological advances throughout history have tended to create more jobs than they replaced.
Pasick wrote: “Throughout history, new technologies have tended to affect wages and inequality much more than the number of jobs available. It’s not yet clear how A.I. will affect these aspects of work — and there’s plenty of potential for bad news — but economists say mass unemployment isn’t likely.”
How the large tech players contribute to AI fears
Liopa’s Senior Machine Learning Scientist, Alex Cowan, said, “AI is not at the point where it can grasp human concepts. It would need to get there before all this fearmongering could become reality.
“I believe we are still years away – I don’t know whether we’d see that in my lifetime.
“It’s not smaller players that we need to worry about – it’s the big tech companies who operate under a veil of secrecy. They have huge resources, and a lack of transparency, where we don’t know what models they’re using, or how they’re training them. It’s quite opaque, even for those of us who work in the industry. And that lack of transparency has resulted in a lot of fear.”
AI and the hype cycle
Liopa’s CTO Fabian Campbell-West agreed. He said, “When I started my career in 2001, I was using AI then and have been ever since – it’s not new.
“A few years ago, there was a massive spike in interest because GPUs made things possible that had only been theoretical.
“I think it matches the Gartner hype cycle – you create excitement around a tech, and it reaches a peak, before it falls because reality doesn’t meet expectation.
“What’s been the most noticeable difference is that there are more ways for people to be aware of things – such as business intelligence and empowered decision-making. A lot of companies are buying tools to automate every aspect of the business, to try to optimise.
“Meanwhile the personal tech is driven by people being more introspective and looking at themselves – smart watches that monitor everything you do, including when you sleep – these innovations are based on AI.”
What good things have come from AI?
Fabian continued: “There are very few people who aren’t affected by AI in some way, and a lot of it is good.
“There are great positives in healthcare – including something like SRAVI – AI can give a new lease of life to people who struggle to communicate.
“Or, if wearing a smart watch makes you more conscious of taking the stairs instead of the lift, then that’s a good thing.
“AI is all around us – like anything, it can be used badly, or it can be used well.
“Hackathons run by AI for Good organisations are another example – it’s important for people to know that AI has already enabled a lot of good, positive things.”
Disinformation, and trusting your source
As AI becomes more ubiquitous, people will need to become more sceptical, rather than blindly trusting what they read. Where did this information come from? What is the source? This becomes even more difficult with ChatGPT, which infamously doesn’t list any source material or references.
Fabian pointed out that questioning your sources is ultimately a good habit.
He said, “The biggest threat is disinformation – how do you know the information you’re getting out of a model is correct? If you have an AI system help you by building an automated system, how do you know it’s not taking information from a source that’s wrong?”
In much the same way that blockchain is supposed to be irrefutable because it crowd-sources credibility checks, the enormous scale of the data that AI draws from is supposed to protect us.
Fabian said, “The reason AI works well is that if you gather huge amounts of information, you can assume most of it is correct. You can average out mistakes and errors.”

But crowd-sourced information isn’t always accurate. In specialised fields like science, engineering and medicine, where expert knowledge is scarce, mistakes may not average out – and the consequences could be serious.
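To make that averaging argument concrete, here is a minimal Python sketch (our illustration, not Fabian’s – all the numbers are assumptions): when thousands of independent sources are mostly right, a simple majority vote recovers the correct answer, but in a specialised field where a plausible myth outnumbers the few experts, the same vote confidently returns the wrong one.

```python
import random

random.seed(42)

def majority_vote(reports):
    """Return the answer that the most reports agree on."""
    return max(set(reports), key=reports.count)

# General knowledge: 10,000 independent sources, 90% of them correct.
general = ["A" if random.random() < 0.9 else "B" for _ in range(10_000)]
print(majority_vote(general))      # "A" – individual errors average out at scale

# Specialised knowledge: 50 experts say "A", but 500 confident
# non-experts repeat a plausible myth, "B".
specialised = ["A"] * 50 + ["B"] * 500
print(majority_vote(specialised))  # "B" – the majority is simply wrong
```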
How AI could sway an election
A detailed report by the Brookings Institution, How to deal with AI-enabled Disinformation, spells out exactly how an Election Day attack designed to sway the vote could unfold. In the hypothetical scenario, hackers create fake accounts that tweet claims of election centres being closed, discouraging people from going out to vote. Real people retweet the claims, thinking they are true, and radio and TV stations start to pick up the story. According to Brookings’ analysis, because this sort of attack is designed to happen rapidly and cause large-scale but short-term damage, social media networks are far less able to stop the disinformation spreading. In just one day, enough damage is done to affect the election.
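As a rough illustration of why that speed matters, here is a small Python sketch (ours, not from the Brookings report – the seed count, amplification and moderation rates are all assumed figures): if each post is reshared faster than moderators can remove it, the claim’s reach explodes within the single day the attack needs.

```python
# All figures below are illustrative assumptions, not Brookings data.
seed_posts = 200        # fake accounts claiming election centres are closed
amplification = 2.5     # average reshares per circulating post, per hour
removal_rate = 0.3      # fraction of circulating posts moderators catch per hour

posts = seed_posts
for hour in range(1, 13):              # a 12-hour Election Day window
    posts += posts * amplification     # real users reshare the false claim
    posts -= posts * removal_rate      # moderation removes a fraction
    print(f"hour {hour:2}: ~{int(posts):,} posts in circulation")
```

Even with 30% of circulating posts removed every hour, the claim still multiplies hour on hour – which is why, in Brookings’ analysis, platforms struggle to contain a one-day attack.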
Fabian explained why such an attack succeeds: “One of the reasons that works is that social media is AI-driven, and it’s too easy to spread information without needing to reference a source.”
He went on: “If you wanted some work done on your house, you wouldn’t go and ask 100 people – you’d consult an architect or an engineer.”
The solution seems to rest with us – the people using the internet, not the computers, need to be vigilant.
“Ultimately, it’s incumbent on everyone to be healthily sceptical about any information they might come across. Who is telling me this? Why are they telling me it? Is someone using this information to get me to do something?”
“The world is changing, and you can be with it or not – look for ways that it can enrich your life rather than considering it a threat,” he concluded.
Our own data, disinformed
Our R&D researcher, Matthew Blair, noted an example of disinformation very close to home. He said, “I asked ChatGPT to explain what our product, SRAVI, was, and it returned the answer that it was designed in India, not here in Ireland.” The screenshot below shows the answer Matthew received.
Alex Cowan said: “AI is supposed to assist us. As a society we work very hard, long hours, five days a week. Ideally with enough AI in the world we should drop that back – it should make our work/life balance better. It might make your job easier, and you’ll have more free time. And that fuels the economy because people will be out doing more things that they enjoy.”
She concluded, “No-one in the industry is saying ‘stop doing it.’ Everyone agrees that we need to keep researching AI.”
Keep checking our blog for the third and final part of our AI Diary.
How Liopa uses AI technology
Liopa is one of the few companies in the world focused purely on AI-based lip reading. The award-winning technology is based on decades of research, beginning at Queen’s University Belfast. The company sits at the intersection of three popular fields in computer science: AI, Computer Vision and Speech Recognition. For more about our healthcare-related lip-reading app, SRAVI, visit www.sravi.ai. For more about our other R&D applications, visit www.liopa.ai.