Membership will help maximise skills and identify new ideas, products and insights  

Belfast-based Artificial Intelligence startup Liopa is delighted to announce that it has become a member of Northern Ireland’s Connected Health Innovation Centre (CHIC). CHIC seeks to develop companies operating in Northern Ireland through targeted research and development. 

CHIC is funded through Invest NI’s Competence Centre Programme and aims to transform healthcare through business research. The £8m centre, based at Ulster University, has a membership of over 30 companies from across the healthcare and technology sectors. It has delivered over 30 research projects with industry collaborators, driving innovation and product development. This, in turn, has led to better health outcomes and economic returns for Northern Ireland. 

On welcoming Liopa as a CHIC member, Centre Director David Branagh commented, “It is great to see Liopa become a member of the Connected Health Innovation Centre as part of the growing life and health sciences industry in Northern Ireland. Liopa has a highly innovative and revolutionary technology which has the potential to deliver real impact into people’s lives. We look forward to working with Liopa as their Visual Speech Recognition technology develops.” 

Professor Jim McLaughlin is one of two principal investigators in CHIC. He said, “The research which originates from CHIC and the products which member companies, such as Liopa, bring to market are revolutionising healthcare. CHIC provides a vehicle for health service and end user engagement, helping along an often-complex path to adoption and impact. We’re delighted Liopa is joining us in leading the way in transformational research for Connected Health.” 

“We’re excited about joining CHIC,” said Liam McQuillan, Co-founder and CEO of Liopa. “Being a member will help us to go to market at greater pace and scale. And we’ll have the opportunity to collaborate with some of the most innovative NI companies in the connected health arena.”  

Today’s best online authentication systems use multi-factor authentication – a way of verifying a person’s identity using: 

  • something you know (e.g. a PIN or password) 
  • something you have (e.g. a keyfob) 
  • something you are (biometric authentication)
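The combination of these factors can be sketched in a few lines; the checker callables and the two-factor threshold below are hypothetical stand-ins, not any specific vendor’s API:

```python
def authenticate(user, password, token_code, biometric_sample,
                 check_password, check_token, check_biometric,
                 required_factors=2):
    """Grant access only if at least `required_factors` of the three
    independent factors verify. The check_* callables stand in for
    real verification backends (hypothetical)."""
    results = [
        check_password(user, password),           # something you know
        check_token(user, token_code),            # something you have
        check_biometric(user, biometric_sample),  # something you are
    ]
    return sum(results) >= required_factors
```

Requiring two of three factors means that compromising any single factor – a leaked password, a stolen keyfob – is not enough on its own.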

Something you are 

The ‘something you are’ factor provides a very accurate and reliable user authentication method by identifying the individual from a unique physiological or behavioural characteristic, e.g. fingerprint, voice, face, lip movement or keystroke analysis. These biometric techniques are accurate, easy to use and difficult to compromise.  

Amongst the various biometric techniques, Facial Recognition (FR) has gained greatly in popularity, especially for authentication on mobile devices. The increasing popularity of ‘selfies’ shows that users are very comfortable with this form of interaction. 


However, FR is particularly susceptible to ‘spoofing’ – formally defined as ‘the presenting of an artificial replication of a piece of biometric data to the biometric system in order to try and gain access.’ FR systems can be ‘spoofed’ by high-resolution images of the subject held up to the camera. Better FR systems look for movement in the subject. However, these too can be ‘spoofed’ by a decent headshot video of the subject downloaded, for example, from a social media account. 

Liveness detection 

It is critical that FR systems can detect the presence of a ‘live’ user (aka Liveness Detection), as opposed to a static image or video of the subject. 

Various liveness detection solutions exist today for FR.  These can be categorised into: 

  • Hardware-based – the use of specialised sensors that measure, for example, facial thermograms or specific reflection properties of the eye. 
  • Software-based – present a challenge to the user and analyse the response to ascertain liveness, e.g. asking the subject to blink or smile.

Hardware-based solutions are generally very expensive and typically found in higher-end FR systems used, for example, in airport security.  For mobile-based FR authentication, software-based liveness checking solutions are commonly deployed.   

A good software-based solution should be easy to use and provide strong liveness detection – getting the correct balance between security and convenience is seen as critical.  Some options, such as asking the user to blink or smile, are very easy to use. But they can be easily spoofed if video of the subject blinking or smiling can be obtained. Other options require the user to move their mobile device in a random pattern whilst keeping their head at all times within an on-screen oval.  Such solutions are difficult to use.    

Liopa has developed LipSecure. It’s a software-based liveness checker that leverages our AI-based lipreading technology to deliver an easy-to-use, yet highly robust, anti-spoofing solution.  Working alongside a partner’s FR technology, LipSecure prompts the user to speak/mime a random sequence of digits appearing on screen.  The response is analysed for accuracy and a decision made on whether the user is a live person. LipSecure is available to trial today – get in touch to find out more. 
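The challenge-response flow described above can be sketched as follows. This is a minimal illustration only: the function names, the digit-matching logic and the acceptance threshold are hypothetical, and the `recognise_digits` callable stands in for a Visual Speech Recognition backend rather than Liopa’s actual API:

```python
import random

DIGITS = "0123456789"

def generate_challenge(length=4):
    """Generate a random digit sequence for the user to speak/mime.
    Randomness is what defeats replayed videos: a pre-recorded clip
    cannot anticipate the sequence shown on screen."""
    return "".join(random.choice(DIGITS) for _ in range(length))

def liveness_check(video_frames, challenge, recognise_digits, threshold=0.8):
    """Compare the VSR engine's reading of the user's lip movements
    against the on-screen challenge and decide whether a live person
    is present. `recognise_digits` is a hypothetical stand-in for a
    lipreading backend that returns the digits it believes were spoken."""
    predicted = recognise_digits(video_frames)
    matches = sum(p == c for p, c in zip(predicted, challenge))
    score = matches / len(challenge)
    return score >= threshold
```

Because the digit sequence is freshly generated for each attempt, an attacker holding up a photo or a previously recorded video has no way to produce the matching lip movements.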

Liopa will be working with the Defence and Security Accelerator (DASA), which funds innovative, exploitable ideas that could lead to a cost-effective advantage for UK armed forces and national security.

DASA has selected Liopa to take part in a new initiative. It will be investigating how behavioural analytics can improve understanding and measurement, help make confident and ethical predictions, and guide better judgements on interventions for defence and security.

DASA, a cross-Government organisation, finds and funds exploitable innovation to support UK defence and security quickly and effectively, and support UK prosperity. Its vision is for the UK to maintain its strategic advantage over its adversaries through the most innovative defence and security capabilities in the world.

Liopa will leverage its existing Visual Speech Recognition (VSR) technology, which deciphers speech from analysis of lip movements, for activities such as keyword spotting. The existing VSR engine takes video of one or more subjects speaking as input. It uses advanced AI-based techniques to predict the most likely utterances. Liopa will adapt its technology to identify utterances of specified words in uploaded video content where audio is either not present or of very poor quality.
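Once a VSR engine has produced predicted utterances, spotting specified words reduces to scanning those predictions. The sketch below assumes the engine emits (timestamp, phrase) pairs; that interface and the function name are illustrative, not Liopa’s actual output format:

```python
def spot_keywords(utterances, keywords):
    """Scan (timestamp, predicted_phrase) pairs from a VSR engine
    for occurrences of any watched keyword, returning where and
    what was found. Matching is case-insensitive."""
    watched = {k.lower() for k in keywords}
    hits = []
    for timestamp, phrase in utterances:
        for word in phrase.lower().split():
            if word in watched:
                hits.append((timestamp, word))
    return hits
```

In the audio-free setting the article describes, a pipeline like this would run entirely on the visual predictions, with no dependency on the soundtrack.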

“This competition set out to find and fund a wide range of exciting and diverse proposals to advance Behaviour Analytics capabilities for the Defence and Security sector. A large number of high quality proposals were received and we are delighted to offer Liopa this contract through the Defence and Security Accelerator,” commented Richard Leigh, Influence Programme Manager, Defence Science and Technology Laboratory.

Liam McQuillan, Founder and CEO, Liopa, said, “This represents a considerable stamp of approval. We were able to show how our idea will work, and how it fits in with a larger ecosystem and other data analytics feeds. We’ve the relevant Artificial Intelligence expertise and capability in-house, and we’ll also be looking to grow our team of experts in Belfast.”

Liopa secures Innovate UK funding, along with partners Lancashire Teaching Hospitals NHS Foundation Trust and Queen’s University Belfast. They will deploy a communications aid for tracheostomy patients, aimed at improving patient engagement and autonomy.

Liopa, a spin-out of the Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast (QUB), has announced that it is to deliver a prototype patient/carer communications aid. Tracheostomy patients will use it in critical care environments.

Working with Lancashire Teaching Hospitals NHS Foundation Trust and Queen’s University Belfast, Liopa will develop SRAVI (Speech Recognition App for the Voice Impaired). Compared to the limited alternatives available, SRAVI will provide an easy-to-use, accurate and cost-effective method of communication between these patients, their family members and healthcare staff. SRAVI will integrate with LipRead, Liopa’s artificial intelligence engine for Visual Speech Recognition.

The initial project will focus on a select group of tracheostomy patients (approximately 10,000 tracheostomies are performed annually in the UK) who struggle to vocalise but can move their lips normally. Whilst the initial prototype will support a limited vocabulary in English, the application can be further developed to support larger vocabularies across multiple languages.

Clinical Professor Danny McAuley at QUB’s Wellcome-Wolfson Institute for Experimental Medicine and Consultant at the Belfast Trust commented, “The inability to communicate during an ICU stay is a major source of morbidity for patients, family and staff. A patient’s non-verbal attempts to communicate are often difficult to understand. This can be frustrating for patients and carers. This novel approach may allow better communication between the patient, staff and family from an early stage.”

“This is an innovative application of our proven AI-based Visual Speech Recognition (VSR) system LipRead. LipRead analyses and translates lip movements into recognisable words. The technology allows the translation of lip movement to text using an app on a mobile device. It requires very little training and is inexpensive,” said Liam McQuillan, Co-founder and CEO, Liopa. He continued, “SRAVI can be deployed on commodity smartphones and tablets that can be used by multiple patients. Alternative technologies, such as ‘eye-gaze’ systems, require bespoke hardware and are generally much more expensive.”

Shondipon Laha is a Consultant in Critical Care and Anaesthesia at Lancashire Teaching Hospitals. He explained further, “This project will address a government priority to implement new digital solutions in the NHS. SRAVI will deliver improved patient-carer communications for patients with tracheostomies, thereby reducing rehabilitation times in expensive ICU settings.”

The project will run for 9 months. It will include an evaluation phase, carried out in hospital critical care environments in Lancashire and Belfast. It has been funded by UK Research and Innovation. This new organisation brings together the UK Research Councils, Innovate UK and Research England. It creates the best environment for research and innovation to flourish, to ensure the UK maintains its world-leading position in research and innovation.

Find out more about what lipreading is in this short video…

Belfast AI Startup Liopa Raises Seed funding on the Syndicate Room crowdfunding platform

A spin-out from Queen’s University Belfast and the Centre for Secure IT, Liopa is developing lipreading technology to enable visual speech recognition. 

AI-based technology startup Liopa has completed a very successful fundraising campaign on the SyndicateRoom crowdfunding website. The company raised 2.5 times its target amount in a four-week period. It received strong backing from a number of angel investors and the Fund Twenty8 EIS fund. 

Founded in 2015, Liopa is commercialising over 10 years of research in the field of speech and image processing. The company’s technology can determine speech by analysing the movement of a user’s lips as they speak into a camera. In addition, the technology can be used to prevent “spoofing” and security issues in facial recognition systems. 

Liopa’s primary focus is improving the accuracy of voice-driven applications, which have risen in popularity. Virtual assistants such as Apple’s Siri and Amazon’s Alexa have brought voice interaction into the mainstream, and corporations such as Google and Sonos are following their lead. These voice-driven systems, however, rely on audio speech recognition (ASR) to determine speech. This means their accuracy deteriorates as real-world audio noise increases – for example, in a busy restaurant or outside on a windy day. 

Liopa’s visual speech recognition technology, LipRead, is designed to decipher speech from lip movements. It is therefore agnostic to audio noise. Liopa hopes to augment existing voice driven systems in real-world environments, improving accuracy when background noise is present. The company calls this usage of LipRead “ASR-Assist”. 

Speaking about the funding round, Liopa’s CEO Liam McQuillan said: “This investment will allow us to grow our engineering capability and AI talent.” He continued, “We’ll be able to accelerate the exciting developments we have planned in our roadmap, and protect our valuable IP.”

SyndicateRoom’s co-founder, Tom Britton, commented, “It’s no wonder the technology being developed by Liopa is so incredible. The team behind it have spent a combined 50+ man-years researching or developing the technology. Their backgrounds range from senior academics to C-suite commercial roles, from startups to the likes of Intel. The applications for their platform are wide-ranging: everything from helping law enforcement decipher what’s been said on CCTV footage to giving those who have lost their ability to vocalise a new way to easily communicate. We’re delighted to play a role in such an innovative technology that is applying machine learning and AI for an ultimate good.” 

Launched in September 2013, SyndicateRoom is an online investment platform. It has helped 170+ early-stage UK businesses secure more than £215 million in funding through its investor-led equity crowdfunding model.  

Watch the Liopa company overview video…

What is lipreading?

Lipreading is a communication technique used by the hard-of-hearing. Unlike sign language, it doesn’t require both parties to be trained in the technique. From an accuracy perspective, however, human lipreading is generally poor. Indeed, it requires intense levels of concentration from the lipreader. As such, it is not a favoured communication technique for the hard-of-hearing.  

The challenges

The optimal scenario for the lipreader is a face-to-face engagement with someone they know – ideally whose lip movements are familiar. Lipreading strangers is much more challenging. More often than not, interactions and environments are not ideal. Lipreading multiple speakers in a group is virtually impossible. People do not turn to face the lipreader, or speak one at a time in an orderly fashion!  Additionally, different mouth shapes, facial hair, rate of speech and distance from speaker all create problems for even the best trained lipreaders.  

Low accuracy

As a result, the accuracy of human lipreading is unfortunately very low.  Most lipreaders actually try to pick out keywords and ‘fill in the blanks’ given the context of the conversation.  In fact, lipreading is said to be 80% guesswork! Studies have shown that the best performing lipreaders struggle to achieve greater than 50% accuracy in ideal conditions.  These accuracy levels tail off markedly in longer tests, as the lipreader tires. Amongst the standard hearing population lipreading accuracy is about 10% – 1 word in 10.    

Why being lipreader “friendly” is important

Effective communication techniques allow the hard of hearing to stay connected to the world around them. They build confidence and develop social and communication skills. Not being able to understand what is being said can be frustrating and lead to a sense of isolation. Communication is part of human contact and is vital for mental well-being.  It is important that the hearing population are aware of the difficulties the hard of hearing have in using techniques such as lipreading. They should ensure, where possible, that they communicate in such a way that is best for the lipreader.  

You can read some suggestions on how to be more lipreader “friendly” in Healthy Hearing: Resolve to Improve Your Lipreading Skills and this Deaf Expressions blog: Five Tips to Make Lipreading Easier. 

What we’re doing to help

Liopa is developing an automated lipreading platform – LipRead. We use videos of people speaking to train our AI-based LipRead platform to recognise speech from lip movements. LipRead is initially targeted at constrained vocabularies – for instance, the command set for an in-vehicle voice-activation unit. It can be used to improve the accuracy of other current speech recognition technologies, especially those which analyse audio and are susceptible to background noise. 

Over time, LipRead will support larger vocabularies and more languages with increasing accuracy. With these additional capabilities, we plan to provide a smartphone application that can assist the hard-of-hearing in the difficult task of lipreading.

Contact us to find out more! 




Liopa commences trials of AI-based LipSecure with established User Authentication and Identity Verification providers to provide enhanced anti-spoofing capabilities.

Read more


Liopa has received an Invest NI Grant for Research and Development, targeted towards a project to develop LipRead, a Visual Speech Recognition (VSR) product.

Read more

Belfast-based technology specialist Liopa has commercially launched the world’s first automated Lip Reader. The service will initially be used to prevent ‘spoofing’ in Facial Recognition systems where there is a threat of compromise from images or videos of the subject being presented by an imposter.

Read more

Investment secured

Liopa to bring visual speech-recognition platform to market. Belfast’s Liopa has raised $1m in funding led by Techstart NI and QUBIS to commercialise its LipRead platform for a global audience. Read the article in full here.