“Visual Speech Recognition” (VSR) has the catchy ring of a game-changing oxymoron, like “Virtual Reality” or “Artificial Intelligence.” All three pair seemingly contradictory terms, yet each promises to prove invaluable as individuals, enterprises, and government agencies take charge of unprecedented computing power and data access designed to make their lives better and everyday tasks easier.
VSR may have a lower profile than the other two, but it stands out for the highly practical set of applications it enables. It starts with lip reading: the ability to observe an individual and infer speech and intent based solely on the unique patterns and movements of the lips.
In this free whitepaper, Dan Miller, lead analyst and founder of Opus Research, profiles how Belfast-based Liopa is combining VSR with leading-edge neural network techniques to improve the performance of existing voice-first services and deliver business impact in real-world situations where environmental conditions are non-ideal for speech recognition.
You can download the white paper here.