Audio data quality is essential for sound recognition. There is, however, another reason why taking data seriously is so important: the significant financial and reputational risks that poor practice creates. This is the subject of a detailed whitepaper titled ‘Audio for Machine Learning: The Law and your Reputation’ that we’ve recently published.
The combination of sound recognition and movement detection is the subject of a new patent granted to us this week. Wearable devices such as smartwatches, TWS earbuds and AR glasses that can combine sound recognition and movement detection will open up opportunities to deliver powerful new and improved user experiences, ranging from a more granular understanding of physical activity to helping people navigate or discover the world around them.
We've been named in CB Insights' AI 100 List of the Most Innovative Artificial Intelligence Startups
Our latest independent consumer survey results are in and show strong demand for new AI-driven capabilities on true wireless stereo (TWS) earbuds. Carried out by the respected research agency Sapio Research among people who own true wireless earbuds in the US and UK, the survey shows that 84% of respondents purchased their earbuds in the last 12 months, yet nearly two-thirds already plan to replace them within the coming 12 months. This means that OEMs need to react quickly to meet consumer demand and win market share in one of the fastest-growing consumer electronics sectors. To illustrate the growth rate, analyst firm Canalys forecasts that TWS earbud shipments will reach 520 million in 2024, more than double the 250 million shipped in 2020.
In the latest series of TinyML UK talks, I share elements of our product design process, recount some amusing stories from deploying ML models in real-world environments, show how the real world can throw up surprising, unexpected and downright strange scenarios, and propose tools and techniques to help build ML models that really work. You can watch the whole presentation and gain access to the presentation slides on our website.
It has been an honour to contribute a lecture to Professor Vijay Janapa Reddi’s course on Tiny Machine Learning (TinyML), alongside other industry experts from Google, Qualcomm, Microsoft and more. The lecture is free to view here, together with other course material, and forms an add-on to edX’s main TinyML course.
For the second year running, we will be presenting a paper at ICASSP. The paper, titled ‘Improving Sound Event Detection Metrics: Insights from DCASE 2020’ and written in conjunction with Nicolas Turpault and Romain Serizel at Université de Lorraine, presents an in-depth analysis of how the Polyphonic Sound Detection Score (PSDS) can uniquely inform users about the performance of sound event detection (SED) systems in comparison with conventional metrics. Find out more and read about the paper here.
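For readers who want to experiment with the metric themselves, the open-source psds_eval Python package implements PSDS. The snippet below is a minimal sketch of scoring a single operating point; the file names, event labels and threshold values are illustrative placeholders, and the exact arguments are worth checking against the package documentation.

```python
import pandas as pd
from psds_eval import PSDSEval  # open-source PSDS implementation

# Ground-truth events: one row per annotated sound event (illustrative data)
gt = pd.DataFrame({"filename": ["clip1.wav", "clip2.wav"],
                   "onset": [0.5, 2.0],
                   "offset": [3.0, 4.5],
                   "event_label": ["dog_bark", "glass_break"]})

# Audio metadata: total duration of each evaluated file, in seconds
meta = pd.DataFrame({"filename": ["clip1.wav", "clip2.wav"],
                     "duration": [10.0, 10.0]})

# System detections at one operating point (e.g. one decision threshold)
det = pd.DataFrame({"filename": ["clip1.wav"],
                    "onset": [0.6],
                    "offset": [2.9],
                    "event_label": ["dog_bark"]})

psds_eval = PSDSEval(dtc_threshold=0.5, gtc_threshold=0.5,
                     cttc_threshold=0.3, ground_truth=gt, metadata=meta)
psds_eval.add_operating_point(det)

# PSDS summarises the PSD-ROC up to a maximum effective FP rate per hour
psds = psds_eval.psds(alpha_ct=0.0, alpha_st=0.0, max_efpr=100)
print(f"PSDS: {psds.value:.3f}")
```

In practice you would call add_operating_point once per decision threshold of your SED system, so that the score reflects performance across operating points rather than at a single tuned setting.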
As 2020 drew to a close, we were granted a significant technology patent. The key technique in this patent, called Post-Biasing, has been essential to how our software delivers exceptional performance in many diverse products and environments today. Read the full blog piece on why post-biasing is so important.
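The blog piece explains the details. As a loose illustration only, the sketch below shows the general idea that the name suggests: biasing a classifier's scores after inference, for example to trade sensitivity against false alarms per class in a given deployment. This is our simplified reading for illustration, not the patented method, and every name and value in it is hypothetical.

```python
import numpy as np

def post_bias(scores: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Apply per-class biases to raw model scores after inference.

    A positive bias makes a class easier to trigger (higher recall);
    a negative bias suppresses it (fewer false alarms). Illustrative only.
    """
    return scores + bias

# Hypothetical raw scores for three sound classes from one audio frame
raw = np.array([0.2, 1.1, -0.4])   # e.g. [baby_cry, dog_bark, glass_break]

# Hypothetical deployment tuning: damp dog_bark near a kennel,
# make glass_break more sensitive
bias = np.array([0.0, -0.8, 0.5])

biased = post_bias(raw, bias)
detected = biased > 0.0            # simple per-class decision threshold
print(biased, detected)
```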
Audio Analytic CEO, Dr Chris Mitchell, recognised as a top technologist in voice/sound in 2020
We announced that our ai3-nano™ software and Acoustic Scene Recognition AI technology are pre-validated and optimized to run in always-on, low-power mode on the Qualcomm® Sensing Hub. As part of the new Qualcomm® Snapdragon™ 888 5G Mobile Platform, the 2nd generation Qualcomm Sensing Hub was unveiled at the Snapdragon Tech Summit Digital 2020. As a result, smartphone OEMs can now create high-value benefits and features for consumers based on the phone knowing whether the user is in a chaotic, lively, calm, or boring acoustic environment.
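To illustrate the kind of feature this enables, the sketch below shows an app adapting its behaviour to a scene label reported by an always-on classifier. The label set matches the four scenes named above, but the function names and application logic are hypothetical; the actual ai3-nano API is not shown here.

```python
from enum import Enum

class AcousticScene(Enum):
    """The four acoustic scenes named in the announcement."""
    CHAOTIC = "chaotic"
    LIVELY = "lively"
    CALM = "calm"
    BORING = "boring"

def adapt_notifications(scene: AcousticScene) -> dict:
    """Map the recognised acoustic scene to notification settings.

    Hypothetical application logic: louder alerts in noisy scenes,
    gentler ones otherwise. Not part of any real ai3-nano API.
    """
    if scene in (AcousticScene.CHAOTIC, AcousticScene.LIVELY):
        return {"volume": "high", "vibrate": True}
    return {"volume": "low", "vibrate": False}

# Example: suppose the always-on classifier just reported "calm"
print(adapt_notifications(AcousticScene.CALM))
```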