Thanks to the incredibly sophisticated artificial intelligence in WIDEX MOMENT™ hearing aids, you can. And even better, there are now two ways for you to personalise your sound. So now, you can make every MOMENT™ your own.
With Widex My Sound, you can easily adjust your hearing aids in any moment through the intuitive, simple-to-use app on your smartphone or tablet, choosing the personalisation option that best suits your needs. Even better, you can save your settings for the next time you are in the same environment.
Two paths to personalisation to suit your needs
SoundSense Learn™ - launched in 2018, this original artificial intelligence feature was the first ever used in a hearing aid. Three years on, Widex hearing aids are still the only ones that use artificial intelligence to let you personalise your sound so precisely. This simple-to-use feature helps you find your perfect sound in any environment by offering a series of A/B sound comparisons to choose from, using artificial intelligence to learn your preferences and make ever-smarter recommendations. However, we know that not all users want to be so engaged with their hearing aids; some prefer their sound to be personalised automatically.
‘Made For You’ is a brand-new element that gives you a second option for personalisation: it automatically recommends two ideal settings for whatever environment you happen to find yourself in. The recommendations are based on data gathered from thousands of users around the world who have previously used SoundSense Learn to create their own perfect settings for different environments. This means the recommendations are smart and highly accurate, taking all the legwork out of finding that sweet spot for a more instantaneous way to personalise your sound. Made for users, by users.
We caught up with Oliver Townend, Widex lead Audiologist, to find out more about this exciting upgrade to the AI capabilities of WIDEX MOMENT™.
What is My Sound and how does it work?
My Sound is all about AI, the individual user and users all around the globe coming together to deliver smart, fast and personal solutions in sound. We know our automatic features do a lot of great work for individual wearers, but sometimes what the wearer wants is different to what the automatic features deliver.
When we launched SoundSense Learn™ (SSL) in 2018, we began a journey that brought AI and the individual wearer together to solve this problem. By involving the user directly in training SSL, we could quickly find individual sound solutions in a simple but powerful feature. The other great outcome of thousands of users around the world interacting with our AI interfaces was the huge amount of preference data created. Using AI modelling and clustering, we could turn all this rich data into highly qualified recommendations to offer users when they find themselves in a particular listening scenario. The best bit is that this is a combined effort of thousands of users and AI, leading to benefits for even more users.
What is the technology behind My Sound and how was this developed?
The technology is part AI, part data and part real people! We had all this rich preference data from real people, shared via a consented and secure data architecture. Our very smart AI scientists put together a system that learns from this data to create the suggestions, and finally we built a new app with a new home for our AI features: ‘My Sound’.
We are at the forefront of AI features for personalising sound. We have incredible automation that really lets people just get on with their listening day, but when someone wants something unique and different, we now have more than one way to provide a potential solution. We expect even more innovations to come. It felt right to have one place in the app where your sound is found, and that is ‘My Sound’.
How is the data collected? How much data has been collected since SoundSense Learn was first introduced?
The data is collected via the app - with consent, of course - and handled incredibly securely in the cloud. We are only interested in information such as the situation someone is in and their sound preferences in that situation; this is how we can make smart recommendations to others. As for the amount of data, it is getting difficult to quantify. We have actually had to use representative samples of our large pool of data to make some calculations; in other words, we have too much to handle!
What are the benefits to the hearing aid user?
The clear benefit is being able to get the sound you want. Widex aims to provide that automatically, and we are very successful at doing so - with our Fluid Sound Technology and multiple Sound Classes, we can steer the hearing aid through most day-to-day situations. But even the most sophisticated automatic system has to follow pre-determined rules, so on occasion it will make a choice that is not exactly what the hearing aid user wants.
Widex is the only company using AI to solve this problem, and by doing so we can help a user out of a difficult listening situation in moments: either by using a recommendation from our app or by finding a unique sound setting just for them. The solution is in the palm of their hand; this is incredibly empowering and puts the user back in control of their hearing.
What are the benefits to hearing care professionals?
Having satisfied, empowered clients is a great benefit to professionals. We want to keep in touch with our client base, but it is also great if clients can help themselves in the moment. Plus, offering ground-breaking, cutting-edge AI technology helps you differentiate your services.
In what kind of situations would My Sound be used?
I don’t think there is really a situation it wouldn’t be used in, except perhaps one where you shouldn’t be using your phone! Everyone is unique and we all have our tastes. Until automation can accurately predict what each individual wants in a given moment, there will always be moments when the user would prefer something else. If you can think of such a situation, then you have a moment when My Sound could be used.