Personalized Spatial Audio in iOS 16 (adding HRTF to AirPods?)

Franck

Not sure if you have seen, but iOS 16 will let you scan your head and ears using the TrueDepth camera on your iPhone/iPad and get a better binaural experience with Dolby Atmos tracks in Apple Music.

https://www.techradar.com/news/apple-is-adding-personalized-audio-to-airpods-by-scanning-your-ears
I think this is why the recommendation is to mix an album in a 7.1.4 studio: the binaural rendering will constantly change and improve until it feels like you are in a 7.1.4 room.

Still, this is rendered using the Spatial Audio renderer, not the Dolby Atmos renderer; the binaural experience between the two is different, and I welcome some competition to quickly improve the end-user experience.
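For anyone wondering what a binaural renderer is actually doing under the hood, here's a toy sketch (Python/NumPy, with made-up two-tap impulse responses; real HRTFs are measured or modeled per listener, which is exactly what the ear scan personalizes). The core idea: convolve a mono source with a left-ear and a right-ear head-related impulse response (HRIR), so that level and timing differences between the ears encode direction.

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear impulse responses (HRIRs)
    to produce a 2-channel binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    # Pad to equal length so the channels can be stacked.
    n = max(len(left), len(right))
    left = np.pad(left, (0, n - len(left)))
    right = np.pad(right, (0, n - len(right)))
    return np.stack([left, right])

# Toy example: a single click, with the far ear delayed and quieter,
# which the brain reads as "source off to the left".
fs = 48000
mono = np.zeros(fs // 100)      # 10 ms buffer
mono[0] = 1.0                   # impulse
hrir_l = np.array([1.0, 0.3])   # near ear: loud, early
hrir_r = np.concatenate([np.zeros(30), [0.5, 0.15]])  # far ear: ~0.6 ms later, quieter

out = binaural_render(mono, hrir_l, hrir_r)  # shape (2, samples)
```

A real renderer does this per object, with long measured HRIRs that depend on direction (and, with personalization, on your ear shape), but the level/delay asymmetry above is the basic cue.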

I also hope this will be made available in Logic Pro soon.

What's your take?
 
Apparently this is already available for people on beta builds of iOS 16 (dev only; you need a Mac and a $100-a-year developer account. There's free ways around this, but I've always been hesitant to use them).

The free public beta should go up sometime next month for those who want to try it out (I know I will).

Fair warning though, iOS betas, especially real early builds like this… are frequently… messy…
 
Will Apple put this personal information in their database to build an even larger profile of their users?
Probably not. Since Tim Cook took over Apple when Steve passed, he's been really strong on privacy: Privacy
More specifically for what we are talking about, this is utilizing the FaceID tech which stays on device and is encrypted on device: Legal - Data & Privacy - Apple
The support page for personalized spatial audio isn't up yet, so I can't link that, but they're really good about having a popup discussing security changes, and I didn't notice one.

They do offer an opt-in, completely optional "Share With Apple" on specific settings to take the data and make things better (Siri being the main one; I've opted in for that), and they anonymize results: each entry is detached from identifying information. Of course, if someone were to hack in, get access, and really put their minds to it, they could find you, but... the same could be said for any online service, including this site.

As for the topic at hand, setup strangely wasn't painless, not helped by the fact that the way they want you to scan your ears isn't clearly communicated (they want you to hold the phone steady in the same position it scanned your face, but turn your head towards the phone), and the ears part is supposed to play sounds, but mine didn't (and, well, it is a beta...).

Once you get it done, however, you are set. I'm listening to Axel Boman's Quest for Fire as a test album, and... I can't notice too many differences? Sounds that are meant to sound far away sound a tad further away, and things sweeping around my head feel a touch more dramatic, but other than that, it's not as night and day as I was expecting. Sound definitely... feels more full, but I'm not sure if I'm just hearing things I want to hear. There is a feedback app that's auto-installed when you install the "beta profile" required to install/run iOS betas; I'll make a note in there to suggest an option to A/B test personalized vs. basic.
 
A few days later I think I can articulate what changed:

Compared to other binaural surround mixdown software I've tried in the past (Dolby Atmos & DTS:X for Headphones, built into Windows these days for an unlock fee, feel quite good), I've always felt that the default HRTF sounds distant. You could always kinda get a feel for the "room" they're rendering the objects in. Now, things sound and feel a lot closer. I bet that their model HRTF was quite different from mine. Now that things render closer to my head, I'm hearing smaller details I wasn't really hearing in the old renderer. Not sure if this will be the case for everybody, but it's definitely for the better.
 
It could be helpful to specify which brand/style/model number of the headphones that are being used.

I recently bought two MP3s from the Sony 360 demo "album" via Amazon, using Sony MDR-ZX110 headphones - imaging is somewhat better than stereo, but not anywhere near my 4.1 system (playing discrete - DD or DTS).


Kirk Bayne
 
In this particular instance the Personalized Spatial Audio feature is limited to Apple's own headphone line, so AirPods 3, AirPods Pro, AirPods Max (what I have used to test this), and select Beats headphones.

Presumably because they know the DSP flow of those headphones well. (All headphones do support spatial audio on iOS/iPadOS, etc., but only AirPods and Beats get special features like this, and "Spatialize Stereo", an Atmos upmixer for stereo.)
 
Looks like Apple is going to take the same path Sony did with the 360 Spatial Sound Personalizer app.

I own AirPods Pro and Sony WH-1000XM4. I did some comparisons with albums mixed both in 360RA on Tidal and in Atmos on Apple Music (John Mayer's Sob Rock), and I stand with Kirk: nothing compares to the real deal, of course. Sometimes I feel I'm just listening to a different stereo mix rather than a "new 3D sound effect".
 
iOS 16 dropping on the 7th of September? Can't wait to listen to tracks using it if this improves my Spatial Audio experience.
 
Likely on the day the new iPhones are out, as that's been the trend the last few years. Your date's not too far off if rumors are to be believed: somewhere between the 12th and 14th.
 
Update is out, and ready to go.
And I just tried it with an album I released (so I know it well). Sounds better to me: still no radical sense of distance/location, but better spatialization.
I was very surprised it took no time to do the analysis. When I did it with Dolby Atmos, the image was sent to the cloud to be processed. It took a while to get a result.
Now can I push this to Logic Pro for better binaural monitoring?
 
Sadly, and I don't know why I didn't realize this before, it seems you need an iPhone to do the 3D facial mapping and ear scanning necessary for personalized spatial audio--and, for that matter, to actually use personalized spatial audio.

You can't do it from a MacBook or an older iPad, which is all I have. I still plan to explore whether it's possible to customize AirPods (with data from an audiogram, say) from my old iPad, or if I need a newer one that's capable of running the latest iPadOS.
 
The new iPadOS will be released later, when the new macOS is released. So maybe they will add it then. I expect a new version of Logic Pro to follow suit, too.
 