Streaming Atmos: Do you lose any fidelity with a Tidal Atmos stream compared to a Blu-ray Atmos?

If they aren't comparing lossless vs. lossy blind, level matched, and at a non-pathological volume, I don't trust anyone's ears.

And the 'of course it sounds worse, it's lossy' line of reasoning is just facepalm material.
I have some iTunes 256 kbps songs that sound better than the CD version, with the latter obviously mastered like crap. However, with low-bit-rate Atmos, there is clearly a lot of garbage coming from the ceiling. The algorithm seems to do a pretty good job of masking it, but it still annoys me knowing that better performance could be achieved, however marginal the improvement.
 
I agree that it's facepalm material when a lossy copy of a track sounds better than a lossless copy (because of mastering damage). There are loudness-war CD editions that will be put to shame by stepped-on, damaged lossy Atmos mixes. Facepalm indeed!

Yeah, so it turns out there are worse ways to damage audio than low-bit-rate, MP3-style lossy compression. That's not a reason to justify lossy compression! Just how cheap and trashy are we going to get with the audio, just to see whether we got away with it or whether it wasn't as bad as some other screw-up?
 
To be practical, setting aside the lossy vs. lossless technical arguments for a moment, I would recommend that everyone experience this for themselves in person.

The following three points are highly influential:
  • The quality of the equipment we have at home.
  • The quality of the room's acoustic treatment.
  • The hearing capacity we have (how old we are, and how much our hearing has degraded).
As has already been said in previous posts, for an Atmos mix that we really like on streaming, it is worth buying the Blu-ray (if available) to make a comparison and check how much our listening improves, considering the three points above.

If the improvement is nil or insufficient for you, then you have discovered that the Blu-ray is not worth buying for sound quality alone.

In my case, I barely noticed any difference with my previous 5.1 system of normal/average quality, and I assumed my old ears could no longer tell the difference.

But since setting up the new Atmos home theater after renovating the house, with higher-quality equipment than before and some minimal room acoustic treatment (nothing excessive), I have verified that I DO notice a substantial difference between streaming and Blu-ray. So fortunately my ears are not too degraded.

The improvement is not the same across all the comparisons I've made, but the clarity of drums, cymbals, and some voices (especially quiet, whispery female vocals) sounds much more faithful to me, "as if they were there in my living room," with Blu-ray than with streaming.

I don't think you need to be a demanding audiophile. Simply try one or two Blu-rays, do an A-B comparison with streaming, and see for yourself.
 
I have some iTunes 256 kbps songs that sound better than the CD version, with the latter obviously mastered like crap.

And that (mastering difference) of course is not what I'm referring to. If the mastering is different, all claims about 'lossy vs lossless' are secondary.

I'm referring to the unexamined assumption that one can routinely 'hear' lossy and thus it must sound worse.

People focus on the 'data reduction' part and forget about the extremely important 'psychoacoustic modelling' part. Lossy codecs don't just discard data willy nilly. Psy modelling leverages research on *what* we can hear, and *when* we can hear it.

However, with low-bit-rate Atmos, there is clearly a lot of garbage coming from the ceiling. The algorithm seems to do a pretty good job of masking it, but it still annoys me knowing that better performance could be achieved, however marginal the improvement.
Especially in a multichannel system, it would be a mistake to assume that a deficit you can hear when a speaker is auditioned in isolation must also be perceptible when all speakers are in use.
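For anyone curious what "masking" means in practice, here is a toy numerical sketch of the basic idea: a quiet tone sitting close in frequency to a much louder one can fall below a masking threshold, so a codec can spend few or no bits on it. This is nothing like a real codec's psy model, and the 20 dB allowance below is entirely made up for illustration.

```python
# Toy illustration of simultaneous masking -- NOT a real codec psy model.
# A loud tone at 1 kHz and a quiet tone at 1.1 kHz; a crude rule says the
# quiet tone is inaudible if it sits more than ~20 dB below a nearby masker.
import numpy as np

fs = 48_000
t = np.arange(fs) / fs                          # 1 second of audio
masker = 1.0  * np.sin(2 * np.pi * 1000 * t)    # loud masker, 0 dBFS peak
probe  = 0.01 * np.sin(2 * np.pi * 1100 * t)    # quiet probe, -40 dBFS peak

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

masker_db = peak_db(masker)
probe_db = peak_db(probe)
crude_threshold_db = masker_db - 20             # made-up spreading allowance

print(f"masker: {masker_db:.1f} dBFS, probe: {probe_db:.1f} dBFS")
if probe_db < crude_threshold_db:
    print("toy model: probe judged masked -> a codec could spend no bits on it")
else:
    print("toy model: probe judged audible -> must be coded")
```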
 
To be practical, setting aside the lossy vs. lossless technical arguments for a moment, I would recommend that everyone experience this for themselves in person.

The following three points are highly influential:
  • The quality of the equipment we have at home.
  • The quality of the room's acoustic treatment.
  • The hearing capacity we have (how old we are, and how much our hearing has degraded).
As has already been said in previous posts, for an Atmos mix that we really like on streaming, it is worth buying the Blu-ray (if available) to make a comparison and check how much our listening improves, considering the three points above.

If the improvement is nil or insufficient for you, then you have discovered that the Blu-ray is not worth buying for sound quality alone.

In my case, I barely noticed any difference with my previous 5.1 system of normal/average quality, and I assumed my old ears could no longer tell the difference.

But since setting up the new Atmos home theater after renovating the house, with higher-quality equipment than before and some minimal room acoustic treatment (nothing excessive), I have verified that I DO notice a substantial difference between streaming and Blu-ray. So fortunately my ears are not too degraded.

The improvement is not the same across all the comparisons I've made, but the clarity of drums, cymbals, and some voices (especially quiet, whispery female vocals) sounds much more faithful to me, "as if they were there in my living room," with Blu-ray than with streaming.

I don't think you need to be a demanding audiophile. Simply try one or two Blu-rays, do an A-B comparison with streaming, and see for yourself.

And again, without careful level matching -- which btw is rather hard to do with physical lossless vs streaming lossy -- not to mention blind comparison, you could just be hearing simple level differences, and/or reacting to unconscious expectation.

So one should temper one's claims. Allow for that possibility of error.

That's a scientific approach.
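If you do want to level match a disc rip against a streaming capture before comparing, something along these lines works as a starting point. It's only a sketch: the filenames are placeholders, it assumes both versions have been captured to WAV at the same sample rate (stereo or a stereo downmix), and it uses the pyloudnorm library to measure BS.1770 integrated loudness.

```python
# Rough level-matching sketch: measure integrated loudness (BS.1770) of two
# captures and report the gain offset needed before any A/B comparison.
# File names are placeholders; assumes both are decoded to WAV at the same rate.
import soundfile as sf
import pyloudnorm as pyln

bluray, rate_a = sf.read("bluray_capture.wav")
stream, rate_b = sf.read("stream_capture.wav")

loud_a = pyln.Meter(rate_a).integrated_loudness(bluray)
loud_b = pyln.Meter(rate_b).integrated_loudness(stream)

print(f"Blu-ray capture: {loud_a:.2f} LUFS")
print(f"Stream capture:  {loud_b:.2f} LUFS")
print(f"Apply {loud_a - loud_b:+.2f} dB to the stream capture to match levels")
```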
 
Especially in a multichannel system, it would be a mistake to assume that a deficit you can hear when a speaker is auditioned in isolation must also be perceptible when all speakers are in use.
So true. I've made a number of 5.1 mixes from stereo where artifacts from the demixing process are readily apparent when isolated, yet are masked when everything is put back together. Even the upmixing programs rely on psychoacoustic masking of artifacts; and those full-logic vari-blend decoders of yore depended on it.

Try running something through the Surround Master and listen to only one rear speaker. You will hear the unit doing its thing. Yet the logic processing sounds seamless with all four (or five) channels playing.
 
People focus on the 'data reduction' part and forget about the extremely important 'psychoacoustic modelling' part. Lossy codecs don't just discard data willy nilly. Psy modelling leverages research on *what* we can hear, and *when* we can hear it.
Unfortunately that psychoacoustic modelling is based on near-perfect hearing, which few of us have. Indeed it is ironic that the people more likely to own decent audio equipment are more likely to have degraded hearing due to exposure to damaging sounds over the years. (Recent evidence shows age alone is a minor factor; it's long-term exposure to damaging sound.) Other research shows people with damaged hearing can pick out lossy codecs more easily, because the modelling does not match their hearing. Throwing some sound away because it will be masked to inaudibility by something else doesn't work if the listener can't hear that something else.
 
Given how poorly most people -- across age ranges -- do in blind tests of good lossy encodes vs source, I would say the codecs tend to work pretty darn well, and that concerns about lossy audio voiced by 'people more likely to own decent audio equipment' are very overstated. Amir, who has tinnitus and fits your description pretty well, had to work darn hard to reliably tell good encodes from source, forensically listening in a way none of us normally would.

But to the degree that hearing-degraded listeners can better hear the difference -- I'd want to read about the damage tested, the codecs used, listening conditions, etc -- the fault again is not with data reduction itself -- 'lossiness' -- but a mismatch of model to listener, i.e., how it's done. Models could be tuned differently, if there's a demand for it.

People with poor understanding simply blame the bitrates, as if the simple fact of data reduction means they'll hear it. The prudent path is to allow the usual suspects -- level mismatch, sighted bias -- to inform one's claims about the cause of 'worse sound'.
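For anyone who wants to put numbers on a self-run blind test, here is a minimal way to score an ABX-style trial. The trial counts below are placeholders; the point is simply to ask how likely your score would be if you were guessing.

```python
# Score an ABX-style listening test: with n trials and k correct identifications,
# how likely is a result at least this good by pure guessing (p = 0.5)?
from math import comb

def guessing_p_value(correct, trials):
    # One-sided binomial tail: P(X >= correct) under chance
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Placeholder numbers: 13 correct out of 16 trials
print(f"p = {guessing_p_value(13, 16):.4f}")   # ~0.011 -> unlikely to be guessing
```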
 
And again, without careful level matching -- which btw is rather hard to do with physical lossless vs streaming lossy -- not to mention blind comparison, you could just be hearing simple level differences, and/or reacting to unconscious expectation.

So one should temper one's claims. Allow for that possibility of error.

That's a scientific approach.

Scientifically speaking, lossless audio streams should be seen as the baseline point of reference. The onus should be on those who push for lossy streaming to provide evidence of perceptible non-inferiority using blinded, volume-matched comparisons. Such experiments should use varying levels of quality of audio equipment and room acoustics, as well as age groups/hearing capacity of study participants. Is there such scientific literature available?

While I allow for the possibility of placebo/nocebo effects greatly exaggerating or even explaining perceived differences between lossless and lossy Atmos, it's not those of us who want lossless streams who need to err on the side of caution.
 
Yes, there's a big difference, and the lossy DD+ stream at 768 kbps sounds much worse. Quite tinny and hollow by comparison.

Of course it's far better than having no Atmos mix available at all, but the point is there really is no need for the data to be compressed to that level today... The codec even allows a bitrate up to 1664 kbps, and your normal Netflix video stream would be in excess of this. There's no harm in allowing for a buffer before you play content either.

When listening to MPEG-H content at a similar bitrate of around 1600 kbps, it's far more acceptable and a closer match to lossless.

Hopefully someday we get a better streaming bitrate... :)
I don't have a Blu-ray to compare it to, but the Berlin Phil claims to be streaming in Atmos at 1024 kbps, and, with all the usual caveats, I have the impression that it's more pleasant to my ears. There's certainly no technical limitation holding back the bitrates. It also works in a browser (Safari on Mac).

Dolby Atmos 7.1.4 · 1.024 kbit/s · E-AC-3 JOC (Dolby Digital Plus)
https://help.berliner-philharmonike...nd-sound-format-are-the-concerts-broadcast-in
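As a quick sanity check on the bandwidth argument, here is the back-of-envelope arithmetic for the bitrates mentioned in this thread (the labels are just shorthand for the figures quoted above):

```python
# Back-of-envelope: data per hour at the bitrates discussed in this thread.
rates_kbps = {
    "DD+ JOC stream (typical)": 768,
    "Berlin Phil DD+ JOC": 1024,
    "DD+ JOC ceiling": 1664,
}
for name, kbps in rates_kbps.items():
    mb_per_hour = kbps * 1000 * 3600 / 8 / 1e6   # decimal megabytes
    print(f"{name}: {kbps} kbps = about {mb_per_hour:.0f} MB per hour")
```

Even the 1664 kbps ceiling works out to roughly 750 MB per hour, well under what a typical HD video stream consumes.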
 
I agree with a lot of the comments about lossy tech. I often make the same comments about level matching when A/B'ing and all that.

I don't hear any readily noticeable change when I make 320k mp3s.
The lossy DTS 96/24 5.1 masters that can be compared to fully lossless Blu-ray releases sound the same to me.
I can't really hear sample rate conversions. I know what software to use and have Apogee DACs but still.

128k mp3 and below is where the "mp3 sound" thing starts. The mix is still there; a different mix would be a more striking change. But it's instantly lo-fi. The multichannel systems and the discrete 12-channel mixes are the decadent end of Atmos. It's fair to call this out.

I can only A/B my own Atmos mixes with the encoded lossy copies. By itself with no reference it sounds great. A/B it with the master and it's striking. Same with the separation software. By itself it's impressive and magical. Run your own test with a mix made from your own original mix stems. Compare the genuine mix stems with the separated ones from the final mix and it's bad.
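If you want to put a rough number on that A/B of your own mix against the encoded copy, a crude difference ("null") test looks something like this. It's only a sketch: the filenames are placeholders, and it assumes the decoded file is already sample-aligned with the master, which in practice means trimming any encoder/decoder delay first.

```python
# Crude difference ("null") test between a master and a decoded lossy copy.
# Assumes both files share length, sample rate, and are already time-aligned;
# real codec output usually needs its encoder/decoder delay trimmed first.
import numpy as np
import soundfile as sf

master, fs = sf.read("master.wav")
decoded, fs2 = sf.read("decoded_lossy.wav")
assert fs == fs2 and master.shape == decoded.shape

residual = master - decoded

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

print(f"master RMS:   {rms_db(master):.1f} dBFS")
print(f"residual RMS: {rms_db(residual):.1f} dBFS (lower = closer to the master)")
```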
 
I don't hear any readily noticeable change when I make 320k mp3s.
Nor do I.
The lossy DTS 96/24 5.1 masters that can be compared to fully lossless Blu-ray releases sound the same to me.
Full-rate 1.5 Mbps DTS has always been regarded as indistinguishable from lossless. Half rate, not so much.
128k mp3 and below is where the "mp3 sound" thing starts.
Indeed.

Being in the UK I can stream BBC radio stations at 320 kbps AAC-LC, and I have to say Radio 3 live concerts, e.g. the Proms, sound sublime. There's no dynamic range compression for those concerts even on FM, and with a good FM signal (which my parents have) they sound pretty damned good on FM too. My FM signal is marginal for full stereo quieting unfortunately, and my parents' hi-fi can't stream, so I can't A/B good FM against the 320 kbps AAC stream.

One year the BBC ran a trial of streaming FLAC for Radio 3 during the Proms. I did some A/B comparisons with the 320kbps AAC streams and I really couldn't hear any difference, apart from when they weren't volume matched (some concerts they were the same, some they weren't).
 
Wondering if anyone has compared a Tidal Atmos streamed cut to a Blu-ray Atmos of the same? Do you lose any fidelity while streaming?

Yes. I specifically compared R.E.M.'s Automatic For The People on a 5.1.4 system. The differences were noticeable, but the streaming Atmos was better than I expected from the very low bitrate. Clearly the Blu-ray Atmos (TrueHD) is preferred on my end to the streaming Atmos (E-AC-3 JOC), but I will happily take streaming Atmos over no Atmos at all.
 
Indeed it is ironic that the people more likely to own decent audio equipment are more likely to have degraded hearing due to exposure to damaging sounds over the years.
Interestingly enough this is what my Otolaryngologist says. He says from his experience people who say they are Audiophiles have the worst degradation in their hearing. I had 75% loss in my left ear due to a broken ear drum. My other ear had no issues with a normal frequency spectrum. This is the guy who put me onto the Widex Moment hearing aid. They are excellent and music with the left side hearing aid sounds the same as the right ear which has no hearing aid. It's profound. I'm back to hearing like when I was younger. They make a huge difference with surround sound. Anyone who has hearing loss should look into these. They aren't cheap (in Canada about $3500 each with support included). Anyway, sorry to get off topic.

 
Interestingly enough this is what my Otolaryngologist says. He says from his experience people who say they are Audiophiles have the worst degradation in their hearing. I had 75% loss in my left ear due to a broken ear drum. My other ear had no issues with a normal frequency spectrum. This is the guy who put me onto the Widex Moment hearing aid. They are excellent and music with the left side hearing aid sounds the same as the right ear which has no hearing aid. It's profound. I'm back to hearing like when I was younger. They make a huge difference with surround sound. Anyone who has hearing loss should look into these. They aren't cheap (in Canada about $3500 each with support included). Anyway, sorry to get off topic.


Could be selection bias, meaning that audiophiles might be more likely to seek medical help for hearing problems than an average person with hearing loss.

Congrats on finding the right hearing aid for you! I have a family member with severe hearing loss, so I appreciate that this is no small feat.
 
Could be selection bias, meaning that audiophiles might be more likely to seek medical help for hearing problems than an average person with hearing loss.

Congrats on finding the right hearing aid for you! I have a family member with severe hearing loss, so I appreciate that this is no small feat.
I think it's the opposite. It took me a long time to seek help as I didn't want to know that I had a hearing issue.

The Widex Moment is chosen by most musicians so it was a fairly easy pick given that.
 
