peak recording level for .wavs destined for conversion to AC3?

I'm transferring some multichannel analog recordings to multichannel .wav using Audition 1.0 and an M-Audio Delta 1010LT card. Using the card's 'consumer' input setting, and monitoring the recording with Audition's peak meter, the highest waveform peaks remain just below 0 dBFS in every channel (~-0.5 dBFS). I record at 88 kHz/32-bit, then output the file as a single 88 kHz/24-bit interleaved multichannel .wav. This becomes the input to Audacity for conversion to AC3 using its FFmpeg plugin.

Do I need to worry that these peaks could become 'overs' when the multichannel .wav is converted to AC3? IOW, should I be recording less 'hot' to account for possible overs from conversion? If so, what's a safe amount of headroom?

(FWIW, when I load my resulting AC3 file into Audacity, the peaks are still below 0 dBFS. I presume this involves some sort of temporary conversion to .wav on Audacity's part?)
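In case it's useful, here's a rough Python sketch (assuming the numpy and soundfile packages are installed; the filename is just a placeholder) that reports each channel's peak in dBFS, so you can check exactly how much headroom the interleaved .wav has before and after any processing:

```python
# Minimal sketch: per-channel peak levels of a multichannel .wav, in dBFS.
# Assumes 'numpy' and 'soundfile' are installed; the filename is a placeholder.
import numpy as np
import soundfile as sf

def channel_peaks_dbfs(path):
    data, rate = sf.read(path, always_2d=True)  # float samples in [-1.0, 1.0]
    peaks = np.max(np.abs(data), axis=0)        # absolute peak per channel
    return [20 * np.log10(p) if p > 0 else float("-inf") for p in peaks]

for ch, db in enumerate(channel_peaks_dbfs("multichannel_master.wav")):
    print(f"channel {ch}: peak {db:+.2f} dBFS")
```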
 
The thing to remember with AC3 (Dolby Digital) is that you cannot turn off the DRC - it is always there, even if Dialnorm is set to -31.
So what you need to do is work to a centre channel level within Dialnorm's parameters.
This gives you an effective RMS level for the centre channel of -31 dB. Set it any louder and you will get attenuation, because the decoder will always try to maintain a dialnorm of -31.
Frankly, DD for music is a nightmare. Please - for a much better description than I can manage - see this thread.
http://www.gearslutz.com/board/1112798-post7.html
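For what it's worth, here is a hedged sketch of where the dialnorm value gets set at encode time if FFmpeg is doing the AC3 conversion (invoked from Python here; the filenames and the 640k bitrate are placeholders, not anything from the workflow above):

```python
# Rough sketch: encode an interleaved 5.1 .wav to AC3 with FFmpeg, pinning
# dialnorm at -31 so the decoder applies no dialogue-normalisation level shift.
# Assumes FFmpeg is on PATH; filenames and bitrate are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "input_51.wav",
    "-c:a", "ac3",        # Dolby Digital encoder
    "-b:a", "640k",       # highest standard AC3 bitrate
    "-dialnorm", "-31",   # -31 = no dialnorm attenuation at the decoder
    "output_51.ac3",
], check=True)
```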
 
The SACD I'm currently transferring, as my 'beta test' of my workflow, happens to be Jeff Beck's 'Blow By Blow'. Center channel level shouldn't be an issue here because there's hardly any content in that channel and it's all extremely low-level, even when I adjust levels so that L/R peaks are at 0 dBFS. This SACD '5.1' mix is basically a repurposing of the old 4.0 quad tape; there's little in the way of C or SW content -- indeed I don't know why there's anything there *at all* in those channels, except to mollify the consumers who feel cheated if the C and SW indicators don't light up.

Even so, I'm getting faint and maddeningly inconsistent 'scratchy' distortion in one or more channels *even before I convert to AC3*...even though no individual channel shows 'in the red' during recording or playback.

Doing all this with Audition 1.0 so far is like using 'stone knives and bearskins', to quote an old Star Trek episode...worse yet, I'm trying to do it on an old Dell PC with only 256 MB of RAM, argh. Wondering if the noise artifacts are a memory issue...? Or perhaps I need to play with levels, or sample rates...
 
That doesn't sound right to me.
What is the recording setup? (I know it's Audition, but what are your buffer settings, and which soundcard drivers are you using - WDM, ASIO, MME or what?)
I would definitely try to avoid going to AC3/Dolby Digital because you cannot turn off the DRC - it's built right in, and with hot levels even a setting of -31 dialnorm will still trigger attenuation.
You'd be better off resampling to 44.1 kHz and making a DTS-CD, or 48/96 kHz and making a DVD-A.
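If you do go the DTS-CD or DVD-A route, a minimal FFmpeg resample step (sketched from Python; the filenames are placeholders) might look like this:

```python
# Sketch: resample the 88 kHz master to 44.1 kHz while staying at 24-bit PCM,
# as a starting point for DTS-CD authoring. Assumes FFmpeg is on PATH;
# filenames are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "master_88k.wav",
    "-ar", "44100",         # target sample rate
    "-c:a", "pcm_s24le",    # keep 24-bit PCM
    "master_44k1.wav",
], check=True)
```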
 
The point is to be able to stream it from the hard drive, not burn it to media. A further constraint is that it has to be streamable via S/PDIF, because so far there is no laptop implementation of HDMI that passes more than two channels of PCM. So that limits me to AC3 or DTS, and I don't have any software for DTS conversion.
 
I'm not at all sure what to suggest at this point, as there is apparently just no way at all to turn off the DRC.
What are you streaming to though - is it an AV amplifier? If it is, what other formats will it accept?
I'm wondering about FLAC as a lossless option (DD doesn't sound that great to me) or even WMA Lossless.

The only other alternative I can think of right now would be to take a single track & experiment.
Encode to DD at one output level, decode it again & see what it has done.
Make adjustments as appropriate & try again.
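Something along these lines could automate that round trip (a sketch only; FFmpeg, numpy and soundfile are assumed to be installed, and the filenames are placeholders):

```python
# Sketch of the encode/decode experiment: encode one track to AC3, decode it
# back to .wav, then compare per-channel peaks of the original and the round
# trip. Assumes FFmpeg is on PATH and numpy/soundfile are installed.
import subprocess
import numpy as np
import soundfile as sf

def peaks_dbfs(path):
    data, _ = sf.read(path, always_2d=True)
    peaks = np.max(np.abs(data), axis=0)
    return [20 * np.log10(p) if p > 0 else float("-inf") for p in peaks]

subprocess.run(["ffmpeg", "-y", "-i", "track.wav",
                "-c:a", "ac3", "-b:a", "640k", "roundtrip.ac3"], check=True)
subprocess.run(["ffmpeg", "-y", "-i", "roundtrip.ac3", "roundtrip.wav"], check=True)

for name in ("track.wav", "roundtrip.wav"):
    print(name, ["%.2f dBFS" % db for db in peaks_dbfs(name)])
```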

I am definitely a lot more concerned about the scratchy quality you mention...
 
Lossless 5.1 is not an option for S/PDIF, and laptop HDMI implementations don't yet support it either. So for now I am limited to AC3 or DTS.

This thread seems to indicate that dialnorm and AC3 dynamic range compression need not be issues:
http://www.hydrogenaudio.org/forums/index.php?showtopic=52451&view=findpost&p=469470
But I'm reading up on AC3 conversion here to get a better handle on the topic.


Also, I would think that if they were major issues, then soundcards (like the one in my laptop) would not offer options like Dolby Digital Live/Dolby Home Theater, which convert content to 640 kbps AC3 bitstreams on the fly (2-channel content is upmixed to multichannel by DPLII first, I think); the AC3 bitstream can be passed via S/PDIF and decoded by an AVR. I've tried this on my laptop with 5.1 PCM files I've ripped from my DVD-A discs, and it actually works fine; it even keeps the channel assignments correct. The drawback is that it converts *everything* I play to AC3, and I don't want that.

The source of the scratchy distortion should be relatively easy to identify; I plan first to determine for sure whether it's present in the original .wav recordings, or is introduced during AC3 conversion. I'll work with single- and two-channel tracks first and see what's what.
 