New Atmos-Capable DAW: Fiedler Audio Dolby Atmos Composer

Of course you can already use any DAW to mix in Atmos.

But if you want to use object elements and deliver them to the Atmos master that way, you need the Dolby renderer to put your 7.1.4 mix (and any object elements beyond the height channels - which themselves are mandatory objects) into the Atmos file format.
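For anyone new to the format, here's a purely conceptual sketch (just a mental model in Python, not the actual ADM/BWF file layout) of what the renderer is assembling: a static channel bed plus objects that each carry their own audio and positional metadata, up to 128 elements total.

# Purely illustrative -- NOT the real ADM/BWF structure, just the mental model:
# an Atmos master = a static channel bed + objects with positional metadata.
atmos_master = {
    "bed": {
        "layout": "7.1.2",   # the largest bed; smaller beds (7.1, 5.1) also allowed
        "channels": ["L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs", "Lts", "Rts"],
    },
    "objects": [
        # each object = mono audio + (possibly automated) x/y/z position
        # (coordinates here are illustrative, not a documented convention)
        {"name": "height front left",  "position": {"x": -1.0, "y": 1.0, "z": 1.0}},
        {"name": "height front right", "position": {"x":  1.0, "y": 1.0, "z": 1.0}},
        # ... up to 118 objects alongside a 10-channel bed (128 elements total)
    ],
}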

So full DAW apps with this integrated might be welcome! It will likely become a stock thing before too long. But you can still just use your favorite DAW and make however many channel multichannel mixes you please.

The stock joystick panner plugins in Reaper are sure comprehensive!

That screenshot looks like Dolby licensed their Atmos renderer app to them and they made a subset of it or something. $180 vs. $300 for the renderer. Wonder what features they pulled out?
 
Some random mix thoughts.

I wonder if there are any music mixes that used objects beyond the height speakers? The system forces you to define 7.1 bed vs objects. So you make the height speakers objects and pan them to the hard default spots. Your mix comes out as expected. So the only element that will scale to different speaker arrays in most music mixes will be the height channels alone. The 7.1 bed is static.
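To make that concrete, here's roughly what those four "parked" height objects look like as positions. I'm assuming the renderer's normalized room coordinates (x = left/right, y = back/front, z = floor/ceiling); the exact convention may differ, so treat the numbers as an illustration only.

# Illustrative only -- assumed normalized coordinates, parked at the default corners.
static_height_objects = {
    "Ltf": {"x": -1.0, "y":  1.0, "z": 1.0},   # top front left
    "Rtf": {"x":  1.0, "y":  1.0, "z": 1.0},   # top front right
    "Ltr": {"x": -1.0, "y": -1.0, "z": 1.0},   # top rear left
    "Rtr": {"x":  1.0, "y": -1.0, "z": 1.0},   # top rear right
}
# With the 7.1 bed static and these objects never moving, the heights are the only
# part of such a mix the playback renderer can re-map to bigger or smaller arrays.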

Film vs music does have different goals. Not to add to the film vs audio divide or assign right or wrong to anything. Film needs dialog to come across, dialog to come across, dialog to come across, and then cinematic foley FX to be on point: natural, not gimmicky sounding, and in the right spots on the soundstage. (This can require some advanced mixing chops that some audio-only guys don't have, mind you!)

Music wants to make the instruments and sound elements as clear as possible. As full fidelity as possible. Usually hyper-realistic. And with the ability to be delivered by sometimes below average sound systems. For an example of that thinking, a kick drum is not placed 4" to the right to precisely image it how it was placed in the recording room. It's mixed in mono to both front speakers to couple those speakers together and make it easiest for any system to make a kick drum sound.

There are old school moves all over mixes and then maybe a few cinematic effects that can get away with only being heard in full on a full system. Any cinematic mix element could use the object system to advantage. It just doesn't apply for the old school mix elements. And of course that's what the bed tracks are for.

At the end of the day, a 7.1.4 mix can be delivered 1:1 to a consumer's 7.1.4 speaker system. Or it can be downmixed to stereo by the Atmos binaural system to sound like an actual proper accomplished stereo mix that used binaural elements.

I imagine someone will try to make a surround mix while only listening in binaural in headphones with the Atmos system. Then play the game of "What does the mix sound like on different large speaker arrays?" That seems like it would be a more niche weird thing to do. Like something the Flaming Lips might do. I suspect most people are going to continue to mix on their preferred speaker systems. 7.1.4 has landed as a reference point for that.
 
There are definitely some mixers using objects for ear-level sounds, not only heights. Steven Wilson and Bruce Soord are some common mainstream examples. I have also heard a "quadraphonic" mix using objects in Atmos: the source was a quad synth and each channel was mapped as an object and put in the imaginary corners of the soundstage, so you would hear the back corners in the rears in a 7.x.x system and in the sides in a 5.x.x system.
I see the 7.1 or 5.1 bed as a backwards-compatibility feature; it makes sense to use all objects for music as well.

 
I wonder if there are any music mixes that used objects beyond the height speakers?
You actually don't even need to use objects to access the height speakers, as beds go up to a 7.1.2 configuration - but there are certain positions you can't pan to (front or rear heights only, the wides, etc.) without using objects. Some Atmos mixes are composed almost exclusively of objects (though the LFE channel has to be part of the bed), yet others only use the bed and no objects.
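For reference, here's the bed layout as I understand it - treat the channel order as an assumption if you're wiring anything up, since I'm going from memory:

# Assumed 7.1.2 bed channel order (the largest bed the format allows).
# The LFE lives in the bed -- objects can't carry it.
BED_7_1_2 = [
    "L", "R", "C", "LFE",   # fronts plus the bed-only LFE
    "Lss", "Rss",           # side surrounds
    "Lrs", "Rrs",           # rear surrounds
    "Lts", "Rts",           # the single pair of top/height bed channels
]
# Positions the bed can't reach (separate front/rear height pairs, the wides, etc.)
# are exactly the ones that need objects.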
I imagine someone will try to make a surround mix while only listening in binaural in headphones with the Atmos system. Then play the game of "What does the mix sound like on different large speaker arrays?" That seems like it would be a more niche weird thing to do. Like something the Flaming Lips might do. I suspect most people are going to continue to mix on their preferred speaker systems. 7.1.4 has landed as a reference point for that.
Most of the mixers I've spoken to start with the speaker mix, then try to adapt it for headphone playback after the fact using the binaural metadata tags and other tools in the Dolby suite. Trying to make it work on different-sized speaker arrays is usually a distant third consideration after those two.
 
Any examples? One of the engineers would have had to talk about it for us to know, of course.

You're still mixing in some room with speakers. The mix is made on that array regardless of mix method and tools used. The idea is that it sounds 100% intentional and 1:1 on the very same speaker array in a different room. But maybe it also translates better to a different setup - larger or smaller - than previous formats did.

Anyway, it looks like fun to be able to dial in the metadata and make mixes translate to other systems better. My first thought was: finally, genuine single-inventory mixes that a consumer can't screw up! My second thought is: look how they're making it rely on hardware units and that whole angle. My next thoughts are consumed with Dolby refusing to even sell the encoder to the likes of me. My final thought is that the negative stuff will be short-lived, while delivery of 128 channels of audio and 7.1.4 mixes and all will be forever.
 
Of course you can already use any DAW to mix in Atmos.

But if you want to use object elements and deliver them to the Atmos master that way, you need the Dolby renderer to put your 7.1.4

Steinberg Cubase does not need the Dolby Renderer. A free trial and special pricing for upgrades from other DAWs make it 'affordable'.
 
Yeah it's already starting! Hopefully Reaper picks up on it soon too.
 
I tried this shortly after it was launched. Using Logic in Atmos mode imposes quite a load on the CPU without playing anything, so I thought this would be a better way to go... Feature-wise it was way more versatile, the CPU hit was significantly less, and I had high hopes for it, but all I could get out of it was stuttering nonsense and the support was woeful. YMMV

I'm caught in the buying-a-new-computer-means-buying-new-audio-interfaces loop (none of mine will work nicely with the Apple silicon Macs). Currently contemplating buying a maxed-out trashcan Mac which will run Monterey and still do FireWire via Thunderbolt 2. Will give the Fiedler software another go when I have that...
 
I picked up the full featured version for $99 from Plugin Alliance last week. I am finding this to be an immensely satisfying plugin. Has a few quirks, but they can be worked around. I'm using it in Reaper, putting stems in subfolders, then the Atmos Beam on the main folder so that I can still control volume levels in the subfolder and apply any additional EQ if I need to before it hits the beam.

Right now I'm using it to do up some 5.1 mixes since I don't have an Atmos setup. I'm finding that the volume setup from the stereo mix stems can pretty much be applied straight to the 5.1 mix with a few adjustments - most notably the center speaker if you're using it for vocals.
 
Having upgraded my Atmos mixing machine to a trashcan Mac Pro last summer (a huge improvement), I bought the full version of the Fiedler Atmos Composer for a slightly discounted £199... I'd anticipated it suiting my intended workflow in Logic better than the built-in Atmos renderer. Sadly I experienced exactly the same stuttering digital glitching as I had done previously, even with Fiedler's own template Logic session. Once again the 'technical support' was pathetic, worse than useless - I get the impression that it's a one-man show and that he's spread himself too thinly. Sad about this, as not only do I now have financial 'skin in the game' but I had high expectations around what this would do for me. Maybe because I needed it to do full speaker array stuff as opposed to binaural on earbuds, I wasn't the target demographic...

Fair play if it enables non-Atmos platforms to do Atmos, but for me it was a completely crappy experience. I may give the guy another shake at some point down the line, but he was useless before, didn't offer anything by way of input, and I decided not to waste any more time on him and his buggy nonsense. As previously, YMMV, but for me it was 0/10.
 
Well, that doesn't sound good.

Just in the spirit of troubleshooting...
Using the Dolby renderer with Reaper (or any DAW), the workflow is the same: the renderer runs in sync with the host DAW, using timecode sent as audio over one of the channels.

Dolby gives you an LTC plugin and a panner plugin to use in the host DAW. They talk to the renderer app over localhost. Dolby also gives you a 130-channel virtual audio device app to use for routing audio to the renderer.

I found the Dolby LTC plugin unreliable. Crashed all the time and almost never made it through a render. I pulled up the stock LTC generator in Reaper and boom, just worked. 100% solid and reliable.

The Dolby Audio Bridge virtual device app is solid. I usually use Loopback because I have it, but the Dolby bridge works fine too.

Sometimes you have to mess with the panner plugins to get them talking. Always launch the renderer before the DAW. Sometimes you have to pull up another instance of a panner to get it to connect. Then the original one starts working again and you can delete the temp.
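If it helps anyone setting this up, here's a rough sketch of how I think about the channel map from the DAW into the renderer through the virtual audio device. The renderer's input configuration is editable, so the defaults below (bed first, objects after it, LTC on its own channel) are my assumptions, not gospel:

# Rough sketch of a DAW -> audio bridge -> renderer channel map.
# Bed-first/objects-after and the LTC channel number are assumptions; check your
# renderer's input configuration before trusting any of this.
def build_channel_map(bed_labels=("L", "R", "C", "LFE", "Lss", "Rss",
                                  "Lrs", "Rrs", "Lts", "Rts"),
                      num_objects=24, ltc_channel=None):
    """Return {bridge channel: label} for a bed-plus-objects session."""
    channel_map = {}
    for i, name in enumerate(bed_labels, start=1):
        channel_map[i] = f"bed/{name}"
    for j in range(num_objects):
        channel_map[len(bed_labels) + 1 + j] = f"object/{j + 1}"
    if ltc_channel is not None:
        channel_map[ltc_channel] = "LTC (timecode as audio)"
    return channel_map

for ch, label in build_channel_map(num_objects=8, ltc_channel=130).items():
    print(f"bridge ch {ch:3d} -> {label}")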

I could use it live as I mix, I suppose, but I only use it to render right now. I print object stems with bed mixes as needed. I have the 5,1 Mac Pro tower with max CPUs, RAM, etc. Running this 500-track project in front of me with 3 mix busses for 7.1.4, 5.1, and 2.0. It's 96k audio but I'm running it at 48k for reasons, so it's live sample-rate converting with r8brain. 47% CPU use. This is the stupidest project I can pull up. Audio should be bottomless on any decent-ish computer almost no matter what you do.
 
Yeah - tried Fiedler a while back. Complete non-starter, primarily because of the post-fader plugin/insert requirement. Stability too. Incredibly, Logic has its own problems. There are issues with heights panning accurately, rendering ADMs that match what the Dolby Atmos renderer delivers, and not supporting downmix settings. I spent so much time rendering in Logic, then trying to finalize renders with the Assembler - and they sounded completely different... I filed some bugs with Apple and Dolby - some of the issues were reproduced by Apple support, but, at least as far as I know, still not resolved. I'm also convinced that the binaural (BIN) and Spatial monitoring in Logic aren't accurate - but I've not spent any time 'proving' it. Just gave up on Logic completely.
 
Yeah, Logic Audio Hell...
Being someone who doesn't trust anything, I've checked all my Atmos renders/encodes against the original renders and so far they always match. This is with the Dolby supplied renderer and encoder apps. Mixing and mastering in Reaper.
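For anyone who wants to run the same check, this is a minimal sketch of the kind of null test I mean, assuming both renders come out as WAV files of the same length and rate (uses numpy and the third-party soundfile package; the filenames are placeholders):

# Minimal null test between two renders of the same mix (placeholder filenames).
import numpy as np
import soundfile as sf

a, sr_a = sf.read("render_a.wav")
b, sr_b = sf.read("render_b.wav")
assert sr_a == sr_b and a.shape == b.shape, "renders must match in rate/length/channels"

peak = np.max(np.abs(a - b))
peak_db = 20 * np.log10(peak) if peak > 0 else float("-inf")
print(f"peak difference: {peak_db:.1f} dBFS")   # a true match nulls to silence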

Quirks as mentioned, but solid operation and solid results once it's talking and rolling. I haven't put the renderer on blast yet with automated object panning in motion. I don't want to deconstruct the mix process for this and spend time doing stuff like that. If Reaper integrates the object format natively, I might consider it. And beyond a couple of specific 'created on small system -> will upmix on larger system' scenarios, the Atmos downmixes are identical whether delivered as a 12-channel 7.1.4 mix or as object stems.
 
I haven't had issues yet, but maybe that is because I'm doing 5.1 instead of Atmos mixing? Also, I am not doing a ton of moving around in the field. There is some, but mostly something goes front, back, center, a mix of both, etc... Since I'm using stems from my stereo mix, a lot of those already have panning and motion in the stereo stems, so I'm sticking with that for the most part.

So ultimately, what I get are about 12-15 stem tracks, and I put all those as subfolders to their own folder with the Atmos Beam so that I can control the volume in the Reaper subfolder and also do any additional EQ (particularly for LFE) before it hits the Beam. You can't do that otherwise, and I don't like going into each Beam to control the volume because it's easier/faster in Reaper. I have some issues sometimes, particularly if I want to do LCR, since it just doesn't do that for me for some reason. But I've only wanted to do that on one track. No biggie - I had a mono track already, so I rendered a stereo track, threw the mono in C and the stereo in LR. Done. Just a workaround.

The LFE is also a bit counterintuitive. LFE is usually track 4 when it is all rendered, but in the Beam I set my LFE channel to 1, and it goes to 4 and one of the LR channels (I think this is due to people mixing for multiple LFE channels). I just solo the LFE channel, see that the only output is coming out of the LFE (channel 4) on the monitors, and adjust volume accordingly. I'm also running separate tracks for LFE bass and kick, EQing them in the subfolder with a pretty sharp cutoff around 100 Hz, a slight boost around 70 Hz for the bass, and a very light read in the LFE channel - enough to keep the sub engaged on playback. That's usually around a -20 dB setting from the Beam for my LFE stems.
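If anyone wants a starting point for doing that kind of LFE filtering offline instead of in the subfolder EQ, here's a minimal sketch using scipy - the cutoff, filter order, and filenames are just assumptions pulled from what I described above (I've skipped the slight 70 Hz boost to keep it short), so tune by ear:

# Minimal sketch: steep low-pass around 100 Hz for an LFE feed from a bass/kick stem.
# Cutoff, order, and filenames are assumptions -- tune by ear.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

audio, sr = sf.read("bass_stem.wav")                         # placeholder filename
sos = butter(8, 100, btype="lowpass", fs=sr, output="sos")   # ~48 dB/oct around 100 Hz
lfe = sosfiltfilt(sos, audio, axis=0)                        # zero-phase, keeps the stem aligned
sf.write("lfe_feed.wav", lfe, sr)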

I think that if I tried to do all of this from one session where I have stereo routing and 5.1, I may very well crash Reaper, and I wonder if the crashes some have experienced are due to that.

It's pretty cool hearing the final product. I always figured I'd be pretty consistent in mixing decisions. For example, guitars in front, keys in back, etc. Well, I'm changing it around depending on what serves the song. I want the surround to be immersive and immediate if possible, so if guitars come in and it's a while before the keys hit, I'll put the guitars in back, keys in front. I also have a lot of reverb/delay stems for vocals and guitar. I'm generally using the vox verb in the back, but sometimes in the LR with vox in C. If guitar is LR, I'll throw guitar verb in back, but if guitars are in back, I'll throw their verb in the phantom sides (which is both LR and back) or in the front. Seems counterintuitive, but I want the soundstage to feel like it's everywhere.

Mixing is fun, even if you don't know WTH you are doing.
 
I've been using Logic since the mid 90s, when it superseded working with Notator on the Atari alongside my SADiE system for the audio, and it's served me well through several upgrades... I'm way too long in the tooth to change platforms at this point. I've only rendered out a couple of ADMs of work in progress and haven't noticed anything amiss, but I can't say I've gotten at all forensic with it... will look out for anomalies next time. Just as, when you're starting out, you eventually find out how not to make an album or how not to mix, I started out by learning how not to do an Atmos mix, and spent a long time evolving a workflow which is now yielding the kind of results I was imagining in my head before being confounded by the realities of what's involved. It's quite a journey, but an utterly inspiring one.
I thought that running the Fiedler thing would be less of a CPU hit than the Logic renderer, which even when doing nothing is quite processor-intensive. I'll file the Fiedler purchase, along with many others, as money spent on the road to knowing better!
 
Having been lucky enough to have heard a few of your Atmos mixes in your studio, I'd say that they are excellent and suit the music.

BTW, can you run the Fiedler Atmos Composer standalone on its own machine (so you could 'port' the workflow through it), or does it have to be embedded as a plug-in in the main tool chain?
 