SM w/more than 3 frequency bands?


Has an experimental SM ever been built with, for example, 10 frequency bands?

If so, is 3 bands close to the point of diminishing returns (i.e., do more than 3 bands offer minimal or no audible improvement)?


Kirk Bayne
 
Hi Kirk

I know this is controversial and the good fellas over at Shadow Vector would respectfully disagree with me, but I made a deliberate decision way back in the pondering years to restrict it to 3 bands. Yes, it would help in yielding more spectacular separation NUMBERS, but as I have said repeatedly, we have never chased the numbers but focused on the sound and things like image smear, pumping and gurgling.

I spent a lot of time looking at the spectra of lots of musical instruments, the human voice, etc., and was very conscious that chopping the frequency range into too many bands could smear the harmonics of the instruments, with the fundamental going one way and the harmonics going the other. In all the reviews I have read, a consistent comment is that we tend to move objects as a lump, not in bits. To me, 3 bands was optimal.
 
Last edited:
Certainly with multi-band surround sound decoding there must be a point of diminishing returns. It's hard for me to visualize the SM sounding any better with twice as many frequency decoding bands. In fact, I think the case can be made that tri-band decoding is overkill. Here's why.

Start with the bass. Traditionally, lower frequencies were always mixed equally left/right in stereo records. This was done to reach the desired bass playback level without overdriving the mechanics of the cutter or the playback cartridge. Today, in digital, you can do whatever you want, but it's still usually mixed the same way, since two woofers are better than one for reaching a desired bass SPL in playback.

In a QS/Involve decoder that means the bass is basically center front. I posit that a simple RM/QS non-enhanced decoder circuit would work just fine for the bass. There's a fine point not frequently discussed: when you derive a directional control signal from the audio input, it starts out as an AC signal and ends up as a DC control signal. Rectified, it is like a power supply: it has ripple that must be smoothed out, or it will impose IMD on the following gain-control circuit. So basically a filter cap is added to smooth it out.

Here's where all that talk about smooth/fast/slow decoding action comes from. The lower the frequency, the more filtering you need. In a single-band decoder it's quite the compromise, although there are variable attack/decay ways to work around this. In a multi-band decoder you can use larger filtering time constants for the low frequencies and smaller ones for the mid and treble. Using a basic decoder for just the bass gets around all of this; the sound would simply "be there". Whether it's mixed to come out of a single speaker or a pair of speakers, when decoded it would always get the intended direction right, just within the limitations of a simple decoder.
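For what it's worth, here is a minimal Python/numpy sketch of that rectify-and-smooth idea, with a one-pole smoother standing in for the filter cap and purely made-up per-band attack/decay settings; it only illustrates the ripple-versus-response trade-off, not any actual decoder's circuit.

```python
import numpy as np

def control_signal(band, fs, attack_ms, decay_ms):
    """Rectify an audio band and smooth it into a near-DC steering control.

    The rectifier output has ripple (like an unfiltered power supply);
    heavier smoothing removes the ripple but slows the steering down.
    A faster attack than decay lets the control jump on note onsets while
    still averaging the ripple away on the release.
    """
    rectified = np.abs(band)
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_dec = np.exp(-1.0 / (fs * decay_ms / 1000.0))
    out = np.empty_like(rectified)
    level = 0.0
    for i, x in enumerate(rectified):
        a = a_att if x > level else a_dec
        level = a * level + (1.0 - a) * x
        out[i] = level
    return out

# Purely illustrative per-band (attack, decay) settings in ms: the bass band
# needs much heavier smoothing than the treble band to hide its slow ripple.
fs = 48000
settings = {"bass": (20.0, 400.0), "mid": (5.0, 100.0), "treble": (2.0, 50.0)}
t = np.arange(fs) / fs
bass_ctrl = control_signal(np.sin(2 * np.pi * 50 * t), fs, *settings["bass"])
```

With a 50 Hz input, the bass settings above leave little ripple on the control signal but react relatively slowly; the treble settings would track level changes almost instantly.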

As for the higher frequencies, those would definitely benefit from separation enhancement. Again, though, I don't think you need to split the mid-range from the treble. Most fundamental frequencies in music run from the mid-bass to the upper mid-range. It seems that over the centuries musical instruments have been designed to be easy to hear and often to accompany the range of the human voice. There are several types of flutes out there, and we think of the flute as a high-pitched instrument, but really its fundamental range is not that extreme, and any higher frequencies will be harmonics. The direction of the harmonics will be the same as that of the main note when recorded/reproduced. So splitting up the mid and treble is of dubious benefit to me. I used the flute as one example, but again, most instruments have their power in the mid-range, and I cannot visualize the upper harmonics coming from a direction so different that they need to be individually decoded and enhanced.

There are of course exceptions to what I've said. MIDI instruments and synths can have a huge octave range, and modern mixing can put any note anywhere in the soundfield. And on one Enoch Light LP I have, there's a strong double bass fiddle center front with a rather loud triangle from the right back. It's in this kind of situation that multi-band surround sound decoding really shines. Even in this example, though, using only two bands would suffice and give equal results.

So really, after all these years of listening and pondering, and even building my own two-band Vario-Matrix decoder, I've come to the conclusion that a simple non-enhanced decoder for the bass, plus enhanced high-separation decoding in a single band for the higher frequencies, is the perfect balance between circuit simplicity and decoding quality.

Most interested to hear your point of view on this, Chucky & others.
 
Hey Sonik

You are indeed a wise man, and you have further uncovered the complete sham and lies I have perpetrated on you guys. I remember when I was a servant boy at a Catholic seminary: I had to go to confession every week to confess my sins and then be sentenced to five Hail Marys. Problem was, I was a good boy who did not sin much and had nothing to confess, so I ended up making up a week of sins so the priest could absolve me.

Well, now it is time for my QQ confession. Sonik, you can be the priest and sentence me.

I am in 90% agreement with your conclusion, but I still think 3 bands is the best compromise, in particular for the decode of simultaneous events coming from different positions in the room. I noticed the fellas at Shadow Vector kicked our sorry arses on that one, but we do better than the blessed Sansui QSD-1. The Bitch and I will review the software this year and see if we can improve this further for V4, whenever that happens. The averaging (actually RMS) time constants are indeed critical, and the attack time needs to be a lot faster than the decay time; again, 3 bands gives you the ability to optimize for general music. The trick is that most music comes in packets, and these attack/decay times have to match the characteristics of those packets.

Another major factor is that music tends to be a bit like pink noise, in that the amplitude for a constant energy input falls off as 1/f. This is a major issue for all matrix decoders when comparing quantities at differing frequencies: the lower frequencies will dominate all comparisons, and yet when you look at the Fletcher-Munson equal-loudness curves, our hearing is less sensitive to the lower frequencies. So any math done on the Left/Right variables needs to include corrections for Fletcher-Munson, 1/f, and per-band attack/decay suited to the type of music. Having the lower frequency band really helps ensure you don't get pumping due to low-frequency magnitude dominance. Fortunately, pumping is something we have avoided.
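As a rough, purely illustrative Python sketch of those corrections (invented band centres and weights, not the SM's actual numbers), the idea is to weight each band's level against the roughly 1/f tilt of typical program material and an approximate equal-loudness correction before forming the left/right dominance value:

```python
import numpy as np

# Hypothetical band centres (Hz) for a 3-band decoder; not the SM's real crossovers.
band_centres = np.array([150.0, 1000.0, 6000.0])

def band_weights(centres):
    """Per-band corrections applied before any L/R dominance comparison.

    tilt     : undoes the ~1/f energy advantage of the low band so that
               bass does not automatically win every comparison.
    loudness : a very crude proxy for an equal-loudness (Fletcher-Munson
               style) correction, further de-emphasising low frequencies.
    """
    tilt = centres / centres.max()
    loudness = np.clip(np.log10(centres / 20.0), 0.1, None)
    w = tilt * loudness
    return w / w.max()

def lr_dominance(left_rms, right_rms, weights):
    """Signed left/right dominance per band (+1 = hard left, -1 = hard right)."""
    l = weights * np.asarray(left_rms)
    r = weights * np.asarray(right_rms)
    return (l - r) / np.maximum(l + r, 1e-12)

# Example: a loud bass note panned right plus a quieter treble note panned left
# still yields opposite dominance in the two bands, instead of bass winning overall.
print(lr_dominance([0.1, 0.0, 0.3], [1.0, 0.0, 0.0], band_weights(band_centres)))
```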

Now here comes my guilty sin.
In short, as powerful as our DSP was, we simply ran out of space to fit all the above (and heaps more) towards the end of the software development. We had to throw something out to make the thing work. I elected to use just a passive matrix decode of the rear channels for the sub-300 Hz band, but full matrix "logic" for the front Left/Right.
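A minimal Python sketch of that kind of split, assuming a generic sum/difference rear derivation in place of the real QS/Involve coefficients (whose 90-degree phase shifts are omitted here) and placeholder 300 Hz filters, might look like this:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_band(x, fs, fc=300.0):
    """Split one channel at fc into low/high bands (placeholder 4th-order filters)."""
    lo = sosfilt(butter(4, fc, btype="low", fs=fs, output="sos"), x)
    hi = sosfilt(butter(4, fc, btype="high", fs=fs, output="sos"), x)
    return lo, hi

def passive_rears(lt, rt, k=0.414):
    """Unsteered (passive) rear derivation, generic sum/difference style.

    Placeholder only: a real QS/Involve decode uses 90-degree phase shifts
    and different coefficients, which are left out here for brevity.
    """
    return lt - k * rt, rt - k * lt

def decode(lt, rt, fs, logic_decode):
    """Below ~300 Hz: passive decode, so there is no steering (and no pumping) to manage.
    Above: hand the band to a caller-supplied steered 'logic' decoder."""
    lt_lo, lt_hi = split_band(lt, fs)
    rt_lo, rt_hi = split_band(rt, fs)
    lb_lo, rb_lo = passive_rears(lt_lo, rt_lo)
    lf_hi, rf_hi, lb_hi, rb_hi = logic_decode(lt_hi, rt_hi)
    # Low-band fronts are just the unsteered totals; sum the bands back together.
    return lt_lo + lf_hi, rt_lo + rf_hi, lb_lo + lb_hi, rb_lo + rb_hi
```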

You sir have exposed my evil sin; can you possibly absolve me, Father Sonik?
 
Last edited:
Hey Pup

You are indeed a wise old dog, and you have further uncovered the complete sham and lies I have perpetrated on you guys. I remember when I was a servant boy at a Catholic seminary: I had to go to confession every week to confess my sins and then be sentenced to five Hail Marys. Problem was, I was a good boy who did not sin much and had nothing to confess, so I ended up making up a week of sins so the priest could absolve me.

Well, now it is time for my QQ confession. Pup, you can be the priest and sentence me.

I am in 90% agreement with your conclusion, but I still think 3 bands is the best compromise, in particular for the decode of simultaneous events coming from different positions in the room. I noticed the fellas at Shadow Vector kicked our sorry arses on that one, but we do better than the blessed Sansui QSD-1. The Bitch and I will review the software this year and see if we can improve this further for V4, whenever that happens. The averaging (actually RMS) time constants are indeed critical, and the attack time needs to be a lot faster than the decay time; again, 3 bands gives you the ability to optimize for general music. The trick is that most music comes in packets, and these attack/decay times have to match the characteristics of those packets.

Another major factor is that music tends to be a bit like pink noise, in that the amplitude for a constant energy input falls off as 1/f. This is a major issue for all matrix decoders when comparing quantities at differing frequencies: the lower frequencies will dominate all comparisons, and yet when you look at the Fletcher-Munson equal-loudness curves, our hearing is less sensitive to the lower frequencies. So any math done on the Left/Right variables needs to include corrections for Fletcher-Munson, 1/f, and per-band attack/decay suited to the type of music. Having the lower frequency band really helps ensure you don't get pumping due to low-frequency magnitude dominance. Fortunately, pumping is something we have avoided.

Now here comes my guilty sin.
In short, as powerful as our DSP was, we simply ran out of space to fit all the above (and heaps more) towards the end of the software development. We had to throw something out to make the thing work. I elected to use just a passive matrix decode of the rear channels for the sub-300 Hz band, but full matrix "logic" for the front Left/Right.

You sir have exposed my evil sin; can you possibly absolve me, Father Pup?
I wish I was that smart, Chucky, but that was the great Sonik expounding.
 
I have always thought that it is preferable to decode single-band. The encoded signals are full bandwidth, so separating them and decoding each frequency band separately just seems wrong. It reminds me of pathetic attempts to create stereo from mono by feeding the bass to one speaker and the treble to another.

When I first got my QSD-1 it seemed to be smearing the sound image. In fact, at times I could swear that I was hearing the same instrument in more than one speaker but still with good separation. The single instrument was split in two?! Much later, when I decided to tweak the separation pots inside the unit, I found that the midband would not adjust. It was still working (sort of), but that did turn out to be the cause of the image smearing I was hearing! After replacing the chip(s) I was able to adjust it, and the image smearing was gone. All that added circuitry, bandpass filters, etc. contributes to the QSD-1's somewhat imperfect sound character. The Photolume QSD-2 single-band decoder sounds much better. The idea that multiband decoding reduces artifacts might be true, but I don't hear any artifacts with the single-band Photolume. I seldom hear artifacts with the S&IC either, and if there ever are any, they can be eliminated with the separation control. Charles Wood of Audionics also felt that multi-banding was unnecessary, and even undesirable.

When doing things via DSP, multi-banding becomes easier, so why not ten bands instead of three? Don't get me wrong, I appreciate the concept of diminishing returns. That is why I think that more than four channels are rather unnecessary.

Another thought: most of us use three-way speakers, so why not separate the tri-band decoder's outputs and feed them directly into a tri-amped speaker system? That idea has popped into my head on more than one occasion; you would kill two birds with one stone. On the negative side, there would be no provision for discrete sources!
 
Make the speakers concentric!
 
Maybe the SM V4 could be switchable: 3-band or single (wide) band variable matrix decoding (both QS and SQ).

I don't recall tests of the Sansui multiband QS variomatrix decoder mentioning directional differences for music fundamentals and the related harmonics.

The multiband/fundamentals/harmonics/directionality issue is something I had wondered about since I first read about the Sansui QS variomatrix tech in 1973; I just figured that Sansui would somehow link the operation of the variomatrix across the bands to minimize directional errors for sounds that occupy more than one band.


Kirk Bayne
 
Nup, 3 bands win
 
Just to give you a hint: we have two flavors of Involve encode. The dumb encode uses a phase constant of around 0.21 (not the usual 0.41) and is SINGLE BAND. The clever encode is more interesting: it wiggles the phase constant between roughly 0.21 and 0.41 depending on the amount of surround versus stereo content. For heavy surround it sits around 0.41, for vanilla stereo around 0.21, and everything in between, and it is TRIBAND.

Oh, the 0.21 constant ensures a minimum stereo OR surround separation of 12 dB, my magic number, which I claim is the maximum separation we can detect (blah blah, I know people disagree).
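As a back-of-envelope check only: if you treat the constant purely as an amplitude crosstalk ratio into the adjacent channel and ignore the phase terms of the real encode, the implied separation is 20*log10(1/k), which lands near the figures quoted here.

```python
import math

def separation_db(k):
    """Separation implied by crosstalk coefficient k relative to unity gain
    (pure amplitude-ratio view; the real encode's phase terms are ignored)."""
    return 20.0 * math.log10(1.0 / k)

print(round(separation_db(0.414), 1))  # ~7.7 dB, the familiar passive-matrix figure
print(round(separation_db(0.21), 1))   # ~13.6 dB, roughly the 12 dB quoted above
```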

This means that if you get 2, 3 or 4 simultaneous signals in hopefully different frequency bands (as is usually the case), then you can actively and correctly steer each signal to the right corner. If it were single band, everything would go to the corner that was dominant in the amplitude/frequency weighting. Fortunately, most music actually is a one-at-a-time thing, and in reality we can only discriminate one sound at a time within a similar frequency band; this saves the show for single-band stuff.

Anyway, with the tri-band you really get the best of both worlds: stereo lovers hear full separation on vanilla stereo, and us quadies get the full 35-40 dB separation on surround. This was used on the recent Suzanne Ciani quadraphonic recording. There are great advantages in tri-band, but in my humble opinion there are a lot of dangers if you go above 3 bands, though yes, you will get better separation numbers.
 
Last edited:
In short, as powerful as our DSP was, we simply ran out of space to fit all the above (and heaps more) towards the end of the software development. We had to throw something out to make the thing work. I elected to use just a passive matrix decode of the rear channels for the sub-300 Hz band, but full matrix "logic" for the front Left/Right.
In every product design, the engineer attempts to metaphorically put ten pounds of shit in a five-pound bag, and the price of double-density shit eventually becomes more than the market can bear.

Nonetheless, I have only heard of two times anyone criticised the SM. My only beef is that it's out of stock.
 
Start with the bass. Traditionally, lower frequencies were always mixed equally left/right in stereo records. This was done to reach the desired bass playback level without overdriving the mechanics of the cutter or the playback cartridge. Today, in digital, you can do whatever you want, but it's still usually mixed the same way, since two woofers are better than one for reaching a desired bass SPL in playback.
There were two reasons this was done:

1. The cutter for a phonograph record blank using the Westrex 45/45 system cannot cut deep bass at any angle except laterally. The cutter would leave the surface of the blank at any other angle, creating an overcut skip. So the deep bass is attenuated in the vertical direction, and the engineer is told to pan deep bass to center.

2. When two speakers reproduce the same deep bass note in phase, it reinforces the bass sound. It is almost as if you had a woofer with a diameter equal to the distance between the speakers. But if the speakers are not in phase with each other, the effect disappears.

In both RM and SQ, this modulation decodes to center front, and both front speakers are in phase, making the bass reinforcement effect work well. In BMX, UMX, and UD-4, this modulation comes out of all four speakers at equal levels but with different phases, so the effect is not as strong. In all of these cases, the bass must be cut this way to prevent overcutting.
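To make the lateral/vertical point concrete, here is a tiny Python sketch (illustrative numbers only) showing that a deep bass note panned dead centre produces no vertical (L minus R) groove modulation at all, so there is nothing left to overcut:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
bass = np.sin(2 * np.pi * 40.0 * t)   # a 40 Hz tone

left, right = bass, bass              # centre-panned: identical in both channels

lateral = (left + right) / 2.0        # side-to-side groove modulation (L+R)
vertical = (left - right) / 2.0       # up-and-down groove modulation (L-R)

print(np.max(np.abs(vertical)))       # 0.0 -> no vertical cut, so no overcut skip
print(np.max(np.abs(lateral)))        # 1.0 -> all the bass energy is cut laterally
```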
 