PDA

View Full Version : Is Good DAC Still Necessary With DSP Sound Processor?



azzd
08-07-2016, 05:59 PM
Currently, I am looking for a DSP speaker management processor for an ongoing speaker project. However, I am getting confused about the two-step digital-analog processing: assuming my main DAC has better outputs, its analog signal will be converted A/D and then D/A again by the DSP processor. In that case, is the main DAC still that important? What are the benefits of using it instead of connecting the CD player/computer to the DSP's digital input directly?

Any thought appreciated!

Regards,

AZZD

grumpy
08-07-2016, 07:11 PM
How many DSP processor outputs will you be using? If more than 2, and your DAC is 2ch, ... :hmm:

That said, bypassing a D/A-A/D step (e.g., between CD output and DSP input) is likely a good thing as long as you have volume control somewhere (the DSP's output DACs or your own standalone DAC). No way to know if the DSP output DAC is preferable (simpler connection/use and internal clock, vs. regenerated in your external and potentially higher-quality DAC). You might want to experiment.

azzd
08-07-2016, 07:23 PM
Thank you, grumpy! Your comment gives me more confidence to purchase a DSP processor with a digital input. I need eight outputs from the DSP processor. Volume control is not a problem since the digital signal comes from a computer; both the player and the Lynx PCI card have volume control.

grumpy
08-07-2016, 07:58 PM
Normally, digital outputs are not volume controlled. If the proposed DSP does this digitally, or after its internal DAC in the analog domain (before your amps), then you have a way to reduce the volume from full output. Be careful. Ask lots of questions before buying.

ivica
08-08-2016, 05:22 AM
Hi azzd,

I THINK that using any volume control before DSP processing is not a good solution. The best would be to use a digital source (such as a CD digital out, or a computer card digital out) and connect it directly to the DSP processor. The DSP would do all desired signal processing, and the attached DACs would convert the data into the desired number of analog channels. I THINK that volume control has to be done AFTER the DAC conversions, ideally at the power amplifier inputs. I am aware that it would not be too practical to manage, say, 8 analog pots. Some amount of output dynamics can be managed digitally, especially if DACs with a larger number of bits are used (24 or 32 bits per sample).
Problems with wideband noise or hiss can be annoying if high-efficiency drivers are connected to power amps with large gain, usually of the high-power type.

Interesting to read:
http://www.androidauthority.com/why-you-dont-want-that-32-bit-dac-667621/


regards
ivica

azzd
08-08-2016, 08:07 PM
Thanks again to grumpy and ivica! I had never thought of or known about the volume controls you both mentioned. Very helpful!!! I will check the webpage. Before making the final decision, I am testing whether I can get a good result by using an RME UC interface with JRiver DSP plugins as the DSP processor. As for the post-DAC volume control, do you think the RME's per-output volume control is before or after the DAC? If it is before the DAC, then I guess I should use some passive resistors between the RME and the amplifiers to control the volume, right?

Regards,

Yong

grumpy
08-08-2016, 08:54 PM
MSBtech made an MVC unit, ~$500 used.
SPL makes a Volume8 (still made).
Cirrus Logic makes a CS3318 chip (extreme DIY) and an eval board (needs a PS and box).
Studio equipment called 'monitor controllers' is probably overkill.

maybe ask RME what they recommend (perhaps saving a lot of fuss and $$$)

ivica
08-09-2016, 05:27 AM
Hi azzd,

Just as I HAVE UNDERSTOOD it:
If I want to reduce the signal level by 6dB in the digital domain, that is equal to dividing by 2, i.e. dropping one LSB; and if I want to reduce it by 24dB, that means dropping 4 bits, so instead of using, say, 16 bits, only 12 bits would be used... But if such an "operation" is done in the analog domain, either using a pot at the amp input or by reducing the DAC reference current or voltage (if a "multiplying" type of DAC is applied), all of the DAC's bit resolution would remain. I can imagine that a kind of "digital manipulation" can be done using an oversampling DAC, as it can behave as the equivalent of a DAC with more data bits than it really has...
I have no knowledge of what kind of level attenuation is applied in many DSP-driven active networks in order to provide proper output signal attenuation, say about 40~60dB.

regards
ivica

azzd
08-10-2016, 09:15 AM
Thanks again. Those are very useful volume controls. I didn't even know this kind of equipment existed besides passive preamps. I prefer the SPL to the MVC because the MVC uses RCA connectors. Will do more research. The good news for now is that I believe I can get by (at least for testing) with amps that have built-in volume control.:bouncy:

azzd
08-10-2016, 09:35 AM
Hi ivica,

I do appreciate your kind and detailed explanation! It helps me understand this. I believe the RME channel volume control is in the digital domain. As I just realized, I could use the amplifiers' volume controls to do most of the volume control work, but I still need the digital side due to different recording levels. Better than nothing, though. My current main DAC, a dCS 954, does have volume control, and the factory states it will not affect the sound quality. So if I keep the dCS, an analog active crossover is also a potential solution. Doing and thinking.

Best regards,

azzd

sebackman
12-05-2016, 07:03 AM
Dear all,

I know this is an old thread but I thought I would add some comments. Sorry for the rather lengthy post.

The DAC process in general is less problematic than the ADC process. I doubt that an external DAC would improve the SQ enough to make it worthwhile compared to keeping the signal chain in the same unit, at least with current pro or semi-pro units. But I do understand and recognize that many audiophiles think DACs are important and argue they can hear very subtle changes. In my view, introducing an ADC in the DSP, processing, using the DSP's DAC for analogue out, and then a second ADC stage just to finally get to a different set of DAC chips would not make sense. This would also normally introduce three different clock rates: one at the source, one at the DSP, and one in the external DAC.

After having used many DSP units over the last 15 years, I have come to the conclusion that the HW is less restricting than the implementation of the SW. The best DSP engines, in my view, are those where you build your DSP signal path on a computer and then compile runnable code that you download to the DSP unit.

The second part that is important to me is that the manufacturer updates the SW and FW over time, so you can take advantage of new developments and improvements in algorithms as they emerge. A good implementation of algorithms does much more to safeguard good SQ than HW.

A few thoughts:
- Most pro or semi-pro DSP units are as good as any other consumer digital audio equipment. There is, in my mind, no need for a separate DAC. I believe that differences can potentially be heard, but in a DBT it would be very difficult to pick one from the other. I think when testing we often hear shifts in level and perceive them as SQ, as the ear is VERY sensitive to level and less so to FS.
Just for good housekeeping, I do respect that many fellow audiophiles may have experience and views that deviate from my personal view.
- The ADCs in many units today are so good that they do not harm the SQ in any meaningful way compared to other parts of the chain.
- Always make certain that the ADC process has a decent input signal level to work with. If it is too low, the quantization noise may be audible. A classic trick is to turn down the gain on the power amps and turn up the preamp into the DSP so that even the low sounds have reasonable input amplitude. Most DSPs have ample headroom internally, so this should not be a problem. Clipping normally first occurs in the analogue output card, so it can be a good idea to keep an eye on levels there.
- The advantage of a DSP outweighs the reduction in SQ by >10 times, in my view. No speakers are perfect and few rooms are. Even JBL has caved in, and almost all new speakers (Tour, PA, Studio) are DSP enhanced. The DSP simply gives you access to improvements that are impossible in the passive world.
- The speakers and the room will always be the weakest link. If you move your speakers 5 inches, the sound will change more than any change or upgrade of any HW in your sound chain can achieve. I think phase coherence and an excellent balance between direct and indirect sound are what make the M2s so well received everywhere. It would be interesting to hear a DBT between the 4367 and the M2 (passive vs active), but I suspect the M2 would come out on top.
- I think in most setups (as with the mighty JBL M2 Studio Monitor) you should be just fine with a separate DSP with analogue in and out, without a separate DAC.
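The gain-staging point above (give the ADC a decent input level) can be illustrated with a toy experiment in pure Python: an idealized 16-bit quantizer, so the figures are illustrative, not a measurement of any real unit.

```python
import math

def snr_after_16bit_adc(level_dbfs):
    """SNR (dB) of a 997 Hz sine quantized to 16 bits at a given input level."""
    amp = 10 ** (level_dbfs / 20)
    sig = noise = 0.0
    for n in range(48000):                       # one second at 48 kHz
        x = amp * math.sin(2 * math.pi * 997 * n / 48000)
        q = round(x * 32767) / 32767             # ideal 16-bit quantizer
        sig += x * x
        noise += (q - x) ** 2
    return 10 * math.log10(sig / noise)

print(f"  0 dBFS input: ~{snr_after_16bit_adc(0):.0f} dB SNR")
print(f"-40 dBFS input: ~{snr_after_16bit_adc(-40):.0f} dB SNR")
```

Every dB of level lost before the converter is a dB lost from the signal-to-quantization-noise ratio, hence the trick of turning the amp gain down and the level into the DSP up.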


- If you do decide to run an “all digital show”, I suggest you keep the signal in its native format all the way, if you have good gear. The disadvantage of sample rate conversion (SRC) will often consume the advantage of going to more bits or a higher sample rate. Some gear seems to sound better at a higher sample rate, and that is probably due to construction and filter design. And if it does sound better to your ears, it does.
- I have both digital and analogue inputs to my DSP units, and the difference is negligible unless you keep the entire chain without any SRCs, on the same clock, and keep the sample rate native. In such an unbroken chain I think I can hear a little more detail, but in reality it can be just placebo, as I have never done a DBT... I can't measure the difference.
- Relatively few DSPs have digital outputs.


The following is just a rant about digital formats….

In my opinion, SRC is a bigger problem than many other issues in the digital domain, like DAC updates. Few digital consumer products (and some pro gear) have the possibility to use the same clock, and hence you need SRCs to sync the digital data between units if you send digital information. Right there is where it starts getting difficult, and this is (in my opinion) why we still see analogue feeds into DSPs and the like. With analogue in, we can avoid many difficulties, and the units become more universal/versatile with little to no negative SQ impact.

But if you do want to go “all digital”, here are some of my thoughts.

Sample rate converters (SRC) are there to make certain that the DAC/DSP (or whatever unit with a digital in) can receive any sample rate and bit depth, even though internally they always run at one clock rate that may deviate from the input signal. They strip out the clock that came with the signal and introduce the local clock. This is more or less impossible to avoid unless the units use the same clock. Pro gear does have a clock input, and in a modern studio all digital gear would normally be run from the same master clock, with all sample rate converters (SRC) turned off. That means that the entire studio runs at 24bit and 96kHz for recorded material. If external material is introduced, it has to be re-sampled (off-line re-sampling) to 24/96. Even with a very good clock (i.e., a femto clock), there will be “clicks and rattle” when clocks are unsynced. I have a good asynchronous USB sound card that produces more or less any bit depth/sample rate out over SPDIF or AES. It also has a high-precision XO femto clock. If I feed my DSP's digital SPDIF input using this very accurate clock, I can still hear artefacts from the clocks not being in sync when the SRC is turned off in the DSP.

This also becomes an issue as most commercially available music is encoded at 16bit/44.1kHz (Red Book), while most computer gear runs at multiples of 8kHz: 48 or 96kHz. We can keep 44.1kHz/16bit and be done with it, or alternatively SRC to a multiple of 8kHz. Some gear and SW can keep the original encoding, and that is in my experience the best solution with really good gear (more on that later).

If not, we have to sample rate convert from 44.1kHz to 48kHz, and that is difficult, as the only way to do this correctly is to up-sample to a common multiple of the two rates and then down-sample to the other sample rate. This means up-sampling into the MHz range and then back down to 48kHz. Very few units are capable of that on the fly, so instead we use algorithms to fill in the missing samples between the available 44.1k samples to produce the needed 48k samples. Going between the "computer formats" is less sensitive, so going from 48kHz to 96kHz and back may be done without any degradation, if done correctly. The reason is that the ratio is an exact factor of two, so the samples line up and nothing has to be recalculated. I have heard that some units may even introduce a new "average sample" in between two samples, but this is beyond my limited knowledge.
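The "common multiple" arithmetic for 44.1kHz to 48kHz can be checked in a couple of lines (Python 3.9+ for math.lcm):

```python
import math

f_in, f_out = 44_100, 48_000
common = math.lcm(f_in, f_out)          # lowest common intermediate rate
up, down = common // f_in, common // f_out

print(f"intermediate rate: {common} Hz "
      f"(up-sample by {up}, then down-sample by {down})")
```

The intermediate rate works out to 7,056,000 Hz, i.e. interpolate by 160 and decimate by 147, which is exactly why real-time converters use clever filter structures rather than literally running in the MHz range.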

Implemented in a good way, this should not affect SQ, but sometimes it does, with digital artefacts. In a studio, such SRC is typically done off-line, where a program re-calculates the sample rate, and that takes some time. It is the same process if a studio records at 24bit/96kHz (which is rather common): when converting to Red Book 16bit/44.1kHz (for a CD), the SRC is done off-line at or after mastering to reduce digital artefacts.

So why is everyone raving about high sample rates and their supposed superiority? First of all, different implementations of electronics and algorithms may very well sound different. Is this due to superior resolution in the material, 24bit being better than 16bit, or 192kHz better than 44.1kHz? I say no. You can't hear something that was not there to start with (if the original was Red Book CD 16bit/44.1kHz).

But you can design equipment that deals better with (sounds better to the ear at) high bit depth and/or high sample rate. In fact, it is often simpler and cheaper to construct the needed brick-wall filters at high sample rates, 96kHz/192kHz (or higher), than at a measly 44.1kHz. But add information it does not.

Hence a unit running at 192kHz after resampling can be perceived to sound better. But compared to a “perfect” non-resampling unit at 44.1kHz (with filters implemented as well as possible today, not as cheaply as possible :-)), there should be no sonic advantage whatsoever if the source is or has been Red Book 16bit/44.1kHz at any point in time. If the material is native 24bit/96kHz, it should sound best at that resolution/rate, but in reality it does not contain very much more information about the music. The deterioration, in my view, starts when SRC is introduced.

This is a huge thread that may be valuable for some to read. You really only have to read the first few pages, if of interest. The other article is also interesting.
http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded
https://people.xiph.org/~xiphmont/demo/neil-young.html

Of course there can be some additional dynamic information in the additional 8 bits (16 to 24), but this is mainly a way for studios to have "more than enough" headroom at all times. In real life, 24bit audio is not audible as such, and indeed not achievable with today's technology, as the circuit noise floor will mask anything beyond about 20 bits.
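The "about 20 bits" ceiling follows from the ideal quantizer SNR formula, 6.02·N + 1.76 dB; a quick tabulation (idealized figures, real converters fall somewhat short):

```python
def ideal_snr_db(bits):
    """Theoretical full-scale sine SNR of an ideal N-bit quantizer."""
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits} bits -> {ideal_snr_db(bits):.0f} dB")
```

24 bits implies about 146 dB of dynamic range, which no analogue circuitry reaches; roughly 120-125 dB, i.e. about 20 bits, is the practical ceiling.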

Many argue that 96/192kHz can technically produce a cleaner waveform, but in reality this is not true below the Nyquist frequency; the waveform is already perfect at a 44.1kHz sample rate, which is what digital audio was based upon from the outset. It can of course be technically improved with new and better chips, but with no or very limited sonic value to the human ear; albeit maybe with a large value to the human ego, not to be neglected :-). There are also few instruments with information above 20kHz, albeit some think otherwise, and I respect that even if I disagree. I can't hear 20kHz anymore, but I can hear a difference depending on whether I include my 045-1 tweeters above 15kHz or not. I cannot perceive any difference above 20kHz. I'm not saying that others cannot.

So why do we do it, this hunt for higher numbers? I think the simple answer is: because we can. And because we always want to improve and have the latest gear. The industry understood this years ago, and who can blame them.

A few words about volume control in the digital domain, if you use a digital input.
There may be a problem of resolution if the unit is stuck at 16 or 24 bits internally, as the only way to drop the sound level is to drop bits, and then you lose resolution by definition (ref. Ivica's post above). Most pro gear uses DSP processors that can run 41 bits internally, and if the volume control in the digital domain is implemented so that it uses the bits above 16 (24) to control level, this can be done almost without any deterioration of SQ. A way to check is to turn the output very low (in the DSP) on one channel, keep the other at a normal or high level, and reduce the gain on that power amplifier so both are at the same sound level. You will need a decent sound pressure level meter, or to measure the voltage out of the power amps, to be certain that both channels are at exactly the same level. Even the slightest difference will make the test useless. Listen to both channels using piano or voice to see if you can hear any deterioration on the DSP-attenuated channel.
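A toy integer sketch of why wide internal words help; this illustrates the principle only, not how any particular DSP implements its gain stage:

```python
SAMPLE = 12345          # a 16-bit PCM sample value
# Attenuate by 24 dB, i.e. divide by 16 (drop 4 bits)

# Stored back into a 16-bit word: the remainder is gone for good
narrow = SAMPLE // 16                 # 771
print(narrow * 16)                    # 12336, not 12345

# Kept in a wide word with 24 extra fractional bits: nothing is lost
wide = (SAMPLE << 24) // 16
print((wide * 16) >> 24)              # 12345, fully recovered
```

As long as the word stays wide until the final conversion, an attenuated signal still hands the output DAC far more than 16 significant bits.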

Alternatively, you can put a passive potentiometer/attenuator or a VCA unit on the analogue output of the DSP. This becomes more complicated in balanced multi-way systems, as you need to control many channels at the same time and with minimal difference in attenuation between the balanced legs of each channel, but there are solutions for that as well.

That is what I have done in my main system: I have two 8-channel balanced Burr-Brown VCA volume controls after my DSP with 16 channels out (5.2 / 3-way + 2 subs). This is really a “leftover” from when I used an older DSP where the volume control was not very well implemented. With my current DSP unit I would probably not include them today.

People seem to be less worried about the digital artefacts that result from sample rate converters in the chain and more intrigued by DAC quality. We happily let our computer or CD player up-sample to 96kHz or 192kHz without giving a thought to whether the Windows or iOS SW SRC really can keep up with the multi-10kUSD electronics and speakers later in the chain. I say go native :-).

In my view, the DAC process is pretty straightforward. Most DAC chips today are really good, and implementation does not seem to vary that much except for the really esoteric ones. There are no doubt audible differences between DACs, but not necessarily from the raw music data or the pure DAC process. Please remember that most freestanding DACs would start with a sample rate converter to re-clock the input before any DAC process can start.

Much on little

Kind regards
//RoB

Ed Zeppeli
12-05-2016, 07:50 PM
Thanks for the perspective Sebackman. Very good.

I am using my Sonos Connect, which as I understand it runs at 30dB native, which seems to give me enough headroom that I don't hear any audible artifacts at lower volumes when using the Sonos software volume control.

I go SPDIF out from the Sonos into the AES/EBU input of a dbx driverack venu360.

Best Regards and thanks again,

Warren

David Ketley
12-06-2016, 06:01 AM
Dearall,

I know this is an old thread but I thought I would add some comments. Sorry for the rather lengthy post.

The DAC process in general is less cumbersome than the ADC process. I doubt that an external DAC would improve the SQ enough to make them worthwhile compared to keeping the signal chain in the same unit, at least in current pro or semi-pro units. But I do understand and recognize that many audiophiles think the DAC’s are important and argue they can hear very subtle changes. In my view introducing an ADC in the DSP, processing, using the DAC process in the DSP for analogue out, a second ADC process to finally get to a different set of DAC chips would not make sense. This would also normally introduce the use of three different clock rates, one at the source, at the DSP and one in the external DAC.

After having used many DSP units the last 15 years I have come to that the HW is less restricting compared to the implementation of the SW. The best DSP engines in my view is where you build your DSP signal path on a computer and the compile a runnable code that you then down load to the DSP unit.

The second part that is important to me is that the manufacturer updates SW and FW overtime so you can take advantage of new development and improvements in algorithms as they emerge. A good implementation of algorithms do much more to safe guard good SQ than HW.

A few thoughts:
- Most pro- or semipro DSP units are as good as any other consumer digital audio equipment. There is in my mind no need for a separate DAC. I believe that differences can potentially be heard but in a DBT it would be very difficult to pick one from the other. I think when testing we often hear shifts in level and perceive them as SQ as the ear is VERY sensitive to level and less so to FS.
Just for good housekeeping, I do respect that many fellow audiophiles may have experience and views that deviate from my personal view.
- The ADC’s in many units today are so good today that they do not harm the SQ in any meaningful form compared to other parts of the chain.
- Always make certain that the ADC process have decent input signal level to work with. If it is too low the quantification noise may be audible. A classic trick is to turn down the gain on power amps and turn up the preamp into the DSP so even the low sounds have reasonable input amplitude. Most DSP's have ample head room internally so this should not be a problem. Clipping normally first occurs in the analogue output card so it can be a good idea to keep an eye on levels there.
- The advantage of a DSP overrules the reduction in SQ by >10 times in my view. No speakers are perfect and few rooms are. Even JBL has caved in and almost all new speakers (Tour, PA, Studio) are DSP enhanced. The DSP simply gives you access to improvements that are impossible in the passive world.
- The speakers and the room will always be the weakest link. If you move your speakers 5 inch the sound will change more than any change or upgrade of any HW in your sound chain can do. I think the fact that phase coherent and an excellent balance between direct and indirect sound makes the M2’s being so well received everywhere. It would be interesting to hear a DBT between 4367 and M2 (passive vs active) but I suspect that M2 would come out on top.
- I think in most setups (as in the mighty JBL M2 Studio Monitor) you should be just fine with a separate DSP with analogue in and out without a separate DAC.


- If you do decide to run an “all digital show”, I suggest you keep the signal native format all the way if you do have good gear. The disadvantage of sample rate conversion (SRC) will often consume the advantage of going more bits or higher sample rate. Some gear seems to sound better at higher sample rate and that is probably due to construction and filter design. And if it does sound better to your ears, it does.
- I have both digital and analogue input to my DSP units and the difference is neglectable unless you keep the entire chain without any SRC’s, on the same clock and keep the sample rate native. In such un-broken chain I think I can hear a little more detail but in reality it can be just placebo as I have never done a DBT….. I can’t measure the difference.
- Relatively few DSP’s do have digital output.


The following is just a rant about digital formats….

In my opinion SRC is a bigger problem than many other issues in the digital domain, like DAC updates. Few digital consumer (and some pro gear) products have the possibility to use the same clock and hence you need SRC’s to sync the digitaldata between units if you send digital information. And right there is when it starts getting difficult and this is (in my opinion) why we still see analogue feed into DSP’s and alike. With analogue in we can avoid many difficulties and the units become more universal/versatile with little to no negative SQ impact.

But if you do want to go “all digital” here is some of my thoughts.

Sample rate converters (SRC) are there to make certain that the DAC/DSP (or whateverunit with digital in) can receive any sample rate and bit depth even if they internally always run at one clock rate that may deviate from the input signal. They strip out the clock that came in the signal and introduce the local clock. This is more or less impossible to avoid unless the units use the same clock. Pro- gear does have a clock input and in a modern studio all digital gear would normally be run from the same master clock and all sample rate converters (SRC) would be turned off. That means that the entire studio runs 24bit and 96kHz for recorded material. If external material would be introduced it would have to be re-sampled (off line re-sampling) to 24/96. Even with a very good clock (ie. femto clock), there will be “clicks and rattle” when clocks are un-synced. I have a good asynchronous USB sound card that produces more or less any bit rate/ sample rate out by SPDIF or AES. It also has a high precision XO femto clock. If I feed my DSP digital SPDIF input using this very accurate clock I still can hear artefacts from the clock not being in synch when the SRC is turned off in the DSP.

This also becomes an issue as most commercially available music is encoded with 16bit/44,1kHz (Red Book) while most computer gear is run on multiples of 8, 48or 96Khz. We can keep 44,1kHz/16bit and be done with it or alternatively SRC to a multiple of 8. Some gear and SW can keep the original encoding and that is in my experience the best solution with really good gear (more of that later).

If not, we have to sample rate convert from 44,1kHz to 48Khz and that is difficult as the only way to do this correct is to up-sample to the product of them and then down-sample to the other sample rate. This means up-sample to Mhz-range and then down to 48Khz. Very few units are capable of that on the fly so instead we use algorithms to fill the missing samples between the available 44,1k samples to the needed 48k samples. Going between the "computer formats" is less sensitive so going from 48kHz to 96kHz and back may be done without any degradation, if done correctly. The reason is that you just add and remove identical sampels to get to where you what to be , nothing have to be recalculated. I have Heard that some may even introduce a new "average sample" inbetween two samples but this is beyond my limited knowledge.

Implemented in a good way this should not affect SQ but sometimes it does with digital artefacts. In a studio such SRC is typically done off-line where a program re-calculates the sample rate and that takes some time. It is the same process if a studio records at 24bit 96kHz (which is rather common) then when converting to RedBook 16bit/44,1kHz (for a CD) the SRC is done off-line at or after mastering to reduce digital artefacts.

So why are everyone ranting about high sample rates and the superiority there of? First of all, different implementation of electronics and algorithms may very well sound different. Is this due to superiority in resolution in the material used as being better with 24bit compared to 16bit or 192kHz compared to 44,1kHz? I say no. You can’t hear something that was not there to start with (if the original was Red Book CD 16bit/44,1kHz).

But you can design equipment that is better to deal with (sound better to the ear) highbit rate and/or high sample rate. In fact it is often simpler and cheaper to construct the needed brick wall filters at high frequencies 96kHz/192kHz (or higher) than at measly 44,1kHz. But adding information it does not.

Hence it can be perceived to sound better with a unit running 192kHz after resampling. But compared to a “perfect” non-resampling unit (filters being implemented as good as possible today, not as cheap as possible J ) for 44,1kHz there should be no sonic advantage what so ever if the source is or has been Red Book 16bit/44,1kHz at any point in time. If the material is native 24bit/96kHz, it should sound best at that resolution/rate but in reality it does not contain very much more information about the music. The deterioration in my view starts when SRC is introduced.

This is a huge thread that may be valuable to read for some. You really only have to read the first few pages, if of interest. The other article is also interesting.
http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded (http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded)
https://people.xiph.org/~xiphmont/demo/neil-young.html (https://people.xiph.org/~xiphmont/demo/neil-young.html)

Of course there can be some additional dynamic information in the additional 8 bits (16 to 24), but this is mainly a way for studios to have "more than enough" headroom at all times. Real-life 24bit audio is not audible, and indeed not achievable with today’s technology, as the circuit noise floor will mask anything beyond about 20 bits.
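The ~20-bit limit can be sanity-checked with the textbook dynamic-range formula for an ideal N-bit quantizer. A rough sketch, ignoring dither and noise shaping:

```python
# Theoretical full-scale sine SNR of an ideal N-bit quantizer:
# DR ~= 6.02*N + 1.76 dB.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print(f"{bits} bit: {dynamic_range_db(bits):.1f} dB")
# 16 bit: 98.1 dB, 20 bit: 122.2 dB, 24 bit: 146.2 dB
```

Since ~122 dB already exceeds what analogue output stages achieve, the bottom bits of a 24-bit word sit below the circuit noise floor, as stated above.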

Many argue 96/192kHz can technically produce a cleaner waveform, but in reality this is not true: below the Nyquist frequency the reconstruction is already perfect at a 44.1kHz sample rate, which is what digital audio was based upon from the outset. It can of course be technically improved with new and better chips, but with no or very limited sonic value to the human ear. -Albeit maybe with a large value to the human ego, not to be neglected :-). There are also few instruments with information above 20kHz, albeit some think otherwise, and I respect that even if I disagree. I can’t hear 20kHz anymore, but I can hear a difference depending on whether I include my 045-1 tweeters above 15kHz or not. I cannot perceive any difference above 20kHz. I’m not saying that others cannot.

So why do we do it, this hunt for higher numbers? I think the simple answer is because we can. And because we always want to improve and have the latest gear. The industry understood this years ago, and who can blame them.

A few words about volume control in the digital domain if you use digital input.
There may be a problem of resolution if the unit is stuck with 16 or 24 bits internally, as the only way to drop the sound level is to drop bits, and then you lose resolution by definition (ref. Ivica’s post above). Most pro gear uses DSP processors that can run 41 bits internally, and if the volume control in the digital domain is implemented so that it uses the bits beyond 16 (24) to control level, this can be done almost without any deterioration of SQ. A way to check is to turn the output very low (in the DSP) on one channel, keep the other at normal or high level, and reduce the gain on that power amplifier so both are at the same sound level. You will need a decent sound pressure level meter, or to measure the voltage out from the power amp, to be certain that both channels are at exactly the same level. Even the slightest difference will make the test useless. Listen to both channels using piano or voice to see if you can hear any deterioration on the DSP-attenuated channel.
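The arithmetic behind that test is simple: each ~6.02 dB of digital attenuation in a fixed-width path costs one bit of resolution. A small illustrative sketch (the function names are mine):

```python
# Each bit of a fixed-point word is worth ~6.02 dB, so attenuating digitally
# in a path that stays at its original width simply discards low-order bits.
def bits_lost(attenuation_db: float) -> float:
    return attenuation_db / 6.02

def effective_bits(word_bits: int, attenuation_db: float) -> float:
    return max(word_bits - bits_lost(attenuation_db), 0.0)

print(f"{effective_bits(16, 48):.1f}")  # ~8.0 bits left: audibly degraded
print(f"{effective_bits(41, 48):.1f}")  # ~33.0 bits left: still beyond audibility
```

This is why a 41-bit internal path can attenuate heavily and still hand the output DAC more than 24 meaningful bits.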

Alternatively you can put a passive potentiometer/attenuator or a VCA unit on the analogue output from the DSP. This becomes more complicated in balanced multi-way systems, as you need to control many channels at the same time, with minimal shift in attenuation between the balanced legs in each channel, but there are solutions for that also.

That is what I have done in my main system: I have two 8-channel balanced Burr-Brown VCA volume controls after my DSP with 16 channels out (5.2 / 3-way + 2 subs). This is really a “leftover”, since I used an older DSP where the volume control was not very well implemented. With my current DSP unit I would probably not include them today.

People seem to be less worried about the digital artefacts that result from sample rate converters in the chain, and more intrigued by DAC quality. We happily let our computer or CD player up-sample to 96kHz or 192kHz without giving a thought to whether the Windows or iOS software SRC can really keep up with the multi-10kUSD electronics and speakers later in the chain. I say go native :-).

In my view the DAC process is pretty straightforward. Most DAC chips today are really good, and implementations do not seem to vary that much except for the really esoteric ones. There are no doubt audible differences between DACs, but not necessarily from the raw music data or the pure DAC process. Please remember that most freestanding DACs start with a sample rate converter to re-clock the input before any DAC process can begin.

Much on little

Kind regards
//RoB

Firstly, I have to say my knowledge of the digital domain is limited; all I can do is read the experts' advice and try to extrapolate from there.
My own experience with DACs is that they are the main thing governing the quality of the sound, other than the speakers themselves.
I run a 4-way system, computer-sourced via USB to the DAC: not the very top model, but a Lampizator Level 4 DAC. From there I feed a Mod Squad passive preamp into a Marchand XM44 active crossover.
All digital levels are set full on.
The speakers are physically time-aligned.
Because of this I have avoided digital signal processors.
Trouble is, I would like to try DSP, but I would require 3 digital outputs and 3 DACs. Any other solution?
Regards
Dave

Ducatista47
12-07-2016, 07:04 AM
Of course there can be some additional dynamic information in the additional 8 bits (16 to 24), but this is mainly a way for studios to have "more than enough" headroom at all times. Real-life 24bit audio is not audible, and indeed not achievable with today’s technology, as the circuit noise floor will mask anything beyond about 20 bits.

Many argue 96/192kHz can technically produce a cleaner waveform, but in reality this is not true: below the Nyquist frequency the reconstruction is already perfect at a 44.1kHz sample rate, which is what digital audio was based upon from the outset. It can of course be technically improved with new and better chips, but with no or very limited sonic value to the human ear. -Albeit maybe with a large value to the human ego, not to be neglected :-). There are also few instruments with information above 20kHz, albeit some think otherwise, and I respect that even if I disagree. I can’t hear 20kHz anymore, but I can hear a difference depending on whether I include my 045-1 tweeters above 15kHz or not. I cannot perceive any difference above 20kHz. I’m not saying that others cannot.

So why do we do it, this hunt for higher numbers? I think the simple answer is because we can. And because we always want to improve and have the latest gear. The industry understood this years ago, and who can blame them.

A voice of sanity. I involuntarily laugh or facepalm, depending on my mood, whenever someone tries to improve on the theoretical work of Harry Nyquist. Those who consider themselves Golden Ears get pretty defensive when faced with engineering facts. At that point what follows reminds me of fairy tales.

As for digital volume controls in DACs throwing signal away, the one I use will if I attenuate more than, I think, 55dB. Who turns down their system 55dB?

sebackman
12-07-2016, 01:11 PM
Dear David,

It looks to be a fine piece of equipment, and if it sounds good to you, it is good.
Using different gear may produce a different sound, which by itself does not imply that your gear or the alternative is better or worse. All gear alters the signal in one way or another; the trick is to find an “alteration” that suits your ears. -And that may or may not be the combination of gear with the overall lowest sound alteration/coloration. :-) The important part is that you like the sound your combination of gear produces.

If you are open to alternative information, first of all I would check that your USB sound card (DAC) is truly asynchronous; it is not possible to tell from the web page. Info on the implications can be found here.
http://www.audioresearch.com/ContentsFiles/DAC8_white_paper.pdf (http://www.audioresearch.com/ContentsFiles/DAC8_white_paper.pdf)
http://www.hifi-advice.com/USB-synchronous-asynchronous-info.html (http://www.hifi-advice.com/USB-synchronous-asynchronous-info.html)

In my experience, when using a reasonable computer (not an esoteric, dedicated high-tech one), the asynchronous protocol does make a difference. -More than any of my DACs. I personally cannot attribute noticeable sound deterioration to any of the DACs I have if the rest of the chain is correct. Not by listening or measuring. I do happen to have digital out from some of my DSPs, both SPDIF, DSD and BLU-LINK, and I cannot hear a difference by just changing the DAC chip. Sorry. (Digital out and separate DAC.) However, I’m not saying that others cannot.

My recipe is as Einstein said: “keep it as simple as possible, but not simpler”. :-)

Get a good asynchronous sound card with an XO clock and a good digital out. Set the computer to output the native format. Feed your DSP a digital signal (or analogue) and use the analogue outs on the DSP. If your DSP has a clock input or output, do connect the sound card and DSP to share the clock, and turn off the SRC.

My favorite right now, as I’m using only BSS DSPs, is a nice new little device from BSS called the BLU-USB. It is an asynchronous sound card that feeds the BSS DSP with a proprietary digital signal (BLU-LINK), but the neat thing is that there are no SRCs anywhere, and the BLU-USB uses the clock in the DSP, so they are always in sync. The signal path is short and sweet. But that is for a different post.

Kind regards
//Rob

Ian Mackenzie
12-07-2016, 06:03 PM
Rob's last post is good advice.

speakerdave
12-11-2016, 04:06 PM
For me, and possibly for anyone else not using DSP, the idea of getting an outboard DAC is more about the questionable implementation of analog output sections in consumer CD players. When my Philips SA1000 died and I went to a Denon, it was no doubt a step backward sonically. A Bryston DAC brought the life back, except, of course, for SACDs.

Mr. Widget
12-11-2016, 05:13 PM
A few thoughts on subjects covered in this thread:

1. In my opinion the most significant audible differences in DACs have more to do with clocking and the analog output than the chip sets and digital topologies used.

2. In my experience sample rate conversions are almost always a bad idea and in many DACs upsampling isn't a great idea either.

3. If you are using a DSP, you should be concerned with your A-to-D conversion too, unless everything in your system is already digital.


Widget

David Ketley
12-12-2016, 02:22 AM
Speakers, with their electro-mechanical interface, must be the weakest link in the chain; in the analogue domain, so is the record-player interface.

In the digital domain things seem a bit more blurred: there is the method of outputting the digital signal (computer, streamer, any EQ, DSP etc.) and then the quality of the DAC.

As usual everything we put in the signal path makes a difference and measuring and modifying the signal to get a straight line output does not always give the desired result.

My own experience is that the DAC is fundamental to the sound of a system as this is the component that changes the digital signal to an analogue output. The good thing for us is there are people who devote their lives to getting the best possible sound, in fact many are obsessed.
For someone with limited knowledge and limited budget it’s a nightmare trying to sort out the utter rubbish from the actual facts.
I have ended up with a Level 4 Lampizator DAC, non-DSD, and I’m sure there are many such units out there.

My dilemma: is it possible to obtain an all-digital system, including the crossover, and still use the DAC of one's choice without having to purchase multiple units? That would be well out of my budget.

It’s easy to forget that what sounded good last year still sounds good now.

Dave

badman
05-16-2019, 11:40 AM
A voice of sanity. I involuntarily laugh or facepalm, depending on my mood, whenever someone tries to improve on the theoretical work of Harry Nyquist. Those who consider themselves Golden Ears get pretty defensive when faced with engineering facts. At that point what follows reminds me of fairy tales.

As for digital volume controls in DACs throwing signal away, the one I use will if I attenuate more than, I think, 55dB. Who turns down their system 55dB?

Reviving a Zombie Thread since this is some of the most lucid thinking I've seen on the subjects. Great stuff guys.

I am reintegrating some DSP, specifically a DBX Venu360, into my rig. The above conversation has been on point for the most part, but I want to address this 55dB issue-

I regularly turn down my system 55dB or more, and have spent a lot of time with attenuation in various forms trying to maintain resolution and low noise. I've made transformer volume controls of multiple types, as well as buffers, traditional preamps and resistive attenuators. For the DSP, my input levels will be exceedingly low if I don't work around that issue. The problem is that high efficiency speakers, coupled with powerful and sensitive home amplifiers (without any input trimmers) will operate in home spaces at very low output relative to their maximum. I have some 200-500 "real" watts per driver available and most of my drivers are generally 97dB or more, in a modest space. I prefer the sound of big efficient systems (as we tend to around here), but it means that typical gain structures are way too much.

The solution is well known to many in the pro/venue space, where it's generally a mixer or the like feeding the DSP. Input trimmers on amplifier inputs allow you to structure your gain differently, so that you can avoid pushing against either the noise/resolution floor or potential clipping. The former is usually able to be fixed easily by adjusting the trimpots on amplifiers downward and the mixer master volume upward, leading to a higher voltage throughput on the DSP, and better noise and quantization performance. If you're clipping the mixer to get that voltage, you done gone too far, and you need to back off the mixer volume and up the trims on the amps.
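To put numbers on why the attenuation gets so large, here is a rough back-of-envelope sketch; the sensitivity, power, and distance figures below are illustrative, not measurements of my system:

```python
import math

# Max SPL at the listening seat for a direct radiator, free-field
# approximation: sensitivity (dB/1W/1m) + 10*log10(watts) - 20*log10(distance).
def max_spl(sensitivity_db: float, watts: float, distance_m: float) -> float:
    return sensitivity_db + 10 * math.log10(watts) - 20 * math.log10(distance_m)

peak = max_spl(97, 300, 3)   # 97 dB/W/m driver, 300 W amp, 3 m away
print(f"{peak:.1f}")         # ~112.2 dB SPL available
print(f"{peak - 80:.1f}")    # ~32.2 dB cut just to reach a loud 80 dB SPL
```

Add recording headroom and genuinely quiet late-night listening on top of that, and 55 dB or more of total attenuation is easy to reach.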

So for me, I've spent the last day deciding around some of what I'll do to correct the issue- I've been waffling between two options:



1. Simple DIY resistor networks, either as part of cables or wired on the input XLRs of the amps
2. Having someone else do it for me- https://naiant.com/custom_audio_reproduction_equipment/inline-devices/, either as an adapter attenuator or built into some cable


There are cheap XLR attenuators on the market, but they do not seem to be of sufficient quality for my taste, with many designed around 600-ohm operation (too low) or not properly balanced (an L-pad on just one leg of a balanced connection will lower the volume at a differential input, but isn't ideal). Naiant doesn't seem to suffer from any of those issues, and the adjustable units are very, very cool.

One can also change the gain structure as another way down that path, but the point is, where your gain takes place in a system with an ADC is very important, and 55dB isn't an unheard-of level of attenuation.
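For anyone going the DIY route above, the classic matched-impedance version of such a pad is the balanced H-pad (a T-pad with the series resistance split across both legs). A sketch with illustrative values; note it assumes matched source and load impedances, whereas typical hi-fi bridging connections (low-Z out into high-Z in) often just use a simple series/shunt divider instead:

```python
# Balanced H-pad: four equal series arms (two per leg) and one bridging shunt.
# Standard matched-impedance formulas with K = 10^(loss_dB/20).
def h_pad(z_ohms: float, loss_db: float) -> tuple[float, float]:
    k = 10 ** (loss_db / 20)
    r_series = (z_ohms / 2) * (k - 1) / (k + 1)  # each of the four series arms
    r_shunt = 2 * z_ohms * k / (k * k - 1)       # the single bridging resistor
    return r_series, r_shunt

rs, rp = h_pad(10_000, 20)   # 20 dB pad at 10 kohm
print(round(rs), round(rp))  # 4091 2020
```

Nearest standard E96 values get you within a fraction of a dB, which is plenty for this purpose.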

David Ketley
05-16-2019, 12:38 PM
Reviving a Zombie Thread since this is some of the most lucid thinking I've seen on the subjects. Great stuff guys.

I am reintegrating some DSP, specifically a DBX Venu360, into my rig. The above conversation has been on point for the most part, but I want to address this 55dB issue-

I regularly turn down my system 55dB or more, and have spent a lot of time with attenuation in various forms trying to maintain resolution and low noise. I've made transformer volume controls of multiple types, as well as buffers, traditional preamps and resistive attenuators. For the DSP, my input levels will be exceedingly low if I don't work around that issue. The problem is that high efficiency speakers, coupled with powerful and sensitive home amplifiers (without any input trimmers) will operate in home spaces at very low output relative to their maximum. I have some 200-500 "real" watts per driver available and most of my drivers are generally 97dB or more, in a modest space. I prefer the sound of big efficient systems (as we tend to around here), but it means that typical gain structures are way too much.

The solution is well known to many in the pro/venue space, where it's generally a mixer or the like feeding the DSP. Input trimmers on amplifier inputs allow you to structure your gain differently, so that you can avoid pushing against either the noise/resolution floor or potential clipping. The former is usually able to be fixed easily by adjusting the trimpots on amplifiers downward and the mixer master volume upward, leading to a higher voltage throughput on the DSP, and better noise and quantization performance. If you're clipping the mixer to get that voltage, you done gone too far, and you need to back off the mixer volume and up the trims on the amps.

So for me, I've spent the last day deciding around some of what I'll do to correct the issue- I've been waffling between two options:



1. Simple DIY resistor networks, either as part of cables or wired on the input XLRs of the amps
2. Having someone else do it for me- https://naiant.com/custom_audio_reproduction_equipment/inline-devices/, either as an adapter attenuator or built into some cable


There are cheap XLR attenuators on the market, but they do not seem to be of sufficient quality for my taste, with many designed around 600-ohm operation (too low) or not properly balanced (an L-pad on just one leg of a balanced connection will lower the volume at a differential input, but isn't ideal). Naiant doesn't seem to suffer from any of those issues, and the adjustable units are very, very cool.

One can also change the gain structure as another way down that path, but the point is, where your gain takes place in a system with an ADC is very important, and 55dB isn't an unheard-of level of attenuation.

I now use a Lightspeed attenuator as my main control and noticed an immediate increase in bass control; I guess a better impedance match. Now I am going to contradict myself: my Lampizator DAC went faulty, so in desperation I hooked the system up directly to the computer, and I am amazed at the sound it's producing. It is a high-end laptop, but it's not supposed to sound that good! Another problem: one of the amplifiers also decided it had had enough and started smoking. Again, in desperation to have something to listen to, I connected my old Denon RCD M37DAB to drive the compression drivers. Another revelation: I had never heard such clarity from the top end. My old Yamaha M60 amp is going up for sale, but as it's switchable Class A it's going to be difficult to replace. Maybe try a Chinese offering?

badman
05-16-2019, 01:03 PM
I now use a Lightspeed attenuator as my main control and noticed an immediate increase in bass control; I guess a better impedance match. Now I am going to contradict myself: my Lampizator DAC went faulty, so in desperation I hooked the system up directly to the computer, and I am amazed at the sound it's producing. It is a high-end laptop, but it's not supposed to sound that good! Another problem: one of the amplifiers also decided it had had enough and started smoking. Again, in desperation to have something to listen to, I connected my old Denon RCD M37DAB to drive the compression drivers. Another revelation: I had never heard such clarity from the top end. My old Yamaha M60 amp is going up for sale, but as it's switchable Class A it's going to be difficult to replace. Maybe try a Chinese offering?

You may be doing that very human thing of "new"="improved". It's very common in this hobby to hear a change as an improvement by default, which is part of why people seem to always upgrade over and over and over, even though at each point they tell themselves "I'm done, this is amazing".

Regarding amps, I don't know your budget, but I am a serious advocate of Hypex NCore amplification. Nord Acoustics is an excellent and fairly priced source for pre-built units, in any configuration you can imagine. It's not cheap stuff, but it's world-class amplification.

sebackman
06-12-2019, 02:40 AM
Hi,

Not knowing what your signal chain looks like: in general it is a bad idea to introduce "resistors" into the signal chain unless the source is very low impedance and the receiver is high impedance. If the source is "pro", i.e. <600 ohms output, it is usually fine within limits. I concur with previous posts that you should keep the level into the DSP (if analogue) as high as possible to maintain a decent S/N, and that may pose a problem later in the chain if there are no input trimmers, as stated.

Is this 2-channel or multi-channel, with active or passive speakers? Are you running balanced cabling?

There are several attenuator alternatives in the "pro" market, both passive and active.

I'm on a fully digital chain with separate multi-channel Burr-Brown VCAs after the DACs, but the newer BSS units do provide a good attenuator function that can be controlled from any iPhone or iPad. They use 41 bits internally, which provides retained resolution even when reducing volume in the digital domain. Most other DSPs do not. In fact, many of them are downright poor at volume control, as they do it in or before the DAC chip (usually 16 or 24 bit) and not in the DSP chip (often up to 41 bits FP). Some use an analogue VCA on the output, which is much better if correctly implemented. Or just a pot for gain control. BSS solved this by implementing attenuation as a DSP function, and it is just a nice feature that you can control it via an app (iPhone/iPad), or indeed even by a single analogue hard-wired pot attached to the rear of the BSS.

Even if that is not what you want to hear, I would sell the DBX and get a BSS BLU160 (or BLU100-103, which is fixed I/O), feed it digitally via a digital input card, or use the BSS BLU-USB, which is a very good asynchronous sound card with USB in and digital BSS BLU-LINK out (I have both). The beauty of BLU-LINK is that the BSS sound card will use the BSS BLU-XXX DSP clock via BLU-LINK, and as it is asynchronous, the potential limitation from the computer OS is limited. This would give you up to 256 channels out @48kHz. I would then use the BSS digital volume control, controlled via iPhone or iPad, and feed analogue signal to your power amps.

There is currently no other simple solution on the market today for a fully digital DSP/active crossover chain with a working volume control, at a reasonable price. There may be separate DACs on the market with good volume controls implemented (digital or analogue), but the problem will be to calibrate levels and to control all of them at the same time if there are more than 2 channels. I've seen some attempts, but not really any successful solutions.

The balanced Burr-Brown VCAs (2 x 8 channels) I'm using cost an arm and half a leg at the time, and were introduced before I went with BSS. Today I would use the BSS volume control to control volume to my 8+8+2 power amps (3-way active 5.2). That is also what JBL is using in the Mark Levinson M2 package (the JBL DSP SDSC is a re-badged BSS).

Kind regards
//Rob

Ian Mackenzie
06-13-2019, 11:39 PM
Hi Rob,

Are those BSS BLU160s (or BLU100-103) the ones that make so much fan noise they become annoying?

I recall this discussed in a thread sometime back.

sebackman
06-17-2019, 01:35 PM
Hi Ian,

Sure is. However, you can mount silent Noctua fans, or a slow 120mm fan, and be done. Even in a quiet living room.

Some use them without cover and the fans disconnected. Seems to work fine.

Alternatively, use a BLU50 without any fan and add BLU-BOBs / BLU-BIBs to get the desired number of output and input channels. No fans at all.

Kind regards
//Rob

Ian Mackenzie
06-18-2019, 12:32 AM
There must be an easier way surely.

sebackman
06-20-2019, 02:23 PM
I agree Ian,

The truth is that the HW is not important; the algorithms are. Having had the fortune to sample many DSPs, I would say that BSS is superior to most from a sonic perspective. You get what you pay for....

And if fan noise is a concern, buy the fanless BLU50.

kind regards
//Rob

Ian Mackenzie
06-21-2019, 05:34 PM
I agree Ian,

The truth is that the HW is not important; the algorithms are. Having had the fortune to sample many DSPs, I would say that BSS is superior to most from a sonic perspective. You get what you pay for....

And if fan noise is a concern, buy the fanless BLU50.

kind regards
//Rob

From my own research, the outcome is related to how the DSP is integrated into the loudspeaker design and hardware.

The “how” is the expertise, and this is the single biggest barrier to success in any DIY audio project or journey.

The DIY user might have skills in a few areas, but it's unlikely they will have expertise and practical experience in all the key deliverables, or even know what they are. Hence the question of this thread, and the rabbit hole that follows. The ultimate unknowns are: will the project get finished, and will it deliver the outcomes? That lack of certainty is where help is most needed.

An analogy is building a modern car, or an aircraft, on your own. The dude decides to buy the most elaborate mag wheels because his ego tells him that is what's important.

How many people can seriously say "I can do that", and how many would say "are you crazy 😜"?

Jonas_h
05-11-2020, 01:36 AM
Dear David,

It looks to be a fine piece of equipment, and if it sounds good to you, it is good.
Using different gear may produce a different sound, which by itself does not imply that your gear or the alternative is better or worse. All gear alters the signal in one way or another; the trick is to find an “alteration” that suits your ears. -And that may or may not be the combination of gear with the overall lowest sound alteration/coloration. :-) The important part is that you like the sound your combination of gear produces.

If you are open to alternative information, first of all I would check that your USB sound card (DAC) is truly asynchronous; it is not possible to tell from the web page. Info on the implications can be found here.
http://www.audioresearch.com/ContentsFiles/DAC8_white_paper.pdf (http://www.audioresearch.com/ContentsFiles/DAC8_white_paper.pdf)
http://www.hifi-advice.com/USB-synchronous-asynchronous-info.html (http://www.hifi-advice.com/USB-synchronous-asynchronous-info.html)

In my experience, when using a reasonable computer (not an esoteric, dedicated high-tech one), the asynchronous protocol does make a difference. -More than any of my DACs. I personally cannot attribute noticeable sound deterioration to any of the DACs I have if the rest of the chain is correct. Not by listening or measuring. I do happen to have digital out from some of my DSPs, both SPDIF, DSD and BLU-LINK, and I cannot hear a difference by just changing the DAC chip. Sorry. (Digital out and separate DAC.) However, I’m not saying that others cannot.

My recipe is as Einstein said: “keep it as simple as possible, but not simpler”. :-)

Get a good asynchronous sound card with an XO clock and a good digital out. Set the computer to output the native format. Feed your DSP a digital signal (or analogue) and use the analogue outs on the DSP. If your DSP has a clock input or output, do connect the sound card and DSP to share the clock, and turn off the SRC.

My favorite right now, as I’m using only BSS DSPs, is a nice new little device from BSS called the BLU-USB. It is an asynchronous sound card that feeds the BSS DSP with a proprietary digital signal (BLU-LINK), but the neat thing is that there are no SRCs anywhere, and the BLU-USB uses the clock in the DSP, so they are always in sync. The signal path is short and sweet. But that is for a different post.

Kind regards
//Rob

This is an old thread, but hoping it can be made active :)

You mention that with the BLU-USB there are no SRCs anywhere... But on my BLU160 you are forced to select either 48kHz or 96kHz in the configuration. Isn't it a requirement for BLU-LINK that all devices are set to the same sampling rate? So what happens if you play a 96kHz file on your computer? And then right after, a 48kHz file? (Or a 44.1 for that matter.) Won't it be re-sampled to whatever sampling rate you have chosen in the BSS? The clock will be shared, though, which is a good thing, but I can't see how you can bypass SRC?

EDIT: I will hopefully soon be able to use Dante and the question above is very relevant to me.