
Thread: Is Good DAC Still Necessary With DSP Sound Processor?

  1. #1 azzd (Member, Virginia, 36 posts)


    Currently, I am looking for a DSP speaker management processor for an ongoing speaker project, but I am getting confused about the two-step digital-analog processing: assuming my main DAC has better outputs, its analog signal will then be converted back to digital (A/D) and to analog again (D/A) by the DSP processor. In that case, how important is the main DAC? What are the benefits of using it instead of connecting the CD player/computer directly to the DSP's digital input?

    Any thoughts appreciated!

    Regards,

    AZZD

  2. #2 grumpy (Senior Member, SoCal, 5,338 posts)
    How many DSP processor outputs will you be using? If more than 2, and your DAC is 2ch, ...

    That said, bypassing a D/A then A/D step (e.g., between the CD output and the DSP input) is likely a good thing, as long as you have volume control somewhere (the DSP's output DACs or your own stand-alone DAC). There is no way to know in advance whether the DSP's output DAC is preferable (simpler connection/use and internal clock, versus a regenerated signal in your external and potentially higher-quality DAC). You might want to experiment.

  3. #3 azzd (Member, Virginia, 36 posts)
    Quote Originally Posted by grumpy View Post
    Thank you, grumpy! Your comment gives me more confidence to purchase a DSP processor with a digital input. I need eight outputs from the DSP processor. Volume control is not a problem since the digital signal comes from a computer; both the player and the Lynx PCI card have volume control.

  4. #4 grumpy (Senior Member, SoCal, 5,338 posts)
    Normally, digital outputs are not volume controlled. If the proposed DSP does this digitally, or after its internal DAC in the analog domain (before your amps), then you have a way to reduce volume from full output. Be careful; ask lots of questions before buying.

  5. #5 ivica (Senior Member, Serbia, 1,632 posts)
    Quote Originally Posted by azzd View Post
    Hi azzd,

    I THINK that using any volume control before DSP processing is not a good solution. The best would be to take a digital source (such as a CD player's digital out, or a computer card's digital out) and connect it directly to the DSP processor. The DSP would do all the desired signal processing, and the attached DACs would convert the data into the desired number of analog channels. I THINK volume control is best done AFTER the DAC conversion, ideally at the power amplifier inputs. I am aware that it would not be too practical to manage, say, 8 analog pots. Some amount of the output dynamic range can be managed digitally, especially if DACs with a larger number of bits are used (24 or 32 bits per sample).
    Wideband noise or hiss can be annoying if high-efficiency drivers are connected to power amps with large gain, as high-power amps usually have.

    Interesting to read:
    http://www.androidauthority.com/why-...it-dac-667621/


    regards
    ivica

  6. #6 azzd (Member, Virginia, 36 posts)

    Quote Originally Posted by ivica View Post
    Thanks again to grumpy and ivica! I had never thought of or known about the volume-control issues you both mentioned. Very helpful!!! I will check the webpage. Before making the final decision, I am testing whether I can get good results by using an RME UC interface with JRiver DSP plugins as the DSP processor. As for volume control after the DAC: do you think RME's per-output volume control happens before or after the DAC? If it is before the DAC, then I guess I should use some passive resistors between the RME and the amplifiers to control the volume, right?

    Regards,

    Yong

  7. #7 grumpy (Senior Member, SoCal, 5,338 posts)
    MSB Technology made an MVC unit (~$500 used).
    SPL makes the Volume8 (still made).
    Cirrus Logic makes the CS3318 chip (extreme DIY) and an eval board (needs a PS and a box).
    Studio equipment called 'monitor controllers' is probably overkill.

    Maybe ask RME what they recommend (perhaps saving a lot of fuss and $$$).

  8. #8 ivica (Senior Member, Serbia, 1,632 posts)
    Quote Originally Posted by azzd View Post
    Hi azzd,

    Just as I HAVE UNDERSTOOD it:
    If I reduce the signal level by 6 dB in the digital domain, that equals dividing by 2, i.e. dropping the LSB; reducing by 24 dB drops 4 bits, so instead of, say, 16 bits only 12 bits are used. But if the same "operation" is done in the analog domain, either with a pot at the amp input or by reducing the DAC reference current or voltage (if a "multiplying" type of DAC is used), all of the DAC's bit resolution is preserved. I can imagine that this kind of "digital manipulation" works better with an oversampling DAC, since it can behave like a DAC with more effective data bits than it really has.
    I have no knowledge of what kind of level attenuation is applied in many DSP-driven active networks to provide the proper output attenuation, say about 40~60 dB.
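    Ivica's rule of thumb can be put into numbers: every 20·log10(2) ≈ 6.02 dB of purely digital attenuation gives up one bit of resolution. A quick sketch (plain arithmetic; no particular DAC is assumed):

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ≈ 6.02 dB of level per bit of resolution

def bits_lost(attenuation_db: float) -> float:
    """Bits of resolution given up by attenuating purely in the digital domain."""
    return attenuation_db / DB_PER_BIT

# Halving the signal (-6 dB) costs about one bit; -24 dB costs about four,
# leaving roughly 12 effective bits of a 16-bit source.
print(round(bits_lost(6.0), 2))        # ~1 bit
print(round(bits_lost(24.0), 2))       # ~4 bits
print(round(16 - bits_lost(24.0), 1))  # ~12 effective bits remain
```

    Analog attenuation after the DAC avoids this loss entirely, which is why the amp-input pots are attractive despite the practical hassle.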

    regards
    ivica

  9. #9 azzd (Member, Virginia, 36 posts)
    Quote Originally Posted by grumpy View Post
    Thanks again. Those are very useful volume controls; I didn't even know this kind of equipment existed besides passive preamps. I prefer the SPL to the MVC because the MVC uses RCA connectors. I will do more research. The temporary good news is that I believe I can get just enough amps with built-in volume control (at least for testing).

  10. #10 azzd (Member, Virginia, 36 posts)
    Quote Originally Posted by ivica View Post
    Hi ivica,

    I do appreciate your kind and detailed explanation! It makes me understand this. I believe the RME channel volume control is in the digital domain. As I just realized, I could use the amplifiers' volume controls to do most of the volume-control work, but I still need the digital side because of different recording levels. Better than nothing, though. My current main DAC, a dCS 954, does have volume control, and the factory claims it will not affect sound quality. So if I keep the dCS, an analog active crossover is also a potential solution. Doing and thinking.

    Best regards,

    azzd

  11. #11 sebackman (Member, Europe, 493 posts)

    Much on little, DSP and digital audio

    Dear all,

    I know this is an old thread but I thought I would add some comments. Sorry for the rather lengthy post.

    The DAC process in general is less cumbersome than the ADC process. I doubt that an external DAC would improve the SQ enough to make it worthwhile compared to keeping the signal chain in the same unit, at least with current pro or semi-pro units. But I do understand and recognize that many audiophiles think DACs are important and argue that they can hear very subtle changes. In my view, introducing an ADC in the DSP, then processing, then using the DSP's DAC for analogue out, and then a second ADC stage just to finally reach a different set of DAC chips would not make sense. This would also normally introduce three different clock rates: one at the source, one in the DSP, and one in the external DAC.

    After having used many DSP units over the last 15 years, I have come to the conclusion that the HW is less restricting than the implementation of the SW. The best DSP engines, in my view, are those where you build your DSP signal path on a computer and then compile runnable code that you download to the DSP unit.

    The second part that is important to me is that the manufacturer updates the SW and FW over time, so you can take advantage of new developments and improvements in algorithms as they emerge. A good implementation of the algorithms does much more to safeguard good SQ than the HW.

    A few thoughts:
    - Most pro or semi-pro DSP units are as good as any other consumer digital audio equipment. There is, in my mind, no need for a separate DAC. I believe differences can potentially be heard, but in a DBT it would be very difficult to pick one from the other. I think when testing we often hear shifts in level and perceive them as SQ, as the ear is VERY sensitive to level and less so to FS.
    Just for good housekeeping: I do respect that many fellow audiophiles may have experience and views that deviate from my personal view.
    - The ADCs in many units today are so good that they do not harm the SQ in any meaningful way compared to other parts of the chain.
    - Always make certain that the ADC has a decent input signal level to work with. If it is too low, the quantization noise may be audible. A classic trick is to turn down the gain on the power amps and turn up the preamp into the DSP so that even the quiet passages have reasonable input amplitude. Most DSPs have ample headroom internally, so this should not be a problem. Clipping normally occurs first in the analogue output card, so it can be a good idea to keep an eye on levels there.
    - The advantages of a DSP outweigh the reduction in SQ by >10 times, in my view. No speakers are perfect, and few rooms are. Even JBL has caved in, and almost all new speakers (Tour, PA, Studio) are DSP-enhanced. The DSP simply gives you access to improvements that are impossible in the passive world.
    - The speakers and the room will always be the weakest link. If you move your speakers 5 inches, the sound will change more than any change or upgrade of HW in your sound chain can. I think the phase coherence and the excellent balance between direct and indirect sound are what make the M2s so well received everywhere. It would be interesting to hear a DBT between the 4367 and the M2 (passive vs active), but I suspect the M2 would come out on top.
    - I think in most setups (as with the mighty JBL M2 Studio Monitor) you should be just fine with a separate DSP with analogue in and out, without a separate DAC.


    - If you do decide to run an “all digital show”, I suggest you keep the signal in its native format all the way if you have good gear. The disadvantage of sample rate conversion (SRC) will often consume the advantage of going to more bits or a higher sample rate. Some gear seems to sound better at a higher sample rate, and that is probably due to construction and filter design. And if it does sound better to your ears, it does.
    - I have both digital and analogue inputs to my DSP units, and the difference is negligible unless you keep the entire chain free of SRCs, on the same clock, and at the native sample rate. In such an unbroken chain I think I can hear a little more detail, but in reality it could just be placebo, as I have never done a DBT... I can't measure the difference.
    - Relatively few DSPs have a digital output.
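    The gain-staging advice above (give the ADC a decent input level) can be quantified with the textbook SNR rule for an ideal quantizer. These are theoretical approximations for a full-scale sine, not measurements of any particular unit:

```python
def quantization_snr_db(bits: int, level_dbfs: float) -> float:
    """Approximate SNR of a sine through an ideal N-bit quantizer,
    recorded at level_dbfs relative to full scale (0 = full scale).
    Uses the textbook 6.02*N + 1.76 dB rule."""
    return 6.02 * bits + 1.76 + level_dbfs

# A full-scale 16-bit signal keeps quantization noise ~98 dB below the music.
print(round(quantization_snr_db(16, 0.0), 1))    # 98.1
# The same converter fed 40 dB too low has only ~58 dB of SNR left.
print(round(quantization_snr_db(16, -40.0), 1))  # 58.1
```

    Hence the trick of turning the power-amp gain down and the DSP input level up: the quiet passages land higher above the quantization floor.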


    The following is just a rant about digital formats….

    In my opinion SRC is a bigger problem than many other issues in the digital domain, like DAC updates. Few digital consumer products (and some pro gear) have the ability to use the same clock, and hence you need SRCs to sync the digital data between units if you send digital information. Right there is where it starts getting difficult, and this is (in my opinion) why we still see analogue feeds into DSPs and the like. With analogue in, we can avoid many difficulties, and the units become more universal/versatile with little to no negative SQ impact.

    But if you do want to go “all digital”, here are some of my thoughts.

    Sample rate converters (SRC) are there to make certain that the DAC/DSP (or whatever unit with a digital input) can receive any sample rate and bit depth, even though internally it always runs at one clock rate that may deviate from the input signal. They strip out the clock that came with the signal and introduce the local clock. This is more or less impossible to avoid unless the units use the same clock. Pro gear does have a clock input, and in a modern studio all digital gear would normally run from the same master clock with all sample rate converters (SRC) turned off. That means the entire studio runs at 24-bit/96 kHz for recorded material. If external material is introduced, it has to be re-sampled (off-line re-sampling) to 24/96. Even with a very good clock (i.e. a femto clock), there will be “clicks and rattle” when clocks are unsynced. I have a good asynchronous USB sound card that produces more or less any bit depth/sample rate out over SPDIF or AES, and it has a high-precision XO femto clock. Even feeding my DSP's SPDIF input from this very accurate clock, I can still hear artefacts from the clocks not being in sync when the SRC is turned off in the DSP.

    This also becomes an issue because most commercially available music is encoded at 16-bit/44.1 kHz (Red Book), while most computer gear runs at multiples of 8 kHz, i.e. 48 or 96 kHz. We can keep 44.1 kHz/16-bit and be done with it, or alternatively SRC to a multiple of 8. Some gear and SW can keep the original encoding, and that is in my experience the best solution with really good gear (more on that later).

    If not, we have to sample rate convert from 44.1 kHz to 48 kHz, and that is difficult: the only way to do this correctly is to up-sample to a common multiple of the two rates (their least common multiple) and then down-sample to the target rate. This means up-sampling into the MHz range and then back down to 48 kHz. Very few units are capable of that on the fly, so instead algorithms are used to interpolate the missing samples between the available 44.1k samples and the needed 48k samples. Going between the "computer formats" is less sensitive, so going from 48 kHz to 96 kHz and back may be done without any degradation if done correctly: you just add and remove identical samples to get where you want to be; nothing has to be recalculated. I have heard that some units may even introduce a new "average sample" in between two samples, but that is beyond my limited knowledge.
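    The 44.1 kHz to 48 kHz arithmetic can be checked directly: the exact intermediate rate is the least common multiple of the two rates, reached by up-sampling by 160 and down-sampling by 147. A sketch (standard library only; math.lcm needs Python 3.9+):

```python
from math import gcd, lcm

src, dst = 44_100, 48_000
g = gcd(src, dst)                 # 300 Hz, the largest common divisor
up, down = dst // g, src // g     # up-sample by 160, then down-sample by 147
common = lcm(src, dst)            # exact intermediate rate

print(up, down)   # 160 147
print(common)     # 7056000, i.e. about 7 MHz, hence the "MHz range"
```

    Real-time converters approximate this with polyphase interpolation filters rather than literally running at 7 MHz, which is where implementation quality (and the artefacts mentioned above) comes in.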

    Implemented in a good way, this should not affect SQ, but sometimes it does, with digital artefacts. In a studio, such SRC is typically done off-line, where a program re-calculates the sample rate, and that takes some time. It is the same process when a studio records at 24-bit/96 kHz (which is rather common): when converting to Red Book 16-bit/44.1 kHz (for a CD), the SRC is done off-line at or after mastering to reduce digital artefacts.

    So why is everyone ranting about high sample rates and their superiority? First of all, different implementations of electronics and algorithms may very well sound different. Is this due to superior resolution in the material, 24-bit being better than 16-bit, or 192 kHz better than 44.1 kHz? I say no. You can't hear something that was not there to start with (if the original was Red Book CD, 16-bit/44.1 kHz).

    But you can design equipment that deals better with (sounds better to the ear at) high bit depth and/or high sample rate. In fact, it is often simpler and cheaper to construct the needed brick-wall filters at 96 kHz/192 kHz (or higher) than at a measly 44.1 kHz. But add information it does not.

    Hence a unit running at 192 kHz after resampling can be perceived to sound better. But compared to a “perfect” non-resampling unit at 44.1 kHz (filters implemented as well as possible today, not as cheaply as possible :-) ), there should be no sonic advantage whatsoever if the source is or has been Red Book 16-bit/44.1 kHz at any point in time. If the material is native 24-bit/96 kHz, it should sound best at that resolution/rate, but in reality it does not contain very much more information about the music. The deterioration, in my view, starts when SRC is introduced.

    This is a huge thread that may be valuable for some to read; you really only have to read the first few pages if interested. The other article is also interesting.
    http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded
    https://people.xiph.org/~xiphmont/demo/neil-young.html

    Of course there can be some additional dynamic information in the additional 8 bits (16 to 24), but this is mainly a way for studios to have "more than enough" headroom at all times. Real-life 24-bit audio is not audible, and indeed not achievable with today's technology, as the circuit noise floor will mask anything beyond about 20 bits.
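    The roughly-20-bit ceiling follows from the usual ~6.02 dB-per-bit figure for an ideal converter's dynamic range. These are theoretical values, ignoring dither and analog noise:

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an ideal N-bit converter (~6.02 dB/bit)."""
    return 6.02 * bits

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB")
# 16-bit: ~96 dB, 20-bit: ~120 dB, 24-bit: ~144 dB.
# An analog noise floor around -120 dB masks everything past roughly 20 bits.
```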

    Many argue that 96/192 kHz can technically produce a cleaner waveform, but below the Nyquist frequency this is not true; it is already perfect at a 44.1 kHz sample rate. That is what digital audio was based upon from the outset. It can of course be technically improved with new and better chips, but with no or very limited sonic value to the human ear (albeit maybe a large value to the human ego, not to be neglected :-) ). There are also few instruments with information above 20 kHz, albeit some think otherwise, and I respect that even if I disagree. I can't hear 20 kHz anymore, but I can hear a difference depending on whether I include my 045-1 tweeters above 15 kHz or not. I cannot perceive any difference above 20 kHz. I'm not saying that others cannot.


    So why do we do it, hunt for higher numbers? I think the simple answer is: because we can, and because we always want to improve and have the latest gear. The industry understood this years ago, and who can blame them.

    A few words about volume control in the digital domain if you use digital input.
    There may be a resolution problem if the unit is stuck at 16 or 24 bits internally, as the only way to drop the sound level is to drop bits, and then you lose resolution by definition (ref. Ivica's post above). Most pro gear uses DSP processors that can run 41 bits internally, and if the volume control in the digital domain is implemented so that it uses the bits above 16 (or 24) to control level, this can be done almost without any deterioration of SQ. A way to check: turn the output very low (in the DSP) on one channel, keep the other at normal or high level, and reduce the gain on the power amplifier so both play at the same sound level. You will need a decent sound pressure level meter, or to measure the voltage out of the power amp, to be certain that both channels are at exactly the same level; even the slightest difference will make the test useless. Listen to both channels using piano or voice to see if you can hear any deterioration on the DSP-attenuated channel.
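    The wide-accumulator point can be sketched with plain integer arithmetic; the sample value and the 41-bit width below are only illustrative:

```python
sample = 0b1010_1100_1111_0001      # hypothetical 16-bit PCM sample

# Attenuate ~24 dB (>> 4) at 16-bit width: the low 4 bits are discarded.
narrow = sample >> 4
assert (narrow << 4) != sample      # the original can no longer be recovered

# Same attenuation inside a wider accumulator: promote first, then shift.
wide = (sample << 25) >> 4          # 16 + 25 = 41-bit working width
assert (wide >> 21) == sample       # every original bit survives
print("wide accumulator preserved all 16 bits")
```

    The output DAC still has to truncate or dither back to its physical width, but the level control itself no longer throws information away.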

    Alternatively, you can put a passive potentiometer/attenuator or a VCA unit on the analogue output of the DSP. This becomes more complicated in balanced multi-way systems, as you need to control many channels at the same time with minimal attenuation mismatch between the balanced legs of each channel, but there are solutions for that also.

    That is what I have done in my main system: I have two 8-channel balanced Burr-Brown VCA volume controls after my DSP, which has 16 channels out (5.2 / 3-way + 2 subs). This is really a “leftover”, since I used an older DSP where the volume control was not very well implemented. With my current DSP unit I would probably not include them today.


    People seem to be less worried about the digital artefacts that result from sample rate converters in the chain, and more intrigued by DAC quality. We happily let our computer or CD player up-sample to 96 kHz or 192 kHz without giving a thought to whether the Windows or iOS SRC software really can keep up with the multi-10k USD electronics and speakers later in the chain. I say go native :-).

    In my view the DAC process is pretty straightforward. Most DAC chips today are really good, and implementation does not seem to vary that much except for the really esoteric ones. There are no doubt audible differences between DACs, but not necessarily from the raw music data or the pure DAC process. Please remember that most freestanding DACs start with a sample rate converter to re-clock the input before any DAC process can begin.

    Much on little

    Kind regards
    //RoB
    The solution to the problem changes the problem.
    -And always remember that all of your equipment was made by the lowest bidder

  12. #12 Ed Zeppeli (Senior Member, Nanaimo, BC, 608 posts)
    Thanks for the perspective Sebackman. Very good.

    I am using my Sonos Connect which, as I understand it, runs at 30 dB native, which seems to give me enough headroom that I don't hear any audible artifacts at lower volumes when operating the Sonos software volume control.

    I go SPDIF out from the Sonos into the AES/EBU input of a dbx DriveRack VENU360.

    Best Regards and thanks again,

    Warren
    DIY Array, 2242 sub, 4408, 4208, Control 8SR, E120 Guitar cab, Control 1, LSR305.

  13. #13 Senior Member (UK, 230 posts)
    Quote Originally Posted by sebackman View Post
    Dearall,

    I know this is an old thread but I thought I would add some comments. Sorry for the rather lengthy post.

    The DAC process in general is less cumbersome than the ADC process. I doubt that an external DAC would improve the SQ enough to make them worthwhile compared to keeping the signal chain in the same unit, at least in current pro or semi-pro units. But I do understand and recognize that many audiophiles think the DAC’s are important and argue they can hear very subtle changes. In my view introducing an ADC in the DSP, processing, using the DAC process in the DSP for analogue out, a second ADC process to finally get to a different set of DAC chips would not make sense. This would also normally introduce the use of three different clock rates, one at the source, at the DSP and one in the external DAC.

    After having used many DSP units the last 15 years I have come to that the HW is less restricting compared to the implementation of the SW. The best DSP engines in my view is where you build your DSP signal path on a computer and the compile a runnable code that you then down load to the DSP unit.

    The second part that is important to me is that the manufacturer updates SW and FW overtime so you can take advantage of new development and improvements in algorithms as they emerge. A good implementation of algorithms do much more to safe guard good SQ than HW.

    A few thoughts:
    - Most pro- or semipro DSP units are as good as any other consumer digital audio equipment. There is in my mind no need for a separate DAC. I believe that differences can potentially be heard but in a DBT it would be very difficult to pick one from the other. I think when testing we often hear shifts in level and perceive them as SQ as the ear is VERY sensitive to level and less so to FS.
    Just for good housekeeping, I do respect that many fellow audiophiles may have experience and views that deviate from my personal view.
    - The ADC’s in many units today are so good today that they do not harm the SQ in any meaningful form compared to other parts of the chain.
    - Always make certain that the ADC process have decent input signal level to work with. If it is too low the quantification noise may be audible. A classic trick is to turn down the gain on power amps and turn up the preamp into the DSP so even the low sounds have reasonable input amplitude. Most DSP's have ample head room internally so this should not be a problem. Clipping normally first occurs in the analogue output card so it can be a good idea to keep an eye on levels there.
    - The advantage of a DSP overrules the reduction in SQ by >10 times in my view. No speakers are perfect and few rooms are. Even JBL has caved in and almost all new speakers (Tour, PA, Studio) are DSP enhanced. The DSP simply gives you access to improvements that are impossible in the passive world.
    - The speakers and the room will always be the weakest link. If you move your speakers 5 inch the sound will change more than any change or upgrade of any HW in your sound chain can do. I think the fact that phase coherent and an excellent balance between direct and indirect sound makes the M2’s being so well received everywhere. It would be interesting to hear a DBT between 4367 and M2 (passive vs active) but I suspect that M2 would come out on top.
    - I think in most setups (as in the mighty JBL M2 Studio Monitor) you should be just fine with a separate DSP with analogue in and out without a separate DAC.


    - If you do decide to run an “all digital show”, I suggest you keep the signal native format all the way if you do have good gear. The disadvantage of sample rate conversion (SRC) will often consume the advantage of going more bits or higher sample rate. Some gear seems to sound better at higher sample rate and that is probably due to construction and filter design. And if it does sound better to your ears, it does.
    - I have both digital and analogue input to my DSP units and the difference is neglectable unless you keep the entire chain without any SRC’s, on the same clock and keep the sample rate native. In such un-broken chain I think I can hear a little more detail but in reality it can be just placebo as I have never done a DBT….. I can’t measure the difference.
    - Relatively few DSP’s do have digital output.


    The following is just a rant about digital formats….

    In my opinion SRC is a bigger problem than many other issues in the digital domain, like DAC updates. Few digital consumer (and some pro gear) products have the possibility to use the same clock and hence you need SRC’s to sync the digitaldata between units if you send digital information. And right there is when it starts getting difficult and this is (in my opinion) why we still see analogue feed into DSP’s and alike. With analogue in we can avoid many difficulties and the units become more universal/versatile with little to no negative SQ impact.

    But if you do want to go “all digital” here is some of my thoughts.

    Sample rate converters (SRC) are there to make certain that the DAC/DSP (or whateverunit with digital in) can receive any sample rate and bit depth even if they internally always run at one clock rate that may deviate from the input signal. They strip out the clock that came in the signal and introduce the local clock. This is more or less impossible to avoid unless the units use the same clock. Pro- gear does have a clock input and in a modern studio all digital gear would normally be run from the same master clock and all sample rate converters (SRC) would be turned off. That means that the entire studio runs 24bit and 96kHz for recorded material. If external material would be introduced it would have to be re-sampled (off line re-sampling) to 24/96. Even with a very good clock (ie. femto clock), there will be “clicks and rattle” when clocks are un-synced. I have a good asynchronous USB sound card that produces more or less any bit rate/ sample rate out by SPDIF or AES. It also has a high precision XO femto clock. If I feed my DSP digital SPDIF input using this very accurate clock I still can hear artefacts from the clock not being in synch when the SRC is turned off in the DSP.

    This also becomes an issue as most commercially available music is encoded with 16bit/44,1kHz (Red Book) while most computer gear is run on multiples of 8, 48or 96Khz. We can keep 44,1kHz/16bit and be done with it or alternatively SRC to a multiple of 8. Some gear and SW can keep the original encoding and that is in my experience the best solution with really good gear (more of that later).

    If not, we have to sample rate convert from 44,1kHz to 48Khz and that is difficult as the only way to do this correct is to up-sample to the product of them and then down-sample to the other sample rate. This means up-sample to Mhz-range and then down to 48Khz. Very few units are capable of that on the fly so instead we use algorithms to fill the missing samples between the available 44,1k samples to the needed 48k samples. Going between the "computer formats" is less sensitive so going from 48kHz to 96kHz and back may be done without any degradation, if done correctly. The reason is that you just add and remove identical sampels to get to where you what to be , nothing have to be recalculated. I have Heard that some may even introduce a new "average sample" inbetween two samples but this is beyond my limited knowledge.

    Implemented well, this should not affect sound quality, but sometimes it does, producing digital artefacts. In a studio, such SRC is typically done off-line, where a program recalculates the sample rate, which takes some time. The same applies when a studio records at 24 bit/96 kHz (which is rather common): when converting to Red Book 16 bit/44.1 kHz for a CD, the SRC is done off-line at or after mastering to minimise digital artefacts.

    So why is everyone raving about high sample rates and their supposed superiority? First of all, different implementations of electronics and algorithms may very well sound different. Is that due to superior resolution in the material, 24 bit being better than 16 bit, or 192 kHz better than 44.1 kHz? I say no. You can't hear something that was not there to start with (if the original was Red Book CD, 16 bit/44.1 kHz).

    But you can design equipment that deals better with (and sounds better to the ear at) high bit depths and/or high sample rates. In fact, it is often simpler and cheaper to construct the needed brick-wall filters at 96 kHz/192 kHz (or higher) than at a measly 44.1 kHz. But it does not add information.
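    The filter economics are easy to put in numbers (a quick sketch; the 20 kHz passband edge is an assumption for illustration, not a universal spec): the reconstruction filter must get from the top of the audio passband down to full attenuation by the Nyquist frequency, fs/2, and that transition band is far wider at high sample rates.

```python
def transition_band(fs_hz: float, passband_hz: float = 20_000.0) -> float:
    """Width of the reconstruction filter's transition band: from the
    top of the audio passband up to the Nyquist frequency fs/2."""
    nyquist = fs_hz / 2.0
    return nyquist - passband_hz

print(transition_band(44_100))  # 2050.0 Hz -> a very steep "brick wall"
print(transition_band(96_000))  # 28000.0 Hz -> over 13x more room to roll off
```

    A gentle filter with a 28 kHz transition band is much easier to build cleanly than one that must fall off a cliff within about 2 kHz, which is the point about 44.1 kHz being the hard case.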

    Hence a unit running at 192 kHz after resampling can be perceived to sound better. But compared to a "perfect" non-resampling unit for 44.1 kHz (with the filters implemented as well as possible today, not as cheaply as possible :-) ), there should be no sonic advantage whatsoever if the source is, or at any point has been, Red Book 16 bit/44.1 kHz. If the material is native 24 bit/96 kHz, it should sound best at that resolution and rate, but in reality it does not contain very much more information about the music. In my view the deterioration starts when SRC is introduced.

    This is a huge thread that may be worth reading for some; you really only have to read the first few pages if interested. The second article is also interesting.
    http://www.head-fi.org/t/415361/24bit-vs-16bit-the-myth-exploded
    https://people.xiph.org/~xiphmont/demo/neil-young.html

    Of course there can be some additional dynamic information in the extra 8 bits (16 to 24), but this is mainly a way for studios to have "more than enough" headroom at all times. In real life the extra 24-bit resolution is not audible, and indeed not achievable with today's technology, as the circuit noise floor will mask anything beyond about 20 bits.

    Many argue that 96/192 kHz can technically produce a cleaner waveform, but below the Nyquist frequency this is not true: the waveform is already perfect at a 44.1 kHz sample rate, which is what digital audio was based upon from the outset. It can of course be technically improved with new and better chips, but with no or very limited sonic value to the human ear, albeit maybe with a large value to the human ego, not to be neglected :-). There are also few instruments with information above 20 kHz, although some think otherwise, and I respect that even if I disagree. I can't hear 20 kHz anymore, but I can hear a difference depending on whether I include my 045-1 tweeters above 15 kHz or not. I cannot perceive any difference above 20 kHz. I'm not saying that others cannot.
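    The "about 20 bits" ceiling follows from the textbook quantization-noise rule of thumb, SNR ≈ 6.02·N + 1.76 dB for an ideal N-bit converter driven by a full-scale sine. A quick sketch (the formula is standard; the mapping to real circuits is of course approximate):

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale
    sine wave: the textbook 6.02*N + 1.76 dB rule of thumb."""
    return 6.02 * bits + 1.76

for n in (16, 20, 24):
    print(n, round(dynamic_range_db(n), 1))
# 16 -> 98.1 dB; 20 -> 122.2 dB (roughly where analog noise floors sit);
# 24 -> 146.2 dB (beyond what real-world circuits can deliver)
```

    So the last bits of a 24-bit word buy theoretical range that the analog electronics around the converter cannot actually reach.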


    So why do we do it, this hunt for higher numbers? I think the simple answer is because we can, and because we always want to improve and have the latest gear. The industry understood this years ago, and who can blame them.

    A few words about volume control in the digital domain, if you use the digital input.

    There may be a resolution problem if the unit is stuck at 16 or 24 bits internally, because the only way to drop the level is to drop bits, and then you lose resolution by definition (ref. Ivica's post above). Most pro gear uses DSP processors that can run 41 bits internally, and if the digital-domain volume control is implemented so that it uses the bits above 16 (24) to control level, this can be done almost without any deterioration of sound quality. A way to check: turn the output very low (in the DSP) on one channel, keep the other at a normal or high level, and reduce the gain on the power amplifier so both play at the same sound level. You will need a decent sound pressure level meter, or to measure the voltage out of the power amp, to be certain both channels are at exactly the same level; even the slightest difference will make the test useless. Then listen to both channels with piano or voice and see if you can hear any deterioration on the DSP-attenuated channel.
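    The headroom argument can be put in numbers (a rough rule of thumb for illustration, not a description of any particular DSP): every ~6 dB of digital attenuation shifts the signal down by about one bit, so a wide internal word can absorb deep attenuation without touching the source bits.

```python
def bits_lost(attenuation_db: float) -> float:
    """Digital attenuation by A dB shifts the signal down by roughly
    A / 6.02 bits; without extra internal word length, those bits
    fall off the bottom of the word and are gone."""
    return attenuation_db / 6.02

def bits_left(word_bits: int, attenuation_db: float) -> float:
    """Effective bits remaining after attenuating inside a word of
    the given width."""
    return word_bits - bits_lost(attenuation_db)

# 30 dB of digital attenuation costs about 5 bits:
print(round(bits_lost(30), 1))      # 5.0
# A bare 16-bit path is left with ~11 effective bits,
# while a DSP with a wide internal word (40+ bits) still has
# far more than 16 bits below the attenuated signal:
print(round(bits_left(16, 30), 1))  # 11.0
print(round(bits_left(40, 30), 1))  # 35.0
```

    This is why a volume control working in the bits above 16 (24) can attenuate heavily with essentially no audible penalty, while a fixed 16-bit pipeline cannot.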

    Alternatively, you can put a passive potentiometer/attenuator or a VCA unit on the analogue output of the DSP. This becomes more complicated in balanced multi-way systems, as you need to control many channels at the same time with minimal attenuation mismatch between the balanced legs of each channel, but there are solutions for that as well.

    That is what I have done in my main system: I have two 8-channel balanced Burr-Brown VCA volume controls after my DSP, which has 16 channels out (5.2 / 3-way + 2 subs). This is really a leftover from an older DSP whose volume control was not very well implemented; with my current DSP unit I would probably not include them today.


    People seem less worried about the digital artefacts that result from sample rate converters in the chain and more intrigued by DAC quality. We happily let our computer or CD player up-sample to 96 kHz or 192 kHz without giving a thought to whether the Windows or iOS software SRC can really keep up with the multi-10k USD electronics and speakers later in the chain. I say go native :-).

    In my view the DAC process itself is pretty straightforward. Most DAC chips today are really good, and implementations do not seem to vary that much except for the really esoteric ones. There are no doubt audible differences between DACs, but not necessarily from the raw music data or the pure DAC process. Please remember that most freestanding DACs start with a sample rate converter to re-clock the input before any DAC process can begin.

    Much on little

    Kind regards
    //RoB
    Firstly, I have to say my knowledge of the digital domain is limited; all I can do is read the experts' advice and try to extrapolate from there.
    My own experience with DACs is that they are the main thing governing sound quality, other than the speakers themselves.
    I run a 4-way system, computer-sourced via USB to the DAC; not the very top model, but a Lampizator Level 4 DAC. From there I feed a Mod Squad passive preamp into a Marchand XM44 active crossover.
    All digital levels are set to full.
    The speakers are physically time-aligned.
    Because of this I have avoided digital signal processors.
    Trouble is, I would like to try DSP, but I would require 3 digital outputs and 3 DACs. Any other solution?
    Regards
    Dave

  14. #14
    Senior Member Ducatista47's Avatar
    Join Date
    Jul 2005
    Location
    Peoria, Illinois
    Posts
    1,790
    Quote Originally Posted by sebackman View Post

    Of course there can be some additional dynamic information in the extra 8 bits (16 to 24), but this is mainly a way for studios to have "more than enough" headroom at all times. In real life the extra 24-bit resolution is not audible, and indeed not achievable with today's technology, as the circuit noise floor will mask anything beyond about 20 bits.

    Many argue that 96/192 kHz can technically produce a cleaner waveform, but below the Nyquist frequency this is not true: the waveform is already perfect at a 44.1 kHz sample rate, which is what digital audio was based upon from the outset. It can of course be technically improved with new and better chips, but with no or very limited sonic value to the human ear, albeit maybe with a large value to the human ego, not to be neglected :-). There are also few instruments with information above 20 kHz, although some think otherwise, and I respect that even if I disagree. I can't hear 20 kHz anymore, but I can hear a difference depending on whether I include my 045-1 tweeters above 15 kHz or not. I cannot perceive any difference above 20 kHz. I'm not saying that others cannot.

    So why do we do it, this hunt for higher numbers? I think the simple answer is because we can, and because we always want to improve and have the latest gear. The industry understood this years ago, and who can blame them.
    A voice of sanity. I involuntarily laugh or facepalm, depending on my mood, whenever someone tries to improve on the theoretical work of Harry Nyquist. Those who consider themselves Golden Ears get pretty defensive when faced with engineering facts. At that point what follows reminds me of fairy tales.

    As for digital volume controls in DACs throwing signal away, the one I use will if I attenuate by more than (I think it is) 55 dB. Who turns their system down 55 dB?
    Information is not Knowledge; Knowledge is not Wisdom
    Too many audiophiles listen with their eyes instead of their ears


  15. #15
    Member sebackman's Avatar
    Join Date
    Feb 2004
    Location
    Europe
    Posts
    493
    Dear David,

    It looks to be a fine piece of equipment, and if it sounds good to you, it does.
    Using different gear may produce a different sound, which by itself does not imply that your gear or the alternative is better or worse. All gear alters the signal in one way or another; the trick is to find an "alteration" that suits your ears. And that may or may not be the combination of gear with the lowest overall sound alteration/coloration :-). The important thing is that you like the sound your combination of gear produces.

    If you are open to alternative information: first of all, I would check that your USB sound card (DAC) is truly asynchronous; it is not possible to tell from the web page. Info on the implications can be found here.
    http://www.audioresearch.com/ContentsFiles/DAC8_white_paper.pdf
    http://www.hifi-advice.com/USB-synchronous-asynchronous-info.html

    In my experience, when using a reasonable computer (not an esoteric high-tech dedicated one), the asynchronous protocol does make a difference, more than any of my DACs. I personally cannot attribute noticeable sound deterioration to any of the DACs I have if the rest of the chain is correct, not by listening or by measuring. I happen to have digital out from some of my DSPs, both SPDIF, DSD and BLU-LINK, and I cannot hear a difference by just changing the DAC chip (digital out into a separate DAC). Sorry. However, I'm not saying that others cannot.

    My recipe is, as Einstein said, to "keep it as simple as possible, but not simpler" :-).

    Get a good asynchronous sound card with an XO clock and a good digital out. Set the computer to output the native format. Feed your DSP the digital signal (or analogue) and use the analogue outs on the DSP. If your DSP has a clock input or output, connect the sound card and DSP so they share the clock, and turn off the SRC.

    My favorite right now, since I use only BSS DSPs, is a nice new little device from BSS called BLU-USB. It is an asynchronous sound card that feeds the BSS DSP with a proprietary digital signal (BLU-LINK), and the neat thing is that there are no SRCs anywhere: the BLU-USB uses the clock in the DSP, so they are always in sync. The signal path is short and sweet. But that is for a different post.

    Kind regards
    //Rob
    The solution to the problem changes the problem.
    -And always remember that all of your equipment was made by the lowest bidder
