I don't understand this type of audiophile. I also have a fairly high-end vintage audio system at home, but I'd never do anything like that in the name of audio quality. There's a quote I love about these people:
A normal person uses the audio system to listen to music, while an audiophile uses music to listen to his audio system.
You don't understand the combined power of naivety, the placebo effect, dreams, and stupidity (multiply them all together). This is not in any way intended to insult these people; I know a few (not as rich) and they simply do not want to understand the basic principles behind digital signals, error correction, etc. I know people who work in IT and still pretend that audio transmission over USB can add jitter to music and that a DAC can "open" or "close" "the stage" by low quality processing of the signal. There is no way to convince these people they are wrong, so I stopped trying many years ago.
> I know people who work in IT and still pretend that audio transmission over USB can add jitter to music and that a DAC can "open" or "close" "the stage" by low quality processing of the signal.
Audio transmission over USB with cheap converters and poorly chosen levels can result in a loss of sound quality. I'm pretty sure you can tell the difference between 8-bit, 12-bit, and 16-bit sound. Choosing levels poorly reduces your effective resolution.
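For anyone who wants numbers rather than vibes, here's a minimal numpy sketch of both effects: the textbook ~6 dB-per-bit SNR of a quantizer, and how recording well below full scale throws bits away. (Illustrative math only, not a listening test.)

    import numpy as np

    fs = 48000
    t = np.arange(fs) / fs
    signal = np.sin(2 * np.pi * 997 * t)  # full-scale 997 Hz test tone

    def quantize(x, bits):
        # Round to the nearest of 2**bits uniformly spaced levels.
        levels = 2.0 ** (bits - 1)
        return np.round(x * levels) / levels

    def snr_db(clean, quantized):
        noise = quantized - clean
        return 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

    for bits in (8, 12, 16):
        print(f"{bits:2d} bit, full scale: {snr_db(signal, quantize(signal, bits)):5.1f} dB SNR")

    # "Choosing levels poorly": the same tone recorded 36 dB below full
    # scale loses ~6 bits of effective resolution (about 6 dB per bit).
    quiet = signal * 10 ** (-36 / 20)
    print(f"16 bit, -36 dBFS  : {snr_db(quiet, quantize(quiet, 16)):5.1f} dB SNR")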
Also, there are cheap analog to digital converters which use voltage comparison, which basically linearly "search" for the voltage they should be encoding. Those fundamentally do introduce jitter: large changes in signal voltage get encoded a bit later than small ones. I'm sure there are ways jitter can be introduced the other way around by cheap DACs. However, if you add more bits and increase the sampling speed, those effects should largely go away.
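To make that "linear search" timing skew concrete, here's a toy model of a single-slope (ramp) converter, where conversion time is proportional to the input voltage. The clock rate and bit depth are made-up round numbers:

    # Single-slope ADC: a counter runs until the ramp crosses the input,
    # so bigger voltages take longer to convert.
    clock_hz = 10_000_000       # counter clock (assumed)
    full_scale_counts = 4096    # 12-bit ramp

    def conversion_time_s(v, v_ref=1.0):
        counts = int(v / v_ref * full_scale_counts)
        return counts / clock_hz

    for v in (0.1, 0.5, 0.9):
        print(f"{v:.1f} V converts in {conversion_time_s(v) * 1e6:6.1f} us")

    # Up to ~410 us of signal-dependent skew per sample here. A
    # sample-and-hold in front (as a reply below notes) freezes the
    # voltage at a fixed instant and removes the skew entirely.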
Devices like DACs also include a small amplifier, and if that isn't properly designed, some distortion can be introduced there as well.
That said, there are times when people think they hear a difference, and it's woo. There are other times when the difference is real, however. Audio is a real field with some real science and know-how behind it. Please don't grant yourself instant superior expertise just because you've read some stuff online.
(On more than one occasion, I've gotten the "all audio is woo" knee-jerk from programmers who were just reacting to hearing the words "frequency response" not knowing what it was, but labeling it as woo. Ironically, they did remember stuff about Fourier transforms. Go figure.)
> Also, there are cheap analog to digital converters which use voltage comparison, which basically linearly "search" for the voltage they should be encoding.
Integrating ADCs are not necessarily cheap. And you always put a sample-and-hold circuit in front of an integrating ADC, which completely eliminates the sampling-time variance.
Also, many people are unaware that the Nyquist theorem states that the _average_ sampling frequency must be at least twice the highest signal frequency. The sampling points may indeed jitter around a little, as long as the jitter is distributed evenly. As a matter of fact, adding a little sample jitter can be used as a poor man's dithering, to shift the LSB noise spectrum above the reconstruction filter's cutoff frequency.
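Here's a quick numpy sketch of the basic dithering effect, using amplitude dither as a stand-in for the sample-jitter variant described above: undithered quantization of a periodic tone produces correlated harmonic spurs, while dither trades them for a benign noise floor.

    import numpy as np

    fs, bits = 48000, 8
    t = np.arange(fs) / fs
    x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone into an 8-bit quantizer
    step = 2.0 ** -(bits - 1)

    plain = np.round(x / step) * step
    # TPDF dither: sum of two uniform +/-0.5 LSB sources, added before rounding.
    rng = np.random.default_rng(0)
    dither = (rng.random(fs) - rng.random(fs)) * step
    dithered = np.round((x + dither) / step) * step

    for name, y in (("plain", plain), ("dithered", dithered)):
        spectrum = np.abs(np.fft.rfft(y * np.hanning(fs)))
        spectrum[:5] = 0         # ignore DC (bin = Hz here: the FFT is 1 s long)
        spectrum[980:1021] = 0   # blank the 1 kHz tone and its window leakage
        # Largest remaining line: a harmonic spur if undithered,
        # just the raised noise floor if dithered.
        print(f"{name:9s} worst remaining spectral line: {spectrum.max():10.1f} (linear)")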
Jitter is mostly a problem with sample clock distribution. With too much jitter, the sample clock no longer arrives with the right delay with respect to the data, which may create large transients, e.g. whenever the numeric value rolls over some power of 2 (a lot of bits change at once); this problem could be avoided using Gray codes, though.
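The Gray-code trick is easy to see in a few lines: consecutive codes differ in exactly one bit, so a marginal clock edge can never produce a huge transient at a power-of-2 rollover.

    def to_gray(n: int) -> int:
        # Standard binary-reflected Gray code.
        return n ^ (n >> 1)

    for n in (126, 127, 128, 129):
        print(f"{n:3d}  binary {n:08b}  gray {to_gray(n):08b}")

    # At the 127 -> 128 rollover, plain binary flips all 8 bits at once;
    # Gray code flips exactly one.
    print("binary flips", bin(127 ^ 128).count("1"), "bits,",
          "gray flips", bin(to_gray(127) ^ to_gray(128)).count("1"))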
Regarding your usage of 'cheap', I would read this as the difference between a 50c and a $5 component. I can understand spending more to get a reputable brand compared to buying the cheapest no-name item from AliExpress.
However, audiophiles are on a whole different level. People spend tens of thousands of dollars on items that have questionable difference in quality. I guess people do the same with cars though, a Toyota Corolla will get you to work just as well as a Rolls Royce Phantom.
Wine might be a better comparison (at least a Rolls and a Toyota look different). Many could tell the difference pretty well between a $5 and $20 bottle (hint: the $20 one is the one not loaded with sugar), but not a whole lot of people can tell the difference between a $20 bottle and a $200 bottle. The law of diminishing returns kicks in pretty quick with both wine and audio. I think with audio, once you’re in over a few hundred bucks for your setup, that wall comes up pretty quickly.
> difference pretty well between a $5 and $20 bottle (hint: the $20 one is the one not loaded with sugar)
Wine loaded with sugar??? AFAIK, in a dry wine all the sugar is converted to alcohol, even if it's been added before the fermentation, which is still pretty rare, at least in major wine-producing countries. That is definitely not the difference between a $5 and $20 bottle of wine (in Italy you can still get excellent wine for $5 a bottle if you buy it from the producer, btw).
This is incorrect, there is always "Residual Sugar" [0] in wine, even when it's dry. Of course, GP is also not correct as there are plenty of wines with high RS that are very expensive while I can get a 7g/L RS wine for around 4€ here.
It also depends on the wine... The difference between a typical $10 bottle of pinot noir and a better $30 bottle is like night and day... But most varieties aren't as delicate and hard to make, so the differences get way more subtle.
If it's added before fermentation, it's still not in the final product, as it's been converted to alcohol.
But you're right, champagne, even the best brands, has some sugar added after the fermentation, about 12g/l. It's part of the method. It's still a rather tiny amount if you consider that an orange juice without added sugar has about 90g/l (and you do drink it by the litre, as opposed to champagne...)
Yeah, and sometimes one car is better than another car, even if they have the same engine ($5 Wolfson DAC), because more love is given to everything around the engine.
If you just want to put together a cheap car with a good engine, that's a different thing from wanting to build a comfortable or speedy car with a good engine. Someone who isn't in the market for that extra level will just be a sceptic and say "this Audi is just a Volkswagen with a prettier body", but if you sit in one you'll feel the difference. (Not sure if Audis have VW engines.)
"More love" in this case doesn't have to mean more than maybe a retail price of $50 or so, to get a DAC with a frequency response so flat and a noise level so low that any difference to more expensive items is purely academic.
Digital to analog conversion is a solved problem, with inexpensive commodity parts. Even if you buy an expensive DAC, the studio gear used to produce the music you listen to has gone through dozens of A->D->A conversions, all done using standard commodity parts. No one builds fancy audiophile-approved DACs and ADCs into mixer or studio gear.
In my home setup, I'm using a DBX DriveRack PX as a DSP EQ and crossover between my main monitors and subs. It's almost a decade old, but its signal/noise ratio is on par with the newest gear you can buy. Any competently built pro gear from the last ~10-15 years with 24-bit conversion can be plugged right into any analog signal chain and be 100% audibly transparent.
People worry way too much about the electronics, and way too little about their speakers and rooms.
You think they're overpriced, but I bet you no one is getting (very) rich off these $400 DACs. What's more, I think many of them profit so little that every once in a while a company pivots or goes out of business.
An Audi also has just a couple thousand dollars' worth of extra love in physical cost added to it, but the retail price is easily upwards of $20,000 above the equivalent VW. That's because if you sell fewer units, the R&D and marketing costs per unit are much higher, as is the required profit per car sold.
If you get a good deal on your hardware, you might have that good DAC in a nice box with a pretty manual for $50 in physical costs, then put it on a prime shelf in a nice hi-fi store for $130 (roughly 30% of $400), spend maybe $50 to target and prime your customer, and then use $150 to pay your audio engineer and your CEO, maybe outsource the rest of the company infrastructure, and then pay 8% interest to your lenders. That's $1.7M profit at 10,000 units sold (assuming you do all of this within a year).
I have no idea if 10,000 units is close to reasonable for an indie hardware startup, but the calculations become a little more complex if we have to take a multi-year ramp-up into account.
Edit: sorry, I got caught up a little in the startup aspect of it. Your system sounds great, but your DAC (and DSP + crossover system) also cost $400, so you're not really making a point?
No one is getting rich off DACs, except for the makers of outrageously overpriced 4- or even 5-figure retail price units. After all, DACs are commodity hardware; boutique devices primarily serve as audio jewelry, with no technical benefit.
The DAC I use cost less than $30, yet performs well beyond the limits of human hearing. The DriveRack did have an MSRP around $300 when it was new (usually $200 on sale), but it does a whole lot more than a DAC, and the base price level for pro gear is somewhat higher than for consumer gear.
As I said, people focus way too much on electronics, when even basic devices are more than good enough. I don't even have to use the DAC anymore, after I switched to DisplayPort for my monitor, which has a perfectly good analog stereo output. The only reason I keep the Toslink DAC hooked up is pure laziness.
It's almost as if there's a middle ground to be had somewhere.
There are definitely things which can perceptually detract from audio which people just don't think about.
I'm about to buy a couple power conditioners for my electronics because the wiring in my new apartment is garbage and every little spike in voltage comes through as a loud pop in my speakers and headphones.
But more subtly, my audio is constantly being degraded because I have a few cheap dimmers that cause a constant interference pattern on my electrical circuit; they weren't a problem in my last apartments. I can even hear fluctuations caused by my refrigerator if I listen closely.
And all of these things which can subtly degrade audio are 100x more of an issue when you're recording audio. Run any given song through a spectrogram analyzer and you'll usually see a horizontal line or two which could be anything from a little cable noise to a CRT screen.
Not sure why you got downvoted; I've seen crappy electricity in my own home office after I got a UPS for my computer system. It would have its fans kick in regularly to modulate the voltage, which came in at a crazy-high ~255 V (Australia).
I got the APC Line-R 1200VA Line Conditioner with AVR installed in front of the UPS and couldn't be happier now - it steps down the voltage and gives the UPS peace of mind.
Try the APC, not the most expensive thing but it's worked well for me.
Installing a UPS behind the conditioner sounds like a good idea. I have been meaning to get one of those... Thanks for the rec on the APC, I bookmarked it for comparison later. What UPS are you using?
The voltage in that part of the house appears to be better, but the circuit is easily tripped when running the toaster + microwave or similar, so having these UPSes has taken a lot of frustration away.
Actually just checked my invoice and I got them both in 2012, which is a little while ago now, and they're still good with their original batteries.
Wow, that is pretty bad wiring. I can hear an audible hum from my subwoofer when my wife runs the vacuum or blow dryer, but that’s about it. And those are about the highest-pulling appliances you could have in a household, and the sub is pretty big, with a pretty powerful amp.
Good DACs and ADCs are useful for recording studios who have a lot of analog outboard gear (where the signal leaves and enters their digital setup multiple times). But for normal listening where there are only one or a few conversions, the point of diminishing returns is pretty low.
Most of the perceived gains in quality of higher-end equipment (DACs, amps, etc.) come from the higher voltage of the outputs (where you connect your speakers and headphones); this translates into increased loudness/volume level.
So if you want to do blind tests between different kinds of equipment, you have to measure that voltage and make sure it's equal across the different pieces of equipment (you can adjust in real time with the volume knobs); otherwise the louder one will always win and be perceived as higher quality.
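A sketch of what that level matching looks like in practice, with made-up captures standing in for two real devices:

    import numpy as np

    def rms(x):
        return np.sqrt(np.mean(np.square(x)))

    def match_gain_db(reference, candidate):
        # Gain in dB to apply to `candidate` so its RMS matches `reference`.
        return 20 * np.log10(rms(reference) / rms(candidate))

    # Hypothetical loopback captures of the same tone from two devices;
    # device_b is set up here to run 1.5 dB hotter.
    t = np.linspace(0, 1, 48000)
    device_a = 0.30 * np.sin(2 * np.pi * 440 * t)
    device_b = 0.30 * 10 ** (1.5 / 20) * np.sin(2 * np.pi * 440 * t)

    print(f"apply {match_gain_db(device_a, device_b):+.2f} dB to device B before the test")
    # Level mismatches well under 1 dB are commonly reported as "better",
    # not "louder", which is exactly how unmatched tests go wrong.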
I have a decent hi-fi audio system at home, and there is a clearly audible difference between playing from my high-end CD player, and playing FLAC from the audio out of my Macbook.
The difference is not one of loudness but of 'energy'. A snare hit played from the high-end cd player sounds like someone actually hitting a snare; you can feel there's actual energy in the hit, while the macbook out sounds flat, even at higher loudness.
I'm not sure what the exact cause is, but I've always guessed that it comes from the quality of the built-in amps in the DACs.
That sounds like amp presets that drop bass, which would make perfect sense on an amp expected to be used with laptop speakers.
I think the general problem is that mid-range speakers and reasonable settings are all it really takes to have quality sound, but people want to be above average and therefore look for ways to discriminate between better and worse sound differences they can't "yet" hear. From there you get into placebo and random correlations.
It could be that the output of your Macbook isn't really flat, or that you have some kind of software processing enabled. You could test a $20 class-compliant Behringer UCA202 USB interface against it, which is flat, and try to disable processing or use software that just passes the signal to the interface.
It can also be your room. If you're not in exactly the same listening position, the listening experience can change a lot, even with the same setup.
Your room is basically an equalizer and should be regarded as a part of your audio system. That's how some Hifi sellers can trick you into more expensive setups, they just change the speaker positions or your location to optimized defaults. It's crazy how they can trick people.
A harder hitting snare can be caused by changes to the upper mids and lower highs, by anything that is located between you and the speakers, and then the later reflections of the room which also add up.
> The difference is not one of loudness but of 'energy'
Aren't those the same? Put more joules into the speaker wire and more joules of sound pressure waves will come out of the speaker cone, which just means the sound will be louder (until you reach the mechanical limits of the speaker, of course, but no DAC in the world could possibly change those).
Another possibility is that your CD player might not be flat. Marantz, Yamaha and other manufacturers add their slight sound signatures to their higher-end models.
I never benchmarked my Yamaha CD-S300 against other devices, but its flyers and website never say it's completely flat on the analog outputs. Reviews of the player note that it has the distinct Yamaha signature too.
Also "Monitor" and "Flat" devices as their name implies sounds pretty flat and dull.
All in all, you should consider taking frequency response graphs of your devices with RightMark Audio Analyzer.
> a DAC can "open" or "close" "the stage" by low quality processing of the signal
In principle a low enough quality DAC certainly could affect "the stage", but in reality even cheap ones are likely good enough.
> There is no way to convince these people they are wrong
I'm pretty sure many of them don't want to be convinced since it would make their hobby pretty boring. I also wouldn't be surprised if industry players actively spread myths like these to sell more (expensive) products.
No, cheap DACs are not “good enough” unless you’re completely indifferent to audio quality.
There’s certainly woo and outright fraud in audio. But there’s also attention to detail.
Professional studio engineers will spend $3000 on a high-precision digital clock for their DACs because it makes a clearly audible difference.
Software people don’t understand that human hearing, and human sensitivity to timbre and distortion, is spectacularly non-linear.
Hearing isn’t a linear counting device with x volume levels. It’s an incredibly sophisticated neural network that performs real time classification and source separation based on spatial and ambient cues, spectral analysis, predictive modelling, and entropy estimation.
Some people’s brains and ears are better at all of this than others. Those people can hear things other people can’t.
In music, someone with a trained ear can name the notes in a chord. Someone with exceptional skill can name the notes in a random cluster of pitches on the lowest octave of a piano.
Audio quality has analogous levels of sensitivity. People who have neither shouldn’t be telling people who do what they can and can’t perceive.
> Professional studio engineers will spend $3000 on a high-precision digital clock for their DACs because it makes a clearly audible difference.
I don't know what kind of frequency stability you really need (please enlighten me), but I checked one of the component retailers' websites and found that even the most stable MEMS oscillators are only a few bucks apiece; any DAC or ADC manufacturer that claims a $3000 markup is worth it is running a scam.
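For scale: the standard formula for the SNR ceiling imposed by sampling-clock jitter on a full-scale sinusoid is SNR = -20*log10(2*pi*f*t_jitter). Plugging in the worst audible case:

    import math

    def jitter_snr_db(freq_hz, jitter_s):
        # Jitter-limited SNR for a full-scale sine at freq_hz.
        return -20 * math.log10(2 * math.pi * freq_hz * jitter_s)

    f = 20_000  # highest audible frequency, worst case
    for jitter in (1e-9, 100e-12, 10e-12):
        print(f"{jitter * 1e12:6.0f} ps jitter -> {jitter_snr_db(f, jitter):5.1f} dB SNR ceiling")

    # Prints ~78 dB at 1 ns, ~98 dB at 100 ps, ~118 dB at 10 ps. A cheap
    # crystal or MEMS oscillator sits comfortably in the picosecond range,
    # already past what a 16-bit (~98 dB) chain can resolve.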
> Software people don’t understand that human hearing, and human sensitivity to timbre and distortion, is spectacularly non-linear.
> Hearing isn’t a linear counting device with x volume levels.
As a "software person" I take exception to this. Who exactly of my colleagues claimed that it was? Not even the cheapest, most terribly sounding, incompetently designed child's toy's "what sound do cows make" 8-bit 8kHz DAC produces output with x discrete volume levels, it produces a continuous signal. A bad sounding signal perhaps, but a continuous one nonetheless. Plenty of software people know this, but do audiophiles?
The main kind of clock you can spend $3000 on is an atomic clock (usually a rubidium frequency reference). There's absolutely no point in this for an audio system, but it's a quite trendy money sink (especially because atomic clocks are, if anything, worse than a good crystal over short time periods and only dominant over very long time periods; if you need both, you can combine them).
"Some people’s brains and ears are better at all of this than others. Those people can hear things other people can’t."
Well, a lot of this audiophile stuff is voodoo, but if it makes them happy, who cares. But I am skeptical about your statement. The easiest way to find out is a blind or double-blind study. A German computer magazine did this once with CD vs. MP3:
The guy who was best at spotting the difference and assigning it correctly had ear damage. Since MP3 was developed for "healthy" ears, this is not too surprising.
"People who have neither shouldn’t be telling people who do what they can and can’t perceive. "
Let's agree on the scientific method. Anything that shows a significant difference in a double-blind study, even for a specific individual, I accept.
By the way, I hear my music on my computer via mp3 and send it into my super old tube amplifier. :-)
Or even if it makes no physical sense, it still seems worth doing a test. It is depressing to be told people shouldn't do science, especially for the benefit of consumers.
What kind of arguments have you been using? Are you talking specifically about the DAC chips or the black boxes containing them? I can pretty much agree with you regarding the digital side of things, but could the actual appliances have notable differences in their analog side or configuration? Maybe the actual value of a $400 DAC appliance (built around a $2 Wolfson DAC chip) is working around problems in a computer's noisy or badly pre-amplified integrated audio output.
I don't mean to question your expertise on the subject; I'm just after an explanation of the exact reasons why setups with separate DAC and amp boxes have made me find new musical instruments in my audio files. And of course it would be nice to know if there is an inexpensive way of getting the analog signal to a good amplifier without somehow ruining it.
I fully agree that an external USB DAC can have much better output quality than a motherboard output, which can be influenced by interference; I have an external DAC myself, but I bought it for about $50 (list price was $200 in a hi-fi store; I ordered it from a small reseller).
I'm not talking about the $400 box with $5 parts in it, but about common sense (plus some physics and math) in understanding what a DAC does and does not do.
Not being an audiophile myself (well, not that kind of audiophile, but with some passion for the music), I cannot provide good, trustworthy advice to others. But a digital out (very common) into an amplifier with a digital in (very common) is a very simple and cheap way to do it. I am not aware of a cheap way to get a very good quality analog signal out of a PC, for example, as add-on sound cards are very rare these days and very good audio on a motherboard is fairly limited, but the digital path is wide open. Coax or optical out is the cheapest, a DAC is the next step up, and the difference between a $50 DAC and a $500 DAC is negligible.
I do agree. I think it has to do with the fact that most motherboards have pretty terrible analog audio support. Every motherboard I've ever had has sounded meh compared to a cheap USB DAC.
It would be super interesting to compare an iPhone analog vs iPhone with a DAC.
I guess you'd have to use an old iPhone with an audio jack though :(
Even the motherboards have gotten better over time. I recall being over at a friend's house in the early 2000's and he was saying, "my speakers are so bad, I guess I need new ones" and I said, "wait, try plugging those into a different source", which we had in the form of another friend's laptop. The audio immediately went from tin can quality to reasonably transparent. The issue was with the cheap DAC in his computer.
But I usually go with onboard sound these days. I did get a FiiO E10K when I got new headphones recently, but I've put it away even though the sound is noticeably smoother and less compressed-sounding than my onboard sound - it's another cable and associated hassle.
The reason to go for pricier stuff in my mind is to get more I/O. Good audio interfaces for recording can run you hundreds, and you're paying there mostly for having more channels and some recording features. There's more of an expectation of getting what you paid for.
Spend a lot of money on object —> Expect difference that doesn't exist —> Listen for this difference —> Careful listening reveals previously unnoticed nuance —> Revelation is attributed to new object.
To an extent, it's often part of the ~value of new devices too. Your new computer seems so much better, when most people have zero idea whether it is, or in what way.
I don't disagree with the point you're trying to make, but I'm not sure if USB is the best example to make that point.
Case in point: when I used a USB DAC with my NUC, there was an intermittent electrical noise that was actually quite audible and distracting (even when nothing was playing). I ended up switching to another cheap DAC that accepts optical input, and the noise completely disappeared. I'm not entirely sure what caused the electrical noise to begin with, but that it existed means there's some potential for USB to mess with the audio signal.
Can confirm. Using my SPL Crimson audio interface with a cheap USB hub leads to clearly noticeable noise and distortion (particularly when silent). Connecting the interface directly to the Mac Mini removes the issue.
Power audiophiles don't do digital. Noise is a very real thing in analog. Go to any rehearsal studio and ask about the power conditioning: it's almost certain they're not using power as it comes from the street.
To me, my iPod Shuffles, launch day iPhone, and modern iPhone all sound very different from one another when I play the same song. (I prefer the Shuffles.)
Is this a real thing, or should I upgrade to a better brand of crack?
It's hard for our brain to comprehend that such a tiny gadget can sound so good, so I suspect that accounts for a lot of the perceived difference. Also, some music can sound better with a poorer output device by e.g. smearing noise in the treble or rolling off excessive bass.
I also have a 6th generation iPod Nano, an iPhone 5 and an iPhone X. I can hear the difference between the Nano and the iPhone 5.
The difference is not much: the iPhone has slightly better highs and slightly tighter lows. The headphones I'm using are Sennheiser MM70i (they use the same buds as the CX400-II, which I also have).
I didn't find the difference through "critical listening". One night I just started listening and it sounded different from my iPhone. Then I played the same song on the iPhone and pinned down the difference.
The difference may be due to different generations of DACs, different op-amps, different power budgets, different decoders, etc. As I said before, the difference is not that much, and it's normal for the evolution of these types of electronics. The Nano honestly sounds impressive for a device that compact.
P.S.: I used to play euphonium in a large band and double bass in a symphony orchestra, so my ears are trained for separation and tracking of instruments and rhythm. This is why I'm a bit more sensitive to these things than most people around me.
I've always wondered if the headphone adapter contains a DAC or if it just acts as a breakout box for a DAC already in the phone. I found some people claiming it does, but none that I can see cite a source for it.
As far as I know, lightning connectors (as opposed to their 30 pin predecessors) carry no analog signals, so the DAC necessarily has to reside in the headphone adapter.
Forgot to mention in my other response: the volume is the biggest influence on perceived sound quality. Equalising that between such devices can be tricky in an impromptu setup.
Then maybe you don't need to equalize it? Run your experiment with both sources at a wide range of volumes. If there's actually a noticeable difference, it should come across in the data.
Power, DACs and headphones are pretty much all that matter when it comes to sound quality, and power matters the most. "You don't understand the combined power of naivety, the placebo effect, dreams, and stupidity (multiply them all together)." It's not that simple.
>You don't understand the combined power of naivety, the placebo effect, dreams, and stupidity (multiply them all together). This is not in any way intended to insult these people; I know a few (not as rich) and they simply do not want to understand the basic principles behind digital signals, error correction, etc.
You also don't understand that these people have found a pastime that makes them happy and engaged. Who cares about the actual difference in sound compared to that?
That's a very apt quote. The way I look at things like this is: Do recording studios go to the same lengths? After all, if the effect is real, and the studio didn't do something to mitigate it, then it's going to be on the track regardless of what you do in playback at home. Much more so, even, because of the gain stages involved when recording a microphone signal.
(Spoiler alert: Recording studios do not install their own power poles. What they do, though, is install isolated ground circuits.)
Most larger studios have a huge isolation transformer for the whole set of studio circuits, and then smaller local filters like Furmans. Balanced wiring, as noted, also occasionally makes an appearance, but is less common. I run a dedicated ground. I personally use 3 transformers and 3 filters (one capacitance, 2 MOT). Not all in series, mind you, but digital and analog gear are on separate circuits, with a third for hybrid designs (yay, 80s synths). Analog recording is sensitive to power in a huge way, but listening is seldom as affected. Note that all studios use balanced audio, while most "audiophile" crap is still unbalanced.
One of the things I truly will never understand about "audiophiles" is their willingness to spend money on products that are, on a technical basis, indisputably categorized as snake oil. For instance:
It's an HDMI cable! The video signal has CRC in it and is packetized, it's either going to make it or it isn't. The quality of video delivered at, for instance, 2160p 60fps 10bit color to an expensive 80" 4K TV is going to be literally indistinguishable from the same video delivered by a $16 Amazon house brand HDMI cable.
Or this, it's a $340 cable for 1000BaseT Ethernet:
I can sort of understand methodical and expensive equipment modifications to separate AC power from amplifiers, shield AC power lines, keep AC power runs away from speaker wires, etc. That has at least some basis in technological reality.
> It's an HDMI cable! The video signal has CRC in it and is packetized, it's either going to make it or it isn't.
I don't disagree with your main point, but this actually isn't quite true. The HDMI signal is split into 3 distinct interleaved periods: video data, data island and control. Video data is not packetized and the only possible error detection it has is from TMDS signaling, but no such error handling is required by the TMDS spec. You can absolutely get imperfect transmission of an HDMI video signal due to cable or other electrical problems. Auxiliary packets in the data island, including audio data, do have an error correction scheme (BCH + TERC4).
Yes, but when there's signal degradation in an HDMI cable it shows obviously, not as a subtle video quality change. If you can see an HDMI signal without white blinking dots everywhere, you can be confident that the cable is not altering the signal in any really detectable way.
"If you can see an HDMI signal with white blinking dots everywhere, you can be confident that the cable is not altering the signal in any really detectable way."
I have seen an HDMI transmission with white blinking pixels at content edges that were not in the original signal. I ran a test signal through that source and verified the errors on an HDMI waveform monitor, after which I tested the cable and threw it away.
HDMI video is not magically impervious to error because it is digital. It's actually a pretty straightforward transmission scheme.
This was while working on an HDMI-output device, working with team members who had written the original HDMI specifications (so, yes, I've read them).
It is possible to have subtle degradation in HDMI-delivered content. It's rare, but possible. As one might expect, comprehensive failures (e.g. black screen, periodic HDCP key mismatch) are far more common, as are negotiation/configuration failures (e.g. wrong resolution, wrong framerate, wrong color space).
HDMI cables used to be a great example, but now there are something like 8 different levels of certification, and only the highest have the bandwidth to carry a 2160p, 60 fps, 10-bit, 4:4:4 signal. And as far as I can tell, they are all passive, so your source or sink can't warn you that your cable isn't compatible. You'll just get artifacts.
The AmazonBasics cable will do 18 Gbps, so it can do 2160p, 60 fps, 10-bit, but only with chroma subsampling (which is fine, because most video sources have chroma subsampling). The Amazon cable may work with a 4:4:4 signal, but it's not certified for it. There's a level above that for 48 Gbps cables; the quick arithmetic is below.
I totally agree with the general idea, though. As long as a cable for a digital signal meets the specifications set by the standards body, you're good.
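If you want the arithmetic behind those certification levels, here's the back-of-the-envelope version, using the standard 4K60 timing (4400 x 2250 total raster including blanking, i.e. a 594 MHz pixel clock):

    # TMDS sends 10 bits per 8-bit component on each of 3 channels.
    pixel_clock_hz = 4400 * 2250 * 60          # 594 MHz
    tmds_bits_per_pixel = 3 * 10

    for bits_per_component, label in ((8, " 8-bit 4:4:4"), (10, "10-bit 4:4:4")):
        rate = pixel_clock_hz * (bits_per_component / 8) * tmds_bits_per_pixel
        verdict = "fits" if rate <= 18e9 else "exceeds"
        print(f"{label}: {rate / 1e9:5.2f} Gbps -> {verdict} an 18 Gbps cable")

    # ~17.8 Gbps for 8-bit just fits 18 Gbps; 10-bit 4:4:4 needs ~22.3 Gbps.
    # 4:2:0 halves the chroma data, which is how 10-bit HDR 4K60 squeezes
    # into 18 Gbps; full 10-bit 4:4:4 is 48 Gbps cable territory.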
Okay, so the last time I bought HDMI cables I got 2.0-rated ones. Just looked, and HDMI 2.1-rated 48 Gbps cables are $25-35 each for 6 feet. That's not totally unreasonable. Now how is a $350 cable or an $800 cable any better?
One thing I can say about copper is that it is noisy.
We are running FPGAs (with NICs) and were getting lots of packet errors over the copper SFP cables. It turns out we either needed to build noise-cleanup logic into the FPGA, or add a noise-cleanup module which would add an extra 50 ns of packet-processing latency to the FPGA. The simple solution was to convert to fiber, and the noise and errors all went away.
I'd say if audiophiles truly want unmolested electricity, they ought to use optical fiber for as much of the run as possible.
Digital transmission/storage ensures the signal is not affected no matter how much noise is on the line (well, as long as it can be transmitted at all...)
Now, what I think you mean is that the copper in the cable picks up noise either from the device on the other end or from the surroundings and that has the possibility of affecting the DAC.
Even a moderately well designed receiver should be able to galvanically isolate itself from outside sources of noise without having to resort to fiber optics or wireless. It's just a couple of dollars, max, for a few extra components on the board.
For an example possibly familiar in computing circles, Infiniband has easily-accessible counters to record transmission errors. In a properly-built system with hundreds of communication-busy nodes connected with copper, you see remarkably low error rates, even in comparison with the spec (maybe a few a week in my experience).
Infiniband uses extremely well isolated and well balanced cables for passive/copper transmission. Also, the cable lengths are very short for EDR and beyond (3-5 m is the max usable length). For anything longer you need to use fiber cables with embedded converters.
We have a cluster with ~1000 nodes and all generations of Infiniband (DDR/QDR/EDR) equipment, and routing all these copper cables without damaging them is not easy.
The quality of digital transmission over copper cables depends on the size and clarity of the "eye": the pattern the signal traces when many bit periods are overlaid on an oscilloscope. Lower quality cables (HDMI/DP/coax/etc.) have blurry, small eyes, which increases the chance of artifacts and drives the error correction harder, while better cables are easier on both sender and receiver, since they produce fewer errors to correct. (There's a toy simulation of this below.)
Trivia: Amphenol / Gore make high-speed digital interconnect cables called Eye Opener Plus [0], which are also used in some of the Infiniband cables that we use.
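For anyone who hasn't met an eye diagram: here's a toy simulation of why a worse cable means a smaller eye. NRZ bits go through a bandwidth-limited, noisy channel (parameters invented for illustration), and we measure the worst-case vertical opening at the sampling instant:

    import numpy as np

    rng = np.random.default_rng(0)
    oversample, nbits = 16, 2000
    bits = rng.integers(0, 2, nbits)
    waveform = np.repeat(bits * 2.0 - 1.0, oversample)  # ideal NRZ, +/-1 V

    def channel(x, alpha, noise):
        # One-pole lowpass (alpha ~ bandwidth) plus additive noise.
        y, acc = np.empty_like(x), 0.0
        for i, v in enumerate(x):
            acc += alpha * (v - acc)
            y[i] = acc
        return y + rng.normal(0, noise, x.size)

    def eye_opening(y):
        # Sample each bit period at its center, like the receiver would.
        center = y.reshape(nbits, oversample)[:, oversample // 2]
        return center[bits == 1].min() - center[bits == 0].max()

    print(f"good cable: eye opens {eye_opening(channel(waveform, 0.9, 0.02)):+.2f} V")
    print(f"bad  cable: eye opens {eye_opening(channel(waveform, 0.2, 0.10)):+.2f} V")
    # A closed (negative) eye means the slicer starts making bit errors and
    # the link has to lean on whatever error handling the protocol offers.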
> Digital transmission/storage ensures the signal is not affected no matter how much noise is on the line (well, as long as it can be transmitted at all...)
There is no such thing as a digital transmission. All signals are analog.
Digital transmission refers to the whole transceiver system, not just the transmission medium. If I can put a "1" in and reliably get a "1" out, it doesn't matter that it was only a 0.9 part way down the wire.
I suppose you need to read up on digital protocols; it is all well handled. If it weren't, there could be no Internet or digital media, or even processors or motherboards.
I'm just so tired of all the bullshit where it "Must be a 1 or a 0". That is patently not true. HDMI video, to name one example very relevant to this thread, has no error correction, and bit errors will indeed crop up as the "sparkling" artifacts GGGGGP reported.
Sorry, copper is not ‘noisy’! Any medium has limits and methods to use it properly, e.g. shielding / grounding. You have to look at the whole system. Agreed that fibre might be more practical in some situations.
Obviously don’t spend $1000 on a normal passive HDMI cable, but I’ve found that increasingly the all-or-nothing argument doesn’t work for some reason. I recently had to upgrade all my HDMI cables to get 4k HDR video working reliably. Before upgrading, the video would work, but I would occasionally see white “sparkles” on the video output. Googling this problem led me to many people recommending upgrading HDMI cables if you have very old cables.
Certainly there are bad cables, and sparkles are clear signal dropouts.
The description of that cable, though, says "Perfect-Surface Technology applied to extreme-purity silver provides unprecedented clarity and dynamic contrast."
An HDMI cable cannot affect the "clarity" or contrast of the image or audio it carries. It can just succeed or fail (and failure results in obvious pixel noise, not some overall lowering of contrast or clarity as in analog).
All cables have some probabilistic chance of corruption; it isn't binary. It's the same as going from five 9s to seven 9s in your application, which involves huge cost: distributed databases, multiple data centers, etc. For most people it's a waste, but if you really value not having the occasional white speck, it could be worth a better cable, which may cost a lot more to recover the design costs of making a cable very few will want to buy.
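To put numbers on "probabilistic failure": expected flipped bits per hour at the ~17.8 Gbps rate of a 4K60 stream, for a few notional bit error rates (BER values picked for illustration, not from any spec):

    bit_rate = 17.82e9  # ~4K60 8-bit over HDMI, see the arithmetic above
    for ber in (1e-9, 1e-12, 1e-15):
        errors_per_hour = bit_rate * ber * 3600
        print(f"BER {ber:.0e}: ~{errors_per_hour:,.0f} flipped bits per hour")

    # 1e-12 still means ~64 bad bits an hour; pushing toward 1e-15 gets you
    # down to roughly one a day. Whether that's worth money is a bounded,
    # measurable question, unlike "clarity" or "dynamic contrast".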
I don't know this world. But could it be because the HDMI protocol doesn't require correcting all errors? And newer cables produced fewer corrupt packets which might have resulted in "sparkle"?
Different HDMI cables have different bandwidths as a result of their physical construction, the same way you have different-bandwidth ethernet cables. Sending a signal at the edge of a cable's capability may cause intermittent problems.
Many years ago I saw gold-plated TOSLINK cables. They were $500. TOSLINK is a purely optical audio connection standard that uses fat multimode plastic fiber.
Anything "audiophile" in the digital domain is all equally absurd. The only difference is that some require a half-second of consideration and others require literally no thought at all to identify as stupid.
I wouldn't go so far as to say "anything", but certainly most. As long as a DAC with sufficient resolution and a stable clock is being fed enough bits, then yes.
If it’s all-digital and marketed to audiophiles, it’s a scam and/or absurd. You can’t improve upon bit-perfect. (Room calibration being a notable exception.)
You mean like those people who argue that "everything is a sequence of zeroes and ones, therefore the signal either gets transmitted or not", while totally leaving out the fact that this digital signal materializes in the real world as an analog signal, namely electricity, and therefore every digital data transmission (as long as it's not optical) is subject to physics that happens totally outside the "digital domain"?
I don't understand the point you're trying to make. Everything is ultimately subject to physics, including optical signals. Nobody claims otherwise.
But it is incorrect to say that digital transmissions must be at the mercy of physics. Digital transmissions can defy it in practice: there are numerous techniques which ensure that a signal reaches the destination with precisely zero flipped bits and exact timing. And even when the signal has no integrity mechanisms, the error rate will in practice be low enough to never matter.
And you know what, even if bits were getting flipped and jitter was extreme, you still wouldn't have signal degradation in the ways described by audiophiles; you would get a raised noise floor. Random error is noise. Noise is random error.
Of course this is never an issue. We can send digital signals that are millions of times more complex than digital audio, with zero problems, using equipment that is insanely cheap. The assertion that the extremely low data rate PCM audio signals have some special risk associated with them is delusional.
> I don't understand the point you're trying to make. Everything is ultimately subject to physics, including optical signals. Nobody claims otherwise.
Sorry, I misformulated my sentence.
It should have read (emphasizing what I left out from the sentence in my previous post):
> You mean like those people who argue that "everything is a sequence of zeroes and ones, therefore the signal either gets transmitted or not", while totally leaving out the fact that this digital signal materializes in the real world as an analog signal, namely electricity, and therefore every digital data transmission (as long as it's not optical) is subject to _the physics of electricity_, which happens totally outside the "digital domain"?
Though I think you are nitpicking, because from the rest it should be totally obvious what I wanted to say. Use the error correction, man ;-)
Therefore:
If an analog signal can suffer from the material through which it travels, then a digital signal will suffer for the same reasons, simply because physically it is not a "digital" signal at all; materially, both signals are of the same sort.
Even if we could create the perfect mathematical concept in our brains and have a solution for everything, as soon as we step into the real world, that is, the material side of affairs, many unexpected things can happen that have to be accounted for the next time.
For what it's worth, I deal with race conditions on a daily basis in multi-threaded programming. What I don’t understand is the point you’re trying to make and how it relates to real digital audio systems. Can you please explain it?
tl;dr: As soon as the "digital domain" steps into the "physical domain", you cannot guarantee a system free of "side effects".
Let's start with a real-life example:
I have a Macbook from 2006, which has a Firewire port, and an external audio interface (Onyx Satellite Pro) with a (Logitech Z4) active speaker system connected via line out. When I got the Onyx, I realized that the included Firewire cable was too short for my setup, so I had to purchase a longer one. I took great care to buy the cheapest one in the store (a Radio Shack type), because, hey, digital is digital, isn't it?!
When I came home and swapped the two cables, the first thing I noticed, much to my surprise, was that the audio sounded different!
The old cable sounded more "relaxed", "warm", "analog". The new one sounded "faster", "analytical", a little "sterile". How could that be? All I changed was the cable...?!
Conventional audio wisdom, especially of those who call audiophiles "audiofools", dictates that a digital signal is just zeroes and ones. The sequence either gets transmitted or not. Add in the timing component, which is either on point or off, and things should be pretty clear: what goes in is what comes out. No loss of information. Should there be such loss or distortion, we can error-correct it, at least up to a certain degree. Therefore, one always gets a perfect, or near-perfect, signal.
That, at least, is the argument.
If that argument alone held true and were sufficient to describe everything that happens in such a system, I would not have been able to hear a difference by exchanging the digital interconnect, right?
But I did!
Therefore there must be something happening outside of the digital domain that influences the signal. In my case, clearly the cable!
And now I come to the conclusion, which should answer your question:
> What I don’t understand is the point you’re trying to make and how it relates to real digital audio systems.
The example above covers exactly your request for a real digital audio system. What happens here is that we step out of the digital domain into the real-world domain, which means the physical and chemical domain: namely, the domain of audio components. These components are made out of matter. In the case of electrical signals (Firewire, USB, etc.), the medium which transmits our signal is the same one which would transmit an analog audio signal. Signals get transmitted as electricity with ever-changing voltages (leaving out optical transmission for now). Therefore everything that applies to analog signal transmission applies to digital signal transmission as well, as long as the transmitting medium is electrical.
Now, you may say that this is not important, since the information carried is still a sequence of binary data, and it gets transmitted or not; given the theory behind it, it should be perfectly reproducible. But then, I ask you, why the difference in sound?
Given a perfect program (our digital concept), when applying it to the real world (physical/chemical), race conditions may occur, simply because of unpredicted side effects (you know the importance of this) that come with this domain. Our perfect program, here, would be the theory of lossless, bit-perfect data transmission: the digital domain, or Theory. Our real world would be the cables, connectors, and building blocks of electronics, which are the chemical and physical domain, or Matter. And there is something happening in the material domain that makes an impact on the sound. I don't know what that is. But I can clearly hear it on a sub-€1,000 system. Now imagine how much more this could be heard on a stereo system that costs €25,000 or more...
> Anomalous behavior due to unexpected critical dependence on the relative timing of events.
"Anomalous" is the key word here. There are side effects that happen outside of the "digital domain". And it is these side effects that change the sound. And audiophiles know this. If a cable can change the sound, then capacitors, resistors, whatever, may well change the sound, too.
> ... given the theory behind it, it should be perfectly reproducible. But then, I ask you, why the difference in sound?
Placebo effect. 100% guaranteed. It is literally impossible for digital audio signals to degrade in any way that even vaguely aligns with your descriptions. It is analogous to using a different brand of SD card to get richer, more vibrant colours out of your digital camera. It is analogous to changing the ethernet cable on your computer to make web pages more insightful.
Bits don't work that way.
Whereas the placebo effect is very real and very powerful.
Firewire is also a two-way, packetized, error-corrected protocol. If a cable were not delivering the signal reliably, the consequence would have been a lower maximum transmission rate, not an error that could affect sound. If there were any change in the signal due to the cable, your Onyx interface could not maintain a stable connection with your computer and you'd get no sound at all.
Your attempt to describe how digital signals are transmitted is actually just plain wrong.
Furthermore, race conditions have absolutely no relevance to this discussion.
> [Digital] Signals [when not transmitted optically] get transmitted as electricity with ever-changing voltages [...]
You wrote:
> Your attempt to describe how digital signals are transmitted is actually just plain wrong.
Wikipedia writes:
> A waveform [...] as a digital signal [...]. The two states are usually represented by some measurement of an electrical property: Voltage is the most common [...]
Are you referring to noise being transferred to the analog stage, or digital data transmissions being corrupted?
The latter is very unlikely with asynchronous USB DACs, since USB will retransmit corrupted packets, and jitter isn't a concern since the DAC buffers the data, then uses its own clock.
The former: well, yeah, it's impossible to completely eliminate RFI no matter what you do, short of optically isolating the source, putting the analog equipment in a Faraday cage, and running it all off batteries.
That still leaves the dirty old storage drive. Fortunately Mojo Audio has that one covered:
“Integrated Sorbothane resonance control under the chassis, main board, and SSDs isolate resonance.”
This isn't _strictly_ true - I've seen basically the equivalent of noise in an HDMI picture (white dots all over the picture), and eventually tracked it back to a faulty HDMI cable.
I did replace it with the cheapest (<£5) HDMI cable I could find though, so there is that.
That's not 'noise'; that's packets that didn't make it. It's not about having a perfect picture vs. no picture, but about data packets either going from A to B or not.
Say the HDMI cable is crap and can only do 100 packets per second, and HDR needs 1000 packets per second: the HDMI chips are going to negotiate HDMI transfer settings without HDR. Going further back, say the HDMI data path is negotiated down to basic HDMI, say, DVI-like video. If even that is too much for the cable to handle, say it constantly drops a few packets, you would get either perfect pixels or missing pixels.
You are correct. I suppose serial data is more like a clocked stream than packets. Perhaps I was thinking of TDMA instead of TMDS. Flipping bits is easy indeed, which should be mostly mitigated by the requirements on the conductor and shielding.
It's a stream-based cipher, so you'll get the same number of blips as you'd get with normal bit damage. The difference is probably that you can get a whole missing frame as a blip instead of just missing pixels, because the stream key operates on frames instead of single pixels.
You are right. I went back and read the HDCP spec, and now I understand my confusion. What I experienced multiple times with HDCP and a bad/low-quality cable/connection was the screen going blank for a short moment. I always assumed it was the cryptography itself breaking apart, but the spec requires resetting the connection in case of logged TMDS-layer errors:
> While the receiver may reach a determination sooner, the receiver must determine loss of synchronization at least by the time it has detected 50 consecutive data island packets with ECC errors. The HDCP Receiver must either assert the REAUTH_REQ bit of the RxStatus register or de-assert HDCP_HPD to the upstream transmitter once it determines loss of synchronization.
The examples you linked are crazy. It is possible, especially with Ethernet, to just check the packet error rate and see if there is a problem.
There's something to be said for a bit of extra effort in filtering, but most of what audiophiles pursue has no basis in electrical engineering reality.
And then there's even weirder audiophile accessories like cable elevators, amplifier stands, CD "demagnetizers", and "quantum chip" stickers -- all complete nonsense which doesn't even have any plausible way of affecting the audio.
I think this is a straw man, since a majority of audiophiles will tell you that a $1000 HDMI cable is stupid. Nobody actually buys that shit.
What does matter is getting a decent HDMI cable. I'm going to get a braided cable which I know supports the bandwidth of HDMI 2.1 (I think that's the current version).
Technically there is jitter in the digital domain, and it's possible to see problems with high-bandwidth signals, but for low-bandwidth transmissions like FLAC you're absolutely right.
If you have nanosecond timing requirements, then those might be useful, but for audio they are overkill.
The difference is that with a digital signal there is a limited number of components that actually make a difference, and as soon as a certain level of quality is reached (basically reliable transmission), nothing can be improved further.
For listening purposes, the transmitted data can be buffered on the receiving DAC. This means that as long as the DAC's buffer is never emptied, everything else may be failing badly and it will have zero effect on your listening experience.
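A toy model of that buffering argument: let the source deliver samples in nasty, bursty chunks while the DAC drains at its own steady clock rate. Only an empty buffer (underrun) is ever audible; all numbers below are invented for illustration.

    import random

    random.seed(1)
    buffer_level = 16384        # samples currently buffered (~0.34 s cushion)
    rate = 48000                # samples/s: long-term source rate = DAC clock

    underruns = 0
    for ms in range(10_000):    # simulate 10 s in 1 ms steps
        # Bursty source: each millisecond it sends either nothing or a
        # double-size burst, at random (awful timing, correct average rate).
        buffer_level += random.choice((0, 2)) * rate // 1000
        buffer_level -= rate // 1000            # DAC drains steadily
        if buffer_level < 0:
            underruns += 1
            buffer_level = 0

    print(f"underruns in 10 s of very jittery delivery: {underruns}")
    # The output samples are reclocked by the DAC's own oscillator, so the
    # incoming timing chaos never reaches the analog side.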
You were not downvoted because people took your comment literally, you were downvoted because being snarky runs against the Hacker News guidelines, specifically:
> Be civil. Don't say things you wouldn't say face-to-face. Don't be snarky. Comments should get more civil and substantive, not less, as a topic gets more divisive.
Sometimes snarky comments will be forgiven if they are substantive. Yours contributes very little beyond what the parent/grandparent comments had already mentioned.
And that's exactly what a real high-quality cable is.
Professional cables are usually flexible, with robust insulation and connectors. They may also be fire rated. Quality control ensures that cables will always be up to spec, because the people doing the installation have better things to do than deal with defective cables.
Like most pro stuff, pro cables are not "better", they are cables that won't be a problem.
One theory I read (for good quality USB cables to a DAC) is that reduced packet loss reduces jitter, which means that the clock rate of the DAC doesn't have to adjust in order to keep the buffer populated, but not overflowing. Sounds plausible, but I'm not sure that packet loss is high enough to actually cause this.
A working USB cable does not create packet loss and definitely does not fix packet loss; if packet loss occurs because of the cable, the cable is defective, nothing else. There is no good quality and bad quality, there are cables that work as expected and cables that are defective.
But there is capacity, and USB1.1 may need less than USB2. That said, we can assume that we are talking about USB cables that are tested/verified/certified, which means they have to follow a few rules about how much Mhz bandwidth it supports, how much EMI it can deal with and how much drop in power it's going to cause. But of course, those are measurable facts and still have nothing to do with the data packets getting 'warm' or 'fuller' or some other subjective term.
Yeah, but still, the biggest loss is going to happen within your plug/jack connection. So if you really care, solder your cables in. Better than any gold-plated connector your audiophile purse is ever going to buy.
Also: acoustics are what matter most (and they are the hardest to get right just by throwing money at the problem, which is why they aren't usually discussed that much).
Jitter is itself an audiophile scam. The audible consequence of extreme jitter is an increased noise floor, which is itself drowned out by the ambient sound of a typical "quiet" room.
Yeah, that basically describes me between the ages of 16 and 26. We would spend hours moving gear to different locations (homes), fiddling with cables, different amplifiers and loudspeakers to find the best possible match our (limited) finances would allow.
Tube amplifiers combined with MOSFET pre-amps, Magnepans with external bass speakers, Class A amplifiers that required a dedicated power fuse per amplifier. First-press vinyl from Japan. Van den Hul needles. A granite turntable platform.
Every single weekend we'd swap pre-amps, amps, turntables, arms, cartridges, needles and listen to the "audio system via music" :)
What happened to my setup, you ask? I got married. My living room was modelled around the audio experience; it was not an ideal love nest :) I actually gave my hand-crafted, tailor-made power amplifiers away just a year ago, to a young stranger I overheard talking about his audio quest, who is in the phase where he is searching for his ideal setup like I was 30 years ago.
I used to obsess over sound quality too. But as I grew as a musician, I started to care less about frequency balance and dynamic range and more about the emotional content. Now I listen to music on cheap and convenient Bluetooth headphones, going through dozens of new songs every day (in crappy streaming quality), and instead of tweaking Ozone settings, I use LANDR for mastering and spend that time working on lyrics instead.