Editor’s Note: Most of Bob Stuart’s answers have been debunked and the MQA technology is now seen as lacking any benefit for anyone other than record labels and MQA Ltd.
Shortly after attending CES 2016, where MQA was a very hot topic, I realized that there was more speculation about MQA than available facts. Nonetheless, it seemed like everyone had an opinion about MQA. Most people had never heard the final output of the MQA process, an actual song or two, but they were still very eager to render an opinion. Some armchair engineers jumped at the chance to speculate what was going on, based on little to no information. In addition, other learned folks even rushed to judgement about MQA without fully understanding what they were “analyzing.” Once I started to see this speculation controlling the narrative and leading interested CA readers down a path that wasn’t necessarily illuminated by facts, I figured it was a good idea to go right to the source.

I talked to MQA’s Bob Stuart about some of the questions people had and some of the speculation that was swirling around not only CA but the entire HiFi community. I proposed a question and answer “session” where the CA readers could ask anything they wanted, without censorship, and Bob would respond. Without hesitation Bob agreed.

In order to accumulate a good number of questions and to give Bob a decent amount of time to formulate thorough answers, the questioning period was open for one week, after which Bob curated the questions and started writing his responses. Bob was the first to say, “All questions will be answered.” As such, the time between the end of the questioning period and the publishing of the answers needed to be ample. I’m sure Bob could have whipped up some talking-point type answers in a day or so, but that’s not what he wanted to do and that’s not what those asking the questions wanted to receive. Fortunately good things come to those who wait. This week Bob sent me a thirty-page document, including eighty-two questions, graphics, references, and a glossary. What follows is a word-for-word reproduction of this substantial document.
INTRODUCTION
Thank you for the questions and the opportunity to answer them. We have grouped them into broad categories. No questions were ignored or removed but sometimes answers are for a group of similar questions. A few questions were either added in by us or brought in from others, to aid clarity or completeness. [jumpto=anchor57]1[/jumpto]
Several of the more technical questions do require detailed answers; some of these we give here, others are brought in by reference. Some topics will be covered on our upcoming blog site, which will give more data.
There is a list of myths, references and a glossary at the end.
Cautionary note: We have tried to answer your questions fairly, please offer us the same courtesy and try not to take our answers out of context. If there are any burning questions, contact us on info@mqa.co.uk.
[aname=anchor57]1[/aname] Spelling or grammar in the questions may have been adjusted.
WHAT IS MQA?
[aname=anchor42]Q0[/aname]. What is MQA? (bonus question)
A0. In essence, MQA is a hierarchical method and set of specifications for recording, archiving, archive recovery and efficient distribution of high quality audio. Devised by long-term collaborators Bob Stuart and Peter Craven, it has been developed by MQA Ltd. [[jumpto=anchor1]1[/jumpto]]
One axiom is that, in audio, High Resolution can be more accurately defined in the analogue domain in terms of temporal fine structure and lack of modulation noise than by a description in the digital domain, particularly if that description relies on sample rate or bit depth numbers. [[jumpto=anchor2]2[/jumpto]]
Another observation is that, by not going back to first principles, the recent trend of seeking higher resolution in digital audio has involved an unstructured and somewhat unscientific approach; a ‘dilution’ rather than a resolution of problems; leading to excessive increases in data rate, with a resulting lack of convenience for the end user.
A postulate in MQA is that by combining the statistics of musical signals with modern methods in sampling theory and insights from human neuroscience, we can more effectively convert the analogue music to digital and back to analogue.
A key implementation point is how to bring these insights to bear on current equipment, so that distribution files can be enjoyed on existing equipment while at the same time not accepting compromises in the potential to overcome key problems in processing or in the gateways (A/D and D/A) or to innovate in the future.
We see from the questions that some people have been confused but this is generally because they are approaching, trying to understand, or forcing the discussion on MQA, from a different conceptual frame of reference. In brief, MQA is a philosophy more than it is ‘just a codec’.
[aname=anchor41]Q1[/aname]. Are you guys serious about this? (bonus question)
A1. Very. We are very serious about the problem that, in the internet era, the average level of sound quality has declined for most music fans. The steady dissolution of the album as a creative work and reduction in inherent sound quality from vinyl or CD to MP3 is more than unsettling, especially for newcomers. The fact that the decline of physical media has effectively disconnected several generations from simple discovery and playback has accelerated the process.
For music lovers, hobbyists in audio, or companies distributing music or building playback equipment, the most important short, medium and long-term issue is the quality, depth and scope of the recorded music catalogue. Since 2000 the technological and behavioural climate has taken a serious toll on the music companies’ ability or desire to release the best sound. Audiophiles can never make a big enough market for recorded music and so the problem has to be solved at a much wider level (we return to this theme in Q8 and Q11).
Just for fun, we will put this picture back again – partly because it led to so much argument last time. But just before we do, remember the graph isn’t ours, it’s the opinion of top music label heads of what they delivered to the average customer and where they want to go.
http://images.computeraudiophile.com/graphics/2016/0407/Figure-1.png
Figure 1. Evolution of formats (High Resolution Image)
We are also serious about the fact that, in the high-performance arena, development has been haphazard and there are only loose causal links between escalating sample-rate, bit-depth and sound quality. So, a ground-up re-evaluation was called for.
The MQA technical team has more than 100 man-years’ experience in this area. Bob Stuart and Peter Craven, who head that team, are well known and respected in the audio engineering community and, along with their late collaborator Michael Gerzon, are behind many of the things we now take for granted. Examples include being the first to optimise dither and noise-shaping, to propose and build lossless compression (MLP/TrueHD, a decade before FLAC), lossless processing, auditory noise analysis, lossless-matrix coding and lossless buried data, and even apodizing filters (the term was coined by Peter). [[jumpto=anchor4]4[/jumpto]][[jumpto=anchor5]5[/jumpto]][[jumpto=anchor6]6[/jumpto]][[jumpto=anchor7]7[/jumpto]][[jumpto=anchor8]8[/jumpto]][[jumpto=anchor10]10[/jumpto]][[jumpto=anchor12]12[/jumpto]][[jumpto=anchor13]13[/jumpto]][[jumpto=anchor14]14[/jumpto]][[jumpto=anchor15]15[/jumpto]]
A decade on from Peter describing apodizing, a high percentage of DAC chips now incorporate that thinking. Two decades on from ARA, lossless compression has become the norm. [[jumpto=anchor16]16[/jumpto]]
We believe the same will be true for the fundamental insights brought by MQA.
GENERAL QUESTIONS
Q2. Are you willing to produce more technical documents describing the technology? The more details the better.
A2. Yes, we intend to provide more information. We are a small team and, from necessity, our focus has been on completing the tools and on thousands of hours of detailed listening tests. Nevertheless, there is already some useful information available and we are working to make more of our papers open access. See references [[jumpto=anchor1]1[/jumpto]][[jumpto=anchor2]2[/jumpto]] and [[jumpto=anchor3]3[/jumpto]] (in Japanese). There is another AES paper in the works. We’re about to launch a blog section on the MQA website.
Up to now we have concentrated on the professional bodies such as RIAA, JAPRS, JAS and the mastering community.
Q3. Is the MQA technology still under development?
A3. We have frozen the distribution bitstream at version 1.0. Our view on compatibility is strict, therefore any music files sold today will always be compatible with future decoders and similarly decoders sold today will decode aspects of future streams compatibly.
Q4. Where do you think MQA is a year from now?
A4. The MQA project has a long timeline, but our hope is that inside a year:
- major and independent music label groups in Japan, Europe and the USA will be using MQA as an integral part of their business. This is the key – helping the music industry to distribute a more accessible product to bring a better listening experience to many.
- many companies currently incorporating MQA will have announced products.
- a significant number of studios involved in high-quality recording will be using MQA tools to make great new releases.
- we will have seen services launched using MQA streaming for live events.
Q5. Is there a market for MQA in streaming movies in order to achieve better sound? Can MQA be multichannel?
A5. MQA is hierarchical and, as you might expect from its founders, has strong underpinnings to extend to multichannel. However this is not our first focus. Frankly we think the bigger and more urgent task is to do what we can to help music. It can be expanded to multichannel later.
Q6. Are there any plans to use MQA in video releases?
A6. Not at the moment. Taking on a second industry, especially Hollywood outside a Forum context, would be foolish.[jumpto=anchor18]2[/jumpto]
Q7. What is the time schedule for MQA streaming release? Are any major labels incorporating this kind of encoding?
A7. We are in discussion with a number of streaming services. However it is up to them to make their own announcements.
Q8. Will there ever be MQA releases in a physical format that can be ripped? I’d rather buy something in physical form, especially if it is premium priced, and then rip it into my computer. I am one of those guys who would like to hold what they pay for.
[aname=anchor35]A8[/aname]. Most download services do provide backup access to purchases and on-demand streaming services need no backup. Nevertheless, it’s easy to sympathise with that sentiment.
Nothing resonates across the generations better than a physical format you can readily buy in the high street, but streaming and new forms of download combined with high sound quality might get us back to something better than exists today.
The sentiment in your question is not new. Stephen Witt wrote very recently in the Financial Times ‘… there is a phenomenon so new it doesn’t have a name. The digital era gives us everything to own but nothing to touch. What remains is our phantom longing for useful physical objects.’
Every music lover, in each generation has a different take on this. For me, nothing beats the sound quality and ease of access of a Sooloos, but if the same files are on a streaming service and one can choose whether to buy them or not, nothing is lost and an exciting journey of discovery can start.
CD was the most successful format for mass music because the catalogue became huge, everybody understood how to buy, play and store it. Make no mistake, the vinyl resurgence is not about convenience, universality or accurate sound quality.
The audiophile’s hobby is fuelled by recordings, but we live at the behest of the larger market. The damage caused by MP3 (and similar) was huge, economically and, in retrospect, culturally.[jumpto=anchor19]3[/jumpto]
For this complex set of reasons, MQA was conceived partly as an offer of a way out of the weeds.
We made sure the MQA stream is LPCM and 100% compatible with the past. That means there is no reason not to put MQA on DVD-Video or BluRay. There is another form of MQA that is very interesting for bridging narrow gaps in distribution; it is practical to make MQA CDs which are 100% backward compatible, sound great as a Redbook CD and decode to much more. A number of labels are seriously considering that option.
Q9. Regarding my disappointment with hi-res audio (with some exceptions, of course) as a holy-grail digital format: I believe that MQA is the last format standing between real evolution in digital audio and total Redbook/MP3 domination in the long term.
A9. We are inclined to agree. It’s an important problem we are solving and requires insight, perspective and determination. We are up for the chance to make recorded music more enjoyable and more available. We have been very pleased by the number and quality of very positive comments and support. The key difference is we are taking the solution inside the music industry. This inclusive approach makes it slower to get going, but we hope more effective in the end.
Q10. Despite audio buffs (who tend to listen with their minds rather than their ears), nobody will really care about hi-res enough to make it something special in the market, and in my opinion rightfully so. The only hi-res formats that really make a difference are so large that they are totally inconvenient to store, stream or even listen to. In my experience, all other digital formats except DSD128/DXD offer little if any improvement over Redbook, depending on the recording.
A10. In our experience, when ‘normal’ people hear better quality they appreciate it and furthermore, understand that they have been missing a lot in their music. In earlier generations there was only one distribution asset, e.g. the vinyl record. People could choose how to play it back. The record was not dumbed down because the listener didn’t care. This judgemental approach in file format proliferation is a big step backwards on many levels; it alienates and fragments the market.
So, we contend that if you don’t make it harder or more expensive to get a better sound, everyone wins, especially the artist who can communicate better sound and the fan who receives a clearer impression. Implicit in this initiative is that the closer we can get to the sound of the live or original, the more we understand and enjoy the music.
Quad DSD and DXD are superb, but they are barely practical distribution formats. By solving a fundamental problem in digital music, MQA allows one ‘mechanical’ to cover the majority of listening contexts. In very basic outline, this diagram hints at the reason you don’t find intermediate formats fully satisfying. I’m going to return to the DSD128/DXD comment in a later question.
http://images.computeraudiophile.com/graphics/2016/0407/Figure-2.png
Figure 2. Notional sound quality vs sample rates (High Resolution Image)
Q11. MQA is truly the last chance for something really special to happen. I sincerely hope that all those frustrating delays mean that its release will be something special, including major support from the real music industry and not just niche audiophile labels. If it doesn’t come to the music most people like to listen to, it will not go far, I am afraid, and I would like to think that it will.
A11. We agree with you 100% that there are very few chances to change the course. We chose to accept a few delays to gather consensus, to make sure everything was right and to set up a program with a level playing field. The history of recorded music shows that when the release form is accessible, it is more successful. The music market was never built on audiophiles, even in the vinyl era (where quality was also compromised for convenience).[jumpto=anchor20]4[/jumpto]
Of course the major label groups are important, but so too are the independents and we are putting a lot of effort into the music supply chain of all genres. We are very actively supporting the smaller labels as well and they can release early because they are typically more agile.
Our vision here is to converge access and convenience with quality.
Q12. When will the MQA certification process for third party DAC partners be complete, such as the Mytek Brooklyn?
A12. The quick answer is always as soon as possible. Each certification process depends on the company and its model and how much work we have to do together to get the result that suits us both. We have a process in place and many products going through.
Q13. Where can recordings be found where the recording process was performed using MQA compliant ADCs?
A13. None out yet, but they are in preparation.
[aname=anchor36]Q14[/aname].
Do you see MQA as complementary to existing technology, as co-existing but separate, or as a replacement for any specific technology? And, obviously, there are a lot of existing systems out there; are we looking at compatibility with iTunes, Apple Music, Sonos, etc.?
A14. A cornerstone of MQA was that it should be seamless and backward compatible in the existing ecosystem. To be useful in music distribution, the files have to play everywhere, and single inventory is very appealing. To be useful to the music listener, one file that can play everywhere (in the car, on a phone, on a PC, in iTunes, on Sonos after import, or on the hi-fi), or that can be streamed realistically, would be perfect.
Not everyone can be bothered with file management, especially transcoding from big ‘hi-res’ files to portable-friendly.
So, yes, we do see it as complementary (see also [jumpto=anchor35]A8[/jumpto]). MQA audio is PCM in a world of PCM. But, like PCM (and indeed MLP) and unlike FLAC, it is not a file format as such. MQA encodes audio in a continuous stream, so we can put it on a disc and skip around. It can be played back without its header; we can jump to the middle of a song and the music starts immediately; the decoder knows what to do. And because it isn’t a file format, it can also be used for live streaming.
Of course the entire legacy ecosystem is complex, varied and challenging. Computer audiophiles know the traps waiting to prevent bit-accurate audio coming from a computer; similar dangers or chances to lose transparency lurk in automotive, in mobile (iOS and Android), in Airplay, over Chromecast, over Bluetooth, in well-meaning accessories or in products designed with faulty understanding of DSP, etc.
Some platforms don’t accept all possible sample rates or bit depths – that’s part of the challenge we took on.
The MQA decoder tells you when the data are correct and the same decoder should know the composite DAC and associated analogue sections to get the best answer that hardware can give. By this means we can get a better sound than any other delivery method.
[aname=anchor18]2[/aname] Having served on the DVD and Blu-ray Forums, we understand how time consuming that process can be.
[aname=anchor19]3[/aname] See Stephen Witt, ‘How music got free’.
[aname=anchor20]4[/aname] Greg Milner’s book ‘Perfecting Sound Forever: The Story of Recorded Music’ gives a very interesting overview. It’s worth a read, but be prepared to find the last chapter saddening as loudness compression and downstream over-production takes its toll on the art.
THE PHILOSOPHY
[aname=anchor43]Q15[/aname]. To Mr. Stuart: I read your 2014 AES convention paper #9178 with great interest. It is clear to me that MQA has great potential and considers a larger array of psychoacoustic factors than are currently acknowledged in conventional sampling/filter theory. I would like to ask: Can you please clarify how DAC sampling rate capability might affect reproduction quality, considering your paper statement that the receiver (decoder) should implement an appropriate up-sampling reconstruction? What sampling rate capabilities would you consider ideal for highest quality reproduction, and what other DAC capabilities would you suggest are important?
A15. This is a complex topic. A central axiom of MQA is that sound we hear is analogue; digital technology is most useful for storage, transformation or transmission.[jumpto=anchor21]5[/jumpto]
It’s useful to know the original sampling rate of the mastering process, because that tells us about the first part of the chain. But, as we will cover later, the DAC is equally important; a chain is as strong as its weakest link. More important, unless the encoder (A/D or mastering) and decoder (plus D/A) processes are complementary, it isn’t possible to reach the final result and certainly not at low data rates. That means that we can’t solve this problem either in the studio or in the DAC alone; we have to get both ends right and working together.
Our ideal DAC has zero modulation noise and a compact impulse response. Of the many converter chips out there today, the best gives the MQA decoder direct access to the Modulator. Failing that, we will generally want to minimize the on-chip processing; that means driving the DAC as fast as possible and with tailored filters. That also involves matching the resulting impulse response to fit into the conceptual hierarchy described in [[jumpto=anchor1]1[/jumpto]]; that way the sound most closely matches the studio preview.
Our brain-stem (which is very responsive to fine time structure) extracts the envelope of many sounds. Knowing this, we can more clearly understand why the sinc kernel is less appropriate than others for natural and other sounds of human interest. [[jumpto=anchor1]1[/jumpto]][[jumpto=anchor9]9[/jumpto]]
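To make “temporal spread” concrete, here is a minimal Python/NumPy sketch comparing the time extent of an ideal brick-wall (sinc) reconstruction kernel with a short Hann-tapered (“apodized”) one. The 0.5 ms taper and the 0.1% threshold are arbitrary illustrative choices, not MQA’s actual kernels or figures; the figures that follow show the modelled results Bob refers to.

    import numpy as np

    fs = 48_000                                 # base sample rate (Hz)
    t = np.arange(-8e-3, 8e-3, 1 / (16 * fs))   # +/- 8 ms, finely sampled to approximate the analogue-domain response

    def spread_ms(h, thresh=1e-3):
        # Width (ms) over which |h| exceeds `thresh` of its peak: a crude stand-in for "temporal blur".
        idx = np.where(np.abs(h) > thresh * np.abs(h).max())[0]
        return (t[idx[-1]] - t[idx[0]]) * 1e3

    # Ideal brick-wall reconstruction: a sinc kernel whose ringing decays slowly (roughly 1/t).
    sinc_kernel = np.sinc(t * fs)

    # A short, Hann-tapered ("apodized") kernel: gentler frequency roll-off, but the
    # impulse response is confined to a fraction of a millisecond.
    taper = np.where(np.abs(t) <= 0.5e-3, 0.5 * (1 + np.cos(np.pi * t / 0.5e-3)), 0.0)
    apodized = sinc_kernel * taper

    print(f"sinc kernel time spread    : {spread_ms(sinc_kernel):5.2f} ms")
    print(f"apodized kernel time spread: {spread_ms(apodized):5.2f} ms")

The point is only that kernel shape, not sample rate alone, sets the time spread; the figures below give the corresponding modelled comparison for real systems.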
The graph below is from a model of neural significance of temporal blur introduced by the filters in different systems. One thing we can see is that, as listening tests bear out, MQA can significantly improve low-rate digital sources.
http://images.computeraudiophile.com/graphics/2016/0407/Figure-3.png
Figure 3. Modeled temporal blur vs sample rate in sinc systems c.f. MQA. (High Resolution Image)
http://images.computeraudiophile.com/graphics/2016/0407/Figure-4.png
Figure 4. The diagram above shows examples of system end-to-end impulse response (analogue - digital - analogue) for: typical linear-phase 192 kHz/24b and 48 kHz/24b (at 48 kHz the response does not fit on the graph as it extends both ways to +/-4 ms); also shown is the MQA response for the same 192 kHz input and the response of 2.5 m of air at STP.
MQA aims to ‘do no more harm’ to an audio signal than it suffers in passing through a few metres of air. It exceeds the sound quality of all current hi-res formats without requiring high data rates, while remaining fully backwards-compatible! (High Resolution Image)
[aname=anchor21]5[/aname] For sounds 1 metre away, analogue can be likened to a current (sinc kernel) digital channel sampled at 768 kHz with optimal (shaped) quantisation to 15 bits. Why? Because such a channel introduces less blur and random noise than the air itself.
WHY IS IT END TO END
[aname=anchor37]Q16[/aname]. My understanding is MQA is supposed to be “end to end”, therefore:
Will there be separate analogue and digital masters? In other words, with many analogue (vinyl) recordings starting off as digital masters, what will be the extent of MQA in the analogue signal chain? My understanding of the MQA process is that when the files are encoded, corrections are made for “damage” done by the original ADC. Again, with the intent that MQA will be present at the very beginning of the signal chain, are the psychoacoustic issues being discussed for digital also present in the analogue domain? Why or why not?
A16.
Generally there is only one core source of the truth in the studio, the final mix, which can then be mastered to a number of requirements, e.g. as second generation for CD, MFIT, HDTracks, Blu-ray, vinyl, etc. If a new recording is being mastered from digital to vinyl, MQA is ideally involved in the capture, mastering and playback to the cutting lathe.

For new recordings or special re-issues, where possible we fingerprint the system and converters used. This can be done for tracks or for mixes. Deblurring the source is invariably right.

Generally we are trying to drill back to the sound that was heard and approved in the original mix/master, so it isn’t appropriate to compensate for microphones or earlier analogue components unless it is a new recording and the recording engineer specifically wants to do that. There are also times when addressing the fact that all we have is a 2nd or 3rd-generation analogue tape is worthwhile, but that’s too complex a topic for this Q&A.
Q17. What advantages, if any, do compression and back-end decoding have with respect to sound quality? In other words, wouldn’t we be just as well off with hi-res files that have had the original encoding errors “fixed?”
A17. In Q15, you will see that we consider both gateways to be equally significant. Of course the better the signal, the better the result; this is one reason why MQA files often sound better than ‘CD quality’ even without a decoder. But to really get to what was heard in the studio we need the whole chain right. If we don’t do that then the errors will be at least one order of magnitude higher. The clarity of sound - when this is right - is exquisite.
One of the very cool features of MQA is that the file is a ‘chameleon’, it contains enough information to allow each decoder to make the best of the platform. So, e.g. on a mobile phone we get the best that platform can give. See [jumpto=anchor36]Q14[/jumpto].
http://images.computeraudiophile.com/graphics/2016/0407/Figure-5.png
Figure 5. The digital chain interrupts an analogue to analogue path with two cascaded converters. (High Resolution Image)
IN THE STUDIO
[aname=anchor40]Q18[/aname].
The MQA process is said to be able to improve upon the original recording by de-blurring. How can this function with the common recording that has had many levels of processing between the recorded data and the end result? I understand how authentication for a simple recording where one ADC was used could work - but how would this work for cases where multiple different digital sources, from many different ADCs are digitally mixed?
A18.
There are a number of routes. First, the individual tracks can be corrected prior to mixing. That is not necessarily a total solution, depending on other processing going on, and what works best depends on the specifics. Alternatively the tracks can be post-analysed individually.
However, in some cases mixed ADCs are used and the optimum solution depends on their similarity or otherwise. In any case our encoder can analyse the composite mix; this is the best approach when there is not enough information, and it tends to give extremely good answers. We have several practical examples of recovering resolution with mixed ADCs. Ultimately, in the studio, the mastering or recording engineer will use his judgement. Authentication is for the final release (mix), not the individual tracks.
Q19. What approximate fraction of the music catalogue has provenance information?
A19. That’s hard to express in numbers but here’s a guess – 70%. The three majors account for ~65% of the music market worldwide and, in general, they have records varying from good to excellent. They all have problems in that they acquired, divested, swapped or traded sub-labels, and each small startup had different work practices. A small fraction of the independent labels have superb records of their work (e.g. ECM, 2L, UnaMas). Many don’t. Many archives are plagued by missing items or by hardware problems that prevent playback of important recordings. Most labels tend to know the location of the true archive for top-selling or important works. For example, no-one is confused (partly through the excellent work of Steve Berkowitz) about which are the correct Miles Davis, Bob Dylan, Brubeck, Beatles, etc. We would imagine that 70% of titles ever released can be vouched for to a reasonable degree of confidence. However there is mayhem in distribution: one aggregator reported having 23 different versions of an Otis Redding song. In some cases, the label had lost track of the fact that WMA or even MP3 had been used en route. You don’t want to buy that unless it’s definitive.
Q20. MQA / Studio
Can you explain the difference between MQA and MQA Studio? What does “MQA from the master” mean? Does it mean the final digital master is processed through the MQA encoding, or does it mean MQA is used in the mixing process?

A20. The MQA authentication ‘light’ indicates Provenance in the source for the file.
The MQA display indicates that the unit is decoding and playing an MQA stream or file and denotes provenance that the sound is identical to that of the source material. MQA Studio indicates it is playing a file which has either been approved in the studio by the artist/producer or has been verified by the copyright owner.
We make no judgement about sound quality or about any arbitrary definitions of ‘resolution’ or ‘quality’. So an MQA file direct from a label which is, e.g., a 44.1 kHz 16-bit mono transfer from a cylinder is fine so long as it is vouched for, as is music in any sample rate up to 768 kHz PCM or DSD256. What is not OK is content that is overtly up-, down- or cross-sampled when it is not the definitive document for the piece.
MQA authentication is not a proxy for a High-Res definition.
[aname=anchor45]Q21[/aname]. Is there some sort of a new MQA standard for new productions?
[aname=anchor37]A21[/aname]. Not a standard as such, but definitely additions or guidelines to work practices and enhancements to capability. MQA provides a plug-in for studios that allows preview of many encoder options. It also allows preview and authentication of several possible renderings in the marketplace.
Q22. There are many hobby recordists who use fairly respectable semi-pro (or even pro) digital recorders. In many cases these recordings are then subjected to some amount of post-processing in the computer (e.g. dynamics or some other digital processing for cleaning or “improving” the recorded audio) with the aid of plug-ins in DAWs:
Will the MQA de-blurring algorithm be available for purchase in plug-in form as a standalone package (i.e. without the “audio origami” folding compression technique)? If yes, this will allow these kinds of users to compare the MQA de-blur algorithm with offers from other manufacturers (like Waves or Izotope) that also claim to help clean or improve recorded digital audio.

A22. See [jumpto=anchor37]A21[/jumpto]. It’s not settled yet how the studio tools will be distributed and what associated equipment is needed. We will probably announce this when the studio plugin is out of beta. However, also see [jumpto=anchor38]Q16[/jumpto]; the full temporal blur benefit requires both ends co-operating.
Q23. It has been stated that MQA enabled ADCs will be available for the pro market (think Mytek):
Will these ADCs be delta-sigma based, with decimation for PCM output? If yes, where in the conversion stage will MQA be participating with its de-blurring technology: is it in the decimation stage? In summary: what will be the main technological difference between a top-notch MQA-certified ADC and the already existing top-notch non-certified ADCs like Meitner, Merging, Metric Halo or Apogee? What are the technical requirements for a certified A/D converter?

A23. MQA ADCs comply with the triple goals of: i) a specific, complementary and compact kernel; ii) very low modulation noise; iii) losslessly reversible archive metadata. Yes, delta-sigma architecture is acceptable but not required. The key differences are in the signal processing, which can, in many professional designs, be updated in software. Further details on this are confidential and provided to our licensed partners.
Q24. To me, it is very important to allow every content creator to work in the format with which he feels most comfortable. While the majority of music producers and recording engineers work mainly in the PCM world, there are some that prefer to work instead in a purely DSD domain (or some analogue-DSD combination). Of course, there is conversion software that allows one to release a final musical product in any format, but my question is:
Does the MQA technology, as a whole, fit in a production environment based mainly or purely in DSD? If yes, in what way?

A24. Yes, it does fit in a production environment based mainly or solely in DSD; the release would be based on DXD, since MQA is based on PCM.
Q25. More interesting would be: will there be a software decoder that could convert the MQA “packed” file into a “pure” 352 or 176 kHz PCM file, or, so to speak, unfold the MQA origami for non-MQA DACs or for DAWs?
A25. We aren’t sure why this question is asked in the context of DAWs. The MQA file is for distribution and can’t be edited. For other situations please see [jumpto=anchor39]Q43[/jumpto].
Q26. I hope you take the time to explain all aspects related to the MQA production process, so there will be no more speculation about that. You may also add the Hyper Secure Module implementation, as people need to understand when and why, and maybe how, it is implemented.
A26. I believe that the preceding answers have covered the first part of the question. The security module gets associated with a production encoder.
AUTHENTICATION
Q27. I’d like to know
how you plan to authenticate masters when you repair old digital files and put the MQA stamp on them. I mean, there’s so much fraud going on already, with major labels trying to pass off up-sampled Redbook files of unknown origin and questionable quality, possibly even material that’s been converted to digital and back to analogue, or that suffers from generation loss due to the use of non-masters or safety copies, etc. Why not start from scratch and require original master tapes when doing MQA encoding? Quite frankly, it would seem that repairing rather than starting over is an impossible task and would be diluting the MQA process to mean everything to everybody.

A27.
Our encoder is sophisticated at scanning source material for the ADC and signal fingerprints. It’s also on the lookout for oddities. By the way, we might find an up-sampled file, or a file where some of the stems or tracks were up-sampled, but fundamentally that’s not our business to argue. In today’s production systems, some studios routinely bounce out of digital into analogue and back, just to access favourite signal processors. That’s their prerogative. If the label asserts that there is one best version, technical issues like that aren’t to be judged. We only query the provenance.

This is the ideal; we’d call that a ‘white-glove’ process, and whether it is done or not depends on the importance of the album to the label. We’ve done this several times and we hope to do it many more times; there are several in the pipeline. In the end it’s a market question: if people like the result, more will happen; it’s an opportunity for crowdsourcing opinion.

We request and aim to start with the best version of the definitive mix/document that exists. In some cases, the original artefact is an analogue tape, so it can be transferred – but not if it has deteriorated or has lost oxide. In those cases, we have to go to the best digitised copy made when it was accessible. Sometimes tapes are lost, burned, etc. For digital recordings we want to go back to the original when it is playable. For example, we have interesting projects mining X-80 Dash 50.4 kHz recordings and old DAT recordings. Occasionally labels have been lazy (recording at 88.2 kHz and issuing at 96 kHz) – in those cases we go back to the original.
Q28. Can you explain the difference between adding MQA during the mastering process vs during recording?
As an example, does ‘during recording’ mean that the original file already contains MQA, and a non-MQA version will not exist? Your FAQ about CD/DVD asks: if a recording is made using the MQA process, will this preclude the availability of a hi-res non-MQA version?

A28. MQA is involved in recording if an MQA ADC is used. In mastering, the tools can optimise the stems or mix according to the producer’s wishes and previews. MQA can be used in mastering as described in [jumpto=anchor40]Q18[/jumpto].
Not at all. If MQA is used in recording, it does not prevent really good releases being made in other formats, including vinyl or other (presumably larger) download files. We are not an enforcement agency; we just aim to make things better. This has been answered to a degree in Q38. More information on optical disc mastering will be released in due course. Certainly not. That’s a policy decision for the copyright owner.
MUSIC INDUSTRY
Q29. The success of MQA depends on the record companies. Can you explain how much effort we can expect them to put into this? Like remastering (whatever that means), or a purely automatic process? (Your FAQ talks about 100,000 tracks in 24 hours.)
A29. We agree that a good supply of music is important for us and for all those interested in high-quality sound. The music business is large and somewhat incoherent. We have rapidly increasing interest in MQA within small, medium and large labels and among the recording and mastering communities. Every label will make their own decision. The real key is establishing distribution for streaming, download and physical and supporting each method as it evolves.
Q30. Can we be assured we get the best masters? (Whatever best is.)
A30. MQA Studio could create a tension in the market that may ensure it happens more often than not. Only the labels can decide to make the effort or the choice to ensure it. Many labels have this sorted already; others appreciate the encouragement.
Q31. What is the value for the record companies in encoding millions of old files? (If they do.)
A31. We can’t answer that, other than to say that for most labels, back-catalogue is more than 60% of their revenue.
MUSIC DISTRIBUTION
Q32. Can you also assure us that there can only ever exist one MQA version of a master, or maybe not?
A32. We aren’t an enforcement agency. We are supplying tools and guidelines that, in particular, aim to have ‘one version of the truth’ when the MQA Studio flag is shown.
Q33. The MQA website states that the technology “…delivers master quality audio in a file that’s small enough to stream or download.” Will there be any provision for storing that file in its “small” state until selected to be unpacked for playback? That might be useful for portable audio players with restricted storage space.
A33. If I understand your question correctly, you are asking if the streaming file can be stored for offline playback. The answer is yes; the download and the streaming forms are identical and the download file can be saved and played back by any equipment capable of playing PCM at 44.1 or 48 kHz/24-bit. Although MQA is a PCM stream (not tied to a file) and can work in continuous streaming, most on-demand streaming services (such as Tidal, Qobuz, Spotify, Deezer) actually tend to buffer one or even two songs ahead in the playlist on the device running their app; in other words, it happens even without offline storage today.
[aname=anchor54]Q34[/aname]. Regarding bandwidth utilisation: what kind of bitrates can we expect to see while streaming hi-res MQA, and, on average, how much larger will the data transfer rate be when compared to that of the same material streamed as 44.1/16 lossless FLAC?
A34. We will answer this for the types of file that have so far been released, namely MQA in a 1x 24-bit PCM container (the graph below shows MQA files in a FLAC wrapper). The bitrate depends to an extent on the original source.
A corpus of Redbook files (44.1/16 original) in native form will show a reduction in FLAC from 1.411 Mbps to about 760 kbps average data rate. In MQA form the data rate is very similar (within ±40 kbps). For higher-resolution files, e.g. 44.1/24 or higher sample rates, the MQA files average around 1 Mbps with a standard deviation of ~300 kbps across a large collection. To some degree the size of the MQA file depends on the entropy of the original source; however, the music content that needs to be conveyed from the upper octaves (above 96 kHz) is small.
So, on a streaming service we might expect to see data rates between 500 kbps for CD-sourced material and a maximum of 1.5 Mbps for a complex DXD or DSD256 source.
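For reference, a quick back-of-envelope sketch (Python) of the uncompressed LPCM rates behind these figures; the ~760 kbps FLAC and ~1 Mbps MQA averages quoted above are measured values after lossless packing and cannot be derived from this arithmetic alone.

    def lpcm_rate_kbps(sample_rate_hz, bits, channels=2):
        # Uncompressed LPCM bit rate in kbps (before any FLAC/MQA packing).
        return sample_rate_hz * bits * channels / 1000

    print(lpcm_rate_kbps(44_100, 16))    # Redbook source: 1411.2 kbps raw, ~760 kbps after FLAC (quoted above)
    print(lpcm_rate_kbps(48_000, 24))    # a 1x/24-bit transport container: 2304.0 kbps raw
    print(lpcm_rate_kbps(192_000, 24))   # conventional 192/24 delivery, for comparison: 9216.0 kbps raw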
http://images.computeraudiophile.com/graphics/2016/0407/Figure-6.png
Figure 6. Digital Audio data rates (High Resolution Image)
TECHNICAL QUESTIONS
ABOUT 2L AND THE TESTBENCH
Q35. Tell us about the 2L testbench (bonus question)
A35. I think we should all be grateful to 2L for the test bench. Over the years it has been a great resource; it is generous and educational. But it is a hobby project for them.
I know that 2L have been trying to ensure that all the versions are from consistent sources for every piece offered (including those not in MQA). We have been helping them cross-check over the last few weeks that the sources, derivatives and MQA files are both consistent and current (there were some early posts of pre-production files or mixed provenance variants). Occasionally some files have been incorrect, leading to complaints, but frankly it is rude to complain about free files!
Given that 2L and MQA both have policies of continuous improvement, we would always suggest working with current downloads, especially if you are using non-original comparisons (such as 96 or 192 kHz to compare to a CD or MQA made directly from, e.g. DXD).
LOSSLESSNESS
[aname=anchor44]Q36[/aname]. Is MQA really lossless? (bonus question)
A36. This question often seems to assume that lossless is always best, but in fact all “lossless” does is take some bits and reproduce those same bits at another time or place. If that’s all you wanted to do, FLAC would be fine and there would be no need for MQA.
The team behind MQA understand not only lossless compression (see [jumpto=anchor41]Q1[/jumpto]) but also lossless processing and data burying. As explained earlier, there is a fundamental difficulty if we focus solely on strict lossless delivery. It is understood that a digital distribution system (including MQA) can be lossless in distribution. The problem is that the result is not delivered today; current DACs do not have lossless behaviour in the digital domain and all behave differently. Also the replay chain has several (sometimes unintended) places where losslessness breaks down. This is covered in our papers [[jumpto=anchor1]1[/jumpto]][[jumpto=anchor2]2[/jumpto]].
So MQA is set up to deliver a ‘closer-to-lossless’ digital path up to the DAC modulator with the goal of approaching analogue-to-analogue ‘lossless’ within appropriate thermal limits, including protecting the signals above ‘acoustic absolute zero’ (see [[jumpto=anchor1]1[/jumpto]][[jumpto=anchor2]2[/jumpto]][[jumpto=anchor11]11[/jumpto]]).
MQA does not have the capability to defeat information theory.
More important is to capture and protect (in a lossless manner) all the information in the file that relates to the music content. This means capturing safely at least everything in the triangle on the Origami diagrams; this is then conveyed and protected without loss. This triangle is important for defending the content but also to achieve the low-blur hierarchical sampling chain.
Furthermore, the system path from analogue to analogue is more precise because of the other parts of the technology. Lossless deals with data in the digital domain. The biggest problem, in our experience, is getting it from analogue and back to analogue with the least audible damage. See [jumpto=anchor42]Q0[/jumpto] and [jumpto=anchor43]Q15[/jumpto]. Unless you understand this perspective MQA looks strange.
The problem that MQA is addressing is how to transport an analogue signal to another time or place. It is the analogue signal from the mixing desk that the producer heard and that is the signal that you want to reproduce at your loudspeaker.
Many recording and mastering engineers have testified that MQA improves very considerably on the conventional methods, recreating the sound they actually hear or remember from the original session or, in the case of archive material, the sound from an analogue tape recorder.
Q37. Lossless:
Is MQA lossless in the sense of the data? Not ‘is it audibly lossless’, but does it have the ability to unpack the exact data that was recorded at higher sample rates? Clarification on “lossless” please. Clearly frequencies >24 kHz are not losslessly compressed in the usual way we think of “lossless”, right?

A37.
See [jumpto=anchor44]Q36[/jumpto]. MQA has the ability to unpack exactly (bit-for-bit) the data that was previewed with our plug-in tools in the studio. This is true for the maximum quality fed to a reference DAC as well as each of the other renderings that may happen, such as in mobile phones. It is incorrect and a serious misunderstanding to assert ‘Clearly frequencies >24kHz are not …’.
As described elsewhere, there are two types of Origami fold and the frequencies where they are used depend on the ratio of the original sample rate to the transport rate. When the packing is folding a ‘kernel’, the process is losslessly reversible for the encapsulated audio and even at the lowest transmission rate, for content 2x or higher, the octave 24–48 kHz (or 22-44 in base 44.1) is a lossless process for the encapsulated audio. The lossless compressor used is proprietary and optimised for ultrasonic components; the folds use lossless processing. However, there is a great deal of intricacy here.[jumpto=anchor22]6[/jumpto]
[aname=anchor22]6[/aname] We don’t advise asserting: ‘MQA does this one thing’ based on examining a few files. MQA is complex and the mastering engineer and encoder between them have 6 million million combinations to choose from. We will talk about this more on our blog.
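With that caveat in mind, the general idea of ‘buried data’ mentioned in A1 can at least be illustrated in toy form: an auxiliary payload hidden in the least-significant bits of a PCM stream, ignored by a legacy player yet recovered bit-exactly by a decoder. The Python sketch below is deliberately crude and is not MQA’s folding scheme, which uses losslessly reversible sub-band encoding and shaped noise floors rather than plain LSB substitution.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins: 16 samples of 24-bit audio and an 8-bit-per-sample auxiliary payload.
    audio_24 = rng.integers(-2**23, 2**23, size=16, dtype=np.int64)
    payload  = rng.integers(0, 256, size=16, dtype=np.int64)

    packed            = (audio_24 & ~0xFF) | payload   # a legacy player sees ordinary 24-bit PCM
    recovered_payload = packed & 0xFF                   # a decoder strips the payload back out, bit-exactly
    legacy_audio      = packed & ~0xFF                  # the top 16 bits of the audio are untouched

    assert np.array_equal(recovered_payload, payload)
    print("payload recovered bit-exactly; to a legacy DAC it is just low-level noise in the LSBs")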
THE MQA FILE
Q38. Is MQA compliant with Redbook, and will there be MQA CDs produced?
A38. Yes. The MQA stream is PCM, so it can be put in a file or on an optical disc (CD, Blu-ray) or a transport stream. There is a version using 16 bits (see Q56) which is intended for CD and several labels are excited about the opportunity to get higher sample-rates out of the disc.
Q39. I am unclear how MQA can improve Redbook, HiRes PCM, or DSD files actually, but if it can, having a way to convert existing files would be very nice indeed.
A39. MQA can improve these by providing the end-to-end reduction in blur and noise floor. File-to-file conversion is possible but we have no immediate plans to release such a service. That doesn’t mean never; it’s just that we are a small team with a challenging roll-out plan.
[aname=anchor55]Q40[/aname]. Based on the music released by 2L, it appears that the MQA version of this music is roughly two times larger (in MB) than the CD 16/44.1 non-MQA version. This makes sense to me if the original source is high resolution such as DXD or 192 kHz. However, when the original source is 16/44.1 I don’t understand how or why the MQA version is larger than the original. Example:
Carl Nielsen: Chaconne op 32 (Christian Eggen) was recorded to DAT at 16/44.1. The CD version is 33 MB and the MQA version is 80 MB. Question: Can you explain why the MQA version of 16/44.1 CD material is larger than the original 16/44.1 CD material?

[aname=anchor53]A40[/aname]. First, you can see Morten’s notes about this recording here: https://shop.klicktrack.com/2l/468051. Morten wanted to do some minor touch-up on the recording. So, in order that nuance would not be lost, we used our ingestion process to give him 24-bit stems. The simple answer is that the re-release was remastered at 24-bit. The MQA file is made from that new 44.1/24 master. Had he remastered strictly in a 16-bit context, some original magic might have been lost, but the file would have been smaller.[jumpto=anchor23]7[/jumpto] In the 2L catalogue there are examples of MQA from 44.1/16 that were not remastered in this way.
Q41. After purchasing an MQA album in a format such as FLAC, will consumers be able to convert the files, using software they already own such as JRiver, XLD, or dBpoweramp, to another format such as WAV or AIFF or ALAC without destroying MQA features of the file? Put another way, if I convert an MQA FLAC file to AIFF, using JRiver Media Center, will the MQA light still illuminate on my DAC when playing the AIFF files?
A41. That is no problem. As long as the audio itself is unchanged, MQA files can be moved between FLAC, ALAC, MLP, TrueHD, DTS Master, AIFF, WAV, etc. All the information needed by the decoder exists in the stream itself, not at file level or in the header.
[aname=anchor23]7[/aname] This has been one of the topics fuelling conspiracy theories about MQA. The truth, as we see, is more prosaic and yet more interesting!
DECODERS
[aname=anchor52]Q42[/aname]. I’d like to ask about MQA decoding.
Does it require hardware, software or both? Will there be a Linux software decoder library that can be incorporated into the existing Linux audio ecosystem (perhaps something similar to how Nvidia provides proprietary graphics drivers for its video cards to the Linux community)? Can you indicate what sort of licensing fee, if any, might be required for the enthusiast running their own Linux music playback system for personal use who would like to be able to decode MQA? Many thanks for any enlightenment.

[aname=anchor47]A42[/aname].
MQA decoding does not require hardware; it can be performed on a number of different platforms. But the decoder normally runs in the context of paired DAC(s). Currently we license decoder builds for Windows, OSX, Linux, Android, iOS, XMOS and some custom platforms, with several more coming. We are rolling out the decoder platform licensing in stages and no decision has been taken yet about this type of application. But we will in due course, so please stay in touch.
[aname=anchor39]Q43[/aname]. Soft decoding:
Will software decoding be allowed (and when)? Will software decoding get the complete benefit that will be possible with hardware MQA DACs? If not, what will the differences be? Will an MQA decode software module be available for integration into third-party music players that run on generic PCs and Macs?

A43.
We already have software decoders for a number of hardware, portable and mobile platforms. In these three cases the decoder has the benefit of precise knowledge of the DAC and associated hardware. See [jumpto=anchor47]A42[/jumpto]; there is no inherent quality difference between MQA decoders unless they are operating in designated power-saving modes. However, it is inevitable that a properly designed hardware product incorporating the decoder and DAC will give the better result. The performance level that MQA enables allows hardware makers an even better environment in which to stretch their skills. For the audiophile, this should be very exciting. We do anticipate a program to enable such applications, but the requirement for tight DAC coupling and the obligation to match the previewed audio (in the studio) mean that several combinations and options are still being explored with both DAC makers and creators of software players. We will make announcements in due course.
Q44. Digital out decoder:
To the extent that hardware decoding is necessary or desired, and that many DAC manufacturers might not be able to incorporate hardware mods, could an MQA decoder in a box feeding into a DAC be a potential solution for consumers? If someone already has an expensive DAC, of which they are very fond, and this DAC doesn’t natively support MQA, will it be possible for some manufacturer to supply a stand-alone “MQA un-packer” box with USB and/or S/PDIF output to make existing DACs MQA compatible? Will software decoding into a “standard” high-res bit stream that a generic DAC can use be allowed? Which types of connection will be possible: USB, S/PDIF, AES, etc.? Will an S/PDIF streamer interface be allowed, or only , in order to achieve two-way communication? Meaning you would require a as a minimum in order to have your old DAC benefit from SW-decoded MQA, or a streamer like Auralic.
- e. To get the best result, decoders with digital output require to know how to render for the specific connected DAC, or else to ensure that any alternative degraded representation is in line with the studio preview options (see [jumpto=anchor45]Q21[/jumpto]). This is true for either full decodes or split decodes (see [jumpto=anchor46]A65[/jumpto]).
We are working with our partners on programs for both product types and will announce roll-out in the coming months.
Q45. Is there any technical limitation that prevents an FPGA-based DAC from implementing MQA?
A45. None whatsoever; we have several companies implementing MQA with FPGAs.
ABOUT DACS
Q46. Are there any minimum technical requirements for an MQA-certified DAC? Like 32/384, or 24/192?
A46. We can get the best result from a DAC which can operate at 192 kHz or higher. In certain applications lower rates are possible; it is very platform specific. For example, some mobile platforms or network players are limited to 48 or 96 kHz. See [jumpto=anchor45]Q21[/jumpto] about studio preview.
Q47. DAC profiles
How many DAC profiles do you currently support? Is there any planned number of DAC profiles to be supported? Is a DAC profile only related to the chip, such that it can be used for many brands, or will a DAC profile be per DAC manufacturer and model?

A47.
We support several DAC profiles and there is no limit to the number we can support; in most cases the decoder is specifically compiled for the target platform. The optimum profile takes account of the DAC and any associated analogue circuitry, so again, it is platform specific.
Q48. If my understanding is correct, DAC profiling would imply that the DAC model in question has to be sent from the manufacturer to Meridian for certification. Is this really a viable route from an IP standpoint and a practical way of doing it? Seems like a cumbersome and time consuming procedure in my view.
A48. This is not a Meridian question, MQA is a separate company (see [jumpto=anchor48]Q76[/jumpto]). The hardware and mobile licensing involves verification (which is normal in this industry) and we also work with our partners to optimise the conversion interface. We think it is viable. Perfection takes a bit longer.
Q49. I have a DAC I built myself with a custom filter implemented in an FPGA which upsamples to 352.8/384 which turns off the internal filter in the DAC chip. How will MQA work with such a DAC? I have spent a LOT of effort getting this filter in the FPGA just right, will MQA be messing this up and adding what IT thinks is the best filter?
A49. If you are a company asking us to optimally connect to such a DAC we would be happy to integrate it. In principle, that is a good way to make a DAC and we have several MQA decoder implementations that can drive the DAC at 8x or higher. In [[jumpto=anchor1]1[/jumpto]] we explain that the way to get maximum quality is to minimise temporal blur using a hierarchical cascade and this isn’t a matter of subjective tuning. The MQA decoder need not alter the way your DAC plays back regular PCM or DSD.
Q50. As I will likely never get decoding at the DAC level for my equipment (Devialet), how much does one lose in sound quality if the decoding is done via a music server/computer? As I consider time coherence (including the lack of pre/post ringing) the most interesting feature of MQA, how much of that can be preserved without an MQA-certified DAC?
A50. We think that products such as the Devialet could run an MQA decoder without issues. They could implement the decoder or build an accredited endpoint.
Q51. What is the best architecture for a DAC? (bonus question)
A51. It is important that the industry is free to innovate in all areas. We have no opinion on the architecture of DACs, only about detailed implementation of filtering, quantisation and dither.
MQA provides certain performance guidelines but not designs.
However, MQA is licensing a specific DAC design which achieves extraordinary performance up to 768 kHz and is already being incorporated in upcoming designs.
TECHNICAL SPECIFICS
Q52. Will or can MQA be transported as 32 bit?
A52. MQA can encode 32-bit files and feed 32-bit DACs. It uses a 24-bit transport file and for some background into why, see [[jumpto=anchor2]2[/jumpto]] and [[jumpto=anchor4]4[/jumpto]].
Q53. Is 384 kHz the highest MQA encoding frequency?
A53. No, the current syntax supports up to 768 kHz, but it is hierarchically extensible.
Q54. Are there different “container” sizes used in MQA? It looks like the typical one is the audio data being put into (losslessly compressed) 24/48. Is that the rate at which they anticipate streaming audio will be delivered? If other data rates are possible, do all devices, including the Meridian Explorer 2, have the ability to decode all potential data rates?
A54. As answered earlier, MQA is hierarchical. The files released so far are a small subset of what’s possible. For streaming and download the most common stereo files are at 1x transmission rate and could contain 1x or higher content; when the content is 2x or higher one or more of the Origami folds will be used.
MQA files can also have transport rates of 2x (which could contain original rates of 2x or higher using Origami).[jumpto=anchor24]8[/jumpto] The syntax allows transport at 4x or 8x for specific archive tasks.
Consumer hardware and software decoders are all required and verified to handle both 1x and 2x transport rates in any container size between 16 and 24 bits. See Glossary.
[aname=anchor50]Q55[/aname]. Dynamic Range
Do encoded MQA files provide anything more than 16-bit resolution? Even if we accept the improvement to time-domain accuracy, are the PCM files essentially dithered down to 16 bits? (It looks that way based on previous reports.) What will be the dynamic range of a 24-bit 48 kHz MQA file? Will it be 16-bit, since the least significant 8 bits are used to encode the higher-frequency data?
A55. The reports to which we assume you are referring are incorrect; see our answers to [jumpto=anchor49]Q82[/jumpto]. The MQA files are not restricted to 16 or any other number of bits (up to and including 24).
In an LPCM system the number of bits in the channel indicates the possible step size. The theoretical dynamic range of the channel is generally determined by the number of bits and the noise-shaped quantiser. But in practice the attainable dynamic range is limited by the noise floor of either the signal or the channel coding. Music files don’t have a 24-bit noise floor, and in fact even a 17-bit-equivalent noise floor is very unusual, so it is hard to make a proper assessment of MQA based on music-file analysis.
In fact, the dynamic range of a 48-kHz MQA channel can be between 23 and 24 bits. The effective channel dynamic range of the MQA coding will always exceed that of the content in the recording. Generally, MQA’s stationary audio noise floor will be at least 3 bits lower than that of the content and often more.
However, as we stressed earlier, these questions about noise in the channel come from an outdated concept of resolution. Nothing in this Q&A really tackles temporal microstructure which is the key to advancing fidelity.
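For readers who want to relate “bits” to decibels in the answers above, the standard textbook figure for an ideal LPCM channel is roughly 6.02 dB per bit plus a small constant for a full-scale sine. The short Python sketch below only works that conventional arithmetic; it is not an MQA-specific figure and says nothing about where shaped noise sits in frequency.

import math

def lpcm_dynamic_range_db(bits: int) -> float:
    # Conventional peak-sine-to-quantisation-noise figure for an ideal N-bit LPCM channel.
    return 6.02 * bits + 1.76

for bits in (16, 17, 23, 24):
    print(f"{bits}-bit channel ~ {lpcm_dynamic_range_db(bits):.1f} dB")
# e.g. 16-bit ~ 98.1 dB, 24-bit ~ 146.2 dB; noise shaping redistributes that
# noise across frequency, it does not change the total power.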
Q56.
Do I understand correctly that MQA can work in part by encoding information from a higher-resolution file into high-frequency noise in a lower-resolution file? Do you have measurements of the level of noise increase at particular frequencies due to this process, either typically or in at least one actual example of conversion from a 192/24 file to 48/24 or 44.1/24?
A56. When information from higher octaves is buried in the 0 to 22/24 kHz region it is shaped to be inaudible and minimised according to the encoder settings and the mastering engineer’s balance of Optimum vs Legacy (no decoder) vs narrowed-pipe sound quality. It is totally reversibly removed by the decoder, so the information folded back does not impact the decoded result. It is incorrect to say the higher-octave information is encoded as high-frequency noise; it is encoded as very low-level noise. See [jumpto=anchor50]Q55[/jumpto] and [jumpto=anchor49]Q82[/jumpto].
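The actual Origami encapsulation is proprietary and far more sophisticated (band splitting, shaped dither, lossless reconstruction), but the general principle of reversibly burying data at a very low level can be illustrated with a toy sketch: hide a bit stream in the least-significant bits of 24-bit PCM words, where a legacy listener sees only low-level noise, and recover it exactly. All function names below are illustrative only and are not MQA’s method.

import numpy as np

def bury(samples_24bit: np.ndarray, payload_bits: np.ndarray, k: int = 4) -> np.ndarray:
    # Overwrite the k LSBs of each 24-bit sample with payload bits (toy illustration only).
    assert payload_bits.size == samples_24bit.size * k
    packed = payload_bits.reshape(-1, k) @ (1 << np.arange(k))   # k bits -> integer 0..2**k-1
    return (samples_24bit & ~((1 << k) - 1)) | packed

def recover(buried: np.ndarray, k: int = 4) -> np.ndarray:
    # Read the k LSBs back out, exactly as embedded.
    vals = buried & ((1 << k) - 1)
    return ((vals[:, None] >> np.arange(k)) & 1).reshape(-1)

rng = np.random.default_rng(0)
audio = rng.integers(-2**23, 2**23, size=1024)        # pretend 24-bit PCM samples
bits = rng.integers(0, 2, size=1024 * 4)              # the data to be "buried"
carrier = bury(audio, bits)
assert np.array_equal(recover(carrier), bits)         # the buried data comes back exactly
# For a legacy listener the k embedded bits behave like low-level noise near the (24 - k)-bit
# level; a real encoder shapes and minimises this signalling noise, which this toy does not.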
Q57. No Decoder
Can you confirm that if a person does NOT have MQA decoding capability, the file still retains a full 16 bits (and presumably a 48 kHz sampling rate) of resolution when they play back the data? MQA is said to play back without decoding with good quality, or improved quality over a non-MQA Redbook file: is this true? What are the differences between non-decoded MQA files that were originally 192/24 and a simple quality down-sampling of 192/24 to Redbook specs of 44/24? Do non-decoded MQA files still have 16 bits of resolution or is it reduced to 15 bits or less? Will there be more noise from the MQA process when playing back un-decoded MQA files?
A57. The file is a 24-bit file and so that is the resolution seen by the DAC. Without a decoder, the noisefloor depends on the encoding parameters and compatibility settings (producers can trade off the performance for no-decoder situations). For the files released so far, the signalling tends to be approximately at the 16-bit level (adjustable in mastering) and those signals in the Legacy (no decoder) channel typically manifest as equivalent to an optimum 15.9-bit coding (see Q40 and Q82). Normally this is within 0.1 bit of, or actually better than, the CD. However, this question and the answer are both couched in the Fourier/Shannon domain and do not address how it sounds. For the listener with a decoder, the noise floor can be down to 24 bits (see Q55).
On the playback-without-decoding claim: correct, it is true. Why is this? Because the Legacy and intermediate presentations are optimised for a wide population of DACs and present pre-conditioned and stabilised signals. Is this proven? We have countless listening tests carried out by labels, studios and ourselves to confirm this is the case for un-decoded MQA vs the Redbook. In many cases listeners are surprised how close the un-decoded MQA sounds to a higher-resolution (e.g. 192/24) version.
The difference from a simple down-sampling is considerable. Simple lossy down-sampling of 192/24 to 48/24 will introduce significant blur and irretrievably damage the temporal and other information in the recording. (We assume the questioner didn’t really mean 192 to 44.1, which is even more invasive.) On resolution, see above. More noise compared to what? With no decoder? To a well-made CD from the same master? The answer is ‘no, or hardly at all’. To a high-resolution master? There will be a higher noise level; whether you hear it is another matter. The MQA white signalling noise is shaped to sit below the threshold of hearing for acoustic gains up to at least 110 dB SPL.[jumpto=anchor25]9[/jumpto]
The key point to grasp is that no other system even allows this to happen; if you have a FLAC of a 192/24 it simply cannot be played in many systems or contexts. This technology lets you listen to the file without a decoder.
Q58. Chromecast
Are there any technical limitations preventing Chromecast Audio from supporting MQA? If Chromecast Audio can support MQA, will you implement it (both digital and analogue)? Or, to ask in a different way: we know that Tidal plans to support Chromecast. Will this also be possible with MQA enabled (at some point)?
A58. There are no technical limitations. Implementations would be at the discretion of the company building the product or between us and Google. We can’t discuss other companies’ plans.
[aname=anchor56]Q59[/aname]. Will there exist 16/44.1 MQA files, so that Airplay and Sonos, as examples, are not technically limited by a 24-bit MQA requirement? (They could in theory benefit from those few(?) 16-bit MQA editions.) Or could such older non-hi-res systems be allowed to downsample to 16/44.1, the same way 384 is downsampled or converted to 192 by the Meridian Explorer2?
A59. MQA files can be 16 or 24 bit. A 44.1/16 MQA file could contain a 44.1/16 original (examples of natural 44.1/16 MQA files include the very early albums from 2L, such as https://shop.klicktrack.com/2l/27242 or https://shop.klicktrack.com/2l/27117). Or it could contain a 44.1-, 88.2-, 176.4- or 352.8-kHz original, but the MQA file is truncated to 16 bits to make a CD, or on-the-fly for a 24-bit file to cross an Airplay, Bluetooth (or similar) bridge. See Q8, which explains that not all the timing or high-frequency components get lost by doing so (MQA tries to push the highest quality all the way to the end). In principle, systems that only understand 16-bit audio can carry an MQA stream; if it is truncated it will preserve the ability for a decoder to respond. Some older or limited platforms are challenging, e.g. how can it be that Airplay is limited to 44.1 kHz whereas some Apple TVs are limited to 48 kHz? At some point in ‘the last mile’ SRC or worse can start to happen, but that is not an MQA problem. Because the MQA file is PCM it carries its benefits as far as it can go, and even in Legacy playback the deblurring and some of the other benefits survive. If they choose to, systems that transcode on import can be made sensitive to MQA.
The assumption about the Explorer2 is false; Explorer2 has a 384 kHz-capable DAC. And if it didn’t (and some products don’t), the MQA decoder would know and stop unwrapping the file when it reached the DAC limit, for example at 96 or 192 kHz; it will also optimise the DAC’s performance for the context based on knowledge of the content. Basically, the MQA decoder doesn’t downsample.
Q60. Is compressed sensing used to encode MQA? Is this the reason higher resolutions can be delivered at such low bit rates?
A60. I think your question is ‘Does MQA use lossy compression?’ If so the answer is no, not in any conventional sense. Lossy compression uses psychoacoustic models to predict on very short timescales which data or information can be removed or degraded. To us such coding is anathema in any high quality context. The encoding parameters for MQA are strictly constant throughout a file or work and that is impossible with lossy coding.
MQA sets out to identify the actual audio in the file and then to convey it as precisely as possible, sometimes with higher resolution, but it (sometimes) avoids packing excessive stray thermal or random noise. See [[jumpto=anchor1]1[/jumpto]] and [[jumpto=anchor4]4[/jumpto]]
Q61. Is the generation of subtractive dithering on playback only possible with MQA hardware?
A61. It is a requirement of all decoders.
Q62. I’ve seen mentioned two technical aspects that MQA focuses on: temporal de-blurring and zero noise modulation. Both of these have been stated to be rooted in psychoacoustics. I’ve seen the figure of 4 µs or 10 µs inter-aural threshold stated in the neuroscience literature but haven’t seen any thresholds for noise modulation. Let’s say there’s an improvement of 100% when using MQA: what percentage of this improvement would you say is accounted for by temporal improvements and what percentage by noise-modulation improvements? What papers or studies in neuroscience have studied noise modulation and its audible effects? Do you see the biggest contributor to noise modulation being the result of the feedback mechanism and noise shaping of sigma-delta modulators, and/or some other mechanisms? How are you addressing the noise-modulation problem?
A62. There are many places in the literature where we can find the threshold for detectability of ripple noise or noise moving in noise. In straightforward psychoacoustic tests or modelling the threshold depends on SPL. Near threshold the limit is around 0.5 dB, falling to 0.05 dB around 70 dB SPL. [[jumpto=anchor5]5[/jumpto]][[jumpto=anchor6]6[/jumpto]][[jumpto=anchor7]7[/jumpto]][[jumpto=anchor8]8[/jumpto]] For extreme quality, the limits should be much tighter.
On the percentage split: it depends completely on the music and sample rate; they can be equally important. It is often overlooked that each signal-processing or DSP step involves resampling, and issues such as filters, bandwidth, jitter and dither reappear and are critical each and every time. I listed some references earlier; noise modulation is traditionally more in the area of psychoacoustics than neuroscience. Noise modulation in the digital domain can come from dynamic or non-linear processing, quantisation processes, jitter, equalisation, room correction, noise shapers and delta-sigma modulators and their associated divider/upsampling chains, particularly, in all listed cases, where dither has been skipped or misapplied.
Q63. De-blurring: Are you addressing temporal de-blurring by using digital filters which address this (such as apodizing filters)? Are you willing to give a specific, technical explanation of the term “temporal de-blurring”? Can you explain if, and if so why, your apodizing filters are unique from a mathematical perspective? Re the filters: many apodizing filters start working in the high audible range in order to use a gentler slope. You’ve pointed out the importance of response to 45 kHz. How will you be able to obtain this frequency response and avoid both ringing and undue aliasing distortion? The use of deconvolution filters to improve spatial resolution, essentially to de-blur, dates back many decades; I find references dating back to the 1950s regarding seismological data, and of course optical (Norbert Wiener [jumpto=anchor26]10[/jumpto]).
A63. MQA does not use apodizing. We have already described that the filters in a complete end-to-end chain define the overall blur. Some of these filters can be designed better; some can be compensated, so long as the whole chain is accounted for. MQA does not use apodizing filters.[[jumpto=anchor15]15[/jumpto]] That information is proprietary and the result of years of research. We may publish a paper on it in due course.
The goal of what we describe as ‘de-blurring’ for the average reader is to harmonise the analogue-through-digital-to-analogue chain to have an end-to-end impulse response which closely resembles two things: i) the kernels of inferred neural responses to ensembles of natural sounds (see [[jumpto=anchor1]1[/jumpto]] and [[jumpto=anchor9]9[/jumpto]]), and ii) the impulse response of a short column of air (see [[jumpto=anchor2]2[/jumpto]]).
[aname=anchor24]8[/aname] None yet released, but soon.
[aname=anchor25]9[/aname] By the way, we don’t recommend listening at such high acoustic gains. Sustained high level will cause hearing damage. The best mastering engineers do their work at very low listening levels. Our ears are most sensitive, including to temporal microstructure, at levels in the region 60–70 dB SPL.
[aname=anchor26]10[/aname] Wiener N (1964). Extrapolation, Interpolation, and Smoothing of Stationary Time Series. Cambridge, Mass: MIT Press. ISBN 0-262-73005-7
SYSTEM QUESTIONS
Q64. System Integrity and Losslessness
If I reduce the level of the digital signal within the playback software by just about 1 dB (in the digital domain), will the MQA light still light up and will the MQA stream still be “unpacked”, or will this end up as the 44.1 kHz 24-bit digital stream, as when using a non-MQA-certified DAC? Nearly all playback software still sounds good if you adjust the level of the tracks by just a few dB.
A64. MQA is a bit-exact system and it requires a lossless data stream in order to identify, decode and render correctly. If the data are changed (by just 1 LSB) the decoder will revert to Legacy; the light will go out and the sound quality will be from the 44- or 48-kHz 24-bit file. In fact, the MQA decoder is an excellent indicator that the delivery pipes are clean; as Computer Audiophiles we know the many and various ways the soft players and OS can try to confound precision.[jumpto=anchor27]11[/jumpto]
We don’t agree about the sound quality of changing volume in a soft player; it is a totally unnecessary additional quantisation and always degrades. The MQA decoder has a very high quality, sample-accurate volume control built into the renderer that does not interfere with DAC management, doesn’t add a step and can be used for loudness compensation, trimming or even system volume if nothing better is available.
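A quick way to see why any upstream level change defeats a bit-exact stream is that even a small digital gain re-quantises every sample. The sketch below only illustrates that arithmetic; it is not how an MQA decoder detects the change, and the sample values are invented for the example.

import numpy as np

rng = np.random.default_rng(1)
samples = rng.integers(-2**23, 2**23, size=8)         # pretend 24-bit PCM words
gain = 10 ** (-1.0 / 20.0)                            # a "-1 dB" digital volume adjustment
adjusted = np.round(samples * gain).astype(np.int64)  # requantisation back to integers
print(np.count_nonzero(adjusted != samples), "of", samples.size, "samples changed")
# Virtually every sample changes, so a bit-exact stream (and any data it carries) is altered;
# this is why a software volume change upstream of the decoder drops playback to Legacy.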
Q65. Room correction, etc
Will room correction be usable with MQA? My thinking is that no digital room correction will work in a way that benefits from the MQA unpacking, and all files will end up in just a 44.1 kHz 24-bit container, but mainly with only 13 bits of audio information, no matter what sample rate the original file had in the MQA encoding process. What about DSP features (like speaker correction and crossover)? Will MQA work with digital volume control and also with digital room correction? If the answer to the above is ‘no’, will it be possible to have any room correction at all in software along the lines of Dirac, etc, or will room correction be restricted to Meridian devices such as the DSP8000 speaker system? Will MQA allow at any point a “standard” decoded high-res PCM data stream to be manipulated in software for volume control or room correction? In comparison, a less compressed master file with a real 24-bit 96 kHz sample rate from start to finish (in a real lossless codec) will do a more practical job in our daily homes, especially when you plan or already use digital room correction (in the digital path, between or in the playback software, and before the DAC).
[aname=anchor46]A65[/aname]. Yes, room correction will be usable with MQA; see below for how. The assumption about ‘13 bits of audio information’ is completely incorrect, and please stop repeating this false 13-bit assertion (see [jumpto=anchor51]Q78[/jumpto]). For the DSP, volume-control and software questions, see below.
http://images.computeraudiophile.com/graphics/2016/0407/Figure-7.png
Figure 7. The top diagram shows the ideal implementation of an all-in-one MQA decoder. It is fairly self-explanatory. Note that the decoder has a control for gain, optimised for the DAC. Remembering that the MQA decoder is tightly bound to the DAC, the lower diagram shows how to implement additional processing. (High Resolution Image)
We are going to answer this question by describing the optimum structure for MQA in a surround processor (e.g. using upscaling, matrixing, room correction), in an automobile or in a DSP loudspeaker. Such processing may not be performed on the incoming MQA (as this would destroy the MQA data signals) and should not be performed on the final output to the DAC, as this would compromise the MQA decoder’s management of the DAC, introduce uncontrolled temporal blur and require a lot of resource to perform (e.g. room correction at 8x or higher DAC feed speeds). [jumpto=anchor28]12[/jumpto]
The main decoder produces an intermediate signal, and processing such as multichannel up-mixing, room correction or crossover may be performed on this. A software decoder may include a side-chain API where such processing can be inserted, as it already is in mobile implementations.
On the final comparison with a 24/96 file: we respectfully disagree. Such a system might appear to be convenient but will always deliver lower quality, be lower resolution and be further away from the studio original.
Q66. How will the end user know the original master resolution? Is there some requirement from MQA so that, as an example, we will easily know if the master is 16 bit, even if it is delivered as 24 bit? The question applies to DACs, software solutions, streamers, CDs and downloaded MQA.
A66. An MQA file in current distribution has a word width of either 16 or 24 bits. It is very unusual for a project to leave a DAW in 16-bit form, but a 16-bit source generally encodes to a 16-bit MQA file.
The MQA stream internally knows the original sample rate and bit depth, but normally decoders only display the former. The original sample rate, written by the encoder, is also a field in the ID3 header for the convenience of the UI.
[aname=anchor27]11[/aname] For example, at a recent high-end computer audiophile convention, for interest, we made tests on several exhibitors who thought they were providing bit-accurate data for their various (non-MQA) DAC offerings. Even in this arena we found only 1 in 10 had this right; confusing menu options, occasional unnecessary processing in audiophile soft players, or the OS not being in exclusive mode were typical causes.
[aname=anchor28]12[/aname] In particular, to be transparent, low-frequency room correction software complexity increases roughly as the square of the sample rate.
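The square law in footnote 12 follows from a simple FIR argument: to keep the same low-frequency resolution in Hz, a correction filter needs a fixed time span, so its tap count grows in proportion to the sample rate, and the multiply-accumulates per second are taps times sample rate. The sketch below just works that arithmetic for two example rates; the 5 Hz resolution target is an arbitrary illustration, not a figure from MQA or any room-correction product.

def fir_cost(sample_rate_hz: float, resolution_hz: float = 5.0):
    taps = int(sample_rate_hz / resolution_hz)        # fixed time window => taps grow with Fs
    macs_per_second = taps * sample_rate_hz           # each output sample costs 'taps' MACs
    return taps, macs_per_second

for fs in (48_000, 384_000):
    taps, macs = fir_cost(fs)
    print(f"Fs={fs/1000:g} kHz: ~{taps} taps, ~{macs/1e9:.2f} GMAC/s")
# 48 kHz : ~9600 taps, ~0.46 GMAC/s
# 384 kHz: ~76800 taps, ~29.49 GMAC/s -> 8x the rate costs ~64x the work (square law)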
WALLED GARDEN
Q67. I think that a lot of people are concerned about the “walled garden” aspect of MQA.
A67. MQA has a goal to improve something that should be important to us all, namely a simpler and more democratic way to get great sound and great music from artist to fan, with some technology that should improve the ecosystem and a method that might encourage the labels to release higher-quality and more accessible versions of their music.
It’s not possible to change the paradigm without tackling the end-to-end chain.
That does not mean people are excluded. It’s a free market. An MQA encapsulated file can be replayed without a decoder. This cannot be construed as a walled garden in any way.
BUSINESS QUESTIONS
Q68. Will you release the detailed method to unpack an MQA file? Will you release a compiled program to unpack the MQA data stream, or work with others to accomplish this?
A68. MQA involves patented technology and requires a license to implement the encoder or decoder. We have released these details to several partners who are doing this already. As also described in [jumpto=anchor47]A42[/jumpto], we have decoder libraries for a number of platforms and see no sound quality or provenance advantage to be gained by risking any faulty implementations or overtaxing our support resources. A key driver for the program is to get the labels comfortable with releasing a much larger catalogue than today in high quality. When that’s done, we all benefit. Stay in touch as we build more options.
Q69. From the MQA FAQ, I understand there is a license per unit (DAC, streamer, and other hardware). Can you explain the license principles for software? For example, should we expect to pay more for a software player that supports MQA? And maybe for Tidal as well?
A69. Software players embedded in mobile devices should be part of the device. Software players embedded in Apps for subscription services will probably be just part of the service. We haven’t yet started a licensing program for soft players in less quality-assured situations such as computer audio but when we do we will announce it.
Q70. Which leads to freeware players: will they ever be able to play MQA? And could you pay some attention to a possible MQA implementation for Squeezeboxes? Is there a license issue that would or could stop Squeezebox support (with SPDIF or USB out)?
A70. In principle an MQA decoder can be added to any platform. That’s a different question to whether it may be free. Please see our answers to [jumpto=anchor52]Q42[/jumpto].
Q71. MQA has developed a test card (development platform). Can we have some more technical documentation about the test card? Can it be purchased by DIYers? If no, why; if yes, what is the expected price?
A71. The reference development platform is offered under NDA and implementation license to bona fide developers. Right now we have a significant number of these being used to develop MQA products for market and it is keeping our team busy. It isn’t our intention to sell these to DIYs, particularly as it is for integration and not optimised for sound-quality from the monitoring output.
If you are interested in listening to MQA then it is higher quality and cheaper to buy a Meridian Explorer2, Prime or Mytek’s Brooklyn or one of the many other decoder/DACs in the pipeline.
Q72. Can you tell us what went wrong during Auralic’s MQA release at CES?
A72. We are still working with Auralic and are not prepared to discuss any of our licensee relationships.
Q73. MQA embedded in DACs: Will there be DAC chips available with built-in MQA, and when? If yes, does that also mean a DIYer could in theory build an MQA DAC?
A73. There is strong interest from makers of DAC chips in supporting aspects of MQA. That does not necessarily mean the whole decoder ends up in there, but rather that – as we progress – the DAC can take some of the load and have options that require less pro-active management. We are happy to see the DIY interest (here and Q71) and will put these requests into our planning.
FUTURE PLANS
Q74. Are there any plans to implement MQA on digital feeds from live broadcasts? For example, in the UK, BBC Radio 3 is streamed at ‘high definition’ (typically up to 320 kbps): could the MQA process be applied in real time to supply CD or better quality within the broadcast bandwidth limitations?
A74. Yes, MQA can be used in real-time and there is a lot of interest in this and live event streaming. There are a few showcase events planned later in 2016 to kick that off.
Q75. Will there be a consumer version of MQA to convert existing hi-res files (like 192/24) that I have created into MQA files? I am thinking about being able to have greater portability of my files.
A75. No, there is no immediate plan to sell or license a public encoding service. You should understand that the majority of the content owners do not even permit download retailers to re-encode or transcode today. On the other hand, we might hope you can listen to the same music on a streaming service. Remember that part of the value we add is provenance and getting the right source.
MQA MYTHS, RUMOURS AND WORSE
[aname=anchor48]Q76[/aname]. Don’t blame Meridian
A76. At several places in these questions posters refer to Meridian. MQA is developed, owned and operated by MQA Ltd, a completely separate company with its own shareholders, offices and employees. Meridian is one of MQA’s many licensees.
Q77. MQA are lying to us about … CD quality? … fill in the blank.
A77. We agree conspiracy theories are often more interesting than the truth, but we have nothing to gain by lying … it might be worse: we could be right!
[aname=anchor51]Q78[/aname]. Unsubstantiated assertions: “MQA is 13 bit” … uh, oh, no, we meant MQA is 14, 15, 16, 17, 18 bits? “MQA increases the noisefloor of the recording.”
A78. We have no idea how bloggers feel able to draw these un-scientific and broad conclusions, particularly as MQA isn’t one thing.
Q79. DRM
“MQA is a DRM system.” Does this MQA format have any DRM component that we as consumers need to be aware of? (added) What does DRM mean in the context of MQA? If MQA incorporates DRM at such a level that the file cannot be played at all (MQA decoding or not) without authentication, what format would this file be in? That is: it could not be FLAC, could it? DRM, DRM, DRM: what technical and/or legal assurances does a user/purchaser of MQA (which is an IP/patent-protected technology) have that MQA will not now or in the future serve as part of a DRM system? Not interested in personal assurances, good intentions, or a simple “we have no plans for that”. What LEGAL rights does the purchaser of this technology have that protect them from MQA-based DRM (if any)? Since the answer is probably “none”, what changes to the licensing is Meridian prepared to make to assure the audio community that MQA can never be used as a DRM mechanism?
A79. NO it ISN’T. We have no idea where this rumour came from, but we advise circumspection about the motives of those who persist in repeating this falsehood.
In fact, MQA is the antithesis of a DRM system: everyone can hear the music without a decoder!
Even FLAC requires a decoder, as do AAC, MP3, etc; vinyl and optical discs require players. There isn’t anyone who can’t play an MQA file on a mobile phone or an existing system.
DRM is about limiting access, tracking or copy protection. MQA does none of these.
MQA is about getting access to the definitive essence of great performances, with sound quality that is not otherwise achievable, and reassuring you when you have it.
MQA files and decoders exist today; they can’t suddenly stop access to the music.
MQA does carry provenance, metadata and (optionally) creation-rights information that might help the artist or publisher. It does not (unlike some downloads) carry information tracking the purchaser, and we reject audible watermarks. MQA does not have a DRM component.
Q80. Is MQA purely a technology, or is it an initiative to change music distribution? By that I mean, does the MQA product include DRM or track usage in any way?
A80. MQA is not just a technology, it is a philosophy, see [jumpto=anchor42]Q0[/jumpto]. At its roots are several technical insights, but part of the implementation hopes to enable improved music distribution; that has been clearly stated. The improvements come in the areas of sound quality, accessibility and compatibility, with file sizes and data rates that are convenient for modern mobile networks and data plans.
There is also the capability in MQA to carry creator-rights metadata, e.g. copyright, artist, publisher etc. Such information might help to create a fairer playing field for the creative community and we are very much in favour of helping artists to be more able to create good music. The second part of the question is a non sequitur; as we have said many times, there is neither DRM nor usage (tracking) implemented or intended. In addition, we are also opposed to audible watermarking.
Q81. But I read on Wikipedia …
A81. Everything on the internet might be true, but, unfortunately it isn’t.
PLEASE COMMENT ABOUT THESE INVESTIGATIONS/BLOGS
[aname=anchor49]Q82[/aname]. Please comment on these posts:
http://archimago.blogspot.ca/2016/01/measurements-mqa-observations-and-big.html (Blog post has been removed by author - Editor) “In this blog you will see that, from the technical point of view, MQA has around 13 bits of ‘lossless’ information and everything below 14 bits is ‘lossy’. That doesn’t mean it will not sound good, it just means that this is not a lossless codec, it is lossy (from the technical point of view).”
http://www.computeraudiophile.com/blogs/miska/some-analysis-and-comparison-mqa-encoded-flac-vs-normal-optimized-hires-flac-674/
A82. We have paraphrased the assertions: [jumpto=anchor29]13[/jumpto]
i) “MQA have around 13 bits of ‘lossless’ information and everything below 14 bits is ‘lossy’.”
This is incorrect. In general, the MQA system can reach in excess of either 23-bit dynamic range capability or 3–6 bits below the content noise in the audio band.
ii) “Without a decoder we hear 13 bits, that isn’t CD quality”.
Here is a classic case of comparing apples to oranges. When we talk about CD quality sound we don’t expect an answer that says ‘it can’t sound like CD, because I can see only 13 bits’. Do we listen with our instruments? Even after years of working in this area we can’t look at an FFT plot and tell you how something will sound. We can maybe tell you the information capacity of the signal or the channel. One clue why it doesn’t help is in the second ‘F’ (for Fourier).
In any case the 13-bit number is wrong. Try as we might there is no way to tell the information capacity of a channel from a spectrogram (as in one of the cited posts) – the graphs look pretty but are basically meaningless.
As described earlier, if you don’t have a decoder, the channel capacity appears to be typically > 15 bits for the files on the 2L Testbench and this is limited by considerations of compatibility, not coding space. The noise is frequency shaped to minimise audibility, as it is for many well-produced CDs. If you have a decoder then, depending on the authoring parameters, the noisefloor in the recording should not be increased anywhere there is music signal.
iii) Paraphrase: ‘The Nielsen recording shows that MQA are cheating. They take a 16-bit recording and give us back a 24-bit file with lots of noise in it’.
Wrong. All one had to do was read Morten’s notes to guess it might have been remastered to 24 bits, See [jumpto=anchor53]A40[/jumpto].
Nielsen: 2L-120 Track 1
As can be seen in the following, the inherent noisefloor of MQA in this recording is actually:
Without Decoder: MQA channel noise is lowest around 4 kHz @ 17.5 bits, with a channel capacity of 15.8 bits which has been shaped. The MQA noise is always below that of the CD release.
With Decoder: MQA channel noise is lowest around 4 kHz @ 24.3 bits, with a channel capacity of over 23 bits which has been shaped.
We have added to the graph (from our earlier note on the 2L website) to make this clear.
http://images.computeraudiophile.com/graphics/2016/0407/8.png
(High Resolution Image)
Note: the 24-bit Master and MQA (decoded) peak noise curves overlay and are not separately visible.
These graphs confirm that 2L’s Original, CD and MQA versions of the files are consistent in level and response. Of course spectral plots using FFT have no time-domain information, but we can use them to compare the peak spectrum of the Original, CD, and MQA with and without a decoder.
Also shown is a comparison of the background noise throughout each version and the reference level for 16-bit TPDF dither in a channel sampled at 44.1 kHz. [jumpto=anchor30]14[/jumpto] [jumpto=anchor31]15[/jumpto]
In the graphs the peak and noise-floor curves overlay for both MQA decoded and Original master. We can also see that the shaped noise introduced by the MQA encoder and ‘heard’ without a decoder is removed by the decoder and is also below that of the CD release, even without decoding.
Additional curves explained:
With a Decoder: Brown (with open circles): This shows the underlying end-to-end MQA channel noisefloor (with a decoder) in this recording, which clearly shows that here the inherent noise of the MQA process is at least 10 bits (i.e. 60dB) below the noisefloor in the recording at all frequencies up to 22.05 kHz and close to 24 bits between 4kHz and 20 kHz.
Navy: shows the level of 24-bit TPDF dither for reference.
No Decoder: Magenta (with open stars): This shows the underlying MQA noisefloor for the listener with no decoder. It is lowest around 4 kHz and 12 kHz to minimise impact; is essentially below the 16-bit level up to 14 kHz and is always below the noise of the CD version. The inherent noise in the recording dominates below 15 kHz.[jumpto=anchor32]16[/jumpto]
Note: The noise seen by a Legacy (no-decoder) listener is the sparse signalling channel, not lossy noise in the file.
iv) Paraphrase: ‘MQA increases the noise in some recordings’ (an experiment using Explorer 2).
The underlying thesis in this blog has been to demonstrate that, because MQA uses burying techniques in the lossless folds, the dynamic range is somehow restricted to 16 bits or fewer. We showed this to be incorrect regarding the Nielsen recording. We also disagree with the blog’s findings in the case of 2L ‘Blågutten’ from Quiet Winter Night. The graph below shows analysis of:
Files: Background noise levels in the original DXD source and the MQA file.[jumpto=anchor33]17[/jumpto]
Explorer2 analogue output when receiving: MQA (decoded in Explorer2) and the 192 kHz PCM version on the 2L testbench (background noise).[jumpto=anchor34]18[/jumpto]
References: 16- and 20-bit noisefloors @ 352 kHz (note, 9 dB lower than at 44 kHz).
Analysis: The underlying MQA channel noisefloor in this file. Hearing thresholds (steady-state) referenced to a playback acoustic gain of 105 dB SPL.
http://images.computeraudiophile.com/graphics/2016/0407/9.png
(High Resolution Image)
The end-to-end core MQA noise floor in these encodings is always at least 5 bits below the noise floor of the recording up to 11 kHz, 4 bits below up to 22 kHz, and 3 bits below at 44.1 kHz (audio). However, no common DAC chip will reveal this due to internal noise. Even in these great 2L recordings we don’t often see hall/microphone/ADC noise below the 16-bit noise spectral level, which is not surprising given the fundamental thermal limit for microphones. See [[jumpto=anchor2]2[/jumpto]] and the brown curve above.
In our experiment we don’t see the Explorer2 output deviate from the DXD or 192 kHz versions below 33 kHz. Above that there is rising dither from the DAC, but its origin is not a lack of dynamic range.
The mastering engineer can set encoding or playback parameters whereby the noise level can be increased or decreased in some frequency regions, but this is not due to a lack of dynamic range in the MQA system.
We should point out some key points for those less skilled in reading such plots:
FFT analysis like this does not give any clear indication of how it is going to sound, because temporal information is excluded.
The dynamic range is huge; the silence in the recording is 1/3 of the way up the graph.
For steady noise we hear nothing in the shaded areas. Even at high listening levels (e.g. acoustic gain of 112 dB), the noisefloor of the un-decoded MQA should be inaudible if the playback system is linear and has a flat response. With a decoder the noise is more than 20 dB lower. [[jumpto=anchor2]2[/jumpto]][[jumpto=anchor5]5[/jumpto]][[jumpto=anchor6]6[/jumpto]][[jumpto=anchor7]7[/jumpto]][[jumpto=anchor8]8[/jumpto]]
Very few headphones or loudspeakers can reproduce above 40 kHz (shaded blue area).
Very few microphones pick up above 40 kHz, including in this recording.
Noisefloor above 44.1 or 48 kHz (especially at these levels) is more artefact than audio.[[jumpto=anchor3]3[/jumpto]]
[aname=anchor29]13[/aname] There is an issue of bias: we take exception to blogs that block us from posting corrections!
[aname=anchor30]14[/aname] The analysis uses 21.53 Hz bins (= 44100/2048 and 352800/16384), giving an offset of +13.33 dB wrt 1 Hz.
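The bin-width arithmetic in footnote 14 can be checked directly: 44100/2048 and 352800/16384 both give 21.53 Hz bins, and expressing a 21.53 Hz bin relative to a 1 Hz bandwidth gives 10·log10(21.53) ≈ 13.3 dB. The snippet below simply reproduces that arithmetic.

import math

for fs, n in ((44_100, 2_048), (352_800, 16_384)):
    bin_hz = fs / n                                  # FFT bin width in Hz
    offset_db = 10 * math.log10(bin_hz)              # noise-density offset relative to 1 Hz
    print(f"Fs={fs} Hz, N={n}: bin={bin_hz:.2f} Hz, offset=+{offset_db:.2f} dB re 1 Hz")
# Both cases give 21.53 Hz bins and a +13.33 dB offset, so the two analyses are directly comparable.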
[aname=anchor31]15[/aname] 2L sensibly use shaped quantisation for their CD releases.
[aname=anchor32]16[/aname] Of course not all DACs can reach this low level of in-band noise.
[aname=anchor33]17[/aname] Graph displayed up to 88.2 kHz for best comparison with blog.
[aname=anchor34]18[/aname] The analogue output of Explorer2 was captured at 352.8 kHz/24bit in a Pyramix workstation using the Horus converter. The analogue noisefloor of the ADC is around 20 bits. Files were sent to the Explorer2 using Foobar.
QUESTIONS FOR PAL @TIDAL
Q83. Why would TIDAL use MQA for anything but hi-res content? Redbook files (16/44) would be substantially bigger in MQA form, so no upside on bandwidth. Are streaming services that have embraced MQA (e.g. Tidal) likely to be MQA-encoding Redbook material? If so, what is the benefit of doing that? Can you talk to Pål and ask if you are allowed to tell us what to expect from Tidal when they flip the switch? Will MQA versions coexist with Redbook versions, meaning that if you have the necessary access (subscription), you will be able to play both versions of a track / album?
A83. Wrong, they are not bigger; they are often smaller. See [jumpto=anchor54]Q34[/jumpto], [jumpto=anchor55]Q40[/jumpto] and [jumpto=anchor56]Q59[/jumpto].
On Tidal’s plans, we can’t answer for Tidal.
REFERENCES
[[aname=anchor1]1[/aname]] Stuart, J. R. and Craven, P.G., ‘A Hierarchical Approach to Archiving and Distribution’, 137th AES Convention (2014). Open Access: http://www.aes.org/e-lib/browse.cfm?elib=17501
[[aname=anchor2]2[/aname]] Stuart, J.R., ‘Soundboard: High-Resolution Audio’, JAES Vol. 63 No. 10, pp831–832 (Oct 2015) Open Access http://www.aes.org/e-lib/browse.cfm?elib=18046
[[aname=anchor3]3[/aname]] Stuart, J.R., Howard, K., ‘New digital coding scheme – MQA’, (Japanese translation by Hiroaki Suzuki), J. Japan Audio Society, vol. 55 #6, pp45 – 57 (Nov. 2015).
[[aname=anchor4]4[/aname]] Stuart, J.R. ‘Coding for High-resolution Audio Systems’, J. Audio Eng. Soc., Vol. 52, No. 3 (March 2004)
[[aname=anchor5]5[/aname]] Stuart, J.R., ‘Noise: Methods for Estimating Detectability and Threshold’, J. Audio Eng. Soc., 42, 124–140 (March 1994)
[[aname=anchor6]6[/aname]] Stuart, J.R. ‘Predicting the audibility, detectability and loudness of errors in audio systems’ AES 91st convention, New York, preprint 3209 (1991)
[[aname=anchor7]7[/aname]] Stuart, J.R. ‘Estimating the significance of errors in audio systems’ AES 91st convention, New York, preprint 3208 (1991)
[[aname=anchor8]8[/aname]] Stuart, J.R. ‘Psychoacoustic models for evaluating errors in audio systems’ Proceedings of the Institute of Acoustics, 13, part 7, 33 (1991)
[[aname=anchor9]9[/aname]] Lewicki, M.S. ‘Efficient Coding of natural sounds’, Nature Neurosci. 5, 356–363 (2002). http://dx.doi.org/10.1038/nn831
[[aname=anchor10]10[/aname]] Jackson,H.M., Capp,M.D. and Stuart,J.R., ‘The audibility of typical digital audio filters in a high-fidelity playback system’, 9174, 137th AES Convention, (2014).
[[aname=anchor11]11[/aname]] Fellgett,P.B., ‘Thermal noise limits of Microphones’, J.IERE, 57 No.4, 161–166 (1987). http://dx.doi.org/10.1049/jiere.1987.0058
[[aname=anchor12]12[/aname]] Craven, P.G., and Gerzon, M.A., ‘Compatible Improvement of 16-Bit Systems Using Subtractive Dither’, AES 93rd Convention, San Francisco, preprint 3356 (1992)
[[aname=anchor13]13[/aname]] Gerzon,M.A., and Craven, P.G., ‘Optimal Noise Shaping and Dither of Digital Signals’, 87th AES Convention, NewYork, preprint 2822 (1989)
[[aname=anchor14]14[/aname]] Gerzon, M.A., Craven, P.G., Stuart, J.R., and Wilson, R.J., ‘Psychoacoustic Noise Shaped Improvements in CD and Other Linear Digital Media’, AES 94th Convention, Berlin, preprint 3501 (March 1993)
[[aname=anchor15]15[/aname]] Craven, P.G., ‘Antialias Filters and System Transient Response at High Sample Rates’, J. Audio Eng. Soc., Vol. 52, No. 3, pp. 216–242, (March 2004)
[[aname=anchor16]16[/aname]] Acoustic Renaissance for Audio, ‘A Proposal for High-Quality Application of High-Density CD Carriers’, private publication available for download at http://www.meridian-audio.com/ara (April 1995). Reprinted in Stereophile (August 1995) and in Japanese in J. Japan Audio Soc., 35 (October 1995)
GLOSSARY
ADC (A/D Converter) - Analogue to Digital Converter.
DAC (D/A Converter) - Digital to Analogue Converter.
Encapsulation - The process which identifies and secures the information space of a music signal and which exploits advanced sampling and reconstruction methods to optimise analogue end-to-end temporal precision while minimising data rate. We refer to the encapsulated object as the ‘kernel’.
Hierarchical Coding - A conceptual framework that models analogue as an infinite sample rate, finite word-size representation which can be approximated by a hierarchical chain of downward and upward splines. In MQA the transmission kernel is not a sinc function, nor is it Gaussian; it is informed by neuroscience. MQA can also employ the hierarchical packing system, so-called ‘Music Origami’. When the packing is folding a ‘kernel’, the process is losslessly reversible.
Kernel - The encapsulated core music signal. The apparent sampling rate of the kernel we refer to as the ‘transmission rate’.
Legacy - The Legacy quality audio is that provided by playback of the MQA distribution stream without a decoder. The perceived audio will be at least equivalent to a standard CD (44.1 kHz, 16-bit) for a 1x transmission file.
MQA - MQA, Master Quality Authenticated, provides a means to securely encode and transmit high-resolution audio. An MQA decoder may be used to verify the authenticity of the audio and to present a high-resolution listening experience. A listener may still play back and listen to the encoded audio stream without an MQA decoder, treating it as a standard PCM stream, at CD quality.
Rates : Original, Kernel, Transmission, Transport, Render - The hierarchical system can present different apparent sample rates throughout the process. We refer to ‘original’ (studio capture), ‘transmission’ (information rate of the kernel), ‘transport’ (the apparent speed of the file) and ‘rendering’ (the rate sent to the DAC).
Rendering - The MQA Renderer performs sampling reconstruction under instruction from the encoder while matching and optimising the attached DAC to deliver an authenticated analogue output. The renderer may replace or over-ride reconstruction filters in the DAC. The renderer may perform cross-family conversion depending on the platform.
1x, 2x, 4x, etc. - We refer to sample rates by their multipliers. These represent multiples of the basis sampling frequency of 44100 Hz or 48000 Hz, yielding two families of sample rate. For example, in the 44.1 kHz family, 2x audio refers to an 88.2 kHz sample rate, whereas in the 48 kHz family, 2x refers to a 96 kHz sample rate. Consumer decoders are not required to support 50.4, 64 or 128 kHz.
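As a quick reference, the two rate families expand as follows; the small sketch below is generated directly from the glossary definition (multiplier times 44100 or 48000 Hz) and adds nothing beyond it.

BASES_HZ = {"44.1 kHz family": 44_100, "48 kHz family": 48_000}

for family, base in BASES_HZ.items():
    rates = {f"{m}x": m * base for m in (1, 2, 4, 8, 16)}
    print(family, rates)
# 44.1 kHz family: 1x=44100, 2x=88200, 4x=176400, 8x=352800, 16x=705600
# 48 kHz family:   1x=48000, 2x=96000, 4x=192000, 8x=384000, 16x=768000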