Does rendering degrade WAVs?
- inverseroom
- on a wing and a prayer
- Posts: 5031
- Joined: Wed May 07, 2003 8:37 am
- Location: Ithaca, NY
- Contact:
I track at 24/44.1, and sometimes I do a lot of edits on a track and then render them out to a fresh 24/44.1 WAV. And this weekend I've been transferring projects from one DAW to another, and I'm basically rendering every single sound I've recorded for the past year, and it occurs to me how much truth there is to the notion that every render sucks a little bit of the life out of your tracks.
Personally, I can't hear any difference and am not remotely worried. But how much truth is there to this? I'm going from 24/44.1 to 24/44.1, without printing any effects.
- inverseroom
- on a wing and a prayer
- Posts: 5031
- Joined: Wed May 07, 2003 8:37 am
- Location: Ithaca, NY
- Contact:
Well, I am doing some crossfades on a few tracks, mostly drum edits. But for most other tracks, it's just straight from the original WAVs.
I think I started thinking about this because of an interview with Bob Katz in the book The Mastering Engineer's Handbook:
Every DSP operation costs something in terms of sound quality. It gets grainier, colder, narrower, and harsher. Adding a generation of normalization is just taking it down one generation.
I'm not normalizing, but a render is a DSP operation, right? Again, I can't hear any difference, but I wonder...
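Katz's "one generation per operation" claim can be put in rough numbers. Here's a toy sketch (not anything from the thread): it re-quantizes a single sample to a 24-bit grid after every gain change, the way a chain of fixed-point renders would, and measures the accumulated drift. The 0.8 gain and the sample value are arbitrary, chosen only for illustration.

```python
# Toy model of "every DSP generation costs something": re-quantize a
# sample to a 24-bit grid after each gain change, as a chain of
# fixed-point renders would. Gain and sample value are arbitrary.

def quantize(x, bits=24):
    """Round a [-1.0, 1.0) sample to the nearest n-bit integer step."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

sample = 0.3333333
value = quantize(sample)
for _ in range(100):
    value = quantize(value * 0.8)   # gain down, "render"
    value = quantize(value / 0.8)   # gain back up, "render" again

error = abs(value - quantize(sample))
print(f"drift after 100 round trips: {error:.2e}")
```

At 24 bits the drift stays far below anything audible, which is the "more room to manipulate" point made later in the thread; run the same loop at 8 or 12 bits and it degrades much faster.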
-
- moves faders with mind
- Posts: 2746
- Joined: Thu Jul 03, 2003 11:26 pm
- Location: Denver, CO
- Contact:
So why not do the render, import it to a new track, flip the polarity, and see how well they null?
I'd expect to get the same results either way... in each case, the computer is working through the same input data set (the edited track). Is there really a difference between writing the resulting data to the DAC, as opposed to copying out ("rendering") the data stream to a new file? The computer is just doing math, and math isn't arbitrary (well, unless you're dithering).
(This does mean the path taken by the two tracks will be slightly different, too: the edited track only passes through its own track fader, while the rendered track will have passed through the original fader and then through the fader on the track playing the render. If the fader isn't 100% transparent at unity gain (and floating-point math is a little imprecise), it may change things between the two tracks, albeit hopefully to an extremely minuscule degree.)
Of course, this doesn't really address Mr. Katz's assertion that processing in general leads to degradation. With 24-bit sampling, there's more room to manipulate before things get audible than there was with 16 bits.
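The polarity-flip null test suggested above is easy to sketch. This is a minimal illustration with made-up sample values: an edit-free "render" of a track (same math over the same data), an inverted copy summed back against the original, and a residue that comes out as exact digital silence.

```python
# Sketch of the polarity-flip null test: sum a track against a
# polarity-inverted copy of its render and inspect the residue.
# The five sample values are made up for illustration.

track = [0.25, -0.5, 0.125, 0.0, -0.03125]

# An edit-free "render" is just the same math over the same data,
# so the output is bit-identical to the input.
rendered = list(track)

# Flip polarity on the render and mix it against the original.
residue = [a + (-b) for a, b in zip(track, rendered)]

print(residue)   # all zeros: the two tracks null completely
```

Any nonzero residue would point at something in the path (a non-transparent fader, dither, a plugin) changing samples between the two passes.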
- apropos of nothing
- dead but not forgotten
- Posts: 2193
- Joined: Tue May 13, 2003 6:29 am
- Location: Minneapolis, MN
- Contact:
The answer is:
If you want to ensure no degradation of your WAV files, copy 'em, rather than burning them.
If you need to make time-aligned stems (necessitating bounces), then as long as you're bouncing them singly, rather than mixing or processing them, you will minimize the damage you're doing to them.
OTOH, the point is pretty well moot.
Will you be able to hear the difference between your bounced and unbounced WAVs? Probably not. Do you stem through analog? (I do -- I like it!) Then you've probably done far more "damage" to your signal at that stage (D/A -> processing -> A/D) than your bounce ever could.
Sure, it'd be nice to observe a Hippocratic oath when it comes to audio signals, and for the most part we all do, but there comes a point where it gets silly and starts to tie your hands if taken too far.
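The "copy 'em" point can be checked directly: a straight file copy is byte-for-byte identical to the source, and a checksum proves it. A small sketch, with throwaway files in a temp directory standing in for real WAVs (the names and contents are made up):

```python
# A file copy is byte-for-byte identical to the source, which a
# checksum can confirm. Throwaway files stand in for real WAVs.
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "take1.wav")
dst = os.path.join(tmp, "take1_copy.wav")

with open(src, "wb") as f:
    f.write(os.urandom(4096))          # stand-in for real audio bytes

shutil.copyfile(src, dst)
identical = sha256_of(src) == sha256_of(dst)
print(identical)
```

A bounce, by contrast, re-runs the audio through the engine, so it has to be audited with a null test rather than a checksum.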
At one point I tried the polarity trick with some consolidated tracks. They nulled out. On the flip side, I'm always paranoid that all of the inaudible content getting f'ed with in such a process (rendering) adds up to a perceptible difference. Having said that, I try not to let my paranoia influence my decision-making too much, as it usually squashes the creative process somewhere along the line more than the original fear does.
-J
- apropos of nothing
- dead but not forgotten
- Posts: 2193
- Joined: Tue May 13, 2003 6:29 am
- Location: Minneapolis, MN
- Contact:
inverseroom wrote: Yeah, definitely, I mean how much of the Beatles' records was "degraded" by bouncing down? We consider that to be "good" degradation, because it's tape, but still.
Analog degradation is slightly easier on the ears than digital degradation, admittedly.
I mean, really, lately my philosophy has been: do as little to the signal post-recording as possible. It's not always possible, and sometimes a little whatsit is just what's called for, but as far as practicable, I've really been trying to just let it be. I think my recordings have gotten a lot better as a result.
- @?,*???&?
- on a wing and a prayer
- Posts: 5804
- Joined: Wed May 07, 2003 4:36 pm
- Location: Just left on the FM dial
- Contact:
MoreSpaceEcho wrote: whenever i render a track i save it as 32 bit float, because, well, why not? and they totally null so i've stopped worrying about it.
Spoken by someone with no understanding of what digital is. One wonders why MoreSpaceEcho bothers posting here.
The following should bring some clarity:
The most important things when importing or exporting audio are, first, the sample rate and bit depth it was recorded at and, second, the bit depth and sample rate it will be played back at.
Taking tracks from one system to another means the wordclock is all-important. Ken Pohlmann notes in his Principles of Digital Audio that the most crucial stage for any audio in the digital domain is having the proper sampling rate at the time of conversion from analog to digital.
Rendering tracks for someone to work on brings with it the possibility of having a different clock, or one that is better or worse than the one you started with. Maintaining the same clock source throughout a project should be not just desirable, but mandatory.
Remember, too, that the samples before and after will be the same, but they may be played back at a slightly different rate.
Here is a scenario: imagine AudioSuiting a track in Pro Tools. The file will be processed with the desired plugin regardless of the session's clock source. Essentially, this is like rendering a file to WAV. When the file is played back on a given device or program, it needs a wordclock or a clocked source to set the sample rate.
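The "same samples, different rate" point can be put in numbers: the file only stores sample values; duration (and therefore pitch) comes from the clock that plays them back. A back-of-the-envelope sketch:

```python
# Same samples, different playback rate: the file stores sample values,
# but duration (and pitch) depends on the clock that plays them back.
# Ten seconds captured at 44.1 kHz, then played at two different rates.
n_samples = 441_000

dur_at_44100 = n_samples / 44_100    # played at the recorded rate
dur_at_48000 = n_samples / 48_000    # played fast: shorter and pitched up

print(dur_at_44100)                  # 10.0 seconds
print(round(dur_at_48000, 4))        # 9.1875 seconds
```

Nothing in the rendered data changed between the two cases; only the clock did.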
- farview
- tinnitus
- Posts: 1204
- Joined: Tue Aug 31, 2004 1:42 pm
- Location: St. Charles (chicago) IL
- Contact:
@?,*???&? wrote: Spoken by someone with no understanding of what digital is. One wonders why MoreSpaceEcho bothers posting here. [...]
But that is beside the point. The clock only comes into it during conversion. If you are rendering in the computer, the clock has nothing to do with anything.
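On MoreSpaceEcho's 32-bit-float renders nulling: that outcome is expected, since an IEEE 754 single-precision float carries a 24-bit significand, so every 24-bit fixed-point sample value survives the trip through float32 exactly. A quick check (the tested values are arbitrary 24-bit extremes and mid-range picks):

```python
# Why 24-bit audio survives a 32-bit float render bit-for-bit: an IEEE
# 754 single has a 24-bit significand, so every 24-bit integer sample
# value is exactly representable in float32.
import struct

def to_float32(x):
    """Round-trip a number through a 4-byte IEEE 754 single."""
    return struct.unpack("<f", struct.pack("<f", float(x)))[0]

# Arbitrary 24-bit sample values: the extremes plus mid-range picks.
exact = all(to_float32(n) == n
            for n in (-2**23, -1234567, 0, 1, 8388607))
print(exact)   # True: no 24-bit sample value is altered by float32
```

That exactness holds with no processing applied; once gains or plugins enter the path, the float math can differ in the low bits, which is what a null test would reveal.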
- @?,*???&?
- on a wing and a prayer
- Posts: 5804
- Joined: Wed May 07, 2003 4:36 pm
- Location: Just left on the FM dial
- Contact:
inverseroom wrote: Yeah, definitely, I mean how much of the Beatles' records was "degraded" by bouncing down? We consider that to be "good" degradation, because it's tape, but still.
A lot. Get the remix album of Yellow Submarine that coincided with the release of the movie on DVD.
The remixes were done from FIRST-generation masters, and the tracks were aligned in Sonic Solutions, then remixed to recreate the original mixes WITHOUT the signal degradation from bouncing.
The first generation tracks sound AMAZING.
This makes an INCREDIBLE comparison when A/B'ing with the disc we've known for a long time. Could the title track sound better and more open? Resoundingly, yes, very much so.
Seek it out.
- @?,*???&?
- on a wing and a prayer
- Posts: 5804
- Joined: Wed May 07, 2003 4:36 pm
- Location: Just left on the FM dial
- Contact:
farview wrote: But that is beside the point. The clock only comes into it during conversion. If you are rendering in the computer, the clock has nothing to do with anything.
You'll need to explain this more. A wordclock would have no effect on conversion.