Does rendering degrade WAVs?

Recording Techniques, People Skills, Gear, Recording Spaces, Computers, and DIY

Moderators: drumsound, tomb

Post Reply
User avatar
inverseroom
on a wing and a prayer
Posts: 5031
Joined: Wed May 07, 2003 8:37 am
Location: Ithaca, NY
Contact:

Does rendering degrade WAVs?

Post by inverseroom » Sun Feb 24, 2008 2:53 pm

I track at 24/44.1, and sometimes I do a lot of edits on a track and then render them out to a fresh 24/44.1 WAV. This weekend I've been transferring projects from one DAW to another, basically rendering every single sound I've recorded for the past year, and it has me wondering how much truth there is to the notion that every render sucks a little bit of the life out of your tracks.

Personally, I can't hear any difference and am not remotely worried. But how much truth is there to this? I'm going from 24/44.1 to 24/44.1, without printing any effects.

aaronaustin
takin' a dinner break
Posts: 176
Joined: Tue Jul 31, 2007 11:18 am
Location: Lexington, KY
Contact:

Post by aaronaustin » Sun Feb 24, 2008 3:15 pm

I've actually been wondering the same thing lately. Sorry, no help here.

Platinum Samples
gimme a little kick & snare
Posts: 83
Joined: Fri Sep 15, 2006 2:34 pm
Location: Los Angeles, CA
Contact:

Post by Platinum Samples » Sun Feb 24, 2008 3:45 pm

If you're not changing gain or doing any cross fades then they should be bit identical.
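That "bit identical" claim is easy to check for yourself. Here's a minimal Python sketch (the `frames_identical` helper is mine, not from any DAW or library) that compares the raw PCM frames of two WAV files while ignoring header and metadata differences:

```python
import wave

def frames_identical(path_a, path_b):
    """Return True if two WAV files carry the same PCM data:
    same channel count, sample width, rate, and frame count,
    with byte-for-byte identical frames (headers ignored)."""
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        # params[:4] = (nchannels, sampwidth, framerate, nframes)
        if a.getparams()[:4] != b.getparams()[:4]:
            return False
        return a.readframes(a.getnframes()) == b.readframes(b.getnframes())
```

If a render of an unprocessed 24/44.1 track fails this comparison, the DAW is doing something on the way out — gain, dither, fades — and that's where to look.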

Rail
www.platinumsamples.com
Image

User avatar
inverseroom
on a wing and a prayer
Posts: 5031
Joined: Wed May 07, 2003 8:37 am
Location: Ithaca, NY
Contact:

Post by inverseroom » Sun Feb 24, 2008 4:44 pm

Well, I am doing some crossfades on a few tracks, mostly drum edits. But most other tracks are straight from the original WAVs.

I think I started thinking about this because of an interview with Bob Katz in the book Mastering Engineer's Handbook:
Every DSP operation costs something in terms of sound quality. It gets grainier, colder, narrower, and harsher. Adding a generation of normalization is just taking it down one generation.
I'm not normalizing, but a render is a DSP operation, right? Again, I can't hear any difference, but I wonder...

The Scum
moves faders with mind
Posts: 2525
Joined: Thu Jul 03, 2003 11:26 pm
Location: Denver, CO
Contact:

Post by The Scum » Sun Feb 24, 2008 5:23 pm

So why not do the render, then import it to a new track, flip the polarity, and see how well they null?

I'd expect to get the same results either way...in each case, the computer is going through the same input data set (the edited track). Is there really a difference between writing the resulting data to the DAC, as opposed to copying out ("rendering") the data stream to a new file? The computer is just doing math, and math isn't arbitrary (well, unless you're dithering).

(This does mean the path taken by the two tracks will be slightly different, too: the edited track only passes through its own track fader, while the rendered track has passed through the original fader and then through the fader playing the render. If the fader isn't 100% transparent at unity gain (and floating point math is a little imprecise), it may change things between the two tracks — albeit hopefully to an extremely minuscule degree.)

Of course this doesn't really address Mr Katz's assertion that processing, in general, leads to degradation. With 24-bit sampling, there's more room to manipulate before things get audible than there was with 16 bits.
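The polarity-flip null test described above reduces to simple arithmetic: invert one track, sum the two, and whatever doesn't cancel is the difference between them. A toy sketch (sample values as plain ints; the `null_test` helper name is mine):

```python
def null_test(track_a, track_b):
    """Flip the polarity of track_b, mix it with track_a, and
    return the peak absolute residual. A result of 0 means the
    two tracks null completely (they are sample-identical)."""
    assert len(track_a) == len(track_b), "tracks must align sample-for-sample"
    return max(abs(a + (-b)) for a, b in zip(track_a, track_b))

original = [0, 12000, -350, 8191]
print(null_test(original, list(original)))          # perfect null: 0
print(null_test(original, [0, 12001, -350, 8190]))  # 1 LSB of difference: 1
```

In a DAW you'd watch the summed track's meter instead of computing this by hand; a residual of a few LSBs down at the noise floor is dither, not "life" being sucked out.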

User avatar
inverseroom
on a wing and a prayer
Posts: 5031
Joined: Wed May 07, 2003 8:37 am
Location: Ithaca, NY
Contact:

Post by inverseroom » Mon Feb 25, 2008 4:18 am

Well...to play the devil's advocate, "just doing math" sounds kinda suspicious. :wink:

I AM clicking the "allow dither" box--but going from 24/44.1 to 24/44.1 shouldn't require it, correct?
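Correct — going 24-bit to 24-bit with no processing means nothing is requantized, so there's nothing to dither. What an "allow dither" option does when it *is* needed (e.g. coming off a higher-precision mix bus) can be sketched like this (toy Python; the `requantize` helper name is mine, values expressed in LSBs of the target word):

```python
import random

def requantize(x, dither=True):
    """Quantize a high-precision sample value (in LSBs of the
    target word length) to an integer. With dither=True, TPDF
    noise (sum of two uniforms, about +/-1 LSB peak) is added
    first so the rounding error is decorrelated from the signal."""
    if dither:
        x += random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)
    return round(x)
```

If the source samples already sit exactly on the 24-bit grid and nothing upstream has changed them, `requantize(x, dither=False)` returns them untouched, and the dithered version only toggles the last bit or so.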

User avatar
apropos of nothing
dead but not forgotten
Posts: 2193
Joined: Tue May 13, 2003 6:29 am
Location: Minneapolis, MN
Contact:

Post by apropos of nothing » Mon Feb 25, 2008 8:22 am

The answer is: if you want to ensure no degradation of your WAV files, copy 'em rather than bouncing them.

If you need to make time-aligned stems (necessitating bounces), then as long as you're just bouncing them singly, rather than mixing them or applying effects, you'll minimize the damage you're doing to them.

OTOH, the point is pretty well moot.

Will you be able to hear the difference between your bounced and unbounced waves? Probably not. Have you stemmed through analog? (I do -- I like it!) Then you've probably done far more "damage" to your signal at that stage (d/a->processing->a/d) than your bounce could ever do.

Sure, it'd be nice to observe a Hippocratic oath when it comes to audio signals, and mostly we all do, but past a certain point it gets silly and starts to tie your hands.

JdJ
pushin' record
Posts: 217
Joined: Tue Jan 03, 2006 8:11 am
Location: nh

Post by JdJ » Mon Feb 25, 2008 11:04 am

At one point I tried the polarity trick with some consolidated tracks. They nulled out. On the flip side, I am always paranoid that all of the inaudible content that's getting f'ed with in such a process (rendering) adds up to a perceptible difference. Having said that, I try not to let my paranoia influence my decision making too much, as it usually squashes the creative process somewhere along the line more than the original fear does.

-J

User avatar
inverseroom
on a wing and a prayer
Posts: 5031
Joined: Wed May 07, 2003 8:37 am
Location: Ithaca, NY
Contact:

Post by inverseroom » Mon Feb 25, 2008 11:09 am

Yeah, definitely, I mean how much of the Beatles' records was "degraded" by bouncing down? We consider that to be "good" degradation, because it's tape, but still.

User avatar
apropos of nothing
dead but not forgotten
Posts: 2193
Joined: Tue May 13, 2003 6:29 am
Location: Minneapolis, MN
Contact:

Post by apropos of nothing » Mon Feb 25, 2008 2:14 pm

inverseroom wrote:Yeah, definitely, I mean how much of the Beatles' records was "degraded" by bouncing down? We consider that to be "good" degradation, because it's tape, but still.
Analog degradation is slightly easier on the ears than digital degradation, admittedly.

I mean, really, lately my philosophy has been: do as little to the signal post-recording as possible. It's not always possible, and sometimes a little whatsit is just what's called for, but as far as practicable I've really been trying to just let it be. I think my recordings have gotten a lot better as a result.

MoreSpaceEcho
zen recordist
Posts: 6495
Joined: Wed May 07, 2003 11:15 am

Post by MoreSpaceEcho » Mon Feb 25, 2008 5:23 pm

whenever i render a track i save it as 32 bit float, because, well, why not? and they totally null so i've stopped worrying about it.
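There's a reason that works: IEEE-754 single precision carries a 24-bit significand, so every 24-bit integer sample value survives the round trip to 32-bit float and back exactly. A quick check (the `survives_float32` helper name is mine):

```python
import struct

def survives_float32(sample):
    """Round-trip an integer sample value through 32-bit IEEE-754
    float and report whether it comes back unchanged."""
    return struct.unpack("<f", struct.pack("<f", float(sample)))[0] == sample

# every value in the signed 24-bit range is exactly representable...
print(all(survives_float32(s) for s in (-8388608, -1, 0, 1, 8388607)))  # True
# ...but one step past 2**24 is not (it rounds to 16777216)
print(survives_float32(2**24 + 1))  # False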

User avatar
@?,*???&?
on a wing and a prayer
Posts: 5804
Joined: Wed May 07, 2003 4:36 pm
Location: Just left on the FM dial
Contact:

Post by @?,*???&? » Mon Feb 25, 2008 6:56 pm

MoreSpaceEcho wrote:whenever i render a track i save it as 32 bit float, because, well, why not? and they totally null so i've stopped worrying about it.
Spoken by someone with no understanding of what digital is. One wonders why MoreSpaceEcho bothers posting here.

The following should bring some clarity:

The most important things when importing or exporting audio are going to be, first, the sample rate and bit depth it was recorded at and, second, the bit depth and sample rate it will be played back at.

Taking tracks from one system to another will mean that the wordclock will be all-important. Ken Pohlmann notes in his 'Principles of Digital Audio' that the most crucial stage for any audio in the digital domain is having the proper sampling rate at the time of conversion from analog to digital.

Rendering tracks for someone to work on brings with it the possibility of having a different clock, or one that is better or worse than the one you started with. Maintaining the same clock source throughout a project should be not just desirable, but mandatory.

Remember too, the samples before and after will be the same, but they may be played back at a slightly different rate.

Here is a scenario: imagine AudioSuiting a track in Pro Tools. The file will be processed with the desired plugin regardless of the clock source for the session. Essentially, this would be like rendering a file to .wav. When the file is played back by a given device or program, it needs a wordclock or a clocked source to set the sample rate.

User avatar
farview
tinnitus
Posts: 1204
Joined: Tue Aug 31, 2004 1:42 pm
Location: St. Charles (chicago) IL
Contact:

Post by farview » Mon Feb 25, 2008 7:28 pm

@?,*???&? wrote:
MoreSpaceEcho wrote:whenever i render a track i save it as 32 bit float, because, well, why not? and they totally null so i've stopped worrying about it.
Spoken by someone with no understanding of what digital is. One wonders why MoreSpaceEcho bothers posting here.

The following should bring some clarity:

The most important things when importing or exporting audio are going to be, first, the sample rate and bit depth it was recorded at and, second, the bit depth and sample rate it will be played back at.

Taking tracks from one system to another will mean that the wordclock will be all-important. Ken Pohlmann notes in his 'Principles of Digital Audio' that the most crucial stage for any audio in the digital domain is having the proper sampling rate at the time of conversion from analog to digital.

Rendering tracks for someone to work on brings with it the possibility of having a different clock, or one that is better or worse than the one you started with. Maintaining the same clock source throughout a project should be not just desirable, but mandatory.

Remember too, the samples before and after will be the same, but they may be played back at a slightly different rate.

Here is a scenario: imagine AudioSuiting a track in Pro Tools. The file will be processed with the desired plugin regardless of the clock source for the session. Essentially, this would be like rendering a file to .wav. When the file is played back by a given device or program, it needs a wordclock or a clocked source to set the sample rate.
But that is beside the point. The clock only comes into it during conversion. If you are rendering in the computer, the clock has nothing to do with anything.

User avatar
@?,*???&?
on a wing and a prayer
Posts: 5804
Joined: Wed May 07, 2003 4:36 pm
Location: Just left on the FM dial
Contact:

Post by @?,*???&? » Mon Feb 25, 2008 7:47 pm

inverseroom wrote:Yeah, definitely, I mean how much of the Beatles' records was "degraded" by bouncing down? We consider that to be "good" degradation, because it's tape, but still.
A lot. Get the remix album of 'Yellow Submarine' that coincided with the release of the movie on DVD.

The remixes were done from FIRST generation masters, and the tracks were aligned in Sonic Solutions, then remixed to recreate the original mixes WITHOUT the signal degradation from bouncing.

The first generation tracks sound AMAZING.

This makes an INCREDIBLE comparison when A/B'ing with the disc we've known for a long time. Could the title track sound better and more open? Resoundingly, yes, very much so.

Seek it out.

User avatar
@?,*???&?
on a wing and a prayer
Posts: 5804
Joined: Wed May 07, 2003 4:36 pm
Location: Just left on the FM dial
Contact:

Post by @?,*???&? » Mon Feb 25, 2008 7:50 pm

farview wrote:
@?,*???&? wrote:
MoreSpaceEcho wrote:whenever i render a track i save it as 32 bit float, because, well, why not? and they totally null so i've stopped worrying about it.
Spoken by someone with no understanding of what digital is. One wonders why MoreSpaceEcho bothers posting here.

The following should bring some clarity:

The most important things when importing or exporting audio are going to be, first, the sample rate and bit depth it was recorded at and, second, the bit depth and sample rate it will be played back at.

Taking tracks from one system to another will mean that the wordclock will be all-important. Ken Pohlmann notes in his 'Principles of Digital Audio' that the most crucial stage for any audio in the digital domain is having the proper sampling rate at the time of conversion from analog to digital.

Rendering tracks for someone to work on brings with it the possibility of having a different clock, or one that is better or worse than the one you started with. Maintaining the same clock source throughout a project should be not just desirable, but mandatory.

Remember too, the samples before and after will be the same, but they may be played back at a slightly different rate.

Here is a scenario: imagine AudioSuiting a track in Pro Tools. The file will be processed with the desired plugin regardless of the clock source for the session. Essentially, this would be like rendering a file to .wav. When the file is played back by a given device or program, it needs a wordclock or a clocked source to set the sample rate.
But that is beside the point. The clock only comes into it during conversion. If you are rendering in the computer, the clock has nothing to do with anything.
You'll need to explain this more. A wordclock would have no effect on conversion.

Post Reply

Who is online

Users browsing this forum: No registered users and 39 guests