Learn the fundamentals of digital signal processing theory and discover the myriad ways DSP makes everyday life more productive and fun.


A course from École Polytechnique Fédérale de Lausanne

Digital Signal Processing



From this lesson

Module 5: Sampling and Quantization

- Paolo Prandoni, Lecturer, School of Computer and Communication Sciences

- Martin Vetterli, Professor, School of Computer and Communication Sciences

Consider again the problem of converting an audio file

from CD standard to DVD standard.

The CD sampling rate is 44.1 kilohertz,

whereas the DVD sampling rate is 48 kilohertz.

If we were to perform this conversion using the standard upsampler and

downsampler in cascade, we would have to first upsample the CD

sequence by a factor of 160,

and then downsample it by a factor of 147.

These are very large factors, and

it would be impractical to implement them in a cheap signal processing system.
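The two factors come from reducing the ratio of the two sampling rates to lowest terms; a quick check using only the Python standard library:

```python
from math import gcd

# Reduce the ratio of target rate to source rate to lowest terms.
g = gcd(48000, 44100)            # greatest common divisor: 300
up, down = 48000 // g, 44100 // g
print(up, down)                  # 160 147
```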

So we use another strategy which is called time-varying local interpolation.

Let's first consider the problem of subsample interpolation.

Given a discrete-time sequence, of which here we show, for instance,

three samples around index n,

we want to find the putative value of

the underlying continuous-time signal at a time n plus tau,

where tau is strictly less than one half in magnitude.

So we want to interpolate this sequence and

find intermediate values in this interval here around a sample of reference.

We call this the anchor sample.

In theory, what we should do is build a full sinc interpolation

of the discrete time sequence and

then resample the resulting continuous time signal with a time offset of tau.

But of course, that would require us to go into continuous time, and

we don't want to do that if we can avoid it.

So, can we perform subsample interpolation entirely in the discrete-time domain?

The way we proceed is to use local Lagrange interpolation,

like we did in the interpolation examples from discrete time to continuous time.

We can formally build a continuous time interpolation around the anchor sample

by taking the same number of samples to the left and

to the right of the anchor sample.

And use these samples to build a linear combination of Lagrange interpolation

polynomials of the right order.

Now, we don't need to compute this explicitly,

this is just an implicit construction because what we want to find there is just

one particular value of this function computed in tau.

The expression for

the 2N + 1 Lagrange polynomials of order 2N is given by this formula here, and we will use

that later to explicitly compute the value of the polynomial in tau.
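The formula itself is not reproduced in the transcript; what is presumably shown on the slide is the standard Lagrange basis on the 2N + 1 integer nodes -N, ..., N:

```latex
L_k^{(N)}(t) \;=\; \prod_{\substack{m=-N \\ m \neq k}}^{N} \frac{t-m}{k-m},
\qquad k = -N, \dots, N .
```

Each L_k^{(N)} is a polynomial of degree 2N that equals 1 at t = k and 0 at every other integer node in the range.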

Let's take an example: let's take N = 1, which means that we will take one

sample to the left, and one sample to the right of the anchor sample.

The resulting interpolation will therefore be of second order.

It will be a parabola.

The way we build this parabola to go through the three samples of reference,

is by taking a linear combination of the following three Lagrange polynomials of

order 2.

This is the first one, this is the second one, and this is the third one.

The interpolation is a linear combination of these three curves weighted by

the values of the samples, so graphically, it looks like so.

These are our three samples.

The first polynomial will pass through the first sample and

be 0 at the other index locations.

This is the classic property of interpolating functions.

The second polynomial will go through the second sample and

go through 0 at the other locations.

And the third polynomial will look like so.

When we sum them together we finally get the parabola that goes through the points.

This is the second-order interpolation of the three points that we have chosen.

And now if we want to find the subsample value at n + tau, all

we need to do is find the value of this polynomial for an argument equal to tau.

Okay, so let's see once again what we're doing here.

We're saying that the value of the underlying continuous time signal

in n + tau is approximately equal to the value

of the local Lagrange interpolation around sample n, and displaced by tau.

If we compute this value we see that this is a linear combination

of the values of the Lagrange polynomials of order 2N

computed in tau, weighted by the values of the samples around the anchor sample.

This looks suspiciously like convolution.

And indeed, if we define an impulse response d tau of k as a collection

of the values of the Lagrange polynomials in tau, we can express the value of

the local Lagrange interpolation in tau as the convolution between the samples around

the anchor point n and the values of this (2N + 1)-point response.

So for every possible tau, we can define a (2N + 1)-tap FIR.

And if we filter any sequence with this FIR, we will get a shifted version of

the sequence, by a subsample amount tau, that is less than one-half in magnitude.

This is fundamentally a low order approximation

of the fractional delay filter.
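The construction just described can be sketched in a few lines, assuming NumPy (the function names here are mine, not from the course):

```python
import numpy as np

def lagrange_taps(tau, N=1):
    """d_tau(k) = L_k(tau): the degree-2N Lagrange basis polynomial
    for integer node k, evaluated at the fractional offset tau."""
    nodes = np.arange(-N, N + 1)
    return np.array([
        np.prod([(tau - m) / (k - m) for m in nodes if m != k])
        for k in nodes
    ])

def subsample(x, n, tau, N=1):
    """Approximate x(n + tau) from the 2N + 1 samples around anchor n."""
    return np.dot(x[n - N:n + N + 1], lagrange_taps(tau, N))

# The approximation is exact for polynomial signals of degree <= 2N:
x = np.arange(20.0) ** 2          # x[n] = n^2
print(subsample(x, 5, 0.2))       # 27.04 = 5.2^2 (up to rounding)
```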

For example, if we take N = 1, we are taking three samples and

building a second-order approximation.

These are the expressions for

the three Lagrange polynomials involved in the interpolation.

If we plug any value of tau into this formula, we get three numbers, and

these will be the three non-zero taps of the local interpolation filter.

For instance, suppose we want tau equal to 0.2; these will be the three coefficients

that we'll have to use in order to compute the subsample approximation at tau = 0.2.
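For N = 1, expanding the standard Lagrange formula on the nodes -1, 0, 1 gives the three polynomials t(t-1)/2, (1-t)(1+t), and t(t+1)/2 (my own expansion, not copied from the slide), so the taps for any tau can be evaluated directly:

```python
def n1_taps(tau):
    """The three filter taps for N = 1 (second-order local interpolation)."""
    return (tau * (tau - 1) / 2,      # L_{-1}(tau)
            (1 - tau) * (1 + tau),    # L_0(tau)
            tau * (tau + 1) / 2)      # L_1(tau)

print(n1_taps(0.2))   # approximately (-0.08, 0.96, 0.12); the taps sum to 1
```

The taps summing to 1 is no accident: the Lagrange basis reproduces constant signals exactly.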

Okay, so now that we know how to cheaply compute a fractional delay,

we can shift a whole sequence by a fractional amount.

How does that apply to sample rate conversion?

Well, take for instance CD-to-DVD conversion.

What we need to do is for every 147 samples coming from the CD,

we need to generate 160 DVD samples.

So let's look at how the process works.

The first time the DVD and CD samples are aligned is at time 0.

So we take the CD sample and we put it out as a DVD sample.

The next time around,

we will need to put out a DVD sample before the next CD time.

And the difference here between the CD time and the DVD time,

since the DVD clock is going faster, is given by

a delay tau, which is actually 1 - 147/160, that is, 13/160 of a sample.

So we compute this sub-sample approximation at minus tau.

And we produce the first DVD sample.

At the next step, we're still lagging behind, because the rate of samples that we need

to produce for the DVD, is faster than the rate of samples of the CD.

And the lag between the current CD sample and

the current DVD sample will have doubled.

So now we have a subsample approximation, where the lag is 2 times tau.

The process continues like so.

And at every DVD time we accumulate an extra tau

in the delay with respect to the anchor CD time.

At one point however, the accumulated delay will make it so

that the distance from the nominal anchor sample is larger than 0.5.

So here, for instance, for index equal to 7,

the distance of 7 tau is larger than 0.5 in magnitude.

So instead of using 7 as the anchor point, we go back and

we use anchor point number 6 again because the distance between point number 6 and

the current subsample approximation is now less than one half in magnitude.

So we re-use the former anchor point.

We turn what used to be a delay into an advancement of 1 - 7 tau, and

then we continue again by accumulating delay, so

the interpolation point moves back toward the current anchor point.
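The accumulating offset and the anchor switch at the seventh DVD sample can be checked numerically (a small illustration of the schedule, not course code):

```python
# DVD sample m lands at time 147*m/160 in CD-sample units;
# the anchor is the nearest CD sample, and the remaining
# offset is the fractional amount tau must cover.
for m in range(9):
    t = 147 * m / 160
    anchor = round(t)
    print(m, anchor, round(t - anchor, 5))
# At m = 7 the accumulated delay 7*13/160 would exceed 0.5, so the
# anchor falls back to sample 6 and the offset becomes +(1 - 7*13/160).
```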

And then the process will continue like before with an accumulation of a delay of

tau at each new DVD sample.

If we do the math, we find that there is only a finite number of possible values for

tau in the whole interpolation process.

Namely, there are exactly 160 possible values of tau,

after which the process repeats itself.

So in order to perform efficient CD to DVD conversion,

all we need to do is to precompute 160 FIR filters of length 3 and

use them in sequence on the CD data to produce the DVD audio samples.
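Putting it all together, here is a minimal sketch of the converter, assuming NumPy; the edge padding, the phase bookkeeping, and the function names are my choices, not the course's implementation:

```python
import numpy as np

def lagrange_taps(tau, N=1):
    """Taps of the (2N+1)-tap local Lagrange interpolation filter."""
    nodes = np.arange(-N, N + 1)
    return np.array([
        np.prod([(tau - m) / (k - m) for m in nodes if m != k])
        for k in nodes
    ])

def cd_to_dvd(x, N=1):
    """44.1 kHz -> 48 kHz: emit 160 output samples per 147 inputs,
    cycling through 160 precomputed short FIR filters."""
    # The phase p = (147*m) mod 160 determines tau for output sample m,
    # so only 160 distinct filters ever occur.
    filters = {}
    for p in range(160):
        tau = p / 160 if p <= 80 else p / 160 - 1   # keep |tau| <= 0.5
        filters[p] = lagrange_taps(tau, N)
    xp = np.concatenate((np.zeros(N), x, np.zeros(N)))  # pad the edges
    y = []
    for m in range(len(x) * 160 // 147):
        t = 147 * m                     # output time, in 1/160ths of a CD sample
        p = t % 160
        anchor = t // 160 + (p > 80)    # nearest CD sample
        if anchor >= len(x):
            break
        y.append(np.dot(xp[anchor:anchor + 2 * N + 1], filters[p]))
    return np.array(y)
```

Resampling a low-frequency test sinusoid this way and comparing it against a tone synthesized directly at 48 kHz leaves only a small residual, consistent with the second-order accuracy of the N = 1 interpolator.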