How To Find Coefficients Of Fourier Series

Finding Fourier coefficients

Rather than proving Fourier's claim for any arbitrary f(t) – and again, we don't know what ''any'' means yet – it's easier to start off with a simpler question:

Problem Statement

If we know that f(t) can be written as a sum

     f(t) = A_0 + sum_{n=1}^{N} A_n sin (2 pi n t + phi_n),

then how do we find the values of the coefficients A_n and phi_n so that the RHS sums up to the LHS?

Notice the subtle difference: we're not trying to show that any f(t) is expressible as a Fourier sum; we are given one that is, and we merely want to find the right coefficients that properly give us f(t).

Massaging into a better form

There are many ways to express a sum of sinusoids. The one we wrote above – a sum of sines with differing amplitudes and phases – is the easiest to understand intuitively, but it's the hardest to work with algebraically. That is, the expressions for the coefficients are going to be pretty messy and unenlightening. So it's well worth our time to massage the expression into other equivalent forms.

Sines and cosines

The first thing we can do is use the sum and difference formulas to rewrite a sine with a phase shift as a sum of a sine and a cosine:

     A_n sin (2 pi n t + phi_n) rightarrow a_n cos (2 pi n t) + b_n sin (2 pi n t)

This form is a wee bit nicer, partly because we don't have the phases floating around in the arguments of our sinusoids anymore. The information about amplitude and phase is contained in the coefficients a_n and b_n. Note that we haven't lost any information; we're free to go back and forth between the two forms.
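To make the back-and-forth concrete, here's a small Python sketch (my own addition, not from the lecture; the function names are made up). Expanding A_n sin (2 pi n t + phi_n) with the sum formula gives a_n = A_n sin phi_n and b_n = A_n cos phi_n, and we can invert with a square root and an arctangent:

    import math

    def amp_phase_to_ab(A, phi):
        # A sin(2 pi n t + phi) = a cos(2 pi n t) + b sin(2 pi n t),
        # with a = A sin(phi) and b = A cos(phi) by the sum formula
        return A * math.sin(phi), A * math.cos(phi)

    def ab_to_amp_phase(a, b):
        # invert: A = sqrt(a^2 + b^2), phi = atan2(a, b)
        return math.hypot(a, b), math.atan2(a, b)

    # round trip: start with amplitude 2.0 and phase 0.7, recover them
    a, b = amp_phase_to_ab(2.0, 0.7)
    print(ab_to_amp_phase(a, b))  # (2.0, 0.7) up to rounding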

Making this change, our sum becomes

     f(t) = frac{a_0}{2} + sum_{n=1}^{N} big[a_n cos (2 pi n t) + b_n sin (2 pi n t) big]

With the benefit of hindsight, we've renamed the constant term a_0 / 2 so that things work out nicer later.

Imaginary Exponentials

We can do better than this. By far, the best way (mathematically) to work with sinusoids is in terms of imaginary exponentials.

Remark

Using complex exponentials is a common 'cute' trick in math and physics. I remember when I first encountered them in solving for E&M waves, it felt pretty awkward and uncomfortable; I didn't really know what I was writing. I never actually grew out of the awkwardness; like many things in life, I guess once you see it enough, you just get used to it.

Now that I think about it, there are a few places where it's actually quite enlightening and helpful to think in terms of complex exponentials. When you're talking about damped harmonic motion (such as a spring-and-mass system in a bowl of syrup), the motion looks like a sinusoid inside a decaying exponential envelope; it's a bit nicer to just think in terms of an exponential with a complex exponent, where the real part is the decaying envelope and the imaginary part is the frequency. Or when you're talking about electromagnetic waves propagating in a dielectric (aka light travelling through a material), the real and imaginary parts of the dielectric constant correspond to absorptive losses that cause the light intensity to decay exponentially, or a phase shift (i.e., light speeding up or slowing down) inside the material.

Okay, enough of a tangent, back to the course.

Remember that Euler's formula relates imaginary exponentials to sinusoids:

     e^{i theta} = cos theta + i sin theta.

We can visualize this fact on the complex plane by observing how the real and imaginary components of the complex phase e^{i theta} oscillate as it rotates around the origin. It's a rather remarkable and beautiful fact that imaginary exponentials relate to sinusoids; the ordinary growing or decaying exponentials have nothing to do with periodicity!

Remark

I should say a bit more about how to interpret complex numbers like e^{i theta}. The cleanest way to think about complex numbers is as points on the complex plane, where ''pure phases'' such as e^{i theta} all fall on the unit circle, at an angle of theta radians from the x-axis. Since a rotation by 2 pi radians goes once around the unit circle, it's clear that e^{i (theta + 2 pi)} = e^{i theta}, because you wind up at the same point.

Functions such as g(t) = e^{2 pi i nu t} represent a point that starts off on the x-axis and then goes around the unit circle nu times per second. From this picture, it's pretty clear why we defined sines and cosines in terms of unit circles – the real component of the complex number is the cosine of the angle, and the imaginary component is the sine, which is precisely what Euler's formula tells us.

One other thing to note about functions of the form g(t) = e^{2 pi i nu t} is that if the frequency nu is an integer, then every whole number of seconds, we wind around the circle an exact integer number of times and end up at the point (x,y) = (1,0) again; in other words, if n is an integer, then e^{2 pi i n} = 1.

If we flip the sign of the exponent in Euler's formula, we get an expression for e^{- i theta} = cos theta - i sin theta. (This represents a complex number revolving clockwise around the origin as theta increases, corresponding to negative frequency!) If we take the appropriate linear combinations of these two expressions, we can solve for sine and cosine in terms of the imaginary exponentials:

     cos theta = frac{e^{i theta} + e^{-i theta}}{2}; quad     sin theta = frac{e^{i theta} - e^{-i theta}}{2i}.
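If you'd like to convince yourself numerically, here's a quick Python check (my own sketch, not part of the lecture) of these two identities, along with the e^{2 pi i n} = 1 fact from above:

    import cmath, math

    theta = 1.234  # an arbitrary test angle
    cos_rhs = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
    sin_rhs = (cmath.exp(1j * theta) - cmath.exp(-1j * theta)) / (2j)
    print(abs(cos_rhs - math.cos(theta)))  # ~0
    print(abs(sin_rhs - math.sin(theta)))  # ~0

    # winding around the circle a whole number of times lands back at 1:
    print(cmath.exp(2j * math.pi * 7))  # ~(1+0j)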

Notice that the functions of positive theta turned into functions of positive and negative theta. As a result, the range of the frequencies n in our Fourier sum is expanded. Rather than being restricted to positive frequencies 0 < n <= N, we are now summing over positive and negative frequencies -N <= n <= N. The n = 0 term corresponds to the constant offset term that we introduced.

After doing a bit of algebraic fiddling, we can now express our periodic f(t) as

     f(t) = sum_{n=-N}^{+N} c_n e^{2 pi i n t},

where the new (complex!) coefficients c_n are given in terms of our old real coefficients a_n and b_n by

     c_n = frac{a_n - i b_n}{2} quad (n > 0); qquad c_{-n} = frac{a_n + i b_n}{2} = overline{c_n} quad (n > 0); qquad c_0 = frac{a_0}{2}.

Notice that we are now expressing our real function f(t) in terms of complex quantities, which might make us feel a bit queasy, but Prof. Osgood soothed us by saying that there's really no ''existential difficulty'' to this affair. What we've discovered is something more like this: f(t) might in general be a complex function, but as long as the coefficients satisfy the symmetry property c_{-n} = overline{c_n} (where the overbar signifies the complex conjugate), then f(t) is guaranteed to be real.
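Here's a little numerical sanity check of that reality claim (again my own sketch, with made-up coefficient values): build the c_n from real pairs (a_n, b_n), impose c_{-n} = overline{c_n}, and watch the imaginary part of the sum vanish.

    import numpy as np

    a = [0.5, 1.0, -0.3]  # a_0, a_1, a_2 (made-up values)
    b = [0.0, 0.2, 0.8]   # b_1, b_2 are used; b_0 is irrelevant

    # c_0 = a_0/2, c_n = (a_n - i b_n)/2, c_{-n} = conj(c_n)
    c = {0: a[0] / 2}
    for n in (1, 2):
        c[n] = (a[n] - 1j * b[n]) / 2
        c[-n] = np.conj(c[n])

    # evaluate f(t) = sum_n c_n e^{2 pi i n t} on a grid of times
    t = np.linspace(0.0, 1.0, 101)
    f = sum(cn * np.exp(2j * np.pi * n * t) for n, cn in c.items())
    print(np.max(np.abs(f.imag)))  # ~0: the symmetry makes f real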

Solving for the coefficients

After all this massaging, our problem can be phrased in a much more tractable and concise fashion:

Problem statement

Suppose we are given a periodic function f(t) that can be expressed as a weighted sum of exponentials as

     f(t) = sum_{n=-N}^{+N} c_n e^{2 pi i n t}.

How do we find the complex coefficients c_n?

It's time to perform the classic derivation of the Fourier coefficients.

Our goal is to solve for a particular coefficient – let's say the k'th coefficient c_k. To isolate c_k, the first thing we might try to do is to split the sum on the RHS into two parts: the term with n = k and all the other terms with n neq k:

   f(t) = c_k e^{2 pi i k t} + sum_{n neq k} c_n e^{2 pi i n t}.

If we shuffle the terms around so that the c_k term is alone on the LHS, we end up with

   c_k e^{2 pi i k t} = f(t) - sum_{n neq k} c_n e^{2 pi i n t}.

To get c_k by itself, we divide both sides by e^{2 pi i k t}, or equivalently, multiply by e^{-2 pi i k t} (with a minus sign in the exponent), and end up with the expression

   c_k = e^{-2 pi i k t} f(t) - sum_{n neq k} c_n e^{2 pi i (n - k) t}.

However, this expression isn't particularly useful for us, because all we've done by this point is express the k'th coefficient in terms of all the other coefficients! If we don't know any of the coefficients, we can't get anywhere.

Fourier's Trick

To proceed further, we'll pull a little rabbit out of a hat and do a little bit of calculus to make the second term on the RHS disappear. David Griffiths likes to call this step ''Fourier's Trick'' because it's pretty clever and kind of magical. Later on in the class, I'm guessing we'll justify it in terms of the orthogonality of the basis functions, but for now, it's just a magic trick.

The trick is to integrate both sides of the equation from 0 to 1. There's no real motivation for why we want to do this quite yet (until we learn about orthogonality!), but at the least, we can justify the limits of integration 0 and 1 by appealing to the periodicity of f(t). Since f(t) has period 1, all its information is contained in the range from 0 to 1; if we integrate over less than that, we'll miss some details, and if we integrate over more, we'll include redundant information because f(t) will repeat.

Anyways, we have three terms to integrate, and we'll consider them one-by-one.

The term on the LHS, c_k, is just a constant that we can pull out of the integral; when we integrate it from 0 to 1, nothing happens because int_0^1 dt = 1. So the LHS stays c_k.

The first term on the RHS will look like int_0^1 e^{-2 pi i k t} f(t) dt, which, in principle, is a known value that we can compute. We know f(t), we know what k we're considering, so we just need to do the integral to figure out the value.

The last term on the RHS is where the magic happens: every single term in the sum becomes zero when you integrate from 0 to 1! To see why, consider what happens when you integrate any one of the terms. We can pull the constant c_n out of the integral, and we're left with something of the form

 int_0^1 e^{2 pi i (n - k) t} dt,

which we can evaluate like any other exponential integral:

 int_0^1 e^{2 pi i (n - k) t} dt = frac{1}{2 pi i (n - k)} bigg[ e^{2 pi i (n - k) t} bigg]_{t=0}^{t=1} = frac{1}{2 pi i (n - k)} bigg[ e^{2 pi i (n-k)(1)} - e^{2 pi i (n-k)(0)} bigg]

Now the crucial insight: since n - k is an integer, both of the exponentials on the RHS are just exponentials of an integer multiple of 2 pi i; in other words, they both evaluate to one. One minus one is zero, and so the integral vanishes. So the entire sum over n neq k vanishes when we integrate from 0 to 1!
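A quick numerical illustration (mine, not the lecture's): approximate the integral with a Riemann sum and watch it vanish whenever n neq k, while equaling one when n = k.

    import numpy as np

    def overlap(n, k, num=10_000):
        # left-endpoint Riemann sum for int_0^1 e^{2 pi i (n-k) t} dt
        t = np.arange(num) / num
        return np.mean(np.exp(2j * np.pi * (n - k) * t))

    print(abs(overlap(5, 2)))  # ~0: distinct frequencies integrate away
    print(abs(overlap(3, 3)))  # 1.0: the integrand is identically one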

And so to summarize: when we integrate both sides of the expression for c_k, the LHS stays the same, the first term on the RHS doesn't simplify, and the second term on the RHS vanishes. Our final result is the formula for Fourier coefficients:

Result

If a periodic function f(t) can be expressed as a weighted sum of exponentials as

     f(t) = sum_{n=-N}^{+N} c_n e^{2 pi i n t},

then the Fourier coefficients c_n are given by

     c_n = int_0^1 f(t) e^{-2 pi i n t} dt.
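As a closing sanity check (my own sketch, with made-up coefficients), we can build a signal out of known c_n and then recover them with this formula, once more approximating the integral by a Riemann sum:

    import numpy as np

    true_c = {0: 0.5, 1: 0.3 - 0.1j, -1: 0.3 + 0.1j, 2: 0.25j, -2: -0.25j}

    num = 10_000
    t = np.arange(num) / num
    f = sum(cn * np.exp(2j * np.pi * n * t) for n, cn in true_c.items())

    # c_n = int_0^1 f(t) e^{-2 pi i n t} dt, as a discrete average
    for n in sorted(true_c):
        c_n = np.mean(f * np.exp(-2j * np.pi * n * t))
        print(n, np.round(c_n, 6))  # matches true_c[n]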

There's a lot of things to be said about this statement, but class time was over, so we'll have to hear them next time around.

