
In my attempt to understand where Euler's identity ($e^{it}=\cos(t)+i\sin(t)$) comes from, I am using the fact that the derivative of $e^{it}$ is $ie^{it}$, which is perpendicular to $e^{it}$ for all $t$ (which is why the curve traces a circle). However, I can't seem to verify this using Mathematica.

I read elsewhere that the dot product of continuous functions is $$\frac{1}{T}\int_T{f(t)\overline{g(t)}dt}.$$ So I tried typing this into Mathematica as:

p[t_] := Exp[I t]            (* point on the unit circle *)
v[t_] := D[p[t], t]          (* its derivative, I Exp[I t] *)
(1/(2 Pi)) Integrate[p[t] Conjugate[v[t]], {t, 0, 2 Pi}]

This results in $-i$, not $0$ as I was expecting. However, if I do not take the conjugate of $v(t)$, then the result is $0$. What am I missing here?

Alex D
  • The product of two complex numbers is not the same as the dot product of their representations in $\mathbb{R}^2$ – rubikscube09 Mar 21 '20 at 20:12
  • I'm a bit confused here. I saw from this answer https://math.stackexchange.com/a/801884/761845 that you can get the dot product of continuous complex functions using that formula? Is my expectation that it will be 0 to show it's orthogonal wrong or is my understanding of the formula wrong? – Alex D Mar 21 '20 at 20:36

2 Answers


The inner product (dot product) of two complex-valued, continuous functions on the interval $[0,T]$ is given by: $$ \frac{1}{T}\int_0^T f(t)\overline{g(t)}\,\mathrm{d}t. $$ Here the product inside the integral is the product of complex numbers. If we write $$ f(t) = u(t) + iv(t), \qquad g(t) = p(t) + iq(t), $$ then we are computing the integral $$ \int_0^T (u(t) + iv(t))\overline{(p(t) + iq(t))}\,dt = \int_0^T(u(t) + i v(t)) (p(t) - iq(t))\,dt $$ $$ = \int_0^T\underbrace{[u(t) p(t) + v(t)q(t)]}_{\text{Real Part}} + \underbrace{[p(t)v(t) - u(t)q(t)]}_{\text{Imaginary Part}}\,i \,dt. $$ In particular, let $f(t) = e^{it}$ and $g(t) = ie^{it}$. If you calculate the integrand above, you'll see that it doesn't vanish: its real part is $0$, but its imaginary part is identically $-1$. Thus these functions are not orthogonal with respect to this inner product, because the integrand is the product of complex numbers, not the dot product of plane vectors.
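
Here is a quick Mathematica check of this (a minimal sketch; the function names f and g are mine and mirror the $f$ and $g$ above):

f[t_] := Exp[I t]
g[t_] := I Exp[I t]
Simplify[ComplexExpand[f[t] Conjugate[g[t]]]]            (* integrand: constantly -I, so it never vanishes *)
(1/(2 Pi)) Integrate[f[t] Conjugate[g[t]], {t, 0, 2 Pi}]  (* inner product over [0, 2 Pi]: -I, as in the question *)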

However, at each fixed $t$, $f(t)$ and $g(t)$ are single vectors in $\mathbb{R}^2$. These vectors (not the functions) have the standard dot product: $$ d(t) = f(t) \cdot g(t) = u(t)p(t) + v(t)q(t). $$

In this case you'll see that $d(t)$ vanishes for all $t$. However, $d(t)$ isn't the quantity that appears in the integral; the integrand is the product of complex numbers, which has a different geometric interpretation: while the dot product of two orthogonal vectors vanishes (that is essentially the definition of the dot product and of orthogonality), their product as complex numbers does not!
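
For contrast, here is a minimal Mathematica sketch of the pointwise dot product $d(t)$ for the same pair of functions (again, the names are mine):

f[t_] := Exp[I t]
g[t_] := I Exp[I t]
d[t_] := Re[f[t]] Re[g[t]] + Im[f[t]] Im[g[t]]   (* dot product of the R^2 vectors (Re f, Im f) and (Re g, Im g) *)
Simplify[ComplexExpand[d[t]]]                    (* 0, so the two vectors are perpendicular at every t *)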

rubikscube09
  • So the part that is orthogonal is the dot product of the real-valued portion of the complex number? The way I visualize this in my head is a vector to the complex point for some $t$, $e^{it}$, and the derivative, $ie^{it}$, with a line that meets up to the first complex vector and forming a right angle. So then because there's that right angle as the point moves in the direction of the derivative, it forms a circle. So from what I am getting out of your answer, it's invalid to say two complex numbers are orthogonal? I guess I don't understand the point of this definition of the inner product – Alex D Mar 21 '20 at 21:01
  • if it does not express orthogonality – Alex D Mar 21 '20 at 21:01
  • The inner product represents orthogonality of functions. Just because two vectors are orthogonal at each point doesn't mean the functions are orthogonal or should be orthogonal. – rubikscube09 Mar 21 '20 at 21:10
  • So that's why $\int_0^{2\pi}{\cos(t)\sin(t)dt} = 0$? So then functions that produce orthogonal vectors may not necessarily be orthogonal themselves? So how do I express what I'm trying to explain in the comment above, using regular dot products or different functions (like the other stack exchange post I linked, which mentions $e^{inx}$ and $e^{-inx}$)? – Alex D Mar 21 '20 at 21:21
  • You've expressed it pretty well. The inner product of the two vectors is 0 for every t. That expresses your insight pretty accurately. Regarding sin and cosine, yeah that's exactly it. It's a different kind of orthogonality, one that has a special meaning (best example of where this is relevant is Fourier series). – rubikscube09 Mar 21 '20 at 21:23
  • The Fourier transform is actually the reason I've been looking into this. So why is it, then, that the fourier transform uses $\frac{1}{T}\int_T{f(t)\overline{g(t)}dt}$ as an inner product when it is the product of complex numbers as the integrand? – Alex D Mar 21 '20 at 23:18
  • Because it satisfies all the properties an inner-product or dot product on a vector space (this time, a vector space of functions) should satisfy – rubikscube09 Mar 23 '20 at 01:32
  • Okay, thank you. I looked up the axioms for inner products on a vector space and that makes sense to me now. – Alex D Mar 23 '20 at 04:09
  • No problem. It was probably confusing at this point to make sense of the dot product on a space and then a dot product (inner product) on functions that map into that space, given that the two concepts have subtle differences. – rubikscube09 Mar 23 '20 at 04:13

By this time (based on other comments) you may be cluing into the fact that there are many kinds of vector spaces, not all of which have vectors that can easily be drawn as arrows with a finite number of real coordinates. There are therefore also many kinds of inner products.

The inner product of continuous functions that you got from Understanding dot product of continuous functions is based on the idea that the entire definition of the function over its entire domain is a single vector. You can't describe one of these vectors just by giving an $x$ coordinate and a $y$ coordinate. Indeed any finite number of coordinates is not enough in general.

You appear to be trying to treat a complex number as a vector, which makes some intuitive sense if you think of plotting a complex number on a plane with two coordinates, one for the real part and one for the imaginary part, writing the number as $x + iy.$ The complex numbers you are interested in happen to be functions of a parameter $t,$ but a single value of $t$ gives you a single vector as a result; you do not have a vector corresponding to the entire definition of the function.

The kind of orthogonality you are going for is exemplified by the two formulas for your two vectors: $$ e^{it} \qquad \text{and} \qquad ie^{it}. $$

Notice that the only thing different about the second formula is the extra factor of $i.$ Multiplying a complex number by $\pm i$ "rotates" it $90$ degrees (in your visualization of the complex plane); if two numbers are orthogonal (in the sense you are looking for) then the ratio of the two numbers is some real multiple of $i.$ That is, if $w$ and $z$ are complex numbers, orthogonal in the sense you want, then $$ \frac wz = ir \quad \text{where $r$ is real.}$$

This definition is a bit awkward (the "where $r$ is real" part), but we can use the fact that $ir + \overline{ir} = 0$; we can say $w$ and $z$ are orthogonal if $$ \frac wz + \frac {\overline w}{\overline z} = 0. $$

This definition does not work if $z=0,$ but if we multiply all terms by $z\overline z$ then we get the equation $$ w{\overline z} + {\overline w}z = 0. $$
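
As a sanity check, here is a hedged Mathematica sketch (the assignment of w and z is mine) verifying that this criterion holds for the pair from the question, $w = ie^{it}$ and $z = e^{it}$:

w = I Exp[I t];
z = Exp[I t];
Simplify[ComplexExpand[w Conjugate[z] + Conjugate[w] z]]   (* 0 for every real t *)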

Note that if $w = a + ib$ and $z = c + id$ then $$ \frac12 \left(w{\overline z} + {\overline w}z\right) = ac + bd, $$ which is what you might want for an inner product.
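
A quick symbolic check of that identity in Mathematica (a sketch; $a, b, c, d$ are assumed real, which is what ComplexExpand assumes by default):

(* with w = a + I b and z = c + I d *)
Simplify[ComplexExpand[(1/2) ((a + I b) Conjugate[c + I d] + Conjugate[a + I b] (c + I d))]]
(* gives a c + b d, the ordinary dot product of (a, b) and (c, d) *)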

David K
  • I suppose my entire intuition of complex numbers is that it's similar to $(x, y)$ vectors where it's (re, im) instead. So when I saw that definition of an inner product on functions I assumed that the functions were producers of vectors (say $f: \mathbb{R}\to (x, y)$ or $f: \mathbb{R}\to\mathbb{C}$) and then the integral was like adding the result of every resulting vector of $f$ and $g$, similar to the dot product being every matching member of two vectors multiplied and then added together. I'm still pretty confused, but it's starting to make more sense. Or at least I know where I'm wrong. – Alex D Mar 21 '20 at 22:49
  • Note that even if you have two vector-valued functions and an integral $0 = \int_0^T u(t)\cdot v(t)\,dt,$ this only implies that the average dot product of the vectors $u(t)$ and $v(t)$ is zero, not that the vectors are always orthogonal. I think you want something that's true at every $t,$ not just some fact about something averaged over $t.$ – David K Mar 21 '20 at 23:22
  • Is that what the fourier transform is actually finding, then (the average)? I've been viewing it as the continuous analogue of a change of basis in $\mathbb{R}^2$ with $\vec{a}$, $\vec{b}$ orthogonal vectors of length 1, so that some $\vec w = \langle\vec{a}, \vec{w}\rangle\,\vec a + \langle\vec{b}, \vec{w}\rangle\,\vec b$. So essentially, the integral finds the fourier coefficient that's the equivalent of the dot product $\langle\vec{a}, \vec{w}\rangle$ or $\langle\vec{b}, \vec{w}\rangle$. Sort of just scaling a vector. – Alex D Mar 21 '20 at 23:58
  • Both points of view are true. Treating the whole function as a vector, the Fourier transform finds the coordinates of that vector over an infinite basis. It does this by using an inner product that is based on an integral, which is a kind of sum, and the factor $\frac1T$ turns it into a kind of average. – David K Mar 22 '20 at 01:39
  • Is that infinite basis, say, $[-\infty, \infty]$ Hz in frequency space? – Alex D Mar 22 '20 at 03:36
  • There are some variations of "Fourier" transforms, but there is one with a basis consisting of exponential functions with a "frequency" parameter that ranges over $(-\infty, \infty)$, so yes. – David K Mar 22 '20 at 13:32
  • I think it just all clicked with me. I was messing around in Mathematica and I noticed that when I do Integrate[Exp[2*t*I] Exp[3*t*I], {t, 0, 2Pi}] it equals 0, BUT if the coefficient of $t$ is the same but negative THEN we get $2\pi$. So that's why the Fourier transform works! It gets the $n$th coefficient $a_n$ by multiplying by the conjugate so that it is not orthogonal and you get a value out of it. And that's why we have to scale by $\frac{1}{2\pi}$, to make it equal $1$, so when we "change of basis" into the frequency basis it gives us $a_n$ instead of a scaled-up $a_n$. – Alex D Mar 23 '20 at 00:23
  • So $\int_0^{2\pi} e^{iat}\,\overline{e^{ibt}}\,dt = \int_0^{2\pi} e^{iat}e^{-ibt}\,dt = 0$ for all integers $a \neq b$, and it equals $2\pi$ when $a = b$ (a quick check of this is sketched after these comments). Is my understanding correct finally? @david-k – Alex D Mar 23 '20 at 00:25
  • So, then, $(..., e^{-it}, 1, e^{it},e^{2it},e^{3it},...)$ forms the basis for the Fourier transform. – Alex D Mar 23 '20 at 00:27
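
Since the comments arrive at the orthogonality of the exponentials $e^{int}$ on $[0, 2\pi]$, here is a small Mathematica sketch (the helper name ip is mine) tabulating the inner product $\frac{1}{2\pi}\int_0^{2\pi} e^{imt}\,\overline{e^{int}}\,dt$ for a few integer frequencies:

ip[m_Integer, n_Integer] := (1/(2 Pi)) Integrate[Exp[I m t] Exp[-I n t], {t, 0, 2 Pi}]
Table[ip[m, n], {m, -2, 2}, {n, -2, 2}] // MatrixForm   (* 5 x 5 identity matrix: 1 when m == n, 0 otherwise *)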