Description: This lecture continues the topic of Hij integrals and H matrices.
Instructor: Prof. Robert Field
Lecture 14: From Hij Integr...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit ocw.mit.edu.
ROBERT FIELD: So as I said at the beginning of the course, this is quantum mechanics for use, not admiration, and not historical. You're going to leave this course knowing how to solve a very large number of quantum mechanical problems. Or if not to solve it, to get insight into how you would solve it. And so I presented a couple of exactly solved problems, and that's not because they're historically important. It's because you're going to use them, and you're going to embed the results of those exactly solved problems in an approximate approach to almost any problem.
And the vast majority of problems that you would face use the harmonic oscillator as the core of your understanding because almost all potentials have a minimum, and so that means the first derivative is zero at the minimum. So you don't care about the first derivative. You care about where the minimum is. And the second derivative is the dominant thing, and that's the harmonic aspect. And so using a harmonic oscillator basis set, you're going to be able to attack every problem. And another thing about the harmonic oscillator is that it uses these As and A daggers, which are magic.
And what happens is you forget, totally, about the wave functions. The wave functions are there if you want to calculate some kind of probability amplitude distribution, but you never look at them when you're solving the problem, and that's a fantastic thing. Now, there is one more exactly solved problem that I want to talk about today, which is not usually included in the list of exactly solved problems because it's kind of special. The two-level problem. The two-level problem is exactly solved because there are only two levels, and the solution of that problem involves the quadratic equation.
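Those a's and a daggers can be made concrete as matrices. A minimal sketch in a truncated harmonic oscillator basis (the basis size N = 8 is an assumption for illustration; the true basis is infinite):

```python
import numpy as np

N = 8  # truncated basis size; the true harmonic oscillator basis is infinite

# Lowering operator a: the only nonzero elements are <n-1|a|n> = sqrt(n)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.T  # raising operator a-dagger: <n+1|a-dagger|n> = sqrt(n+1)

# The number operator a-dagger a is diagonal with eigenvalues 0, 1, 2, ...
n_op = adag @ a
print(np.diag(n_op))  # prints [0. 1. 2. 3. 4. 5. 6. 7.]

# The commutator [a, a-dagger] equals 1 in every basis state except the last,
# where the truncation of the infinite matrix shows up
comm = a @ adag - adag @ a
```

Everything you need from the oscillator can be built from these two arrays without ever writing down a wave function, which is the point being made here.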
So there is an exact solution. And this is used a lot in introducing new techniques in quantum mechanics, and so you're going to see the two-level problem again and again as opening the door to being able to deal with much more complicated problems, and I'm going to try to refer to that a lot. OK, so we're about to go from the Schrodinger picture to matrix mechanics, and I'd like to have some comments about what are the elements of the Schrodinger picture. And there's no wrong answer here, but I'm looking for certain things. Anybody want to tell me? Yes?
AUDIENCE: Is it based off wave equation or the Schrodinger equation, which is very similar to the wave equation?
ROBERT FIELD: Yeah. So we have a differential equation, and the solutions are wave functions. More?
AUDIENCE: [INAUDIBLE]
ROBERT FIELD: Well, mathematics is challenging to some people. Some people really love it. But yes, it's much more mathematical because you're faced with solving differential equations, coupled differential equations, and you're challenged with calculating a lot of integrals, and sometimes the integrals are not over simple functions. They're complicated. But the main thing is you have this thing which you could never observe, but is somehow the core of everything that you can know, and that really-- as I get older, it bothers me more and more. Not that I'm going to invent some way to do quantum mechanics totally without a wave function.
The most important thing, I think, is that when you work in the Schrodinger picture, you get one wave function and one eigenvalue at a time. And so yes, you can solve these problems, but no, you don't get insight. You don't see the overall structure. You just see well, if you did this experiment, this is what you would observe. That's perfectly wonderful. And now, some anticipation. What's so special about the Heisenberg picture? Or what is the Heisenberg picture? Now, I haven't lectured on it so if you read my lecture notes, you know the answers to all of this, but can we speculate about what goes here? Yes?
AUDIENCE: [INAUDIBLE]
ROBERT FIELD: OK, so every operator can be represented by a matrix, and the elements of that matrix are calculated usually automatically. Often, they're an infinite number of basis states, so you sort of refer to the Schrodinger picture, but anyway, these matrices are arrays of numbers, and they can be infinite arrays of numbers, but you don't write an infinite array of numbers down. You write a few numbers and you recognize what the pattern is, and usually, you have some function of the quantum numbers, and that generates all the elements in this matrix. And the solutions are going to be certain eigenvectors, and instead of differential equations, it's linear algebra.
Now, for differential equations, you'll learn a lot of tricks. For integrals, you learn a lot of tricks. For linear algebra, there aren't any tricks. It's all right there. And so it's a much more transparent-- now, it might be numerically demanding because you're dealing with very large arrays of numbers, and you're trying to get something from it, but the linear algebra is simple. OK, and we have these things called matrix elements. I can use this word now because the numbers in a matrix are called matrix elements. They're integrals, but they're integrals that are, more or less, given to you.
And then we have infinite matrices. And in the Schrodinger picture, we might have infinite basis sets, but we don't really think too much about the problem of infinities because we're looking at things one at a time. Here, the operators are implicitly infinite, and we have to do something about that because no computer can find the eigenvalues and eigenvectors of an infinite matrix. So somehow, you have to truncate it, and you have to use some approximations, and the framework for that is perturbation theory.
Now, I love perturbation theory. I've made my career out of perturbation theory and you're going to see a lot of it, but not today. The next exam is going to have a lot of perturbation theory. OK, so let's review where we've been. With a two-level problem, we have two energy levels, and there is some interaction between them, and we get E plus, psi plus, and E minus, psi minus. So we have these integrals, H1,1, H2,2, and H1,2, and because it's a Hermitian matrix, H1,2 is equal to H2,1 star.
That's the definition of a Hermitian matrix, and that means if the H1,2 element is imaginary, the H2,1 matrix element is imaginary, but with the opposite sign. Now, almost all of the problems we're going to face are expressed in terms of real matrix elements, and so all you're really doing to go from the 1,2 to the 2,1 is just reversing the order of the index. Nothing else is happening, and that has also some great simplicity in the solutions we use to solve the two-level problems. OK, the solutions to the two-level problem are based on some simplifications.
So what we do is we make the two by two matrix symmetric by subtracting out the average energy. We then define two numbers: E bar, which is H1,1 plus H2,2 over two, and delta, which is H1,1 minus H2,2 over two. So we have E bar and delta, and we have the interaction, which is H1,2, and we call it v. And we have to be a little careful because v could be complex or imaginary, but for now, we're just going to treat it always as real. OK, so for this two-level problem, the energy levels are given by E bar plus or minus the square root of delta squared plus v squared. Or E bar plus or minus the square root of x, where x is delta squared plus v squared.
And the eigenfunctions are expressed in terms of these coefficients-- c1 plus or minus is equal to the square root of 1/2 times the quantity 1 plus or minus delta over the square root of x. The square root is outside the bracket. And c2 plus or minus looks almost the same, except we have 1 minus or plus delta over the square root of x. So in this reduced picture, there's not too much to remember. The eigenfunctions for the higher energy and the lower energy eigenstates differ only by these signs.
OK, now, this is a lot. I derived it in lecture last time, and the algebra is horrible, and I'm not good at presenting algebra, but it's all in the notes. But if you take these formulas and you try them on for size, you will be able to verify that these give normalized and orthogonal eigenfunctions, which I recommend, mostly. OK, we take this expression for the eigenvalues, and let's just take one of the eigenfunctions and find out whether it belongs to the correct eigenvalue. If it doesn't, you've made an algebraic mistake. It's a very useful thing.
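One way to try these formulas on for size, as recommended: plug in numbers and check normalization, orthogonality, and the eigenvalue equation. A minimal sketch (the values of E bar, delta, and V are arbitrary assumptions, and the sign choices assume delta and V are positive):

```python
import numpy as np

# Sample values (assumptions; any real delta > 0, V > 0 works with these signs)
Ebar, delta, V = 0.0, 1.0, 0.5

x = delta**2 + V**2          # the quantity under the square root
rx = np.sqrt(x)

# Eigenvalues: E plus/minus = Ebar +/- sqrt(delta^2 + V^2)
E_plus, E_minus = Ebar + rx, Ebar - rx

# Eigenvector coefficients from the formulas above
c1 = np.sqrt(0.5 * (1 + delta / rx))
c2 = np.sqrt(0.5 * (1 - delta / rx))
c_plus = np.array([c1, c2])
c_minus = np.array([-c2, c1])  # orthogonal partner

H = np.array([[Ebar + delta, V],
              [V, Ebar - delta]])

# Normalized, orthogonal, and each belongs to the correct eigenvalue
assert np.isclose(c_plus @ c_plus, 1.0) and np.isclose(c_plus @ c_minus, 0.0)
assert np.allclose(H @ c_plus, E_plus * c_plus)
assert np.allclose(H @ c_minus, E_minus * c_minus)
```

If any of these assertions fails with your own formulas, you've made an algebraic mistake, which is exactly the check being recommended.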
OK, so now we're going to take the results for this two-level problem solved algebraically, and at the core of that was the quadratic energy formula. The quadratic formula for the equation that determines the energy. The quadratic formula is always applicable. It's exact. So what we end up getting is analytical expressions. It doesn't matter what the value of the two critical quantities, delta and x, are. You have solutions. So now we're going to take all this stuff and we're going to start over. We're going to rearrange it so that we have a different way of approaching the problem.
So we're going to start talking about matrices rather than operators. And so I represent a matrix by a boldface-- this symbol is how you ask a computer to make it-- yes, it's how I represent a matrix, instead of a hat, which is the way you represent an operator. But now we're going to say every operator is represented by a matrix rather than a differential operator. And so this Hamiltonian is E bar plus delta, v, v star, E bar minus delta. And v is usually real.
And another way we can say this is it's E bar times this fancy 1 with an under bar, plus the matrix delta, v, v star, minus delta. So this is a symmetric Hamiltonian matrix. This is the unit matrix. And now, since we're going to be doing matrix multiplications, let me just give you some mnemonics. So if we have a square matrix multiplying a square matrix, what we do is we multiply this row by that column, and we get one number, and you fill out the square matrix. And with a little practice, this will be permanently ingrained in your head. We also can have a matrix multiplying a vector.
And so a matrix multiplying a vector gives a vector, and this product gives a number here. And you've probably all seen these sorts of things or could grasp them very quickly, but it's useful just to not get confused. We can also do something like this, and again, we use the usual vector and we get a vector. I'm sorry, we get a column. And this is a difficult symbol to make on a computer, but you get this first element here like that. And of course, you can do this times that, and you get a number. And you can also do it the other way around.
You can do this times that-- I'm sorry, don't do that. And you get a square matrix. So those are the things that you have to practice, and it becomes second nature very quickly, and it's a lot easier than doing differential equations, or derivatives, or integrals. OK, now, we use this superscript. This means transpose. This means complex conjugate and transpose. The theory deals with this, but when the Hamiltonian or the matrix you're interested in is real, the transformation that diagonalizes it is always real, so you only need the transpose.
So these two symbols look similar, but you won't have any trouble with that. And now, we have this kind of symbol. So this is a normalization. That should be equal to one, and it is because one times one plus zero times zero is one. And we can also look at this, and that's zero because zero times one is zero and one times zero is zero. So this is normalization, this is orthogonality. OK, we're playing with numbers, and we don't really look at the size, even though the numbers all are obtained by doing an integral over the wave functions and the operator, but that's something that you sort of do in first grade of quantum mechanics, and you forget how you did it, and you just know that they're there to be played with.
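The row-times-column products just described take only a couple of lines of numpy, using the two-level basis vectors:

```python
import numpy as np

e1 = np.array([1.0, 0.0])  # basis vector for state 1
e2 = np.array([0.0, 1.0])  # basis vector for state 2

# Row times column: a number
print(e1 @ e1)  # prints 1.0 -- normalization: 1*1 + 0*0
print(e1 @ e2)  # prints 0.0 -- orthogonality: 1*0 + 0*1

# Column times row ("the other way around"): a square matrix
outer = np.outer(e1, e2)
```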
OK, now, a unitary operator is one where the conjugate transpose-- the unitary matrix-- is equal to the inverse, which means T inverse times T equals one. So it's nice to be able to derive the inverse of a matrix. And for certain kinds of matrices, this is really easy because all you do is flip along the main diagonal. There are other matrices where you have to do a lot of work, but whenever you're dealing with matrices, you're not doing the work. The computer is doing the work. You teach the computer how to do matrix operations, and even if it's a hard one, the computer says OK, here it is.
OK, so if you have a real symmetric matrix, then you can say OK, T transpose A T gives a diagonal matrix with a1 through an on the diagonal and zeros everywhere else. So you can diagonalize a real symmetric matrix by this kind of a transformation. That's called an orthogonal transformation. And if it's not real, then you use the conjugate transpose, use the dagger, and you still get the diagonalization. Now, in most of the books that you'll ever look at about unitary transformations, they actually are giving you what's called the orthogonal transformation, and it's what works for a real matrix, and I'm going to do that too.
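The computer doing that work might look like this sketch, using numpy's symmetric eigensolver (the matrix entries are made-up sample values):

```python
import numpy as np

# A sample real symmetric matrix (the values are arbitrary assumptions)
A = np.array([[2.0, 0.7, 0.1],
              [0.7, 1.0, 0.3],
              [0.1, 0.3, 0.5]])

eigvals, T = np.linalg.eigh(A)  # columns of T are the eigenvectors

# T is orthogonal: its transpose is its inverse
assert np.allclose(T.T @ T, np.eye(3))

# T transpose A T is diagonal, with the eigenvalues a1..an on the diagonal
D = T.T @ A @ T
assert np.allclose(D, np.diag(eigvals))
```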
So when we have something like this, we say that this transformation diagonalizes A, or H, or whatever. And the word "diagonalizes" is really important because that's what you want because it presents all the eigenvalues. Remember, one of the things about quantum mechanics is you have an operator. You're going to observe something connected with an operator. The only things you can get are the eigenvalues. So here they are, all of them. That's kind of useful. OK, so in the Heisenberg picture, the key equation is the Hamiltonian as a matrix, some vector--
OK, this is the analog of the Schrodinger equation, but it's the Heisenberg equation. And mostly, it's just notation, and you have to get used to that. We want to find the vectors that are eigenvectors of this Hamiltonian with eigenvalue E. It's just like the Schrodinger equation, except it's now looking for eigenvectors rather than eigenfunctions and eigenenergies. Now, in order to solve this problem, we exploit this kind of transformation, and we insert T T dagger between H and c, and that's just one. So we don't even have to worry about the other side.
We're just playing with matrices, but they look like functions or variables, and everything is-- it's really neat how beautiful linear algebra is because you are now dealing with an infinite number of equations at once. You're dealing with these objects and you're using your insight from algebra as much as possible in order to figure out what's going on. Really beautiful. OK, and so now we must multiply this equation on the left by T dagger. OK, I'm dropping the under-bars now.
OK, so now we say OK, here we have H twiddle, and here we have c twiddle, and here we have c twiddle. So this is now a different eigenvector equation, but we insist that this guy, H twiddle, looks like E1, E2, down to En along the diagonal, with zeros everywhere else. A diagonal matrix, where all the eigenvalues are along the diagonal. And so this is what we want, and lo and behold, this is what we need in order to say, well, yeah, this thing has to be the eigenvector of this for one of the eigenvalues because this is an eigenvalue equation or an eigenvector equation.
So if we can diagonalize the Hamiltonian, the transformation that diagonalizes the Hamiltonian gives you the linear combination of basis vectors that is the eigenvector, and we'll talk about this some more. So for the two-level problem, we want to find E plus, 0, 0, E minus. And usually, E plus is the higher energy eigenvalue than E minus. Always, when you do this stuff, you get eigenvalues and you get eigenvectors. And frequently, when you do the algebra, as opposed to the computer doing the algebra, you don't know which eigenvalue a particular eigenvector belongs to.
So it's useful to have a couple of things that you normally insist on. And so I like to label these things by plus and minus, corresponding to which is higher energy and which is lower. You could also say, well, the plus is going to correspond to a plus linear combination somewhere, but that's really dangerous. So now, let's just play a little bit. So we have simplified H magically so far to diagonal form. So H C twiddle-- I'm sorry, yes H C twiddle is going to be E c twiddle.
So H twiddle c plus is going to be E plus, 0, 0, E minus, times one, zero, because that gives us E plus times one and E minus times zero. So multiply these two things. We have a column vector, and that's the same thing as E plus times one, zero. And we do the same sort of thing to-- instead of using c plus, we use c minus. That's a zero, one, and that will give us E minus times zero, one. This is all just playing with notation and we're about to start doing some work. OK, so T dagger times c is supposedly equal to c plus.
OK, and so well, we can write this formula in a schematic way, and so we have T dagger. Now, I always remember this because there used to be something analogous to Coke and Pepsi called Royal Crown Cola, and for Royal Crown, that just reminds me that row first, column second. I don't believe that any of you have ever heard of Royal Crown, but you could think of some other mnemonic. Now, it's really important to keep the rows and the columns straight. So we have T dagger 1,1. Now, what goes here? This is in the first row, second column.
So what do I put here? Yeah. You could even say it. OK, and here we have T dagger 2,1, and here we have T dagger 2,2. Now, if we multiply by one, zero, because that's what we're supposed to do here, we'll get T dagger 1,1 times one plus T dagger 1,2 times zero, and then T dagger 2,1 times one plus T dagger 2,2 times zero. OK, and that's simply T dagger 1,1 times one, zero plus T dagger 2,1 times zero, one. So this thing gives the linear combination of the basis vectors that is equal to a particular eigenvector.
So that means if we can find T, we can get T dagger, and we can get E plus and E minus, and c plus and c minus. So we have completely solved the problem if we know what T is. Well, with a two-level problem, we know algebraically that there is such a T, and that it's analytically determined. There is another way of approaching this, and that is to say the general orthogonal transformation, which we will call a unitary transformation, but it's missing a little bit of stuff if it really wants to be unitary. I'm going to call it-- so T dagger is cos theta, sin theta, minus sin theta, cos theta.
So this is a matrix which is determined by one thing, theta. We want to find what theta needs to be in order to diagonalize the matrix. Now, since we know we're talking about sines and cosines, and that there is one theta, we abbreviate this to c, s, minus s, c because the algebra is heinous. Not as bad as in the Schrodinger picture, but it's terrible, and so you want to compress the symbols as much as possible. OK, so we want T dagger HT because that's H twiddle. That's the thing we want, and we want T dagger HT. And now since we've expressed the T in this form, we can multiply this out. And so we have c, s, minus s, c, times delta, v, v, minus delta, times c, minus s, s, c.
So we have three two by two matrices to multiply. Now, that's not something you do in your head. You could do two. So you multiply these two, and then you multiply by that, and the result-- I would be here for hours doing this, and you wouldn't learn anything except that I'm a real klutz. I should write this on its own board because it's really important. So that matrix becomes a big matrix, c squared minus s squared times delta plus 2sc times V, and c squared minus s squared times V minus 2cs delta.
And we have the same thing down here, c squared minus s squared times V minus 2cs times delta. And the last one is minus c squared minus s squared times delta minus 2cs times V. So this is what we get when we take the general form for the unitary transformation, and transform the Hamiltonian with it. And the first thing we do is we say, well, we want this to be zero. If this is zero, then this is zero, right? So this turns out to be an equation that tells us what theta has to be. OK, and we also know from trigonometry, c squared minus s squared is what?
I'm actually jumping ahead, but it's just cosine two theta, and 2sc is sine two theta. So we're going to get a simplification based on this, but now let's just say we want this to be zero. So that means that c squared minus s squared times V has to be equal to 2cs times delta. Which way am I going? 2cs over c squared minus s squared is V over delta. Well, that looks pretty good, especially because 2cs is sine two theta and c squared minus s squared is cosine two theta, which means tan two theta is equal to V over delta. So now we have a simple equation. We have the theta, and we have the V over delta. I shouldn't do that.
So now I can cover this again. So we can take this equation and solve it, and we have that theta is equal to 1/2 the inverse tangent of V over delta. There it is. That's an analytic result. So for any value of V over delta, we know what theta is. That's not an iterative solution. That's a complete analytical result, and that's fantastic, and it says, just like with the quadratic formula, which we used to get the eigenvalues of the original Hamiltonian-- well, yeah, it says no matter what, there is a solution, and you can express this solution as some combination of V and delta.
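A quick numerical check of this analytic result, with delta and V as arbitrary sample values: theta equal to 1/2 the inverse tangent of V over delta really does zero the off-diagonal elements, and the diagonal comes out to plus and minus the square root of delta squared plus V squared.

```python
import numpy as np

delta, V = 1.0, 0.5  # assumed sample values, both positive

theta = 0.5 * np.arctan2(V, delta)  # theta = (1/2) arctan(V / delta)
c, s = np.cos(theta), np.sin(theta)

Tdag = np.array([[c, s],
                 [-s, c]])
H = np.array([[delta, V],
              [V, -delta]])  # E bar already subtracted out

Htwiddle = Tdag @ H @ Tdag.T  # T dagger H T

rx = np.sqrt(delta**2 + V**2)
assert np.allclose(Htwiddle, np.diag([rx, -rx]))  # diagonal, as promised

# The diagonal element delta*cos(2 theta) + V*sin(2 theta), which has no
# explicit square root, equals the quadratic-formula result sqrt(delta^2 + V^2)
assert np.isclose(delta * np.cos(2 * theta) + V * np.sin(2 * theta), rx)
```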
OK, and so when you do that, you get that the energy levels are E bar plus or minus delta times cosine two theta plus V times sine two theta. And there's no square root here. How do you know that? Well, this is dimensionless, and this has the units of energy, and so square roots keep popping up, but you don't want to put one here because that would be wrong. Even if you didn't do the derivation, if you saw a square root here, you'd know somebody is just writing down things from memory or copying badly, and making corrections. And that leads to E plus or minus is equal to E bar plus or minus the square root of delta squared plus V squared, and there is a square root there.
This is what we derived via the quadratic formula. We knew this, and it came out to be the same. Well, it better have. And we can also determine what T is. And I'm not going to write it, it's in the notes. It's a lot of symbols, but it's something-- it's so compressed that you can guess the form, and so you should look at that. We derived the eigenfunctions of the Hamiltonian, and they are exactly the same as what we get here. And remember that a column of T dagger or T transpose is an eigenvector, and sometimes we want to know those eigenvectors. We're doing semi-OK.
What happens if we go beyond the two-level problem? Well, you know from algebra that there is no simple general solution to a cubic equation. There is some limited range over which there is an analytic solution, but mostly, you don't use a cubic formula to solve the cubic equation. You do some kind of iteration. So for the number of levels greater than two, we know we're going to have a problem because just approaching it by transformation theory or linear algebra as opposed to the Schrodinger picture-- if you can't get a solution in one picture which requires solving an algebraic equation, you're not going to get it by playing with these unitary matrices.
So we're going to be approaching a problem where we have to find the eigenvalues and the elements of the transformation matrix in some sort of computer-based way. We're not going to do it. It would be nuts, even for a three by three. Although, I will give you an exam problem which will be a three by three, and you're going to use perturbation theory to solve it. I haven't told you about perturbation theory. That's going to be next week, but we're leading up to it. OK, but now the point is we have the machinery in place. We have exactly solved problems, and we have the key parameters for exactly solved problems. So the structural--
So for the harmonic oscillator, we have the force constant and the reduced mass. For the particle in the box, we have the bottom of the box, v0, and we have the width of the box. For the rigid rotor, we're going to have the reduced mass and we're going to have the internuclear distance.
These are the things that determine all of the energy levels for exactly solved problems, and they are basically the things that appear in the Hamiltonian, and we call them structural parameters. And we have energy levels. And often, these are some function of a quantum number. This is what we can observe. We observe the energy levels, and we represent them by some formula, and the coefficients of the quantum numbers relate to these things that we really want.
So when you're not dealing with an exactly solved problem-- like instead of having a harmonic oscillator, you have a harmonic oscillator with something at the bottom, or you have a particle in a box with a slanted bottom, or you have a rigid rotor where it's not rigid-- you have additional terms in the Hamiltonian. We are going to use perturbation theory to relate the numerical values of the things we want to know to the things we can observe. Perturbation theory gives us the formulas in the quantum numbers, and tells us the explicit relationship of the coefficients of each term in the quantum number expression to the things we want. This is how we learn everything.
When we do spectroscopy, we measure these energy levels, and these energy levels encode all of the structure and all of the dynamics. It's really neat. OK, now, the last thing I want to-- do I have time? Yeah, maybe. Remember when we do time dependent quantum mechanics with a time independent Hamiltonian. We want to have psi of x and t.
And usually, we're given psi of x, t equals zero. We're given the initial state, and that initial state, if this is an interesting problem, is not an eigenstate of the Hamiltonian. It's a linear combination of eigenstates of that Hamiltonian, and the kinds of plucks we almost always use to test our insight, or actually, because they're feasible experimentally, have as the initial state one of the eigenstates of an exactly solved problem. It's some special combination of easy stuff. And we need to know how this thing is expressed as a sum from j equals one to n of c j psi j.
Because if we can express the t equals zero pluck as a linear combination of the eigenstates, then it's just a matter of mindless manipulation because we have the sum of c j, e to the minus i E j t over h bar, times psi j. Bang, it's done. So what we want to know is how to go from a not-eigenstate to an eigenstate. And lo and behold, that's just the inverse of the transformation. So what we want to know is OK, since I don't have time to spell it out for you exactly, we have T dagger, which relates the zero order states to the eigenstates. We want to go in the opposite direction. We want the inverse transformation.
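A sketch of that mindless manipulation for a two-level pluck (the Hamiltonian entries and working in units with h bar equal to 1 are assumptions):

```python
import numpy as np

hbar = 1.0  # work in units where h bar = 1 (an assumption)
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])  # sample two-level Hamiltonian with E bar = 0

E, T = np.linalg.eigh(H)  # columns of T are the eigenvectors

psi0 = np.array([1.0, 0.0])  # the pluck: a zero-order state, not an eigenstate

# The inverse transformation (rows of T dagger) gives the expansion
# coefficients c_j of the pluck in the eigenbasis
cvec = T.T @ psi0

def psi(t):
    # psi(t) = sum over j of c_j * exp(-i E_j t / hbar) * (eigenvector j)
    return T @ (cvec * np.exp(-1j * E * t / hbar))

assert np.allclose(psi(0.0), psi0)                 # at t = 0 we recover the pluck
assert np.isclose(np.linalg.norm(psi(3.0)), 1.0)   # the norm is conserved
```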
So we want T; instead of taking the columns of T dagger, we take the rows. And so if you have a machine or a brain-- and I'm not doubting this!-- that enables you to write down all of the elements in the T dagger matrix, you are armed to go both from zero order states to eigenstates, and from plucks to time evolving wave packets. It's really beautiful and simple. And normally, when it's presented, these are presented as separate problems, and the whole point is you've got a unified picture that enables you to go get whatever you need in a simple way, as long as a computer is able to diagonalize your critical matrices.
Well, I don't have time to talk about this in any detail, but if we look at the eigenfunctions or eigenvectors for the two-level problem, and we do power series expansions in theta, where theta is determined by V over delta-- theta is also called the mixing angle-- we discover that we have some formulas which say that the j-th energy level is equal to H j,j plus a sum over k not equal to j of the square of H j,k divided by H j,j minus H k,k.
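A cleaned-up version of the energy expression being described here, in the lecture's matrix-element notation (this anticipates the non-degenerate perturbation theory of the next lecture):

```latex
E_j \approx H_{jj} + \sum_{k \neq j} \frac{H_{jk}^{2}}{H_{jj} - H_{kk}}
```

For the two-level problem this reduces to E plus approximately H1,1 plus V squared over 2 delta, which is exactly the leading term of the power series expansion of E bar plus the square root of delta squared plus V squared in powers of V over delta.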
This is the formula for the correction of the energy by second-order perturbation theory, and we can also write the formula for the corrected wave function. By doing a power series expansion in powers of V over delta, or theta, we find what the structure has to be for the solutions to the general problem when n is not two. I'll develop the formal theory for non-degenerate perturbation theory in Monday's lecture, and that's really empowering. It's really ugly, but it gives you the answers to essentially every problem you will ever face in quantum mechanics. OK, have a nice weekend.