PROFESSOR: An advantage of expressing the asymptotic notations in terms of limits is that a bunch of their properties then become immediately obvious. Here's one: if f is little o of g, or f is asymptotically equal to g, then, in fact, f is big o of g. We can reason about this informally by saying that the first one means that f is much less than g, the second one means that f is about the same as g, and the conclusion means that f is roughly less than or equal to g. Being much less than g, or being about the same as g, certainly implies being roughly less than or equal to g.
But we can in fact be entirely precise just using the definitions, because f equals o of g means the limit of f over g is 0. And f is asymptotically equal to g means that the limit of f over g is 1. And the definition of f equals big o of g is that the limit is finite. And clearly, if it's 0 or 1, then it's finite.
Another such property is that if f is much less than g, then g is not roughly less than f. More precisely, if f is little o of g, then g is not big o of f. The left-hand side says that the limit of f over g is 0. But that implies that the limit of g over f is, informally, 1 over 0, that is, infinity, which is not finite, so g is not big o of f.
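For reference, here is a sketch of the limit characterizations being used in this argument, written in symbols and assuming the limits in question exist (the general definition of big o uses limsup, as discussed next):

```latex
% Limit characterizations used above, assuming the limits exist:
\[
  f = o(g) \iff \lim_{n\to\infty}\frac{f(n)}{g(n)} = 0,
  \qquad
  f \sim g \iff \lim_{n\to\infty}\frac{f(n)}{g(n)} = 1,
  \qquad
  f = O(g) \iff \lim_{n\to\infty}\frac{f(n)}{g(n)} < \infty.
\]
% 0 and 1 are both finite, giving f = O(g) in either case; and if
% f/g -> 0, then g/f -> infinity, so g is not O(f).
```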
PROFESSOR: Now, the usual way that big o is defined in the literature doesn't mention limits at all. And, in fact, as I said, the definition really isn't a limit, it's a limsup. Let me show you the standard definition and then explain why the limsup captures it and is needed.
So the official definition of f equals big o of g is that there's some constant multiplier, c, by which you can amplify g, such that once g is amplified by the factor c, f is less than or equal to c times g. This may not hold right at the beginning, but there's a certain point, n 0, after which it holds forever. Let me write this complicated alternation of quantifiers out symbolically, and then try to illustrate it with a diagram that may make it clearer.
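Here is a sketch of that quantifier definition in symbols, for the nonnegative functions considered here:

```latex
% Standard quantifier definition of big o, as described above:
% there exist a constant c > 0 and a threshold n_0 such that
% f(n) <= c * g(n) for every n >= n_0.
\[
  f = O(g)
  \quad\text{iff}\quad
  \exists\, c > 0 \;\; \exists\, n_0 \;\; \forall\, n \ge n_0 :\;
  f(n) \le c \cdot g(n).
\]
```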
So suppose that I want to express the fact that f is big o of g, where f is the green line. That green line is the graph of the function f of x. And g is shown in blue-- and as a matter of fact, g of x is less than or equal to f of x. But nevertheless, f is going to be big o of g, because if you multiply g by a constant, it shifts up to become this purple curve, c times g. And the purple curve gets above the green curve from a certain point on. That's n 0.
So by raising up the blue curve, g, by the factor c to get this purple curve, the purple curve gets above f from a certain point n 0 on. And that's why f is big o of g. Now, of course, multiplying the blue curve, g, by a constant doesn't raise it up by a fixed amount; it rescales it. But if we imagine that the vertical axis is a log scale, then multiplying g by c is the same as adding log c. So the picture is actually accurate if the vertical scale is logarithmic.
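The log-scale remark comes down to this one-line identity:

```latex
% On a logarithmic vertical axis, multiplying by a constant is a vertical shift:
\[
  \log\bigl(c \cdot g(x)\bigr) = \log c + \log g(x).
\]
```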
So using this standard definition, I can explain why, in the equivalent definition in terms of limits, I couldn't say limit, I needed to say limsup. Here's what limsup does for us. Suppose I have a function, f, that's, say, less than or equal to 2g. That means that, surely, f is big o of g according to the previous definition, because you amplify g by 2 and you get above f. The problem is that f of n over g of n may have no limit, so I can't simply define f equals big o of g by saying that the limit of f over g is finite.
Let's see how that could happen. Suppose that f is in fact equal to g times a factor that varies between 1 and 2--say, f of n equals 1 plus sine squared of n pi over 2, all times g of n. The quantity sine of n pi over 2 cycles through the values 0, 1, 0, and minus 1. When you square it, it becomes 0 or 1, and when you add 1 to it, it becomes 1 or 2.
So this is an expression which, as n grows, alternates between the values 1 and 2, and I'm multiplying g of n by this factor that's either 1 or 2. The limit of f of n over g of n does not exist--it keeps alternating between 1 and 2. On the other hand, the limsup of f of n over g of n is 2, which is finite, and therefore, according to the limsup definition, indeed f is big o of g.
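As a quick numerical illustration of that example (a sketch only; the helper names here are made up for the demo, and g is chosen arbitrarily):

```python
import math

def g(n):
    return n  # any positive function works for this illustration

def f(n):
    # f(n) = (1 + sin(n*pi/2)^2) * g(n), so the ratio f/g alternates 2, 1, 2, 1, ...
    return (1 + math.sin(n * math.pi / 2) ** 2) * g(n)

ratios = [f(n) / g(n) for n in range(1, 11)]
print(ratios)       # roughly [2.0, 1.0, 2.0, 1.0, ...] -- the ratio has no limit
print(max(ratios))  # 2.0 -- but the lim sup of the ratio is 2, which is finite
```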
Now, the technical definition of limsup is one that you can read in the text or find in a calculus book. It's basically the largest limit point of the fraction f of n over g of n, and if you don't know what a limit point is, that's material we don't need to go into. But I did want you to understand why, formally, we need limsup. In most cases the limit exists, and we can use the simpler limit definition rather than the more complicated one--there exists a constant such that for every number n greater than or equal to n 0, et cetera.
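For the curious, the standard calculus definition alluded to can be sketched as follows:

```latex
% lim sup = the limit of the running suprema of the tail of the sequence:
\[
  \limsup_{n\to\infty} \frac{f(n)}{g(n)}
  \;=\;
  \lim_{n\to\infty} \, \sup_{k \ge n} \frac{f(k)}{g(k)}.
\]
```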
OK. Let's collect a couple more basic facts about little o and big o that we're going to need. Namely, if a is less than b--they can be negative, any real numbers will do--then x to the a is little o of x to the b. The proof follows almost immediately from the definitions, because to prove that x to the a is little o of x to the b, we want to look at the quotient of x to the a over x to the b.
But, of course, the quotient of x to the a over x to the b is equal to 1 over x to the b minus a. And since a is less than b, b minus a is positive. So that means that as x approaches infinity, the denominator, x to a positive power, also goes to infinity. And therefore, 1 over x to that positive power goes to 0, which is the definition of x to the a being little o of x to the b.
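In symbols, the whole argument is just this chain:

```latex
% For real numbers a < b:
\[
  \frac{x^{a}}{x^{b}} = \frac{1}{x^{\,b-a}} \longrightarrow 0
  \quad\text{as } x \to \infty, \text{ since } b - a > 0,
\]
% which is exactly the limit criterion for x^a = o(x^b).
```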
Another crucial fact is that logarithms grow slower than roots: the log of x is little o of x to the epsilon for any positive epsilon. So if you think of epsilon as, say, a half or a third, this says that the log of x grows slower than the square root, slower than the cube root, slower than the 50th root--it doesn't matter which. The proof just falls back on elementary calculus.
And I guess I've highlighted it because it's definitely worth remembering: logarithms grow slower than roots. The proof begins with the immediately obvious remark that 1 over y is less than or equal to y for y greater than or equal to 1. The two sides are equal when y equals 1, and as y increases, y gets bigger while 1 over y gets smaller, so the inequality is preserved. That's easy, OK.
Well, that means that I can integrate both sides starting at 1. So if I take the integral of 1 over y from 1 to z, it's going to be less than or equal to the integral of y from 1 to z. The integral of 1 over y from 1 to z is log z, and the integral of y from 1 to z is z squared minus 1 over 2, which is at most z squared over 2. So what we get is this new inequality: the log of z is less than or equal to z squared over 2, for z greater than or equal to 1.
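Here is the integration step written out in one line (using the natural log):

```latex
% Integrating 1/y <= y from 1 to z, for z >= 1:
\[
  \log z = \int_{1}^{z} \frac{dy}{y}
  \;\le\; \int_{1}^{z} y \, dy
  = \frac{z^{2} - 1}{2}
  \;\le\; \frac{z^{2}}{2}.
\]
```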
So we're on the way there. We've got that log of z is less than z squared, but not yet z to an arbitrary epsilon power. We'll get that just by making a smart substitution for z, so that's the next step. We have that log of z is less than or equal to z squared over 2 for any z greater than or equal to 1. Let's let z be the square root of x to the delta, where delta is simply some positive number.
So in that case, what's the log of z? Well, the log of the square root of x to the delta--the square root means it's half of the log of x to the delta, which is delta log x. So log of z is delta log of x over 2. And, of course, z squared is just x to the delta, so z squared over 2 is x to the delta over 2.
Now, I can just cancel the twos in the denominators, which gives delta log of x is less than or equal to x to the delta; dividing through by delta gives log of x is less than or equal to x to the delta over delta. But as long as delta is less than epsilon, x to the delta is going to be little o of x to the epsilon, which means that x to the delta times a constant, namely 1 over delta, is also little o of x to the epsilon. And so I've shown that log of x is little o of x to the epsilon, as required.
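Putting the substitution steps together in one display:

```latex
% Substituting z = sqrt(x^delta) = x^{delta/2} into  log z <= z^2 / 2:
\[
  \frac{\delta}{2} \log x = \log\bigl(x^{\delta/2}\bigr)
  \;\le\; \frac{x^{\delta}}{2}
  \quad\Longrightarrow\quad
  \log x \;\le\; \frac{x^{\delta}}{\delta},
\]
% and for 0 < delta < epsilon, x^delta / delta = o(x^epsilon), so log x = o(x^epsilon).
```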
One more crucial fact that I'm not going to prove, but will state, is that polynomials grow slower than exponentials. This is closely related to the fact that logs grow slower than roots. In particular, if c is any constant and a is greater than 1, then x to the c is little o of a to the x. There are a bunch of ways to prove this, using L'Hopital's Rule or Maclaurin series, and I'll leave it to you to look up your 18.01 calculus text to find a proof of that fact.
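As a quick sanity check rather than a proof (the exponent c and base a below are arbitrary choices for the demo), the ratio of x to the c over a to the x heads to 0 even when c is large and a is barely above 1:

```python
# Numerical illustration (not a proof): x^c / a^x -> 0 as x grows,
# here with c = 10 and a = 1.1 chosen arbitrarily.
c, a = 10, 1.1
for x in [10, 100, 1000, 2000]:
    print(x, x**c / a**x)
# The ratio grows at first, then collapses toward 0 -- the exponential wins eventually.
```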