Calculus

Several years ago I wrote a long paper on the foundation of the calculus. The paper was not really dense or difficult—as these things usually go—since I made a concentrated effort to keep both the language and the math fairly simple. But because I was tackling a large number of problems that had accumulated over hundreds of years, and because calculus is considered a bit scary to start with, the paper was still hard to absorb. I found it necessary to talk a lot about history and theory and to bring up very old and outdated ideas, like those of Archimedes and Euclid. This ended up confusing most of my readers, I think, and very few made it through to the end.

This is the magic equation:

y' = nx^(n-1)

What you won't be told is that this magic equation was not invented by either Newton or Leibniz. All they did was invent two similar derivations of it. Both of them knew the equation worked, and they wanted to put a foundation under it. They wanted to understand where it came from and why it worked. But they failed, and everyone else since has failed. The reason they failed is that the equation was used historically to find tangents to curves, and everyone all the way back to the ancient Greeks had tried to solve this problem by using a magnifying glass. What I mean by that is that for millennia, the accepted way to approach the problem and the math was to try to straighten out the curve at a point. If you could straighten out the curve at that point, you would have the tangent at that point. The ancient Greeks had the novel idea of looking at smaller and smaller segments of the curve, closer and closer to the point in question. The smaller the segment, the less it curved. Rather than use a real curve and a real magnifying glass, the Greeks just imagined the segment shrinking down. This is where we come to the diminishing differential. Remember that I said the differential was a length.
Well, the Greeks assigned that differential to the length of the segment, and then imagined it getting smaller and smaller. Two thousand years later, nothing had changed. Newton and Leibniz were still thinking the same way. Instead of saying the segment was "getting smaller" they said it was "approaching zero". That is why we now use the little arrow and the zero. Newton even made tables, kind of like I will make below. He made tables of diminishing differentials and was able to pull the magic equation from these tables. The problem is that he and everyone else have used the wrong tables. You can pull the magic equation from a huge number of possible tables, and in each case the equation will be true and in each case the table will "prove" or support the equation. But in only one table will it be clear why the equation is true. Only one table will be simple enough and direct enough to show a 16-year-old where the magic equation comes from. Only one table will cause everyone to gasp and say, "Aha, now I understand." Newton and Leibniz never discovered that table, and no one since has discovered it. All their tables were too complex by far. Their tables required you to perform very complex operations on the numbers or variables or functions. In fact, these operations were so complex that even Newton and Leibniz got lost in them. As I will show after I unveil my table, Newton and Leibniz were forced to perform operations on their variables that were actually false. Getting the magic equation from a table of diminishing differentials is so complex and difficult that no one has ever been able to do it without making a hash of it. It can be done, but it isn't worth doing. If you can pull the magic equation from a simple table of integers, why try to pull it from a complex table of functions with strange and confusing scripts?
Why teach calculus as a big hazy mystery, invoking infinite series or approaches to zero or infinitesimals, when you can teach it at a level that is no more complex than 1+1=2? So here is the lesson. I will teach you differential calculus in one day, in one paper. If you have reached this level of math, the only thing that should look strange to you in the magic equation is the y'. You know what an exponent is, and you should know that you can write an exponent as (n-1) if you want to. That is just an expansion of a single number into a differential, as I taught you above. If n=2, for instance, then the exponent just equals 1. Beyond that, "n" is just another variable. It could be "z" or "a" or anything else. That variable just generalizes the equation for us, so that it applies to all possible exponents. All that is just simple algebra. But you don't normally have primed variables in high school algebra. What does the prime signify? That prime is telling you that y is a different sort of variable than x. When you apply this magic equation to physics, x is usually a distance and y is a velocity. A variable could also be an acceleration, or it could be a point, or it could be just about anything. But we need a way to remind ourselves that some variables are one kind of parameter and some variables are another. So we use primes or double primes and so on. This is important, because it means that mathematically, a velocity is not a distance, and an acceleration is not a velocity. They have to be kept separate. A calculus equation takes you from one sort of variable to another sort. You cannot have a distance on both sides of the magic equation, or a velocity on both sides. If x is a distance, y' cannot be a distance, too. Some people will try to convince you later that calculus can be completely divorced from physics, or from the real world.
They will stress that calculus is pure math, and that you don’t need to think of distances or velocities or physical parameters. But if this were true, we wouldn’t need to keep our variables separate. We wouldn’t need to keep track of primed variables, or later double-primed variables and so on. Variables in calculus don’t just stand for numbers, they stand for different sorts of numbers, as you see. In pure math, there are not different sorts of numbers, beyond ordinal and cardinal, or rational and irrational, or things like that. In pure math, a counting integer is a counting integer and that is all there is to it. But in calculus, our variables are counting different things and we have to keep track of this. That is what the primes are for. What, you may ask, is the difference between a length and a velocity? Well, I think you can probably answer that without the calculus, and probably without much help from me. To measure a length you don’t need a watch. To measure velocity, you do. Velocity has a “t” in the denominator, which makes it a rate of change. A rate is just a ratio, and a ratio is just one number over another number, with a slash in between. Basically, you hold one variable steady and see how the other variable changes relative to it. With velocity, you hold time steady (all the ticks are the same length) and see how distance changes during that time. You put the variable you know more about (it is steady) in the denominator and the variable you are seeking information about (you are measuring it) in the numerator. Or, you put the defined variable in the denominator (time is defined as steady) and the undefined variable in the numerator (distance is not known until it is measured). All this can also be applied to velocity and acceleration. The magic equation can be applied to velocity and acceleration, too. If x is a velocity, then y’ is an acceleration. This is because acceleration is the rate of change of the velocity. Acceleration is v/t. 
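The ratio idea above is easy to see in code. Here is a minimal sketch of my own (the numbers and names are just an illustration): hold the tick length steady, divide each change in distance by it to get velocities, then do the same thing to the velocities to get accelerations.

```python
# Rate of change as a ratio: the steady variable (time) goes in the
# denominator, the measured variable (distance) in the numerator.
distances = [0, 1, 4, 9, 16, 25]  # distance at each tick
dt = 1                            # every tick is the same length

# velocity = change in distance per tick
velocities = [(b - a) / dt for a, b in zip(distances, distances[1:])]
# acceleration = change in velocity per tick
accelerations = [(b - a) / dt for a, b in zip(velocities, velocities[1:])]

print(velocities)     # [1.0, 3.0, 5.0, 7.0, 9.0]
print(accelerations)  # [2.0, 2.0, 2.0, 2.0]
```

Notice that the accelerations come out constant: these sample distances happen to follow z squared, which is exactly the sort of pattern my table is built on.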
So you can see that y' is always the rate of change of x. Or, y' is always x/t. This is another reason that calculus can't really be divorced completely from physics. Time is a physical thing. A pure mathematician can say, "Well, we can say that y' is always x/z, where z is not time but just a pure variable." But in that case, x/z is still a rate of change. You can refuse to call "z" a time variable, but you still have the concept of change. A pure number changing still implies time passing, since nothing can change without time passing. Mathematicians want "change" without "time", but change is time. If a mathematician can imagine or propose change without time, then he is cleverer than the gods by half, since he has just separated a word from its definition. At any rate, I think you are already in a better position to understand the calculus than any math student in history. Whether you like that little diversion into time and change is really beside the point, since even if you believe in pure math it doesn't affect my argument. All the famous mathematicians in history have studied the curve in order to study rate of change. To develop the calculus, they have taken some length of some curve and then let that length diminish. They have studied the diminishing differential, the differential approaching zero. This approach to zero gives them an infinite series of differentials, and they apply a method to the series in order to understand its regression. But it is much more useful to notice that curves always concern exponents. Curves are all about exponents, and so is the calculus. So what I did is study integers and exponents, in the simplest situations. I started by letting z equal some point. If I let a variable stand for a point, then I have to have a different sort of variable stand for a length, so that I don't confuse a point and a length. The normal way to do this is to let a length be Δz (read "change in z").
I want lengths instead of points, since points cannot be differentials. Lengths can. You cannot think of a point as (x-y). But if x and y are both points, then (x-y) will be a length, you see. In the first line of my table, I list the possible integer values of Δz. You can see that this is just a list of the integers, of course. Next I list some integer values for other exponents of Δz. This is also straightforward. At line 7, I begin to look at the differentials of the previous six lines. In line 7, I am studying line 1, and I am just subtracting each number from the next. Another way of saying it is that I am looking at the rate of change along line 1. Line 9 lists the differentials of line 3. Line 14 lists the differentials of line 9. I think you can follow my logic on this, so meet me down below.

1   Δz          1, 2, 3, 4, 5, 6, 7, 8, 9...
2   Δ2z         2, 4, 6, 8, 10, 12, 14, 16, 18...
3   Δz^2        1, 4, 9, 16, 25, 36, 49, 64, 81
4   Δz^3        1, 8, 27, 64, 125, 216, 343
5   Δz^4        1, 16, 81, 256, 625, 1296
6   Δz^5        1, 32, 243, 1024, 3125, 7776, 16807
7   ΔΔz         1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
8   ΔΔ2z        2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
9   ΔΔz^2       1, 3, 5, 7, 9, 11, 13, 15, 17, 19
10  ΔΔz^3       1, 7, 19, 37, 61, 91, 127
11  ΔΔz^4       1, 15, 65, 175, 369, 671
12  ΔΔz^5       1, 31, 211, 781, 2101, 4651, 9031
13  ΔΔΔz        0, 0, 0, 0, 0, 0, 0
14  ΔΔΔz^2      2, 2, 2, 2, 2, 2, 2, 2, 2, 2
15  ΔΔΔz^3      6, 12, 18, 24, 30, 36, 42
16  ΔΔΔz^4      14, 50, 110, 194, 302
17  ΔΔΔz^5      30, 180, 570, 1320, 2550, 4380
18  ΔΔΔΔz^3     6, 6, 6, 6, 6, 6, 6, 6
19  ΔΔΔΔz^4     36, 60, 84, 108
20  ΔΔΔΔz^5     150, 390, 750, 1230, 1830
21  ΔΔΔΔΔz^4    24, 24, 24, 24
22  ΔΔΔΔΔz^5    240, 360, 480, 600
23  ΔΔΔΔΔΔz^5   120, 120, 120

From this, one can predict that

24  ΔΔΔΔΔΔΔz^6  720, 720, 720

And so on. Again, this is what you call simple number analysis. It is a table of differentials. The first line is a list of the potential integer lengths of an object, and a length is a differential. It is also a list of the integers, as I said.
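If you want to check the table for yourself, here is a short sketch in modern code (mine, purely illustrative; the table itself needs nothing but subtraction). It differences each z^n line until the numbers go constant and reports the depth, so you can compare against lines 7, 14, 18, 21, 23, and 24 of the table.

```python
# Rebuild the table's constant lines: start with the values of z**n,
# then apply "delta" (subtract each number from the next) until the
# whole row is one repeated number.

def differences(row):
    """One application of delta: subtract each number from the next."""
    return [b - a for a, b in zip(row, row[1:])]

results = {}
for n in range(1, 7):
    row = [z ** n for z in range(1, 13)]  # the Δz^n line of the table
    deltas = 1                            # that line already carries one delta
    while len(set(row)) > 1:              # not constant yet
        row = differences(row)
        deltas += 1
    results[n] = (deltas, row[0])
    print(f"z^{n} goes constant at {row[0]} after {deltas} deltas")
```

The constants come out 1, 2, 6, 24, 120, 720 at depths 2 through 7, just as in the table.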
After that it is easy to follow my method. It is easy until you get to line 24, where I say, "One can predict that. . . ." Do you see how I came to that conclusion? I did it by pulling out the lines where the differential became constant.

7   ΔΔz         1, 1, 1, 1, 1, 1, 1
14  ΔΔΔz^2      2, 2, 2, 2, 2, 2, 2
18  ΔΔΔΔz^3     6, 6, 6, 6, 6, 6, 6
21  ΔΔΔΔΔz^4    24, 24, 24, 24
23  ΔΔΔΔΔΔz^5   120, 120, 120
24  ΔΔΔΔΔΔΔz^6  720, 720, 720

Do you see it now?

2ΔΔz = ΔΔΔz^2
3ΔΔΔz^2 = ΔΔΔΔz^3
4ΔΔΔΔz^3 = ΔΔΔΔΔz^4
5ΔΔΔΔΔz^4 = ΔΔΔΔΔΔz^5
6ΔΔΔΔΔΔz^5 = ΔΔΔΔΔΔΔz^6

All these equations are equivalent to the magic equation, y' = nx^(n-1).
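You can confirm that pattern mechanically. A small sketch (again mine, just an illustration): compute the constant differential for each exponent by repeated subtraction, and check that multiplying by n takes you from one constant line to the next.

```python
# The constant differential of z**n, found by nothing but subtraction.

def constant_differential(n, length=15):
    """Difference the values of z**n until the row goes constant."""
    row = [z ** n for z in range(1, length + 1)]
    while len(set(row)) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
    return row[0]

# The equations pulled from the table: n times one constant line
# equals the next constant line.
checks = []
for n in range(2, 7):
    lower, upper = constant_differential(n - 1), constant_differential(n)
    checks.append(n * lower == upper)
    print(f"{n} * {lower} = {upper}")
```

Every check passes: 2×1=2, 3×2=6, 4×6=24, 5×24=120, 6×120=720.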
In any of those equations, all we have to do is let x equal the right side and y' equal the left side. No matter what exponents we use, the equation will always resolve into our magic equation. If I know anything about teenagers, I will expect this reaction: "Well, sir, that may be a great simplification of Newton, for all we know, but it is not exactly 1+1=2." Fair enough. It may take a bit of sorting through. But I assure you that compared to the derivation you will learn in school, my table is a miracle of simplicity and transparency. Not only that, but I will continue to simplify and explain. Since in those last equations we have z on both sides, we can cancel a lot of those deltas and get down to this:

2z = Δz^2
3z^2 = Δz^3
4z^3 = Δz^4
5z^4 = Δz^5
6z^5 = Δz^6

Now, if we reverse it, we can read that first equation as, "the rate of change of z squared is two times z." That is information that we just got from a table, and that table just listed numbers. Simple differentials. One number subtracted from the next. This is useful to us because it is precisely what we were looking for when we wanted to learn calculus. We use the calculus to tell us what the rate of change is for any given variable and exponent. Given an x, we seek a y', where y' is the rate of change of x. And that is what we just found. Currently, calculus calls y' the derivative, but that is just fancy terminology that does not really mean anything. It just confuses people for no reason. The fact is, y' is a rate of change, and it is better to remember that at all times. You may still have one very important question. You will say, "I see where the numbers are coming from, but what does it mean? Why are we selecting the lines in the table where the numbers are constant?" We are going to those lines because in those lines we have flattened out the curve. If the numbers are all the same, then we are dealing with a straight line.
A constant differential describes a straight line instead of a curve. We have dug down to that level of change that is constant, beneath all our other changes. As you can see, in the equations with a lot of deltas, we have a change of a change of a change. . . . We just keep going down to sub-changes until we find one that is constant. That one will be the tangent to the curve. If we want to find the rate of change of the exponent 6, for instance, we only have to dig down 7 sub-changes. We don’t have to approach zero at all.
In a way we have done the same thing that the Greeks were doing and that Newton was doing. We have flattened out the curve. But we did not use a magnifying glass to do it. We did not go to a point, or get smaller and smaller. We went to sub-changes, which are a bit smaller, but they aren’t anywhere near zero. In fact, to get to zero, you would have to have an infinite number of deltas, or sub-changes. And this means that your exponent would have to be infinity itself. Calculus never deals with infinite exponents, so there is never any conceivable reason to go to zero. We don’t need to concern ourselves with points at all. Nor do we need to talk of infinitesimals or limits. We don't have an infinite series, and we don't have any vanishing terms. We have a definite and limited series, one that is 7 terms long with the exponent 6 and only 3 terms long with the exponent 2.
I hope you can see that the magic equation is just a generalization of all the constant differential equations we pulled from the table. To "invent" the calculus, we don't have to derive the magic equation at all. All we have to do is generalize a bunch of specific equations that are given us by the table. By that I mean that the magic equation is just an equation that applies to all similar situations, whereas the specific equations only apply to specific situations (as when the exponent is 2 or 3, for example). By using the further variable "n", we are able to apply the equation to all exponents. Like this:

nz^(n-1) = Δz^n

And we don't have to prove or derive the table either. The table is true by definition. Given the definition of integer and exponent, the table follows. The table is axiomatic number analysis of the simplest kind. In this way I have shown that the basic equation of differential calculus falls out of simple number relationships like an apple falls from a tree. Even pure mathematicians can have nothing to say against my table, since it has no necessary physical content. I call my initial differentials lengths, but that is to suit myself. You can subtract all the physical content out of my table and it is still the same table and still completely valid. We don't need to consider any infinite series, we don't need to analyze differentials approaching zero in any strange way, we don't need to think about infinitesimals, we don't need to concern ourselves with functions, we don't need to learn weird notations with arrows pointing to zeros underneath functions, and we don't need to notate functions with parentheses and little "f's", as in f(x). But the most important thing we can ditch is the current derivation of the magic equation, since we have no need of it. This is important, because the current derivation is gobbledygook. I am once again making a very big claim, but once again I can prove it, in very simple language. Let's look at the current derivation of the magic equation. This derivation is a simplified form of Newton's derivation, but conceptually it is exactly the same. Nothing important has changed in 350 years. This is the derivation you will be taught this semester. The figure δ stands for "a very small change". It is the lower-case Greek "d", which is called delta. The upper-case is Δ, remember, which is a capital delta. Sometimes the two are used interchangeably, and you may see the derivation below with Δ instead of δ. You may even see it with the letter "d".
I will not get into which is better and why, since in my opinion the question is moot. After today we can ditch all three. Anyway, we start by taking any functional equation. "Functional" just means that y depends upon x in some way. Think of how a velocity depends on a distance. To measure a velocity you need to know a distance, so that velocity is a function of distance. But distance is not a function of velocity, since you can measure a distance without being concerned at all about velocity. So, we take any functional equation, say

y = x^2

Increase it by δy and δx to obtain

y + δy = (x + δx)^2

Subtract the first equation from the second:

δy = (x + δx)^2 - x^2 = 2xδx + δx^2

Divide by δx:

δy/δx = 2x + δx

Let δx go to zero (only on the right side, of course):

δy/δx = 2x
y' = 2x

That is how they currently derive the magic equation. Any teenager, or any honest person, will look at that series of operations and go, "What the. . . ?" How can we justify all those seemingly arbitrary operations? The answer is, we can't. As it turns out, precisely none of them are legal. But Newton used them, he was a very smart guy, and we get the equation we want at the end. So we still teach that derivation. We haven't discovered anything better, so we just keep teaching it. Let me run through the operations quickly, to show you what is going on. We only have four operations, so it isn't that difficult, really. Historically, only the last operation has caused people to have major headaches. Newton was called on the carpet for it soon after he published it, by a clever bishop named Berkeley. Berkeley didn't like the fact that δx went to zero only on the right side. But no one could sort through it one way or the other, and in a few decades everyone just decided to move on. They accepted the final equation because it worked and swept the rest under the rug. But what I will show you is that the derivation is lost long before the last operation.
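If you want to watch the numbers in that derivation, here is a quick sketch (mine; take it as arithmetic only, since the logic is the thing in dispute). For y = x^2 the ratio δy/δx works out to exactly 2x + δx, so each printed value sits δx above 2x.

```python
# The textbook difference quotient for y = x**2 at x = 3:
# δy/δx = ((x + δx)**2 - x**2) / δx = 2x + δx
x = 3.0
ratios = []
for dx in [1.0, 0.1, 0.01, 0.001]:
    dy = (x + dx) ** 2 - x ** 2
    ratios.append(dy / dx)
    print(dx, dy / dx)  # each ratio is 2x + dx, closing in on 2x = 6
```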
That last operation is indeed a big cheat, but mathematicians have put so many coats of pretty paint on it that it is impossible to make them look at it clearly anymore. They answer that δx is part of a ratio on the left side, and because of that it is sort of glued to the δy above it. They say that δy/δx must be considered as one entity, and they say that this means it is somehow unaffected by taking δx to zero on the right side. That is math by wishful thinking, but what are you going to do? To get them to stand up and take notice, I have been forced to show them the even bigger cheats in the previous steps. Amazingly, no one in all of history has noticed these bigger cheats, not even that clever bishop. So let us go through all the steps. In the first equation, the variables stand for either "all possible points on the curve" or "any possible point on the curve." The equation is true for all points and any point. Let us take the latter definition, since the former doesn't allow us any room to play. So, in the first equation, we are at "any point on the curve". In the second equation, are we still at any point on the same curve? Some will think that (y + δy) and (x + δx) are the co-ordinates of another any-point on the curve—this any-point being some distance further along the curve than the first any-point. But a closer examination will show that the second curve equation is not the same as the first. The any-point expressed by the second equation is not on the curve y = x^2. In fact, it must be exactly δy off that first curve. Since this is true, we must ask why we would want to subtract the first equation from the second equation. Why do we want to subtract an any-point on a curve from an any-point off that curve?
Furthermore, in going from equation 1 to equation 2, we have added different amounts to each side. This is not normally allowed. Notice that we have added δy to the left side and 2xδx + δx^2 to the right side. This might have been justified by some argument if it gave us two any-points on the same curve, but it doesn't. We have completed an illegal operation for no apparent reason.
Now we subtract the first any-point from the second any-point. What do we get? Well, we should get a third any-point. What is the co-ordinate of this third any-point? It is impossible to say, since we got rid of the variable y. A co-ordinate is in the form (x,y) but we just subtracted away y. You must see that δy is not the same as y, so who knows if we are off the curve or on it. Since we subtracted a point on the first curve from a point off that curve, we would be very lucky to have landed back on the first curve, I think. But it doesn’t matter, since we are subtracting points from points. Subtracting points from points is illegal anyway. If you want to get a length or a differential you must subtract a length from a length or a differential from a differential. Subtracting a point from a point will only give you some sort of zero—another point. But we want δy to stand for a length or differential in the third equation, so that we can divide it by δx. As the derivation now stands, δy must be a point in the third equation. Yes, δy is now a point. It is not a change-in-y in the sense that the calculus wants it to be. It is no longer the difference in two points on the curve. It is not a differential! Nor is it an increment or interval of any kind. It is not a length, it is a point. What can it possibly mean for an any-point to approach zero? The truth is it doesn’t mean anything. A point can’t approach a zero length since a point is already a zero length. Look at the second equation again. The variable y stands for a point, but the variable δy stands for a length or an interval. But if y is a point in the second equation, then δy must be a point in the third equation. This makes dividing by δx in the next step a logical and mathematical impossibility. You cannot divide a point by any quantity whatsoever, since a point is indivisible by definition. 
The final step—letting δx go to zero—cannot be defended whether you are taking only the denominator on the left side to zero or whether you are taking the whole fraction toward zero (which has been the claim of most). The ratio δy/δx was already compromised in the previous step. The problem is not that the denominator is zero; the problem is that the numerator is a point. The numerator is zero. My new method drives right around this mess by dispensing with points altogether. You can see that the big problem in the current derivation is in trying to subtract one point from another. But you cannot subtract one point from another, since each point acts like a zero. Every point has zero extension in every direction. If you subtract zero from zero you can only get zero. You will say that I subtracted one point from another above (x-y) and got a length, but that is only because I treated each variable as a length to start with. Each "point" on a ruler or curve is actually a length from zero, or from the end of the ruler. Go to the "point" 5 on the ruler. Is that number 5 really a point? No, it is a length. The number 5 is telling you that you are five inches from the end of the ruler. The number 5 belongs to the length, not the point. Which means that the variable x, which may stand for 5 or any other number on the ruler, actually stands for a length, not a point. This is true for curves as well as straight lines or rulers. Every curve is like a curved ruler, so that all the numbers at "points" on the curve are actually lengths. You may say, "Well, don't current mathematicians know that? Doesn't the calculus take that into account? Can't you just go back into the derivation above and say that y is a length from zero instead of a point, which means that in the third equation δy is a length, which means that the derivation is saved?" Unfortunately, no. You can't say any of those things, since none of them are true.
The calculus currently believes that y’ is an instantaneous velocity, which is a velocity at a point and at an instant. You will be taught that the point y is really a point in space, with no time extension or length. Mathematicians believe that the calculus curve is made up of spatial points, and physicists of all kinds believe it, too. That is why my criticism is so important, and why it cannot be squirmed out of. The variable y is not a length in the first equation of the derivation, and this forces δy to be a point in the third equation. A differential stands for a length only if the two terms in the differential are already lengths. They must both have extension. Five inches minus four inches is one inch. Everything in that sentence is a length. But the fifth-inch mark minus the fourth-inch mark is not the one inch-mark, nor is it the length one inch. A point minus a point is a meaningless operation. It is like 0 – 0. This is the reason I was careful to build my table only with lengths. I don’t use points. This is because I discovered that you can’t assign numbers to points. If you can’t assign numbers to points, then you can’t assign variables or functions to points. When I was building my table above, I kind of blew past this fact, since I didn’t want to confuse you with too much theory. My table is all lengths, but I didn’t really tell you why it had to be like that. Now, however, I think you are ready to notice that points can’t really enter equations or tables at all. Only ordinal numbers can be applied to points. These are ordinal numbers: 1st, 2nd, 3rd. The fifth point, the eighth point, and so on. But math equations apply to cardinal or counting numbers, 1, 2, 3. You can’t apply a counting number to a point. As I showed with the ruler, any time you apply a counting number to a “point” on the ruler, that number attaches to the length, not the point. The number 5 means five inches, and that is a length from zero or from the end of the ruler. 
It is the same with all lines and curves. And this applies to pure math as well as to applied math. Even if your lines and curves are abstract, everything I say here still applies in full force. The only difference is that you no longer call differentials lengths; you call them intervals or differentials or something. The students will now say, "Can't you go back yourself and redefine all the points as lengths, in the existing derivation? Can't you fix it somehow?" The answer is no. I can't. I have shown you that Newton cheated on all four steps, not just the last one. You can't "derive" his last equation from his first by applying a series of mathematical operations to them like this, and what is more you don't need to. I have shown with my table that you don't need to derive the magic equation, since it just drops out of the definition of exponent fully formed. The equation is axiomatic. What I mean by that is that it really is precisely like the equation 1+1=2. You don't need to derive the equation 1+1=2, or prove it. You can just pull it from a table of apples or oranges and generalize it. It is definitional. It is part of the definition of number and equality. In the same way, the magic equation is a direct definitional outcome of number, equality, and exponent. Build a simple table and the equation drops out of it without any work at all. If you must have a derivation, the simplest possible one is this one. We are given a functional equation of the general sort

y = x^n
and we seek y', where, by definition,

y' = Δx^n
Then we go to our generalized equation from the table, which is

nx^(n-1) = Δx^n
By substitution, we get

y' = nx^(n-1)

That's all we need. But I will give you one other piece of information that will come in handy later. Remember how we cancelled all those deltas, to simplify the first equations coming out of the table? Well, we did that just to make things look tidier, and to make the equations look like the current calculus equations. But those deltas are really always there. You can cancel them if you want to clean up your math, but when you want to know what is going on physically, you have to put them back in. What they tell you is that when you are dealing with big exponents, you are dealing with very complex accelerations. Once you get past the exponent two, you aren't dealing with lengths or velocities anymore. The variable x to the exponent 6 will have 7 deltas in front of it, as you can see by going back to the table. That is a very high degree of acceleration. Three deltas is a velocity. Four is an acceleration. Five is a variable acceleration. Six is a change of a variable acceleration. And so on. Most people can't really visualize anything beyond a variable acceleration, but high-exponent variables do exist in nature, which means that you can go on changing changes for quite a while. If you go into physics or engineering, this knowledge may be useful to you. A lot of physicists appear to have forgotten that accelerations are often variable to high degrees. They assume that every acceleration in nature is a simple acceleration. In my long paper I covered a lot of other interesting topics, but I will only mention one more of them here. I have told you a bit about quantum mechanics above, so I will give you a clue about the end of that story, too. QED hit a wall about 20 years ago, and that is why all the big names are now working on string theory. String theory is a horrible mess, one that makes the mess of calculus look like spilled milk. But one of the main reasons it was invented was to save QED from the point.
This problem I have solved for you about the point is exactly the same one that cold-cocked QED. All of physics is dependent on calculus and its offshoots, and using calculus with points in the equations has ended up driving everyone a little mad. The only way that physicists could make the equations of QED keep working is by performing silly operations on them, like the ones that Newton performed in his derivation. These operations in QED are called “renormalization”. That is a big word for fudging. The inventor of renormalization was the same Richard Feynman who I told you about above. His students are still finding new ways to renormalize equations that won’t work in normal ways. Mr. Feynman was a big mess maker, but he did have the honesty to at least admit it, regarding renormalization. He himself called it “hocus pocus” and a “dippy process” that was “not mathematically legitimate.” It would have been nice if Newton or Leibniz or Cauchy had had the intellectual honesty to say the same about the calculus derivation. The reason this should be interesting to you is that my correction to the calculus solves all the problems of QED at one blow, although they haven’t figured that out yet.* Just by reading this paper you are now smarter than all the “geniuses” fudging giant equations. With your new knowledge, you can go to college, wade briskly through all the muck, and start putting the house in order. Your understanding of calculus and the point will allow you to climb ladders that no one even knew existed. So please remember me when you get to the top. And don't dump any more garbage that might land on my head. *The "uncertainty" of quantum mechanics is due (at least in part) to the math and not to the conceptual framework. That is to say, the various difficulties of quantum physics are primarily problems of a misdefined Hilbert space and a misused mathematics (vector algebra), and not problems of probabilities or philosophy. 
My correction to the calculus allows for a fix of all higher maths, spaces, and theories. If this paper was useful to you in any way, please consider donating a dollar (or more) to the SAVE THE ARTISTS FOUNDATION. This will allow me to continue writing these "unpublishable" things. Don't be confused by paying Melisa Smith--that is just one of my many