
Calculus Simplified
by Miles Mathis

A note on my calculus papers, 2006
Several years
ago I wrote a long paper on the foundation of
the calculus. The paper was not really dense or difficult—as
these things usually go—since I made a concentrated effort
to keep both the language and the math fairly simple. But because
I was tackling a large number of problems that had accumulated
over hundreds of years, and because calculus is considered a bit
scary to start with, the paper was still hard to absorb. I found
it necessary to talk a lot about history and theory and to bring
up very old and outdated ideas, like those of Archimedes and
Euclid. This ended up confusing most of my readers, I think, and
very few made it through to the end.
For this reason I have now returned to the subject, hoping to
further shorten and simplify my findings. What I plan to do here
is try to sell my idea to a hypothetical reader. I will imagine I
am talking to a high school student just entering first-semester
calculus. I will explain to him or her why my explanation is
necessary, why it is better, and why he or she should prefer to
take a course based on my explanation rather than a course based
on current theory. In doing this, I will show that current
notation and the current method of teaching calculus are a
gigantic mess. In a hundred years, all educated people will look
back and wonder how calculus could exist, and be taught, in such
a confusing manner. They will wonder how such basic math, so
easily understood, could have remained in a halfway state for so
many centuries. The current notation and derivation for the
equations of calculus will look to them like the leeches that
doctors used to put on patients as an all-round cure, or like
the holes they drilled in the head to cure headaches. Many
students have felt that learning calculus is like having holes
drilled in their heads, and I will show that they were right to
feel that way.
What some of you students have no doubt
already felt is that the further along in math you get, the more
math starts to seem like a trick. When you first start out, math
is pretty easy, since it makes sense. You don’t just learn
an equation. No, you learn an equation and you learn why the
equation makes sense. You don’t just acquire a fact, you
acquire understanding. For example, when you learn addition, you
don’t just learn how to use a plus sign. You also learn why
the sign works. You are shown the two apples and the one apple,
and then you put them together to get three apples. You see the
apples and you go, “Aha, now I see!” Addition makes
sense to you. It doesn’t just work. You fully understand
why it works. Geometry is also
understood by most students, since geometry is a physical math.
You have pictures you can look at and line segments you can
measure and so on, so it never feels like some kind of magic. If
your trig teacher was a good teacher, you may have felt this way
about trig as well. The sine and cosine stuff seems a bit
abstract at first, but sooner or later, by looking at triangles
and circles, it may dawn on you that everything makes absolute
sense. Algebra is the next
step, and many people get lost there. But if you can get your
head around the idea of a variable, you are halfway home.
But
when we get to calculus, everyone gets swamped. Notice that I did
not say, “almost everyone.” No, I said everyone. Even
the biggest nerd with the thickest glasses who gets A’s on
every paper is completely confused. Those who do well in their
first calculus courses are the ones who just memorize the
equations and don’t ask any questions. One reason for this
is that with calculus you will be given some new signs, and these
signs will not really make sense in the old ways. You will be
given an arrow pointing at zero, and this little arrow and zero
will be underneath variables or next to big squiggly lines. This
arrow and zero are supposed to mean, “let the variable or
function approach zero,” but your teacher probably won’t
have time to really make you understand what a function is or why
anyone wanted it to approach zero in the first place. Your
teacher would answer such a question by saying, “Well, we
just let it go toward zero and then see what happens. What
happens is that we get a solution. We want a solution, don’t
we? If going to zero gives us a solution, then we are done. You
can’t ask questions in math beyond that.”
Well, if your teacher says that to you, you can tell your teacher
he or she is wrong. Math is not just memorizing equations, it is
understanding equations. All math, no matter how difficult, is
capable of being understood in the same way that 2+2=4 can be
understood; and if your teacher cannot explain it to you, then he
or she does not understand it.
What is happening with calculus is that you are taking your first
step into a new kind of math and science. It is a kind of
faith-based math. Almost everything you will learn from now on is
math of this sort. You will not have time to understand it,
therefore you must accept it and move on. Unless you plan to
become a professor of the history of math, you will not have time
to get to the roots of the thing and really make sense of it in
your head. What no high school or college student is supposed to
know is that even the history-of-math professors don’t
understand calculus. No one understands or ever understood
calculus, not Einstein, not Cauchy, not Cantor, not Russell, not
Bohr, not Feynman, no one. Not even Leibniz or Newton understood
it. That is a big statement, I know, but I have already proved it
and I will prove it again below. The short proof is to point out
that if they had really understood it, they would have corrected
it like I am about to. If any of these people had understood
calculus, they would have reconstructed the whole thing so that
you could understand it, too. There is no reason to teach you a
math that can’t be explained simply. There is no
conspiracy. You are taught calculus as a big mystery simply
because, until now, it was a big mystery.
Now, when I say that math after calculus is faith-based, I am
offending a lot of important people. Mathematicians are very
proud of their field, as you would expect, and they don’t
want some cowboy coming in and comparing it to religion. But I am
not just saying things to be novel or to get attention. I can
give you famous examples of how math has become faith-based. Many
of you will have heard of Richard Feynman, and not just because I
mentioned him ten sentences ago. He is probably the most famous
physicist after Einstein, and he got a lot of attention in the
second half of the 20th century—as one of the fathers of
QED, among other things. One of his most-quoted sayings is, “Shut
up and calculate!” Meaning, “Don’t ask
questions. Don’t try to understand it. Accept that the
equation works and memorize it. The equation works because it
matches experiment. There is no understanding beyond that.”
All of quantum mechanics is
based on this same idea, which started with Heisenberg and Bohr
back in the early 1900s. “The physics and math are
not understandable, in the normal way, so don’t ask stupid
questions like that any more.” This last sentence is
basically the short form of what is called the Copenhagen
Interpretation of quantum mechanics. The Copenhagen Interpretation
applies to just about everything now, not just QED. It also
applies to Relativity, in which the paradoxes must simply be
accepted, whether they make sense or not. And you might say that
it also applies to calculus. Historically, your professors have
accepted the Copenhagen Interpretation of calculus, and this
interpretation states that students’ questions cannot be
answered. You will be taught to understand calculus like your
teacher understands it, and if your teacher is very smart he
understands it like Newton understood it. He will have memorized
Newton’s or Cauchy’s derivation and will be able to
put it on the blackboard for you. But this derivation will not
make sense like 2+2=4 makes sense, and so you will still be
confused. If you continue to ask questions, you will be read the
Copenhagen Interpretation, or some variation of it. You will be
told to shut up and calculate.
The first semester of
calculus you will learn differential calculus. The amazing thing
is that you will probably make it to the end of the semester
without ever being told what a differential is. Most
mathematicians learn that differential calculus is about solving
certain sorts of problems using a derivative, and later courses
called “differential equations” are about solving
more difficult problems in the same basic way. But most never
think about what a differential is, outside of calculus. I didn’t
ever think about what a differential was until later, and I am
not alone. I know this because when I tell people that my new
calculus is based on a constant differential instead of a
diminishing differential, they look at me like I just started
speaking Japanese with a Dutch accent. For them, a differential
is a calculus term, and in calculus the differentials are always
getting smaller. So talking about a differential that does not
get smaller is like talking about a politician that does not lie.
It fails to register.
A differential is one number subtracted from another number: (2 − 1)
is a differential. So is (x − y). A “differential” is
just a fancier term for a “difference”. A
differential is written as two terms and a minus sign, but as a
whole, a differential stands for one number. The differential
(2 − 1) is obviously just 1, for example. So you can see that a
differential is a useful expansion. It is one number written in a
longer form. You can write any number as a differential. The
number five can be written as (8 − 3), or in a multitude of other
ways. We may want to write a single number as a differential
because it allows us to define that differential as some useful
physical parameter. For instance, a differential is most often a
length. Say you have a ruler. Go to the 2-inch mark. Now go to
the 1-inch mark. What is the difference between the two marks? It
is one inch, which is a length. (2 − 1) may be a length. (x − y) may
also be a length. In pure math, we have no lengths, of course,
but in math applied to physics, a differential is very often a
length.
The problem is that
modern mathematicians do not like to teach you math by drawing
you pictures. They do not like to help you understand concepts by
having you imagine rulers or lengths or other physical things.
They want you to get used to the idea of math as completely pure.
They tell you that it is for your own good. They make you feel
like physical ideas are equivalent to pacifiers: you must grow up
and get rid of them. But the real reason is that, starting with
calculus, they can no longer draw you meaningful pictures. They
are not able to make you understand, so they tell you to shut up
and calculate. It is kind of like the wave/particle duality,
another famous concept you have probably already heard of. Light
is supposed to act like a particle sometimes and like a wave at
other times. No one has been able to draw a picture of light that
makes sense of this, so we are told that it cannot be done. But
in another one of my papers I have drawn a picture of light that
makes sense of this, and in this paper I will show you a pretty
little graph that makes perfect sense of the calculus. You will
be able to look at the graph with your own eyes and you will see
where the numbers are coming from, and you will say, “Aha,
I understand. That was easy!”
There is basically
only one equation that you learn in your first semester of
calculus. All the other equations are just variations and
expansions of the one equation. This one equation is also the
basic equation of what you will learn next semester in integral
calculus. All you have to do is turn it upside down, in a way.
This equation is

y’ = nx^{n−1}

This
is the magic equation. What you won’t be told is that this
magic equation was not invented by either Newton or Leibniz. All
they did is invent two similar derivations of it. Both of them
knew the equation worked, and they wanted to put a foundation
under it. They wanted to understand where it came from and why it
worked. But they failed and everyone else since has failed. The
reason they failed is that the equation was used historically to
find tangents to curves, and everyone all the way back to the
ancient Greeks had tried to solve this problem by using a
magnifying glass. What I mean by that is that for millennia, the
accepted way to approach the problem and the math was to try to
straighten out the curve at a point. If you could straighten out
the curve at that point you would have the tangent at that point.
The ancient Greeks had the novel idea of looking at smaller and
smaller segments of the curve, closer and closer to the point in
question. The smaller the segment, the less it curved. Rather
than use a real curve and a real magnifying glass, the Greeks
just imagined the segment shrinking down. This is where we come
to the diminishing differential. Remember that I said the
differential was a length. Well, the Greeks assigned that
differential to the length of the segment, and then imagined it
getting smaller and smaller.
Two thousand years later, nothing had changed. Newton and Leibniz
were still thinking the same way. Instead of saying the segment
was “getting smaller” they said it was “approaching
zero”. That is why we now use the little arrow and the
zero. Newton even made tables, kind of like I will make below. He
made tables of diminishing differentials and was able to pull the
magic equation from these tables.
The problem is that he and everyone else has used the wrong
tables. You can pull the magic equation from a huge number of
possible tables, and in each case the equation will be true and
in each case the table will “prove” or support the
equation. But in only one table will it be clear why the equation
is true. Only one table will be simple enough and direct enough
to show a 16-year-old where the magic equation comes from. Only
one table will cause everyone to gasp and say, “Aha, now I
understand.” Newton and Leibniz never discovered that
table, and no one since has discovered it. All their tables were
too complex by far. Their tables required you to make very
complex operations on the numbers or variables or functions. In
fact, these operations were so complex that even Newton and
Leibniz got lost in them. As I will show after I unveil my table,
Newton and Leibniz were forced to perform operations on their
variables that were actually false. Getting the magic equation
from a table of diminishing differentials is so complex and
difficult that no one has ever been able to do it without making
a hash of it. It can be done, but it isn’t worth doing. If
you can pull the magic equation from a simple table of integers,
why try to pull it from a complex table of functions with strange
and confusing scripts? Why teach calculus as a big hazy mystery,
invoking infinite series, approaches to zero, or
infinitesimals, when you can teach it at a level that is no more
complex than 1+1=2?
So here is the lesson. I will teach
you differential calculus in one day, in one paper. If you have
reached this level of math, the only thing that should look
strange to you in the magic equation is the y’. You know
what an exponent is, and you should know that you can write an
exponent as (n − 1) if you want to. That is just an expansion of a
single number into a differential, as I taught you above. If n = 2,
for instance, then the exponent just equals 1.
Beyond that, “n” is just another variable. It could
be “z” or “a” or anything else. That
variable just generalizes the equation for us, so that it applies
to all possible exponents. All
that is just simple algebra. But you don’t normally have
primed variables in high school algebra. What does the prime
signify? That prime is telling you that y is a different sort of
variable than x. When you apply this magic equation to physics, x
is usually a distance and y is a velocity. A variable could also
be an acceleration, or it could be a point, or it could be just
about anything. But we need a way to remind ourselves that some
variables are one kind of parameter and some variables are
another. So we use primes or double primes and so on.
This is important, because it means that mathematically, a
velocity is not a distance, and an acceleration is not a
velocity. They have to be kept separate. A calculus equation
takes you from one sort of variable to another sort. You cannot
have a distance on both sides of the magic equation, or a
velocity on both sides. If x is a distance, y’ cannot be a
distance, too. Some people
will try to convince you later that calculus can be completely
divorced from physics, or from the real world. They will stress
that calculus is pure math, and that you don’t need to
think of distances or velocities or physical parameters. But if
this were true, we wouldn’t need to keep our variables
separate. We wouldn’t need to keep track of primed
variables, or later double-primed variables and so on. Variables
in calculus don’t just stand for numbers, they stand for
different sorts of numbers, as you see. In pure math, there are
not different sorts of numbers, beyond ordinal and cardinal, or
rational and irrational, or things like that. In pure math, a
counting integer is a counting integer and that is all there is
to it. But in calculus, our variables are counting different
things and we have to keep track of this. That is what the primes
are for. What, you may ask, is
the difference between a length and a velocity? Well, I think you
can probably answer that without the calculus, and probably
without much help from me. To measure a length you don’t
need a watch. To measure velocity, you do. Velocity has a “t”
in the denominator, which makes it a rate of change. A rate is
just a ratio, and a ratio is just one number over another number,
with a slash in between. Basically, you hold one variable steady
and see how the other variable changes relative to it. With
velocity, you hold time steady (all the ticks are the same
length) and see how distance changes during that time. You put
the variable you know more about (it is steady) in the
denominator and the variable you are seeking information about
(you are measuring it) in the numerator. Or, you put the defined
variable in the denominator (time is defined as steady) and the
undefined variable in the numerator (distance is not known until
it is measured).
All this applies to velocity and acceleration as well; the magic
equation works there, too. If x is a
velocity, then y’ is an acceleration. This is because
acceleration is the rate of change of the velocity. Acceleration
is v/t. So you can see that y’ is always the rate of change
of x. Or, y’ is always x/t. This is another reason that
calculus can’t really be divorced completely from physics.
Time is a physical thing. A pure mathematician can say, “Well,
we can say that y’ is always x/z, where z is not time but
just a pure variable.” But in that case, x/z is still a
rate of change. You can refuse to call “z” a time
variable, but you still have the concept of change. A pure number
changing still implies time passing, since nothing can change
without time passing. Mathematicians want “change”
without “time”, but change is time. If a
mathematician can imagine or propose change without time, then he
is cleverer than the gods by half, since he has just separated a
word from its definition.
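A minimal sketch of this rate-of-change idea, in Python (the distance readings are hypothetical, invented only for illustration): hold the time ticks steady in the denominator and watch how the measured distance changes in the numerator.

```python
# Hypothetical distance readings taken at equal one-second ticks.
distances = [0.0, 3.0, 6.0, 9.0, 12.0]   # meters, the measured variable
dt = 1.0                                  # seconds per tick, held steady

# A rate is a ratio: change in distance over change in time.
velocities = [(b - a) / dt for a, b in zip(distances, distances[1:])]
print(velocities)   # prints [3.0, 3.0, 3.0, 3.0]: a steady 3 m/s
```

The defined variable (time) sits in the denominator and the variable being measured (distance) sits in the numerator, exactly as described above.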
At any rate, I think you are
already in a better position to understand the calculus than any
math student in history. Whether you like that little diversion
into time and change is really beside the point, since even if
you believe in pure math it doesn’t affect my argument.
All the famous mathematicians
in history have studied the curve in order to study rate of
change. To develop the calculus, they have taken some length of
some curve and then let that length diminish. They have studied
the diminishing differential, the differential approaching zero.
This approach to zero gives them an infinite series of
differentials, and they apply a method to the series in order to
understand its regression. But
it is much more useful to notice that curves always concern
exponents. Curves are all about exponents, and so is the
calculus. So what I did is study integers and exponents, in the
simplest situations. I started by letting z equal some point. If
I let a variable stand for a point, then I have to have a
different sort of variable stand for a length, so that I don’t
confuse a point and a length. The normal way to do this is to let
a length be Δz (read “change in z”). I want
lengths instead of points, since points cannot be differentials.
Lengths can. You cannot think of a point as (x − y). But if x and y
are both points, then (x − y) will be a length, you see.
In the first line of my table, I list the possible integer values
of Δz. You can see that this is just a list of the
integers, of course. Next I list some integer values for other
exponents of Δz. This is also straightforward. At line 7, I
begin to look at the differentials of the previous six lines. In
line 7, I am studying line 1, and I am just subtracting each
number from the next. Another way of saying it is that I am
looking at the rate of change along line 1. Line 9 lists the
differentials of line 3. Line 14 lists the differentials of line
9. I think you can follow my logic on this, so meet me down
below.
1   Δz              1, 2, 3, 4, 5, 6, 7, 8, 9….
2   Δ2z             2, 4, 6, 8, 10, 12, 14, 16, 18….
3   Δz^{2}          1, 4, 9, 16, 25, 36, 49, 64, 81
4   Δz^{3}          1, 8, 27, 64, 125, 216, 343
5   Δz^{4}          1, 16, 81, 256, 625, 1296
6   Δz^{5}          1, 32, 243, 1024, 3125, 7776, 16807
7   ΔΔz             1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1
8   ΔΔ2z            2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
9   ΔΔz^{2}         1, 3, 5, 7, 9, 11, 13, 15, 17, 19
10  ΔΔz^{3}         1, 7, 19, 37, 61, 91, 127
11  ΔΔz^{4}         1, 15, 65, 175, 369, 671
12  ΔΔz^{5}         1, 31, 211, 781, 2101, 4651, 9031
13  ΔΔΔz            0, 0, 0, 0, 0, 0, 0
14  ΔΔΔz^{2}        2, 2, 2, 2, 2, 2, 2, 2, 2, 2
15  ΔΔΔz^{3}        6, 12, 18, 24, 30, 36, 42
16  ΔΔΔz^{4}        14, 50, 110, 194, 302
17  ΔΔΔz^{5}        30, 180, 570, 1320, 2550, 4380
18  ΔΔΔΔz^{3}       6, 6, 6, 6, 6, 6, 6, 6
19  ΔΔΔΔz^{4}       36, 60, 84, 108
20  ΔΔΔΔz^{5}       150, 390, 750, 1230, 1830
21  ΔΔΔΔΔz^{4}      24, 24, 24, 24
22  ΔΔΔΔΔz^{5}      240, 360, 480, 600
23  ΔΔΔΔΔΔz^{5}     120, 120, 120

From this, one can predict that

24  ΔΔΔΔΔΔΔz^{6}    720, 720, 720

And so on.
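The table above can be rebuilt mechanically. Here is a short Python sketch of that procedure (my own illustration, not part of the original text): it differences each integer power row until the row goes constant. One assumption to note: it starts each row at 1^p rather than with the author's implied leading term, but it lands on the same constants.

```python
def differences(seq):
    """One Delta step: subtract each number from the next."""
    return [b - a for a, b in zip(seq, seq[1:])]

def constant_row(power, terms=14):
    """Difference the row 1**p, 2**p, 3**p, ... until it is constant.
    Returns (the constant value, how many difference steps it took)."""
    row = [n ** power for n in range(1, terms)]
    steps = 0
    while len(set(row)) > 1:
        row = differences(row)
        steps += 1
    return row[0], steps

for p in range(1, 7):
    value, steps = constant_row(p)
    print(f"z^{p} goes constant at {value} after {steps} difference steps")
```

The constants 1, 2, 6, 24, 120, 720 are the same ones picked out of lines 7, 14, 18, 21, 23, and 24 of the table.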
Again,
this is what you call simple number analysis. It is a table of
differentials. The first line is a list of the potential integer
lengths of an object, and a length is a differential. It is also
a list of the integers, as I said. After that it is easy to
follow my method. It is easy until you get to line 24, where I
say, “One can predict that. . . .” Do you see how I
came to that conclusion? I did it by pulling out the lines where
the differential became constant.

7   ΔΔz             1, 1, 1, 1, 1, 1, 1
14  ΔΔΔz^{2}        2, 2, 2, 2, 2, 2, 2
18  ΔΔΔΔz^{3}       6, 6, 6, 6, 6, 6, 6
21  ΔΔΔΔΔz^{4}      24, 24, 24, 24
23  ΔΔΔΔΔΔz^{5}     120, 120, 120
24  ΔΔΔΔΔΔΔz^{6}    720, 720, 720
Do you see it now?

2ΔΔz = ΔΔΔz^{2}
3ΔΔΔz^{2} = ΔΔΔΔz^{3}
4ΔΔΔΔz^{3} = ΔΔΔΔΔz^{4}
5ΔΔΔΔΔz^{4} = ΔΔΔΔΔΔz^{5}
6ΔΔΔΔΔΔz^{5} = ΔΔΔΔΔΔΔz^{6}
All these equations are equivalent to the magic equation,
y’ = nx^{n−1}.
In any of those equations, all we have to do is let x equal the
right side and y’ equal the left side. No matter what
exponents we use, the equation will always resolve into our magic
equation.
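That resolution can be checked numerically. In this hedged Python sketch (mine, not the author's), the constant differential of z^n comes out as exactly n times the constant differential of z^{n−1}, which is the content of the magic equation.

```python
def differences(seq):
    # Subtract each number from the next, as in the table.
    return [b - a for a, b in zip(seq, seq[1:])]

def constant_diff(power, terms=16):
    # Difference the row 1**p, 2**p, ... until every value is the same.
    row = [n ** power for n in range(1, terms)]
    while len(set(row)) > 1:
        row = differences(row)
    return row[0]

# The pattern behind y' = n * x^(n-1):
for n in range(2, 7):
    assert constant_diff(n) == n * constant_diff(n - 1)
print("n * const(z^(n-1)) == const(z^n) holds for n = 2 through 6")
```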
If I know anything about teenagers, I will
expect this reaction: “Well, sir, that may be a great
simplification of Newton, for all we know, but it is not exactly
1+1=2.” Fair enough. It may take a bit of sorting through.
But I assure you that compared to the derivation you will learn
in school, my table is a miracle of simplicity and transparency.
Not only that, but I will continue to simplify and explain. Since
in those last equations we have z on both sides, we can cancel a
lot of those deltas and get down to this:

2z = Δz^{2}
3z^{2} = Δz^{3}
4z^{3} = Δz^{4}
5z^{4} = Δz^{5}
6z^{5} = Δz^{6}
Now,
if we reverse it, we can read that first equation as, “the
rate of change of z squared is two times z.” That is
information that we just got from a table, and that table just
listed numbers. Simple differentials. One number subtracted from
the next. This is useful to us
because it is precisely what we were looking for when we wanted
to learn calculus. We use the calculus to tell us what the rate
of change is for any given variable and exponent. Given an x, we
seek a y’, where y’ is the rate of change of x. And
that is what we just found. Currently, calculus calls y’
the derivative, but that is just fancy terminology that does not
really mean anything. It just confuses people for no reason. The
fact is, y’ is a rate of change, and it is better to
remember that at all times.
You may still have one very
important question. You will say, “I see where the numbers
are coming from, but what does it mean?
Why are we selecting the lines in the table where the numbers are
constant?” We are going to those lines, because in those
lines we have flattened out the curve. If the numbers are all the
same, then we are dealing with a straight line. A constant
differential describes a straight line instead of a curve. We
have dug down to that level of change that is constant, beneath
all our other changes. As you can see, in the equations with a
lot of deltas, we have a change of a change of a change. . . . We
just keep going down to sub-changes until we find one that is
constant. That one will be the tangent to the curve. If we want
to find the rate of change of the exponent 6, for instance, we
only have to dig down 7 sub-changes. We don’t have to
approach zero at all. In a way
we have done the same thing that the Greeks were doing and that
Newton was doing. We have flattened out the curve. But we did not
use a magnifying glass to do it. We did not go to a point, or get
smaller and smaller. We went to sub-changes, which are a bit
smaller, but they aren’t anywhere near zero. In fact, to
get to zero, you would have to have an infinite number of deltas,
or sub-changes. And this means that your exponent would have to
be infinity itself. Calculus never deals with infinite exponents,
so there is never any conceivable reason to go to zero. We don’t
need to concern ourselves with points at all. Nor do we need to
talk of infinitesimals or limits. We
don't have an infinite series, and we don't have any vanishing
terms. We
have a definite and limited series, one that is 7 terms long with
the exponent 6 and only 3 terms long with the exponent 2.
I
hope you can see that the magic equation is just a generalization
of all the constant differential equations we pulled from the
table. To “invent” the calculus, we don’t have
to derive the magic equation at all. All we have to do is
generalize a bunch of specific equations that are given us by the
table. By that I mean that the magic equation is just an equation
that applies to all similar situations, whereas the specific
equations only apply to specific situations (as when the exponent
is 2 or 3, for example). By using the further variable “n”,
we are able to apply the equation to all exponents. Like
this:

nz^{n−1} = Δz^{n}
And we don’t have to
prove or derive the table either. The table is true by
definition. Given the definition of integer and exponent, the
table follows. The table is axiomatic number analysis of the
simplest kind. In this way I have shown that the basic equation
of differential calculus falls out of simple number relationships
like an apple falls from a tree.
Even pure mathematicians can have nothing to say against my
table, since it has no necessary physical content. I call my
initial differentials lengths, but that is to suit myself. You
can subtract all the physical content out of my table and it is
still the same table and still completely valid.
We don’t need to consider any infinite series, we don’t
need to analyze differentials approaching zero in any strange
way, we don’t need to think about infinitesimals, we don’t
need to concern ourselves with functions, we don’t need to
learn weird notations with arrows pointing to zeros underneath
functions, and we don’t need to notate functions with
parentheses and little “f’s”, as in f(x). But
the most important thing we can ditch is the current derivation
of the magic equation, since we have no need of it. I will show
you that this is important, because the current derivation is
gobbledygook.
I am once again making a very big claim,
but once again I can prove it, in very simple language. Let’s
look at the current derivation of the magic equation. This
derivation is a simplified form of Newton’s derivation, but
conceptually it is exactly the same. Nothing important has
changed in 350 years. This is the derivation you will be taught
this semester. The figure δ stands for “a very small
change”. It is the lower-case Greek “d”, which
is called delta. The upper-case form is Δ, remember, which is a
capital delta. Sometimes the two are used interchangeably, and
you may see the derivation below with Δ instead of δ.
You may even see it with the letter “d”. I will not
get into which is better and why, since in my opinion the
question is moot. After today we can ditch all three.
Anyway, we start by taking any functional equation. “Functional”
just means that y depends upon x in some way. Think of how a
velocity depends on a distance. To measure a velocity you need to
know a distance, so that velocity is a function of distance. But
distance is not a function of velocity, since you can measure a
distance without being concerned at all about velocity. So, we
take any functional equation, say

y = x^{2}

Increase it by δy and δx to obtain

y + δy = (x + δx)^{2}

Subtract the first equation from the second:

δy = (x + δx)^{2} − x^{2} = 2xδx + δx^{2}

Divide by δx:

δy/δx = 2x + δx

Let δx go to zero (only on the right side, of course):

δy/δx = 2x

y’ = 2x
That is how they currently derive
the magic equation. Any teenager, or any honest person, will look
at that series of operations and go, “What the. . . ?”
How can we justify all those seemingly arbitrary operations? The
answer is, we can’t. As it turns out, precisely none of
them are legal. But Newton used them, he was a very smart guy,
and we get the equation we want at the end. So we still teach
that derivation. We haven’t discovered anything better, so
we just keep teaching that.
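Whatever one thinks of its legality, the numerical behavior behind that last step can be watched directly. A small Python sketch (my illustration; f(x) = x^2 and the point x = 3 are arbitrary choices, not from the text): the ratio δy/δx equals 2x + δx exactly, so the values close in on 2x as δx shrinks.

```python
def difference_quotient(f, x, dx):
    """The ratio delta-y / delta-x for the function f at the point x."""
    return (f(x + dx) - f(x)) / dx

f = lambda v: v ** 2
x = 3.0
for dx in (1.0, 0.1, 0.01, 0.001):
    # Each ratio is exactly 2x + dx, heading toward 2x = 6.
    print(dx, difference_quotient(f, x, dx))
```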
Let me run through the operations quickly, to show you what is
going on. We only have four operations, so it isn’t that
difficult, really. Historically, only the last operation has
caused people to have major headaches. Newton was called on the
carpet for it soon after he published it, by a clever bishop
named Berkeley. Berkeley didn’t like the fact that δx
went to zero only on the right side. But no one could sort
through it one way or the other and in a few decades everyone
just decided to move on. They accepted the final equation because
it worked and swept the rest under the rug.
But what I will show you is that the derivation is lost long
before the last operation. That last operation is indeed a big
cheat, but mathematicians have put so many coats of pretty paint
on it that it is impossible to make them look at it clearly
anymore. They answer that δx is part of a ratio on the left
side, and because of that it is sort of glued to the δy
above it. They say that δy/δx must be considered as
one entity, and they say that this means it is somehow unaffected
by taking δx to zero on the right side. That is math by
wishful thinking, but what are you going to do?
To get them to stand up and take notice, I have been forced to
show them the even bigger cheats in the previous steps.
Amazingly, no one in all of history has noticed these bigger
cheats, not even that clever bishop. So let us go through all the
steps. In the first equation,
the variables stand for either “all possible points on the
curve” or “any possible point on the curve.”
The equation is true for all points and any point. Let us take
the latter definition, since the former doesn’t allow us
any room to play. So, in the first equation, we are at “any
point on the curve”. In the second equation, are we still
at any point on the same curve? Some will think that (y + δy) and (x + δx) are the coordinates of another any-point on the curve, this any-point lying some distance further along the curve than the first any-point. But a closer examination will show that the second equation does not express the same curve as the first. The any-point expressed by the second equation is not on the curve y = x^{2}. In fact, it must be exactly δy off that first curve. Since this is true, we must ask why we would want to subtract the first equation from the second. Why would we want to subtract an any-point on a curve from an any-point off that curve? Furthermore,
in going from equation 1 to equation 2, we have added different
amounts to each side. This is not normally allowed. Notice that
we have added δy to the left side and 2xδx + δx^{2}
to the right side. This might have been justified by some
argument if it gave us two anypoints on the same curve, but it
doesn’t. We have completed an illegal operation for no
apparent reason. Now we
subtract the first any-point from the second any-point. What do we get? Well, we should get a third any-point. What are the coordinates of this third any-point? It is impossible to say,
since we got rid of the variable y. A coordinate is in the form
(x,y) but we just subtracted away y. You must see that δy
is not the same as y, so who knows if we are off the curve or on
it. Since we subtracted a point on the first curve from a point
off that curve, we would be very lucky to have landed back on the
first curve, I think. But it doesn’t matter, since we are
subtracting points from points. Subtracting points from points is
illegal anyway. If you want to get a length or a differential you
must subtract a length from a length or a differential from a
differential. Subtracting a point from a point will only give you
some sort of zero—another point. But we want δy to
stand for a length or differential in the third equation, so that
we can divide it by δx. As the derivation now stands, δy
must be a point in the third equation.
Yes, δy is now a point. It is not a change-in-y in the sense that the calculus wants it to be. It is no longer the difference between two points on the curve. It is not a differential! Nor is it an increment or interval of any kind. It is not a length; it is a point. What can it possibly mean for an any-point
to approach zero? The truth is it doesn’t mean anything. A
point can’t approach a zero length since a point is already
a zero length. Look at the
second equation again. The variable y stands for a point, but the
variable δy stands for a length or an interval. But if y is
a point in the second equation, then δy must be a point in
the third equation. This makes dividing by δx in the next
step a logical and mathematical impossibility. You cannot divide
a point by any quantity whatsoever, since a point is indivisible
by definition. The final step, letting δx go to zero, cannot be defended whether you are taking only the denominator on the left side to zero or whether you are taking the whole fraction toward zero (which has been the claim of most). The ratio δy/δx was already compromised in
the previous step. The problem is not that the denominator is
zero; the problem is that the numerator is a point. The
numerator is zero.
My new method drives right around this mess by
dispensing with points altogether. You can see that the big
problem in the current derivation is in trying to subtract one
point from another. But you cannot subtract one point from
another, since each point acts like a zero. Every point has zero
extension in every direction. If you subtract zero from zero you
can only get zero. You will
say that I subtracted one point from another above (x − y) and got
a length, but that is only because I treated each variable as a
length to start with. Each “point” on a ruler or
curve is actually a length from zero, or from the end of the
ruler. Go to the “point” 5 on the ruler. Is that
number 5 really a point? No, it is a length. The number 5 is
telling you that you are five inches from the end of the ruler.
The number 5 belongs to the length, not the point. Which means
that the variable x, that may stand for 5 or any other number on
the ruler, actually stands for a length, not a point. This is
true for curves as well as straight lines or rulers. Every curve
is like a curved ruler, so that all the numbers at “points”
on the curve are actually lengths.
You may say, “Well, don’t current mathematicians know
that? Doesn’t the calculus take that into account? Can’t
you just go back into the derivation above and say that y is a
length from zero instead of a point, which means that in the
third equation δy is a length, which means that the
derivation is saved?” Unfortunately, no. You can’t
say any of those things, since none of them are true. The
calculus currently believes that y’ is an instantaneous
velocity, which is a velocity at a point and at an instant. You
will be taught that the point y is really a point in space, with
no time extension or length. Mathematicians believe that the
calculus curve is made up of spatial points, and physicists of
all kinds believe it, too. That is why my criticism is so
important, and why it cannot be squirmed out of. The variable y
is not a length in the first equation of the derivation, and this
forces δy to be a point in the third equation.
A differential stands for a length only if the two terms in the
differential are already lengths. They must both have extension.
Five inches minus four inches is one inch. Everything in that
sentence is a length. But the fifth-inch mark minus the fourth-inch mark is not the one-inch mark, nor is it the length
one inch. A point minus a point is a meaningless operation. It is
like 0 – 0. This is the
reason I was careful to build my table only with lengths. I don’t
use points. This is because I discovered that you can’t
assign numbers to points. If you can’t assign numbers to
points, then you can’t assign variables or functions to
points. When I was building my table above, I kind of blew past
this fact, since I didn’t want to confuse you with too much
theory. My table is all lengths, but I didn’t really tell
you why it had to be like that. Now, however, I think you are
ready to notice that points can’t really enter equations or
tables at all. Only ordinal numbers can be applied to points.
These are ordinal numbers: 1st, 2nd, 3rd. The fifth point, the
eighth point, and so on. But math equations apply to cardinal or
counting numbers, 1, 2, 3. You can’t apply a counting
number to a point. As I showed with the ruler, any time you apply
a counting number to a “point” on the ruler, that
number attaches to the length, not the point. The number 5 means
five inches, and that is a length from zero or from the end of
the ruler. It is the same with all lines and curves. And this
applies to pure math as well as to applied math. Even if your
lines and curves are abstract, everything I say here still
applies in full force. The only difference is that you no longer call these differentials lengths; you call them intervals or increments or something.
The students will now say,
“Can’t you go back yourself and redefine all the
points as lengths, in the existing derivation? Can’t you
fix it somehow?” The
answer is no. I can’t. I have shown you that Newton
cheated on all four steps, not just the last one. You can’t
“derive” his last equation from his first by applying
a series of mathematical operations to them like this, and what
is more, you don’t need to. I have shown with my table that
you don’t need to derive the magic equation since it just
drops out of the definition of exponent fully formed. The
equation is axiomatic. What I mean by that is that it really is
precisely like the equation 1+1=2. You don’t need to derive
the equation 1+1=2, or prove it. You can just pull it from a
table of apples or oranges and generalize it. It is definitional.
It is part of the definition of number and equality. In the same
way, the magic equation is a direct definitional outcome of
number, equality, and exponent. Build a simple table and the
equation drops out of it without any work at all.
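The table itself is not reproduced in this section, but a minimal sketch of what such a table of differences can look like (my construction, assuming unit steps, not the paper's actual table) shows a pattern falling out with no derivation: the differences of consecutive squares are the odd numbers, that is, 2x + 1.

```python
# Tabulate x^2 at unit steps, then take consecutive differences.
xs = list(range(7))
squares = [x ** 2 for x in xs]                         # 0, 1, 4, 9, 16, 25, 36
diffs = [b - a for a, b in zip(squares, squares[1:])]  # 1, 3, 5, 7, 9, 11
print(diffs)  # the odd numbers: each difference is 2x + 1
```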
If you
must have a derivation, the simplest possible one is this one:
We are given a functional equation of the general sort

y = x^{n}

and we seek y′, where, by definition,

y′ = Δx^{n}

Then we go to our generalized equation from the table, which is

nx^{n−1} = Δx^{n}

By substitution, we get

y′ = nx^{n−1}
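Whatever one makes of the two derivations, the final equation itself is easy to check numerically. This sketch (names are mine, not from the text) compares a small-step difference quotient for y = x^{n} against nx^{n−1} for several exponents:

```python
# Compare a small-step slope of y = x^n against n * x^(n-1).
def numeric_slope(n, x, dx=1e-6):
    return ((x + dx) ** n - x ** n) / dx

for n in (2, 3, 6):
    x = 1.5
    print(n, numeric_slope(n, x), n * x ** (n - 1))  # the two columns agree closely
```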
That’s all we need. But I will give you one other
piece of information that will come in handy later. Remember how
we cancelled all those deltas, to simplify the first equations
coming out of the table? Well, we did that just to make things
look tidier, and to make the equations look like the current
calculus equations. But those deltas are really always there. You
can cancel them if you want to clean up your math, but when you
want to know what is going on physically, you have to put them
back in. What they tell you is that when you are dealing with big
exponents, you are dealing with very complex accelerations. Once
you get past the exponent two, you aren’t dealing with
lengths or velocities anymore. The variable x to the exponent 6
will have 7 deltas in front of it, as you can see by going back
to the table. That is a very high degree of acceleration. Three
deltas is a velocity. Four is an acceleration. Five is a variable
acceleration. Six is a change of a variable acceleration. And so
on. Most people can’t really visualize anything beyond a
variable acceleration, but high exponent variables do exist in
nature, which means that you can go on changing changes for quite
a while. If you go into physics or engineering, this knowledge
may be useful to you. A lot of physicists appear to have
forgotten that accelerations are often variable to high degrees.
They assume that every acceleration in nature is a simple
acceleration.
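The stacked deltas can be pictured by repeated differencing at unit steps (a sketch of my own, not the paper's table): each round of differencing strips off one order of change, and a column of x^{6} values only flattens to a constant after six rounds.

```python
# Difference a column of x^6 values six times; the result is constant.
def diff(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

col = [x ** 6 for x in range(10)]
for _ in range(6):
    col = diff(col)
print(col)  # every remaining entry is 720, i.e. 6 factorial
```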
In my long paper I covered a lot of other
interesting topics, but I will only mention one more of them
here. I have told you a bit about quantum mechanics above, so I
will give you a clue about the end of that story, too. QED hit a
wall about 20 years ago, and that is why all the big names are
now working on string theory. String theory is a horrible mess,
one that makes the mess of calculus look like spilled milk. But
one of the main reasons it was invented was to save QED from the
point. This problem I have solved for you about the point is
exactly the same one that cold-cocked QED. All of physics is
dependent on calculus and its offshoots, and using calculus with
points in the equations has ended up driving everyone a little
mad. The only way that physicists could make the equations of QED
keep working is by performing silly operations on them, like the
ones that Newton performed in his derivation. These operations in
QED are called “renormalization”. That is a big word
for fudging. The inventor of renormalization was the same Richard
Feynman who I told you about above. His students are still
finding new ways to renormalize equations that won’t work
in normal ways. Mr. Feynman was a big mess maker, but he did have
the honesty to at least admit it, regarding renormalization. He
himself called it “hocus pocus” and a “dippy
process” that was “not mathematically legitimate.”
It would have been nice if Newton or Leibniz or Cauchy had had
the intellectual honesty to say the same about the calculus
derivation. The reason this
should be interesting to you is that my correction to the
calculus solves all the problems of QED at one blow, although
they haven’t figured that out yet.* Just by reading this
paper you are now smarter than all the “geniuses”
fudging giant equations. With your new knowledge, you can go to
college, wade briskly through all the muck, and start putting the
house in order. Your understanding of calculus and the point will
allow you to climb ladders that no one even knew existed. So
please remember me when you get to the top. And don't dump any
more garbage that might land on my head.
Addendum:
Here is an email I got from a reader, confirming that—at
least for some—my method does indeed make calculus
transparent at last:
Miles,
I
just wanted to say thank you for teaching me calculus in a day
(actually it was really only a couple of hours but who's
counting). I came across your work recently and have been
devouring it page by page. I especially love your candor
(frankness, honesty, truthfulness, sincerity, bluntness,
straightforwardness etc.). When I was in high school, I got 80's
and 90's in all my maths and sciences, except for calculus where
I got a 50. I had no idea what was going on. Luckily, I still
got into university where I again got 50 in first year calculus.
I redid the class and got 95 but that is only because I gave up
trying to understand calculus and just memorized the rules. Even
after graduating from university, and after 30 years working as a
computer scientist doing advanced imaging and robotics, the
calculus still mystified me. That was until a couple of days ago
when I read your pages on calculus. Now I am 100 percent certain
I understand differential calculus.
I'm
sure this will go a long ways in helping me understand physics
which has also mystified me for many years. I have been trying
to redefine (redivine) physics from the perspective of the
fractal paradigm (see attached paper). As a computer scientist
with expertise in graphics, I have always known that there is no
such thing as a point particle or a continuous curve. All
particles have extent (pixel/voxel size), and all curves are
generated using line segments (MOVETO, LINETO).
I
just wanted to let you know that you did help someone and that
someone out there does care.
Sincerely,
Lori Gardi
*The
"uncertainty" of quantum mechanics is due (at least in
part) to the math and not to the conceptual framework. That is to
say, the various difficulties of quantum physics are primarily
problems of a misdefined Hilbert space and a misused mathematics
(vector algebra), and not problems of probabilities or
philosophy. My correction to the calculus allows for a fix of all
higher maths, spaces, and theories.
If this paper was useful to you in
any way, please consider donating a dollar (or more) to the SAVE
THE ARTISTS FOUNDATION. This will allow me to continue writing
these "unpublishable" things. Don't be confused by
paying Melisa Smith; that is just one of my many noms de
plume. If you are a Paypal user, there is no fee; so it might
be worth your while to become one. Otherwise they will rob us 33
cents for each transaction.