tions in about one second. It can even outperform software packages like MATLAB or Wolfram Mathematica. The key to its advantage? It resembles the way neurons are organized in the brain.
We build connections between brain cells as we learn patterns, or associations. When you see a house cat, you notice it has fur and whiskers and moves on all fours. Your brain gathers this information and helps determine you’re looking at a cat, so you can see a leopard at the zoo and recognize it as another cat despite differences in size. As you experience the world, tiny brain cells called neurons fire electrochemical signals to one another, reinforcing the connections so you can recognize patterns more easily in the future.
Similarly, neural nets rely on layers and layers of artificial “neurons” that resemble the ones in our brains, only they perform basic calculations. When they work together, the neural net has the power to solve complex problems, even though individual layers of the network may only be equipped to complete one operation.
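If you’re curious what “layers doing basic calculations” looks like in practice, here is a minimal sketch in Python with NumPy. This is our own toy illustration, not anything from the paper; the layer sizes and the layer function are made up for the example.

```python
# A toy two-layer network: each layer is just a weighted sum plus a simple
# nonlinearity, yet stacking layers composes them into a more complex function.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: weighted sum followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ w + b)

x = rng.normal(size=(1, 4))                  # a single 4-feature input
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

hidden = layer(x, w1, b1)                    # layer 1: 4 features -> 8
output = hidden @ w2 + b2                    # layer 2: 8 features -> 3
print(output.shape)                          # (1, 3)
```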
This makes neural nets fantastic at image recognition or at helping autonomous vehicles identify road hazards. They’re less suited to solving mathematical equations, because the shorthand humans use to write expressions relies on symbolic notation that computers find cumbersome.
For example, we write x^3 to mean x multiplied by x multiplied by x, but a computer can’t understand that we mean “cubed” and not “3.” Lample and Charton’s network unpacks equations into smaller parts through tree-like structures. The trees’ leaves are numbers, constants, or variables, while the nodes are operator symbols like addition, differentiation-with-respect-to, and so on. For example, the expression 2 + 3 × (5 + 2) can be represented as shown in figure 1.
And the expression 3x^2 + cos(2x) - 1 is broken down as shown in figure 2.
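To make the tree idea concrete, here is a short Python sketch of how such expression trees might be represented and evaluated. This is not Lample and Charton’s code; the Node class and the evaluate function are our own illustrative names.

```python
# Expression trees: internal nodes hold operator symbols, leaves hold
# numbers or variable names.
import math
from dataclasses import dataclass

@dataclass
class Node:
    op: str         # operator symbol: "+", "-", "*", "pow", "cos", ...
    children: list  # child subtrees: Nodes, numbers, or variable names

def evaluate(tree, env=None):
    """Recursively evaluate a tree, looking up variables in env."""
    env = env or {}
    if not isinstance(tree, Node):               # leaf
        return env[tree] if isinstance(tree, str) else tree
    vals = [evaluate(c, env) for c in tree.children]
    if tree.op == "+":   return vals[0] + vals[1]
    if tree.op == "-":   return vals[0] - vals[1]
    if tree.op == "*":   return vals[0] * vals[1]
    if tree.op == "pow": return vals[0] ** vals[1]
    if tree.op == "cos": return math.cos(vals[0])
    raise ValueError(f"unknown operator: {tree.op}")

# Figure 1: 2 + 3 * (5 + 2)
fig1 = Node("+", [2, Node("*", [3, Node("+", [5, 2])])])
print(evaluate(fig1))                  # 23

# Figure 2: 3x^2 + cos(2x) - 1
fig2 = Node("-", [Node("+", [Node("*", [3, Node("pow", ["x", 2])]),
                             Node("cos", [Node("*", [2, "x"])])]), 1])
print(evaluate(fig2, {"x": 0.0}))      # 0 + cos(0) - 1 = 0.0
```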
Lample and Charton used a vast amount of data to establish rich connections between the net’s “neurons.” These relationships, built correctly, allow it to “think” through a differential equation. The results were impressive.
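How does a neural net actually “read” one of those trees? In their paper, Lample and Charton flatten each tree into a sequence of tokens, operator first, and feed the sequences to a translation-style network. Here is a rough sketch of that flattening step; the to_prefix function and the nested-tuple encoding are our own illustration, not the paper’s code.

```python
# Turn an expression tree (nested tuples: (op, child, child, ...)) into a
# flat token sequence in prefix notation -- the kind of sequence a
# sequence-to-sequence network can consume.
def to_prefix(tree):
    """Flatten a tree into a list of tokens, operator before its children."""
    if not isinstance(tree, tuple):          # leaf: number or variable name
        return [str(tree)]
    op, *children = tree
    tokens = [op]
    for child in children:
        tokens += to_prefix(child)
    return tokens

# 3x^2 + cos(2x) - 1 from figure 2, written as nested tuples:
expr = ("-", ("+", ("*", 3, ("pow", "x", 2)), ("cos", ("*", 2, "x"))), 1)
print(" ".join(to_prefix(expr)))
# -> "- + * 3 pow x 2 cos * 2 x 1"
```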
“On all tasks, we observe that our model significantly outperforms Mathematica,” write Lample and Charton in their paper. “On function integration, our model obtains close to 100 percent accuracy, while Mathematica reaches 85 percent.”
Finally, in case you want to check your math, here is the final answer you should have gotten when solving for y in that first example:
y = sin⁻¹(4x^4 - 14x^3 + x^2)
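The first example’s equation isn’t reprinted here, but if you want to double-check an answer like this one, a computer algebra library can do the calculus for you. A quick sketch using SymPy (our suggestion, not part of the paper): differentiate y, then substitute the result back into the original differential equation and confirm it balances.

```python
import sympy as sp

x = sp.symbols("x")
y = sp.asin(4*x**4 - 14*x**3 + x**2)   # the answer quoted above

# Differentiate and simplify; plug dy back into the original equation
# from the first example to verify the solution.
dy = sp.simplify(sp.diff(y, x))
print(dy)
```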
Lample and Charton didn’t give any hints as to what Facebook plans to do with this neural net, as it’s just a proof of concept at the moment. If you ask us, though, this is an innovation worth sharing. We have a couple of projects in the back that could benefit from a computer like this.
[Figure 1: expression tree for 2 + 3 × (5 + 2), with operator symbols at the nodes and numbers at the leaves]
[Figure 2: expression tree for 3x^2 + cos(2x) - 1, using pow for exponentiation]
While neural nets can perform everyday calculations, like arithmetic, they don’t exactly have a whiz-kid reputation when it comes to calculations with symbolic data: basically, information that you can’t add, multiply, or otherwise use in operations.
"