# Solving the Three Body Problem

The three-body problem is famous for being impossible to solve. But actually it's been solved many times, and in ingenious ways. Some of those solutions are incredibly useful, and some are incredibly bizarre.

Physics – and arguably all of science – changed forever in 1687 when Isaac Newton published his Principia. Within it were equations of motion and gravity that transformed our erratic-seeming cosmos into a perfectly tuned machine of clockwork predictability. Given the current positions and velocities of the bodies of the solar system, Newton's equations could in principle be used to calculate their locations at any distant time, future or past. I say "in principle" because the reality is not so simple. Despite the beauty of Newton's equations, they lead to a simple solution for planetary motion in only one case – when two and only two bodies orbit each other sans any other gravitational influence in the universe. Add just one more body and in most cases all motion becomes fundamentally chaotic – there exists no simple solution. This is the three-body problem, and we've been trying to solve it for 300 years.

What does it mean to find a solution to the three-body problem? Newton's laws of motion and his law of universal gravitation give us a set of differential equations. In some cases these can be solved with Newton's other great invention – calculus – to give a simple equation. Plug numbers into that equation and it's solved. Those numbers are the starting positions and velocities of your gravitating bodies, plus a value for time. The equations will then give you the state of the system at that time, no matter how far in the past or future. We call such a simple, exactly solvable equation an analytic expression. That just means it can be written out with a finite number of mathematical operations and functions.

In the case of two gravitating bodies, the solutions to Newton's laws are just the equations for the path traveled by the bodies – be it the parabola of a thrown ball, the circle or ellipse of a planetary orbit, or the hyperbola of an interstellar comet. In general these are conic sections – the shapes you get when you slice up a cone. These solutions were so simple that Johannes Kepler figured out much about the elliptical solution for planetary motion 70 years before Newton's laws were even known.

After the Principia was published, many sought simple, analytic solutions for more complex systems, with systems of three gravitating bodies being the natural next step. But the additional influence of even a single extra body appeared to make an exact solution impossible. The three-body problem became an obsession for many great mathematicians, yet over the following three centuries solutions have been found for only a few specialized cases. Why? Well, in the late 1800s, the mathematicians Ernst Bruns and Henri Poincaré convincingly argued that no general analytic solution exists. The reality of the three-body problem is that the evolution of almost all starting configurations is dominated by chaotic dynamics. Future states are highly dependent on small changes in the initial conditions. Orbits tend towards wild and unpredictable patterns, and almost inevitably one of the bodies is eventually ejected from the system.

But despite the apparent hopelessness, there was much profit in learning to predict the gravitational motion of many bodies. For most of the three centuries since Newton, predicting the motion of the planets and the Moon was critical for nautical navigation.
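Which conic section a two-body orbit traces can be read straight off the initial conditions. Here's a minimal sketch using the vis-viva energy; the constant `MU_SUN` is an assumed approximate value for the Sun's standard gravitational parameter, not a figure from the video:

```python
import math

# Standard gravitational parameter mu = G * M; assumed approximate
# value for the Sun, in m^3/s^2.
MU_SUN = 1.327e20

def classify_orbit(r, v, mu):
    """Classify a two-body orbit from separation r (m) and relative speed v (m/s).

    Specific orbital energy eps = v^2/2 - mu/r:
    eps < 0 -> bound ellipse, eps > 0 -> hyperbola, eps == 0 -> parabola.
    """
    eps = 0.5 * v**2 - mu / r
    if eps < 0:
        return "ellipse"
    if eps > 0:
        return "hyperbola"
    return "parabola"

r_earth = 1.496e11            # ~1 AU in metres
print(classify_orbit(r_earth, 2.98e4, MU_SUN))        # ellipse (Earth-like orbit)
v_esc = math.sqrt(2 * MU_SUN / r_earth)               # parabolic (escape) speed
print(classify_orbit(r_earth, 1.1 * v_esc, MU_SUN))   # hyperbola
```

Anything slower than the local escape speed stays bound on an ellipse; anything faster leaves on a hyperbola, like an interstellar comet.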

Now it's essential to space travel. How do we do it? Well, even though the three-body problem for the most part has no useful analytic solution, approximate solutions can be found. For example, if the bodies are far enough apart, then we can approximate a many-body system as a series of two-body systems. Each planet of our solar system can be thought of as a separate two-body system with the Sun. That gives you a series of simple elliptical orbits, like those predicted by Kepler. But those orbits eventually shift due to the interactions between the planets.

Another useful approximation applies when one of the three bodies has a very low mass compared to the other two. We can ignore the minuscule gravitational influence of the smaller body and assume that it moves within the completely solvable two-body orbits of its larger companions. We call this the restricted (or reduced) three-body problem. It works very well for tiny things like artificial satellites around the Earth. It can also be used to approximate the orbit of the Moon relative to the Earth and Sun, or of the Earth relative to the Sun and Jupiter.

These approximate solutions are useful, but they ultimately fail to predict perfectly. Even the smallest planetary bodies have some mass, and the solar system as a whole has many massive constituents. The Sun, Jupiter, and Saturn alone automatically form a three-body system with no analytic solution, before we even add in the Earth.

But the absence of an analytic solution doesn't mean the absence of any solution. To get an accurate prediction for most three-body systems, you need to break the motion of the system into many pieces and solve them one at a time. A sufficiently small section of any gravitational trajectory can be approximated with an exact, analytic solution – perhaps a straight line, or a segment of a two-body path around the center of mass of the entire system, assuming everything else stays fixed. If you break up the problem into small enough path segments or time-steps, then the small motions of all bodies in the system can be updated step by step. This method of solving differential equations one step at a time is called numerical integration, and when applied to the motion of many bodies it's an N-body simulation. With modern computers, N-body simulations can accurately predict the motion of the planets into the distant future, or solve for millions of objects to simulate the formation and evolution of entire galaxies. But these numerical solutions didn't begin with the invention of artificial computers. Before that, these calculations had to be done by hand – in fact, by many hands.

The limitations of approximate solutions, the laboriousness of pre-computer numerical integration, and the legendary status of the three-body problem inspired generations of physicists and mathematicians to continue to seek exact, analytic solutions. And some succeeded – albeit in very specialized cases.
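The step-by-step updating described above can be sketched in a few lines. This is a toy illustration, not production N-body code: a kick-drift-kick leapfrog integrator in assumed nondimensional units (G = 1), applied to two unit masses on a circular orbit plus a negligible distant third body:

```python
import numpy as np

G = 1.0  # gravitational constant in simulation units (an assumed choice)

def accelerations(pos, masses):
    """Newtonian gravitational acceleration on each body from all the others."""
    acc = np.zeros_like(pos)
    n = len(masses)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * masses[j] * d / np.linalg.norm(d) ** 3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Advance the system with the kick-drift-kick leapfrog scheme."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc              # half kick
        pos += dt * vel                    # drift
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc              # half kick
    return pos, vel

# Assumed test case: two unit masses in a circular orbit of separation 1
# (circular speed v = sqrt(0.5)), plus a near-massless third body at r = 10.
masses = np.array([1.0, 1.0, 1e-9])
pos = np.array([[-0.5, 0.0], [0.5, 0.0], [10.0, 0.0]])
vel = np.array([[0.0, -0.7071], [0.0, 0.7071], [0.0, 0.447]])
pos, vel = leapfrog(pos, vel, masses, dt=1e-3, steps=1000)
```

Because leapfrog is symplectic, the binary's separation stays close to 1 over many steps instead of drifting the way a naive Euler update would.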

The first was Leonhard Euler, who found a family of solutions for three bodies orbiting around a mutual center of mass, where all bodies remain in a straight line – essentially in permanent eclipse. Joseph-Louis Lagrange found solutions in which the three bodies form an equilateral triangle. In fact, for any two bodies orbiting each other, the Euler and Lagrange solutions define five additional orbits for a third body that can be described with simple equations. These are the only perfectly analytic solutions to the three-body problem that exist. Place a low-mass object on any of these five orbits and it will stay there indefinitely – in the Earth-Sun case, tracking the Earth's orbit around the Sun. We now call these the Lagrange points, and they're useful places to park our spacecraft.

There was a bit of a gap after Euler and Lagrange, because to discover new specialized three-body solutions we had to search the vast space of possible orbits using computers. The key was to find three-body systems that had periodic motion – they evolve, sometimes in complex ways, back to their starting configuration.
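As a rough illustration of the Euler-type collinear points, the balance of gravity and centrifugal force along the axis of the circular restricted problem can be solved numerically. The nondimensional convention (separation 1, total mass 1) and the Earth-Sun mass parameter below are assumed standard values, not taken from the video:

```python
def collinear_balance(x, mu):
    """Net axial acceleration in the co-rotating frame of the circular
    restricted three-body problem. Primaries of mass 1-mu and mu sit at
    x = -mu and x = 1-mu; a root of this function is an Euler point."""
    r1 = x + mu            # offset from the large primary
    r2 = x - (1 - mu)      # offset from the small primary
    return x - (1 - mu) * r1 / abs(r1) ** 3 - mu * r2 / abs(r2) ** 3

def bisect(f, a, b):
    """Simple bisection root finder; assumes f(a) and f(b) differ in sign."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

mu = 3.003e-6  # assumed approximate Earth-Sun mass parameter m2/(m1+m2)
# L1 lies between the Earth (x = 1 - mu) and the Sun; bracket just inside.
x_l1 = bisect(lambda x: collinear_balance(x, mu), 0.95, 1 - mu - 1e-9)
print(x_l1)  # about 0.990: roughly 1% of the Earth-Sun distance sunward of Earth
```

The root lands near the Hill-radius estimate (mu/3)^(1/3) ≈ 0.01 inside the Earth's orbit, which is where spacecraft like SOHO are parked.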

In the 70s, Michel Hénon and Roger Broucke found a family of solutions involving two masses bouncing back and forth in the center of a third body's orbit. In the 90s, Cris Moore discovered a stable figure-8 orbit of three equal masses. The numerical discovery of the figure-8 solution was proved mathematically by Alain Chenciner and Richard Montgomery, and insights gained from that proof led to a boom in the discovery of new periodic three-body orbits.

Some of these periodic solutions are incredibly complex, but Montgomery came up with a fascinating way to depict them in the absence of simple equations. It's called the shape sphere, and it works like this. Imagine the bodies in a three-body system are the vertices of a triangle, whose center is the center of mass of the system. The evolution of the system can be expressed through the changing shape of that triangle. We throw away certain information – the size of the triangle and its orientation – keeping only information about the relative lengths of the edges, or equivalently the angles between the edges.

Now we map that information onto the surface of a sphere. We only need the 2-D surface because if we know two internal angles of the triangle we also know the third. The equator of the sphere represents fully collapsed triangles – the three bodies in a straight line, as in Euler's solutions. The poles are equilateral triangles – Lagrange's solutions. All other orbits move on this sphere as the triangle defined by the bodies evolves. It turns out that periodic motion on the shape sphere appears much simpler and easier to analyze than the motion of the bodies themselves. Hundreds of stable three-body orbits are now known – although it should be noted that, besides the Euler and Lagrange solutions, none of these are likely to occur in nature, so their practical use may be limited.

Very recently, a new approach to solving the three-body problem has appeared, one that transforms the chaotic nature of three-body interactions into a useful tool rather than a liability. Nicholas Stone and Nathan Leigh published it in Nature in December 2019. The thing about chaotic motion is that the state of the system seems to get randomly shuffled over time. The motion is actually perfectly deterministic – defined between one instant and the next – but it can be thought of as approximately random over long intervals. Such a pseudo-random system will, over time, explore all possible configurations consistent with some basic properties like the energy and angular momentum of the system. The system explores what we call a phase space – a space of possible arrangements of position and velocity. For a pseudo-random system, statistical mechanics lets us calculate the probability of the system being in any part of that phase space at any one time.

How is this useful? Well, almost all three-body systems eventually eject one of the bodies, leaving a nice, stable two-body system – a binary pair. Stone and Leigh found that they could identify the regions of phase space where these ejections were likely, and by doing so they could map the range of likely orbital properties for the two objects left behind after the ejection. This looks to be incredibly useful for understanding the evolution of dense regions of the universe, where three-body systems of stars or black holes may form and then disintegrate very frequently.

One last thing about the three-body problem.
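For equal masses, the triangle-to-sphere map can be sketched with Jacobi coordinates. The particular construction below is one standard choice, assumed here for illustration rather than taken from the video: size and orientation cancel out, equilateral triangles land on the poles, and collinear configurations land on the equator:

```python
import math

def shape_sphere_point(r1, r2, r3):
    """Map three planar positions (complex numbers) to a unit vector on the
    shape sphere, assuming equal masses.

    Jacobi coordinates z1, z2 encode the triangle; dividing by the overall
    norm discards size, and the combination (u, v, w) discards orientation.
    Poles (|w| = 1): equilateral triangles. Equator (w = 0): collinear.
    """
    z1 = (r2 - r1) / math.sqrt(2)
    z2 = (2 * r3 - r1 - r2) / math.sqrt(6)
    cross = z1 * z2.conjugate()
    u = abs(z1) ** 2 - abs(z2) ** 2
    v = 2 * cross.real
    w = 2 * cross.imag
    norm = math.hypot(u, math.hypot(v, w))
    return (u / norm, v / norm, w / norm)

# Equilateral triangle about the origin -> a pole of the sphere.
tri = [complex(0.0, 1.0),
       complex(-math.sqrt(3) / 2, -0.5),
       complex(math.sqrt(3) / 2, -0.5)]
print(shape_sphere_point(*tri))

# Collinear configuration -> the equator (third component zero).
print(shape_sphere_point(-1 + 0j, 1 + 0j, 0j))
```

A periodic orbit like the figure-8 traces a closed curve on this sphere, which is what makes such solutions so much easier to recognize there than in ordinary space.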

Henri Poincaré thought the general case could not be solved. In fact, he was wrong. In 1906, not so long after Poincaré's stern proclamation, the Finnish mathematician Karl Sundman found a solution to the general three-body problem. It was a converging infinite series that added together an endless chain of terms to solve the orbital calculation. Because the series converged – successive terms diminished to effectively nothing – in principle the equation could be written out on paper. However, the convergence of Sundman's series is so slow that it would take around 10^8,000,000 terms to converge for a typical calculation in celestial mechanics.

That is a lot of sheets of paper. So there you have it – the three-body problem is perfectly solved, but only for useless cases, or for seemingly useless and bizarre orbits. And it can be approximately solved for all useful and practical purposes, with enough precision to work just fine. Good to know next time you're in a chaotic orbit, trying to astronavigate around two other gravitating denizens of spacetime.

A few weeks ago, I invited Matt to come to Fermilab to make an awesome crossover video on the subject of neutrinos. He accepted and the rest, as they say, is history. There were some great questions in the comments, and Matt asked me to answer a few of them. So here it goes.

Sanskar Jain asks what it means for a neutrino to go with a particular lepton, meaning electron, muon, or tau. It turns out that over short distances, before neutrinos have a chance to oscillate, they remember how they were made. Neutrinos made in nuclear reactors are made with electrons, and if they interact again, they make only electrons. In particle beams, neutrinos are made with muons and can subsequently only make muons. In fact, this observation in 1962 led to the discovery that there were different kinds of neutrinos and, subsequently, to the 1988 Nobel Prize in physics.

Gede Ge asks why we use argon in our neutrino detectors, and that's a great question. The answer is that we don't always. Neutrino detectors have been made of water, metal, dry-cleaning fluid, even baby oil doped with a chemical called scintillator. We use argon because it ionizes very easily. That means when a neutrino…
