We all know that GPT is very competent, very knowledgeable. So let me go ahead and ask a technical question. It’s not trivial, but not terribly complicated either; any student studying general relativity should be able to produce the correct derivation. Here is my question:
Derive the geodesic equations for \(d\tau^2=Bdt^2-Adr^2-r^2d\theta^2-r^2\sin^2\theta d\phi^2\), where \(A\) and \(B\) are functions of \(r\) only.
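For reference, the standard route, which any such student would follow, is to extremize proper time, i.e., to apply the Euler–Lagrange equations to the Lagrangian read off from the line element:
$$\mathcal{L}=B\,\dot{t}^2-A\,\dot{r}^2-r^2\dot{\theta}^2-r^2\sin^2\theta\,\dot{\phi}^2,\qquad\frac{d}{d\tau}\frac{\partial\mathcal{L}}{\partial\dot{x}^\alpha}-\frac{\partial\mathcal{L}}{\partial x^\alpha}=0,$$
where a dot denotes differentiation with respect to \(\tau\); equivalently, one can compute the Christoffel symbols directly from the metric and substitute them into the geodesic equation.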
Let’s see how GPT-3.5 responds:
The geodesic equations for the given metric are given by:
$$\frac{d^2x^\alpha}{d\tau^2}+\Gamma^\alpha_{\mu\nu}\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}=0$$
where \(\tau\) is the proper time, \(x^\alpha\) are the coordinates \((t,r,\theta,\phi)\), and \(\Gamma^\alpha_{\mu\nu}\) are the Christoffel symbols of the second kind, given by:
$$\Gamma^\alpha_{\mu\nu}=\frac{1}{2}g^{\alpha\beta}(\partial_\mu g_{\beta\nu}+\partial_\nu g_{\beta\mu}-\partial_\beta g_{\mu\nu})$$
where \(g_{\alpha\beta}\) is the metric tensor.
For the given metric, the non-zero Christoffel symbols are:
\begin{align*}
\Gamma^t_{rr}&=\frac{1}{2}B'\\
\Gamma^t_{tt}&=\frac{1}{2}B\frac{A'}{B}\\
\Gamma^r_{rt}&=\frac{1}{2}\frac{A'}{B}\\
\Gamma^r_{rr}&=-\frac{1}{2}\frac{A'}{B}\\
\Gamma^\theta_{r\theta}&=\frac{1}{r}\\
\Gamma^\theta_{\theta\phi}&=-\sin\theta\cos\theta\\
\Gamma^\phi_{r\phi}&=\frac{1}{r}\\
\Gamma^\phi_{\theta\phi}&=\cot\theta
\end{align*}
where a prime denotes differentiation with respect to \(r\).
Using these, we can write down the geodesic equations explicitly:
\begin{align*}
\frac{d^2t}{d\tau^2}+\frac{B'}{2B}\left(\frac{dt}{d\tau}\right)^2+\frac{A'}{2B}\left(\frac{dr}{d\tau}\right)\left(\frac{dt}{d\tau}\right)&=0\\
\frac{d^2r}{d\tau^2}-\frac{A'}{2B}\left(\frac{dt}{d\tau}\right)^2+\frac{A'}{2B}\left(\frac{dr}{d\tau}\right)^2+r\left(\left(\frac{d\theta}{d\tau}\right)^2+\sin^2\theta\left(\frac{d\phi}{d\tau}\right)^2\right)&=0\\
\frac{d^2\theta}{d\tau^2}+2\frac{dr}{d\tau}\frac{d\theta}{d\tau}-\sin\theta\cos\theta\left(\frac{d\phi}{d\tau}\right)^2&=0\\
\frac{d^2\phi}{d\tau^2}+2\frac{dr}{d\tau}\frac{d\phi}{d\tau}+2\cot\theta\frac{d\theta}{d\tau}\frac{d\phi}{d\tau}&=0
\end{align*}
Hmmm… sounds eminently knowledgeable, does it not?
Well… maybe not. How about we ask an actual computer algebra system? Below, Maxima’s ctensor package does the work, with u and v standing in for \(\theta\) and \(\phi\):
(%i1) load(ctensor)$
(%i2) derivabbrev:true$
(%i3) ct_coords:[t,r,u,v]$
(%i4) depends([A,B],[r])$
(%i5) lg:matrix([B,0,0,0],[0,-A,0,0],[0,0,-r^2,0],[0,0,0,-r^2*sin(u)^2])$
(%i6) cmetric(false)$
(%i7) christof(mcs)$
                      B
                       r
(%t7)  mcs        =  ---
          1, 1, 2    2 A

                      B
                       r
(%t8)  mcs        =  ---
          1, 2, 1    2 B

                      A
                       r
(%t9)  mcs        =  ---
          2, 2, 2    2 A

                      1
(%t10)  mcs        =  -
           2, 3, 3    r

                      1
(%t11)  mcs        =  -
           2, 4, 4    r

                       r
(%t12)  mcs        = - -
           3, 3, 2     A

                     cos(u)
(%t13)  mcs        = ------
           3, 4, 4   sin(u)

                            2
                       r sin (u)
(%t14)  mcs        = - ---------
           4, 4, 2         A

(%t15)  mcs        = - cos(u) sin(u)
           4, 4, 3
(%i16) geod:[0,0,0,0]$
(%i17) cgeodesic(true)$
                B t    + B  r  t
                   s s    r  s  s
(%t17)  geod  = -----------------
            1           B

                         2        2           2          2                     2
                  2 r sin (u) (v )  + 2 r (u )  - B  (t )  - 2 A r    - A  (r )
                                s           s       r    s        s s     r    s
(%t18)  geod  = - --------------------------------------------------------------
            2                                  2 A

                                      2
                  r cos(u) sin(u) (v )  - r u    - 2 r  u
                                     s       s s      s  s
(%t19)  geod  = - ----------------------------------------
            3                        r

                r sin(u) v    + 2 r cos(u) u  v  + 2 r  sin(u) v
                          s s               s  s      s         s
(%t20)  geod  = -------------------------------------------------
            4                       r sin(u)
Looks different, doesn’t it? And no, I don’t mean LaTeX vs. the fixed-pitch character representation of equations in a text terminal. Rather, the content.
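And the difference is easy to adjudicate without trusting either side blindly. The metric does not depend on \(t\), so the \(t\)-equation follows from a conservation law, \(\frac{d}{d\tau}\left(B\frac{dt}{d\tau}\right)=0\), that is,
$$\frac{d^2t}{d\tau^2}+\frac{B'}{B}\frac{dr}{d\tau}\frac{dt}{d\tau}=0,$$
which is precisely Maxima’s (%t17) divided through by \(B\). GPT’s version, with its spurious \((dt/d\tau)^2\) term and an \(A'/2B\) coefficient where \(B'/B\) belongs, fails even this elementary check.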
The thing is, what GPT produces looks plausible. It has the right idea. The equations seem to make sense. Unless you know what to expect, you are likely to accept the result as correct, since it appears correct. But GPT sucks at math. It gets easily confused. It is a text model, optimized to write equations that look right… but it has only a superficial understanding of what it produces. It is like a student who tries hard to remember and produces something that resembles the right thing, but who, lacking both a perfect memory (and keep in mind, trained neural nets are unlike any other software we are used to: they have no perfect memory!) and an in-depth understanding, fails.
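You do not have to take ctensor on faith, either. The defining formula for \(\Gamma^\alpha_{\mu\nu}\) that GPT itself quoted can be coded in a few lines of Maxima and used to spot-check individual symbols. Here is a minimal sketch (u and v again stand for \(\theta\) and \(\phi\); the helper name chr is mine):

depends([A,B],r)$
xx : [t,r,u,v]$
g  : matrix([B,0,0,0],[0,-A,0,0],[0,0,-r^2,0],[0,0,0,-r^2*sin(u)^2])$
gi : invert(g)$
/* Christoffel symbols of the second kind: chr(a,m,n) = Gamma^a_{mn} */
chr(a,m,n) := ratsimp((1/2)*sum(gi[a,b]*(diff(g[b,n],xx[m])
            + diff(g[b,m],xx[n]) - diff(g[m,n],xx[b])), b, 1, 4))$
chr(1,1,2);  /* Gamma^t_{tr}: yields B'/(2B), matching mcs[1,2,1] above */
chr(2,1,1);  /* Gamma^r_{tt}: yields B'/(2A), matching mcs[1,1,2] above */

Neither of these appears, in this form, anywhere in GPT’s list.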
I am sure this will improve over time. GPT-4 is already better at it than 3.5 (which is what produced the answer above). And future versions will likely interface with computer algebra subsystems (among other things) to augment the neural net with specific capabilities. But for now, perhaps I can be forgiven for asking GPT’s cousin, DALL-E, to draw me a cat, exasperated by the bad math GPT produces: