Tuesday, October 30, 2012

Different ways to interpret Feynman diagrams

Feynman diagrams are the funny pictures that Richard Feynman drew on his van:



You see that a Feynman diagram is composed of several lines that meet at vertices (at the nodes of the graph). Some of the lines are straight, some of them are wiggly: the shape of each line distinguishes the particle type. For example, straight lines are often reserved for fermions while wiggly lines are reserved for photons or other gauge bosons.




Some lines (I mean line intervals) are external – one of their two endpoints is free, unattached to anything. These are the external physical particles that must obey the mass on-shell condition \(p_\mu p^\mu = m^2\) and that specify the problem we're solving (i.e. what's the probability that some particular collection of particles with some momenta and polarizations will scatter and produce another or the same collection of particles with other momenta and polarizations). Other lines (I mean line intervals) are internal and they are unconstrained. You must sum over all possible ways to connect the predetermined external lines by allowed vertices and allowed internal lines. If you associate a momentum with these internal lines, also known as "propagators", it doesn't have to obey the mass on-shell condition. We say that the particle is "virtual". One explanation of why its \(E\) may differ from \(\sqrt{p^2+m^2}\) is that the virtual particle only exists temporarily and the energy can't be accurately measured or imposed because of the inequality \(\Delta E\cdot\Delta t\geq\hbar/2\).
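To have a standard example in mind (my addition, not something the rest of the text depends on): in the simplest diagram for electron-electron scattering, the exchanged virtual photon carries the 4-momentum given by the difference of an external electron's incoming and outgoing momenta, and this 4-momentum is generically off-shell:\[

q^\mu = p_1^\mu - p_1'^\mu,\qquad q_\mu q^\mu = t \neq 0 = m_\gamma^2.

\]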

Because the virtual particles are not external, they define neither the initial state nor the final state. Still, they "temporarily appear" during the process, e.g. scattering, and they influence what's happening. In fact, they're needed for almost every interaction. Also, the Feynman diagrams have vertices at which several lines meet, where they terminate. The vertices describe the real events in the spacetime in which the particles merge, split, or otherwise interact. However, we're doing quantum mechanics so none of these points in spacetime are uniquely or objectively determined. In fact, all the choices contribute to the calculable results – the total probability amplitudes.

A Feynman diagram is a compact picture that may be drawn by most kids in the kindergarten. However, each Feynman diagram – assuming we know the context and conventions – may also be uniquely translated to an integral, a contribution to the complex "probability amplitude" whose total value is used to calculate the probability of any process in quantum field theory. The laws used to translate the kindergarten picture to a particular integral or a related mathematical expression are known as the "Feynman rules".

How do we derive them?

I will discuss three seemingly very different methods:
  • Dyson's series, an operator-based method
  • Feynman's sum over histories i.e. configurations of fields
  • Feynman's sum over histories i.e. trajectories of first-quantized particles
Richard Feynman originally derived his Feynman diagrams by the second method. As Dyson's name in the description of the first method indicates, Freeman Dyson rederived the Feynman rules for the Feynman diagrams using that method – and it was an important moment from a marketing viewpoint because this is how Freeman Dyson made Feynman diagrams extremely popular and essentially omnipresent.

The third method was added for the sake of conceptual completeness and it is the least rigorous one. However, it still gives you another way to think about the origin of Feynman diagrams – a way that is perhaps generalized in the "most straightforward way" if you try to construct Feynman diagrams for perturbative string theory.

It's important to mention that Feynman discovered many things and methods, of course, but we shouldn't confuse them. The Feynman diagrams are the pictures on the van, tools to calculate scattering amplitudes and Green's functions. But he also invented the Feynman path integral ("sum over histories") approach to any quantum mechanical theory. It's not quite the same thing as the Feynman diagrams – it applies to any quantum theory, not just quantum field theory. However, as I have already said, he used the "sum over histories" of a quantum field theory to derive the Feynman diagrams for the first time.

Two other, conceptually differently looking ways to derive the Feynman diagrams were found later. The third method uses the "sum over histories" but applied to a "differently formulated system" than Feynman originally chose; the first method due to Dyson doesn't use the "sum over histories" at all.

Quadratic terms in the action, higher-order terms in the action

But all three strategies to derive the Feynman rules share certain technical principles which are "independent of the formalism":
  • The lines, both the propagators and the external lines, are associated with individual fields or particle species and with the bilinear or quadratic terms they contribute to the action (and the Lagrangian or the Hamiltonian).
  • The vertices are associated with cubic, quartic, or other higher-order terms in the action (and the Lagrangian or the Hamiltonian), assuming that it is written in a polynomial form.
Let's assume we have an action and the Lagrangian that depends on the fields \(\phi_i\) in a polynomial way:\[

\eq{
\LL &= a_0 + \sum_i a_{1,i} \phi_i + \frac{1}{2!} \sum_{i,j} a_{2,ij} \phi_i\phi_j +\\
&+ \frac{1}{3!}\sum_{i,j,k} a_{3,ijk} \phi_i\phi_j\phi_k+\dots
}

\] which continues to higher orders, if needed, and which also contains various similar terms with the (first or higher-order) spacetime derivatives \(\partial_\mu\) of the fields \(\phi_i\) contracted in various ways. We don't consider the spacetime derivatives as something that affects the order in \(\phi\) so \(\partial_\mu \phi\partial^\mu \phi\) is still a second-order term in \(\phi\). The number of fields \(\phi_i\) – the order in \(\phi\) – that appear in the cubic or higher-order term will determine how many lines are attached to the corresponding vertex of the Feynman diagram.
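To have a concrete example in mind, consider a single real scalar field with a quartic self-interaction; matching it to the general expansion above is straightforward:\[

\eq{
\LL &= \frac 12 \partial_\mu\phi\,\partial^\mu\phi - \frac{m^2}{2}\phi^2 - \frac{\lambda}{4!}\phi^4,\\
a_{2} &= -m^2 \text{ (plus the derivative term)},\qquad a_4 = -\lambda.
}

\] The quartic term contains four copies of \(\phi\), so it will produce a vertex with four attached lines.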

The individual coefficients \(a_{n,i}\) etc. are parameters or "coupling constants" of a sort. How do we treat them?

Well, the first term, the universal constant \(a_0\), is some sort of vacuum energy density. As long as we consider dynamics without gravity, it won't affect anything that may be observed. For example, the classical (or Heisenberg) equations of motion for the operators are unaffected because the derivative of a constant such as \(a_0\) with respect to any degree of freedom vanishes. We know that even in Newtonian physics, the overall additive shift to energy is a matter of conventions. The potential energy is \(mgh\) where \(h\) is the height but you may interpret it as the height above your table or above the sea level or above any other level and Newton's equations still work.

If we include gravity, the term \(a_0\) acts like a cosmological constant and it curves the spacetime. Fine. We will ignore gravity here so we will ignore \(a_0\), too.

The next terms are linear, proportional to \(a_{1,i}\). They are multiplied by one copy of a quantum field. For the Lorentz invariance to hold, it had better be a scalar field and if it is not, it must be a bosonic field and the vector indices must be contracted with those of some derivatives, e.g. as in \(\partial_\mu A^\mu\).

What do we do with the linear terms?

Well, here we can't say that they don't matter. They do depend on the fields and they do matter. But we will still erase them, for a different reason: they matter too much. If the potential energy contains a term proportional to \(\phi\) near \(\phi=0\), it means that \(\phi=0\) isn't a stationary point. The value of \(\phi\) will try to "roll down" in one of the directions to minimize the potential energy. It will either do so indefinitely, in which case the Universe is a catastrophically unstable hell, or it will ultimately reach a local minimum of the potential energy. In the latter, peaceful case, you may expand around \(\phi=\phi_{\rm min}\), i.e. around the new minimum, and if you do so, the linear terms will be absent.
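A one-line example of this shift: if the potential energy is \(V(\phi)=g\phi+\frac{m^2}{2}\phi^2\), the minimum sits at \(\phi_{\rm min}=-g/m^2\) and completing the square shows that the expansion around it has no linear term:\[

V = \frac{m^2}{2}\zav{\phi+\frac{g}{m^2}}^2 - \frac{g^2}{2m^2}.

\]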

So if we perform these basic steps, we see that without a loss of generality, we may assume that the Lagrangian only begins with the bilinear or quadratic terms. The following ones are cubic, and so on.

(We could start with a quantum field theory that has nontrivial linear terms, e.g. in the scalar field, anyway. In that case, the instability of the "vacuum" we assumed would manifest itself by non-vanishing "one-point functions" for the relevant scalar field(s). The Feynman diagrams for these one-point functions ("scattering of a 1-particle state to a 0-particle state or vice versa") are known as "tadpoles" – a tadpole has a loop (the head) and one external leg – because a journal editor decided that Sidney Coleman's alternative term for these diagrams, the "spermion", was even more problematic than a "tadpole".)

Bilinear terms and propagators

The method of Feynman diagrams typically assumes that we are expanding around a "free field theory". A free field theory is one that isn't interacting. What does that mean mathematically? It means that its Lagrangian is purely bilinear or quadratic. If we want to extract the "relevant" bilinear Lagrangian out of a theory that has many higher-order terms as well, we simply erase the higher-order terms.

Why does a quadratic Lagrangian define a "free theory"? It's because by taking the variation, it implies equations of motion for the fields that are linear. And linear equations obey the superposition principle: if \(\phi_A(\vec x,t)\) and \(\phi_B(\vec x,t)\) are solutions to the equations of motion, so is \(\phi_A+\phi_B\). If \(\phi_A\) describes a wave packet moving in one direction and \(\phi_B\) describes a wave packet moving in another direction, they may intersect or overlap but the wave packets may be simply added which means that they pretend that they don't see one another: they just penetrate through their friend. This is the reason why they don't interact. Linear equations describe waves that just freely propagate and don't care about anyone else. Linear equations are derived from quadratic or bilinear actions. That's why quadratic or bilinear actions define "free field theories".

If we appropriately integrate by parts, we may bring the bilinear terms to the form\[

\LL_{\rm free}=\frac{1}{2}\sum_{ij} \phi_i P_{ij} \phi_j

\] where \(P_{ij}\) is some operator, for example \((\partial_\mu\partial^\mu+m^2)\delta_{ij}\). The factor \(1/2\) is a convention that is natural because if we differentiate the expression above with respect to a \(\phi_i\), we produce two identical terms due to the Leibniz rule for the derivative of the product. (That's not the case if the first \(\phi_i\) were \(\phi^*_i\) which is needed when it's complex: for complex fields, including the Dirac fields etc., the factor of \(1/2\) is naturally dropped.)
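For the single real Klein-Gordon field, the integration by parts looks like this (the total derivative term doesn't affect the action):\[

\frac 12 \partial_\mu\phi\,\partial^\mu\phi - \frac{m^2}{2}\phi^2 = -\frac 12 \phi\zav{\partial_\mu\partial^\mu+m^2}\phi + \partial_\mu\zav{\frac 12 \phi\,\partial^\mu\phi}

\] so \(P\) is the Klein-Gordon operator quoted above, up to the overall sign convention.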

So the classical equations of motion derived for those fields look like this:\[

\sum_j P_{ij} \phi_j = 0.

\] You should imagine the Klein-Gordon equation as an example of such an equation.

Some operator, e.g. the box operator, acts on the fields and gives you zero. These are linear equations. You may often explicitly write down solutions such as plane waves, \(\phi_i = \exp(ip\cdot x)\), and all their linear superpositions are solutions as well. The coefficients of these plane waves are called creation and annihilation operators etc. You may derive what spectrum of free particles may be produced by a free field theory.

This may be done in the operator approach – the free fields are infinite-dimensional harmonic oscillators defined by their raising and lowering operators – as well as by the "sum over histories" approach – the harmonic oscillator may be solved in this way as well. The "sum over histories" approach encourages you to choose the \(\ket x\) or \(\ket{ \{\phi_i(\vec x,t)\} }\) continuous (or functionally metacontinuous) basis of the Hilbert space. By the functionally metacontinuous basis, I mean a basis that gives you a basis vector for each function or \(n\)-tuple of functions \( \{\phi_i(\vec x,t=t_0) \} \) even though these functions form a set that is not only continuous but actually infinite-dimensional.

But I want to focus on the derivation of the Feynman rules including the vertices. We don't want to spend hours with a free field theory. When we construct the Feynman rules, the free part of the action determines the particles that may be created and annihilated and that define the initial and final Hilbert space as a Fock space; and it determines the propagators.

The propagators will be determined by "simply" inverting the operator \(P_{ij}\) I used to define the bilinear action above. This inverted \(P^{-1}_{ij}\) plays the role of the propagator for a simple reason: we ultimately need to solve the linear equation of motion with some function on the right hand side. Each function may be written as a combination of continuously infinitely many (i.e. as an integral over) delta-functions so we really need to solve the equation\[

\sum_j P_{ij} \phi_j = \delta^{(4)} (x-x') \cdot k_i

\] for some coefficients \(k_i\) – which may be decomposed into Kronecker deltas \(\delta_{im}\) for individual values of \(m\). The value of \(x'\) – the spacetime event where the delta-function is localized – doesn't change anything profound about the equation due to the translational symmetry. A funny thing is that the equation above may be formally solved by multiplying it with the inverse operator:\[

\phi_i = \sum_j P^{-1}_{ij} \delta^{(4)}(x-x')\cdot k_j.

\] That's why the inverse of the operator \(P_{ij}\) – which is nonlocal (the inverse of differentiation is integration and we are generalizing this fact) – appears in the Feynman rules.
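In the momentum representation, the inversion becomes trivial because the Fourier transform converts the differential operator into the multiplication by a number. For the Klein-Gordon example (I am quoting the standard result; the \(i\epsilon\) prescription encoding Feynman's boundary conditions deserves its own discussion):\[

\zav{\partial_\mu\partial^\mu+m^2}\,e^{-ip\cdot x} = \zav{-p_\mu p^\mu+m^2}\,e^{-ip\cdot x}\quad\Rightarrow\quad \tilde P^{-1}(p) \propto \frac{1}{p_\mu p^\mu - m^2 + i\epsilon}.

\]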

So far I am presenting features of the results "informally"; we are not strictly deriving any Feynman rules and we haven't chosen one of the three methods yet.

Higher-order terms

I will postpone this point but the cubic and higher-order terms in the Lagrangian will produce the vertices of the Feynman diagrams. In the position representation, the locations of the vertices must be integrated over the whole spacetime.

In the momentum representation, the vertices are interactions that appear "everywhere" and we must instead impose the 4-momentum conservation at each vertex. In the latter approach, some momenta will continue to be undetermined even if the external particles' momenta are given. The more independent "loops" the Feynman diagram has, the more independent momenta running through the propagators must be specified. All the allowed values of the loop momenta must be integrated over.

The momentum and position approaches are related by the Fourier transform. Note that the Fourier transform of a product is a "convolution" and this is the sort of mathematical fact that translates the rules from the momentum representation to the position representation and vice versa.
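If you want to see this product-convolution duality without any field theory, a few lines of Python are enough; this is my toy check on a periodic lattice with arbitrarily chosen arrays, nothing from the derivation itself:

import numpy as np

# Two arbitrary periodic "fields" sampled on N lattice points.
N = 64
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# The Fourier transform of the pointwise product...
lhs = np.fft.fft(f * g)

# ...equals the cyclic convolution of the Fourier transforms, up to 1/N.
F, G = np.fft.fft(f), np.fft.fft(g)
conv = np.array([np.sum(F * np.roll(G[::-1], k + 1)) for k in range(N)]) / N

print(np.allclose(lhs, conv))  # prints True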

Starting with the methods: Dyson series

We have already leaked what the final Feynman rules should look like so let us try to derive them. Dyson's method coincides with the tools in quantum mechanics that most courses teach you at the beginning, so it's a beginner-friendly method (although this statement depends on our culture and on the perhaps suboptimal ways in which we teach quantum mechanics and quantum field theory). But it's actually not the first method by which the Feynman rules were derived; Feynman originally used the "sum over histories" applied to fields.

Dyson's method uses several useful technicalities, namely the Dirac interaction picture; time ordering; and a modified Taylor expansion for the exponential.

The Dirac interaction picture is a clever compromise between Schrödinger's picture in which the operators are independent of time and the state vector evolves according to Schrödinger's equation that depends on the Hamiltonian; and the Heisenberg picture in which the state vector is independent of time and the operators evolve according to the Heisenberg equations of motion that resemble the classical equations of motion with extra hats (which are omitted on this blog because it's a quantum mechanical blog).

In the Dirac interaction picture, we divide the Hamiltonian into the "easy", bilinear part we have discussed above and this "free part" is used for the Heisenberg-like evolution equations (the operators evolve in a simple linear way as a result); and the "hard", higher-order or interacting part of the Hamiltonian which is used as "the" Hamiltonian in a Schrödinger-like equation. So we have:\[

\eq{
H(t) &= H_0 + V(t), \\
i\hbar \pfrac{\phi_i(\vec x,t)}{t} &= [\phi_i(\vec x,t),H_0],\\
i\hbar \ddfrac{\ket{\psi(t)}}{t} &= V(t)\ket{\psi(t)}.
}

\] The operators evolve according to \(H_0\), the free part, but the wave function evolves according to \(V(t)\). Note that \(V(t)\) – and of course the whole \(H(t)\) as well – is a rather general composite operator so it also depends on time: its evolution is also determined by its commutator with \(H_0\). On the other hand, \(H_0\) itself, while an operator, is \(t\)-independent because it commutes with itself.

The operator \(H_0\) depends on the elementary fields \(\phi_i\) in a quadratic way so the commutator in the second, Heisenberg-like equation above is linear in the fields \(\phi_i\). Consequently, these equations of motion are "solvable" and the solutions may be written as some combinations of the plane waves – the usual decomposition of operators \(\phi_i(\vec x,t)\) into plane waves multiplied by coefficients that are interpreted as creation and annihilation operators.
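For a single real scalar field, the solution is the familiar expansion (the normalization factors are a matter of conventions that differ between textbooks):\[

\phi(\vec x,t) = \int \frac{\dd^3 p}{(2\pi)^3\sqrt{2E_{\vec p}}} \zav{ a_{\vec p}\,e^{-ip\cdot x} + a^\dagger_{\vec p}\,e^{+ip\cdot x} },\qquad E_{\vec p}=\sqrt{|\vec p|^2+m^2}.

\]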

The proof that this Dirac interaction picture is equivalent to either Heisenberg or Schrödinger picture is analogous to the proof of the equivalence of the latter two pictures themselves; one just considers "something in between them".

Getting the time-ordered exponential

At any rate, we may now ask how the initial state \(\ket\psi\) at \(t=-\infty\) evolves to the final state at \(t=+\infty\) via the Schrödinger-like equation that only contains the interacting (higher-order) \(V(t)\) part of the Hamiltonian. We may divide the evolution into infinitely many infinitesimal steps of duration \(\epsilon\equiv \Delta t\). The evolution in each step (the process of waiting for time \(\epsilon\)) is given by the map\[

\ket\psi \mapsto \zav{ 1+\frac{\epsilon}{i\hbar} V(t) }\ket\psi.

\] For an infinitesimal \(\epsilon\), the terms that are higher-order in \(\epsilon\) may be neglected. To exploit the formula above, we must simply perform this map infinitely many times on the initial \(\ket\psi\). Imagine that one day is very short and its length is \(\epsilon\) and use the symbol \(U_t\) for the parenthesis \(1+\epsilon V(t)/i\hbar \) above. Then the evolution over the first six days of the week will be given by\[

\ket\psi \mapsto U_{\rm Sat} U_{\rm Fri} U_{\rm Thu} U_{\rm Wed} U_{\rm Tue} U_{\rm Mon}\ket\psi.

\] Note that the Monday evolution operator acts first on the ket, so it appears on the right end of the product of evolution operators. The later the day we consider, the further to the left – further from the ket vector – it appears in the product. So the evolution from Monday to Saturday (or Sunday) is given by a product where the later operators are always placed to the left of the earlier ones. We call such products of operators "time-ordered products".

In fact, we may define a "metaoperator" of time-ordering \({\mathcal T}\) which, if it acts on things like \(V(\text{Day1}) V(\text{Day2})\), produces the product of the operators in the right order, with the later ones standing on the left. The ordering is important because operators usually refuse to commute with each other in quantum mechanics.

Now, if you study the product of the \(U_{\rm Day}\) operators above, you will realize that the product generalizes our favorite "moderate interest rates still yield the exponential growth at the end" formula for the exponential\[

\exp(X) = \lim_{N\to \infty} \zav{ 1 + \frac XN }^N

\] where \(1/N\) may be identified with \(\epsilon\). The generalization affects two features of this formula. First, the terms \(X/N\) aren't constant, i.e. independent of \(t\), but they gradually evolve with \(t\) because they depend on \(V(t)\). Second, we mustn't forget about the time ordering. Both modifications are easily incorporated. The first one is acknowledged by writing \(X\) inside \(\exp(X)\) as the integral over time; the second one is taken into account by including the "metaoperator" of time-ordering. (I call it a "metaoperator" so that it suppresses your tendency to think that it's just an operator on the Hilbert space. It's not. It's an abstract symbol that does something with genuine operators on the Hilbert space. What it does is still linear – in the operators.)

With these modifications, we see that the evolution map is simply\[

\ket\psi\mapsto {\mathcal T} \exp\zav{ \int_{-\infty}^{+\infty}\dd t\, \zav{ \frac{V(t)}{i\hbar} } } \ket\psi.

\] The time-ordered exponential is an explicit form for the evolution operator (the \(S\)-matrix) that simply evolves your Universe from minus infinity to plus infinity. In classical physics, you could rarely write such an evolution map explicitly but quantum mechanics is, in a certain sense, simpler. Linearity made it possible to "solve" the most general system by an explicit formula.

Once we have this "time-ordered exponential", we may deal with it in additional clever ways. The exponential may be Taylor-expanded, assuming that we don't forget about the time-ordering symbol in front of all the monomial terms in the Taylor expansion. The operators \(V(t)\) are polynomial in the fields and their spacetime derivatives: we allow each "elementary field" factor to either create or annihilate particles in the initial or final state (these elementary fields will become the inner end points of external lines of Feynman diagrams); or we keep the elementary fields "ready to perform internal services". In the latter case, we will need to know the correlators such as\[

\bra 0 \phi_i(\vec x,t) \phi_j(\vec x', t')\ket 0

\] which is a sort of a "response function" that may be calculated – even by the operator approaches – and which will play the role of the propagators. The remaining coefficients and tensor structures seen in \(V(t)\) will be imprinted to the Feynman rules for the vertices, the places where at least 3 lines meet.
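Written out explicitly, the Taylor expansion of the time-ordered exponential is the Dyson series whose \(n\)-th term generates all the contributions with \(n\) vertices:\[

S = \sum_{n=0}^\infty \frac{1}{n!}\zav{\frac{1}{i\hbar}}^n \int_{-\infty}^{+\infty}\dd t_1\cdots\dd t_n\,{\mathcal T}\,V(t_1)\cdots V(t_n).

\]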

I suppose you know these things or you will spend enough time with the derivation so that you understand many subtleties. My goal here isn't to go through one particular method in detail, however. My goal is to show you different ways how to look at the derivation of the Feynman diagrams. They seem conceptually or philosophically very different although the final predictions for the probability amplitudes are exactly equivalent.

Feynman's original method: "sum over histories" of fields

Feynman originally derived the Feynman rules by "summing over histories" of fields. The very point of the "sum over histories" approach to quantum mechanics is that we consider a classical system, the classical limit of the quantum system we want to describe, and consider all of its histories, including (and especially) those that violate the classical equations of motion. For each such history or configuration in the spacetime, we calculate the action \(S\), and we sum, i.e. integrate, \(\exp(iS/\hbar)\) over all these histories, perhaps with the extra condition that the initial and final configurations agree with the specified ones (those that define the problem we want to calculate).

(See Feynman's thesis: arrival of path integrals, Why path integrals agree with the uncertainty principle, and other texts about path integrals.)

We have already mentioned that we're dividing the action, Lagrangian, or Hamiltonian into the "free part" and the "interacting part". We do the same thing when we use Feynman's original method, too. To deal with the external lines, we have to describe the wave functions (or wave functionals) for the multiparticle states; this task generalizes the analogous problem with the quantum harmonic oscillator to the case of the infinite dimension and I won't discuss it in detail.

What's more important are the propagators, i.e. the internal lines, and the vertices. The propagators are again given by the inverse operator \(P_{ij}^{-1}\) constructed from the bilinear part of the Lagrangian. These "Green's functions" have the property I have informally mentioned – they solve the "wave equation" with the Dirac delta-function on the right hand side; and they are equal to the two-point correlation functions evaluated in the vacuum.

But Feynman's path integral has a new way to derive the appearance of this inverse operator as the propagator. It boils down to the Gaussian integral\[

\int \dd^n x\,\exp(-\vec x\cdot M\cdot \vec x) = \frac{\pi^{n/2}}{\sqrt{\det M}}.

\] but what is even more relevant is a modified version of this integral that has an extra linear term in the exponent aside from the bilinear piece:\[

\int \dd^n x\,\exp(-\vec x\cdot M\cdot \vec x+ \vec J\cdot \vec x) = \dots

\] This more complicated integral may be solved by "completing the square" i.e. by the substitution\[

\vec x = \vec x' + \frac{1}{2} M^{-1}\cdot \vec J.

\] With this substitution, after we expand everything, the \(\vec x'\cdot \vec J\) "mixed terms" get canceled. As a replacement, we produce an extra term\[

+\frac{1}{4} \vec J\cdot M^{-1} \cdot \vec J

\] in the exponent; the coefficient \(+1/4\) arises as \(-1/4+1/2\). And because \(M\) is the matrix that is generalized by our operator \(P_{ij}\) discussed previously, we see how the inverse \(P^{-1}_{ij}\) appears sandwiched in between two vectors \(\vec J\).
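In one dimension, the completed square gives \(\int\dd x\,\exp(-Mx^2+Jx)=\sqrt{\pi/M}\,\exp(J^2/4M)\) and it takes a few lines of Python to verify it numerically (my toy check, nothing deeper):

import numpy as np
from scipy.integrate import quad

M, J = 2.0, 0.7  # an arbitrary positive M and an arbitrary source J
numeric, _ = quad(lambda x: np.exp(-M * x**2 + J * x), -np.inf, np.inf)
exact = np.sqrt(np.pi / M) * np.exp(J**2 / (4 * M))
print(numeric, exact)  # the two numbers agree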

The strategy to evaluate Feynman's path integral is to imagine that this whole integral is a "perturbation" of a Gaussian integral we know how to calculate. We work with all the \(V(\vec x,t)\) interaction terms as if they were general perturbations similar to the \(\vec J\) vector above, and in this way, we reproduce all the vertices and all the propagators again.

Note that I have been even more sketchy here because this text mainly serves as a reminder that there exists a "philosophically different attitude" to the Feynman diagrams that one shouldn't overlook or dismiss just because he got used to other techniques and a different philosophy. If you want to calculate things, it's good to learn one method and ignore most of the others so that you're not distracted. But once you start to think about philosophy and generalizations, you shouldn't allow your – often random and idiosyncratic – habits to make you narrow-minded and to encourage you to overlook that there are completely different ways how to think about the same physics. These different ways to think about physics often lead to different kinds of "straightforward generalizations" that might look very unnatural or "difficult to invent" in other approaches.

In science, one must disentangle insights that are established – directly or indirectly supported by the experimental data – from arbitrary philosophical fads that you may be promoting just because you got used to them or for other not-quite-serious reasons. Of course, this broader point is the actual important punch line I am trying to convey by looking at a particular technical problem, namely methods to derive the Feynman rules.

Feynman's other method: "sum over histories" of merging and splitting particles

Once I have unmasked my real agenda, I will be even more sketchy when it comes to the third philosophical paradigm. You may "derive" the Feynman rules, at least qualitatively, from the "first-quantized approach" emulating non-relativistic quantum mechanics.

Again, in this derivation, we are "summing over histories". But they're not "histories of the fields \(\phi_i(\vec x,t)\)" as in the approach from the previous section – the original method Feynman exploited to derive the Feynman rules. Instead, we may sum over histories of ordinary mechanics, i.e. over trajectories \(\vec x(t)\) of the different particles in the process.

In this approach, emulating non-relativistic quantum mechanics, the propagators \(D(x,y)\) arise as the probability amplitude for a particle to get from the point \(x\) of the spacetime to the point \(y\). It just happens that the form of the propagators – which have been interpreted as matrix elements of the "inverse wave operator" \(P^{-1}_{ij}\); and as two-point functions evaluated in the vacuum – may also be interpreted as the amplitude for a particle getting from one point to another.

Well, this works in some approximations and one needs to deal with antiparticles properly in order to restore the Lorentz invariance and causality (note that the sum over particles' trajectories still deals with trajectories that are superluminal almost everywhere, but the final result still obeys the restrictions and symmetries of relativity!) and it's tough. In the end, the "derivation" remains a heuristic one.

But morally speaking, it works. In this interpretation, a Feynman diagram encodes some histories of point-like particles that propagate in the spacetime and that merge or split at the vertices which correspond to spacetime points at which the total number of particles in the Universe may change (this step would be unusual in non-relativistic quantum mechanics, of course). The path integral over all the paths of the internal particles gives us the propagators; the vertices where the particles split or join must be accompanied by the right prefactors, index contractions, and other algebraic structures. But in some sense, it works.

It's this interpretation of the Feynman diagrams that has the most straightforward generalization in string theory. In string theory, we may imagine cylindrical or strip-like world sheets – histories of a single closed string or a single open string propagating in time – and they generalize the world lines. The path integral over all histories like that, between the initial closed/open string state and the final one, gives us a generalized Green's function for a single string.

And in string theory, we simply allow the topology of the world sheet to be nontrivial – to resemble the pants diagram or the genus \(h\) surface with additional boundaries or crosscaps – and it's enough (as well as the only consistent way) to introduce interactions. While the interactions of point-like particles are given by vertices, "singular places" of the Feynman diagrams, and this singular character of the vertices is ultimately responsible for all the short-distance problems in quantum field theories, the world sheets for strings have no singular places at all. They're smooth manifolds – each open set is diffeomorphic to a subset of \(\RR^2\), especially if you work in the Euclidean signature – but if you look at a manifold globally (and only if you do so), you may determine its topology and say whether some interactions have taken place.

So this third method of interpreting the Feynman diagrams – as the sum over histories of point-like particles in the spacetime that are allowed to split and join at the vertices – which was the "most heuristic one" and the "method that was least connected to exact formulae" encoding the mathematical expressions behind the Feynman diagrams actually becomes the most straightforward, the most rigorous way to derive the analogous amplitudes in string theory.



Take the world from another point of view, interview with RPF, 36 minutes, PBS NOVA 1973. At 0:40, he also mentions that brushing your teeth is a superstition. Given my recent appreciation of the yeasts that are unaffected by the regular toothpastes, I started to think RPF had a point about this issue, too.

If you got stuck with a particular "philosophy" how to derive the Feynman rules, e.g. with Dyson's series, it could be much harder – but not impossible – to derive the mathematical expressions for multiloop string diagrams. Many methods due to Richard Feynman have been mentioned in this text but once again, the most far-reaching philosophical lesson is one that may be attributed to Richard Feynman as well:
Perhaps Feynman's most unique and towering ability was his compulsive need to do things from scratch, work out everything from first principles, understand it inside out, backwards and forwards and from as many different angles as possible.
I took the sentence from a review of a book about Feynman. It's great if you decompose things to the smallest possible blocks, rediscover them from scratch, and try to look at the pieces out of which the theoretical structure is composed from as many angles as you can. New perspectives may give you new insights, new perceptions of a deeper understanding, and new opportunities to find new laws and generalize them in ways that others couldn't think of.

And that's the memo.

P.S.: BBC and Discovery's Science Channel plan to shoot a Feynman-centered historical drama about the Challenger tragedy.



Prayer for Marta ["Let peace remain with this land. Let anger, envy, jealousy, fear, and conflicts subside, let them subside. Now, when the lost rule over your own affairs returns to you, the people, it will return to you..."], an iconic, politically flavored 1968 song; by singing it again during the 1989 Velvet Revolution, the singer helped restart the freedom that had been lost in 1968.

P.P.S.: Ms Marta Kubišová, a top Czech pop singer in the late 1960s (Youtube videos), refused to co-operate with the pro-occupation establishment after the 1968 Soviet invasion which is why she became a harassed clerk in a vegetable shop rather than a pillar of the totalitarian entertainment similar to her ex-friend Ms Helena Vondráčková.

She just received Napoleon Bonaparte's Legion of Honor award, a well deserved one. Congratulations!
