I Have a Dream!

March 24, 2014

 

I dream of reformulating Classical and Quantum Electrodynamics.

 

Why is it necessary?

It is necessary for a better understanding of the corresponding physics and for having better equations, since currently the equations are such that their solutions need modifications (a fact that reflects a lack of physical understanding in the construction of these equations).

 

Why has it not been done before?

Many have tried, but none prevailed. And currently it is renormalizators (practitioners) who teach the subject, not theory developers, so they do everything to convince students to accept “bare particle” physics. In Classical Electrodynamics (CED) some teach that \dddot{\mathbf{r}} (the remainder after the mass renormalization) is a good radiation reaction term [1, 2] even though it leads to “false start” solutions; some, on the contrary, teach that \dddot{\mathbf{r}} is not applicable at “small times” and one must use \dot{\mathbf{F}}_{ext} instead [3]; but up to now no mechanical equation has been found that conserves the energy-momentum exactly and in a physical manner. We content ourselves with an approximate description. The Noether theorem did not help!

 

Similarly in QED: although the equation set is different from that of CED, renormalization is still a crucial part of the calculations. In addition, soft-mode contributions (absent in the first Born approximation) are obligatory for obtaining physically meaningful results. If one is obliged to sum up some class of contributions to all orders, it indicates a bad initial approximation in the perturbation theory.

 

Many theory developers (founding fathers) were looking for better theory formulations. It turned out to be an extremely difficult problem, mainly due to prejudices implicitly involved in theoretical constructions. Paul Dirac, a rare physicist who was not thinking of fame and money at all, never gave up. His motto, that a theory must be mathematically and physically sensible [4], and that for the sake of that we must search for better Hamiltonians, better formulations, and a better description than the current one, is my motto too.

 

If you have read my blogs (this one, http://fishers-in-the-snow.blogspot.fr/ , http://vladimir-anski.livejournal.com/) and articles, you may have an idea what I mean by reformulation. If you like, my program can roughly be understood as fulfilling the counter-term subtractions exactly:

 

\mathcal{L}_{good}=\mathcal{L}+\mathcal{L}_{CT}\qquad (1)

 

and including some of this (“good”) Lagrangian terms into a new initial approximation, i.e., figuratively speaking:

 

\mathcal{L}_{good}= \left[{\mathcal{L}}_0+\mathcal{L}_{soft}\right]+\left[\mathcal{L}_{good}-{\mathcal{L}}_0-\mathcal{L}_{soft}\right]=\tilde{\mathcal{L}}_0+\tilde{\mathcal{L}}_{int}^R.\qquad (2)

 

Then the “interaction term” will be different too:

 

\tilde{\mathcal{L}}_{int}^R =\mathcal{L}_{good}-\tilde{\mathcal{L}}_0=\mathcal{L}_{good}-{\mathcal{L}}_0-\mathcal{L}_{soft},\qquad (3)

 

so that no renormalization will be needed and the soft diagram contributions will be taken into account automatically in the first Born approximation (like in [7]).

 

What do I need?

In order to pursue my research, I need funds. I believe that we can achieve a better description if we abandon some prejudices and employ physical reasoning instead of proceeding by blind analogy. I have already outlined possible directions in my articles [5-8]. But currently I am working for a private company, fulfilling subcontract studies, and it takes all my time and effort. This activity is far from my dream, though. I have to abandon it in order to concentrate on my own subject. I’ve got to break free!

Academia does not support this “reformulation approach” anymore. I can only count on private funding. If you, your friends, or friends of your friends are rich, then create a fund to support my research, run it, and we will make it possible.

I do not need a crazy amount like a Milner Prize, no! A regular salary of a theorist will suffice.

(Small donations may be sent to my PayPal account: vladimir.kalitvianski@wanadoo.fr)

 

——————————————

[1] Sidney Coleman, Classical Electron Theory from a Modern Standpoint, http://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM2820.pdf

[2] Gilbert N. Plass, Classical Electrodynamic Equations of Motion with Radiative Reaction, Rev. Mod. Phys. V. 33, 37 (1961), http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.33.37 or https://drive.google.com/file/d/0B4Db4rFq72mLcUN6bEhweTgyWkE/edit?usp=sharing

[3] V. L. Ginzburg, Theoretical Physics and Astrophysics, Pergamon Press (1979), http://www.amazon.com/Theoretical-Physics-Astrophysics-Monographs-Philosophy/dp/0080230679 , https://drive.google.com/file/d/0B4Db4rFq72mLWGhCTXVJLUU1WVk/edit?usp=sharing

[4] Jagdish Mehra (editor), The Physicist’s Conception of Nature, (1973), https://drive.google.com/file/d/0B4Db4rFq72mLWnIyM1FSOGcxaDA/edit?usp=sharing

[5] Reformulation instead of renormalization, http://arxiv.org/abs/0811.4416

[6] Atom as a “Dressed” Nucleus, http://arxiv.org/abs/0806.2635

[7] A toy model of Renormalization and Reformulation, http://arxiv.org/abs/1110.3702

[8] Unknown Physics of Short Distances, https://www.academia.edu/370847/On_Probing_Small_Distances_in_Quantum_World

On integrating out short-distance physics

September 26, 2014

 

I would like to explain how short-distance (or high-energy) physics is “integrated out” in a reasonably constructed theory. Speaking roughly and briefly, it is integrated out automatically. I propose to build QFT in a similar way.

 

Phenomena to describe

Let us consider a two-electron helium atom in the following state: one electron is in the “ground” state and the other one is in a high orbit. The total wave function of this system \Psi(\mathbf{r}_{Nucl},\mathbf{r}_{e_{1}},\mathbf{r}_{e_2},t), depending on the absolute coordinates, is conveniently presented as a product of a plane wave e^{i(\mathbf{P}_A\mathbf{R}_A- E_{P_A} t)/\hbar}, describing the atomic center of mass, and a wave function of the relative or internal collective motion of the constituents \psi_n (\mathbf{r}_1,\mathbf{r}_2)e^{-i E_n t/\hbar}, where \mathbf{R}_A= \left[ M_{Nucl}\mathbf{r}_{Nucl}+m_e(\mathbf{r}_{e_1}+\mathbf{r}_{e_2})\right ]/(M_{Nucl}+2m_e) and \mathbf{r}_a are the electron coordinates relative to the nucleus, \mathbf{r}_a = \mathbf{r}_{e_a}-\mathbf{r}_{Nucl},\; a=1,2 (see Fig. 1).

Figure 1. Coordinates in question.

Normally, this wave function is still a complicated thing and the coordinates \mathbf{r}_1 and \mathbf{r}_2 are not separated (the interacting constituents are in mixed states). What can be separated in \psi_n are the normal (independent) modes of the collective motion (or “quasi-particles”). Normally it is their properties that are observed.

However, in the case of one highly excited electron (n\gg 1), the wave function of the internal motion can, for numerical estimations, be quite accurately approximated with a product of two hydrogen-like wave functions, \psi_n (\mathbf{r}_1,\mathbf{r}_2) \approx \psi_0 (\mathbf{r}_1)\cdot \phi_n (\mathbf{r}_2), where \psi_0 (\mathbf{r}_1) is the wave function of the He^+ ion (Z_A=2) and \phi_n (\mathbf{r}_2) is the wave function of Hydrogen in a highly excited state (n\gg1,\; Z_{eff}=1).

The system is at rest as a whole and serves as a target for a fast charged projectile. I want to consider large-angle scattering, i.e., scattering from the atomic nucleus rather than from the atomic electrons. The projectile-nucleus interaction V(\mathbf{r}_{pr}-\mathbf{r}_{Nucl}) is expressed via “collective” coordinates thanks to the relationship \mathbf{r}_{Nucl}=\mathbf{R}_A-m_e(\mathbf{r}_1+\mathbf{r}_2)/M_A. I take a non-relativistic proton with v \gg v_n as a projectile and I will consider such transferred momentum values q=|\mathbf{q}| that are insufficient to excite the inner electron levels by “hitting” the nucleus. I will make these conditions precise below. Thus, for the outer electron the proton is sufficiently fast to be reasonably treated by perturbation theory in the first Born approximation, while for the inner electron the proton scattering is such that it cannot cause transitions. This two-electron system will model a target with soft and hard excitations.
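The coordinate relationship quoted above is what allows the projectile-nucleus interaction to be written via collective coordinates. A minimal numerical check (my own addition; the positions are arbitrary one-dimensional stand-ins) that \mathbf{r}_{Nucl}=\mathbf{R}_A-m_e(\mathbf{r}_1+\mathbf{r}_2)/M_A follows from the definitions of \mathbf{R}_A and \mathbf{r}_a:

```python
# Verify r_Nucl = R_A - (m_e/M_A)*(r_1 + r_2), with M_A = M_Nucl + 2*m_e.
m_e = 1.0
M_N = 7294.3                  # rough alpha-particle mass in units of m_e
M_A = M_N + 2.0 * m_e         # total atomic mass

r_N, r_e1, r_e2 = 0.3, 1.7, -4.2   # arbitrary 1-D absolute positions

R_A = (M_N * r_N + m_e * (r_e1 + r_e2)) / M_A   # atomic center of mass
r1, r2 = r_e1 - r_N, r_e2 - r_N                 # electron coords relative to nucleus

# The nucleus position is recovered exactly from the collective coordinates:
assert abs(r_N - (R_A - (m_e / M_A) * (r1 + r2))) < 1e-12
```

The same algebra goes through unchanged for three-dimensional vectors; the one-dimensional numbers are used only to keep the check short.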

Now, let us look at the Born amplitude of scattering from such a target. The general formula for the cross section is the following (all notations are from [1]):

d\sigma_{np}^{n'p'}(\mathbf{q}) = \frac{4m^2 e^4}{(\hbar q)^4} \frac{p'}{p} \cdot \left | Z_A\cdot f_n^{n'}(\mathbf{q}) - F_n^{n'}(\mathbf{q})\right |^2 d\Omega\qquad (1)

F_n^{n'}(\mathbf{q})=\int\psi_{n'}^*(\mathbf{r}_1 , \mathbf{r}_2)\psi_{n}(\mathbf{r}_1 , \mathbf{r}_2)\left (\sum_a  e^{-i\mathbf{q}\mathbf{r}_a}\right ) \exp\left (i\frac{m_e}{M_A}\mathbf{q}\sum_b \mathbf{r}_b \right )d^3 r_1 d^3 r_2 \qquad (2)

f_n^{n'}(\mathbf{q})=\int\psi_{n'}^*(\mathbf{r}_1 , \mathbf{r}_2)\psi_{n}(\mathbf{r}_1 , \mathbf{r}_2) \exp\left (i\frac{m_e}{M_A}\mathbf{q}\sum_a \mathbf{r}_a \right )d^3 r_1 d^3 r_2\qquad (3)

The usual atomic form factor (2) describes scattering from the atomic electrons, and it becomes relatively small for large scattering angles, \langle(\mathbf{q}\mathbf{r}_a)^2\rangle_n\gg1. This is because the atomic electrons are light compared to the heavy projectile and cannot cause large-angle scattering for a kinematic reason. I can consider scattering angles greater than those determined by the direct projectile-electron interactions (\theta\gg \frac{m_e}{M_{pr}}\frac{2v_0}{v}) or, even better, I may exclude the direct projectile-electron interactions altogether in order not to involve F_n^{n'}(\mathbf{q}) in the calculations any more.
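For orientation, the kinematic bound just quoted is easy to evaluate numerically. A back-of-envelope sketch (my own addition; the velocity value anticipates the v=5v_0 case considered below):

```python
# Largest proton deflection angle reachable via direct proton-electron
# collisions: theta ~ (m_e/M_pr)*(2*v_0/v).
m_e_over_Mpr = 1.0 / 1836.15   # electron-to-proton mass ratio
v_over_v0 = 5.0                # projectile speed in units of v_0

theta_kin = m_e_over_Mpr * (2.0 / v_over_v0)
print(theta_kin)               # a few times 1e-4 rad
```

Any angle we would normally call “large” is many orders of magnitude above this, so large-angle events indeed single out the nucleus.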

Let us analyze the second atomic form factor (3) in the elastic channel. With our assumptions on the wave function, it is easily calculated once the corresponding wave functions are inserted into (3):

f_n^{n}(\mathbf{q}) \approx \int \left | \psi_{0}(\mathbf{r}_1)\right |^2\left |\phi_n(\mathbf{r}_2) \right |^2 e^{ i\frac{m_e}{M_A}\mathbf{q}(\mathbf{r}_1+\mathbf{r}_2)}d^3 r_1 d^3 r_2\qquad(4)

It factorizes into two Hydrogen-like form-factors:

f_n^{n}(\mathbf{q})\approx f1_0^{0}(\mathbf{q}) \cdot f2_n^{n}(\mathbf{q}) \qquad (5)

The form factor \left|f1_0^{0}(\mathbf{q})\right| can be close to unity while, at the same time, the form factor \left|f2_n^{n}(\mathbf{q})\right| may be very small under our conditions. In other words, the projectile may “see” a big positive charge cloud created by the motion of the atomic “core” (i.e., by the He^+ ion), but it may not see the true structure of the atomic “core” consisting of the nucleus and the ground-state electron. The complicated short-distance structure is integrated out in (4) and results in an elastic form factor \left|f1_0^{0}\right| tending to unity. We can pick such a proton energy E_{pr}, such a scattering angle \theta, and such an excited state |n\rangle that \left|f1_0^{0}\right| is equal to unity even at the largest transferred momentum, i.e., at \theta=\pi. In order to see to what extent this is physically possible in our problem, let us analyze the “characteristic” angle \theta 1_0 for the inner electron state [1]. It is the angle at which the inelastic processes become relatively essential (the probability of exciting any “internal” state is described by the factor \left[1-|f1_0^0|^2\right]):

\theta 1_0=2 \arcsin\left(\frac{2v_0}{2v}\cdot 5\right)\qquad (6)

Here 2v_0 stands instead of v_0 for the He^+ ion due to Z_A = 2, and the factor 5 originates from the expression \left(1+\frac{M_A}{M_{pr}}\right). So, \theta 1_0=\pi for v=5 v_0=2.5\cdot 2v_0. Fig. 2 shows such a case together with another form factor, that of a third excited state of the other electron.
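Equation (6) is simple enough to evaluate directly. A short sketch (my own addition; v_0 is set to 1 as the unit of velocity) confirming that the characteristic angle saturates at \pi exactly at v=5v_0:

```python
import math

v0 = 1.0  # unit of velocity

def theta1_0(v):
    """Characteristic angle of Eq. (6): 2*arcsin((2*v0/(2*v))*5).
    The arcsin argument is clipped at 1, where the angle saturates at pi."""
    arg = min((2.0 * v0 / (2.0 * v)) * 5.0, 1.0)
    return 2.0 * math.asin(arg)

print(math.isclose(theta1_0(5.0 * v0), math.pi))       # True: elastic at all angles
print(math.isclose(theta1_0(10.0 * v0), math.pi / 3))  # True: faster projectile, smaller angle
```

Faster projectiles shrink \theta 1_0, opening the inelastic channels of the inner electron at progressively smaller angles.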

Figure  2. Helium form-factors f1_0^0 and f2_3^3 at v=5v_0. q_{elastic}=p\cdot 2\sin(\theta/2).
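A rough numerical stand-in for the content of Fig. 2 (entirely my own construction): I use the exact hydrogen-like 1s form factor [1+(ka/2)^2]^{-2} for the inner electron (radius a_0/2 for Z_A=2), and, as a crude assumption of mine, the same 1s formula with an effective radius n^2 a_0 in place of the true excited-state form factor f2:

```python
import math

a0 = 1.0                     # Bohr radius (atomic units)
m_e, M_pr = 1.0, 1836.15     # electron and proton masses (a.u.)
M_A = 4.0 * M_pr             # helium atom mass, roughly four proton masses

def f_1s(k, a):
    """Elastic form factor of a 1s charge density of radius a."""
    return (1.0 + (k * a / 2.0) ** 2) ** -2

v = 5.0                      # proton speed in units of v0 (the Fig. 2 case)
p = M_pr * v                 # proton momentum (a.u.)
theta = 0.3                  # a moderate scattering angle (rad)
q = 2.0 * p * math.sin(theta / 2.0)
k = (m_e / M_A) * q          # the momentum argument appearing in Eq. (4)

f1 = f_1s(k, a0 / 2.0)       # inner electron of the He+ core (Z_A = 2)
f2 = f_1s(k, 3**2 * a0)      # outer electron, n = 3 (crude 1s stand-in)
print(f1, f2)                # f1 stays near unity, f2 is already small
```

The numbers only illustrate the qualitative claim: the projectile resolves the soft outer cloud while still seeing the core as nearly point-like. The real \phi_n form factor differs from this 1s stand-in.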

 

We see that for scattering angles \theta\ll\theta 1_0 (v) the form factor |f1_0^0| becomes very close to unity (only the elastic channel is open for the inner electron state), whereas the form factor \left|f2_n^n\right| may still be very small if \theta\ge\theta 2_n, the characteristic angle \theta 2_n\ll 1 of the outer electron. The latter form factor describes a large and soft “positive charge cloud”, and for inelastic scattering (n'\ne n) it describes the soft target excitations energetically accessible when hitting the nucleus. (An electron as a projectile with v=10\cdot 2v_0 does not “see” the core structure at all [1].)

The inner electron excitations due to hitting the nucleus can also be suppressed at any angle in the case of relatively small projectile velocities (Fig. 3).

Figure 3. Helium form-factors f1_0^0 and f2_5^5 at v=2v_0.

 

Let us note that for small velocities the first Born approximation may become somewhat inaccurate: a “slow” projectile “polarizes” the atomic “core”, and this effect numerically influences the exact elastic cross section. Higher-order perturbative corrections of the Born series take care of this effect, but the short-distance physics still does not intervene in a harmful way in our calculations. Instead of simply dropping out (i.e., producing a unity factor in the cross section (1)), it is taken into account (“integrated out”) more precisely, when necessary.

Hence, whatever the true internal structure is (the true high-energy physics, the true high-energy excitations), the projectile in our “two-electron” theory cannot actually probe it when it lacks energy. The projectile sees it mostly as a point-like charge. This is comprehensible physically and rather natural. In our calculation, however, this “integrating out” (in fact, “taking into account”) of the short-distance physics occurs automatically rather than “by hand”, i.e., by introducing a cut-off and discarding the harmful corrections. It convinces me that it is possible to construct a physically reasonable QFT where no cut-off and no discarding are necessary.

The first Born approximation in the elastic channel gives a “photo” of the atomic charge distribution as if the atom were unperturbed; a photo with a certain resolution, though. Inelastic processes give possible final target states different from the initial one. The inclusive cross section reduces, to a great extent, to a Rutherford scattering formula for a still point-like target charge with Z_{eff}=1 and mass M_{Nucl}\approx 4M_{pr} [1].
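For reference, the Rutherford cross section that the inclusive result reduces to can be sketched as follows (my own addition; standard textbook formula in Gaussian units, with Z_{eff}=1 as stated above):

```python
import math

def rutherford(theta, E, Z1=1.0, Z2=1.0, e2=1.0):
    """d(sigma)/d(Omega) = (Z1*Z2*e^2/(4E))^2 / sin^4(theta/2)."""
    return (Z1 * Z2 * e2 / (4.0 * E)) ** 2 / math.sin(theta / 2.0) ** 4

# The familiar forward peaking: small angles dominate backscattering
# by several orders of magnitude.
ratio = rutherford(0.1, E=1.0) / rutherford(math.pi, E=1.0)
print(ratio)
```

The point-like behaviour of this formula at all angles is exactly what the elastic form factor f1_0^0 \to 1 expresses in the analysis above.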

Increasing the projectile energy and the scattering angles in experiments helps reveal the short-distance physics in more detail. In doing so, we may discover high-energy excitations inaccessible at lower energies/angles. Thus, we learn that our knowledge (for example, about the pointlikeness of the core) was not really precise, not “microscopic”.

 

Discussion

Above we did not encounter any calculation difficulties. We may say, therefore, that our theory is physically reasonable.

What makes our theory physically reasonable? The permanent interactions of the atomic constituents are taken into account exactly, both in their wave function and in the relationships between their absolute coordinates and the relative (or collective) coordinates (namely, \mathbf{r}_{Nucl}, expressed via \mathbf{R}_A and \mathbf{r}_a, is involved in V(\mathbf{r}_{pr}-\mathbf{r}_{Nucl})). The rest is perturbation theory in this or that approximation. It calculates the occupation number evolutions.

Now, let us imagine, for instance, that our “two-electron” theory is a “theory of everything”. Low-energy experiments would not reveal the “core” structure, but would present it as a point-like charge-1 “nucleus”. Such experiments would then be well described with a simpler, “one-electron” theory: a theory of a hydrogen-like atom with \phi_n (\mathbf{r}_2) and M_A \approx 4M_{pr}. The presence of the other electron is not necessary in such a theory; the latter works fine and without difficulties.

May we call the “one-electron” theory an effective one? Maybe. I prefer the term “incomplete”: it does not include and predict all target excitations existing in Nature, but it has no mathematical problems as a model, even outside its domain of validity. The projectile energy E_{pr} (or the transferred momentum |\mathbf{q}|) is not a “scale” in our theory in the Wilsonian sense.

Thus, the absence of the true physics of short distances in the “one-electron” theory does not make it fail mathematically. And this is so because the one-electron theory is constructed correctly too: what is known to be coupled permanently is already taken into account exactly in it via the wave function \phi_n. Hence, when people say that a given theory has mathematical problems “because not everything in it is taken into account”, I remain skeptical. I think the problem lies in its erroneous formulation. It is a problem of formulation or modeling (see, for example, the unnecessary and harmful “electron self-induction effect” discussed in [2] and the equation coupling error discussed in [3]). And I do not believe that when everything else is taken into account, the difficulties will disappear automatically; especially if the “new physics” is taken into account in the same way, erroneously. Instead of excuses, we need a correct formulation of incomplete theories at each level of our knowledge.

Now, let us consider a one-electron state in QED. According to the QED equations, “everything is permanently coupled with everything”; in particular, even a one-electron state contains possibilities of exciting high-energy states, such as creating hard photons and electron-positron pairs. It is certainly so in experiments, but the standard QED suffers from calculation difficulties in obtaining them in a natural way because of its awkward formulation. That is why “guessing right equations” is still an important task.

 

Electronium and all that

My electronium model [1] is an attempt to take the low-energy physics into account exactly, as in the “one-electron” incomplete atomic model mentioned briefly above. It does not include any QED excitations other than soft photons; however, and this is important, it works fine in the low-energy region. Colliding two electroniums would produce soft radiation immediately, in the first Born approximation. By the way, the photons are those normal modes of the collective motions whose variables in \psi_n are separated.

How would I complete my electronium model, if given a chance? I would add all QED excitations in a similar way: I would add a product of the other possible “normal modes” to the soft photon wave function, and I would express the electron coordinates via the center-of-mass and relative motion coordinates, as in electronium or in an atom. Such a completion would work as well as my non-relativistic electronium model, but it would produce the whole spectrum of possible QED excitations in a natural way. Of course, I have not done it yet (due to lack of funds) and it might be technically very difficult to do, but in principle such a (reformulated QED) model would be free from difficulties by construction. Yes, it would be an “incomplete” QFT, but no references to the absence of the other particles (excitations) existing in Nature would be necessary to justify manually integrating out the “short-distance physics” in it, as is done today in the frame of the Wilsonian RG exercise.

 

Conclusions

In a “complete” reformulated QFT (or “theory of everything”), excitations inaccessible at a given energy E would not contribute (with some reservations). Roughly speaking, they would be integrated out automatically, as in my “two-electron” target model given above.

But this property of “insensitivity to short-distance physics” does not belong exclusively to the “complete” reformulated QFT. “Incomplete” theories can also be formulated in such a way that this property holds. It means that the short-distance physics present in an “incomplete theory”, even if different from reality, will not be technically harmful for calculations, as was demonstrated in this article. When the time arrives, the new high-energy excitations could be taken into account in the natural way described above in a primitive form (as a transition from a “one-electron” to a “two-electron” model, for example). I propose to think over this way of constructing QFT. I feel it is a promising direction for building physical theories.

 

References

[1] Kalitvianski V 2009 Atom as a “Dressed” Nucleus Cent. Eur. J. Phys. 7(1) 1–11 (Preprint arXiv:0806.2635 [physics.atom-ph])

[2] Feynman R 1964 The Feynman Lectures on Physics vol. 2 (Reading, Massachusetts: Addison-Wesley Publishing Company, Inc.) pp 28-4–28-6

[3] Kalitvianski V 2013 A Toy Model of Renormalization and Reformulation Int. J. Phys. 1(4) 84–93 (Preprint arXiv:1110.3702 [physics.gen-ph])

On “Renormalization and Gauge Invariance” by G. ‘t Hooft

September 6, 2014

There was a period when renormalization was considered a temporary remedy, working luckily in a limited set of theories and supposed to disappear within a physically and mathematically better approach. P. Dirac called renormalization “doctoring numbers” and advised us to search for better Hamiltonians. J. Schwinger also underlined the necessity of identifying the implicit wrong hypothesis whose harm is removed by renormalization, in order to formulate the theory in better terms from the very beginning. Alas, many tried, but none prevailed.

In his article G. ‘t Hooft mentions the skepticism with respect to renormalization, but he says that this skepticism is not justified.

I was reading this article to understand his way of thinking about renormalization. I thought it would contain something original, insightful, clarifying. After reading it, I understood that G. ‘t Hooft had nothing to say.

Indeed, what does he propose to convince me?

Let us consider his statement: “Renormalization is a natural feature, and the fact that renormalization counter terms diverge in the ultraviolet is unavoidable”. This is too strong to be true; an exaggeration without any proof. But probably G. ‘t Hooft has had no other experience in his research career.

“A natural feature” of what or of whom? Let me be precise then: it may be unavoidable in a stupid theory, but it is unnatural even there. In a clever theory everything is all right by definition. In other words, everything is model-dependent. However, G. ‘t Hooft tries to create the impression that there can be no clever theory, an impression that the present theory is good, ultimate and unique.

“The fact that mass terms in the Lagrangian of a quantized field theory do not exactly correspond to the real masses of the physical particles it describes, and that the coupling constants do not exactly correspond to the scattering amplitudes, should not be surprising.”

I personally, as an engineering physicist, am really surprised – I am used to equations with real, physical parameters. To what do those parameters correspond then?

“The interactions among particles have the effect of modifying masses and coupling strengths.” Here I am even more surprised! Who ordered this? I am used to the independence of masses/charges from interactions. Even in the relativistic case, the masses of constituents are unchanged, and what depends on interactions is the total mass, which is calculable. Now his interaction is reportedly such that it changes the masses and charges of the constituents, and this is OK. I was used to thinking that masses/charges were characteristics of interactions, and now I read that interactions factually modify interactions (or equations modify equations ;-)).

To convince me even more, G. ‘t Hooft says that this happens “when the dynamical laws of continuous systems, such as the equations for fields in a multi-dimensional world, are subject to the rules of Quantum Mechanics”, i.e., not in everyday situations. What is so special about continuous systems, etc.? I, on the contrary, think that this happens every time a person is too self-confident and commits a stupidity, i.e., it may happen in everyday situations. You have just to try it if you do not believe me. Thus, when G. ‘t Hooft talks me into accepting perturbative corrections to the fundamental constants, I wonder whether he has checked his theory for stupidity (like the stupid self-induction effect) or not. I am afraid he has not. Meanwhile, the radiation reaction is different from the near-field reaction, so we make a mistake when we take the latter into account. This is not a desirable effect, and that is why it is removed by hand anyway.

But let us admit he managed to talk me into accepting the naturalness of perturbative corrections to the fundamental constants. Now I read: “that the infinite parts of these effects are somehow invisible”. Here I am so surprised that I am screaming. Even a quiet animal would scream after these words. Because if they are invisible, why was he talking me into accepting them?

Yes, they are very visible, and yes, it is we who must make them invisible, and this is called renormalization. This is our feature. Thus, it is not “somehow”, but due to our active intervention in the calculation results. And it works! To tell the truth, here I agree: if I take the liberty to modify something for my convenience, it will work without fail, believe me. But it would be better and more honest to call those corrections “unnecessary”, since we subtract them.

How does he justify this intervention of ours in our own theory's results? He speaks of bare particles as if they existed. If the mass and charge terms do not correspond to physical particles, they correspond to bare particles, and the whole Lagrangian is a Lagrangian of interacting bare particles. Congratulations, we have figured out bare particles from postulating their interactions! What an insight!

No, frankly, P. Dirac wrote his equations for physical particles and found that this interaction was wrong; that is why we have to remove the wrong part by the corresponding subtractions. No bare particles were in his theory project or in experiments. We cannot pretend to have guessed a correct interaction of the bare particles. If one is so insightful and super-powerful, then let him try to write a correct interaction of physical particles; it is about time.

“Confrontation with experimental results demonstrated without doubt that these calculations indeed reflect the real world. In spite of these successes, however, renormalization theory was greeted with considerable skepticism. Critics observed that ‘the infinities are just being swept under the rug’. This obviously had to be wrong; all agreements with experimental observations, according to some, had to be accidental.”

That’s a proof from a Nobelist! It cannot be an accident! G. ‘t Hooft cannot provide a more serious argument than that. In other words, he insists that in a very limited set of renormalizable theories our transformations of calculation results from the wrong to the right may be successful not by accident, but because this unavoidable-but-invisible stuff does exist in Nature. Then why not go further? With the same success we can advance such a weird interaction that the corresponding bare particles will have a dick on the forehead to cancel its weirdness, and this shit will work, so what? Do they exist, those weird bare particles, in your opinion?

And he speaks of gauge invariance. Formerly it was a property of equations for physical particles, and now it has become a property of bare ones. Gauge invariance, relativistic invariance, locality, CPT, spin-statistics and all that are properties of bare particles, not of the real ones; let us face this truth if we take our theory seriously.

I like the interaction with counter-terms much better. First of all, it does not change the fundamental constants. Next, it shows the imperfection of our “gauge” interaction: the counter-terms subtract the unnecessary contributions. The cutoff-dependence of the counter-terms is much more natural, and it shows that we are still unaware of the right interaction, since we cannot write it down explicitly; at this stage of theory development we are still obliged to repair the calculation results perturbatively. In a clever theory, the Lagrangian contains only unknown variables, not the solutions, but presently the counter-terms contain solution properties, in particular the cutoff. The theory is still underdeveloped; that much is clear.

No, this paper by G. ‘t Hooft is neither original nor accurate; that is my assessment.

A Temple of Physics

August 9, 2014

It is the forum “Physics Overflow” (PO for short). Why is it a Temple? Because its moderators and administrators want it so: it should be a place for worshiping the established mainstream physics (the truth), and no heresy is allowed there. I have been told this rule so many times, and my research paper “A Toy Model…” has earned so many negative marks, that I, with my negative reputation (-279), descended to the last line of the forum users. My paper, submitted there to the Review Section, has not been reviewed yet; still, it has already got the worst reputation and become a standard of crackpottery in the review section. Look, for example, what they write about other papers submitted to the review section: “All three of these papers are awful garbage, much worse than VK’s in crackpot value, the negative reviews in this case write themselves.”

I am flattered. Being the first (even from the end) and setting a standard for research papers is an achievement. I am really flattered, although I came there for other purposes.

Philip Gibbs wrote an anti-crackpot index in his blog. I think the PO moderators and administrators are anti-crackpots by (self-)definition. Although young and rather anonymous, they already qualify to rule science. Or, better, they rule science, therefore they qualify.

P.S. I quit PhysicsOverflow, briefly, because of the lack of physics and the excess of politics/censorship there. They started to erase parts of my comments, and then whole comments, when they decided it was better for their site. I said that then there was no point for me to stay there. The administrator dimension10 wrote to me: “You’re right – it makes no sense for you to stay here, that’s what we’ve been telling you all along.” You know, according to them, the mainstream needs strong protection (because it is too fragile). Well, they won, I quit. Vive the mainstream!

Living with divergences

January 30, 2014


S. Weinberg wrote a paper “Living with infinities”, devoted partially to the memory of Gunnar Källén. He also outlined there his personal view on the problem of renormalization. Good for him.

I just take his title and refer to a movie clip where some live with divergences too. I rephrase L. J. Washington’s sober words:

It’s a condition of mental divergence: we find ourselves in the Wilsonian world, being part of the intellectual elite and subjugating infinities. But even though the renormalization ideology is totally convincing to us in every way, it is actually a construct of our psyche. We are mentally divergent. In that we escape certain unnamed realities that plague our lives here. When we stop appealing to it, we’ll be well.

(Behind L. J. Washington someone resembling P. Dirac solves a puzzle.)

The True (but modest) Heroes of Microworld

November 23, 2013

I’m speaking of bare particles. “Heroes” is maybe too pathetic, but “bricks” would be fine, since everything is made of them despite their being non-observable. Why are they non-observable? Because they are non-interacting particles, or particles “before interaction”. Inaccessible, for short.

Let us take QED – the first QFT for which Nobel Prizes were given. Its Lagrangian is the following:

\mathcal{L}=\left(i\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi - m\bar{\psi}\psi\right) -e\bar{\psi} \gamma^{\mu} A_{\mu} \psi - \frac{1}{4} F_{\mu\nu}F^{\mu\nu}, \; F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}

It is relativistic and gauge-invariant because the bare particles are such. The parameters m and e are the bare-particle mass and charge, and the term e\bar{\psi}\gamma^{\mu}A_{\mu}\psi is how the bare particles interact. Of course, bare particles also have spin and other quantum numbers.

You may wonder how we physicists know all that if the bare particles are unobservable (and why they interact if they are non-interacting particles)?

Good questions. Very intelligent! The answer is: due to our physical insight. You know, insight is the ability to see the invisible, to penetrate mentally into the unknown, to figure out everything correctly from small, rare, and distorted pieces of the whole picture. In fact, from long distances (from low-energy experiments with physical particles) we penetrated to the very end – to the point r=0 where the bare particles live. Thus we insightfully nailed the bare-particle properties and their interaction laws correctly despite their hiding from us.

And yes, the bare non-interacting particles do interact and even self-interact. It is they who permanently do this hard work. At first, naive glance these statements are inconsistent, but no: it is a kind of duality in physics. This duality is not much advertised because the bare particles are really modest bricks.

(It’s a joke without humor.)

On “The Higgs Fake” book by Alexander Unzicker

November 12, 2013

A recent book by Alexander Unzicker, “The Higgs Fake”, considers, in particular, how “particle physicists are fooling themselves with alleged results, while their convictions are based on group-think and parroting.” It represents a critical point of view, and it is not groundless. I would like to support Alexander Unzicker in his critique. In former times the founding fathers of physics spoke of unresolved fundamental problems, which are still not resolved satisfactorily, but nowadays everything is presented as a full-fledged building based on some “fundamental principles”. Let us take, for example, a quotation from W. Pauli, one of the most honest physicists of the last century:

We will be considered the generation that left behind unsolved such essential problems as the electron self-energy.

I think the essence is here and avoiding it created just a shaman’s practice where cheating and self-fooling are essential parts. To prove that, let me be more specific and let us consider the electron electromagnetic mass. This notion had arisen in Classical Electrodynamics (CED) well before the famous E=mc^2 was derived [1] and it remains an unsolved problem even today (there are still publications on this subject).  We must not confuse it with the electromagnetic mass defect, though, i.e., with a calculable interaction energy.

The electromagnetic mass can be thought of as a Coulomb energy of the electric field surrounding the electron when we calculate the total field energy. In other words, it is a consequence of the field concept. This part of the field energy is cut-off dependent and thus can take any value at your convenience. We all are familiar with the classical radius of electron r_0=e^2/mc^2, but if we take into account the electron magnetic moment field energy too, we will obtain another radius, closer to the Compton length \hbar/mc. Still, in nature there is no electron of the classical or any other radius. And normally this part of the field energy is entirely discarded and what is left is an interaction energy of charges. Thus, when we calculate a field energy, the electromagnetic mass is just of no use.
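As a quick numerical check of the two length scales just mentioned, here is a small script of mine (not from the source), using standard CGS values of the constants:

```python
# Compare the classical electron radius r_0 = e^2 / (m c^2) with the
# reduced Compton length hbar / (m c).  Constants in CGS units.
e = 4.803e-10      # electron charge, statC
m = 9.109e-28      # electron mass, g
c = 2.998e10       # speed of light, cm/s
hbar = 1.055e-27   # reduced Planck constant, erg*s

r0 = e**2 / (m * c**2)   # classical electron radius
lC = hbar / (m * c)      # reduced Compton length

print(f"r_0 = {r0:.3e} cm")          # ~ 2.8e-13 cm
print(f"l_C = {lC:.3e} cm")          # ~ 3.9e-11 cm
print(f"l_C / r_0 = {lC / r0:.1f}")  # ~ 137, i.e. 1/alpha
```

The two candidate “radii” differ by a factor of 1/α ≈ 137, which is the point: the cutoff-dependent field energy can suggest either scale, yet neither corresponds to any observed electron size.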

Apart from the total field energy, the electromagnetic mass of a point-like charge m_{em} enters the usual “mechanical” equation of a charge when we decide to insert the charge’s proper field into its equation of motion within a self-action ansatz. The latter is done for the sake of taking into account a weak radiation reaction force, which must provide the total energy conservation. The motivation – energy conservation – is understandable, but in a field approach with \mathcal{L}_{int}\propto j\cdot A we insert the entire field F_{\mu\nu}, not just the radiated field F_{\mu\nu}^{rad}, into the mechanical equation. We do it by (a wrong) analogy with an external force F_{\mu\nu}^{ext}. So, before this intervention of ours we have a good “mechanical” equation (I use a non-relativistic form)

m_e \ddot{ \mathbf{r} }= \mathbf{F}_{ext},\qquad (1)

which works almost fine (the near field, whatever it is, easily follows the charge according to Maxwell equations), and after our noble intervention it becomes

m_e \ddot{ \mathbf{r} }= \mathbf{F}_{ext} - m_{em}\ddot{ \mathbf{r} } + \frac{2e^2}{3c^3}\dddot{\mathbf{r}}.\qquad (2)

which does not work any more. The corresponding self-force term - m_{em}\ddot{ \mathbf{r} } with m_{em}\to\infty makes it impossible for the charge to change its state of uniform motion v = const. This is a self-induction force, an extremely strong one. It is an understandable “physical effect”, but first, it is not observed as infinite, and second, the self-induction force is not a radiation reaction force in any way, so our approach of describing the radiation influence via self-action is blatantly wrong. And even when made finite and small (albeit of the anticipated sign), it does not help conserve the total energy. Microsoft Windows would say:

ERROR_1

I.e., the term - m_{em}\ddot{ \mathbf{r} } is not of the right functional dependence. Instead of recognizing this error, physicists started to search for a pretext to keep the self-action idea in place. They noticed that discarding the term - m_{em}\ddot{ \mathbf{r} } “helps” (we will see later how it helps), but calling it honestly “discarding” would make fun of physicists. Discarding is not a calculation. Thus another brilliant idea was advanced – the idea of a “bare” mass m_0 =m_e-m_{em} that “absorbs” m_{em} (a “mechanism” later called mass renormalization). Tricky is Nature, but clever are physicists. In a recent historical paper Kerson Huang expresses the common attitude towards it [2]:


One notices with great relief that the self‐mass can be absorbed into the physical mass in the equation of motion

and he writes down an equation that follows from no experiment:

m_0 \ddot{ \mathbf{r} }= \mathbf{F}_{ext} - m_{em}\ddot{ \mathbf{r} } + \frac{2e^2}{3c^3}\dddot{\mathbf{r}}.\qquad (3)

It is here that the negative bare mass m_0 <0 is introduced into physics by physicists, introduced exclusively with the purpose of subtracting the harmful electromagnetic mass. This introduction is not convincing to me. A negative mass makes a particle move to the left when the force pulls it to the right. We have never observed such silly behaviour (like that of a stupid goat) and we have never written the corresponding equations. We cannot pretend that (1) describes such a wrong particle in an external field and that adding its self-induction makes the equation right, as Kerson Huang does. It is the other way around: in order to bring the wrong equation (2) closer to the original one (1), we just discard the electromagnetic mass whatever value it takes. Kerson Huang should have honestly written “One notices that the self-mass ought to be omitted”.

Likewise, those who invoke a hydrodynamic analogy present this silly speculation about arbitrary m_0 and m_{em} as a typical calculation – a calculation like in hydrodynamics, where everything is separately measurable, known, and physical. In CED this is not the case. And if the electromagnetic mass is already present in our phenomenological equation (1), the method of self-action takes it into account once more, which shows again that such an approach is self-inconsistent. You know, the self-induction of a wire is in fact a completely calculable physical phenomenon occurring with many interacting charges. Similarly, in plasma descriptions we calculate interactions to obtain the dynamics. Interaction is a good concept, but self-action of an elementary particle is a bad idea. By definition it describes no internal dynamics.

If a bare particle is not observable, we cannot even establish an equation for it, and we cannot pretend that its equation has the same form as the Newton equations for physical particles. They say, however, that the bare mass is not observable alone – it always comes in (3) together with the electromagnetic one: m_0 +m_{em}=m_e. But this is not true either: equation (1) contains the physical mass m_e, and in addition, if the external force in (1) contains the omnipresent gravity force, say m_e g for simplicity, the latter does not acquire any addendum when we add that self-induction force. In reality, we fight our own invention m_{em} with the help of another one – m_0, but too many people believe in both.

This is the real truth about the mass “renormalization” procedure. We ourselves introduce the self-mass into our equation and then we remove it. As nothing remains of it anywhere (the physical mass stays intact), I can safely say that there is no electromagnetic mass at all; that is my answer to this question (again, not to be confused with the mass defect due to interaction). (By the way, renormalization does not always work – there are many non-renormalizable theories where bad interaction terms spoil not only the original equation coefficients but also introduce wrong “remainders”. The success of renormalization is based on lucky accidents, see my opus here or here. P. Dirac clearly called it a fluke.)

Those who insist on this “calculation” forget that there are then forces keeping the charge parts together, and these forces have their own “self-induction” and “radiation reaction” contributions. No, this model needs too many “unknowns”.

Here I naively wonder why not use, from the very beginning, just the radiated field instead of the total field to take the “radiation reaction” into account? Then they might never have obtained the harmful jerk term \propto \dddot{\mathbf{r}}, but they do not do it. They stick to the self-action patched with the “bare-mass mechanism” and hope that the jerk “remainder” of the self-action will correctly describe the radiation reaction. Let us see.

So, after shamefully camouflaging the discarding of the silly term m_{em}\ddot{ \mathbf{r} }, they are left with the jerk \frac{2e^2}{3c^3}\dddot{\mathbf{r}}, called the “radiation reaction” force:

m_e \ddot{ \mathbf{r} }= \mathbf{F}_{ext} + \frac{2e^2}{3c^3}\dddot{\mathbf{r}}.\qquad (4)

Fortunately, it is wrong too. I say “fortunately” because it reinforces my previous statement that self-action is a wrong idea. This remainder cannot be used as it gives runaway solutions: not a small radiation reaction, but a rapid self-acceleration. Microsoft Windows would say:

ERROR_2

I. e., the term \frac{2e^2}{3c^3}\dddot{\mathbf{r}} is not of the right functional dependence either. In other words, all terms of self-action force in (2) are wrong. Briefly, this self-action idea was tried and it failed miserably. Period.
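The runaway behaviour of (4) is easy to exhibit numerically. Below is a minimal sketch of mine (not the author’s code), in illustrative units with the characteristic time \tau = 2e^2/3m_e c^3 set to 1: with \mathbf{F}_{ext}=0, equation (4) reduces to \dot{a}=a/\tau, so any tiny initial acceleration grows as e^{t/\tau} instead of decaying.

```python
# Euler integration of eq. (4) with F_ext = 0, i.e. da/dt = a / tau.
# Units chosen so that tau = 1 (illustrative, not physical, values).
tau = 1.0
dt, t = 1e-4, 0.0
a0 = 1e-6          # tiny initial acceleration
a = a0
while t < 10.0:
    a += (a / tau) * dt   # jerk term feeds the acceleration back into itself
    t += dt

print(f"a(10*tau) / a(0) = {a / a0:.3e}")  # ~ e^10 ≈ 2.2e4: runaway
```

Rather than relaxing toward uniform motion, the “free” charge self-accelerates exponentially, which is why this remainder cannot serve as a radiation reaction force.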

(This self-action can be figuratively represented as connecting an amplifier output to its input. It creates a feedback. First the feedback is strongly negative – no reaction to an external signal is possible anymore. After “repairing” this undesirable feedback, we get a strong positive feedback. Now we have a self-amplification whatever the external signal value is. No good either.)

A. Unzicker speaks of a fake, and to some readers this may look like an exaggeration. If you want to see physicists cheating, here is another bright example. The cheating consists in using \dddot{\mathbf{r}} in their “proof” of energy conservation [3], as if the corresponding equation (4) had physically reasonable quasi-periodic solutions. But it does not! Runaway solutions are not quasi-periodic and are not physical at all, so the proof is just a deception. (They multiply \dddot{\mathbf{r}} by \dot{\mathbf{r}} and integrate by parts to “show” that on average it is the radiated power.) If they insist on using quasi-periodic solutions in their proof, those solutions do not belong to Eq. (4). (A “jerky” equation like (4) does not even have a physical Lagrangian to be directly derived from!)

As a matter of fact, after cheating with the “proof”, this harmful jerk term is also (quietly) abandoned in favor of some small force term used in practice instead. This small term is \frac{2e^2}{3m_e c^3}\dot{\mathbf{F}}_{ext} (or the like):

m_e \ddot{ \mathbf{r} }= \mathbf{F}_{ext} + \frac{2e^2}{3m_e c^3}\dot{\mathbf{F}}_{ext}.\qquad (5)

Equation (5) is much better, but here I notice cheating once more, because they present it as a “derivation” from (4). Now the cheating consists in replacing \dddot{\mathbf{r}} with \dot{\mathbf{F}}_{ext}, as if we solved (4) by iterations (the perturbation method). However, in a true iterative procedure we obtain a given function of time \dot{\mathbf{F}}_{ext}^{(0)}(t)=\dot{\mathbf{F}}_{ext}\left(\mathbf{r}^{(0)}(t),\mathbf{v}^{(0)}(t)\right) on the right-hand side, rather than a term \dot{\mathbf{F}}_{ext} expressed via the unknown dynamical variables \mathbf{r} and \mathbf{v}. For example, in the oscillator equation

\ddot{y}+ \omega^2 y= \frac{2e^2}{3mc^3}\dddot{y}\qquad (6)

the first perturbative term \dot{F}_{ext}^{(0)}(t)\propto \dot{y}^{(0)}(t) is a known external periodic (resonant!) driving force, whereas the replacement term \dot{F}_{ext}\propto \dot{y} is an unknown damping force (a kind of friction):

\ddot{\tilde{y}}+ \gamma\,\dot{\tilde{y}}+ \omega^2 \tilde{y}= 0,\quad \gamma=\frac{2e^2\omega^2}{3mc^3}.\qquad (7)

A perturbative solution to (6) y\approx y^{(0)} + y^{(1)} (a red line in Fig. 2)

Fig. 2.

is different from the damped-oscillator solution \tilde{y} (the blue line in Fig. 2). The solution of the damped oscillator equation is nonlinear in \gamma, and nonlinear in a quite specific way. It is not a self-action, but an interaction with something else. This difference in equations is qualitative (conceptual), and it is quantitatively important in the case of a strong radiation reaction force and/or when t\to\infty (in this example I used y^{(0)}=\sin\omega t with \omega=10 and \gamma=0.3). I conclude, therefore, that the damped oscillator equation (7) is not a perturbative version of (6), but another guess, tried and finally kept in practice because of its physically more reasonable (though still approximate) behaviour. Similarly, equation (5) is not a perturbative version of (4), but another (imperceptible) equation replacement [3], [4]. Of course, there is not, and cannot be, any proof that the perturbative series for (4) converges to solutions of (5). The term \frac{2e^2}{3m_e c^3}\dot{\mathbf{F}}_{ext} is thus a third functional dependence tried for describing the radiation reaction force.
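The divergence of the two solutions is easy to check by hand. Substituting y^{(0)}=\sin\omega t into the right-hand side of (6) drives the oscillator exactly at resonance, giving the secular first-order term y^{(1)}=-(\gamma/2)\,t\sin\omega t: the perturbative envelope is 1-\gamma t/2, while the damped solution of (7) has the envelope e^{-\gamma t/2}. A minimal sketch of mine (with \gamma=0.3 as in the text; \omega only sets the carrier frequency and drops out of the envelopes) compares the two:

```python
import math

gamma = 0.3   # = 2 e^2 omega^2 / (3 m c^3), as defined in eq. (7)
for t in (0.0, 5.0, 20.0):
    pert = 1.0 - 0.5 * gamma * t        # envelope of y^(0) + y^(1), from eq. (6)
    damp = math.exp(-0.5 * gamma * t)   # envelope of the damped solution of (7)
    print(f"t = {t:5.1f}:  perturbative {pert:+.3f}   damped {damp:.3f}")
```

The two envelopes agree only to first order in \gamma t; by t=20 the perturbative amplitude has even changed sign (-2.0), while the damped one has decayed to about 0.05. This is exactly the red-versus-blue divergence of Fig. 2.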

Hence, researchers have been trying to derive equations describing the radiation reaction force correctly, but they have failed. For practical (engineering) purposes they constructed (found by trying different functions) approximate equations like (5), which do not provide exact energy conservation and do not follow from “principles” (no Lagrangian, no Noether theorem, etc.). In fact the field approach has been “repaired” several times with anti-field guesswork, if you like. Anyway, we may not present it as a continuous implementation of principles, because it is not so.

Guessing equations is of course not forbidden – on the contrary – but this story shows how far we have gone from the original idea of self-action. It would not be such a harmful route if the smart mainstream guys did not elevate every step of this zigzag guesswork into “guiding principles” – relativistic and gauge invariance, restricting, according to the mainstream opinion, the form of the interaction to j\cdot A. Nowadays too few researchers see these steps as a severe lack of basic understanding of what is going on. On the contrary, the mainstream ideology consists in dealing with the same wrong self-action mechanism patched with the same discarding prescription (“renormalization”), etc., but accompanied also with anthems to these “guiding principles” and to their inventors. I do not buy it. I understand people’s desire to look smart – as if they had grasped the principles of Nature – but to me they look silly instead.

Indeed, let us for a moment look at Eq. (5) as an exact equation containing the desired radiation reaction correctly. We see that such an equation exists (at least, we admit its existence), it does not contain any non-physical stuff like m_{em} and m_0, and together with the Maxwell equations it works fine. Then why not obtain (5) directly from (1) and from another physical concept, different from the wrong self-action idea patched with several forced replacements of equations? Why do we present our silly way as the right and unique one? Relativistic and gauge invariance (equation properties) must be preserved, nobody argues, but making them “guiding principles” only leads to catastrophes, so (5) is not a triumph of “principles” but a lucky result of our difficult guesswork done against the misguiding principles. Principles do not think for us researchers. Thinking is our duty. In fact we need in (1) a small force like that in (5), but our derivation gives (2). What we then do is a lumbering justification of replacing the automatically obtained bad functions with creatively constructed better ones. Although equations like (5) work satisfactorily in some range of forces, the lack of a mechanical equation with an exact radiation reaction force in CED shows that we have not reached our goal and that those principles have let us down.

Note that although the above is a non-relativistic version of CED, the CED story is truly relativistic and gauge-invariant, and it serves as a model for many further theory developments. In particular, nowadays in QFT they “derive” the wrong self-action Lagrangian from a “principle of local gauge invariance” (a gauge principle for short). They find it mathematically beautiful and enjoy the equation symmetries and the conservation laws that follow from them. They repeat QED, where they think this “gauge principle” is at work. However, such gauge equations do not have physical solutions, so their conserved quantities are just bullshit. While enjoying the beauty of the gauge interaction, they omit to mention that the solutions are unphysical. The gauge principle in QED does not lead to physical equations. We are forced to rebuild a gauge theory, as I outlined above. In CED the bare and electromagnetic masses appear and disappear shortly after, but in QED and QFT they are present in every perturbative order. In addition, the physical charge also acquires unnecessary and bad “corrections”, and their omnipresence creates an impression that they belong to physics.

Next, new “principles” come into play – with the purpose of fixing this shit. Those principles serve to “allow” multiple replacements of bad terms in solutions with better ones – bare stuff and renormalizations, of course. A whole “fairy science” about “vacuum polarization” around a still “bare” charge is developed to get rid of the bad perturbative corrections in this wrong gauge construction (the renormalization group). It boils down to adding a counter-term Lagrangian \mathcal{L}_{CT} to the gauge one j\cdot A:

\mathcal{L}_{int}^R =j\cdot A+\mathcal{L}_{CT},\qquad (8)

so the interaction becomes different from a purely gauge one. (Often this is presented as imposing physical conditions on a (bad) theory.) Thus, the bare stuff and the bad corrections cancel each other and do not exist any more. That is their fate – to disappear from physics forever, if you understand it right. And it is we who make them disappear, not physical phenomena like vacuum polarization, etc. In other words, renormalization is not a calculation, but a specific modification of the calculation results.

But this fix is not sufficient either. They also need to sum up the soft diagrams (to all orders) in order to obtain physically meaningful results because, alas, the electron does not radiate correctly otherwise and the calculation fails! The latter fact shows eloquently that some part of the “perturbation” (8) (let us call it figuratively \mathcal{L}_{soft}) is not small and should be taken into account exactly (joined with \mathcal{L}_0 and hence removed from the “perturbation”):

Fig. 3. Electron scattering from an external field in the first Born approximation, as it must be.

\tilde{\mathcal{L}}_0=\mathcal{L}_0+\mathcal{L}_{soft},\qquad (9)

\tilde{\mathcal{L}}_{int}^R =j\cdot A+\mathcal{L}_{CT}-\mathcal{L}_{soft}.\qquad (10)

Taking it into account exactly means, in fact, using another, more physical zeroth-order approximation, with the Lagrangian \tilde{\mathcal{L}}_0 (9). The electron charge e enters there non-perturbatively, so the electron is already coupled with the field variables, at least partially (I call such an approximation an “electronium” [5]). The interaction (10) is even more different from the “gauge” one. (A good qualitative and quantitative analogy to such IR-divergent series and their exact sums is the second atomic form factor f_n ^n (\mathbf{q}) (3) and its series in powers of m_e/M_A when |\mathbf{q}|=const and n\to\infty, see Fig. (3) in [5].)

You see, our former initial approximation (the decoupled electron in \mathcal{L}_0) is not physical. You know why? Because we admit free particles in our minds and thus in our equations. We observe interacting macroscopic bodies. In the simplest case we speak of a probe body in an external force. Sometimes the external forces add up to nearly zero and do not noticeably change the body’s kinetic energy. Then we say the probe body is “free”. But we observe it with the help of interactions too (an inclusive image obtained with photons, for example), so it is never free, as a matter of fact, and, of course, its mass is not bare. For the electron this also means that its very notion as a “point particle”, together with its equations, is an inclusive picture of something compound [5]. An electron coupled to the field oscillators has a natural mechanism of “radiation reaction” and a natural inclusive picture. Such a coupling is always on and never off, unlike the gauge term j\cdot A treated perturbatively. W. Pauli always argued that one should look for a formulation of QED (or of a field theory in general) which would mathematically not allow the description of a charged particle without its electromagnetic field. Now, seeing to what extent \mathcal{L}_0 and j\cdot A differ from (9) and (10), I can safely say that they really do not understand what to start with in their “gauge theories”. Even the physical solution for a partially coupled electron (the “hairy” electron line in Fig. 3) is not written down, understood, or explained in QED, but who cares?

In the electroweak unification they wanted to make the weak part of the interaction “gauge” too, but gauge fields are massless. What a pity! Not only does this construction need counter-terms and soft-diagram summations, it now also needs a special “mechanism” to write down the mass terms in \mathcal{L}_0. Such a fix was found, and it is now known as the Higgs mechanism. This fix to the bad gauge-interaction idea is now presented as the ultimate explanation of the nature of mass: “Every ounce of mass found in the universe is made possible by the Higgs boson.” I wonder how we were doing before the Higgs? Writing down phenomenological mass terms, were we in error? No. Then why all these complications? Because they do not know how to write down interactions with massive particles correctly (an old story, see (9) and (10) above). All they write is not only unphysical but also non-renormalizable, so they decided to try the gauge principle here too. Fortunately or unfortunately, some such constructions are renormalizable, and thus they survived.

We remember the fiasco with the electron electromagnetic mass, and the Higgs proper mass is not really different, since the Higgs boson acquires its own mass due to “self-action” too. It is not a calculation but a fake, since the Higgs boson mass is taken from experiment.

The Standard Model is also furnished with a “fine-tuning mechanism” because otherwise it is still bullshit. And let me mention the fitting parameters coming with the “Higgs mechanism”. The fitting capacity of the theory has thus increased. Some, however, confuse this with an increase of “predictive power”.

To me the Higgs is a fix, somewhat similar to the bare mass term in CED compensating an obviously wrong construction, but a more complicated one. I do not think it is an achievement. The bare-mass notion is not an achievement in physics. The freedom of choosing the cutoff \Lambda in a relationship m_0(\Lambda)=m_e-m_{em}(\Lambda) (à la renorm-group) is not physics, and the \Lambda-independence of m_e is not a CED “universality”. I hope I am clear here. But nowadays particle physics is so stuffed with artefacts of our patches and stopgaps that it is really difficult to distinguish what is physical and what is a fairy tale (a fake).

Today they sell you the bare stuff, its self-action dictated by the gauge principle, then counter-terms, IR diagram summation, the Higgs field with self-action and fine tuning, poisons and antidotes, shit with nutlets, etc., as a physical theory. They are very pushy about it. They grasped all the principles of Nature.

No, they fool themselves with “clever insights” and fairy tales instead of doing physics. They count on “guiding principles”; they are under the spell of the gauge and other principles. Sticking to them is like being possessed. This underlines the shaky ground modern QFT is based on.

We have no right to dope ourselves with self-fooling and self-flattering. The conceptual problems have not been resolved, let us recognize it.

(To be updated.)

[1] Laurie M. Brown (editor). Renormalization From Lorentz to Landau (and beyond), 1993, Springer-Verlag, the talk of Max Dresden.

[2] Kerson Huang, A Critical History of Renormalization, http://arxiv.org/abs/1310.5533

[3] H. Lorentz, Landau-Lifshitz, R. Feynman, etc.

[4] Fritz Rohrlich, The dynamics of a charged particle, (2008) http://arxiv.org/abs/0804.4614

[5] Vladimir Kalitvianski, Atom as a “Dressed” Nucleus, Central European Journal of Physics, V. 7, N. 1, pp. 1-11 (2009), http://arxiv.org/abs/0806.2635

Higgs field filled the whole space

October 11, 2013

Sorry for the pun, if any.

I wonder whether the photon field has filled the whole space too, then?

International Journal of Physics (Sciepub) has published my paper online

August 14, 2013

This paper is available on arXiv and now on the IJP site in open access.


A popular explanation of renormalization

January 6, 2013

I show where the error is made. Everyone can follow it.

Many think that renormalization belongs to relativistic quantum nonlinear field theories, and that is true, but it is not the whole truth. The truth is that renormalization arises every time we undesirably modify the coefficients of our equations by introducing a somewhat erroneous “interaction”; we then return to the old (good) values and call it renormalization. Both modifications of coefficients reflect our shameful modeling errors, and this can be demonstrated quite easily with the help of a simple and exactly solvable equation system resembling Classical and Quantum Electrodynamics.

Let us consider a couple of very familiar differential equations with phenomenological coefficients (two Newton equations):

One can see that the particle acceleration excites the oscillator when the particle is subject to an external force. In this respect it is analogous to electromagnetic-wave radiation due to charge acceleration in Electrodynamics. When there is no external force, the “mechanical” and “wave” equations become “decoupled”.

The oscillator equation system can be equivalently rewritten via the external force:

It shows that the external-force application point, i.e., our particle, is a part of the oscillator, and this reveals how Nature works (remember P. Dirac’s “One wants to understand how Nature works” in his talk “Does Renormalization Make Sense?” at a conference on perturbative QCD, AIP Conf. Proc. V. 74, pp. 129–130 (1981)).

Systems (1) and (2) look as if they do not respect the “energy conservation law”: the oscillator energy can change, but the particle equation does not contain any “radiation reaction” term. Our task is to complete the mechanical equation with a small “radiation reaction” term, as in Classical Electrodynamics. It is precisely here that we make an error. Indeed, let me tell you without delay that the right “radiation reaction” term for our particle is the following:

If we inject it in system (2), we will obtain a correct equation system:

Here we are, nothing else is needed for “reestablishing” the energy conservation law. System (4) can be derived from a physical Lagrangian in a regular way (see formula (22) here). We can safely give (4) to engineers and programmers to perform numerical calculations. Period. But it is not what we actually do in theoretical physics.

Instead, we, roughly speaking, insert (3) into (1) with the help of our wrong ansatz of how the “interaction” should be written. Let us see what happens then:

Although it is not visible in (5) at first glance, the oscillator equation gets spoiled – even the free-oscillator frequency changes. Consistency with experiment gets broken. Why? The explanation is simple: while developing the right equation system, we have to keep the right-hand side of the oscillator equation a known function of time or, more precisely, an external force, as in (2), rather than keep its “form” (1) (I call it “preserving the physical mechanism, the spirit, not the form”). Otherwise it will be expressed via the unknown variable \mathbf{\ddot{r}}_{p}, which is now coupled to \mathbf{\ddot{r}}_{osc}, and this modifies the coefficient of the oscillator acceleration when \mathbf{\ddot{r}}_{p} in the oscillator equation is replaced with the right-hand side of the mechanical equation. In other words, if we proceed from (1), we make an elementary mathematical error because we not only add the right radiation reaction term but also modify the coefficients in the oscillator equation, contrary to our goal. As a result, both equations in (5) have wrong exact solutions. If we insist on this way, it is just our mistake (blindness, stubbornness), and no “bare” particles are responsible for the undesirable modifications of the equation coefficients.
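Since the toy equations themselves are not reproduced in this copy, here is my own generic sketch of the coefficient-modification mechanism just described (the couplings a and b are assumed for illustration, not the author’s actual coefficients). If the oscillator is driven by the particle acceleration, \ddot{x}_{osc}+\omega_0^2 x_{osc}=a\,\ddot{r}_p, and the particle equation is given the back-reaction term M\ddot{r}_p=F_{ext}+b\,\ddot{x}_{osc}, then eliminating \ddot{r}_p yields (1-ab/M)\,\ddot{x}_{osc}+\omega_0^2 x_{osc}=(a/M)F_{ext}: the free-oscillator frequency is shifted even when F_{ext}=0.

```python
import math

# Illustrative values only (hypothetical couplings, not from the source).
M, w0, a, b = 1.0, 1.0, 0.3, 0.4

# Eliminating the particle acceleration gives
#   (1 - a*b/M) * x_osc'' + w0^2 * x_osc = (a/M) * F_ext,
# so the observable free frequency is shifted:
w_eff = w0 / math.sqrt(1.0 - a * b / M)

print(f"bare frequency     w0    = {w0:.4f}")
print(f"shifted frequency  w_eff = {w_eff:.4f}")
```

The shift appears precisely because the coupling was written through the unknown acceleration rather than through the known external force – which is the error the paragraph above describes.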

However, in CED and QED they advance such an “interaction Lagrangian” (self-action) that spoils both the “mechanical” and the “wave” equations because it preserves the equation “form”, not the “spirit”. In our toy model we too can explicitly spoil both equations and obtain:

by advancing a similar “interaction Lagrangian” for the “decoupled” equations from (1):

Here in (6), \tilde{M}_p=M_p+\delta M_p,\; \tilde{M}_{osc}=M_{osc}+\delta M_{osc} are masses with “self-energy corrections”. Thus, it is the “interaction Lagrangian” (7) that is bad, not the original constants in (1), whichever smart arguments are invoked in proposing (7).

Moreover, a physical Lagrangian exists for the correct equation system (4); we simply have not found it yet. So it is we who are mainly responsible for modifying the equation coefficients in the passage from (1) to (6), not some “bare particle interactions”.

In CED and QFT they perform a second modification of coefficients, now in the perturbative solutions of (6), to obtain, roughly speaking, perturbative solutions of (4). This second modification is called “renormalization”, and it boils down to deliberately discarding the wrong and unnecessary “corrections” to the original coefficients in (6):

In other words, renormalization is our brute-force “repair” of the coefficients we ourselves spoiled in the original physical equations, whatever these equations are – classical or quantum. Although it sometimes helps, it is not a calculation in the true sense but a “working rule” at best. A computer cannot perform such solution (curve) modifications numerically; they can only be done by hand in analytical expressions. Such a renormalization can be implemented as a subtraction of some terms from (7), namely, a subtraction of

(called counter-terms), which underlines once more the initial wrongness of (7). It can only work by chance – if the remainder (3) is guessed right in the end, as in our toy model.
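The subtraction can be sketched numerically (my illustrative numbers, not a real CED/QED computation): the self-action ansatz shifts a mass, M \to M+\delta M, an observable computed from the spoiled equations inherits the shift, and the counter-term -\delta M is then subtracted by hand so that only the physical M survives:

```python
import math

# Hedged sketch of "renormalization as subtraction"; all numbers are
# illustrative, chosen only to show the mechanism.

k = 4.0    # oscillator stiffness (assumed)
M = 1.0    # physical mass in the correct equations
dM = 0.3   # spurious "self-energy" correction from the bad Lagrangian

M_spoiled = M + dM                            # coefficient after the self-action ansatz
M_repaired = M_spoiled - dM                   # counter-term subtraction, done by hand

omega_physical = math.sqrt(k / M)             # what experiment actually gives
omega_spoiled = math.sqrt(k / M_spoiled)      # prediction of the spoiled equations
omega_repaired = math.sqrt(k / M_repaired)    # after the "repair"

print(omega_physical)             # 2.0
print(round(omega_spoiled, 3))    # 1.754
print(omega_repaired)             # 2.0 -- the "repair" just undoes the damage
```

Nothing is “absorbed” here: the correction dM is first introduced and then removed, and the final answer is right only because the physical M was right to begin with.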

P. Dirac, R. Feynman, W. Pauli, J. Schwinger, S. Tomonaga, and many others were against such a “zigzag” way of doing physics: introducing something wrong and then subtracting it (physically, we add an electron self-induction force that prevents the electron from any change of its state, and then we discard its contribution entirely). Nowadays, however, this prescription is given a serious physical meaning: they say we do no discarding at all – it is the original coefficients that “absorb” our wrong corrections, because the original coefficients in (1) are “bare” and “running”! Of course, this is not true: nothing was bare or running in (1), and nothing is such in (4); this is simply how the blame is erroneously transferred from a bad interaction Lagrangian to the good original equations and their constants. Both modifications of coefficients (the self-action ansatz and the renormalization) are presented as a great achievement today. They do not reveal how Nature works, though, but how human nature works. Briefly, this is nothing else but self-fooling; let us recognize it. No grand unification is possible until we learn how to get to (4) directly from (1), without renormalization.

Most of our “theories” are non-renormalizable for just this reason: stubbornly counting on renormalization to help us out, we, by analogy, propose wrong “interaction Lagrangians” that not only modify the original coefficients in the equations, but also bring in wrong “radiation reaction” terms. Remember the famous \mathbf{\dddot{r}}_p leading to runaway exact solutions in CED and needing a further “repair” like \mathbf{\dddot{r}}_p\to\mathbf{\dot{F}}_{ext} or so.
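The runaway behaviour of the \mathbf{\dddot{r}}_p term is easy to exhibit numerically: with no external force, the Abraham–Lorentz equation reduces to \dot{a}=a/\tau, so any nonzero initial acceleration grows exponentially. The sketch below (with an arbitrary illustrative \tau, not the electron’s actual value) integrates exactly that:

```python
# Runaway solution of the force-free Abraham-Lorentz equation:
#   m*a = m*tau*da/dt  =>  da/dt = a/tau  =>  a(t) = a(0)*exp(t/tau).
# tau and the initial acceleration below are illustrative values only.

tau = 1.0    # radiation-reaction time scale (illustrative)
dt = 1e-4    # integration step
a = 1e-6     # tiny initial acceleration; no external force at all

t = 0.0
while t < 30.0:       # forward-Euler integration of da/dt = a / tau
    a += dt * a / tau
    t += dt

print(a > 1.0)        # True: the acceleration has blown up exponentially
```

A tiny perturbation, with zero external force, ends up with an enormous acceleration: this is the unphysical exact solution that the prescription \mathbf{\dddot{r}}_p\to\mathbf{\dot{F}}_{ext} is meant to “repair”.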

We must stop following this wrong way of doing physics and pretending that everything is all right.

P.S. The Wilsonian framework, like any other, proceeds from an implicit assumption of the uniqueness and correctness of the spoiled (i.e., wrong) equations; the cutoff and the renormalizations are simply and “naturally” needed there because “we do not know something” or because “our theory lacks something”. Such a “calming” viewpoint prevents us from reformulating the equations from other physical principles and “freezes” the incorrect way of doing physics in QFT. The Wilsonian interpretation, like any other, is in fact a covert recognition of the incorrectness of the theory equations (equations (6) in our case); let us state it clearly. First, one cuts off a correction under some “clever pretext”, and then one discards it entirely anyway, because this correction is entirely wrong whatever the cutoff value is – so the “clever pretext” for cutting off is put to shame.

And those who still believe in bare particles and their interactions, “discovered” by clever and insightful theorists despite the bare stuff being non-observable, believe in miracles. One of these miracles is the famous “absorption” of wrong corrections by wrong constants in the right theory (i.e., the constants absorb the corrections themselves, without human intervention).

