On “The Higgs Fake” book by Alexander Unzicker

November 12, 2013

A recent book by Alexander Unzicker, “The Higgs Fake”, considers, in particular, how “particle physicists are fooling themselves with alleged results, while their convictions are based on group-think and parroting.” It represents a critical point of view and it is not groundless. I would like to support Alexander Unzicker in his criticism. In former times the founding fathers of physics spoke of unresolved fundamental problems, which are still not resolved satisfactorily, but nowadays everything is presented as a full-fledged building based on some “fundamental principles”. Let us take, for example, a citation of W. Pauli, one of the most honest physicists of the last century:

We will be considered the generation that left behind unsolved such essential problems as the electron self-energy.

I think the essence is here, and avoiding it has created just a shaman’s practice where cheating and self-fooling are essential parts. To prove that, let me be more specific and consider the electron electromagnetic mass. This notion arose in Classical Electrodynamics (CED) well before the famous E=mc^2 was derived [1], and it remains an unsolved problem even today (there are still publications on this subject). We must not confuse it with the electromagnetic mass defect, though, i.e., with a calculable interaction energy.

The electromagnetic mass can be thought of as the Coulomb energy of the electric field surrounding the electron when we calculate the total field energy. In other words, it is a consequence of the field concept. This part of the field energy is cutoff dependent and thus can take any value at your convenience. We are all familiar with the classical electron radius r_0=e^2/mc^2, but if we take into account the field energy of the electron magnetic moment too, we will obtain another radius, closer to the Compton length \hbar/mc. Still, in nature there is no electron of the classical or any other radius. Normally this part of the field energy is entirely discarded, and what is left is an interaction energy of charges. Thus, when we calculate a field energy, the electromagnetic mass is simply of no use.
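For concreteness, here is a small numerical sketch of this cutoff dependence (my own illustration, in Gaussian units with standard CGS constants; the field energy outside a cutoff radius r_cut is U = e^2/2r_cut, so the “electromagnetic mass” is m_em = U/c^2 and takes whatever value the arbitrary cutoff dictates):

```python
# Sketch (assumption: Gaussian units, static Coulomb field only).
# The Coulomb field energy outside a cutoff radius r_cut is
# U = e^2 / (2 r_cut), so m_em = U / c^2 depends entirely on r_cut.

e = 4.803e-10      # electron charge, esu
m_e = 9.109e-28    # electron mass, g
c = 2.998e10       # speed of light, cm/s
hbar = 1.055e-27   # Planck constant / 2pi, erg*s

r_classical = e**2 / (m_e * c**2)   # classical radius, ~2.82e-13 cm
r_compton = hbar / (m_e * c)        # Compton length, ~3.86e-11 cm

def m_em(r_cut):
    """Electromagnetic mass from the Coulomb field outside r_cut."""
    return e**2 / (2 * r_cut * c**2)

# The cutoff is arbitrary, and so is m_em: from a small fraction of
# m_e (Compton-scale cutoff) to infinity (r_cut -> 0).
for r in (r_compton, r_classical, r_classical / 100):
    print(f"r_cut = {r:.3e} cm  ->  m_em/m_e = {m_em(r)/m_e:.4f}")
```

At the Compton-length cutoff the ratio m_em/m_e is just α/2 ≈ 0.0037; at the classical radius it is exactly 1/2; below that it grows without bound. Nothing physical fixes the value.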

Apart from the total field energy, the electromagnetic mass of a point-like charge m_{\rm{em}} enters the usual “mechanical” equation of a charge when we decide to insert the charge’s proper field into the charge equation of motion in the frame of a self-action ansatz. The latter is done for the sake of taking into account a weak radiation reaction force, which must provide the total energy conservation. The motivation – energy conservation – is understandable, but in a field approach with \mathcal{L}_{\rm{int}}\propto j\cdot A, we insert the entire field F_{\mu\nu}, not just the radiation field F_{\mu\nu}^{\rm{rad}}, into the mechanical equation. We do it by (a wrong) analogy with an external force F_{\mu\nu}^{\rm{ext}}. So, before this intervention of ours we have a good “mechanical” equation (I use a non-relativistic form)

m_{\rm{e}} \ddot{ \mathbf{r} }= \mathbf{F}_{\rm{ext}},\qquad (1)

which works almost fine (the near field, whatever it is, easily follows the charge according to Maxwell equations), and after our noble intervention it becomes

m_{\rm{e}} \ddot{ \mathbf{r} }= \mathbf{F}_{\rm{ext}} - m_{\rm{em}}\ddot{ \mathbf{r} } + \frac{2e^2}{3c^3}\dddot{\mathbf{r}}.\qquad (2)

that does not work any more. The corresponding self-force term -m_{\rm{em}}\ddot{\mathbf{r}} with m_{\rm{em}}\to\infty makes it impossible for a charge to change its state of uniform motion v = const. This is a self-induction force, an extremely strong one. It is an understandable “physical effect”, but first, it is not observed as infinite, and second, the self-induction force is not a radiation reaction force in any way, so our approach of describing the radiation influence via self-action is blatantly wrong. Albeit of the anticipated sign (and even when made finite and small), it does not help conserve the total energy. Microsoft Windows would report an error here.


I.e., the term -m_{\rm{em}}\ddot{\mathbf{r}} is not of the right functional dependence. Instead of recognizing this error, physicists started to search for a pretext to keep the self-action idea in place. They noticed that discarding the term -m_{\rm{em}}\ddot{\mathbf{r}} “helps” (we will see later how it helps), but calling it honestly “discarding” would make physicists look ridiculous. Discarding is not a calculation. Thus, another brilliant idea was advanced – the idea of a “bare” mass m_0 =m_{\rm{e}}-m_{\rm{em}} that “absorbs” m_{\rm{em}} (a “mechanism” later called mass renormalization). Tricky is Nature, but clever are physicists. In a recent historical paper Kerson Huang expresses the common attitude to it [2]:


One notices with great relief that the self‐mass can be absorbed into the physical mass in the equation of motion

and he writes down an equation which follows from no experiment whatsoever:

m_0 \ddot{ \mathbf{r} }= \mathbf{F}_{\rm{ext}} - m_{\rm{em}}\ddot{ \mathbf{r} } + \frac{2e^2}{3c^3}\dddot{\mathbf{r}}.\qquad (3)

It is here that the negative bare mass m_0 <0 is introduced into physics by physicists, introduced exclusively for the purpose of subtracting the harmful electromagnetic mass. This introduction is not convincing to me. A negative mass makes the particle move to the left when the force pulls it to the right. We have never observed such a silly behaviour (like that of a stupid goat) and we have never written the corresponding equations. We cannot pretend that (1) describes such a wrong particle in an external field and that adding its self-induction makes the equation right, as Kerson Huang does. It is the other way around: in order to make the wrong equation (2) closer to the original one (1), we simply discard the electromagnetic mass whatever value it takes. Kerson Huang should have written honestly: “One notices that the self-mass ought to be omitted”.

Likewise, those who refer to a hydrodynamics analogy present this silly speculation about arbitrary m_0 and m_{\rm{em}} as a typical calculation, a calculation like in hydrodynamics where everything is separately measurable, known, and physical. In CED this is not the case. And if the electromagnetic mass is already present in our phenomenological equation (1), the method of self-action takes it into account once more, which shows again that such an approach is self-inconsistent. You know, the self-induction of a wire is in fact a completely calculable physical phenomenon occurring with many interacting charges. Similarly, in plasma description we calculate interactions for dynamics. Interaction is a good concept, but self-action of an elementary particle is a bad idea. It describes no internal dynamics by definition.

If a bare particle is truly not observable, we cannot even establish an equation for it, and we cannot pretend that its equation is of the same form as the Newton equations for physical particles. That is why they say that the bare mass is not observable alone – it always comes in (3) together with the electromagnetic one: m_0 +m_{\rm{em}}=m_{\rm{e}}. But this is not true either: equation (1) contains the physical mass m_{\rm{e}}, and in addition, if the external force in (1) contains the omnipresent gravity force, say m_{\rm{e}} g for simplicity, the latter does not acquire any addendum when we add that self-induction force. In reality, we fight our own invention m_{\rm{em}} with the help of another one – m_0 – but too many people believe in both.

This is the real truth about the mass “renormalization” procedure. We ourselves introduce the self-mass into our equation and then we remove it. As nothing of it remains anywhere (the physical mass stays intact), I can safely say that there is no electromagnetic mass at all; that is my answer to this question (again, not to be confused with the mass defect due to interaction). (By the way, renormalization does not always work – there are many non-renormalizable theories where bad interaction terms spoil not only the original equation coefficients, but also introduce wrong “remainders”. Success of renormalization is based on lucky accidents, see my opus here or here. P. Dirac clearly called it a fluke.)

Those who insist on this “calculation” forget that there must then be forces keeping the charge parts together, and these forces have their own “self-induction” and “radiation reaction” contributions. No, this model needs too many “unknowns”.

Here I naively wonder: why not use, from the very beginning, just the radiated field instead of the total field to take the “radiation reaction” into account? Then they might never obtain the harmful jerk term \propto \dddot{\mathbf{r}}, but they do not do it. They stick to the self-action patched with the “bare mass mechanism” and hope that the jerk “remainder” of self-action will correctly describe the radiation reaction. Let us see.

So, after shamefully camouflaging the discarding of the silly m_{\rm{em}}\ddot{ \mathbf{r} }, they are left with the jerk \frac{2e^2}{3c^3}\dddot{\mathbf{r}} called a “radiation reaction” force:

m_{\rm{e}} \ddot{ \mathbf{r} }= \mathbf{F}_{\rm{ext}} + \frac{2e^2}{3c^3}\dddot{\mathbf{r}}.\qquad (4)

Fortunately, it is wrong too. I say “fortunately” because it reinforces my previous statement that self-action is a wrong idea. This remainder cannot be used as it gives runaway solutions: not a small radiation reaction, but a rapid self-acceleration. Microsoft Windows would report an error here as well.


I.e., the term \frac{2e^2}{3c^3}\dddot{\mathbf{r}} is not of the right functional dependence either. In other words, all terms of the self-action force in (2) are wrong. Briefly, this self-action idea was tried and it failed miserably. Period.
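The runaway behaviour is easy to verify numerically. Below is a minimal sketch of my own: with F_ext = 0 and time measured in units of \tau = 2e^2/3m_{\rm{e}}c^3 \approx 6\times 10^{-24} s, Eq. (4) reduces to \dddot{r} = \ddot{r}, and any nonzero initial acceleration self-amplifies exponentially instead of decaying:

```python
# Runaway solutions of m r'' = (2e^2/3c^3) r''' at F_ext = 0.
# In units of tau = 2e^2/(3 m c^3) the equation is a' = a, where
# a = r''. Integrate the third-order system (x, v, a) with RK4.
import math

def integrate(a0, t_end, dt=1e-4):
    def f(s):  # (x, v, a)' = (v, a, a) in units of tau
        return (s[1], s[2], s[2])
    s = (0.0, 0.0, a0)
    for _ in range(int(t_end / dt)):
        k1 = f(s)
        k2 = f(tuple(si + dt / 2 * ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + dt / 2 * ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6 * (c1 + 2 * c2 + 2 * c3 + c4)
                  for si, (c1, c2, c3, c4) in zip(s, zip(k1, k2, k3, k4)))
    return s

# A vanishingly small initial acceleration blows up as e^{t/tau}.
x, v, a_final = integrate(a0=1e-10, t_end=10.0)
print(f"a(10 tau)/a(0) = {a_final / 1e-10:.1f}  (e^10 = {math.exp(10):.1f})")
```

After only ten units of \tau (about 10^{-22} s for an electron) the acceleration has grown by a factor e^{10} \approx 22000, with no external force at all.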

(This self-action can be figuratively represented as connecting an amplifier output to its input. It creates a feedback. First the feedback is strongly negative – no reaction to an external signal is possible anymore. After “repairing” this undesirable feedback, we get a strong positive feedback. Now we have a self-amplification whatever the external signal value is. No good either.)

A. Unzicker speaks of a fake, and to some readers this may look like an exaggeration. If you want to see physicists cheating, here is another vivid example. This cheating consists in using \dddot{\mathbf{r}} in their “proof” of energy conservation [3], as if the corresponding equation (4) had physically reasonable quasi-periodic solutions. But it does not! Runaway solutions are not quasi-periodic and are not physical at all, so the proof is just a deception. (They multiply \dddot{\mathbf{r}} by \dot{\mathbf{r}} and integrate by parts to “show” that on average it is the radiated power.) If they insist on using quasi-periodic solutions in their proof, these solutions do not belong to Eq. (4). (A “jerky” equation like (4) does not even have any physical Lagrangian from which it could be directly derived!)

As a matter of fact, after cheating with the “proof”, this harmful jerk term is also (quietly) abandoned in favor of some small force term used in practice instead. This small term is \frac{2e^2}{3m_{\rm{e}} c^3}\dot{\mathbf{F}}_{\rm{ext}} (or the like):

m_{\rm{e}} \ddot{ \mathbf{r} }= \mathbf{F}_{\rm{ext}} + \frac{2e^2}{3m_{\rm{e}} c^3}\dot{\mathbf{F}}_{\rm{ext}}.\qquad (5)

Equation (5) is much better, but here again I notice cheating, because they present it as a “derivation” from (4). Now the cheating consists in replacing \dddot{\mathbf{r}} with \dot{\mathbf{F}}_{\rm{ext}}, as if we solved (4) by iterations (a perturbation method). However, in a true iterative procedure we obtain a given function of time \dot{\mathbf{F}}_{\rm{ext}}^{(0)}(t)=\dot{\mathbf{F}}_{\rm{ext}}\left(\mathbf{r}^{(0)}(t),\mathbf{v}^{(0)}(t)\right) on the right-hand side, rather than a term \dot{\mathbf{F}}_{\rm{ext}} expressed via the unknown dynamical variables \mathbf{r} and \mathbf{v}. For example, in an oscillator equation

\ddot{y}+ \omega^2 y= \frac{2e^2}{3m_{\rm{e}}c^3}\dddot{y}\qquad (6)

the first perturbative term \dot{F}_{\rm{ext}}^{(0)}(t)\propto \dot{y}^{(0)}(t) is a known external driving force (periodic and resonant (!) in the case of a harmonic oscillator), whereas the replacement term \dot{F}_{\rm{ext}}\propto \dot{y} is an unknown damping force (a kind of friction):

\ddot{\tilde{y}}+ \gamma\,\dot{\tilde{y}}+ \omega^2 \tilde{y}= 0,\quad \gamma=\frac{2e^2\omega^2}{3m_{\rm{e}}c^3}.\qquad (7)

A perturbative solution to (6) y\approx y^{(0)} + y^{(1)} (a red line in Fig. 2)

Fig. 2.

is different from the damped oscillator solution \tilde{y} (a blue line in Fig. 2). The solution of a damped oscillator equation is nonlinear in \gamma, nonlinear in a quite definite manner. It is not a self-action, but an interaction with something else. This difference in equations is qualitative (conceptual), and it is quantitatively important in the case of a strong radiation reaction force and/or when t\to\infty (in this example I used y^{(0)}=\sin\omega t with \omega=10 and \gamma=0.3). I conclude therefore that the damped oscillator equation (7) is not a perturbative version of (6), but another guesswork result, tried and finally kept in practice because of its physically more reasonable (although still approximate) behaviour. Similarly, equation (5) is not a perturbative version of (4), but another (imperceptible) equation replacement [3], [4]. Of course, there is not and cannot be any proof that the perturbative series for (4) converges to solutions of (5). The term \frac{2e^2}{3m_{\rm{e}} c^3}\dot{\mathbf{F}}_{\rm{ext}} is a third functional dependence tried for describing the radiation reaction force.
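The Fig. 2 comparison is easy to reproduce numerically. A short sketch, assuming y^{(0)}=\sin\omega t with \omega=10 and \gamma=0.3 as in the text; solving \ddot{y}^{(1)}+\omega^2 y^{(1)} = -\gamma\omega\cos\omega t for the resonantly driven first correction gives the secular term y^{(1)} = -(\gamma t/2)\sin\omega t:

```python
# Perturbative solution of (6) versus exact solution of the damped
# oscillator (7), with the parameters used for Fig. 2.
import math

w, gamma = 10.0, 0.3

def y_perturbative(t):
    """y0 + y1 = (1 - gamma*t/2) sin(wt): a secular, growing correction."""
    return (1.0 - 0.5 * gamma * t) * math.sin(w * t)

def y_damped(t):
    """Exact solution of (7) with the same initial data."""
    wd = math.sqrt(w**2 - gamma**2 / 4)
    return math.exp(-0.5 * gamma * t) * math.sin(wd * t)

# Near t = 0 the envelopes agree (e^{-g t/2} ~ 1 - g t/2), but at
# large t the perturbative amplitude |1 - g t/2| GROWS while the
# damped one decays.
for t in (0.5, 20.0):
    print(t, abs(1.0 - 0.5 * gamma * t), math.exp(-0.5 * gamma * t))
```

At t = 20 the perturbative envelope has grown to 2 while the damped one has decayed to e^{-3} \approx 0.05: the first-order term is only a small correction for \gamma t \ll 1 and is not a substitute for the nonlinear dependence on \gamma.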

Hence, researchers have been trying to derive equations describing the radiation reaction force correctly, but they have failed. For practical (engineering) purposes they constructed (found by trying different functions) and are content with approximate equations like (5) that do not provide exact energy conservation and do not follow from “principles” (no Lagrangian, no Noether theorem, etc.). In fact, the field approach has been “repaired” several times with anti-field guesswork, if you like. In any case, we may not present it as a consistent implementation of principles, because it is not.

Guessing equations, of course, is not forbidden – on the contrary – but this story shows how far we have gone from the original idea of self-action. It would not be such a harmful route if the smart mainstream guys did not elevate every step of this zigzag guesswork into “guiding principles” – relativistic and gauge invariance, restricting, according to the mainstream opinion, the form of interaction to j\cdot A. Nowadays too few researchers see these steps as a severe lack of basic understanding of what is going on. On the contrary, the mainstream ideology consists in dealing with the same wrong self-action mechanism patched with the same discarding prescription (“renormalization”), etc., but now accompanied with anthems to these “guiding principles” and to their inventors. I do not buy it. I understand people’s desire to look smart – they grasped the principles of Nature – but they look silly to me instead.

Indeed, let us forget for a moment about its inexactness and look at Eq. (5) as an exact equation, i.e., as containing the desired radiation reaction correctly. We see that such an equation exists (at least, we admit its existence), it does not contain any non-physical stuff like m_{\rm{em}} and m_0, and together with the Maxwell equations it works fine. Then why not obtain (5) directly from (1) and from another physical concept, different from the wrong self-action idea patched with several forced replacements of equations? Why do we present our silly way as the right and unique one? Relativistic and gauge invariance (equation properties) must be preserved, nobody argues, but making them “guiding principles” only leads to catastrophes, so (5) is not a triumph of “principles”, but a lucky result of our difficult guesswork done against the misguiding principles. Principles do not think for us researchers. Thinking is our duty. In fact, we need in (1) a small force like that in (5), but our derivation gives (2). What we then do is a lumbering justification of replacing the automatically obtained bad functions with creatively constructed better ones. Although equations like (5) work satisfactorily in some range of forces, the lack of a mechanical equation with an exact radiation reaction force in CED shows that we have not reached our goal and those principles have let us down.

Note that although the above is a non-relativistic version of CED, the CED story is truly relativistic and gauge invariant, and it serves as a model for many further theory developments. In particular, nowadays in QFT they “derive” the wrong self-action Lagrangian from a “principle of local gauge invariance” (a gauge principle for short). They find it mathematically beautiful and enjoy the equation symmetries and the conservation laws that follow from this symmetry. They repeat QED, where they think this “gauge principle” is at work. However, such gauge equations do not have physical solutions, so their conserved quantities are just bullshit. While enjoying the beauty of the gauge interaction, they omit to mention that the solutions are non-physical. The gauge principle in QED does not lead to physical equations. We are forced to rebuild a gauge theory as I outlined above. In CED the bare and electromagnetic masses appear and disappear shortly afterwards for good, but in QED and QFT they reappear in each perturbative order. In addition, the physical charge also acquires unnecessary and bad “corrections”, and their omnipresence creates the impression that they belong to physics.

Next, new “principles” come into play – they come into play with the purpose of fixing this shit. Those principles serve to “allow” multiple replacements of bad terms in solutions with better ones – bare stuff and renormalizations, of course. A whole “fairy science” about “vacuum polarization” around a still “bare” charge is developed to get rid of bad perturbative corrections in this wrong gauge construction (the renormalization group). It boils down to adding a counter-term Lagrangian \mathcal{L}_{\rm{CT}} to the gauge one j\cdot A:

\mathcal{L}_{\rm{int}}^{\rm{R}} =j\cdot A+\mathcal{L}_{\rm{CT}},\qquad (8)

so the interaction becomes different from a purely gauge one. (Often this is presented as imposing physical conditions on a (bad) theory.) Thus, the bare stuff and the bad corrections cancel each other and do not exist any more. That is their fate – to disappear from physics forever, if you understand it right. And it is we who make them disappear, not physical phenomena like vacuum polarization, etc. In other words, renormalization is not a calculation, but a specific modification of calculation results.

But this fix is not sufficient either. One needs to sum up the soft diagrams too (to all orders) in order to obtain physically meaningful results because, alas, the electron does not radiate correctly otherwise and the calculation fails! The latter fact shows eloquently that some part of the “perturbation” (8) (let us call it figuratively \mathcal{L}_{\rm{soft}}) is not small and should be taken into account exactly (joined with \mathcal{L}_0 and hence removed from the “perturbation”):

Fig. 3. Electron scattering from an external field in the first Born approximation, as it must be.

\tilde{\mathcal{L}}_0=\mathcal{L}_0+\mathcal{L}_{\rm{soft}},\qquad (9)

\tilde{\mathcal{L}}_{\rm{int}}^{\rm{R}} =j\cdot A+\mathcal{L}_{\rm{CT}}-\mathcal{L}_{\rm{soft}}.\qquad (10)

Taking it into account exactly means in fact using another, more physical, zeroth-order approximation with the Lagrangian \tilde{\mathcal{L}}_0 (9). The electron charge e is involved there non-perturbatively, so the electron is already coupled with the field variables, at least partially (I call such an approximation an “electronium” [5]). Interaction (10) is even more different from the “gauge” one. (A good qualitative and quantitative analogy to such IR-divergent series and their exact sums is the second atomic form-factor f_n ^n (\mathbf{q}) (3) and its series in powers of m_{\rm{e}}/M_{\rm{A}} when |\mathbf{q}|=\rm{const} and n\to\infty, see Fig. 3 in [5] and [7].)

You see, our former initial approximation (a decoupled electron in \mathcal{L}_0) is not physical. You know why? Because we admit free particles in our minds and thus in our equations. We observe interacting macroscopic bodies. In the simplest case we speak of a probe body in an external force. Sometimes the external forces add up to nearly zero and do not change the body’s kinetic energy noticeably. Then we say the probe body is “free”. But we observe it with the help of interactions too (an inclusive image obtained with photons, for example), so it is never free, as a matter of fact, and, of course, its mass is not bare. For the electron it also means that its very notion as a “point particle”, together with its equations, is an inclusive picture of something compound [5]. An electron coupled to field oscillators has a natural mechanism of “radiation reaction” and a natural inclusive picture. Such a coupling is always on and never off, unlike the gauge term j\cdot A treated perturbatively. W. Pauli always argued that one should look for a formulation of QED (or of a field theory in general) which would mathematically not allow the description of a charged particle without its electromagnetic field. Now, seeing to what extent \mathcal{L}_0 and j\cdot A are different from (9) and (10), I can safely say that they really do not understand what to start with in their “gauge theories”. Even the physical solution for a partially coupled electron (a “hairy” electron line in Fig. 3) is not written down, understood, and explained in QED, but who cares? (My mechanical [6] and atomic toy models [7] demonstrate that this can be achieved.)

In the electroweak unification they wanted to make the weak part of the interaction “gauge” too, but gauge fields are massless. What a pity! Not only does this construction need counter-terms and soft-diagram summations, it now needs a special “mechanism” to write down the mass terms in \mathcal{L}_0. Such a fix was found, and it is now known as the Higgs mechanism. This fix to a bad gauge interaction idea is now presented as the ultimate explanation of the nature of mass: “Every ounce of mass found in the universe is made possible by the Higgs boson.” I wonder how we were doing before the Higgs? Were we in error when writing down phenomenological mass terms? No. Then why all these complications? Because they do not know how to write down interactions with massive particles correctly (an old story, see (9) and (10) above). All they write is not only non-physical, but also non-renormalizable, so they decided to try the gauge principle here too. Fortunately or unfortunately, some such constructions are renormalizable, and thus they survived.

We remember the fiasco with the electron electromagnetic mass, and the Higgs proper mass is not really different, since the Higgs boson acquires its own mass due to “self-action” too. It is not a calculation, but a fake, since the Higgs boson mass is taken from experiment.

The Standard Model is also furnished with a “fine-tuning mechanism” because otherwise it is still bullshit. And let me mention the fitting parameters coming with the “Higgs mechanism”. The fitting capabilities of the theory have thus increased. Some, however, confuse this with an increase of “predictive power”.

To me the Higgs is a fix, a fix somewhat similar to the bare mass term in CED compensating an obviously wrong construction, but a more complicated one. I do not think it is an achievement. The bare mass notion is not an achievement in physics. The freedom in choosing the cutoff \Lambda in a relationship m_0(\Lambda)=m_{\rm{e}}-m_{\rm{em}}(\Lambda) (à la renorm-group) is not physics, and the \Lambda-independence of m_{\rm{e}} is not a CED “universality”. I hope I am clear here. But nowadays particle physics is stuffed with artefacts of our patches and stopgaps, so it is really difficult to distinguish what is physical and what is a fairy tale (a fake).

Today they sell you the bare stuff, its self-action dictated by the gauge principle, then counter-terms, IR diagram summation, a Higgs field with self-action and fine tuning, poisons and antidotes, shit with nutlets, etc., as a physical theory. They are very pushy about it. They have grasped all the principles of Nature.

No, they fool themselves with “clever insights” and fairy tales instead of doing physics. They count on “guiding principles”; they are under the spell of the gauge and other principles. Sticking to them is like being possessed. This fact underlines the shaky grounds modern QFT is based on.

We have no right to dope ourselves with self-fooling and self-flattery. The conceptual problems have not been resolved; let us recognize it.

(To be updated.)

[1] Laurie M. Brown (editor), Renormalization: From Lorentz to Landau (and Beyond), Springer-Verlag, 1993; see the talk by Max Dresden.

[2] Kerson Huang, A Critical History of Renormalization, http://arxiv.org/abs/1310.5533

[3] H. Lorentz, Landau-Lifshitz, R. Feynman, etc.

[4] Fritz Rohrlich, The dynamics of a charged particle, (2008) http://arxiv.org/abs/0804.4614

[5] Vladimir Kalitvianski, Atom as a “Dressed” Nucleus, Central European Journal of Physics, V. 7, N. 1, pp. 1-11 (2009), http://arxiv.org/abs/0806.2635

[6] Vladimir Kalitvianski, A toy model of renormalization and reformulation, http://arxiv.org/abs/1110.3702

[7] Vladimir Kalitvianski, On integrating out short-distance physics, http://arxiv.org/abs/1409.8326

Higgs field filled the whole space

October 11, 2013

Sorry for the pun, if any.

I wonder whether the photon field filled the whole space then?

International Journal of Physics (Sciepub) has published my paper online

August 14, 2013

This paper is available on arXiv and now on the IJP site in open access.


A popular explanation of renormalization

January 6, 2013

I show where the error is made. Everyone can follow it.

Many think that renormalization belongs to relativistic quantum nonlinear field theories, and it is true, but it is not the whole truth. The truth is that renormalization arises every time we undesirably modify the coefficients of our equations by introducing a somewhat erroneous “interaction”, so that we must return to the old (good) values, and we call that renormalization. Both modifications of coefficients show our shameful errors in modeling, and this can be demonstrated quite easily with the help of a simple and exactly solvable equation system resembling the radiation reaction problem in Classical and Quantum Electrodynamics.

Let us consider a couple of very familiar differential equations with phenomenological coefficients (two Newton equations):

\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p = \mathbf{F}_{ext}(t),\\ M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\alpha M_{osc}\mathbf{\ddot{r}}_{p},\quad\omega = \sqrt{k/M_{osc}}.\end{cases}\qquad (1)

One can see that the particle acceleration excites the oscillator if the particle is in an external force. In this respect it is analogous to electromagnetic wave radiation due to charge acceleration in Electrodynamics. When there is no external force, the “mechanical” and the “wave” equations are “decoupled”.

The oscillator equation can be equivalently rewritten via the external force:

\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p = \mathbf{F}_{ext}(t),\\ M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\alpha \frac{M_{osc}}{M_p}\mathbf{F}_{ext}(t).\end{cases}\qquad (2)

It shows that the external force application point, i.e., our particle, is a part of the oscillator, and this reveals how Nature works (remember P. Dirac’s: “One wants to understand how Nature works” in his talk “Does Renormalization Make Sense?” at a conference on perturbative QCD, AIP Conf. Proc. V. 74, pp. 129-130 (1981)).

Systems (1) and (2) look like they do not respect an “energy conservation law”: the oscillator energy can change, but the particle equation does not contain any “radiation reaction” term. Our task is to complete the mechanical equation with a small “radiation reaction” term, like in Classical Electrodynamics. It is precisely here that the error is usually made. Indeed, let me tell you without delay that the right “radiation reaction” term for our particle is the following:

\normalsize \alpha M_{osc}\ddot{\mathbf{r}}_{osc}.\qquad (3)

If we inject it in system (2), we will obtain a correct equation system:

\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p=\mathbf{F}_{ext}(t)+\alpha M_{osc}\ddot{\mathbf{r}}_{osc},\\M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\alpha \frac{M_{osc}}{M_p}\mathbf{F}_{ext}(t).\end{cases}\qquad (4)

Here we are; nothing else is needed to “reestablish” the energy conservation law. System (4) can be derived from a physical Lagrangian in a regular way (see formula (22) here). We can safely give (4) to engineers and programmers to perform numerical calculations. Period. But this is not what we actually do in theoretical physics.
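Indeed, system (4) integrates without any trouble. A minimal sketch of my own, in one dimension, with illustrative unit values, a constant external force, and everything starting from rest; the oscillator equation of (4) then has the closed form r_osc(t) = (\alpha F/M_p\omega^2)(1-\cos\omega t), which the numerics reproduce:

```python
# Numerical integration of system (4) in 1D.
# Assumptions: constant F_ext, rest initial conditions, unit-scale
# parameters chosen for illustration only.
import math

M_p, M_osc, k, alpha, F = 1.0, 0.5, 4.0, 0.2, 1.0
w = math.sqrt(k / M_osc)

def rhs(state):
    """state = (r_p, v_p, r_osc, v_osc); system (4) as 1st-order ODEs."""
    r_p, v_p, r_osc, v_osc = state
    a_osc = -w**2 * r_osc + alpha * F / M_p        # oscillator eq of (4)
    a_p = F / M_p + alpha * M_osc / M_p * a_osc    # mechanical eq of (4)
    return (v_p, a_p, v_osc, a_osc)

def step_rk4(state, dt):
    k1 = rhs(state)
    k2 = rhs(tuple(s + dt / 2 * d for s, d in zip(state, k1)))
    k3 = rhs(tuple(s + dt / 2 * d for s, d in zip(state, k2)))
    k4 = rhs(tuple(s + dt * d for s, d in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, (a, b, c, d) in zip(state, zip(k1, k2, k3, k4)))

state, dt, T = (0.0, 0.0, 0.0, 0.0), 1e-3, 5.0
for _ in range(int(T / dt)):
    state = step_rk4(state, dt)

# Closed-form oscillator solution from rest, for comparison:
r_osc_exact = alpha * F / (M_p * w**2) * (1 - math.cos(w * T))
print(state[2], r_osc_exact)
```

No infinities, no bare constants: the right-hand side of the oscillator equation is a known function of time, so an engineer can integrate (4) as it stands.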

Instead, we, roughly speaking, insert (3) into (1) with the help of a wrong ansatz about how the “interaction” should be written. Let us see what happens then:

\normalsize \begin{cases}M_p\mathbf{\ddot{r}}_p = \mathbf{F}_{ext}(t)+\alpha M_{osc}\ddot{\mathbf{r}}_{osc},\\ M_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\alpha M_{osc}\mathbf{\ddot{r}}_{p},\end{cases}\qquad (5)

Although it is not visible in (5) at first glance, the oscillator equation gets spoiled – even the free oscillator frequency changes. Consistency with experiment breaks down. Why? The explanation is simple: while developing the right equation system, we have to keep the right-hand side of the oscillator equation a known function of time or, more precisely, an external force, like in (2), rather than keep its “form” (1) (I call it “preserving the physical mechanism, the spirit, not the form”). Otherwise it will be expressed via the unknown variable \mathbf{\ddot{r}}_{p}, which is now coupled to \mathbf{\ddot{r}}_{osc}, and this modifies the coefficient at the oscillator acceleration when \mathbf{\ddot{r}}_{p} in the oscillator equation is replaced with the right-hand side of the mechanical equation. In other words, if we proceed from (1), we make an elementary mathematical error, because we not only add the right radiation reaction term, but also modify the coefficients in the oscillator equation, contrary to our goal. As a result, both equations in (5) have wrong exact solutions. If we insist on this way, it is simply our mistake (blindness, stubbornness), and no “bare” particles are responsible for the undesirable modifications of the equation coefficients.
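The frequency shift is easy to exhibit numerically. A sketch of my own (same illustrative unit values as above): at F_ext = 0, eliminating \mathbf{\ddot{r}}_{p} from (5) is exact and leaves M_{osc}(1-\alpha^2 M_{osc}/M_p)\,\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=0, so the free oscillation no longer runs at \omega=\sqrt{k/M_{osc}}:

```python
# The "spoiled" system (5) with F_ext = 0 oscillates at a shifted
# frequency, not at the original w0 = sqrt(k / M_osc).
# Assumptions: unit-scale illustrative parameters.
import math

M_p, M_osc, k, alpha = 1.0, 0.5, 4.0, 0.2
w0 = math.sqrt(k / M_osc)

# Exact elimination of r_p from (5) at F_ext = 0 gives the effective
# oscillator mass M_osc * (1 - alpha^2 M_osc / M_p):
m_eff = M_osc * (1 - alpha**2 * M_osc / M_p)
w_shifted = math.sqrt(k / m_eff)

# Confirm by integration from r_osc = 1, v_osc = 0: the first zero
# crossing happens at a quarter of the (shifted) period.
r, v, t, dt = 1.0, 0.0, 0.0, 1e-5
while r > 0.0:
    a = -k * r / m_eff
    v += a * dt          # semi-implicit Euler, phase-stable
    r += v * dt
    t += dt

w_measured = math.pi / (2 * t)
print(w0, w_shifted, w_measured)  # w_measured tracks w_shifted, not w0
```

The free frequency has been modified by our own “interaction” ansatz, which is exactly the kind of coefficient corruption that renormalization then has to undo by hand.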

However, in CED and QED they advance such an “interaction Lagrangian” (self-action) that it spoils both the “mechanical” and the “wave” equations, because it preserves the equation “form”, not the “spirit”. In our toy model we too can explicitly spoil both equations and obtain:

\normalsize \begin{cases}\tilde{M}_p\mathbf{\ddot{r}}_p=\mathbf{F}_{ext}(t)+\alpha M_{osc}\ddot{\mathbf{r}}_{osc},\\\tilde{M}_{osc}\mathbf{\ddot{r}}_{osc}+k\mathbf{r}_{osc}=\alpha \frac{M_{osc}}{\tilde{M}_p}\mathbf{F}_{ext}(t),\end{cases}\qquad (6)

by advancing a similar “interaction Lagrangian” for the “decoupled” equations from (1):

\normalsize L_{int}=-\alpha M_{osc}\left(\mathbf{\dot{r}}_p\cdot\mathbf{\dot{r}}_{osc}-\frac{\eta}{2} \mathbf{\dot{r}}_p ^2\right).\qquad (7)

Here in (6) \tilde{M}_p=M_p+\delta M_p,\; \tilde{M}_{osc}=M_{osc}+\delta M_{osc} are the masses with “self-energy corrections”. Thus, it is the “interaction Lagrangian” (7) that is bad, not the original constants in (1), whatever smart arguments are invoked for proposing (7).

Moreover, there is a physical Lagrangian for the correct equation system (4). Therefore, we simply have not found it yet, and it is we who are mainly responsible for modifying the equation coefficients in our passage from (1) to (6), not some “bare particle interactions”.

In CED and QFT one performs a second modification of coefficients, now in the perturbative solutions of (6), to obtain, roughly speaking, the perturbative solutions of (4). This second modification is called “renormalization”, and it boils down to deliberately discarding the wrong and unnecessary “corrections” \delta M to the original coefficients in (6):

\tilde{M}\to M.\qquad (8)

In other words, renormalization is our brute-force “repair” of the coefficients we ourselves spoiled in the original physical equations, whatever these equations are – classical or quantum. Although it sometimes helps, it is not a calculation in the true sense, but a “working rule” at best. A computer cannot perform such solution (curve) modifications numerically; they can only be done by hand in analytical expressions. Such a renormalization can be implemented as a subtraction of some terms from (7), namely, a subtraction of

\alpha \eta\frac{ M_{osc}\dot{\mathbf{r}}_p^2}{2} -\alpha ^2\left(\frac{M_{osc}}{M_p} \right )^2\frac{M_p\dot{\mathbf{r}}_{osc}^2}{2},\qquad (9)

(called counter-terms), which underlines again the initial wrongness of (7). It may only work by chance – if the remainder (3) is guessed right in the end, as in our toy model.

P. Dirac, R. Feynman, W. Pauli, J. Schwinger, S. Tomonaga, and many others were against such a “zigzag” way of doing physics: introducing something wrong and then subtracting it (physically, we add an electron self-induction force -\delta m \cdot\ddot{\bf r} that prevents the electron from any change of its state \dot{\bf r}=const, and then we discard its contribution entirely). Nowadays, however, this prescription is given a serious physical meaning: they say we do no discarding, but that it is the original coefficients that “absorb” our wrong corrections, because the original coefficients in (1) are “bare” and “running”! Of course, this is not true: nothing was bare or running in (1), and nothing is such in (4); this is simply how the blame is erroneously transferred from a bad interaction Lagrangian to good original equations and their constants. Both modifications of coefficients (the self-action ansatz and renormalization) are presented as a great achievement today. This, however, does not reveal how Nature works, but how human nature works. Briefly, it is nothing but self-fooling – let us recognize it. No grand unification is possible until we learn how to get to (4) directly from (1), without renormalization.

Most of our “theories” are non-renormalizable for just this reason: stubbornly counting on renormalization to help us out, we propose, by analogy, wrong “interaction Lagrangians” that not only modify the original coefficients in the equations, but also bring wrong “radiation reaction” terms. Remember the famous \mathbf{\dddot{r}}_p leading to runaway exact solutions in CED and needing a further “repair” like \mathbf{\dddot{r}}_p\to\mathbf{\dot{F}}_{ext} or so.

We must stop following this wrong way of doing physics and pretending that everything is all right.

P.S. The Wilsonian framework, like any other, proceeds from an implicit idea of the uniqueness and correctness of the spoiled (i.e., wrong) equations; a cutoff and renormalizations are simply and “naturally” needed there because “we do not know something” or because “our theory lacks something”. Such a “calming” viewpoint prevents us from reformulating the equations from other physical principles and “freezes” the incorrect way of doing physics in QFT. The Wilsonian interpretation, like any other, is in fact a covert recognition of the incorrectness of the theory equations (equations (6) in our case) – let us state it clearly. First, one cuts off a correction under some “clever pretext”, and then one discards it entirely anyway, because this correction is just entirely wrong whatever the cutoff value is, so the “clever pretext” for cutting off is put to shame.

And those who still believe in bare particles and their interactions, “discovered” by clever and insightful theorists despite the bare stuff being unobservable, believe in miracles. One of the miracles is the famous “absorption” of wrong corrections by wrong constants in the right theory (i.e., the constants themselves absorb the corrections, without human intervention).

My presentations at INLN

March 16, 2012

On March 15 I gave two talks at l’Institut Non Linéaire de Nice (Sophia-Antipolis), next to Nice and Cannes, France. My interlocutors were Thierry Grandou (INLN, France) and Herbert Fried (Brown University, USA). Both of them were interested in learning my position and in my explanations, and I am very grateful to them for their invitation. It is a very rare case when people do not reject the very idea that renormalizations can be removed from our framework by reformulating our theories in better terms.

The slides without comments are here and here, and with comments (but smaller in size) are here and here.

IVONA – the best text-to-speech converter and the best voices

November 3, 2011

Recently I found a very good TTS converter with natural voices and other features: IVONA. Try it and maybe one day it will come in handy! It has British and American English male and female voices, as well as some other languages. It can be used not only as a simple text reader, but also to voice your applications if you are a software developer.

Ultimate explanation of renormalizations

July 16, 2011

Trying to communicate my results and ideas to people, I started to prepare a PowerPoint document. Any theoretical physics student can follow it. An article version is here: http://arxiv.org/abs/1110.3702.

There are many different “expoundings” of renormalization in the literature. I think mine is the only correct one; the others mislead and even fool you. For example, one geek considers the Archimedes effect as a mass renormalization and says that it may give a negative effective mass. What rubbish! Don’t buy it! Whatever the resulting force applied to a body, \vec{F}_{tot} = \vec{F}_1 + \vec{F}_2 + \vec{F}_3+..., the body mass remains the same. 😉

P.S. I am speaking, of course, of the “old-fashioned” problem of coupling certain equations, not of the solid-state-like renormalization à la Wilson. In other words, I do not touch the effective-theory approach, which works fine where it belongs.

Clarification of my position

December 30, 2010

Some readers think that I am “against” the QED and QFT results because I am against renormalizations. I may have been insufficiently clear in my criticism of renormalizations and thus produced such a false impression.

No, on the contrary: the final results of QED are right, and I use them as valuable data. I am just for a short-cut to these results. A careful reader can easily infer my position from my posts. I am convinced that we (I mean the QED fathers and their followers) work with a wrong QED Hamiltonian. Because of this, we are forced to “repair” the calculation results “on the go”. “Repairing” includes discarding unnecessary corrections to the fundamental constants and a selective summation of soft diagrams to all orders. So we only obtain the right inclusive cross sections in the end, not before!

The right Hamiltonian can give the same final results directly, in a routine perturbative way, without discarding any corrections and without summing divergent diagrams to all orders. The right Hamiltonian, if you like, can equally be called an “exactly renormalized” Hamiltonian. It contains only physical characteristics, and it must be constructed in a more physical way – what is coupled permanently in nature should be implemented as such in the new Hamiltonian rather than “coupled perturbatively”. A better initial approximation leads to a better perturbative series: the corrections become finite and reasonably small because the new initial approximation is closer to the exact solution. That’s it!

Some readers want me to produce the QED results with even more precision than the actual QED provides. They say it is the only way to attract attention to my approach. Frankly, they want too much from me (I mean, from one person). A theory development is the result of years of work by many professional researchers. And I do not even hold an academic position with sufficient research freedom to carry out these laborious calculations. So my results are modest in this respect. But I hope I outline the right direction quite unambiguously.

P.S. See this.

Zoom in on Atom or Unknown Physics of Short Distances

December 2, 2010

In about 1985, while considering a banal problem of scattering from atoms, I happened to derive a positive-charge (“small” or “second”) atomic form-factor

describing the effects of electrostatic interaction of a charged projectile with a point-like (structureless) atomic nucleus in the atom [1] (formulas (1)–(3), given as images in the post).
To my surprise, it was unknown and absent from textbooks, despite the ease of its derivation and its transparent physical meaning. Namely, for elastic scattering f_n^{n} describes the positive charge distribution in the atom (not within the nucleus), and for inelastic processes the form-factors f_n^{n^{\prime}} describe the atom excitation amplitudes due to transferring a substantial momentum q to the atomic nucleus while scattering (exciting the atom by pushing its nucleus).
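For the Hydrogen ground state the elastic “second” form-factor can be written in closed form, and a few lines of code show how slowly it decays. This is a sketch under assumptions of mine (1s state, Bohr-radius units, and the nucleus displaced from the CI by the factor m_e/M ≈ 1/1836): the positive cloud is the 1s cloud shrunk by m_e/M, so its Fourier image is the familiar 1s form-factor with q rescaled.

```python
def elastic_form_factor_1s(q, a=1.0, lam=1.0 / 1836.15):
    """Sketch: elastic form-factor of the positive 1s "cloud" of Hydrogen.

    The nucleus sits at -(m_e/M) r relative to the CI, so its density is the
    1s density |psi_100|^2 with r rescaled by lam = m_e/M, and its Fourier
    image is the textbook 1s form-factor with q -> lam*q:
        F(q) = 1 / (1 + (lam*q*a/2)^2)^2,   a = Bohr radius (here a = 1).
    """
    x = lam * q * a / 2.0
    return 1.0 / (1.0 + x * x) ** 2

# The positive cloud looks point-like (F ~ 1) until q reaches ~ 2*M/(m_e*a):
print(elastic_form_factor_1s(10.0))    # still very close to 1
print(elastic_form_factor_1s(1.0e4))   # noticeably below 1
```

The decay scale of this form-factor is M/m_e times larger in q than for the electron cloud – the numerical statement behind “the positive cloud is small, but finite”.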

In Fig. 1 one can see cross-sections (cuts) of the squared atomic (Hydrogen) wave functions describing the relative electron-nucleus motion in the Hydrogen atom or a Hydrogen-like ion. (A nice applet to visualize and rotate 3D and 2D images of Hydrogen configurations with your mouse is here.) Strictly speaking, they are the relative-distance probabilities, although they are often erroneously called “negative charge clouds”:

Fig. 1. Probability density 2D plots (electron-nucleus relative distance probability).

I say “erroneously” because in absolute coordinates quite similar “positive charge clouds” exist at much shorter distances due to the nucleus motion around the atomic center of inertia (CI). The nucleus does not stay at the atomic CI, but moves around it. So the charge density pictures “seen” by a fast projectile include the positive clouds too. This fact is contained in formulas (1)–(3), both charge clouds being expressed via the same atomic wave function \psi_{nlm}! If we zoom in, we will see a picture similar to Fig. 1, since both cloud densities are determined by the same \psi_{nlm}, only with rescaled arguments.

Fig. 2 represents qualitatively such a picture for a particular state of a Hydrogen (or a Hydrogen-like ion).

Fig. 2. Qualitative image of atomic charge density, 2D plot of \rho(r) for the state |3,2,2>.


The two dots in the middle of Fig. 2 are the “positive charge clouds”. In other words, Fig. 1, without a scale indicated in it, describes the negative charge density 2D plots at “atomic distances” equally well as it does the positive charge density at much shorter distances. This is the true, elastic (non-destructive) physics of short distances. Unexpected?

Another beautiful qualitative picture, this time for the state |4,3,1>:

Fig. 3. Qualitative image of atomic charge density, 2D plot of \rho(r) for the state |4,3,1>.

I would like to underline that the nucleus bound in an atom is not seen as point-like in elastic processes. Its charge and probability are quantum-mechanically smeared. And the most surprising thing is the dependence of the smear size on the electron configuration: the farther the electron “orbit”, the larger the positive cloud. It looks counter-intuitive at first, but it is so! The atomic form-factors (2) and (3) are just the Fourier images of the cloud densities. The corresponding elastic cross section (1) can also be written via an effective projectile-atom potential, which is softer than the Coulomb “singularity” at short distances [1]. By the way, in a solid the positive charge “clouds” are rather large, comparable with the lattice step. So in the elastic picture the positive charge in a solid is distributed as in a “plum pudding” model! The same holds in molecules – the positive charge clouds are very large, but nobody pictures them either!

I can even mention the fact that each “sub-cloud” contains a fractional charge 😉, which is never observed as such outside the atom, separately, like quarks. And the cloud configuration is related to symmetry groups. Thus, let me call those clouds with fractional charges “shmarks“. They are “visible” in a picture that is elastic with respect to atomic excitations, but deeply inelastic (inclusive) with respect to soft photon emissions.

Smearing is always the case for bound states. But one should not confuse how the atomic electron “sees” the nucleus with how a fast projectile sees the atom (Figs. 2, 3). The atomic electron does not see the positive clouds pictured in Fig. 2 and Fig. 3, because the latter are a result of mutual (not independent) motion.

In scattering experiments, however, it is extremely difficult to observe these pictures because of the difficulties in preparing the target atoms in definite |n,l,m> states and in selecting the truly elastic events. Normally it is impossible to distinguish (resolve in energy) an elastically scattered projectile from an inelastically scattered one (the energy-loss difference is relatively small). So experimentally, when all projectiles scattered into a given solid angle dΩ are counted, one observes an inclusive cross section. It is the inclusive cross section that corresponds to the Rutherford formula! So the point-like “free” nucleus in the Rutherford cross section is an inclusive (rich) picture – a sum of the elastic and all inelastic atomic cross sections.
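The closure (completeness) argument behind this inclusive picture can be written in one line. A sketch for a Hydrogen-like atom, under my assumption that the form-factors of (2)–(3) are matrix elements f_n^{\,n'}(\mathbf q)=\langle n'|e^{i\lambda\mathbf q\cdot\mathbf r}|n\rangle of a single unitary phase operator (\lambda being the small nucleus-to-CI scale factor):

```latex
\sum_{n'}\left|f_n^{\,n'}(\mathbf q)\right|^2
  =\sum_{n'}\langle n|e^{-i\lambda\mathbf q\cdot\mathbf r}|n'\rangle
            \langle n'|e^{i\lambda\mathbf q\cdot\mathbf r}|n\rangle
  =\langle n|n\rangle = 1.
```

Summing the elastic and all inelastic channels thus removes any form-factor suppression and leaves the Rutherford-like, point-like cross section.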

Similarly, the “free point-like electron” in QED is not really free, but is permanently coupled to the quantized electromagnetic field. So its charge is also smeared quantum-mechanically, and its smear size is state-dependent. This explains the elastic (non-destructive) physics at short distances: there is no Coulomb singularity, as a matter of fact. But the sum of the elastic and all inelastic cross sections results in a Rutherford-like cross section, as if the electron were point-like, free, and “long-handed” (i.e., with a long-range Coulomb elastic potential for mechanical problems).


(By the way, in this approximation Positronium looks like a rather neutral system: its positive and negative clouds coincide, so d\sigma_{elastic}\approx 0 due to cancellation of the positive and negative charge form-factors. Inelastic channels are open instead: d\sigma_n ^{n^{\prime}}>0.)

The Born approximation is essential here: if you scatter too slow a particle from an atom, the atom will get polarized and its clouds will change essentially in the course of the interaction.

Some more advanced results are given here.


[1] V. Kalitvianski, Atom as a “Dressed” Nucleus, CEJP, V. 7, N. 1, pp. 1-11 (2009), http://arxiv.org/abs/0806.2635 .

Problem of infinitely big corrections

May 22, 2009

In this web log I would like to share my findings on the reformulation of problems with big (infinite or divergent) perturbative corrections, and to discuss them. (The blog is regularly updated, so do not pay attention to the date – it is just the starting date.)

I myself encountered big (divergent) analytical perturbative corrections in practice long ago, at the beginning of my scientific career (1981–1982). It was a simple and exactly solvable Sturm-Liouville problem, with transcendental eigenvalue equations solvable exactly only numerically. The analytical solutions (series) were divergent. At first I thought to develop a renormalization prescription to cope with the “bad” perturbative expansion, as I had been taught at the University, but soon I managed to reformulate the whole problem by choosing a better initial approximation via a better choice of variables (a variable change, see [1]). Since then I have been persuaded that we have to seek a physically and mathematically better initial approximation whenever the perturbative corrections are too big (in particular, infinite).

In fact, there may be at least two types of difficulties here:

1) A particular physical and mathematical problem has exact, physically meaningful solutions, but the perturbation theory (PT) corrections are divergent, as in the Sturm-Liouville problem considered in my articles. Then a better choice of the initial approximation may improve the behaviour of the PT series. No renormalizations are necessary here (although they are possible, see Appendix 5 in [1]).

2) A particular physical and mathematical problem has no physically meaningful solutions and the PT corrections are divergent, as in theories with self-action. In this case no formal variable change can help – what is needed is a radical reformulation of the theory (new physical equations).
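Case 1 can be illustrated with an elementary example of my own choosing (not from the articles): computing \ln 2 from the slowly convergent expansion of \ln(1+x) at x=1 versus a variable change that turns it into a fast series for the very same number.

```python
import math

# "Bad" formulation: ln(2) = 1 - 1/2 + 1/3 - ... converges painfully slowly.
slow = sum((-1) ** (k + 1) / k for k in range(1, 11))             # 10 terms

# Variable change: write 2 = (1+y)/(1-y) with y = 1/3, so that
# ln(2) = 2*(y + y^3/3 + y^5/5 + ...) - rapid convergence.
y = 1.0 / 3.0
fast = 2 * sum(y ** (2 * k + 1) / (2 * k + 1) for k in range(4))  # 4 terms

print(abs(slow - math.log(2)), abs(fast - math.log(2)))  # ~5e-2 vs ~1e-5
```

Same exact answer, two formulations: the reformulated series needs fewer terms and has tiny corrections, which is exactly the role a better initial approximation plays in case 1.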

In about 1985, considering non-relativistic scattering of charged projectiles from atoms, I derived the positive-charge atomic form-factors f_{nn^\prime}(\vec{q}), surprisingly unknown to the wide public (the English publication is [2]). These form-factors correctly describe the physics of elastic, inelastic, and inclusive scattering to large angles. Briefly, according to my results, scattering from an atom with a very large momentum transfer is inelastic rather than elastic (Rutherford). All textbooks describe it in a wrong way – they obtain an elastic cross section by erroneously neglecting an essential (“coupling”) term.

This physics is quite analogous to that of QED with its soft radiation, which in reality accompanies any scattering (an inelastic channel as well), but which is not obtained in the first Born approximation of the theory. QED does not obtain the soft radiation because the quantized field is decoupled from the charge in the initial approximation. The solution for the coupled system (charge + field oscillators) is not known. In my “atomic” case the corresponding “coupled” solution is formally known and unambiguous, at least conceptually, and this helped me construct a better initial approximation in QED – by a physical ansatz – so that I now obtain the soft radiation automatically.

Let me underline here that the QFT Hamiltonians are guessed. And the “standard guess” includes a self-action term that first appeared in H. Lorentz’s works.

The self-action idea was supposed to preserve the energy-momentum conservation laws in the point-like electron dynamics, but it failed – it led to an infinite correction to the electron mass and to “runaway” exact solutions after discarding the infinity (after the mass “renormalization”). In other words, the self-action ansatz in a point-like charge model is just wrong. Many physicists have tried to resolve this problem – to advance new equations with new physics: M. Born, L. Infeld, P. Dirac, R. Feynman, and many, many others. As I said, in this case no variable change can help – it is a reformulation of the theory (of its equations) that is needed, and that is what has been sought by researchers.

I personally found that the energy-momentum conservation laws can be preserved in a different, more physical way, if one considers the electron and the electromagnetic field as features of one compound system: an intrinsically coupled charge and field. A physical and mathematical hint of this coupling is the following: as soon as the charge acceleration excites the field oscillators, the charge is a part of these oscillators. Then the external force work splits into two parts – accelerating the center of inertia of the compound system and exciting its “internal” degrees of freedom (oscillators). So I propose to start from a different theory formulation – without self-action, but with another coupling mechanism. This should be done non-perturbatively, from the very beginning, just by constructing a better, more physical initial Hamiltonian. Here my understanding corresponds to that of P. Dirac, who insisted on searching for new physical ideas and new Hamiltonians (see, for example, The Inadequacies of Quantum Field Theory by P. Dirac, in Reminiscences about a Great Physicist, ed. B. Kursunoglu and E.P. Wigner, Cambridge: Univ. Press, 1987, pp. 194-198). In the “mainstream” theories it is the renormalizations that fulfil this “dirty job” perturbatively – they discard the unnecessary self-action contributions to the fundamental constants at each PT order. Renormalizations are in fact a transition to another, different result, i.e., to the perturbative solution of different, unknown equations. Recently I found a similar explicit statement by P. Dirac in his “The Requirements of Fundamental Physical Theory”, Europ. J. Phys. 1984, V. 5, pp. 65-67 (Lindau Lecture of 1982). Being done perturbatively, such a transition is not quite visible. Usually everything is presented as constant redefinitions within the same theory. As a result, it is not clear at all to what formulation without self-action the renormalized solutions correspond, or whether they are physical at all.
A very simplified analysis of the renormalization “anatomy” in its “working” in an exactly solvable problem is presented in [3] (see also Transparent_Renormalization_1.pdf).

In this web log, in order to demonstrate all this, I am going to present flawless and transparent examples rather than hand-waving. References to the available publications are the following (they are English translations and adaptations of my Russian publications):

[1] “On Perturbation theory for the Sturm-Liouville Problem with Variable Coefficients”, http://arxiv.org/abs/0906.3504.

[2] “Atom as a “Dressed” Nucleus”, http://arxiv.org/abs/0806.2635
(invited and published in CEJP, V. 7, N. 1, pp. 1-11, (2009), http://www.springerlink.com/content/h3414375681x8635/?p=309428ad758845479b8aeb522c6adfdd&pi=0), and

[3] “Reformulation instead of Renormalizations” (an APPENDIX recently added), http://arxiv.org/abs/0811.4416.

[4] “A Toy Model of Renormalization and Reformulation”, http://arxiv.org/abs/1110.3702 (published in Open Access in International Journal of Physics http://pubs.sciepub.com/ijp/1/4/2/index.html )

[5] “On integrating out short-distance physics”, http://arxiv.org/abs/1409.8326.

With time I am going to develop and improve them and to add new examples to this blog.

I have been repeatedly told that my style of writing is too absolutist and imperfect anyway. I apologize for that; it is not my goal to offend anyone. I do not consider the people advocating self-action and renormalizations stupid or evil. I consider them “trapped” and innocent. My expositions, made simple on purpose, are written just to show when and how we all got caught in this trap. This subject has turned out to be extremely tricky for researchers, and the only known “resort” has been the “renormalization prescription” for far too long. Fortunately, now there is another physical and mathematical solution, and I try to advance it in my works.

First of all it is, of course, a new physical insight that makes it possible to reformulate physical problems in micro-physics. It “contradicts” the very idea of “elementary” (in the true sense!) particles. That is why it has been hard for fundamental physicists to figure it out – the mainstream development in micro-physics is based on attempts to deal with “elementary”, independent, separate particles. This idea has turned out to block the right insight. On the other hand, quasi-particle ideas and solutions are widely used in many-body problems. Agreed, if some particles interact, they can form compound (non-elementary) systems. And some compound systems can never be “disassembled”, unlike bricks in a wall; they are “welded” by nature rather than made of “separable” bricks. In a compound system the observable variables are those of quasi-particles [3]. So the electron and the quantized electromagnetic field, always coupled together, form a compound system – I call it an electronium. The photons in it remain photons, the electron remains the electron; what is different is the way they are coupled in the electronium. The electron is not free any more, but moves in the electronium around the electronium center of inertia, somewhat similarly to the nucleus motion in an atom [2] (the nuclei in atoms are not free).

Indeed, it is known that the charge-field interaction cannot be “switched off”, even “adiabatically”. The notion of electronium implements this intrinsic property of the charge by construction. The photons are just excited states of the electronium – quasi-particles describing the “relative” or “internal” motion of this compound system [2, 3]. The electron (a charge) is a part of the oscillators and is the application point of the external force. In the frame of such a compound system the energy-momentum conservation laws hold without the electron’s “self-action”. That is why no corrections to the mass (= rest energy) and charge (= coupling constant between the “particle” and “wave” subsystems) arise in my approach.

The true understanding of the electronium is only possible in Quantum Mechanics. It is based on the notion of a charge form-factor, which describes the charge “cloud” in a bound state. It is practically unknown, but true, that the positive (nucleus) electric charge in an atom is quantum-mechanically smeared, just like the negative (electron) charge [3], only in a smaller volume. It is also described with an atomic (positive-charge or “second”) form-factor, so the positive charge in an atom is not “point-like”. The positive charge “cloud” in atoms is small, but finite. It gives a natural “cutoff” or regularization factor in atomic calculations, just because the electron-nucleus coupling is taken into account exactly rather than perturbatively.

Similarly, the electron charge in the electronium is quantum-mechanically smeared. This gives a correct physical and mathematical description of quantum electrodynamics – emission, absorption, scattering, bound states, and all that – without infinities, since the electronium takes the charge-field coupling into account exactly, by construction. Thinking of the electron as a free point-like particle is not correct, since the point-like free “elementary particle” appears as the inclusive, secondary picture, not a fundamental one (see [2] for details). The point-like electron “emerges” from this theory as the inclusive, classical or average picture.

Any mathematician knows that the better the initial approximation in a Taylor series, the smaller the corrections to it. (“Better” here means closer to the exact function.) So the problem of “big” corrections is often a problem of a “bad” choice of the initial approximation in an iterative procedure. This is case 1.
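The point is trivial to verify. A sketch with exp(x) (an arbitrary example of mine): truncating the Taylor series at the same order, the expansion about a point close to the target beats the expansion about a distant one by orders of magnitude.

```python
import math

def taylor_exp(x, x0, n):
    """Partial Taylor sum of exp(x) about the point x0, n terms."""
    return sum(math.exp(x0) * (x - x0) ** k / math.factorial(k) for k in range(n))

x = 2.0
err_far = abs(math.exp(x) - taylor_exp(x, 0.0, 4))   # "bad" initial approximation
err_near = abs(math.exp(x) - taylor_exp(x, 1.8, 4))  # "good" initial approximation
print(err_far, err_near)
```

With the same number of terms, the error shrinks drastically once the expansion point is close to the exact answer – which is all that is meant here by choosing a better initial approximation.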

This holds in theoretical physics as well as in mathematics – the problems are formulated as mathematical problems describing a given physical situation. Theorists choose the total Hamiltonians and the initial approximations following their ideas about physical reality. Unfortunately, one can easily obtain case 2, where the very formulation is unphysical and the divergences just show it. I consider the point-like electron model, the free electromagnetic field, and the “self-action” ansatz (by H. Lorentz) to be the worst ones, although explainable historically. It failed as a physical model (corrections to the mass, runaway solutions). Worse, it has given a bad example to follow – the mass renormalization and the perturbative “treatment” of the non-physical remainder. The notion of an “infinite bare” mass and an “infinite mass counter-term” is the pinnacle of “bad” physics. As long as we follow this flawed approach, we will not advance in the physical description of many phenomena. This is what we see nowadays.

Fortunately, the theory can be reformulated in quite physical terms. The only sacrifice to make along the way is the idea of the “elementariness” of the electron, in the sense of its being “free” of the electromagnetic field and just “point-like” in reality.

My research is not finished yet – I am quite busy with other things at my job. I do not hold an academic position; on the contrary, I am on subcontract works implying no freedom and strict timing for each subcontract. As soon as I find a grant or a position (or at least a part-time position) allowing me to devote myself to the relativistic calculations, I will carry out the Lamb shift and anomalous magnetic moment calculations at higher orders. If you hold a post in science with sufficient responsibilities, you may take the initiative to make my research possible. I cannot do everything on my own, and the resistance of the renormalizators is very high. If you are an extremely rich person, consider sponsoring my research via my PayPal account (all you need for that is my e-mail address).

Any constructive proposals/discussions/questions are welcome.

Vladimir Kalitvianski. vladimir.kalitvianski@wanadoo.fr


P.S. Funny video of coffee cup experiments at work. You may think it’s a telekinesis, but it’s not:
