“*I think the bottom line with my questions is that I fully accept that divergent series occur in physics all the time, and quite clearly they contain information that we can extract, but I would like to understand more to what degree can we trust those results.*”

And there were some answers, including mine. Among other things, I mentioned a “constructive way” of building asymptotic series, which would be useful in practice. As an example, I considered a toy function . A direct summation of its Taylor series is useless because it diverges from the exact function value at any finite . This is so not only for cases of fast-“growing” coefficients, but also for regular (convergent) Taylor series truncated at some finite order, when we try to extrapolate the truncated series to finite (large) values of . A truncated series “grows” as its highest power of , but the expanded function may be finite and bounded, so the truncated series becomes inaccurate.

Thinking this fact over, in about 1981–1982, I decided that the difficulty with extrapolation to finite lay in expanding a finite and slowly changing function in powers of fast-growing functions like . “*Why not expand such a slow function in powers of slowly changing functions?*”, thought I, for example: , where at small , but for finite .
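The gain from re-expanding in a bounded variable can be illustrated with a simple example of my own choosing (not the toy function of the original figures): expanding ln(1+x) directly in powers of x gives a series that fails for x > 1, while the same function expanded in powers of the bounded variable y = x/(1+x) < 1 converges for every x > 0, since ln(1+x) = −ln(1−y).

```python
import math

def partial_sum_x(x, n_terms):
    # Truncated Taylor series of ln(1+x) in powers of x:
    # ln(1+x) = sum_{k>=1} (-1)^(k+1) x^k / k, radius of convergence 1.
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n_terms + 1))

def partial_sum_y(x, n_terms):
    # The same function expanded in the bounded variable y = x/(1+x):
    # ln(1+x) = -ln(1-y) = sum_{k>=1} y^k / k, convergent for all x > 0.
    y = x / (1 + x)
    return sum(y ** k / k for k in range(1, n_terms + 1))

x = 5.0
exact = math.log(1 + x)        # ln(6) ≈ 1.7918
bad = partial_sum_x(x, 10)     # hopeless: terms grow like 5^k / k
good = partial_sum_y(x, 10)    # within a few percent of ln(6)
print(exact, bad, good)
```

Ten terms of the x-series miss ln 6 by hundreds of thousands, while ten terms of the y-series land within a few percent: exactly the “slow variable” effect described above.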

To give some idea of, and demonstrate the fruitfulness of, my “constructive” approach, I considered the following functions instead of : , , and with an adjustable coefficient . The corresponding figures are the following:

Fig. 1. Expansion in powers of .

Fig. 2. Expansion in powers of .

Fig. 3. Expansion in powers of .

The smaller the terms in the new series, the better the approximation. Following this banal observation, I adjusted the coefficient in to minimize the coefficient at (note that the axis is made longer in the next three figures):

Fig. 4. Expansion in powers of .

We see that is now approximated much better than with its truncated asymptotic series in powers of (Fig. 1).

The same idea applied to the function works well too:

Fig. 5. Expansion in powers of .

Finally, the ground state energy of the anharmonic oscillator in QM (anharmonicity ) also has a divergent series: , which can be transformed into a series in powers of . It gives a good extrapolation of (error % within , Fig. 6), unlike the original series:

Fig. 6. The ground state energy of 1D anharmonic oscillator.
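The divergence of the anharmonic-oscillator perturbation series can be checked numerically. The sketch below is my own illustration, in units ħ = m = ω = 1 for H = p²/2 + x²/2 + λx⁴ (not necessarily the convention of Fig. 6): it diagonalizes H in a truncated harmonic-oscillator basis and compares the result at λ = 1 with the partial sums of the standard Rayleigh–Schrödinger series E(λ) ≈ 1/2 + (3/4)λ − (21/8)λ² + (333/16)λ³, whose terms grow instead of decreasing.

```python
import numpy as np

def ground_energy(lam, N=150):
    # Position operator x = (a + a†)/sqrt(2) in the oscillator basis.
    n = np.arange(N - 1)
    X = np.zeros((N, N))
    off = np.sqrt((n + 1) / 2.0)
    X[n, n + 1] = off
    X[n + 1, n] = off
    # H = H0 + lam * x^4, truncated to N basis states and diagonalized.
    H = np.diag(np.arange(N) + 0.5) + lam * np.linalg.matrix_power(X, 4)
    return np.linalg.eigvalsh(H)[0]

lam = 1.0
E_num = ground_energy(lam)              # numerically converged ground state
coeffs = [0.5, 0.75, -21 / 8, 333 / 16] # low-order perturbative coefficients
partial = np.cumsum([c * lam ** k for k, c in enumerate(coeffs)])
print(E_num)    # the true value sits near 0.8
print(partial)  # 0.5, 1.25, -1.375, 19.4375 – the series runs away
```

Already at third order the partial sums oscillate wildly around the true value instead of approaching it, which is the behavior a transformed (re-expanded) series is meant to cure.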

Thus, my idea was not so stupid, as it allowed extrapolating the asymptotic (divergent) series into the region of large with decent accuracy.

P.S. In my practice I also encountered a series (a convergent one) whose convergence I managed to improve by partially summing some of its terms into a finite function , so that the resulting series converged even better (Chapters 3 and 4). It is somewhat similar to the summation of soft contributions in QED, if you like.

The moral is the following: if you want to have a “convergent” series, then build it yourself and enjoy.

Hank Campbell, the founder of Science 2.0, wrote two articles about science in danger:

https://www.academia.edu/10692499/The_Corruption_of_Peer_Review_Is_Harming_Scientific_Credibility

The problem is simple. The author first considers a -like potential and calculates the scattering amplitudes (reflection/transmission amplitudes). In particular, he obtains the “low-energy” formula: . It is just a regular calculation. Everything is physically reasonable and no renormalization is necessary so far. In particular, when the potential coefficient tends to infinity, the transmission amplitude vanishes and the incident wave is completely reflected. This is comprehensible in the case of a positive , but in “wave” mechanics it holds for a negative value of as well.
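Since the post's own formula did not survive formatting, here is the standard textbook result this refers to, for a delta potential V(x) = α δ(x) in units ħ = m = 1: the transmission probability is |t|² = 1/(1 + (α/k)²). It depends only on α², so it vanishes as |α| → ∞ for either sign of α, exactly the point made above.

```python
def transmission(alpha, k):
    # |t|^2 for V(x) = alpha * delta(x), units hbar = m = 1.
    # Standard textbook result: |t|^2 = 1 / (1 + (alpha/k)^2).
    return 1.0 / (1.0 + (alpha / k) ** 2)

k = 1.0
for a in (1.0, 10.0, 100.0):
    # An attractive (-a) and a repulsive (+a) delta reflect identically,
    # and both become perfectly reflecting as |a| grows.
    print(a, transmission(a, k), transmission(-a, k))
```
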

After that, the author considers another interaction potential . This potential gives “undesirable” results. Replacing with a “regularized” version of the kind , the author obtains a “regularized” amplitude . Again, so far so good. When , the transmission amplitude tends to zero too. This is qualitatively comprehensible because each grows in absolute value, as in the case of in the problem considered just above. is a highly reflecting potential. In effect, it separates the region from the region , like an infinite barrier of finite width.

But the author does not like this result. He wants a “low-energy theorem” to be fulfilled for this potential too. He wants a nonzero transmission amplitude! I do not know why he wants this, but I suspect that in “realistic” cases we use interactions like because we do not know how to write down something like . It is also possible that in experiment one observes a nonzero transmission amplitude and the renormalization of “works”. So, his desire to obtain a physical result from an unphysical potential is the main “human phenomenon” happening in this domain. We want right results from a wrong theory. We **require** them from it! (The second theory is wrong because of a wrong guess of the potential.)

Of course, a wrong theory does not give you the right results, whatever spells one pronounces over it. So he takes the initiative and **replaces** the wrong result with a right one: . Comparing it with , he concludes that they are equivalent if . The author denotes as and calls it a “phenomenological parameter” to be compared with experiment via the famous formula . After finding it from experimental data, the author says that a theory with describes the experimental data.

Is his reasoning convincing to you as a way of doing physics?

If one is obliged to manipulate the calculation results while saying that is “not observable”, I wonder why and for what reason one then proposes such a and insists on its correctness and uniqueness. Because it is “relativistic and gauge invariant”? Because after renormalization it “works”? And what about physics?

In fact, the renormalized result belongs to another theory (a theory with another potential). Then why not find that theory from physical reasoning and use instead of ? This is what I call a theory reformulation. Am I not reasonable?

**Phenomena to describe**

Let us consider the two-electron helium atom in the following state: one electron is in the “ground” state and the other is in a high orbit. The total wave function of this system, depending on the absolute coordinates, is conveniently presented as a product of a plane wave describing the atomic center of mass and a wave function of the relative, or internal, collective motion of the constituents, where and are the electron coordinates relative to the nucleus (see Fig. 1).

**Figure 1.** Coordinates in question.

Normally, this wave function is still a complicated thing and the coordinates and are not separated (the interacting constituents are in mixed states). What can be separated in are the normal (independent) modes of the collective motion (or “quasi-particles”). Normally it is their properties (proper frequencies, for example) that are observed.

However, in the case of one highly excited electron (), the wave function of the internal motion can, for our numerical estimates and qualitative analysis, be quite accurately approximated by a product of two hydrogen-like wave functions, where is a wave function of the ion () and is a wave function of hydrogen in a highly excited state ().

The system is at rest as a whole and serves as a target for a fast charged projectile. I want to consider large-angle scattering, i.e., scattering from the atomic nucleus rather than from the atomic electrons. The projectile–nucleus interaction is expressed via the “collective” coordinates thanks to the relationship . I take a non-relativistic proton with as a projectile, and I will consider such transferred momentum values as are insufficient to excite the inner electron levels by “hitting” the nucleus. Below I will make these conditions precise. Thus, for the outer electron the proton is sufficiently fast to be reasonably treated by perturbation theory in the first Born approximation, while for the inner electron the proton scattering is such that it cannot cause transitions. This two-electron system will model a target with soft and hard excitations.

Now, let us look at the Born amplitude of scattering from such a target. The general formula for the cross section is the following (all notations are from [1]):

The usual atomic form-factor (2) describes scattering from the atomic electrons, and it becomes relatively small for large scattering angles . This is so because, roughly speaking, the atomic electrons are light compared to the heavy projectile and cannot cause large-angle scattering for a kinematic reason. I can consider scattering angles larger than those determined by the direct projectile–electron interactions (), or, even better, I may exclude the direct projectile–electron interactions in order not to involve in the calculations any more. Then there is no “screening” of the projectile by the atomic electrons, nor atomic excitations due to the direct projectile–electron interaction, at any scattering angle.

Let us analyze the second atomic form-factor (3) in the elastic channel. With our assumptions on the wave function, it can be easily calculated if the corresponding wave functions are injected in (3):

It factorizes into two Hydrogen-like form-factors:

Form-factor describes the quantum mechanical smearing of the nucleus charge (a “positive charge cloud”) due to the nucleus coupling to the first atomic electron. This form-factor may be close to unity (the smearing may not be “visible” because of its small size ). Form-factor describes the quantum mechanical smearing of the nucleus charge (another “positive charge cloud”) due to the nucleus coupling to the second atomic electron. In our conditions is rather small because the corresponding smearing size is much larger. In our problem setup the projectile “probes” these positive charge clouds and does not interact directly with the electrons.
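A toy numerical illustration of this contrast (my own, not from the original figures): assume each “positive charge cloud” has a hydrogen-like 1s elastic form-factor F(q) = [1 + (q a / 2)²]⁻², with a small effective radius a₁ for the inner cloud and a much larger a₂ for the outer cloud (atomic units; the values below are chosen only for illustration). At the same transferred momentum q the first factor stays near unity while the second is already tiny.

```python
def formfactor_1s(q, a):
    # Hydrogen-like 1s elastic form-factor: the Fourier transform of a
    # 1s-type charge density with characteristic radius a.
    return 1.0 / (1.0 + (q * a / 2.0) ** 2) ** 2

a_inner = 0.5    # small cloud (illustrative value, ~a_B/Z for Z = 2)
a_outer = 25.0   # large cloud (illustrative value, ~n^2 a_B for n = 5)
q = 0.2          # transferred momentum, atomic units

print(formfactor_1s(q, a_inner))  # close to 1: small cloud is "invisible"
print(formfactor_1s(q, a_outer))  # << 1: the large soft cloud is resolved
```
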

Thus, the projectile may “see” a big “positive charge cloud” created by the motion of the atomic nucleus in its “high” orbit (i.e., by the motion of the ion thanks to the second electron, but with the full charge seen by the projectile), and at the same time it may not see the additional small positive cloud of the nucleus “rotating” also in the ground state of the ion. The complicated short-distance structure (the small cloud within the large one) is integrated out in (4) and results in an elastic form-factor tending to unity, as if this short-distance physics were absent. We can choose such a proton energy , such a scattering angle , and such an excited state that may be equal to unity even at the largest transferred momentum, i.e., at .

In order to see to what extent this is physically possible in our problem, let us analyze the “characteristic” angle for the inner electron state [1]. (Recall that .) is the angle at which the inelastic processes become relatively essential (the probability of not exciting the target “internal” states is , and that of exciting any “internal” state is described by the factor ):

Here stands instead of for the ion due to , and the factor 5 originates from the expression . So, for . Fig. 2 shows just such a case (the red line) together with the other form-factor – for a third excited state of the other electron (the blue line) – to demonstrate the strong impact of .

**Figure 2.** Helium form-factors and at .

We see that for scattering angles the form-factor becomes very close to unity (only the elastic channel is open for the inner electron state), whereas the form-factor may still be very small if . The latter form-factor describes a large and soft “positive charge cloud” in the elastic channel, and for inelastic scattering () it describes the soft target excitations energetically accessible when hitting the heavy nucleus.

The inner electron level excitations due to hitting the nucleus can be suppressed not only for , but also for any angle in the case of relatively small projectile velocities (Fig. 3).

**Figure 3.** Helium form-factors and at .

By the way, a light electron as a projectile does not see the additional small smearing even at because its energy is far too low (its de Broglie wavelength is too large for that). The incident electron should be rather relativistic to be able to probe such short-distance details [1].

Let us note that for small velocities the first Born approximation may become somewhat inaccurate: a “slow” projectile may “polarize” the atomic “core” (more exactly, the nucleus may have enough time to make several quick turns during the interaction), and this effect numerically influences the exact elastic cross section. Higher-order perturbative corrections of the Born series take care of this effect, but the short-distance physics will still not intervene in a harmful way in our calculations. Instead of simply dropping out (i.e., producing a unity factor in the cross section (1)), it will be taken into account (“integrated out”) more precisely, when necessary.

Hence, whatever the true internal structure is (the true high-energy physics, the true high-energy excitations), the projectile in our “two-electron” theory cannot actually probe it when it lacks energy. The soft excitations are accessible and the hard ones are not. This is comprehensible physically and is rather natural – the projectile, as a long wave, only sees large things. Small details are somehow averaged or integrated out. In our calculation, however, this “integrating out” (in fact, “taking into account”) of the short-distance physics occurs automatically rather than “by hand”. We do not introduce a cut-off and do not discard (absorb) harmful corrections in order to obtain something physical. We have no harmful corrections at all. This convinces me that it is possible to construct a physically reasonable QFT where no cut-off and no discarding are necessary.

The first Born approximation (3) in the elastic channel gives a “photo” of the atomic positive charge distribution, as if the atom were internally unperturbed during the scattering – a photo with a certain resolution, though.

Inelastic processes give possible final target states different from the initial one (different cloud configurations).

The fully inclusive cross section (i.e., the sum of the elastic and all inelastic ones) reduces to a great extent to the Rutherford scattering formula for a free, point-like target nucleus at rest (no clouds at all!) [1]. The inclusive picture is another kind of averaging over the whole variety of events, an averaging often encountered in experiments and resulting in a deceptive simplification. One has to keep this in mind, because usually it is not mentioned when speaking of short-distance physics, as if there were no difference between the elastic, inelastic, and inclusive pictures.

Increasing the projectile energy (decreasing its de Broglie wavelength) and increasing the scattering angles and the experimental resolution help reveal the short-distance physics in more detail. Doing so, we may discover high-energy excitations inaccessible at lower energies/angles. Thus, we may learn that our knowledge (for example, about the point-likeness of the core) was not really precise, “microscopic”.

**Discussion**

Above we did not encounter any mathematical difficulties. It was a banal calculation, as it should be in physics. We may therefore say that our theory is physically reasonable.

What makes our theory physically reasonable? The permanent interactions of the atomic constituents, taken into account exactly both via their wave function and via the relationships between their absolute and relative (or collective) coordinates (namely, involved in was expressed via and ). The rest was perturbation theory in this or that approximation. For scattering processes it calculated the occupation number evolutions – the transition probabilities between different target states. This is the ideal in describing scattering physics.

Now, let us imagine, for instance, that this “two-electron” theory of ours is a “Theory of Everything” (or a true “underlying theory”) unknown to us so far. The low-energy experiments outlined above would not reveal the “core” structure, but would present it as a point-like nucleus smeared only due to the second electron. Such experiments would then be well described by a simpler, “one-electron” theory – a theory of a hydrogen-like atom with and . The presence of the first electron would not be necessary in such a theory: the latter would work fine and without difficulties – it would reproduce the low-energy target excitations.

May we call the “one-electron” theory an effective one? Maybe. I prefer the term “incomplete” – it does not include or predict all target excitations existing in Nature, but it has no mathematical problems (catastrophes) as a model, even outside its domain of validity. The projectile energy (or a characteristic transferred momentum ) is not a “scale” in our theory in the Wilsonian sense.

Thus, the absence of the true physics of short distances in the “one-electron” theory does not make it ill-defined or mathematically failing. And this is so because the one-electron theory is also constructed correctly – what is known to be coupled permanently and determines the soft spectrum is already taken into account in it via the wave function and via the coordinate relationships. That is why, when people say that a given theory has mathematical problems “because not everything in it is taken into account”, I remain skeptical. I think the problem is in its erroneous formulation. It is a problem of formulation or modeling (see, for example, the unnecessary and harmful “electron self-induction effect” discussed in [2] and an equation coupling error discussed in [3]). And I do not believe that when everything else is taken into account, the difficulties will disappear automatically. Especially if the “new physics” is taken into account in the same way – erroneously. Instead of excuses, we need a correct formulation of incomplete theories at each level of our knowledge.

Now, let us consider a one-electron state in QED. According to the QED equations, “everything is permanently coupled with everything”; in particular, even a one-electron state, as a target, contains the possibility of exciting high-energy states, like creating hard photons and electron–positron pairs. It is certainly so in experiments, but the standard QED suffers from calculational difficulties (catastrophes) in obtaining them in a natural way because of its awkward formulation. A great deal of QED calculations consists in correcting its initial wrongness. That is why “guessing right equations” is still an important physical and mathematical task.

**Electronium and all that**

My electronium model [1] is an attempt to take into account the low-energy QED physics, as in the “one-electron” incomplete atomic model mentioned briefly above. The non-relativistic electronium model does not include all possible QED excitations, only soft photons; however, and this is important, it works fine in the low-energy region. Colliding two electroniums produces soft excitations (radiation) immediately, in the first Born approximation. (It looks like colliding two complex atoms – in the final state one naturally obtains excited atoms.) There is no ground for the infrared problem there, because the soft modes are taken into account “exactly” rather than “perturbatively”. A perturbative treatment of soft modes gives a divergent series due to the “strength” of the soft-mode contributions to the calculated probabilities [4]:

**Picture 4.** Extract from [4].

It is easy to understand this in the case of expanding our second form-factor in powers of the “small coupling parameter” in the exponential (3): . For the first electron (i.e., for the hard excitations) the term may be small (see Fig. 3), whereas for the second one it is rather large and diverges in the soft limit . In QED the hard and soft photon modes are treated perturbatively because the corresponding electron–field interaction is in fact written in the so-called “mixed variables” [5], and the corresponding series are similar to expansions of our inelastic form-factors in powers of .

By the way, the photons are those normal modes of the collective motion whose variables in the corresponding are separated.

How would I complete my electronium model, if given a chance? I would add all QED excitations in a similar way – I would add a product of the other possible “normal modes” to the soft photon wave function, and I would express the constituent electron coordinates via the center-of-mass and relative motion coordinates, as in the non-relativistic electronium or in the atom. Such a completion would work as well as my actual (primitive) electronium model, but it would produce the whole spectrum of possible QED excitations in a natural way. Of course, I have not done it yet (due to lack of funds), and it might be technically very difficult to do, but in principle such a reformulated QED model would be free from mathematical and conceptual difficulties *by construction*. Yes, it would still be an “incomplete” QFT, but no references to the absence of the other particles (excitations) existing in Nature would be necessary. I would not introduce a cut-off and running constants in order to get rid of the initial wrongness, as is done today in the framework of the Wilsonian RG exercise.

**Conclusions**

In a “complete” reformulated QFT (or “Theory of Everything”), excitations inaccessible at a given energy would not contribute (with some reservations). Roughly speaking, they would be integrated out (taken into account) automatically, as in my “two-electron” target model given above, reducing naturally to a unity factor.

But this property of “insensitivity to short-distance physics” does not exclusively belong to the “complete” reformulated QFT. “Incomplete” theories can also be formulated in such a way that this property holds. It means the short-distance physics present in an “incomplete theory”, though different from reality, cannot be and will not be technically harmful for calculations, as was eloquently demonstrated in this article. When the time arrives, the new high-energy excitations could be taken into account in the natural way described above, as a transition from a “one-electron” to a “two-electron” target model. I propose to think this way of constructing QFT over. I feel it is a promising direction for building physical theories.

**References**

[1] Kalitvianski V 2009 Atom as a “Dressed” Nucleus *Cent. Eur. J. Phys.* **7**(1) 1–11 (*Preprint* arXiv:0806.2635 [physics.atom-ph])

[2] Feynman R 1964 *The Feynman Lectures on Physics* vol. 2 (Reading, Massachusetts: Addison-Wesley Publishing Company, Inc.) pp 28-4–28-6

[3] Kalitvianski V 2013 A Toy Model of Renormalization and Reformulation *Int. J. Phys.* **1**(4) 84–93 (*Preprint* arXiv:1110.3702 [physics.gen-ph])

[4] Akhiezer A I, Berestetskii V B 1965 *Quantum Electrodynamics* (New York, USA: Interscience Publishers) p 413

[5] Kalitvianski V 2008 Reformulation Instead of Renormalization *Preprint* arXiv:0811.4416 [physics.gen-ph]

In his article G. ‘t Hooft mentions the skepticism with respect to renormalization, but he says that this skepticism is not justified.

I was reading this article to understand his way of thinking about renormalization. I thought it would contain something original, insightful, clarifying. After reading it, I understood that G. ‘t Hooft had nothing to say.

Indeed, what does he propose to convince me?

Let us consider his statement: “*Renormalization is a natural feature, and the fact that renormalization counter terms diverge in the ultraviolet is unavoidable*”. It is too strong to be true – an exaggeration without any proof. But probably G. ‘t Hooft had no other experience in his research career.

A “natural feature” of what or of whom? Let me be precise then: it may be unavoidable in a stupid theory, but it is unnatural even there. In a clever theory everything is all right by definition. In other words, everything is model-dependent. However, G. ‘t Hooft tries to create the impression that there may not be a clever theory, an impression that the present theory is good, ultimate and unique.

“*The fact that mass terms in the Lagrangian of a quantized field theory do not exactly correspond to the real masses of the physical particles it describes, and that the coupling constants do not exactly correspond to the scattering amplitudes, should not be surprising.*”

I personally, as an engineering physicist, am really surprised – I am used to equations with real, physical parameters. To what do those parameters correspond then?

“*The interactions among particles have the effect of modifying masses and coupling strengths*.” Here I am even more surprised! Who ordered this? I am used to the independence of masses/charges from interactions. Even in the relativistic case, the masses of the constituents are unchanged, and what depends on the interactions is the total mass, which is calculable. Now his interaction is reportedly such that it changes the masses and charges of the constituents, and this is OK. I used to think that masses/charges were characteristics of interactions, and now I read that in fact interactions modify interactions (or equations modify equations ;-)).

To convince me even more, G. ‘t Hooft says that this happens “*when the dynamical laws of continuous systems, such as the equations for fields in a multi-dimensional world, are subject to the rules of Quantum Mechanics*”, i.e., not in an everyday situation. What is so special about continuous systems, etc.? I, on the contrary, think that this happens every time a person is too self-confident and makes a stupidity, i.e., it may happen in everyday situations. You just have to try it if you do not believe me. Thus, when G. ‘t Hooft talks me into accepting perturbative corrections to the fundamental constants, I wonder whether he has checked his theory for stupidity (like the stupid self-induction effect) or not. I am afraid he has not. Meanwhile, the radiation reaction is different from the near-field reaction, so we make a mistake when we take the latter into account. This is not a desirable effect; that is why it is removed by hand anyway.

But let us admit he managed to talk me into accepting the naturalness of perturbative corrections to the fundamental constants. Now I read: “*that the infinite parts of these effects are somehow invisible*”. Here I am so surprised that I am screaming. Even a quiet animal would scream after these words. Because if they are invisible, why was he talking me into accepting them?

Yes, they are very visible, and yes, it is we who must make them invisible, and this is called renormalization. This is **our** feature. Thus, it is not “somehow”, but due to our active intervention in the calculation results. And it works! To tell the truth, here I agree. If I take the liberty to modify something for my convenience, it will work without fail, believe me. But it would be better and more honest to call those corrections “unnecessary” if we subtract them.

How does he justify this intervention of ours in our theory's results? He speaks of bare particles as if they existed. If the mass and charge terms do not correspond to physical particles, then they correspond to bare particles, and the whole Lagrangian is a Lagrangian of interacting bare particles. Congratulations, we have figured out bare particles by postulating their interactions! What an insight!

No, frankly, P. Dirac wrote his equations for physical particles and found that this interaction was wrong; that is why we have to remove the wrong part by the corresponding subtractions. There were no bare particles in his theory project or in experiments. We cannot pretend to have guessed a correct interaction of bare particles. If one is so insightful and super-powerful, then one should try to write down a correct interaction of physical particles – it is about time.

“*Confrontation with experimental results demonstrated without doubt that these calculations indeed reflect the real world. In spite of these successes, however, renormalization theory was greeted with considerable skepticism. Critics observed that ”the infinities are just being swept under the rug”. This obviously had to be wrong; all agreements with experimental observations, according to some, had to be accidental.*”

That’s a proof from a Nobelist! It cannot be an accident! G. ‘t Hooft cannot provide a more serious argument than that. In other words, he insists that in a very limited set of renormalizable theories our transformations of calculation results from wrong to right may be successful not by accident, but because this unavoidable-but-invisible stuff does exist in Nature. Then why not go farther? With the same success we can advance such a weird interaction that the corresponding bare particles will have a dick on the forehead to cancel its weirdness, and this shit will work, so what? Do they exist, those weird bare particles, in your opinion?

And he speaks of gauge invariance. Formerly it was a property of equations for physical particles, and now it has become a property of bare ones. Gauge invariance, relativistic invariance, locality, CPT, spin-statistics and all that are properties of bare particles, not of the real ones; let us face this truth if we take our theory seriously.

I much prefer the interaction with counter-terms. First of all, **it does not change the fundamental constants**. Next, it shows the imperfection of our “gauge” interaction – the counter-terms subtract the unnecessary contributions. The cutoff-dependence of the counter-terms is much more natural, and it shows that we are still unaware of the right interaction – we cannot write it down explicitly; at this stage of theory development we are still obliged to repair the calculation results perturbatively. In a clever theory the Lagrangian only contains the unknown variables, not the solutions, but presently the counter-terms contain solution properties, in particular the cutoff. The theory is still underdeveloped, that is clear.

No, this paper by G. ‘t Hooft is neither original nor accurate – that is my assessment.

**Why is it necessary?**

It is necessary for better understanding of the corresponding physics and for having better equations, since currently the equations are such that their **solutions** need modifications. This fact reflects a lack of physical understanding in constructing these equations.

**Why has it not been done before?**

In fact, many have tried, but none prevailed. And currently it is renormalizers (practitioners) who teach the subject, not theory developers, so they do everything to convince students to accept “bare particle” physics. In Classical Electrodynamics (CED) some teach that (the remainder after the mass renormalization) is a good radiation reaction term [1, 2] even though it leads to “false start” solutions; others, on the contrary, teach that is not applicable at “small times” and one must use instead [3]; but up to now no mechanical equation has been found that conserves the energy-momentum exactly and in a physical manner. We content ourselves with an approximate description. The Lorentz covariance and the Noether theorem did not help [4], [5]!
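The pathology referred to here can be seen in the standard Abraham–Lorentz equation m a = F_ext + m τ (da/dt): with no external force at all, any nonzero initial acceleration grows exponentially (the runaway solution). A minimal numerical sketch of my own, in units m = τ = 1:

```python
import math

def runaway_acceleration(a0, t, dt=1e-4):
    # Abraham-Lorentz equation with F_ext = 0, in units m = tau = 1:
    # da/dt = a, integrated with the explicit Euler method.
    a = a0
    for _ in range(int(t / dt)):
        a += a * dt
    return a

# A tiny initial acceleration at zero force blows up like exp(t):
a5 = runaway_acceleration(1e-6, 5.0)
print(a5, 1e-6 * math.exp(5.0))  # Euler result vs the exact exp(t) growth
```

The exact solution a(t) = a₀ exp(t/τ) shows that the radiation-reaction term, taken at face value, manufactures unbounded self-acceleration out of nothing, which is exactly why it has to be repaired by hand.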

Similarly in QED – although the equation set is different from that of CED, renormalization is still a crucial part of the calculations. In addition, the soft-mode contributions (absent in the first Born approximation) are obligatory for obtaining physically meaningful results. If one is obliged to sum up some contributions to all orders, this indicates a bad initial approximation used for the perturbation theory.

Many theory developers (founding fathers) were looking for better theory formulations. It turned out to be an extremely difficult problem, mainly due to prejudices implicitly involved in the theoretical constructions. Paul Dirac, a rare physicist who was not thinking of fame and money at all, never gave up. His motto – that a theory must be mathematically and physically sensible [6], and that for the sake of this we must search for better Hamiltonians, better formulations, a better description than the current one – is my motto too.

If you have read my blogs (this one, http://fishers-in-the-snow.blogspot.fr/ , http://vladimir-anski.livejournal.com/) and articles (more here), you may have an idea of what I mean by reformulation. If you like, my program can roughly be understood as both fulfilling the counter-term subtractions exactly:

and including some of these “good” (renormalized, to be exact) Lagrangian terms in a new initial approximation, i.e., figuratively speaking, representing:

The new “free” Lagrangian will contain soft modes and physical constants by construction. Then the “interaction term” will be different too:

so that no renormalization will be needed, and the soft diagram contributions will be taken into account automatically in the first Born approximation by construction, as in [9], [11]. The resulting perturbation theory series will resemble a usual Taylor series, with no necessity to cheat and modify its terms. This is an **unexplored possibility** of theory formulation, and it is what I would like to pursue.
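Schematically, this regrouping can be sketched as follows (my own shorthand, since the original formulas are not reproduced here; $\Delta\mathcal{L}$ denotes the part of the renormalized Lagrangian promoted into the zeroth approximation):

```latex
\mathcal{L} \;=\; \mathcal{L}_0 + \mathcal{L}_{\mathrm{int}} + \mathcal{L}_{\mathrm{CT}}
\;=\; \underbrace{\bigl(\mathcal{L}_0 + \Delta\mathcal{L}\bigr)}_{\tilde{\mathcal{L}}_0\ \text{(new free part)}}
\;+\; \underbrace{\bigl(\mathcal{L}_{\mathrm{int}} + \mathcal{L}_{\mathrm{CT}} - \Delta\mathcal{L}\bigr)}_{\tilde{\mathcal{L}}_{\mathrm{int}}\ \text{(new interaction)}}
```

The new free part then carries the physical constants and the soft modes, and the new, smaller interaction no longer generates the subtracted corrections.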

**What do I need?**

In order to pursue my research, **I need funds**. I believe that we can achieve a better description if we abandon some prejudices and employ physical reasoning instead of proceeding by blind analogy. I have already outlined possible directions in my articles [7-11]. But currently I am working for a private company, fulfilling subcontract studies, and it takes all my time and effort. This activity is far from my dream, though. I have to abandon it in order to concentrate on my own subject. I’ve got to break free!

Academia does not support this “reformulation approach” any more. I can only count on private funding. If you or your friends or friends of your friends are rich people, then create a fund for supporting my research, run it and we will make it possible.

I do not need a crazy amount like a Milner Prize, no! A regular salary of a theorist will suffice.

P.S. Et voilà, I became unemployed (27 April 2016). Sponsors, hurry up, I am getting older!

——————————————

[1] Sidney Coleman, *Classical Electron Theory from a Modern Standpoint*, http://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM2820.pdf

[2] Gilbert N. Plass, *Classical Electrodynamic Equations of Motion with Radiative Reaction*, Rev. Mod. Phys. V. **33**, 37 (1961), http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.33.37 or https://drive.google.com/file/d/0B4Db4rFq72mLcUN6bEhweTgyWkE/edit?usp=sharing

[3] V. L. Ginzburg, *Theoretical Physics and Astrophysics*, Pergamon Press (1979), http://www.amazon.com/Theoretical-Physics-Astrophysics-Monographs-Philosophy/dp/0080230679 , https://drive.google.com/file/d/0B4Db4rFq72mLWGhCTXVJLUU1WVk/edit?usp=sharing

[4] Feynman Lectures on Physics, Volume II, Chapter 28.

[5] L. Landau, E. Lifshitz, The Classical Theory of Fields, § 75, p. 205.

[6] Jagdish Mehra (editor), *The Physicist’s Conception of Nature*, (1973), https://drive.google.com/file/d/0B4Db4rFq72mLWnIyM1FSOGcxaDA/edit?usp=sharing

[7] *Reformulation instead of renormalization*, http://arxiv.org/abs/0811.4416

[8] *Atom as a “Dressed” Nucleus*, http://arxiv.org/abs/0806.2635

[9] *A toy model of Renormalization and Reformulation*, http://arxiv.org/abs/1110.3702

[10] *Unknown Physics of Short Distances*, https://www.academia.edu/370847/On_Probing_Small_Distances_in_Quantum_World

[11] *On integrating out short-distance physics*, Physics Journal, V. **1**. N. 3, pp. 331-342 (2015)


S. Weinberg wrote a paper “Living with infinities” devoted partially to the memory of Gunnar Källén. There he also outlined his personal view on the problem of renormalization. Good for him.


I just take his title and refer to a movie clip where some people live with divergences too. I rephrase L. J. Washington’s **sober** words:


“*It’s a condition of mental divergence: we find ourselves in the Wilsonian world, being a part of intellectual elite and subjugating infinities. But even though the renormalization ideology is totally convincing for us in every way, nevertheless it is actually a construct of our psyche. We are mentally divergent. In that we escape certain unnamed realities that plague our lives here. When we stop appealing to it, we’ll be well.*“


(Behind L. J. Washington someone resembling P. Dirac solves a puzzle.)


Let us take QED – the first QFT for which Nobel Prizes were given. Its Lagrangian is the following:
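For reference, the standard QED Lagrangian presumably meant here is, in conventional notation with bare mass $m_0$ and bare charge $e_0$:

```latex
\mathcal{L}_{\mathrm{QED}}
= \bar{\psi}\,\bigl(i\gamma^{\mu}\partial_{\mu} - m_0\bigr)\,\psi
\;-\; \tfrac{1}{4}\,F_{\mu\nu}F^{\mu\nu}
\;-\; e_0\,\bar{\psi}\gamma^{\mu}\psi\,A_{\mu},
\qquad
F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}
```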

It is relativistic and gauge-invariant because the bare particles are such. Parameters and are the bare particle mass and charge, and the term describes how bare particles interact. Of course, bare particles have spin and other quantum numbers.

You may wonder how we physicists know all that if the bare particles are non-observable (and why they interact if they are non-interacting particles)?

Good questions. Very intelligent! The answer is – due to our physical insights. You know, insight is the ability to see the invisible, to penetrate mentally into the unknown, to figure out everything correctly from small, rare, and distorted pieces of a whole picture. Factually we, from long distances (from low-energy experiments with physical particles), penetrated to the very end – to the point where bare particles live. Thus we insightfully nailed the bare particle properties and their interaction laws correctly despite their hiding from us.

And yes, the bare non-interacting particles do interact and even self-interact. It is they who permanently do this hard work. At first, naive glance these statements are inconsistent, but no. It is a kind of duality in physics. This duality is not much advertised because the bare particles are really modest bricks.

(It’s a joke without humor.)

“*We will be considered the generation that left behind unsolved such essential problems as the electron self-energy.*“

I think the essence is here, and avoiding it has created just a shaman’s practice where cheating and self-fooling are essential parts. To prove that, let me be more specific and consider the electron electromagnetic mass. This notion arose in Classical Electrodynamics (CED) well before the famous was derived [1], and it remains an unsolved problem even today (there are still publications on this subject). We must not confuse it with the electromagnetic *mass defect*, though, i.e., with a *calculable interaction energy*.

The electromagnetic mass can be thought of as a Coulomb energy of the electric field surrounding the electron when we calculate the total field energy. In other words, it is a consequence of the field concept. This part of the field energy is cutoff dependent and thus can take any value at your convenience. We are all familiar with the classical electron radius , but if we take into account the field energy of the electron magnetic moment too, we will obtain another radius, closer to the Compton length . Still, in nature there is no electron of the classical or any other radius. And normally this part of the field energy is entirely discarded, and what is left is an interaction energy of charges. Thus, when we calculate a *field energy*, the electromagnetic mass is just of no use.
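As a reminder of why this part of the field energy is cutoff dependent: integrating the Coulomb energy density of a point charge from a lower cutoff radius $a$ gives

```latex
U_{\mathrm{Coul}}
= \int_{a}^{\infty} \frac{E^{2}}{8\pi}\, d^{3}r
= \int_{a}^{\infty} \frac{1}{8\pi}\Bigl(\frac{e}{r^{2}}\Bigr)^{2} 4\pi r^{2}\, dr
= \frac{e^{2}}{2a}
\;\xrightarrow[a \to 0]{}\; \infty,
\qquad
m_{\mathrm{em}} = \frac{U_{\mathrm{Coul}}}{c^{2}} .
```

Requiring $m_{\mathrm{em}} \sim m_e$ fixes $a$ near the classical electron radius $e^{2}/m_e c^{2} \approx 2.8\times10^{-13}\ \mathrm{cm}$; a different cutoff gives a different “radius”, which is exactly the arbitrariness in question.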

Apart from the total field energy, the electromagnetic mass of a point-like charge enters the usual “mechanical” equation of a charge when we decide to insert the charge proper field into the charge equation of motion in the frame of a self-action ansatz. The latter is done for the sake of taking into account a weak radiation reaction force, which must provide the total energy conservation. The motivation – energy conservation – is understandable, but in a field approach with , we insert the entire field , not just the radiated field , into the mechanical equation. We do it by (a wrong) analogy with an external force . So, before this intervention of ours we have a good “mechanical” equation (I use a non-relativistic form)
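Presumably this is the ordinary Newton equation for a charge of physical mass $m$ in an external force (my reconstruction):

```latex
m\,\ddot{\mathbf{r}} = \mathbf{F}_{\mathrm{ext}} \qquad (1)
```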

which works almost fine (the near field, whatever it is, easily follows the charge according to Maxwell equations), and after our noble intervention it becomes
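Presumably the standard Abraham–Lorentz form, with the divergent electromagnetic-mass term and the jerk term (my reconstruction):

```latex
m\,\ddot{\mathbf{r}} = \mathbf{F}_{\mathrm{ext}}
\;-\; \delta m\,\ddot{\mathbf{r}}
\;+\; \frac{2e^{2}}{3c^{3}}\,\dddot{\mathbf{r}} \;+\; \dots
\qquad (2)
```

where $\delta m \propto e^{2}/ac^{2}$ diverges as the cutoff $a \to 0$.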

which does not work any more. The corresponding self-force term with makes it impossible for a charge to change its state of uniform motion . This is a self-induction force, an extremely strong one. It is an understandable “physical effect”, but first, it is not observed as infinite, and second, the self-induction force is not a radiation reaction force in any way, so our approach of describing the radiation influence via self-action is blatantly wrong. Albeit of the anticipated sign (and even when made finite and small), it does not help conserve the total energy. Microsoft Windows would say:

I.e., the term is not of the right functional dependence. Instead of recognizing this error, physicists started to search for a pretext to keep the self-action idea in place. They noticed that discarding the term “helps” (we will see later how it helps), but calling it honestly “discarding” makes fun of physicists. Discarding is not a calculation. Thus, another brilliant idea was advanced – the idea of a “bare” mass that “absorbs” (a “mechanism” later called mass renormalization). Tricky is Nature, but clever are physicists. In a recent historical paper Kerson Huang expresses the common attitude to it [2]:

“*One notices with great relief that the self‐mass can be absorbed into the physical mass in the equation of motion*“

and he writes down an equation, which experimentally follows from nowhere:
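Presumably this is the standard mass-renormalization step (my reconstruction):

```latex
\bigl(m_0 + \delta m\bigr)\,\ddot{\mathbf{r}}
= \mathbf{F}_{\mathrm{ext}} + \frac{2e^{2}}{3c^{3}}\,\dddot{\mathbf{r}},
\qquad
m_0 + \delta m \equiv m \qquad (3)
```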

It is here that the negative bare mass is introduced into physics, introduced exclusively with the purpose of subtracting the harmful electromagnetic mass. This introduction is not convincing to me. A negative mass makes a particle move to the left when the force pulls it to the right. We have never observed such silly behaviour (like that of a stupid goat) and we have never written the corresponding equations. We cannot pretend that (1) describes such a wrong particle in an external field but that adding its self-induction makes the equation right, as Kerson Huang does. It is the other way around: in order to make the wrong equation (2) closer to the original one (1), we just **discard** the electromagnetic mass whatever value it takes. Kerson Huang should have written honestly: “*One notices that the self‐mass ought to be omitted*”.

Likewise, those who refer to a hydrodynamics analogy present this silly speculation about arbitrary and as a typical calculation, a calculation like in hydrodynamics where everything is separately measurable, known, and physical. In CED it is not the case. And if the electromagnetic mass is already present in our phenomenological equation (1), the method of self-action takes it into account once more, which shows again that such an approach is self-inconsistent. You know, the self-induction of a wire is in fact a completely calculable physical phenomenon occurring with many *interacting* charges. Similarly, in plasma description we calculate *interactions* for dynamics. Interaction is a good concept, but self-action of an elementary particle is a bad idea. It describes no internal dynamics by definition.

If a bare particle is truly not observable, we cannot even establish an equation for it, and we cannot pretend that its equation is of the same form as the Newton equations for physical particles. That is why they say that the bare mass is not observable alone – it always comes in (3) together with the electromagnetic one: . But this is not true either: equation (1) contains the physical mass, and in addition, if the external force in (1) contains the omnipresent gravity force, say, for simplicity, the latter does not acquire any addendum when we add that self-induction force. In reality, we fight our own invention with the help of another one – , but too many people believe in both.

This is the real truth about the mass “renormalization” procedure. We ourselves introduce the self-mass into our equation and then we remove it. As nothing remains of it anywhere (the physical mass stays intact), I can safely say that there is no electromagnetic mass at all; that is my answer to this question (again, not to be confused with the mass defect due to interaction). (By the way, renormalization does not always work – there are many non-renormalizable theories where bad interaction terms not only spoil the original equation coefficients, but also introduce wrong “remainders”. Success of renormalization is based on lucky accidents, see my opus **here** or **here**. P. Dirac clearly called it a fluke.)

Those who insist on this “calculation” forget that then there are forces keeping the charge parts together, and these forces have their own “self-induction” and “radiation reaction” contributions. No, this model needs too many “unknowns”.

Here I naively wonder why not use, from the very beginning, just the radiated field instead of the total field to take into account the “radiation reaction”? Then they might never obtain the harmful jerk term , but they do not do it. They stick to the self-action patched with the “bare mass mechanism” and they hope that *the jerk* “remainder” of self-action will correctly describe the radiation reaction. Let us see.

So, after shamefully camouflaging the discarding of the silly , they are left with the jerk term called a “radiation reaction” force:
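In standard notation the equation presumably referred to as (4) is:

```latex
m\,\ddot{\mathbf{r}} = \mathbf{F}_{\mathrm{ext}} + \frac{2e^{2}}{3c^{3}}\,\dddot{\mathbf{r}} \qquad (4)
```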

Fortunately, it is wrong too. I say “fortunately” because it reinforces my previous statement that the self-action is a wrong idea. This remainder **cannot be used** as it gives runaway solutions: not a small radiation reaction, but a rapid self-acceleration. Microsoft Windows would say:

I.e., the term is not of the right functional dependence either. In other words, all terms of the self-action force in (2) are wrong. Briefly, this self-action idea was tried and it failed miserably. Period.
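The runaway behaviour is easy to reproduce numerically. Below is a minimal sketch of my own (not from the original post), in illustrative units with $m = 1$ and the radiation time constant $\tau = 2e^{2}/3mc^{3}$ set to 1:

```python
import math

# Illustration (my own sketch, not from the original post): with no
# external force, the jerk equation m*x'' = m*tau*x''' turns any nonzero
# initial acceleration into exponential "runaway" growth,
# a(t) = a0*exp(t/tau), instead of a small radiation correction.

tau = 1.0              # radiation time constant, illustrative units
dt = 1e-4              # integration step
a0 = 1e-6              # tiny initial acceleration; F_ext = 0 throughout

x, v, a = 0.0, 0.0, a0
t_final = 5.0 * tau
for _ in range(int(t_final / dt)):
    a += (a / tau) * dt   # x''' = x''/tau follows from the equation
    v += a * dt
    x += v * dt

print(a / a0)                   # growth factor, close to exp(5) ~ 148
print(math.exp(t_final / tau))  # analytic prediction
```

Making $\tau$ small does not remove the problem; it only makes the self-acceleration faster.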

(This self-action can be figuratively represented as connecting an amplifier output to its input. It creates a feedback. First the feedback is strongly negative – no reaction to an external signal is possible anymore. After “repairing” this undesirable feedback, we get a strong positive feedback. Now we have a self-amplification whatever the external signal value is. No good either.)

A. Unzicker speaks of a fake, and for some readers this may look like an exaggeration. If you want to see physicists cheating, here is another bright example. This cheating consists in using in their “**proof**” of energy conservation [3], as if the corresponding equation (4) had physically reasonable quasi-periodic solutions. But it doesn’t! Runaway solutions are not quasi-periodic and are not physical at all, so the proof is just a deception. (They multiply by and integrate by parts to “show” that on average it is a radiation power.) If they insist on using quasi-periodic solutions in their proof, then these solutions do not belong to Eq. (4). (A “jerky” equation like (4) does not even have a physical Lagrangian from which it could be directly derived!)
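For completeness, the textbook manipulation being criticized is presumably this: multiplying the jerk term by $\dot{\mathbf{r}}$ and integrating by parts over an interval $[0,T]$,

```latex
\int_{0}^{T} \frac{2e^{2}}{3c^{3}}\,\dddot{\mathbf{r}}\cdot\dot{\mathbf{r}}\;dt
= \Bigl[\frac{2e^{2}}{3c^{3}}\,\ddot{\mathbf{r}}\cdot\dot{\mathbf{r}}\Bigr]_{0}^{T}
\;-\; \int_{0}^{T} \frac{2e^{2}}{3c^{3}}\,\ddot{\mathbf{r}}^{\,2}\;dt .
```

The boundary term vanishes only for quasi-periodic motion, and the remaining integral is minus the Larmor radiated energy; the argument thus presupposes exactly the quasi-periodic solutions that Eq. (4) does not possess.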

As a matter of fact, after cheating with the “**proof**”, this harmful jerk term is also (quietly) abandoned in favor of some small force term used in practice instead. This small term is (or the like):
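Presumably the replacement meant here is obtained by substituting $\dddot{\mathbf{r}} \approx \dot{\mathbf{F}}_{\mathrm{ext}}/m$ from (1) (my reconstruction):

```latex
m\,\ddot{\mathbf{r}} = \mathbf{F}_{\mathrm{ext}} + \frac{2e^{2}}{3mc^{3}}\,\dot{\mathbf{F}}_{\mathrm{ext}} \qquad (5)
```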

Equation (5) is much better, but here again I notice cheating, because they represent it as a “derivation” from (4). Now the cheating consists in **replacing** with , as if we solved (4) by iterations (perturbation method). However, in the true iterative procedure we obtain a given function of time on the right-hand side rather than a term expressed via

the first perturbative term is **a known external driving force** (periodic and resonant (!) in the case of a harmonic oscillator), whereas the replacement term is **an unknown damping force** (a kind of friction):

A perturbative solution to (6) (a red line in Fig. 2)

is different from the damped oscillator solution (a blue line in Fig. 2). The solution of a damped oscillator equation is non-linear in , non-linear in a quite definite manner. It is not a self-action, but an *interaction* with something else. This difference in equations is qualitative (conceptual), and it is quantitatively important in the case of a strong radiation reaction force and/or when (I used in this example , and ). I conclude therefore that the damped oscillator equation (7) is not a perturbative version of (6), but another guesswork result, tried and finally retained in practice because of its physically more reasonable (although still approximate) behaviour. Similarly, equation (5) is not a perturbative version of (4), but another (imperceptible) equation replacement [3], [4]. Of course, there is not and cannot be any proof that the perturbative series for (4) converges to solutions of (5). The term is a third functional dependence tried for describing the radiation reaction force.
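The qualitative difference described above can be sketched numerically. The following is my own illustration (not the author's figure): a harmonic oscillator driven by the *known*, resonant perturbative force computed from the zeroth-order motion, compared with the oscillator in which that term is replaced by a friction-like damping force:

```python
import math

# Sketch (my own reconstruction of the point, not the author's figure):
# compare a harmonic oscillator driven by the *known* perturbative
# "radiation" term computed from the zeroth-order motion x0 = cos(w*t)
# (a resonant drive -> secular growth) with the oscillator where that
# term is replaced by an *unknown* friction-like damping force.

w, tau = 1.0, 0.1      # frequency and radiation time constant (w*tau = 0.1)
dt, T = 1e-3, 200.0

x6, v6 = 1.0, 0.0      # driven by the known perturbative force, Eq. (6) style
x7, v7 = 1.0, 0.0      # damped oscillator, Eq. (7) style
amp6 = amp7 = 0.0
for i in range(int(T / dt)):
    t = i * dt
    drive = tau * w**3 * math.sin(w * t)   # third derivative of x0, known
    a6 = -w**2 * x6 + drive                # resonant driving
    a7 = -w**2 * x7 - tau * w**2 * v7      # friction-like damping
    v6 += a6 * dt; x6 += v6 * dt           # semi-implicit Euler
    v7 += a7 * dt; x7 += v7 * dt
    if t > 0.9 * T:                        # record late-time amplitudes
        amp6 = max(amp6, abs(x6))
        amp7 = max(amp7, abs(x7))

print(amp6, amp7)   # secular growth vs exponential decay
```

The driven amplitude grows linearly in time (a secular, resonant response), while the damped amplitude decays exponentially; the two equations produce qualitatively different solutions, not two versions of one perturbative expansion.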

Hence, researchers have been trying *to derive equations* describing the radiation reaction force correctly, but they have failed. For practical (engineering) purposes they constructed (found by trying different functions) and are content with approximate equations like (5) that do not provide the exact energy conservation and do not follow from “principles” (no Lagrangian, no Noether theorem, etc.). Factually, the field approach has been “repaired” several times with anti-field guesswork, if you like. In any case, we may not represent it as a consistent implementation of principles, because it is not.

Guessing equations, of course, is not forbidden – on the contrary – but this story shows how far we have gone from the original idea of self-action. It would not be such a harmful route if the smart mainstream guys did not elevate every step of this zigzag guesswork into “guiding principles” – relativistic and gauge invariance, **restricting**, according to the mainstream opinion, the form of interaction to . Nowadays too few researchers see these steps as a severe lack of basic understanding of what is going on. On the contrary, the mainstream ideology consists in dealing with the same wrong self-action mechanism patched with the same discarding prescription (“renormalization”), etc., but accompanied also by anthems to these “guiding principles” and to their inventors. I do not buy it. I understand the people’s desire to look smart – they grasped the principles of Nature – but they look silly to me instead.

Indeed, let us forget for a moment about its inexactness and look at Eq. (5) as an *exact* equation, i.e., as containing the desirable radiation reaction *correctly*. We see that such an equation exists (at least, we admit its existence); it does not contain any non-physical stuff like and , and together with the Maxwell equations it works fine. Then why not obtain (5) directly from (1) and from another physical concept, different from a wrong self-action idea patched with several forced replacements of equations? Why do we present our silly way as the right and unique one? Relativistic and gauge invariance (equation properties) must be preserved, nobody argues, but making them “guiding principles” only leads to catastrophes, so (5) is not a triumph of “principles”, but a lucky result of our difficult guesswork done against the misguiding principles. Principles do not think for us researchers. Thinking is our duty. Factually, we need in (1) a small force like that in (5), but our derivation gives (2). What we then do is a lumbering justification of replacements of automatically obtained bad functions with creatively constructed better ones. Although equations like (5) work satisfactorily in some range of forces, the lack of a mechanical equation with an exact radiation reaction force in CED shows that we have not reached our goal and that those principles have let us down.

Note, although the above is a non-relativistic version of CED, the CED story is truly relativistic and gauge-invariant, and it serves as a model for many further theory developments. In particular, nowadays in QFT they “derive” the wrong self-action Lagrangian from a “principle of local gauge invariance” (a gauge principle for short). They find it mathematically beautiful and enjoy the equation symmetries and the conservation laws that follow from this symmetry. They repeat QED, where they think this “gauge principle” is at work. However, such gauge equations do not have physical solutions, so their conserved quantities are just bullshit. While enjoying the beauty of gauge interaction, they omit to mention that the solutions are non-physical. The gauge principle in QED does not lead to physical equations. We are forced to rebuild a gauge theory as I outlined above. In CED the bare and electromagnetic masses appear and shortly after disappear for good, but in QED and QFT they reappear in each perturbative order. In addition, the physical charge also acquires unnecessary and bad “corrections”, and their omnipresence creates an impression that they belong to physics.

Next, new “principles” come into play – they come into play with the purpose of fixing this shit. Those principles serve to “allow” multiple replacements of bad terms in solutions with better ones – bare stuff and renormalizations, of course. A whole “fairy science” about a “vacuum polarization” around a still “bare” charge is developed to get rid of bad perturbative corrections in this wrong gauge construction (renormalization group). It boils down to adding a counter-term Lagrangian to the gauge one :
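Schematically (my shorthand; the lost formula, presumably numbered (8), defines the new perturbation):

```latex
\mathcal{L} \;=\; \mathcal{L}_0
\;+\; \underbrace{\mathcal{L}_{\mathrm{gauge\,int}} + \mathcal{L}_{\mathrm{CT}}}_{\text{new perturbation (8)}}
```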

so the interaction becomes *different* from a purely gauge one. (Often this is presented as imposing physical conditions on a (bad) theory.) Thus, the bare stuff and the bad corrections cancel each other and do not exist any more. That is their fate – to disappear from physics forever, if you understand it right. And it is we who make them disappear, not physical phenomena like vacuum polarization, etc. In other words, renormalization is not a calculation, but a specific modification of calculation results.

But this fix is not sufficient either. One needs to sum up the soft diagrams too (to all orders) in order to obtain physically meaningful results because, alas, the electron does not radiate correctly otherwise and the calculation fails! The latter fact shows eloquently that some part of the “perturbation” (8) (let us call it figuratively ) is not small and should be taken into account exactly (joined with , hence removed from the “perturbation”):
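In the same shorthand, with $\mathcal{L}_{\mathrm{soft}}$ the non-small part of the perturbation (8), the regrouping is presumably:

```latex
\tilde{\mathcal{L}}_0 \;=\; \mathcal{L}_0 + \mathcal{L}_{\mathrm{soft}} \qquad (9)
\\[4pt]
\tilde{\mathcal{L}}_{\mathrm{int}} \;=\; \mathcal{L}_{\mathrm{gauge\,int}} + \mathcal{L}_{\mathrm{CT}} - \mathcal{L}_{\mathrm{soft}} \qquad (10)
```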

Fig. 3. Electron scattering from an external field in the first Born approximation, as it must be.

Taking this into account exactly means in fact using another, more physical, zeroth-order approximation with Lagrangian (9). The electron charge is involved there non-perturbatively, so the electron is already *coupled* with the field variables, at least partially (I call such an approximation an “electronium” [5]). Interaction (10) is even more different from the “gauge” one. (A good qualitative and quantitative analogy to such IR-divergent series and their exact sums is the second atomic form factor (3) and its series in powers of when and , see Fig. 3 in [5] and [7].)

You see, our former initial approximation (a decoupled electron in ) is not physical. You know why? Because we admit free particles in our minds and thus in our equations. We observe interacting macroscopic bodies. In the simplest case we speak of a probe body in an external force. Sometimes the external forces add up to nearly zero and do not noticeably change the body’s kinetic energy. Then we say the probe body is “free”. But we observe it with the help of interactions too (an inclusive image obtained with photons, for example), so it is never free, as a matter of fact, and, of course, its mass is not bare. For the electron it also means that its very notion as a “point particle”, together with its equations, is an inclusive picture of something compound [5]. An electron coupled to the field oscillators has a natural mechanism of “radiation reaction” and a natural inclusive picture. Such a coupling is always on and never off, unlike the gauge term treated perturbatively. W. Pauli always argued that one should look for a formulation of QED (or a field theory in general) which would mathematically not allow the description of a charged particle without its electromagnetic field. Now, seeing to what extent and are different from (9) and (10), I can safely say that they really do not understand what to start with in their “gauge theories”. Even the physical solution for a partially coupled electron (a “hairy” electron line in Fig. 3) is not written, understood, and explained in QED, but who cares? (My mechanical [6] and atomic [7] toy models demonstrate that this can be achieved.)

In the electroweak unification they wanted to make the weak part of the interaction a “gauge” one too, but the gauge fields are massless. What a pity! Not only does this construction need counter-terms and soft diagram summations, now it needs a special “mechanism” to write down the mass terms in . Such a fix was found, and it is known now as the Higgs mechanism. This fix to a bad gauge interaction idea is presented now as the ultimate explanation of the nature of mass: “Every ounce of mass found in the universe is made possible by the Higgs boson.” I wonder how we were doing before the Higgs? With writing down phenomenological mass terms, we were in error, weren’t we? No. Then why all these complications? Because they do not know how to write down *interactions* with massive particles correctly (an old story, see (9) and (10) above). All they write is not only non-physical, but also non-renormalizable, so they decided to try the gauge principle here too. Fortunately or unfortunately, some such constructions are renormalizable, and thus they survived.

We remember the fiasco with the electron electromagnetic mass, and the Higgs proper mass is not really different, since the Higgs boson acquires its own mass due to “self-action” too. It is not a calculation, but a fake, since the Higgs boson mass is taken from experiment.

The Standard Model is also furnished with a “fine-tuning mechanism” because otherwise it is still bullshit. And let me mention the *fitting parameters* coming with the “Higgs mechanism”. The fitting capacity of the theory has now increased. Some, however, confuse this with an increase of “predictive power”.

To me the Higgs is a fix, a fix somewhat similar to the bare mass term in CED compensating an obviously wrong construction, but a more complicated one. I do not think it is an achievement. A bare mass notion is not an achievement in physics. The freedom in choosing the cutoff in a relationship (à la renorm-group) is not physics; the -independence of is not a CED “universality”. I hope I am clear here. But nowadays particle physics is so stuffed with artefacts of our patches and stopgaps that it is really difficult to distinguish what is physical from what is a fairy tale (a fake).

Today they sell you the bare stuff, its self-action dictated by the gauge principle, then counter-terms, IR diagram summation, the Higgs field with self-action and fine tuning, poisons and antidotes, shit with nutlets, etc., as a physical theory. They are very pushy about it. They have grasped all the principles of Nature.

No, they fool themselves with “clever insights” and fairy tales instead of doing physics. They count on “guiding principles”; they are under the spell of the gauge and other principles. Sticking to them is like being possessed. This fact underlines the shaky grounds on which modern QFT is based.

We have no right to dope ourselves with self-fooling and self-flattering. The conceptual problems have not been resolved, let us recognize it.

(To be updated.)

[1] Laurie M. Brown (editor). *Renormalization From Lorentz to Landau (and beyond)*, 1993, Springer-Verlag, the talk of Max Dresden.

[2] Kerson Huang, *A Critical History of Renormalization*, http://arxiv.org/abs/1310.5533

[3] H. Lorentz, Landau-Lifshitz, R. Feynman, etc.

[4] Fritz Rohrlich, *The dynamics of a charged particle*, (2008) http://arxiv.org/abs/0804.4614

[5] Vladimir Kalitvianski, *Atom as a “Dressed” Nucleus*, Central European Journal of Physics, V. 7, N. 1, pp. 1-11 (2009), http://arxiv.org/abs/0806.2635

[6] Vladimir Kalitvianski, *A toy model of renormalization and reformulation*, http://arxiv.org/abs/1110.3702

[7] Vladimir Kalitvianski, *On integrating out short-distance physics*, http://arxiv.org/abs/1409.8326