This paper is available on arXiv and now on the AIS site in open access.
There was a question on Physics Overflow which boils down to its last phrase:
“I think the bottom line with my questions is that I fully accept that divergent series occur in physics all the time, and quite clearly they contain information that we can extract, but I would like to understand more to what degree can we trust those results.”
And there were some answers, including mine. Among other things, I mentioned a “constructive way” of building asymptotic series, which would be useful in practice. As an example, I considered a toy function . A direct summation of its Taylor series is useless because it diverges from the exact function value at any finite . This is so not only for cases of fast-“growing” coefficients, but also for a regular (convergent) Taylor series truncated at some finite order, when we try to extrapolate the truncated series to finite (large) values of . A truncated series “grows” as its highest power of , but the expanded function can be finite and bounded, so the truncated series becomes inaccurate.
Thinking this fact over, in about 1981-1982, I decided that the difficulty with extrapolation to finite  lay in expanding a finite and slowly changing function in powers of fast-growing functions like . “Why not expand such a slow function in powers of slowly changing functions?“, thought I, for example: , where  at small , but  for finite .
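The idea can be sketched numerically. Since the original toy function is not reproduced here, the choice below is mine: take f(x) = exp(-x) as a stand-in “slow” function and g(x) = x/(1+x) as the slowly changing variable (g stays below 1 for all finite x). A same-order truncation in x blows up at large x, while the truncation in g stays bounded:

```python
# Hypothetical illustration of expanding in a "slow" variable.
# f(x) = exp(-x) and g(x) = x/(1+x) are my own stand-ins, not the post's function.
import sympy as sp

x, g = sp.symbols('x g', positive=True)
order = 5  # keep terms up to the 4th power

f = sp.exp(-x)
taylor_x = sp.series(f, x, 0, order).removeO()          # ordinary Taylor series in x

# Re-express f through the slow variable: x = g/(1-g), then expand in powers of g.
series_g = sp.series(f.subs(x, g/(1 - g)), g, 0, order).removeO()

x0 = 5.0                # a "large" argument where the truncated Taylor series fails
g0 = x0 / (1 + x0)      # the slow variable stays below 1

taylor_val = float(taylor_x.subs(x, x0))   # grows like the highest power of x
slow_val = float(series_g.subs(g, g0))     # stays bounded because g < 1
exact = float(sp.exp(-x0))

print(f"exact          : {exact:.4f}")
print(f"Taylor in x    : {taylor_val:.4f}")
print(f"series in g(x) : {slow_val:.4f}")
```

At x = 5 the truncated Taylor series gives a value above 10 while the exact function is below 0.01; the same-order series in g remains of order unity, which is the whole point of expanding in powers of a bounded variable.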
In order to give some idea of my “constructive” approach and demonstrate its fruitfulness, I considered the following functions instead of : , , and  with an adjustable coefficient . The corresponding figures are the following:
Fig. 1. Expansion in powers of .
Fig. 2. Expansion in powers of .
Fig. 3. Expansion in powers of .
The smaller the terms in the new series, the better the approximation. Following this banal observation, I adjusted the coefficient in  to minimize the coefficient at  (note, the axis is made longer in the next three figures):
Fig. 4. Expansion in powers of .
We see that  is now approximated much better than with its truncated asymptotic series in powers of  (Fig. 1).
The same idea applied to the function works well too:
Fig. 5. Expansion in powers of .
Finally, the ground state energy of the anharmonic oscillator in QM (anharmonicity ) also has a divergent series: , which can be transformed into a series in powers of . It gives a good extrapolation of  (error % within , Fig. 6), unlike the original series:
Fig. 6. The ground state energy of 1D anharmonic oscillator.
Thus, my idea was not too stupid, as it allowed me to extrapolate the asymptotic (divergent) series into the region of big  with a decent accuracy.
P.S. In my practice I also encountered a series (a convergent one) whose convergence I managed to improve by partially summing some of its terms into a finite function , so the resulting series became even better convergent (Chapters 3 and 4). It is somewhat similar to the summation of soft contributions in QED, if you like.
The moral is the following: if you want to have a “convergent” series, then build it yourself and enjoy.
Hank Campbell, the founder of Science 2.0, wrote two articles about science in danger:
(I am really grateful to Ron Maimon for his tender loving care of scientists who try to do some science. A rare person he is!)
I would like to briefly discuss a “proof” of the necessity and usefulness of renormalization, using a popular example taught to students of the University of Maryland. This example was taken from What is an example of an infinity arising in QFT and an example of a renormalization technique being used to deal with it? The direct pdf file reference is the following: Page on umd.edu
The problem is simple. The author first considers a -like potential and calculates the scattering (reflection/transmission) amplitudes. In particular, he obtains the “low-energy” formula: . It is just a regular calculation. Everything is physically reasonable and no renormalization is necessary so far. In particular, when the potential coefficient  tends to infinity, the transmission amplitude vanishes and the incident wave is completely reflected. It is comprehensible in the case of a positive , but in “wave” mechanics it holds for a negative value of  as well.
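The behavior described above is easy to verify. For the 1D potential V(x) = g·δ(x), the textbook transmission amplitude is t(k) = 1/(1 + i·m·g/(ħ²k)) (phase conventions vary between texts, but |t|² does not), so the transmission probability vanishes as |g| grows, for either sign of g:

```python
# Transmission probability for the 1D delta potential V(x) = g*delta(x),
# using the textbook amplitude t(k) = 1/(1 + i*m*g/(hbar^2 * k)).
# Units: hbar = m = 1.

def transmission(g, k):
    t = 1.0 / (1.0 + 1j * g / k)
    return abs(t)**2

k = 1.0
for g in (0.0, 1.0, -1.0, 100.0, -100.0):
    print(f"g = {g:6.1f}  |t|^2 = {transmission(g, k):.6f}")
# |t|^2 -> 1 as g -> 0 and -> 0 as |g| -> infinity for EITHER sign of g:
# the complete-reflection limit mentioned in the text.
```

The symmetry in the sign of g is the “wave mechanics” point made above: an infinitely deep well reflects just as completely as an infinitely high barrier.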
After that, the author considers another interaction potential . This potential gives “undesirable” results. Replacing it with a “regularized” version of this kind , the author obtains a “regularized” amplitude . Again, so far so good. When , the transmission amplitude tends to zero too. It is qualitatively comprehensible, because each  grows in its absolute value, as in the case of  in the problem considered just above.  is a highly reflecting potential.
But the author does not like this result. He wants a “low-energy theorem” fulfilled for this potential too. He wants a nonzero transmission amplitude! I do not know why he wants this, but I suspect that in “realistic” cases we use interactions like  because we do not know how to write down something like . It is also possible that in experiment one observes a nonzero transmission amplitude and renormalization of  “works”. So, his desire to obtain a physical result from an unphysical potential is the main “human phenomenon” happening in this domain. We want right results from a wrong theory. We require them from it! (The second theory is wrong because of a wrong guess of the potential.)
Of course, a wrong theory does not give you the right results, whatever spells one pronounces over it. So he takes the initiative and replaces the wrong result with the right one: . Comparing it with , he concludes that they are equivalent if . The author denotes  as  and calls it a “phenomenological parameter” to compare with experiment via the famous formula . After finding it from experimental data, the author says that a theory with  describes the experimental data.
Is his reasoning convincing to you as a way of doing physics?
If one is obliged to manipulate the calculation results while saying that  is “not observable”, I wonder why and for what reason one then proposes such a  and insists on its correctness and uniqueness? Because it is “relativistic and gauge invariant”? Because after renormalization it “works”? And what about physics?
Factually, the renormalized result belongs to another theory (a theory with another potential). Then why not find that theory from physical reasoning and use it instead of ? This is what I call a theory reformulation. Am I not reasonable?
I would like to explain how short-distance (or high-energy) physics is “integrated out” in a reasonably constructed theory. Speaking roughly and briefly, it is integrated out automatically. I propose to build QFT in a similar way.
Phenomena to describe
Let us consider a two-electron Helium atom in the following state: one electron is in the “ground” state and the other one is in a high orbit. The total wave function of this system, depending on the absolute coordinates, is conveniently presented as a product of a plane wave  describing the atomic center of mass and a wave function  of the relative, or internal, collective motion of the constituents, where  and  are the electron coordinates relative to the nucleus (see Fig. 1).
Figure 1. Coordinates in question.
Normally, this wave function is still a complicated thing and the coordinates  and  are not separated (the interacting constituents are in mixed states). What can be separated in  are the normal (independent) modes of the collective motion (or “quasi-particles”). Normally it is their properties (proper frequencies, for example) that are observed.
However, in the case of one highly excited electron (), the wave function of the internal motion can, for our numerical estimations and qualitative analysis, be quite accurately approximated with a product of two hydrogen-like wave functions, where  is a wave function of the  ion () and  is a wave function of Hydrogen in a highly excited state ().
The system is at rest as a whole and serves as a target for a fast charged projectile. I want to consider large-angle scattering, i.e., scattering from the atomic nucleus rather than from the atomic electrons. The projectile-nucleus interaction is expressed via the “collective” coordinates thanks to the relationship . I take a non-relativistic proton with  as a projectile, and I will consider transferred momentum values that are insufficient to excite the inner electron levels by “hitting” the nucleus. Below I will make these conditions precise. Thus, for the outer electron the proton is sufficiently fast to be reasonably treated by perturbation theory in the first Born approximation, and for the inner electron the proton scattering is such that it cannot cause its transitions. This two-electron system will model a target with soft and hard excitations.
Now, let us look at the Born amplitude of scattering from such a target. The general formula for the cross section is the following (all notations are from ):
The usual atomic form-factor (2) describes scattering from the atomic electrons and becomes relatively small for large scattering angles . This is because, roughly speaking, the atomic electrons are light compared to the heavy projectile and cannot cause large-angle scattering for a kinematic reason. I can consider scattering angles larger than those determined by the direct projectile-electron interactions (), or, even better, I may exclude the direct projectile-electron interactions altogether in order not to involve  in the calculations any more. Then there is no “screening” of the projectile due to atomic electrons, nor atomic excitations due to the direct projectile-electron interaction, at any scattering angle.
Let us analyze the second atomic form-factor (3) in the elastic channel. With our assumptions on the wave function, it can be easily calculated if the corresponding wave functions are injected in (3):
It factorizes into two Hydrogen-like form-factors:
Form-factor  describes the quantum mechanical smearing of the nucleus charge (a “positive charge cloud”) due to the nucleus coupling to the first atomic electron. This form-factor may be close to unity (the smearing may not be “visible” because of its small size ). Form-factor  describes the quantum mechanical smearing of the nucleus charge (another “positive charge cloud”) due to the nucleus coupling to the second atomic electron. In our conditions  is rather small because the corresponding smearing size is much larger. In our problem setup the projectile “probes” these positive charge clouds and does not interact directly with the electrons.
Thus, the projectile may “see” a big “positive charge cloud” created by the motion of the atomic nucleus in its “high” orbit (i.e., by the motion of the  ion thanks to the second electron, but with the full charge seen by the projectile), and at the same time it may not see the additional small positive cloud of the nucleus “rotating” also in the ground state of the  ion. The complicated short-distance structure (the small cloud within the large one) is integrated out in (4) and results in the elastic form-factor tending to unity, as if this short-distance physics were absent. We can pick such a proton energy , such a scattering angle , and such an excited state  that  may be equal to unity even at the largest transferred momentum, i.e., at .
In order to see to what extent this is physically possible in our problem, let us analyze the “characteristic” angle  for the inner electron state . (Recall that .)  is the angle at which the inelastic processes become relatively essential (the probability of not exciting the target “internal” states is  and that of exciting any “internal” state is described with the factor ):
Here  stands instead of  for the  ion due to , and the factor 5 originates from the expression . So,  for . Fig. 2 shows just such a case (the red line) together with the other form-factor – for a third excited state of the other electron (the blue line) – to demonstrate the strong impact of .
Figure 2. Helium form-factors and at .
We see that for scattering angles  form-factor  becomes very close to unity (only the elastic channel is open for the inner electron state) whereas form-factor  may still be very small if . The latter form-factor describes a large and soft “positive charge cloud” in the elastic channel, and for inelastic scattering () it describes the soft target excitations energetically accessible when hitting the heavy nucleus.
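The contrast between the two form-factors is easy to illustrate with the generic hydrogen-like 1s elastic form-factor F(q) = (1 + (q·r/2)²)⁻², evaluated for two very different cloud radii. The radii and the transferred momentum below are illustrative stand-ins of my own, not the article's actual parameters:

```python
# Two "positive charge clouds" of very different sizes, probed at the same
# transferred momentum q. Radii are hypothetical, chosen only to show the contrast.

def form_factor(q, r):
    """Generic hydrogen-like 1s elastic form-factor (textbook shape)."""
    return (1.0 + (q * r / 2.0)**2)**-2

q = 2.0          # transferred momentum (arbitrary units)
r_small = 0.01   # small cloud: smearing due to the inner electron
r_large = 10.0   # big cloud: smearing due to the highly excited outer electron

F1 = form_factor(q, r_small)
F2 = form_factor(q, r_large)
print(f"F1 = {F1:.6f}")   # close to 1: the small cloud is not resolved
print(f"F2 = {F2:.2e}")   # much less than 1: the big cloud is resolved
```

A form-factor near unity means the projectile cannot “see” that smearing at this momentum transfer, which is precisely the mechanism by which the short-distance structure drops out of the elastic channel.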
The inner electron level excitations due to hitting the nucleus can be suppressed not only for , but also for any angle in the case of relatively small projectile velocities (Fig. 3).
Figure 3. Helium form-factors and at .
By the way, a light electron as a projectile does not see the additional small smearing even at , because its energy is far insufficient (its de Broglie wavelength is too large for that). The incident electron should be rather relativistic to be able to probe such short-distance details .
Let us note that for small velocities the first Born approximation may become somewhat inaccurate: a “slow” projectile may “polarize” the atomic “core” (more exactly, the nucleus may have enough time to make several quick turns during the interaction), and this effect numerically influences the exact elastic cross section. Higher-order perturbative corrections of the Born series take care of this effect, but the short-distance physics will still not intervene in a harmful way in our calculations. Instead of simply dropping out (i.e., producing a unity factor in the cross section (1)), it will be taken into account (“integrated out”) more precisely, when necessary.
Hence, whatever the true internal structure is (the true high-energy physics, the true high-energy excitations), the projectile in our “two-electron” theory cannot factually probe it when it lacks energy. The soft excitations are accessible and the hard ones are not. It is comprehensible physically and is rather natural – the projectile, as a long wave, only sees large things. Small details are somehow averaged or integrated out. In our calculation, however, this “integrating out” (factually, “taking into account”) of the short-distance physics occurs automatically rather than “by hand”. We do not introduce a cut-off and do not discard (absorb) harmful corrections in order to obtain something physical. We do not have harmful corrections at all. This convinces me that it is possible to construct a physically reasonable QFT where no cut-off and no discarding are necessary.
The first Born approximation (3) in the elastic channel gives a “photo” of the atomic positive charge distribution, as if the atom were internally unperturbed during the scattering: a photo with a certain resolution, though.
Inelastic processes give possible final target states different from the initial one (different cloud configurations).
The fully inclusive cross section (i.e., the sum of the elastic and all inelastic ones) reduces to a great extent to the Rutherford scattering formula for a free and still point-like target nucleus (no clouds at all!) . The inclusive picture is another kind of averaging over the whole variety of events, an averaging often encountered in experiments and resulting in a deceptive simplification. One has to keep this in mind, because usually it is not mentioned when speaking of short-distance physics, as if there were no difference between the elastic, inelastic, and inclusive pictures.
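The reduction of the fully inclusive sum to the point-like (Rutherford) result rests on completeness: summing |⟨f| e^{iqx} |0⟩|² over all final states f gives exactly 1, so the target structure drops out. A minimal sketch of this closure, using a harmonic-oscillator target as a stand-in for the atomic states:

```python
# Closure check behind the inclusive-cross-section statement: the elastic
# probability alone is < 1, but summed over ALL final states it is exactly 1,
# so the "clouds" disappear from the fully inclusive picture.
# A harmonic-oscillator target is used here purely as a solvable stand-in.
import numpy as np
from scipy.linalg import expm

nbasis = 80
n = np.arange(nbasis)
# position operator x = (a + a^dagger)/sqrt(2) in the oscillator basis
X = np.zeros((nbasis, nbasis))
off = np.sqrt((n[:-1] + 1) / 2.0)
X[n[:-1], n[:-1] + 1] = off
X[n[:-1] + 1, n[:-1]] = off

q = 1.0
U = expm(1j * q * X)                      # the "hitting" operator exp(i*q*x)
amps = U[:, 0]                            # amplitudes <f| e^{iqx} |0>
elastic = abs(amps[0])**2                 # elastic channel only: strictly < 1
inclusive = float(np.sum(abs(amps)**2))   # all channels: exactly 1 by unitarity

print(f"elastic   = {elastic:.4f}")
print(f"inclusive = {inclusive:.6f}")
```

Since X is Hermitian, exp(iqX) is unitary even in the truncated basis, so the inclusive sum equals 1 to machine precision; the elastic probability alone equals the familiar Gaussian factor e^{-q²/2} for this target.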
Increasing the projectile energy (decreasing its de Broglie wavelength) and increasing the scattering angles and the experimental resolution help reveal the short-distance physics in more detail. Doing so, we may discover high-energy excitations inaccessible at lower energies/angles. Thus, we may learn that our knowledge (for example, about the pointlikeness of the core) was not really precise, “microscopic”.
Above we did not encounter any mathematical difficulties. It was a banal calculation, as it should be in physics. We may therefore say that our theory is physically reasonable.
What makes our theory physically reasonable? The permanent interactions of the atomic constituents are taken into account exactly, both via their wave function and via the relationships between their absolute and relative (or collective) coordinates (namely,  involved in  was expressed via  and ). The rest was perturbation theory in this or that approximation. For scattering processes it calculated the occupation number evolutions – the transition probabilities between different target states. This is an ideal in the description of scattering physics.
Now, let us imagine, for instance, that this “two-electron” theory of ours is a “Theory of Everything” (or a true “underlying theory”) unknown to us so far. The low-energy experiments outlined above would not reveal the “core” structure, but would present it as a point-like nucleus smeared only due to the second electron. Such experiments would then be well described with a simpler, “one-electron” theory – a theory of a hydrogen-like atom with  and . The presence of the first electron would not be necessary in such a theory: the latter would work fine and without difficulties – it would reproduce the low-energy target excitations.
May we call the “one-electron” theory an effective one? Maybe. I prefer the term “incomplete” – it does not include and predict all target excitations existing in Nature, but it has no mathematical problems (catastrophes) as a model, even outside its domain of validity. The projectile energy (or a characteristic transferred momentum ) is not a “scale” in our theory in the Wilsonian sense.
Thus, the absence of the true physics of short distances in the “one-electron” theory does not make it ill-defined or mathematically failing. And this is so because the one-electron theory is also constructed correctly – what is known to be coupled permanently and determines the soft spectrum is already taken into account in it via the wave function and via the coordinate relationships. That is why, when people say that a given theory has mathematical problems “because not everything in it is taken into account”, I remain skeptical. I think the problem is in its erroneous formulation. It is a problem of formulation or modeling (see, for example, the unnecessary and harmful “electron self-induction effect” discussed in  and an equation coupling error discussed in ). And I do not believe that when everything else is taken into account, the difficulties will disappear automatically. Especially if the “new physics” is taken into account in the same way – erroneously. Instead of excuses, we need a correct formulation of incomplete theories at each level of our knowledge.
Now, let us consider a one-electron state in QED. According to the QED equations, “everything is permanently coupled with everything”; in particular, even a one-electron state, as a target, contains possibilities of exciting high-energy states, such as creating hard photons and electron-positron pairs. It is certainly so in experiments, but the standard QED suffers from calculation difficulties (catastrophes) in obtaining them in a natural way, because of its awkward formulation. A great deal of QED calculations consists in correcting its initial wrongness. That is why “guessing right equations” is still an important physical and mathematical task.
Electronium and all that
My electronium model  is an attempt to take into account the low-energy QED physics, like in the “one-electron” incomplete atomic model mentioned briefly above. The non-relativistic electronium model does not include all possible QED excitations, only soft photons; however, and this is important, it works fine in the low-energy region. Colliding two electroniums produces soft excitations (radiation) immediately, in the first Born approximation. (It looks like colliding two complex atoms – in the final state one naturally obtains excited atoms.) There is no ground for the infrared problem there, because the soft modes are taken into account “exactly” rather than “perturbatively”. A perturbative treatment of the soft modes gives a divergent series due to the “strongness” of the soft-mode contributions to the calculated probabilities :
Picture 4. Extraction from .
It is easy to understand by expanding our second form-factors in powers of the “small coupling parameter” in the exponential (3): . For the first electron (i.e., for the hard excitations) the term  may be small (see Fig. 3), whereas for the second one it is rather large and diverges in the soft limit . In QED the hard and soft photon modes are treated perturbatively because the corresponding electron-field interaction is factually written in the so-called “mixed variables” , and the corresponding series are similar to expansions of our inelastic form-factors in powers of .
By the way, the photons are those normal modes of the collective motions whose variables in the corresponding are separated.
How would I complete my electronium model, if given a chance? I would add all QED excitations in a similar way – I would add a product of the other possible “normal modes” to the soft photon wave function, and I would express the constituent electron coordinates via the center of mass and relative motion coordinates, like in the non-relativistic electronium or in an atom. Such a completion would work as well as my actual (primitive) electronium model, but it would produce the whole spectrum of possible QED excitations in a natural way. Of course, I have not done it yet (due to lack of funds) and it might be technically very difficult to do, but in principle such a reformulated QED model would be free from mathematical and conceptual difficulties by construction. Yes, it would still be an “incomplete” QFT, but no references to the absence of the other particles (excitations) existing in Nature would be necessary. I would not introduce a cut-off and running constants in order to get rid of the initial wrongness, as is done today in the frame of the Wilsonian RG exercise.
In a “complete” reformulated QFT (or “Theory of Everything”) the excitations non-accessible at a given energy would not contribute (with some reservations). Roughly speaking, they would be integrated out (taken into account) automatically, as in my “two-electron” target model given above, reducing naturally to a unity factor.
But this property of “insensitivity to short-distance physics” does not belong exclusively to the “complete” reformulated QFT. “Incomplete” theories can also be formulated in such a way that this property holds. It means the short-distance physics present in an “incomplete theory”, even if different from reality, cannot be and will not be technically harmful for calculations, as was eloquently demonstrated in this article. When the time arrives, the new high-energy excitations can be taken into account in the natural way described primitively above as a transition from a “one-electron” to a “two-electron” target model. I propose to think over this way of constructing QFT. I feel it is a promising direction for building physical theories.
 Kalitvianski V 2009 Atom as a “Dressed” Nucleus Cent. Eur. J. Phys. 7(1) 1–11 (Preprint arXiv:0806.2635 [physics.atom-ph])
 Feynman R 1964 The Feynman Lectures on Physics vol. 2 (Reading, Massachusetts: Addison-Wesley Publishing Company, Inc.) pp 28-4–28-6
 Kalitvianski V 2013 A Toy Model of Renormalization and Reformulation Int. J. Phys. 1(4) 84–93 (Preprint arXiv:1110.3702 [physics.gen-ph])
 Akhiezer A I, Berestetskii V B 1965 Quantum Electrodynamics (New York, USA: Interscience Publishers) p 413
 Kalitvianski V 2008 Reformulation Instead of Renormalization Preprint arXiv:0811.4416 [physics.gen-ph]
There was a period when renormalization was considered a temporary remedy, luckily working in a limited set of theories and supposed to disappear within a physically and mathematically better approach. P. Dirac called renormalization “doctoring numbers” and advised us to search for better Hamiltonians. J. Schwinger also underlined the necessity of identifying the implicit wrong hypothesis whose harm is removed by renormalization, in order to formulate the theory in better terms from the very beginning. Alas, many tried, but none prevailed.
In his article G. ‘t Hooft mentions the skepticism with respect to renormalization, but he says that this skepticism is not justified.
I was reading this article to understand his way of thinking about renormalization. I thought it would contain something original, insightful, clarifying. After reading it, I understood that G. ‘t Hooft had nothing to say.
Indeed, what does he propose to convince me?
Let us consider his statement: “Renormalization is a natural feature, and the fact that renormalization counter terms diverge in the ultraviolet is unavoidable”. It is too strong to be true – an exaggeration without any proof. But probably G. ‘t Hooft has had no other experience in his research career.
“A natural feature” of what or of whom? Let me be precise then: it may be unavoidable in a stupid theory, but it is unnatural even there. In a clever theory everything is all right by definition. In other words, everything is model-dependent. However, G. ‘t Hooft tries to create the impression that there may not be a clever theory – an impression that the present theory is good, ultimate and unique.
“The fact that mass terms in the Lagrangian of a quantized field theory do not exactly correspond to the real masses of the physical particles it describes, and that the coupling constants do not exactly correspond to the scattering amplitudes, should not be surprising.”
I personally, as an engineering physicist, am really surprised – I am used to equations with real, physical parameters. To what do those parameters correspond then?
“The interactions among particles have the effect of modifying masses and coupling strengths.” Here I am even more surprised! Who ordered this? I am used to the independence of masses/charges from interactions. Even in the relativistic case, the masses of constituents are unchanged, and what depends on interactions is the total mass, which is calculable. Now his interaction is reportedly such that it changes the masses and charges of the constituents, and this is OK. I used to think that masses/charges were characteristics of interactions, and now I read that interactions factually modify interactions (or equations modify equations ;-)).
To convince me even more, G. ‘t Hooft says that this happens “when the dynamical laws of continuous systems, such as the equations for fields in a multi-dimensional world, are subject to the rules of Quantum Mechanics”, i.e., not in an everyday situation. What is so special about continuous systems, etc.? I, on the contrary, think that this happens every time a person is too self-confident and makes a stupidity, i.e., it may happen in everyday situations. You just have to try it if you do not believe me. Thus, when G. ‘t Hooft talks me into accepting perturbative corrections to the fundamental constants, I wonder whether he has checked his theory for stupidity (like the stupid self-induction effect) or not. I am afraid he has not. Meanwhile the radiation reaction is different from the near-field reaction, so we make a mistake when we take the latter into account. This is not a desirable effect , and that is why it is removed by hand anyway.
But let us admit he managed to talk me into accepting the naturalness of perturbative corrections to the fundamental constants. Now I read: “that the infinite parts of these effects are somehow invisible”. Here I am so surprised that I am screaming. Even a quiet animal would scream after these words. Because if they are invisible, why was he talking me into accepting them?
Yes, they are very visible, and yes, it is we who make them invisible, and this is called renormalization. This is our feature. Thus, it is not “somehow”, but due to our active intervention in the calculation results. And it works! To tell the truth, here I agree: if I take the liberty to modify something for my convenience, it will work without fail, believe me. But it would be better and more honest to call those corrections “unnecessary”, since we subtract them.
How does he justify this intervention of ours in our own theory's results? He speaks of bare particles as if they existed. If the mass and charge terms do not correspond to physical particles, then they correspond to bare particles, and the whole Lagrangian is a Lagrangian of interacting bare particles. Congratulations, we have figured out bare particles just by postulating their interactions! What an insight!
No, frankly, P. Dirac wrote his equations for physical particles and found that this interaction was wrong; that is why we have to remove the wrong part by the corresponding subtractions. No bare particles were in his theory project or in experiments. We cannot pretend to have guessed a correct interaction of bare particles. If one is so insightful and super-powerful, then try to write a correct interaction of physical particles – it is already about time.
“Confrontation with experimental results demonstrated without doubt that these calculations indeed reflect the real world. In spite of these successes, however, renormalization theory was greeted with considerable skepticism. Critics observed that ”the infinities are just being swept under the rug”. This obviously had to be wrong; all agreements with experimental observations, according to some, had to be accidental.”
That’s a proof from a Nobelist! It cannot be an accident! G. ‘t Hooft cannot provide a more serious argument than that. In other words, he insists that in a very limited set of renormalizable theories, our transformations of the calculation results from the wrong to the right may be successful not by accident, but because this unavoidable-but-invisible stuff does exist in Nature. Then why not go farther? With the same success we can advance such a weird interaction that the corresponding bare particles will have a dick on the forehead to cancel its weirdness, and this shit will work, so what? Do they exist, those weird bare particles, in your opinion?
And he speaks of gauge invariance. Formerly it was a property of equations for physical particles, and now it has become a property of bare ones. Gauge invariance, relativistic invariance, locality, CPT, spin-statistics and all that are then properties of bare particles, not of the real ones; let us face this truth if we take our theory seriously.
I like the interaction with counter-terms much better. First of all, it does not change the fundamental constants. Next, it shows the imperfection of our “gauge” interaction – the counter-terms subtract the unnecessary contributions. The cutoff-dependence of the counter-terms is much more natural, and it shows that we are still unaware of the right interaction – we cannot write it down explicitly; at this stage of theory development we are still obliged to repair the calculation results perturbatively. In a clever theory, the Lagrangian only contains the unknown variables, not the solutions, but presently the counter-terms contain solution properties, in particular the cutoff. The theory is still underdeveloped; that much is clear.
No, this paper by G. ‘t Hooft is neither original nor accurate; that is my assessment.
I am dreaming of reformulating the Classical and Quantum Electrodynamics.
Why is it necessary?
It is necessary for a better understanding of the corresponding physics and for having better equations, since currently the equations are such that their solutions need modifications (this fact reflects a lack of physics understanding while constructing these equations).
Why has it not been done before?
In fact, many have tried, but none prevailed. And currently it is renormalizators (practitioners) who are teaching the subject, not theory developers, so they do everything to convince students to accept the “bare particle” physics. In Classical Electrodynamics (CED) some teach that  (the remainder after the mass renormalization) is a good radiation reaction term [1, 2], even though it leads to “false start” solutions; others, on the contrary, teach that  is not applicable at “small times” and one must use  instead, but up to now no mechanical equation has been found that conserves the energy-momentum exactly and in a physical manner. We content ourselves with an approximate description. The Lorentz covariance and the Noether theorem did not help , !
Similarly in QED – although the equation set is different from that of CED, renormalization is still a crucial part of the calculations. In addition, soft mode contributions (absent in the first Born approximation) are obligatory for obtaining physically meaningful results. If one is obliged to sum up some of these contributions to all orders, it indicates that a bad initial approximation was used for the perturbation theory.
Many theory developers (founding fathers) were looking for better theory formulations. It turned out to be an extremely difficult problem, mainly due to prejudices implicitly involved in the theoretical constructions. Paul Dirac, a rare physicist who was not thinking of fame and money at all, never gave up. His motto – a theory must be mathematically and physically sensible, and for the sake of that we must search for better Hamiltonians, better formulations, a better description than the current one – is my motto too.
If you have read my blogs (this one, http://fishers-in-the-snow.blogspot.fr/ , http://vladimir-anski.livejournal.com/) and articles (more here), you may have an idea of what I mean by reformulation. If you like, my program can roughly be understood as, first, fulfilling the counter-term subtractions exactly,
and, second, including some of the resulting “good” (renormalized, to be exact) Lagrangian terms into a new initial approximation; figuratively speaking, I mean regrouping the Lagrangian.
The new “free” Lagrangian will contain soft modes and physical constants by construction. Then the “interaction term” will be different too,
so that no renormalization will be needed, and the soft diagram contributions will be taken into account automatically in the first Born approximation by construction. The resulting perturbation theory series will resemble a usual Taylor series, with no necessity to cheat and modify its terms. This is an unexplored possibility of theory formulation, and it is what I would like to do.
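Schematically, the two steps above can be sketched as follows (the symbols are chosen here for illustration only; a worked-out form is exactly what remains to be found):

```latex
% Schematic only: \mathcal{L}_0 is the usual free Lagrangian with bare
% constants, \mathcal{L}_{\text{int}} the "gauge" interaction,
% \mathcal{L}_{\text{ct}} the counter-terms. The subtractions are performed
% exactly and the result is regrouped into a new "free" part (carrying the
% physical m, e and the soft modes) plus a new interaction:
\mathcal{L}
  = \mathcal{L}_0(m_0, e_0) + \mathcal{L}_{\text{int}} + \mathcal{L}_{\text{ct}}
\;\;\longrightarrow\;\;
\mathcal{L}
  = \tilde{\mathcal{L}}_0(m, e) + \tilde{\mathcal{L}}_{\text{int}}
```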
What do I need?
In order to pursue my research, I need funds. I believe that we can achieve a better description if we abandon some prejudices and employ physical reasoning instead of proceeding by blind analogy. I have already outlined possible directions in my articles [7-11]. But currently I am working for a private company, fulfilling subcontract studies, and it takes all my time and effort. This activity is far from my dream, though. I have to abandon it in order to concentrate on my own subject. I’ve got to break free!
Academia does not support this “reformulation approach” any more. I can only count on private funding. If you or your friends, or friends of your friends, are rich people, then create a fund to support my research, run it, and we will make it possible.
I do not need a crazy amount like a Milner Prize, no! A regular salary of a theorist will suffice.
P.S. Et voilà, I became unemployed (27 January 2016). Sponsors, hurry up, I am getting older!
[1] Sidney Coleman, Classical Electron Theory from a Modern Standpoint, http://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM2820.pdf
[2] Gilbert N. Plass, Classical Electrodynamic Equations of Motion with Radiative Reaction, Rev. Mod. Phys. 33, 37 (1961), http://journals.aps.org/rmp/abstract/10.1103/RevModPhys.33.37 or https://drive.google.com/file/d/0B4Db4rFq72mLcUN6bEhweTgyWkE/edit?usp=sharing
[3] V. L. Ginzburg, Theoretical Physics and Astrophysics, Pergamon Press (1979), http://www.amazon.com/Theoretical-Physics-Astrophysics-Monographs-Philosophy/dp/0080230679 , https://drive.google.com/file/d/0B4Db4rFq72mLWGhCTXVJLUU1WVk/edit?usp=sharing
[4] Jagdish Mehra (editor), The Physicist’s Conception of Nature (1973), https://drive.google.com/file/d/0B4Db4rFq72mLWnIyM1FSOGcxaDA/edit?usp=sharing
[7] Reformulation instead of renormalization, http://arxiv.org/abs/0811.4416
[8] Atom as a “Dressed” Nucleus, http://arxiv.org/abs/0806.2635
[9] A toy model of Renormalization and Reformulation, http://arxiv.org/abs/1110.3702
[10] Unknown Physics of Short Distances, https://www.academia.edu/370847/On_Probing_Small_Distances_in_Quantum_World
[11] On integrating out short-distance physics, Physics Journal, V. 1, N. 3, pp. 331-342 (2015)
I’m speaking of bare particles. “Heroes” is maybe too pompous a word, but “bricks” would be fine, since everything is made of them despite their being non-observable. Why are they non-observable? Because they are non-interacting particles, or particles “before interaction”. Inaccessible, for short.
It is relativistic and gauge-invariant because the bare particles are such. The parameters $m_0$ and $e_0$ are the bare particle mass and charge, and the interaction term describes how the bare particles interact. Of course, bare particles have spin and other quantum numbers.
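For reference, the textbook QED Lagrangian written in terms of bare quantities – the form assumed in this discussion – reads:

```latex
% Standard QED Lagrangian with bare mass m_0 and bare charge e_0;
% the last term is the bare interaction referred to in the text.
\mathcal{L}_{\text{QED}}
  = \bar{\psi}\left(i\gamma^{\mu}\partial_{\mu} - m_0\right)\psi
  - \tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}
  - e_0\,\bar{\psi}\gamma^{\mu}\psi\,A_{\mu}
```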
You may wonder how we physicists know all that if the bare particles are non-observable (and why they interact if they are non-interacting particles)?
Good questions. Very intelligent! The answer is: due to our physical insight. You know, insight is the ability to see the invisible, to penetrate mentally into the unknown, to figure out the whole picture correctly from small, rare, and distorted pieces of it. Factually, from long distances (from low-energy experiments with physical particles), we penetrated to the very end – to the point where the bare particles live. Thus we insightfully nailed the bare particle properties and their interaction laws correctly, despite their hiding from us.
And yes, the bare non-interacting particles do interact and even self-interact. It is they who permanently do this hard work. At first, naive glance these statements are inconsistent, but they are not. It is a kind of duality in physics. This duality is not much advertised because the bare particles are really modest bricks.
(It’s a joke without humor.)