
THIS “DIGITAL” PHYSICAL WORLD

“The language of truth is simple.”
Seneca the Younger

In 5 Sections with Addendum.

Section 1. MAIN CATEGORIES OF THE “DIGITAL” WORLD

1.1 What are we talking about, exactly?
The history of medicine records the following clinical case.
« Until about the mid-19th century, puerperal fever was rampant in the obstetric clinics of Europe. In some years it claimed the lives of 30 percent or more of the women who gave birth in these clinics. Women preferred to give birth on trains and in the streets rather than end up in a hospital, and when they did go there, they said goodbye to their families as if they were going to the scaffold. The disease was believed to be epidemic in nature; there were about 30 theories of its origin. It was attributed to changes in the state of the atmosphere, to soil conditions, to the location of the clinics; everything was tried as treatment, up to and including laxatives. Autopsies always showed the same picture: death was due to blood poisoning.
F. Pachner gives the following figures: “...over 60 years, in Prussia alone, 363,624 women in labor died of puerperal fever, i.e. more than died over the same period from smallpox and cholera combined... A mortality rate of 10% was considered quite normal; in other words, of every 100 women in labor, 10 died of puerperal fever...” Of all the diseases subjected to statistical analysis at that time, puerperal fever carried the highest mortality.
In 1847, a 29-year-old Viennese doctor, Ignaz Semmelweis, discovered the secret of puerperal fever. Comparing data from two different clinics, he came to the conclusion that the cause of the disease was the carelessness of doctors, who examined pregnant women, delivered babies and performed gynecological operations with unsterile hands and under unsterile conditions. Semmelweis proposed not merely washing the hands with soap and water, but disinfecting them with chlorinated water - this was the essence of his new method of preventing the disease.
Semmelweis’s teaching was never fully and universally accepted during his lifetime; he died in 1865, i.e. 18 years after his discovery, although its correctness was extremely easy to verify in practice. Moreover, Semmelweis's discovery provoked a sharp wave of condemnation not only of his technique, but of him personally (all the luminaries of the medical world of Europe rose up against him).
Semmelweis was a young specialist (by the time of his discovery he had worked as a doctor for about six months) and had not yet landed on the saving shore of any of the then-existing theories. He therefore had no need to fit the facts to some pre-selected concept. It is much harder for an experienced specialist to make a revolutionary discovery than for a young, inexperienced one. There is no paradox in this: major discoveries require the abandonment of old theories. This is very difficult for a professional: the psychological inertia of experience weighs on him. And so the person passes the discovery by, fenced off by an impenetrable “that doesn’t happen”...
Semmelweis's discovery was, in effect, a verdict on obstetricians the world over who rejected him and continued to work in the old way. It turned these doctors into murderers, literally introducing infection with their own hands. This is the main reason why it was at first sharply and unconditionally rejected. The director of the clinic, Dr. Klein, forbade Semmelweis to publish the statistics on the reduction in mortality after the introduction of hand sterilization. Klein said he would regard such a publication as a denunciation. In effect, for the discovery itself, Semmelweis was driven from his post (formally, his contract was simply not renewed), even though mortality in the clinic had dropped sharply. He had to leave Vienna for Budapest, where he found work only after some time and with difficulty.
The naturalness of such an attitude is easy to understand if one imagines the impression Semmelweis’s discovery made on doctors. When one of them, Gustav Michaelis, a famous doctor from Kiel, learned of the technique, introduced mandatory sterilization of hands with chlorinated water in his clinic in 1848 and became convinced that mortality had really dropped, he was unable to withstand the shock and committed suicide. Besides, in the eyes of the world's professors, Semmelweis was too young and inexperienced to teach them anything, let alone to make demands. Finally, his discovery sharply contradicted most of the theories of the time.
At first, Semmelweis tried to inform doctors in the most delicate way - through private letters. He wrote to world-famous scientists - Virchow, Simpson. Compared to them, Semmelweis was a provincial doctor with hardly any professional standing. His letters had virtually no effect on the world community of doctors, and everything remained as before: doctors did not disinfect their hands, patients died, and this was considered the norm.
By 1860, Semmelweis had written a book. But it, too, was ignored.
Only after this did he begin to write open letters to his most prominent opponents. One of them contained the following words: “...if we can somehow come to terms with the devastation caused by childbed fever before 1847, because no one can be blamed for crimes committed unknowingly, the situation is entirely different with the mortality after 1847... The year 1864 marks 200 years since puerperal fever began to run rampant in the obstetric clinics - it is time to finally put an end to it. Who is to blame that, 15 years after the appearance of the theory of prevention of puerperal fever, women in labor continue to die? No one other than the professors of obstetrics...”
The obstetrics professors whom Semmelweis addressed were shocked by his tone. Semmelweis was declared a man “with an impossible character.” He appealed to the conscience of scientists, and in response they fired off “scientific” theories, armored in a reluctance to understand anything that contradicted their concepts. There was falsification and juggling of facts. Some professors, while introducing “Semmelweis sterility” in their clinics, did not officially acknowledge it, and in their reports attributed the reduction in mortality to measures of their own devising - for example, improved ventilation of the wards... There were doctors who falsified statistical data. And when Semmelweis’s theory began to gain recognition, there naturally appeared scientists who disputed the priority of the discovery.
Semmelweis fought fiercely all his life, knowing full well that every day of delay in implementing his theory meant senseless sacrifices that need not have happened... But his discovery was fully recognized only by the next generation of doctors - a generation that did not bear on its hands the blood of thousands of women who never became mothers. The non-recognition of Semmelweis by experienced doctors was self-justification: the method of hand disinfection could not, in principle, be accepted by them. It is characteristic, for example, that the Prague school of doctors, whose mortality rate was the highest in Europe, resisted the longest. Semmelweis's discovery was recognized there only... 37 (!) years after it was made.
One can imagine the despair that took possession of Semmelweis - that feeling of helplessness when, having at last grasped the threads of the terrible disease, he understood that it was not in his power to break through the wall of arrogance and tradition with which his contemporaries had surrounded themselves. He knew how to rid the world of the disease, but the world remained deaf to his advice.»
Unlike the luminaries of medicine, the luminaries of modern physics have not killed with their own hands - they have crippled people's souls. And the toll here is not a paltry few hundred thousand. It has been firmly hammered into the mass consciousness that the modern physical picture of the world cannot be false, because it is confirmed by practice. Here they are, the remarkable scientific and technical achievements of the twentieth century - the atomic bomb, lasers, microelectronic devices! All of them supposedly owe their appearance to fundamental physical theories! But the truth is that these and many other technical achievements were the result of experimental and technological breakthroughs. The theorists retroactively attached their “fundamental theories” to these breakthroughs. And this was done extremely poorly: theorists merely say that they understand how all these technical devices work - in reality there is no such understanding.
Why do we say this so confidently? Here is why. It would make sense to speak of understanding if official theories reflected the objective picture of the experimental facts. But they reflect a completely different picture. An unbiased study of the experimental base of physics shows that official theories are far from corresponding to experimental realities, and that, to create the illusion of such correspondence, some facts were suppressed, some were distorted, and some things were even added that never occurred in any experiment at all. And since such theories were thereby placed beyond criticism, preference went to the most sophisticated of them. But the language of truth is simple!
Let us speak truthfully and simply. There is a fundamental axiom in the official physical doctrine that killed many generations of thinkers and plunged science into a grave crisis. This is the dogma that the physical world is self-sufficient. There is no other reality, they say, besides the physical one! And the reasons for everything that happens in the physical world are, they say, in it itself! And the fact that physical laws operate is, they say, because physical objects have such properties!
“Laws, properties...” Are properties, perhaps, primary? Are physical laws generated by properties? Or is it the other way around? Is it not a tautology to explain laws by properties? And how far can one get with such explanations? There are particles of matter, and they have “properties”. It turns out that particles of matter act on each other at a distance - and that none of their “properties” accounts for this. What is one to do in such a situation, if one accepts no reality other than the physical? That's right: draw the logical conclusion that there exists another kind of physical reality, previously unsuspected. Then pick a colorful name for it - say, “field”. And ascribe to it all the required “properties”, so that action at a distance fits into those “properties”. But when ascribing properties, one cannot provide for all the subtleties at once. New problems will arise! “And problems,” they explain to us, “we will solve as they arise!”
Following these simple rules of life, theorists have already produced so many superfluous entities that physics choked on them long ago. In practice, experimenters deal only with matter. Even the fields are judged only by the behavior of matter: test charged particles are used to judge the electromagnetic field, and test bodies to judge the gravitational field. One watches the behavior of test particles and bodies, and speculates about the properties of the fields that would produce such behavior. It turns out that electromagnetic and gravitational fields, as well as photons, gravitational waves, the physical vacuum with its monstrous hidden energy, virtual particles, neutrinos, strings and superstrings, dark matter - all of this is pure speculation.
It is possible, however, to act not only much more simply, but also much more honestly in relation to experimental realities. Namely: to recognize that in the physical world there is only matter, and that the energies of the physical world - in all their diversity of forms - are the energies of only matter. And also assume that there is a superphysical level of reality, where there are program instructions that, firstly, form particles of matter at the physical level of reality and, secondly, set their properties, i.e. provide options for physical interactions in which these particles can participate. The physical world is what it is by no means in itself: what makes it so is the corresponding software. As long as this software operates, the physical world exists.
The mere assumption of program control over the behavior of matter radically simplifies physics. The physical world, at the fundamental level, turns out to be “digital”, and based on the simplest, binary logic at that! Each elementary particle - an electron, a proton - remains in physical existence while the program that produces the corresponding cyclic changes of states keeps running. Gravity and electromagnetic phenomena are not generated by properties of matter - not by masses and not by electric charges. Both gravity and electromagnetic phenomena are produced by “purely software means”, which transform the energy of matter from one form to another in a definite way - giving rise to the illusion of forces acting on matter. Stable nuclear and atomic structures likewise exist thanks to the work of the corresponding structure-forming algorithms. And even light propagates thanks to a navigator program that “paves the way” for it. All these programs, having long since been debugged, work automatically - and identical situations are processed identically. It is thanks to this - no offense intended - dumb automation that physical laws operate in the world, and neither arbitrariness nor chaos has any place in it. And we see the minimal task for researchers here as comprehending at least the basic principles by which the program instructions supporting the existence of the physical world are organized.
Why is this approach better than the traditional one? That is precisely the question we will be answering throughout this book. In short, the proposed approach is better in that it more honestly reflects objective realities!
But, of course, the proposed approach rests from the outset on the assumption that the physical world is not self-sufficient. “Who wrote all these programs?” - they ask us. We answer: those who wrote these programs have many names; for example - the Demiurges. “I see,” they tell us, shaking their heads sympathetically. - “It turns out that the physical world is a created one. But this cannot be!” - “Why?” - we inquire. - “Because the question immediately arises: if the physical world is created, then who created the Creator?”
Amazingly, this question greatly confounds some thinkers and drives them into melancholy. We therefore offer a simple recipe for quenching that melancholy. Let these thinkers ponder the fact that the Creator is self-sufficient! And that the physical world is part of it. And so is the software of this world.

1.2 Sequential or parallel control of physical objects?
Today even children know something about personal computers. Therefore, as a child-friendly illustration of the proposed model of the physical world, we can offer the following analogy: a virtual-reality world on a computer monitor, and the software of this little world, which is not on the monitor but on another level of reality - on the computer’s hard drive. To adhere to the concept of the self-sufficiency of the physical world is roughly the same as to assert in earnest that the reasons for the blinking of the pixels on the monitor (and how coordinatedly they blink: the pictures fascinate us!) lie in the pixels themselves, or at least somewhere between them - but right there, on the monitor screen. It is clear that, with such an absurd approach, attempts to explain the causes of these wondrous pictures will inevitably require the creation of illusory entities. Lies will give rise to new lies, and so on. Moreover, confirmation of this stream of lies would seem self-evident - after all, the pixels, whatever one may say, are blinking!
Nevertheless, we have offered this computer analogy for lack of a better one. It is quite imperfect, since software support for the existence of the physical world is carried out according to principles whose implementation in today's computers is prohibitively out of reach.
The fundamental difference is this. A computer has a processor which, on each working cycle, performs logical operations on the contents of a very limited number of memory cells. This is called “sequential access mode”: the larger the task, the longer it takes to complete. One can raise the processor's clock frequency or multiply the number of processors - the principle of sequential access remains the same. The physical world lives differently. Imagine what would happen in it if electrons were controlled in sequential access mode - and each electron, in order to change its state, had to wait until all the other electrons had been polled! The point is not that the electron could be kept waiting even if the “processor clock frequency” were made fantastically high. The point is what we actually see: countless electrons change their states simultaneously and independently of one another. This means they are controlled on the principle of “parallel access” - each individually, but all at once! And this means that a standard control package is attached to each electron, in which all the envisaged behavior options for the electron are spelled out - and this package, without contacting any central “processor”, controls the electron, responding immediately to the situations in which it finds itself!
Picture it: a sentry is on duty. An alarming situation arises. The sentry grabs the phone: “Comrade captain, two big guys are coming at me! What do I do?” - and in response: “The line is busy... Wait for an answer...” Because the captain has a hundred such fellows, and he must explain to each one what to do. There it is, “sequential access” - control so centralized that it turns into a disaster. With “parallel access”, the sentry himself knows what to do: every conceivable scenario was explained to him in advance. “Bang!” - and the alarming situation is resolved. Will you say that this is “dumb”? That it is mere “automation”? But that is exactly how the physical world operates. Where have you ever seen an electron deciding whether to turn right or left as it flies past a magnet?
Of course, it is not only the behavior of electrons that is controlled by individually connected software packages. The structure-forming algorithms, thanks to which atoms and nuclei exist, also operate in parallel access mode. And even for each quantum of light, a separate channel of the navigator program is allocated, which calculates the “path” of this quantum.
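To make the contrast with sequential control concrete, here is a minimal sketch in Python - our illustration only, with invented names - of the “parallel access” idea: each particle carries its own pre-programmed control package and reacts to its local situation without consulting any central processor.

```python
import random

class ControlPackage:
    """A per-particle 'control package': every envisaged behavior
    option is spelled out in advance."""
    def __init__(self):
        self.state = 0

    def handle(self, situation):
        # Pre-programmed responses; no central dispatcher is consulted.
        if situation == "magnetic_field":
            self.state += 1
        elif situation == "collision":
            self.state -= 1
        # otherwise: no matching precondition, so do nothing

# One package per electron: no queue, no polling, no shared bottleneck.
# (The Python loop below is itself sequential -- an artifact of simulating
# this on a conventional computer, which is exactly the limitation
# discussed above.)
electrons = [ControlPackage() for _ in range(10_000)]
for e in electrons:
    e.handle(random.choice(["magnetic_field", "collision", "quiet"]))
```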

1.3 Some principles of operation of software in the physical world.
Software support for the existence of the physical world is a death sentence for many models and concepts of modern theoretical physics, since software functions according to principles whose consideration sharply limits the flight of theoretical fantasy.
First of all, if the existence of the physical world is software-supported, then this existence is completely algorithmic. Any physical object is the embodiment of a clear set of algorithms. Therefore, an adequate theoretical model of this object is, of course, possible. But this model can only be based on correct knowledge of the corresponding set of algorithms. Moreover, an adequate model must be free from internal contradictions, since the corresponding set of algorithms is free from them - otherwise it would be inoperative. Likewise, adequate models of various physical objects must be free from contradictions among themselves.
Of course, until we have full knowledge of the entire set of algorithms that ensure the existence of the physical world, contradictions in our theoretical views of it are inevitable. But a decrease in the number of these contradictions would indicate progress towards the truth. In modern physics, on the contrary, the number of blatant contradictions only grows with time - which means that whatever progress is taking place here is not progress towards the truth.
What, then, are the basic principles of organization of the software of the physical world's existence? There are programs that consist of a set of numbered command statements, executed in a determined sequence from the “begin” operator to the “end” operator. If such a program, while running, does not get stuck in some pathological situation such as an endless loop, it will certainly reach the “end” and halt successfully. As one can see, it is impossible to build software capable of functioning uninterruptedly and indefinitely out of programs of this type alone. Therefore the software of the physical world, one may assume, is built on the principle of event handlers, i.e. according to the following logic: if such-and-such preconditions are met, do this; if other preconditions are met, do that; and if neither is met, do nothing - keep everything as it is! Two important consequences follow from this.
Firstly, from operation by preconditions there follows a generalized rule of inertia: as long as there are no preconditions for a change of physical state, no changes of state are made, i.e. states remain as they were. This conclusion, of course, will not please those thinkers who believe that physical objects are in continuous interaction. Alas, experience shows that at the micro level interactions are not continuous, and changes of state occur in jumps. The illusion of continuity of interactions exists at the macro level - where this “continuity” stems from the averaging and smoothing of the results of a multitude of elementary acts of interaction, which occur according to the discrete logic of the digital world.
Secondly, from the operation of programs by preconditions it follows that there are no spontaneous physical phenomena. “Spontaneous” means occurring of itself, without apparent cause. But if we do not see the cause of a phenomenon, this does not mean there is no cause. The dependence of physical phenomena on the work of programs implies precisely that, as long as these programs do not fail, they allow nothing beyond what is provided for in them. This means that any physical phenomenon certainly has a cause. Spontaneity would be physical lawlessness. And do not the donkey's ears stick out here, seeing that this “lawlessness”, as it turns out, obeys definite laws? Thus, the “spontaneous” emission of photons, as quantum theory states, occurs with a definite probability, and the rate of “spontaneous” radioactive transformations of nuclei in a sample decreases with time according to an exponential law... Some “spontaneous” behavior! Let us not make even children laugh; let us be consistent. Let us admit that matter improvises nothing of its own - it merely obeys program directives.
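Both consequences can be compressed into a few lines of event-handler logic; this Python sketch is ours, with invented precondition names, and shows only the skeleton: act when a precondition fires, otherwise change nothing.

```python
def handle(state, preconditions):
    """Event-handler skeleton: every change has a triggering
    precondition; absent any trigger, the state persists."""
    if preconditions.get("field_present"):
        return "deflected"            # envisaged response No. 1
    if preconditions.get("collision"):
        return "energy_transformed"   # envisaged response No. 2
    return state  # generalized rule of inertia: no precondition,
                  # no change -- and hence nothing "spontaneous"

print(handle("at_rest", {}))                   # -> 'at_rest'
print(handle("at_rest", {"collision": True}))  # -> 'energy_transformed'
```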
Such subordination, we note, does not at all lead to absolute determinism, i.e. to the complete predetermination of the course of physical events under given initial conditions - as it seemed to Laplace. Laplacian determinism was a logical consequence of the equations of Newtonian mechanics. These equations are indeed deterministic, for they presuppose absolute mathematical precision in their operation: specify the initial conditions at some moment of time with absolute precision - and, using these equations, obtain absolutely precise predictions for any subsequent moment. But the real physical world is not a mathematical idealization. There is no continuous, absolute precision even for space-time physical quantities, because matter is fundamentally discrete in its structure in space and in time. A quantum pulsator is characterized by a discreteness in space - its non-zero size - and a discreteness in time - the period of its quantum pulsations. Therefore the notorious “initial conditions” cannot be specified with absolute precision. There will always be some spatio-temporal scatter, always a corresponding uncertainty - and hence there can be no talk of determinism here. Therefore the software of the physical world cannot be based on deterministic equations.
Let us add that the inadequacy of these equations to real physical laws is due to one more circumstance. Deterministic equations work well, providing reasonable predictive accuracy, as long as the process they describe is not interfered with. For example, the equations of Newtonian mechanics describe the motion of the planets quite well. But they are of little use for describing the motion of molecules in a gas: the very first collision of a molecule with another molecule - and little remains of the continuous predictability of its motion. Software of the physical world built on deterministic equations would be inoperable: the programs would immediately choke on exceptional situations. The other method of constructing programs, corresponding to the statistical method of description in physics, would be of little help here either. The statistical method describes the behavior of large collectives of particles as a whole, ignoring the fate of the individual particles of the collective. But each “exceptional situation” must be handled individually. And immediately: if, say, an inelastic collision of particles occurs, then one or another variant of energy transformations must be put into action that very second - indeed, that very femtosecond! The experimenter, meanwhile, will collect “statistics” from a sufficiently large number of observations of such inelastic collisions - and will find, for example, that in 80% of cases the particles decay according to variant No. 1, and in 20% according to variant No. 2. Yet knowledge of these percentages will not allow him to predict reliably which decay variant will be realized in any specific case. Again we see that there is no doing without event handlers, i.e. without programs operating by preconditions.
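The 80/20 example can be sketched as follows (a toy Python model of ours, not the book's): each collision is resolved individually and immediately, yet the stated percentages emerge only in the aggregate.

```python
import random

def resolve_inelastic_collision():
    """Each 'exceptional situation' is handled at once, individually;
    the branch taken in any single case is not predictable in advance."""
    return "variant_1" if random.random() < 0.8 else "variant_2"

outcomes = [resolve_inelastic_collision() for _ in range(100_000)]
print(outcomes.count("variant_1") / len(outcomes))  # ~0.80 in aggregate,
# yet knowing '0.80' tells us nothing about the next single call.
```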
And since we have returned to the principle of operation by preconditions, let us note another important feature of such operation. Namely: in any precondition, the number of physical parameters involved is necessarily limited - since any program is capable of processing the current values of only a limited number of parameters. From this obvious feature it follows, in particular, that any physical object is capable of interacting simultaneously with only a fundamentally limited number of other physical objects. Thus Newton’s law of universal gravitation, according to which each mass interacts with all other masses in the Universe, is a mathematical idealization - physically, such a state of affairs is unrealistic. In particular, as we shall see later, the region of action of a planet’s gravity does not extend to infinity but has a pronounced boundary, beyond which planetary gravity is completely absent - for the Earth this boundary lies at roughly 900 thousand kilometers. Do not take this for a joke, dear reader: when the boundaries of the regions of planetary gravity are crossed - both by light and by spacecraft - real physical effects occur that official science still cannot explain. Moreover, we see a compelling reason for the limited range of the gravitational influence of stars and planets: the software of the physical world would be monstrously and pointlessly complicated - indeed completely inoperable - if, thanks to it, our every sneeze produced a response throughout the entire Universe.
Thus another fundamental circumstance becomes clear: since physical laws are determined by software with limited capabilities, the character of these laws does not allow situations in which those limits would be exceeded. In the real physical world, the liberties with energy that are permissible in mathematics are unacceptable - for example, singularities in which the amount of energy tends to infinity. Equally unacceptable are objects with an infinite number of degrees of freedom and, hence, an infinite energy content - and precisely such objects are, for example, the electromagnetic field and the “physical vacuum”. We focus on mathematical liberties with energy because the entire content of physical laws, in our view, comes down to a simple algorithm: “In such-and-such a situation, transform such-and-such an amount of energy from one form into another.” Of course, in any such transformation the amount of energy in the new form equals the amount of energy there was in the original form. From this, in our view, comes the law of conservation of energy - a fundamental and universal physical law.
It is appropriate to note that, owing to the fundamental nature of such a physical quantity as energy, every physical object necessarily possesses energies, and with any change of physical state certain energy transformations necessarily occur. Moreover, the magnitudes and forms of an object's energies are its most important physical characteristics, and the transformations of energies are the essence of the changes of state that take place. Therefore, if a theoretical model gives no clear answer to the questions of what the energies of a physical object are, or what energy transformations occur in a given physical process, then such a model cannot claim to correspond to physical realities. Thus the official theory of gravity - the general theory of relativity - cannot be called a physical theory, if only because for almost a century it has avoided discussing the question of the energy of the gravitational field and, accordingly, claims that no energy transformations occur during the free fall of a test body. Meanwhile, even children know that a brick dropped from a greater height hits the head harder. If theorists do not understand that, by falling longer, a brick gains more energy of motion, they can easily verify this from their own experience.
And the realities of the “digital” world are such that they express in pure form the essence of this or that form of physical energy. One need only keep in mind that every form of physical energy necessarily corresponds to some form of motion. Thus, the self-energy of an elementary particle is the energy of its quantum pulsations, i.e. of cyclic changes of states. The binding energy associated with a mass defect is the energy of cyclic transfers of quantum pulsations in a pair of bound particles. The energy of motion of an elementary particle is the energy of the chain of its elementary displacements - its quantum steps.
And here we discover something remarkable. The energy of any motion is always fundamentally positive. If every form of physical energy corresponds to some form of motion, then no physical energy can be negative. Problem-free transformations of some forms of energy into others are possible only for positive energies, since these transformations are consequences of transformations of the corresponding forms of motion. Purely mathematically one can increase a positive kinetic energy by decreasing a negative potential energy, but such mathematics bears no relation to physical realities. People can live on credit, but physical laws cannot: here the exchange is always immediate and equivalent.
For comparison: in orthodox physics the essence of most forms of energy is not explained at all. What, for example, is the nature of a body’s self-energy, mc²? For a hundred years science has been unable to answer this question! What is the nature of the so-called potential energy of a body, which depends only on its location? Is it not a fiction - this potential energy - required only to make ends meet in balances involving kinetic energy? And what is the nature of the energy of chemical bonds, part of which is supposedly released as heat in combustion reactions? “The reactant molecules were weakly bound, the product molecules became more strongly bound - the difference was released as heat.” Is that all? How long will this babble continue?
Finally, since both the possession of energies in various forms by physical objects and the transformation of energies from one form into another are determined by program instructions, one should keep in mind a fundamental property of any program instructions: their current directives are, by definition, unambiguous. A program may be highly “sophisticated”, heavily branched, and provide for an enormous (though always finite) number of variants for handling situations - but once the program has identified the occurrence of some precondition, a single handling variant is put into action, the one corresponding to that very precondition. From this clearly follows the most important principle by which the physical world exists: all physical phenomena are unambiguous. That is, all current physical states are unambiguous, and changes of physical states also occur unambiguously, with unambiguous energy transformations - regardless of the “points of view” of crooked and cross-eyed observers. Thus there cannot be physical forces that act only in some reference frames: either a force acts or it does not. Therefore the concept of inertial forces, which act only in accelerated reference frames, is utterly unphysical. And the favorite plaything of the special theory of relativity - the twin paradox (also known as the clock paradox) - is a dummy generated by a rotten theory, for in practice this paradox does not exist. Experience with transportable atomic clocks, including those installed on board navigation satellites, clearly shows that the results of comparisons of pairs of moving clocks are always unambiguous: if clock No. 1 lags behind clock No. 2 by, say, 300 nanoseconds, this means that clock No. 2 is ahead of clock No. 1 by the same 300 nanoseconds. Moreover, these unambiguous effects due to the motion of the pair of clocks cannot be explained in terms of the relative speed of the clocks in the pair! To agree with experience, one must calculate for each clock an individual change of rate, corresponding to the individual speed of that clock's motion, and then take the difference of the effects accumulated by the two clocks. Practice clearly shows that an adequate description of the physical world cannot be constructed in terms of relative speeds - even in the case of transportable clocks one has to operate with their individual, unambiguous speeds. Below we will show how to measure these speeds correctly.
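The comparison procedure just described can be sketched numerically. We assume here, as standard practice with transported atomic clocks does, a fractional rate change of -v²/2c² for a clock whose individual speed is v; the speeds and duration below are invented for illustration.

```python
C = 299_792_458.0  # speed of light, m/s

def accumulated_offset(v, duration_s):
    """Time offset accumulated by one clock, computed from its own
    individual speed v (quadratic effect, fractional rate -v^2/2c^2)."""
    return -(v ** 2) / (2 * C ** 2) * duration_s

# Illustrative numbers only: clock 1 transported at 250 m/s for a day,
# clock 2 carried along at 100 m/s (say, by the rotating surface).
dt1 = accumulated_offset(250.0, 86_400.0)
dt2 = accumulated_offset(100.0, 86_400.0)

# The observable comparison is the DIFFERENCE of the individually
# accumulated effects -- not a function of the relative speed 150 m/s.
print((dt1 - dt2) * 1e9, "ns")  # ~ -25 ns
```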
In accordance with the logic of the above, we attach exceptional importance to the unambiguity of physical phenomena.
Firstly, the work of the programs, by definition, proceeds in such a way that the current states of physical objects are fundamentally unambiguous. Therefore, in our view, the central concept of quantum mechanics - mixed states - is a great absurdity. We refer to the claim that a microobject can be in several “pure” states at once, possessing, for example, three different values of energy in one and the same form. To allow such miracles, which violate the law of conservation of energy, is for theorists to admit their inability to explain the phenomena of the microworld on the basis of reasonable ideas.
Secondly, if, in addition to ambiguities of being in one state or another, ambiguities were allowed in the changes of physical states, then, as a consequence, violations of the law of conservation of energy would be allowed. And precisely such violations, again, were what theorists needed to solve their theoretical problems: they called to their aid the uncertainty principle, “according to which the law of conservation of energy may appear to be violated” [H1] over sufficiently small time intervals.
The ambiguities of being in states and the ambiguities of changing states, permitted by the concept of mixed states and by the uncertainty principle, show the depth of the crisis in modern theoretical physics: it has itself trampled on the most sacred thing it had - the law of conservation of energy. What utter unprincipledness! And how utterly inadequate to the fact that the physical world is the embodiment of “dumb automation”!
So, let us briefly repeat the above-mentioned principles of operation of the software of the physical world. Firstly, these programs work on the principle of event handlers, i.e. according to preconditions; secondly, the capabilities of these programs are limited; and thirdly, current directives defining the states of physical objects, as well as changes in these states, are always fundamentally unambiguous.

1.4 The concept of a quantum pulsator. Mass.
To create the simplest digital object on a computer monitor screen, one needs a simple program that makes a pixel “blink” at a certain frequency, i.e. alternate between two states - in one of which the pixel glows, while in the other it does not.
Similarly, we call the simplest object of the “digital” physical world a quantum pulsator. It appears to us as something that is alternately in two different states, which cyclically replace each other with a characteristic frequency - this process is directly set by the corresponding program, which forms a quantum pulsator in the physical world. What are the two states of a quantum pulsator? We can liken them to logic one and logic zero in digital devices based on binary logic. The quantum pulsator expresses, in its purest form, the idea of ​​being in time: the cyclic change of two states in question is an indefinitely long movement in its simplest form, which does not at all imply movement in space.
The quantum pulsator remains in existence as long as the chain of cyclic changes of its two states continues: tick-tock, tick-tock, and so on. If a quantum pulsator “freezes” in the “tick” state, it drops out of existence. If it “hangs” in the “tock” state, it likewise drops out of existence!
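In code, the pixel analogy from the beginning of this section looks like this (a minimal Python sketch of ours):

```python
import itertools

def quantum_pulsator():
    """The simplest 'digital' object: an endless cyclic alternation of
    two states. If the cycle ever halts, the object drops out of existence."""
    yield from itertools.cycle(("tick", "tock"))

p = quantum_pulsator()
print([next(p) for _ in range(6)])  # ['tick', 'tock', 'tick', 'tock', ...]
```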
The fact that the quantum pulsator is the simplest object of the physical world, i.e. an elementary particle of matter, means that matter is not divisible ad infinitum. The electron, being a quantum pulsator, does not consist of any quarks - those are the fantasies of theorists. At the quantum pulsator a qualitative transition takes place: from the physical level of reality to the program level.
Like any form of motion, quantum pulsations have energy. However, a quantum pulsator differs fundamentally from a classical oscillator. Classical oscillations follow a sinusoid, and their energy depends on two physical parameters - frequency and amplitude - whose values can vary. For quantum pulsations, obviously, the amplitude cannot change, i.e. it cannot be a parameter on which the energy of the quantum pulsations depends. The only parameter on which the energy E of the quantum pulsations depends is their frequency f - a purely temporal characteristic. Moreover, this dependence is the simplest one, linear:
E = hf,     (1.4.1)
where h is Planck's constant. Formula (1.4.1) should not be confused with the similar formula that is believed to describe the energy of a photon - despite the fact that no clear answer has yet been given to the question of what oscillates in a photon. Below we will present a body of evidence that photons - in the traditional sense - do not exist (3.10). Here we are speaking not of photons but of matter: we claim that formula (1.4.1) describes the self-energy of an elementary particle of matter.
The self-energy of an elementary particle is also described by another formula - Einstein's, which has been called the “formula of the twentieth century”:
E = mc²,     (1.4.2)
where m is the particle's mass and c is the speed of light. Combining formulas (1.4.1) and (1.4.2) gives the Louis de Broglie formula:
hf = mc².     (1.4.3)
The meaning we see in this formula is that the three characteristics of a quantum pulsator - its self-energy, its quantum pulsation frequency, and its mass - are directly proportional to one another, being connected through fundamental constants; hence these three characteristics represent, in essence, one and the same physical property. From this a consistent and unambiguous definition of mass follows naturally: the mass of an elementary particle is, up to the factor c², the energy of the quantum pulsations of that particle. We emphasize that, on this approach, mass is equivalent to one single form of energy - namely, the energy of quantum pulsations. All other forms of energy do not exhibit the properties of mass - contrary to Einstein’s approach, in which any energy whatsoever is equivalent to mass. The universality of Einstein's approach, as it turns out, is untenable: because of it, physics has found itself in a dead end - still unable to explain, for example, the origin of the mass defect in composite nuclei. And the solution to this mystery, as we will try to show, is simple (4.7): part of the self-energy of the bound nucleons is converted into their binding energy, which no longer exhibits the properties of mass.
De Broglie's formula (1.4.3) is so fundamental that, in our view, it is the true “formula of the twentieth century” - not its castrated Einsteinian version (1.4.2). Sadly, de Broglie himself came to consider his formula erroneous - he was convinced that it was relativistically non-invariant! After all, the special theory of relativity (STR) states that as a particle's speed increases its mass undergoes a relativistic increase, while its frequency, on the contrary, decreases owing to relativistic time dilation. De Broglie, alas, did not know that the evidence for relativistic mass growth was false from the very beginning (4.5): a fast electron is deflected less strongly by a magnetic field not because of an increase in the electron's mass, but because of a decrease in the effectiveness of the magnetic influence. Nor was de Broglie presented with evidence of relativistic time dilation - it did not yet exist. Such evidence appeared later, but we know that it too is false (1.12-1.15): in it, the desired is passed off as the real. Neither relativistic mass growth nor relativistic time dilation exists in nature - therefore, whatever happens to a particle, relation (1.4.3) always remains valid! For example, for an electron, whose reference rest mass is 9.11×10⁻³¹ kg, relation (1.4.3) gives a quantum pulsation frequency of 1.24×10²⁰ Hz.
Note that, unlike official science, which for more than a hundred years has failed to explain the nature of the self-energy (1.4.2), we do give such an explanation: the self-energy of a particle is the energy of its quantum pulsations!
Concluding this brief introduction to the quantum pulsator, let us add that it has a characteristic spatial size, which we define as the product of the period of the quantum pulsations and the speed of light. Using (1.4.3), it is easy to see that the spatial size so introduced, for a particle of mass m, is equal to its Compton length: l_C = h/(mc). For an electron at rest, this length is 0.024 angstrom.
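The two numbers quoted above are easy to check from relation (1.4.3); the following few lines of Python reproduce them from the standard constants.

```python
h = 6.626e-34     # Planck's constant, J*s
c = 2.998e8       # speed of light, m/s
m_e = 9.11e-31    # electron rest mass, kg

f = m_e * c**2 / h      # pulsation frequency from hf = mc^2
l_C = h / (m_e * c)     # Compton length l_C = h/(mc)

print(f"f   = {f:.3e} Hz")                  # ~1.24e20 Hz
print(f"l_C = {l_C / 1e-10:.3f} angstrom")  # ~0.024 angstrom
```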
It remains, of course, to clarify what a “resting” electron is - what the “rest” mass of an electron means. Relative to which frame of reference should we speak of the electron's rest or motion? After all, there are many reference frames, and the velocities of one and the same electron relative to them are different - while above we declared the unambiguity of the states of physical systems to be one of the main physical principles. The point is not only that, relative to the observer Vasya, the electron's speed is one thing, while relative to the observer Petya it is another. The point is also that different speeds correspond to different kinetic energies - and the kinetic energy of the electron must be unambiguous, in accordance with the law of conservation and transformation of energy. We will not imitate the theorists, who permit themselves any violation of this law they please. We acknowledge this law and put it at the forefront. We are therefore obliged to explain what the “true”, unambiguous speed of a physical object is, and how to calculate it correctly. This question is addressed in 1.6.

1.5 The unsuitability of the concept of relative velocities for describing the realities of the physical world.
“The speeds of movement of bodies are relative, and it is impossible to say unambiguously who is moving relative to whom, because if body A moves relative to body B, then body B, in turn, moves relative to body A...”
These conclusions, implanted in us from school onward, look impeccable from the formal-logical point of view. But, from the physical point of view, they would be suitable only for an unreal world in which there are no accelerations. Not for nothing did Einstein teach that STR is valid only for reference frames “moving relative to each other rectilinearly and uniformly” [E1] - yet he never pointed to a single such frame in practice. There has been no progress on this question to this day. Is it not comical that, for more than a hundred years, the basic theory of official physics has not specified its practical domain of applicability?
And the reason for this anecdotal situation is very simple: in the real world, because of physical interactions, accelerations of bodies are inevitable. And then, trampling on formal logic, motion takes on an unambiguous character: the Earth revolves around the Sun, a pebble falls onto the Earth, and so on. Thus the unambiguity of the kinematics when a pebble falls onto the Earth - i.e. the non-physicality of the situation in which the Earth falls onto the pebble - is confirmed by the law of conservation of energy. Indeed, if the pebble strikes the Earth with collision speed V, then the kinetic energy that can be converted into other forms is half the product of the square of the speed V and the mass of the pebble - certainly not the mass of the Earth. This means it was the pebble that gained this speed, i.e. the case is adequately described in the reference frame attached to the Earth. But this turn of events did not suit the relativists. To save the concept of relative velocities, they went so far as to claim that, for this case, the frame attached to the pebble is supposedly no worse than the frame attached to the Earth. True, in the frame attached to the pebble, the Earth moves with acceleration g=9.8 m/s² and, gathering speed V, acquires a monstrous kinetic energy. By the relativists' logic, the Earth is accelerated at g by the inertial force that acts in the frame attached to the pebble. At the same time, the relativists do not trouble themselves to explain where the Earth’s monstrous kinetic energy comes from, or where this energy goes once the Earth stops dead, having crashed into the pebble. Instead of such explanations we are fed the now-textbook nonsense about the reality of inertial forces: if, they say, the train you are riding in suddenly brakes, dear reader, it is the inertial force that will throw you forward and injure you! This intelligible explanation has only one flaw: it is silent about the fact that here, again, it is the kinetic energy of the passenger, and of nothing else, that will be spent on inflicting the injuries. You can easily verify this: get up the same initial speed on your own, without a train's help, and run into a pole or a solid wall. The injuries will come out no worse - without the help of any inertial forces. Our point is that the so-called “real forces of inertia”, which act only in accelerated frames, are nothing but theoretical fabrications - while truly real physical processes and real energy transformations occur regardless of the reference frame in which their theoretical analysis is carried out.
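The energy bookkeeping of the pebble example is worth making explicit; the pebble's mass and fall height below are our own illustrative numbers.

```python
g = 9.8            # m/s^2
M_EARTH = 5.97e24  # kg
m_pebble = 0.01    # kg (illustrative)
height = 10.0      # m  (illustrative)

V = (2 * g * height) ** 0.5        # collision speed, ~14 m/s

ke_pebble = 0.5 * m_pebble * V**2  # ~1 J: the energy actually available
                                   # for conversion into heat and sound
ke_earth = 0.5 * M_EARTH * V**2    # ~6e26 J: the absurd figure implied by
                                   # 'the Earth falls onto the pebble'
print(ke_pebble, ke_earth)
```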
Moreover, if we recall that real energy transformations must occur unambiguously (1.3), then the participation of kinetic energies in these transformations implies something remarkable. Namely: since kinetic energy is quadratic in speed, then, when the accelerated motion of a body is analyzed in different frames, in which the body's instantaneous speed differs, the same increment of speed yields different increments of kinetic energy in different frames. From the unambiguity of the increments of kinetic energy it follows that the instantaneous speed of the body must also be unambiguous, i.e. an adequate description of the body's motion should be possible only in some one frame - the one in which the body's speed is “true”.
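This can be seen directly by writing out the standard algebra (our rendering):

ΔE = ½m(V + ΔV)² − ½mV² = mVΔV + ½m(ΔV)².

The first term contains the instantaneous speed V itself, so one and the same ΔV yields different increments ΔE in frames that assign the body different values of V.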
Incidentally, the unambiguity of the increments of a test body's kinetic energy, in accordance with the increments of its “true” speed, would be very problematic if the body were attracted to several other bodies at once and, accordingly, acquired gravitational acceleration toward several attracting centers at once - as the law of universal gravitation requires. For example, if an asteroid experienced gravity toward both the Sun and the planets, then what would be the “true” speed of the asteroid, whose increments determine the increments of its kinetic energy? The question is not trivial. And, rather than agonize over it, it is far simpler to delimit in space the regions of action of the gravity of the Sun and of the planets - so that a test body, wherever it is, always gravitates toward only one attracting center. For this, the regions of action of planetary gravity must not intersect one another, and within each region of planetary gravity solar gravity must be “switched off”. With such an organization of gravity, i.e. on the principle of its unitary action (2.8), the problem of ensuring the unambiguity of the increments of a test body's kinetic energy is solved in the simplest way - and with it the problem of reckoning the “true” velocities of physical objects. It is this approach that explains at one stroke the facts, hushed up by official science, concerning the motion of asteroids (2.10) and interplanetary probes (1.10), the aberration of starlight (1.11), the linear Doppler effect in planetary radar (1.9), and the quadratic Doppler shifts in the rates of atomic clocks (2.8).
Physicists spent great effort searching for a single privileged frame of reference - one that would adequately define the absolute velocities of all physical objects in the Universe at once. But that task, alas, was posed incorrectly. Experience shows that no such frame, one for the entire Universe, exists; instead there is a hierarchy of frames for the adequate determination of absolute velocities - and the working regions of these frames are delimited in space, corresponding to the delimitation of the regions of action of the gravity of large cosmic bodies. Taking this delimitation into account, we shall speak not of the absolute velocities of physical objects, but of their local-absolute velocities, which have a clear physical meaning.

1.6 The concept of frequency slopes. The concept of local-absolute speed.
As we stated above (1.4), the frequency of the quantum pulsations of, say, an electron is directly dictated by the corresponding program instructions. The value of this frequency could have been set independently of the electron's location: wherever in the Universe it found itself, the frequency of its quantum pulsations would be the same. Then, with respect to the frequencies of quantum pulsations, space would be completely homogeneous and isotropic - and the delimitation of the regions of unitary action of gravity would have to be ensured by manipulating not the frequencies of quantum pulsations but some other physical parameters.
However, as noted above, the frequencies of quantum pulsations, i.e., in fact, the masses of elementary particles, are their most fundamental property, and gravity, as is known, is the most universal physical effect to which all matter is subject. Doesn't this coincidence indicate that the delimitation of the areas of unitary action of gravity is caused precisely by program manipulations of the frequencies of quantum pulsations?
In our view, this is just how things stand: the region of action of planetary gravity is, in terms of program instructions, a spherically symmetric “frequency funnel”. This means that, within the region of planetary gravity, the prescribed frequency of quantum pulsations is a function of distance from the “center of gravity”: the greater this distance, the higher the frequency of quantum pulsations. The frequency gradients of the quantum pulsations thus define the directions of the local verticals. It is these frequency gradients, programmatically prescribed over a certain region of space, that we call “frequency slopes”. By the logic of the above, the planetary frequency funnels are built into the slopes of the far grander solar frequency funnel. Moreover, a planetary frequency funnel is capable of moving as a whole along the solar frequency slope, performing its orbital motion. And wherever the planetary frequency funnel happens to be in its orbit, the switching-off of the solar frequency slope within its volume can be ensured without any particular difficulty by purely software means - since, let us emphasize once more, frequency slopes and frequency funnels are not a physical but a software reality. But one leading to physical effects!
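The book has not yet specified the functional form of a frequency funnel, so the dependence below is purely our illustrative assumption - a prescribed frequency reduced toward the center by the familiar factor GM/(rc²); only the monotonic growth with distance matters for the argument.

```python
G = 6.674e-11    # gravitational constant
M = 5.97e24      # central mass, kg (Earth, illustrative)
C = 2.998e8      # speed of light, m/s
F0 = 1.24e20     # electron pulsation frequency far from the funnel, Hz

def prescribed_frequency(r):
    """Illustrative 'frequency funnel': the prescribed pulsation
    frequency grows with distance r from the center of gravity.
    The form 1 - GM/(r c^2) is our assumption, not the book's."""
    return F0 * (1.0 - G * M / (r * C**2))

r1, r2 = 6.371e6, 7.0e6  # two distances from the center, m
print(prescribed_frequency(r2) > prescribed_frequency(r1))  # True:
# the frequency gradient points outward, along the local vertical.
```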
Before speaking of these effects, let us define the local-absolute speed of a physical object: it is the object's speed relative to the local section of the frequency slope. At first glance such a definition carries no practical value: how, one may ask, can we determine a speed relative to some program instructions? - after all, the great Mach taught that in practice “we can determine the speed of a body only relative to other bodies”! Fortunately, one need not search long for a reference body by which to find local-absolute velocities correctly: the Sun and the planets rest at the centers of their frequency funnels. Hence, within a planetary frequency funnel the required reference body is the planet, while in interplanetary space, unaffected by the planetary frequency funnels, the required reference body is the Sun.
A fair question: why, given the obvious availability of reference bodies for correctly finding the local-absolute speed, do we nevertheless define it relative to the local section of the frequency slope? We answer: because such a definition, in our view, more accurately reflects the realities of the “digital” physical world. Firstly, frequency slopes are formed purely by software and exist independently of massive bodies - so that, in principle, a suitable reference body may simply be absent. Secondly, as we shall see later, it is the frequency slopes that provide the energy conversions during the free fall of small bodies (2.7). Thirdly, it is the frequency slopes that define the “inertial space” relative to which the speed of a physical object's motion is “true”, i.e. local-absolute. In effect, the frequency slopes play the role of the ether, the need for which was felt by thinkers who realized that the concept of relative velocities does not stand up to criticism. But those thinkers believed the ether to be a physical object - and for that very reason a workable model of the ether could not be built: its physical properties came out too fantastic and contradictory. We propose a different way. The frequency-slope model is a ready-made model of the ether, free of contradictory physical properties, since this ether is not physical in nature but supraphysical, programmatic. It would seem that this very ether is what the biblical term “firmament” names - a term which, in our view, is remarkably apt.
In particular, within the volume of the region of the Earth's gravity (whose radius is about 900 thousand kilometers), the “firmament” is monolithically motionless relative to the geocentric non-rotating frame of reference - notwithstanding that the region of the Earth's gravity moves in orbit around the Sun, and that the solar system moves somehow through the Galaxy. Thus, in near-Earth space the local-absolute speed of an object is its speed in the geocentric non-rotating frame. If you, dear reader, are now sitting at a table, i.e. at rest relative to the Earth's surface, your local-absolute speed is not zero: it equals the linear speed of the daily rotation at your latitude and is directed toward the local east. If you are moving relative to the Earth's surface, then to find your local-absolute speed you must combine the corresponding velocity vectors.
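For a reader who wants the number for their own latitude, here is a short Python sketch (constants standard, latitude illustrative):

```python
import math

OMEGA = 7.292e-5   # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.371e6  # mean Earth radius, m

def rotation_speed(lat_deg):
    """Eastward linear speed of the surface due to the daily rotation --
    the local-absolute speed of an observer 'at rest' at this latitude."""
    return OMEGA * R_EARTH * math.cos(math.radians(lat_deg))

print(rotation_speed(0.0))   # ~465 m/s at the equator
print(rotation_speed(55.0))  # ~267 m/s at 55 degrees (illustrative)

# For a body moving relative to the surface, its local-absolute velocity
# is the vector combination of that motion with this eastward vector.
```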
Note that a convenient physical implementation of reference to the geocentric non-rotating frame already exists in practice - satellite navigation systems such as GPS. The orbital planes of the GPS satellites maintain their orientation relative to the "fixed stars", while the Earth, at the center of the "rose" of these orbits, performs its daily rotation. The velocity of a vehicle in the GPS system is precisely its local-absolute velocity. In practice one usually needs the ground speed of the vehicle, i.e. the horizontal component of its velocity relative to the earth's surface. The ground speed is found by introducing into the GPS velocity the appropriate correction for the motion of the local portion of the earth's surface due to the Earth's daily rotation. As you can see, for the vicinity of the Earth a procedure has already been implemented for measuring, in real time, the local-absolute velocities of physical bodies. There was an important practical need for this procedure: it is the local-absolute velocity vector of a spacecraft that must be known in order to control its flight correctly - especially if its trajectory is not ballistic. If, when calculating thrust and fuel consumption for maneuvers, anything other than the local-absolute speed is used as the current speed of the vehicle, then its flight along the desired trajectory to the desired destination becomes practically impossible.
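As an illustration, here is a minimal sketch (ours, not any GPS specification) of the correction just described: the velocity of the local surface point, due to the daily rotation, ties a velocity given in the geocentric non-rotating frame to the ground velocity.

    import numpy as np

    OMEGA_E = 7.292115e-5  # Earth's rotation rate, rad/s

    def surface_rotation_velocity(r_inertial):
        """Velocity, in the geocentric non-rotating frame, of a point fixed
        to the rotating Earth: v = omega x r."""
        omega = np.array([0.0, 0.0, OMEGA_E])   # rotation axis along z
        return np.cross(omega, r_inertial)

    # A receiver at rest on the equator, ~6371 km from the Earth's center:
    r = np.array([6.371e6, 0.0, 0.0])            # m
    print(surface_rotation_velocity(r))          # ~465 m/s, pointing east -
                                                 # its local-absolute velocity
    # A vehicle moving 50 m/s eastward over that point: its GPS (inertial)
    # velocity minus the surface-point velocity recovers the ground velocity.
    v_gps = surface_rotation_velocity(r) + np.array([0.0, 50.0, 0.0])
    print(v_gps - surface_rotation_velocity(r))  # [0, 50, 0] m/s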
It should be added that the local portion of the frequency slope is an "inertial background" against which the local-absolute velocities of more than just physical bodies are reckoned. The phase speed of light in vacuum is a fundamental constant also only in the local-absolute sense. In particular, in the region of the earth's gravity, the phase speed of light in vacuum behaves as the constant "c" only with respect to a single reference frame - the geocentric non-rotating one - regardless of how the region of the earth's gravity moves in the Solar system and in the Galaxy (3.8).

1.7 The truth about the result of the Michelson-Morley experiment.
The special principle of relativity, translated into generally understandable language, states that no physical experiment inside a laboratory can detect its rectilinear uniform motion. That is, it is in principle impossible to build a device that would detect its own speed autonomously - without looking at the "fixed stars" or at navigation satellites.
On the contrary, by the logic of the above, such detection is possible - but only of the local-absolute speed (1.6). A device capable of this, resting on the earth's surface, would respond neither to the speed of the Earth's orbital motion around the Sun, nor to the speed of the solar system's own motion in the Galaxy. The only speed it would respond to is its linear speed due to the Earth's rotation on its axis - because for such a device there would be only one "ethereal breeze": blowing from the east at a speed equal to the linear speed of the daily rotation of the earth's surface at the local latitude.
Let us recall: the official history of physics holds that the persistent search for the ethereal wind was not crowned with success. The key episode here is the Michelson-Morley experiment. The diagram of the Michelson interferometer, the idea of the experiment and the calculation of the path difference of the rays are given in many textbooks, and we will not dwell on them. The "negative result" of the Michelson-Morley experiment is widely known: allegedly, no ethereal wind was detected. This is not true. The experiment was aimed at detecting the ethereal wind caused by the Earth's orbital motion around the Sun - and that wind indeed was not detected. But an "ethereal breeze" from the east was detected!
Indeed, S.I. Vavilov [B1] processed the results of the 1887 Michelson-Morley experiment [M1] and calculated the most reliable shifts of the interference fringes as a function of the orientation of the device. From the Earth's orbital motion, at a speed of 30 km/s, an effect with a range of 0.4 fringes was expected. Vavilov's numbers demonstrate a wave with a span of 0.04-0.05 fringes, whose humps and troughs correspond to the orientations of the arms of the device along the "north-south" and "west-east" directions - regardless of the time of day and the season.
Official science avoids discussing this impressive effect. We shall try to explain it. For an arm length L = 11 m, a wavelength λ = 5700 Å, and a device speed V = 0.35 km/s (at the latitude of Cleveland), a shift of 0.05 fringes is too large to be explained by the traditional calculation, which gives an expected fringe shift of (2L/λ)(V²/c²), where c is the speed of light. But we noticed the following: from experiment to experiment following the Michelson-Morley scheme, it was the arm length that varied most strongly, and the increased "non-zero" results - in particular, Miller's - were obtained precisely at increased arm lengths. Could some effect depending on the arm length have gone unaccounted for?
Please note: the Michelson-Morley interferometer has a non-zero wedge angle, i.e. angle between the planes of the equivalent air gap. A non-zero wedge angle γ - and, accordingly, a non-zero convergence angle 2γ of the interfering beams - is required here so that the interference pattern consists of fringes of equal thickness rather than fringes of equal inclination. Our analysis [G1] shows that, due to the non-zero wedge angle, the differential shift of the interference fringes between the two characteristic orientations of the device mentioned above is Δn ≈ 4Lγ(V/c)/λ. Since the experimenters did not take this effect into account, they did not report the magnitude of the wedge angle. But if we substitute into this expression for Δn the value 0.05 named by Vavilov, together with the values of the other parameters given above, then for the wedge angle we obtain γ ≈ 5.5×10⁻⁴ rad. This value for the wedge angle of the Michelson interferometer seems to us entirely realistic. Therefore we may suppose that Michelson and Morley, in the 1887 experiment, actually detected the local-absolute speed of the device.
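This arithmetic is easy to check (a sketch of ours, using the parameter values quoted above):

    # Check of the fringe-shift arithmetic quoted above.
    c = 3.0e8         # speed of light, m/s
    L = 11.0          # arm length, m
    lam = 5700e-10    # wavelength, m
    V_orb = 30e3      # Earth's orbital speed, m/s
    V_day = 0.35e3    # diurnal rotation speed at Cleveland's latitude, m/s

    classic = lambda V: (2 * L / lam) * (V / c) ** 2   # second-order shift
    print(classic(V_orb))   # ~0.39 fringes: the expected "orbital" effect
    print(classic(V_day))   # ~5e-5 fringes: far below the observed 0.05

    # First-order wedge-angle term dn = 4*L*gamma*(V/c)/lam, solved for
    # gamma with dn = 0.05 (Vavilov's value):
    gamma = 0.05 * lam / (4 * L * (V_day / c))
    print(gamma)            # ~5.5e-4 rad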
And what else could the Michelson-Morley device have reacted to, other than its local-absolute speed? It is not a Sagnac interferometer, in which light travels in opposite directions around a contour of non-zero area, whereby the device's own rotation is detected - the Michelson-Morley interferometer has zero contour area! Nor is it an accelerometer of the kind used, for example, in inertial navigation systems, where acceleration is detected and then integrated to yield the speed. No, the Michelson-Morley device responded directly to its speed, trampling the principle of relativity into the dust. That is why relativists keep silent about the ethereal wind from the east, which Michelson and Morley did discover - while loudly proclaiming that the ethereal wind due to the Earth's orbital motion was not discovered.
Of course, this deception had to be backed up by a whole string of further deceptions, called in their language "analogues of the Michelson-Morley experiment". These "analogues" are a series of experiments, carried out according to various schemes, in which the search for the ethereal wind gave almost perfectly null results - as if this wind were completely absent. That the Earth's orbital motion did not manifest itself in these experiments goes without saying. But why did the Earth's rotation about its axis not manifest itself either? Because this non-manifestation was predetermined either metrologically or methodologically. That is, either the accuracy of the experiment was insufficient to detect an ethereal breeze from the east with a speed of ~300 m/s, or the experiment was designed in such a way that detection of this breeze was excluded in principle.
Thus, Essen [E1] looked for variations in the frequency of a hollow cylindrical resonator at 9200 MHz which would occur when its orientation changed relative to the line of the ethereal wind. With its axis horizontal, the resonator rotated in the horizontal plane at one revolution per minute, and at every 45° of rotation its frequency was measured against a quartz standard. The relative difference of the resonator frequencies for the positions along and across the line of the ethereal wind would be (1/2)(V²/c²). For an ethereal wind of speed V = 30 km/s the effect would be ~5×10⁻⁹. Essen's data show a wave with a magnitude an order of magnitude smaller. Such a wave indicated the absence of an "orbital" ethereal wind; but the origin of the wave itself remained unclear - and, in its presence, there was no chance of detecting the wave due to the "diurnal" ethereal wind, whose swing is three orders of magnitude smaller still.
Townes and co-workers [T1] measured the beat frequency of a pair of ammonia masers installed with their molecular beams facing each other, along the "west-east" line. The installation was then turned through 180° and the beat frequency measured again. These measurements were carried on for more than half a day, so that the Earth turned through more than half a revolution about its axis. An "orbital" ethereal wind would have been detected by such a technique, but the "diurnal" wind would not: when the installation was turned, the Doppler frequency shifts of the masers simply exchanged roles, and the beat frequency remained the same.
In another experiment, carried out under Townes's direction [T2], the beat frequency of two IR lasers with orthogonal resonators was studied as the installation was rotated by 90° between positions in which one resonator lay along the "north-south" line and the other along the "west-east" line. It was assumed that a resonator oriented parallel to the "ethereal wind" has frequency f₀(1−β²), and a resonator oriented orthogonally to it has frequency f₀(1−β²)^1/2, where f₀ is the unperturbed frequency and β = V/c. Since f₀ = 3×10¹⁴ Hz, a speed of 30 km/s would produce a differential effect with a range of 3 MHz. The range of the detected effect was only 270 kHz, and it was almost independent of the time of day - although the manifestation of the "ethereal wind" due to the Earth's orbital motion should have been maximal at 0 and 12 o'clock and minimal at 6 and 18 o'clock local time. The detected effect was interpreted as the result of magnetostriction in the metal rods of the resonators under the influence of the Earth's magnetic field. The linear speed due to the daily rotation would give here an effect with a swing of about 300 Hz, which would be in phase with the magnetostriction effect and likewise independent of the time of day - so its non-detection was predetermined even methodologically.
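The scale of these numbers is easy to reproduce (our own sketch, using the leading-order difference between the two resonator frequencies quoted above):

    import math

    f0 = 3e14    # laser frequency, Hz
    c = 3.0e8    # m/s

    def swing(V):
        """Peak-to-peak beat change when the 90-degree rotation swaps the
        roles of f0*(1 - b^2) and f0*sqrt(1 - b^2), with b = V/c."""
        b = V / c
        return 2 * abs(f0 * (1 - b**2) - f0 * math.sqrt(1 - b**2))

    print(swing(30e3))    # ~3 MHz: expected from the orbital motion
    print(swing(300.0))   # ~300 Hz: expected from the diurnal rotation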
A special group comprises experiments in which very high measurement accuracy was ensured - but, alas, the orientation of all elements of the installation relative to the earth's surface was held constant. Naturally, there could be no differential effects due to the linear speed of the daily rotation. It therefore did not manifest itself in any way - for example, in the experiment with a frequency standard on cooled ions [P1], in two-photon absorption spectroscopy on an atomic beam [P1], or in the comparison of the frequencies of two visible lasers stabilized by different methods [X1].
Meanwhile, with sufficient measurement accuracy and the correct methodology, the linear velocity of the laboratory due to the daily rotation of the Earth is successfully detected. We will talk about two such experiments.
Champney et al. [Ch1] placed a Mössbauer emitter and absorber (Co-57 and Fe-57) on diametrically opposite sections of the rotor of an ultracentrifuge rotating in a horizontal plane. One gamma-ray detector was installed on the northern side of the rotor, the second on the southern. The detectors were covered with lead screens with diaphragms that passed only those quanta which traveled in a narrow alignment coaxial with the "emitter-absorber" line, when this line was oriented along the "north-south" direction.

Fig.1.7.1

The resonant absorption peak at 14.4 keV, previously obtained by the linear Doppler method (see Fig.1.7.1), corresponded to a divergence speed of the emitter and absorber of ~0.33 mm/s, the energy of the working transition of the absorber being lower than that of the emitter by ~1.1×10⁻¹². The idea of the experiment was this: if absolute velocities in the ether have a physical meaning, then, as the installation moves through the ether (the calculation, again, was for the Earth's orbital motion), the rotation of the rotor produces an inequality of the absolute velocities of the emitter and absorber - and their lines accordingly acquire unequal quadratic Doppler shifts. Thus, let the laboratory move through the ether to the east, and let the rotor rotate counterclockwise when viewed from above. Then the northern counter counts quanta under conditions where the linear rotation speed of the emitter adds to the speed of the installation through the ether, while the linear rotation speed of the absorber subtracts from it. Due to the resulting quadratic Doppler shifts, the lines of the emitter and absorber move toward each other, absorption increases, and the counting rate decreases; for the southern counter, accordingly, everything is the other way around. As a result, the experiment made it possible to conclude whether absolute or relative velocities have a physical meaning. For each measurement cycle two rotor rotation speeds were used - 200 Hz and 1230 Hz - giving linear rotation speeds of 55.3 and 340 m/s. Four quantities were measured: the counting rates of the northern counter at the low and at the high rotation speed, N_L and N_H, and, similarly, S_L and S_H for the southern counter - and from them the ratio x = (S_H/S_L)/(N_H/N_L). If the concept of relative velocities were valid, the ratio x would, to within the errors, equal unity. If the concept of absolute velocities were valid, x would differ from unity - and if there were an ethereal wind due to the Earth's orbital motion, x would depend on the time of day. As the results of [Ch1] show, which we reproduce (see Fig.1.7.2), x is close to unity and does not depend on the time of day - i.e. the orbital ethereal wind did not manifest itself in any way. At the same time, the average over the given data set is, as can be seen, 1.012. Does this result indicate an ethereal breeze due to the daily rotation of the Earth?

Fig.1.7.2
If we denote the speed of this breeze by V, then the quadratic-Doppler divergence of the lines of the emitter and absorber for the southern counter - and, conversely, their convergence for the northern counter - amounts to Δ = 2Vv/c², where v is the linear rotation speed of the emitter and absorber. Using the graph (see Fig.1.7.1), we constructed approximations for the dependences of the counting rates of both counters on the speed V - for the lower and for the higher of the rotation speeds v mentioned above. For the lower value of v we used linear approximations, for S_L(V) and N_L(V), and for the larger value quadratic approximations, for S_H(V) and N_H(V). The above combination of these four functions yields the dependence of the ratio x on V, which is shown in Fig.1.7.3.
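The scale of the effect involved is easy to estimate (our sketch; the speeds are those quoted above):

    # Quadratic-Doppler line shift delta = 2*V*v/c^2 in the geometry of
    # [Ch1]: V - speed of the "ethereal breeze", v - linear rotor speed.
    c = 3.0e8

    def line_shift(V, v):
        return 2 * V * v / c**2

    # Diurnal-rotation speed at the latitude of Birmingham (~279 m/s),
    # for the two rotor speeds used in the experiment:
    print(line_shift(279.0, 55.3))   # ~3.4e-13
    print(line_shift(279.0, 340.0))  # ~2.1e-12, comparable to the ~1.1e-12
                                     # emitter-absorber offset quoted above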

Fig.1.7.3

As you can see, on this graph the value x = 1.012 corresponds to two values of V: 6.5 and 301 m/s. In the first of these we see no physical meaning, while the second differs by only 7.9% from 279 m/s - the linear speed of the daily rotation at the latitude of Birmingham, where the experiment was carried out. There can hardly be any doubt that the authors of [Ch1] detected the local-absolute speed of the laboratory - but, strangely enough, they ignored this result.
Another experiment in which the local-absolute speed of the laboratory manifested itself was carried out by Brillet and Hall [B1]. They placed a helium-neon laser (3.39 μm) and an external

Fig.1.7.4

1.8 Linear Doppler effect in the local-absolute velocity model.
According to the special theory of relativity (SRT), the magnitude of the linear Doppler effect is

Δf = f·(V·cosθ)/c ,   (1.8.1)

where f is the radiation frequency, V·cosθ is the relative speed of divergence or convergence of the emitter and receiver, and c is the speed of light. According to our model, in which the phase speed of light in vacuum is a fundamental constant only with respect to the local portion of the "inertial space" realized by means of the frequency slopes, the magnitude of the linear Doppler effect is

Δf = f·(V₁·cosθ₁ − V₂·cosθ₂)/c ,   (1.8.2)

where V₁·cosθ₁ and V₂·cosθ₂ are the projections of the local-absolute velocities of the emitter and receiver onto the straight line connecting them.
Note that if the emitter and receiver are in the same region of "inertial space" - for example, if both are near the surface of the Earth - then expression (1.8.2) reduces to expression (1.8.1). In this particular case the predictions made on the basis of the two concepts - relative and local-absolute velocities - coincide, and, accordingly, both concepts are equally well confirmed by experience. But the situation changes radically when the emitter and receiver are in different regions of "inertial space" - for example, on opposite sides of the boundary of the earth's gravitational region. Such a situation occurs, for instance, in radar ranging of the planets or in radio communication with an interplanetary spacecraft. Here the predictions based on the concepts of relative and local-absolute velocities differ, and they cannot both be confirmed by experience. The concept of local-absolute velocities predicts here a completely "wild", by relativistic standards, behavior of the linear Doppler shifts. Official science long assured us that nothing of the kind is observed, and that the linear Doppler effect here occurs in complete agreement with the predictions of SRT. That turned out to be untrue. We will now show that in reality exactly this "wild" behavior of the linear Doppler shifts takes place.
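For concreteness, here is a small sketch of the two predictions (our own illustration; the Venus numbers anticipate the next subsection):

    # Linear Doppler shift under the two concepts, per (1.8.1) and (1.8.2).
    c = 3.0e8

    def doppler_srt(f, v_radial):
        """(1.8.1): the shift is set by the relative radial speed."""
        return f * v_radial / c

    def doppler_local_absolute(f, v1_radial, v2_radial):
        """(1.8.2): the shift is set by the local-absolute radial speeds
        of the emitter (v1) and the receiver (v2) separately."""
        return f * (v1_radial - v2_radial) / c

    f = 700e6        # carrier frequency, Hz (as in the 1961 Venus radar)
    v_venus = 2.5e3  # recession speed of Venus, m/s

    # Relative concept: a two-way (reflection) shift of ~11.6 kHz.
    print(2 * doppler_srt(f, v_venus))
    # Local-absolute concept: a planet rests in its frequency funnel, so
    # its local-absolute speed is zero and no such component arises.
    print(2 * doppler_local_absolute(f, 0.0, 0.0))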

1.9 Where is the Doppler effect in Venus radar?
The planets rest in their planetary frequency funnels; therefore, the local-absolute velocities of the planets are identically zero. Hence, on the basis of expression (1.8.2), a fantastic conclusion follows: under conditions where the emitter and receiver are on different planets, the Doppler shift should have components caused only by the motions of the emitter and receiver in their respective planetocentric reference systems - but there should be no component corresponding to the mutual approach or recession of the planets themselves. A planet under radar ranging may be approaching the Earth, or receding from it, at tens of kilometers per second - yet this approach or recession should cause no corresponding Doppler shift!
It was precisely this phenomenon that was discovered during the radar ranging of Venus in 1961 by the group led by V.A. Kotelnikov [K1-K3]. It is energetically advantageous to radar a planet when it comes closest to the Earth. The culmination of the conjunction of Venus with the Earth occurred on April 11; the published results begin with the observations of April 18, when the recession speed of Venus was approximately 2.5 km/s. The corresponding Doppler shift - doubled upon reflection from the "moving mirror" - should have had a relative value of 1.6×10⁻⁵; in absolute terms, at the emitted carrier frequency of 700 MHz, it would be 11.6 kHz. Since the band in which the echo signal was searched for did not exceed 600 Hz, traditional logic required compensation of the Doppler effect so that the echo carrier would fall within the analysis band. For this compensation the receiving path was not retuned; instead, the carrier of the emitted signal was shifted by a precalculated amount. Of course, there could be no question of direct observation of the Doppler effect, i.e. of mixing the sent and received frequencies and extracting their difference frequency: that technique would require a wide bandwidth of the receiving path, in which the echo signal could not be separated from the noise. A multi-stage transfer of the spectrum of the received noisy signal into the low-frequency region was used, where a recording was made on magnetic tape and then analyzed. The principle of separating the signal from the noise relied on the emitted signal having rectangular amplitude modulation with a depth of 100%: in one half of the modulation cycle both the useful signal and noise were to be received, and in the other half only noise. With the correct start point chosen for processing the magnetic recording, a systematic excess of received power in the first halves of the modulation cycles over the second halves would indicate detection of the useful signal.
The analysis was carried out in a "wide" band (600 Hz) and in a "narrow" band (40 Hz). In the published spectra of the broadband component (see [K2]) no systematics indicating a detected signal are visible. Particularly puzzling is the fact that in all the spectra of the broadband component there is no narrowband component - which, by traditional logic, should certainly have fallen within the wide analysis band. It is astonishing: the same article presents excellent spectra of the narrowband component, the positions of whose energy maxima made it possible to refine the value of the astronomical unit, i.e. the mean radius of the Earth's orbit, by two orders of magnitude! Why, then, were the spectra of the narrowband component, which made this breakthrough possible, not detected in the wide-band analysis?
The answer to this question is suggested by the article [K3], where it is literally written: "The narrow-band component is understood as the component of the echo signal corresponding to reflection from a stationary point reflector" (emphasis added). One must suppose that readers stumbled over this phrase: what kind of stationary reflector could there be on a receding, rotating planet? And why a point reflector - what power, one wonders, can be reflected from a point? The point, apparently, is that the term "point" is used here not to describe the size of the reflector, but to exclude the possibility of reading "stationary" in the sense of "non-rotating". That is, "stationary" means "not receding". But how could one obtain an echo "corresponding" to a non-receding reflector if in fact the reflector was receding? Those versed in the subtleties of physical terminology must agree that the true meaning of the quoted phrase is this: "The narrowband component is the echo signal that was observed when compensation of the Doppler shift corresponding to the recession of the planet was not applied." But this means that when the Doppler correction for the planet's recession was introduced into the carrier of the emitted signal, the echo was not detected - and when this correction was not introduced, the echo was detected! This plainly indicates that the Doppler effect which the recession of Venus was supposed to produce was in fact absent. By our model, this is exactly how it should be; with the official theory these results are incompatible.
Let us add that radar ranging of Venus with a narrow-band signal was also carried out by foreign groups of researchers - and, apparently, they all had to solve the same problem: how to present their results so that the breakthrough would not be overshadowed by a scandal. Later, it is true, Doppler shifts were found in the echo signals reflected from the western and eastern edges of the disk of Venus - due to its slow rotation about its axis. But the main component of the Doppler shift, due to the approach and recession of Venus, stubbornly remained undetected (see also 2.13).
Later, thanks to the rapid development of experimental technology, it became possible in planetary radar to detect the echo pulses in real time, which allowed the time delays of the radio pulses' travel to the planet and back to be measured. With this technique, however, the experimenters deal with broadband signals, for which the detection of Doppler shifts is excluded in principle - and the problem of these shifts became "irrelevant". The secret of the successful radar ranging of Venus in 1961 remained unknown to the broad scientific community.

1.10 Why did radio contact with the automatic interplanetary stations disappear on the first approaches to Venus and Mars?
While the spacecraft flew within the region of the earth's gravity, their trajectories and maneuvers were calculated, with acceptable accuracy, in the geocentric reference system, and formula (1.8.1) worked well for the Doppler shifts of the carrier during radio communication with them. But this idyllic agreement between the traditional theoretical approach and practice collapsed with the very first interplanetary flights.
As noted above (1.6), for correct flight control, when calculating thrust and fuel consumption, one must know the "true" speed of the spacecraft. It is reliably known that in near-Earth space this speed is the GEOcentric speed. It is no less reliably known that in interplanetary space this speed is the HELIOcentric speed - try calculating corrective maneuvers otherwise, and the vehicle will not fly where you want it to. Clearly, then, at some distance from the Earth there is a buffer layer, in passing through which the GEOcentric speed of the vehicle is replaced by the HELIOcentric one. Official science avoids talking about the details of what happens in this layer. You see, according to the law of universal gravitation, terrestrial and solar gravity act everywhere, adding to each other - yet the problem of the motion of a test body attracted to even two centers of force no longer has an analytical solution. Surely that is no accident! Still, the mathematicians found a way out: they devised the calculation of a vehicle's trajectory by numerical integration. One takes the initial position and the initial velocity vector of the vehicle, takes into account the acceleration imparted to it by the "force centers", and obtains the increments of the position and of the velocity vector acquired over a short interval of time - the step of the numerical integration. In this way a small segment of the trajectory is calculated, then the next one, and so on. And here lies the moment of truth - the current vector of the true speed. If here it is still geocentric, and over there it is already heliocentric, then what is it like inside the buffer layer? It cannot be 70% geocentric and 30% heliocentric! The theorists wriggled out here too. Instead of honestly saying that there is a rather sharply defined boundary at which the "true" speed of the vehicle abruptly changes its reference system, they introduced the concept of the sphere of action. Thus, the "sphere of action of the Earth relative to the Sun" is the region of near-Earth space within which, when calculating the free motion of a test body, only the earth's gravity is to be taken into account, solar gravity being neglected entirely; outside this region, on the contrary, one neglects terrestrial gravity, because solar gravity utterly dominates there... Is this not the principle of the unitary action of gravity (1.5, 1.6) in its pure form? "No, no," they hasten to assure us, "this is just a formal technique, for convenience in calculating the trajectory." So we read in Levantovsky: "When a spacecraft passes the boundary of the sphere of action, it has to pass from one central gravitational field to another. In each gravitational field the motion is treated, naturally, as Keplerian, i.e. as occurring along one of the conic sections - an ellipse, a parabola or a hyperbola - and at the boundary of the sphere of action the trajectories are mated, 'glued together', according to certain rules" [L1]. Specialists know these simple "mating rules" well: one Keplerian trajectory in the first reference system jumps into another Keplerian trajectory in the second reference system. And we read further: "The only meaning of the concept of the sphere of action lies precisely in the boundary separating the two Keplerian trajectories" [L1]. Here, however, the two reference systems are not mentioned.
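For reference, the conventional size of this boundary is usually estimated with the Laplace sphere-of-influence formula r ≈ a·(m/M)^(2/5); here is a one-line check (our sketch, with standard textbook values):

    # Laplace sphere-of-influence estimate, r = a*(m/M)**(2/5): the boundary
    # conventionally used when "gluing" geocentric and heliocentric segments.
    AU = 1.496e11                 # m
    M_EARTH_OVER_M_SUN = 3.003e-6

    r_soi = AU * M_EARTH_OVER_M_SUN ** 0.4
    print(r_soi / 1e3)            # ~9.2e5 km, close to the ~900 thousand km
                                  # quoted earlier for the earth's gravity region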
But this much is already clear: if in one reference system the motion of the vehicle is Keplerian, then in another reference system, moving relative to the first at cosmic speed, the same motion is not Keplerian at all. This means that the two different Keplerian trajectories are stitched together only through a jump-like physical transition from one reference system to another. And the most interesting thing is that it is precisely through this discontinuous jump - i.e. in blatant contradiction with the law of universal gravitation - that the flight of the vehicle is calculated CORRECTLY!
The same Levantovsky [L1] states clearly how to perform this correct calculation of the jump in the "true" speed of the vehicle. Let the vehicle be put onto the so-called Hohmann trajectory to the target planet - the most energetically favorable one. Such a trajectory is, in simplified terms, half of a circumsolar ellipse whose perihelion and aphelion touch the orbits of the Earth and of the target planet. If the target planet is farther from the Sun than the Earth, then, on approach to the planet, the heliocentric speed of the vehicle is less than the orbital speed of the planet; in this case the boundary of the region of planetary gravity can be crossed only through its front hemisphere - the planet catches up with the vehicle. To find the initial velocity vector of the vehicle in the planetocentric system, immediately after its entry into the planet's gravitational region, one should subtract the planet's orbital velocity vector from the vehicle's velocity vector in the heliocentric system. For example, if Mars, whose orbital speed is 24 km/s, catches up with a vehicle moving in the same direction at 20 km/s, then the initial speed of the vehicle inside the gravitational region of Mars will be 4 km/s, directed opposite to Mars's orbital velocity vector. Thus the jump in the magnitude of the local-absolute velocity (1.6) of the vehicle will be 16 km/s. Everything happens similarly when flying into the gravitational region of a planet closer to the Sun than the Earth - with the one difference that there the boundary is crossed through its rear hemisphere, since in that case the heliocentric speed of the vehicle is greater than the orbital speed of the planet.
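In vector form the recipe looks like this (a minimal sketch with the Mars numbers from the text):

    import numpy as np

    # Crossing into the planet's gravity region: the planetocentric velocity
    # is the heliocentric velocity of the craft minus that of the planet.
    v_craft_helio = np.array([20.0, 0.0, 0.0])  # km/s, along the orbital motion
    v_mars_helio = np.array([24.0, 0.0, 0.0])   # km/s, same direction

    v_craft_planeto = v_craft_helio - v_mars_helio
    print(v_craft_planeto)   # [-4, 0, 0]: 4 km/s, opposite to Mars's motion

    # Jump in the magnitude of the local-absolute speed at the boundary:
    print(np.linalg.norm(v_craft_helio) - np.linalg.norm(v_craft_planeto))  # 16.0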
Now note that a jump in the local-absolute speed of the vehicle (by tens of kilometers per second!) must, according to (1.8.2), produce a jump in the Doppler shift of the carrier during radio communication with the vehicle - and, given the narrow bandwidths of the paths in deep-space communication systems, such a jump takes the carrier far outside the current operating band, and the link is broken. The facts indicate that it was precisely by this scenario that contact with the Soviet and American automatic interplanetary stations was lost on all the first approaches to Venus and Mars.
From open sources (see, for example, [WEB1-WEB3]) it is known that the history of the first launches of spacecraft toward Venus and Mars is an almost unbroken series of failures: explosions, failures to reach the calculated trajectory, accidents, breakdowns of various on-board systems... So they proceeded as follows: during each launch "window" favorable for a flight, vehicles were launched in batches - in the hope that at least one of them would carry out the planned program. But even this did not help. What the open sources keep quiet about is that, on the approaches to the target planet, a vehicle would meet with an incomprehensible misfortune: radio contact with it would be lost, and it would "go missing".
Here are some examples. On November 12, 1965, the automatic interplanetary station Venera-2 was launched toward the "morning star", followed on November 16 by Venera-3. Before its approach to the planet, communication with Venera-2 was lost; according to calculations, the station passed Venus on February 27, 1966 at a distance of 24 thousand km. As for Venera-3, on March 1, 1966 its descent module reached the planet's surface for the first time. The TASS report, however, kept silent about the fact that contact with this station, too, was lost as it approached the planet [WEB2]. And here is what the start of the "Mars race" looked like. The automatic interplanetary station Mars-1: launched November 1, 1962; communication lost March 21, 1963. The automatic interplanetary station Zond-2: launched November 30, 1964; communication lost May 5, 1965. Similar things happened with the American vehicles, and one case deserves special attention: "In July 1969, when Mariner 7 reached the ill-fated region of space where previous vehicles had gone missing, contact with it was lost for several hours. After the connection was restored, to the bewilderment of the flight directors... its speed was one and a half times higher than the calculated one" [WEB3]. Clearly, the restoration of communication did not happen by itself, but as a result of successful compensation of the changed Doppler shift - for it was by the Doppler shift that the speed of the vehicle was judged. Only after it was thus learned how to restore lost radio communication did successes in interplanetary astronautics begin to follow one after another.
Since the phenomenon of Doppler-shift jumps at the crossing of the boundary of planetary gravity did not fit into the official theoretical doctrine at all, representatives of official science tried to hush this phenomenon up. In vain! It is too widely known that on the first approaches to Venus and Mars communication with the vehicles was lost. I have personally had occasion to talk with specialists who, true to their scientific duty, insisted to the last that the link was lost not because of any "jumps" at all, but because the on-board "equipment died". Then the question arises: why did various pieces of equipment, on all the first vehicles, die at the same distance from the planet? And why, later on, as if by magic, did it stop "dying" altogether? Experts have not yet worked out answers to these simple questions.
Therefore, let us take note of these experimental facts, deadly for relativism: the jump in the "true" speed of a spacecraft at the crossing of the boundary of the region of planetary gravity, and the resulting loss of radio communication with the vehicle, which could be restored with the help of a very specific shift of the carrier.
Incidentally, at first we were perplexed by the question of why communication with the vehicles was not lost when they flew out across the boundary of the earth's gravity. The solution, apparently, is simple. To send a vehicle along a Hohmann trajectory (see above), it must be brought out of the region of the earth's gravity in such a way that its heliocentric speed is greater than 30 km/s by the required amount - for a flight to an outer planet - or, correspondingly, less - for a flight to an inner planet. Moreover, it is desirable - again for energy reasons - to cross the boundary of the earth's gravity at an acute angle, almost tangentially to it. Combining these requirements, the crossing was performed at one of two portions of the boundary: either the one nearest the Sun or the one farthest from it. Hence, despite the significant (about 30 km/s) jump in the local-absolute speed of the vehicle at the crossing, the change in the projection of this speed onto the "Earth-station" line was quite insignificant - and therefore, by (1.8.2), the corresponding change in the Doppler shift was insignificant too. Of course, when the vehicle flew into the gravitational region of the target planet, the situation was entirely different.
In continuation of this storyline, one may also mention the so-called gravitational maneuvers, by means of which the parameters of a spacecraft's heliocentric trajectory are changed as it flies through the region of action of the gravity of one planet or another. Such gravity-assist maneuvers are presented to the public as aerobatics. We do not deny it; we only add that such aerobatics became possible after specialists learned to handle correctly the boundary effects described above.

1.11 Another boundary effect: the annual aberration of light from stars.
Aberrational shifts of the apparent positions of stars were discovered by Bradley in the 18th century. It turned out that, with a period of one year, the stars trace ellipses on the celestial sphere, the more elongated the smaller the angle between the direction to the star and the plane of the earth's orbit. It was clear that this phenomenon was somehow connected with the Earth's orbital motion, and, for two main reasons, it could not be reduced to annual parallax. Firstly, the parallactic shift of distant objects occurs in the direction opposite to the observer's displacement, while the annual aberration shifts are co-directed with the Earth's orbital velocity vector. Secondly, the greater the distance to an object, the smaller its parallactic shift, whereas the semimajor axis of the annual aberration ellipse is the same for all stars: in angular measure it is approximately equal to the ratio of the Earth's orbital speed to the speed of light.
The annual aberration was easily explained on the basis of Newtonian ideas of light corpuscles; explaining it from the standpoint of ideas of light as waves in the ether was quite problematic. Indeed, ground-based optical experiments - the Michelson-Morley experiment, for example - showed that the near-Earth ether participates, together with the Earth, in its orbital motion. How, then, does the near-Earth ether make its way through the interplanetary ether without any turbulence? Stokes showed that this problem, along the lines of hydrodynamics, would be removed if the density of the ether at the Earth's surface were several orders of magnitude greater than in interplanetary space. But it is known that the speed of light at the Earth's surface and in interplanetary space is practically the same - and yet light was considered to be waves of elastic deformations in the ether! It is unthinkable that, with the density of a medium changing by several orders of magnitude, the speed of elastic waves in that medium should not change! Finally, Einstein abolished the ether and, following the logic of relative speeds, declared that the angle of aberration depends on the relative tangential speed of the emitter and the observer [E2].
This statement, as it turned out, is not at all consistent with the experimental facts. Thus, visual double stars obviously have different tangential velocities relative to the terrestrial observer - yet they experience the same aberration shifts as single stars, and for the components of a double these shifts are the same not only in magnitude but also in direction. The concept of relative velocities plainly does not work: the annual aberration of the stars depends only on the annual motion of the observer! To this day relativists pretend that the problem does not exist - although, in fact, they lack an understanding of one of the key phenomena in the optics of moving bodies.
Meanwhile, this phenomenon finds a natural explanation on the basis of our model, according to which the frequency slopes play the role of that very "firmament" relative to which the phase speed of light in vacuum is locally fixed. That is, this speed is a fundamental constant only in the local-absolute sense. For example, while light moves within the region of planetary gravity, its speed equals c only in the planetocentric reference frame; in the heliocentric frame it adds vectorially to the heliocentric velocity of the planet. Conversely, light moves through interplanetary space at speed c only in the heliocentric reference frame - its speed relative to any planet requires, again, the corresponding vector recalculation. Note that these recalculations should be done not according to the relativistic law of addition of velocities, but according to the classical one!
By this logic, light from a distant star, passing through the boundary of the region of the earth's gravity, "ignores" the fact that this region is moving through interplanetary space. Light moves through this region at speed c, and the direction of its motion is determined by a simple rule: light continues to move in the direction in which it crossed the boundary. And this direction, i.e. the angle of entry, is determined by the classical combination of the orbital velocity vector of the region of the earth's gravity and the velocity vector of the light on its approach to the boundary. In the particular case when these vectors are orthogonal, the ratio of their moduli gives the tangent of the annual aberration angle - one of the fundamental constants of astronomy.
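In numbers (our own check, with standard values for the two speeds):

    import math

    c = 299792458.0    # speed of light, m/s
    v_orb = 29780.0    # Earth's mean orbital speed, m/s

    alpha = math.atan(v_orb / c)        # orthogonal case: tan(alpha) = v/c
    print(alpha)                        # ~9.93e-5 rad
    print(math.degrees(alpha) * 3600)   # ~20.5 arcseconds: the constant of
                                        # annual aberration known in astronomy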
Thus the phenomenon of annual aberration finds an elementary explanation as a boundary effect arising when starlight passes the boundary of the region of the earth's gravity, with the light-velocity vector switching to a new local-absolute reference. At one stroke this explains the features of annual aberration that have so far defied explanation on the basis of the concept of relative velocities. Firstly, the sameness of the semimajor axes of the annual aberration ellipses for all stars, regardless of their other proper motions on the celestial sphere. Secondly, the result of the check of whether the aberrational "kink" in the motion of the light occurs at the telescope with which the observations are made. For this check, Airy filled a telescope with water. The speed of light in water is about one and a half times less than in air; if the "kink" occurred at the telescope, the ratio of the telescope's speed to the speed of light inside it would give a one-and-a-half times greater aberration effect. The effect, however, remained the same - meaning that the light entering the telescope had already experienced its aberrational deflection somewhere above. Finally, thirdly, a peculiar selectivity of the phenomenon: annual aberration is observed for objects located outside the region of the earth's gravity, but is not observed for objects inside this region - for example, for the Moon and the artificial satellites of the Earth.
As you can see, the logic of the “digital” world – in which there is a place for “ether” – again looks preferable. We should just not forget that the “ether” we are talking about is not a physical reality, but a supraphysical one: these are program instructions. Therefore, when the planetary “ether” moves through the interplanetary “ether,” no problems arise either along the line of hydrodynamics or along the line of superposition of these “ethers” on each other. The program instructions are such that the planetary and interplanetary “ethers,” so to speak, do not mix, and the boundary between them retains its original sharpness.

1.12 Quadratic Doppler effect in the local-absolute velocity model.
According to SRT, the magnitude of the quadratic Doppler effect is

Δf = −f·V²/(2c²) ,   (1.12.1)

where f is the radiation frequency and V is the speed of the emitter in the reference frame of the receiver. This effect is also called the transverse Doppler effect, since it occurs even when the emitter moves orthogonally to the "emitter-receiver" line. But the term "transverse Doppler effect" is, in our opinion, unfortunate, since the effect also occurs when the emitter recedes or approaches.
Since, according to SRT, the cause of the quadratic Doppler effect is held to be the relativistic time dilation in the moving object, a problem arises here in all its severity: a theory built on relative velocities is powerless to answer which of the two objects under consideration is moving and which is at rest. The simplest example: two spacecraft exchange radio signals. In the reference frame of the first vehicle, it is the second that moves with speed V - meaning that "time slows down" on the second, i.e. the frequency received on the first vehicle will be reduced. But in the reference frame of the second vehicle, it is the first that moves with speed V - meaning that "time slows down" on the first, i.e. the frequency received on it will be increased. This is an example of an internal contradiction in SRT, known as the "twin paradox" (or "clock paradox"). This paradox has tormented generations of thinkers, who were told that the quadratic Doppler effect is observed experimentally in full agreement with the predictions of SRT. In reality there is no such agreement. The first experiments with transportable atomic clocks (1.13) showed that the results of their comparisons, after the action of "relativistic time dilation", are fundamentally unambiguous - in full agreement with common sense. Moreover, these results proved impossible to explain on the basis of the concept of relative speeds. For a correct calculation, one had to take into account the slowing of the laboratory clock and of the transported clock individually, and then take the corresponding difference of the time intervals counted by the two clocks.
This state of affairs follows easily and naturally from the concept of local-absolute velocities (1.6). According to this concept, the quadratic Doppler effect is caused not by "time dilation" but, by the logic of the "digital" world, by a decrease of the frequencies of the quantum pulsations in moving particles of matter - and, accordingly, by downward shifts of the quantum energy levels in moving physical bodies; only "motion" here must be understood in the local-absolute sense. The quadratic Doppler shifts of the quantum levels are described by a formula similar to (1.12.1), namely

ΔE = −E·V²/(2c²) ,   (1.12.2)

where E is the unshifted energy of the quantum level, and the role of V is played here by the local-absolute speed. Thus, the quadratic Doppler shifts (1.12.2) of the quantum energy levels in a moving physical body are an objective physical sign that the body moves with a local-absolute speed equal to V.
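A minimal numerical sketch of the scale of (1.12.2) (ours; the speeds are illustrative):

    # Relative quadratic-Doppler shift, -V^2/(2c^2), for several
    # local-absolute speeds.
    c = 3.0e8

    def relative_shift(V):
        return -V**2 / (2 * c**2)

    print(relative_shift(465.0))   # a point on the equator:  ~ -1.2e-12
    print(relative_shift(7.7e3))   # a low-orbit satellite:   ~ -3.3e-10
    print(relative_shift(3.87e3))  # a GPS-orbit clock:       ~ -8.3e-11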
We shall return in 4.7 to the question of the origin of the quadratic Doppler shifts (1.12.2), which are an elementary consequence of the law of conservation of energy. For now we shall speak of experiments in which the quadratic Doppler effect clearly indicates the failure of the concept of relative velocities and the validity of the concept of local-absolute velocities. One such experiment - [Ch1], using the Mössbauer effect - we have in fact already discussed, in 1.7; there the emitter and receiver moved on a laboratory table. Now let us speak of experiments that used global transportation of atomic clocks.

1.13 What did the round-the-world transportation of atomic clocks show?
In October 1971, Hafele and Keating performed an outstanding experiment [X2,X3] with transportable cesium-beam atomic clocks. Four such clocks were carefully compared with the time scale of the United States Naval Observatory (USNO), and then, on regular passenger flights, this four was twice carried around the world by air - in the eastern and in the western direction.
After each of these round-the-world trips, the four clocks were again compared with the USNO scale. The resulting differences between the clock readings and the USNO scale are reproduced in Fig.1.13.1. The zero of the abscissa axis corresponds to 0 hours Universal Time (UT) on September 25, 1971.

Fig.1.13.1

The three-digit markers are the individual numbers of the clocks of the working four; the label "Average" marks the mean of the four differences. The behavior of this averaged difference in the vicinity of the transportation time intervals is reproduced in Fig.1.13.2. This figure clearly demonstrates how the additional changes of readings accumulated during transportation were judged. Namely: a forecast of the drift of the average difference was made, and the shift between its predicted and actual values was found at the moment the comparisons were resumed.
Now about the interpretation of these shifts. It was believed that they were caused by the combined action of two effects: the gravitational and the kinematic, i.e. relativistic, change of clock rate. Gravitational time dilation is predicted by the general theory of relativity (GTR), according to which time at altitude flows somewhat faster than at the earth's surface. Therefore clocks on the ground should monotonically accumulate a lag compared to identical clocks raised to altitude - in particular, aboard an airplane. The calculated contribution of this effect was approximately the same for both circumnavigations (see Fig.1.13.3). We shall analyze the phenomenon of gravitational changes of clock rate below, in 1.14; here we focus on the kinematic change of clock rate.

Fig.1.13.2

According to SRT, a moving clock should monotonically accumulate a lag compared to an identical clock at rest. Within the concept of relative speeds, Hafele and Keating had to solve a difficult problem: to figure out which of the two groups of clocks - the laboratory group, on which the USNO scale was formed, or the transported four - was moving, and which was at rest. Do not think, dear reader, that we are scoffing when we call this problem difficult. Only at first glance does it seem that the laboratory clocks were at rest and the transported clocks were moving. If everything were that simple, then on both round-the-world trips the transported clocks would have accumulated approximately equal kinematic lags relative to the laboratory clocks - and, for both trips, the resulting sums of the gravitational and kinematic effects would have been approximately the same. But look again at Fig.1.13.2: these resulting sums for the eastern and the western circumnavigation turned out to be different not only in magnitude but also in sign! The conclusion of Ives [A1] and Builder [B2] was confirmed: a correct calculation of the relativistic divergence of the readings of a pair of arbitrarily moving clocks is impossible if one operates only with their relative speed.

Fig.1.13.3

Hafele and Keating had to abandon the unworkable concept of relative velocities and look for a method of calculating the kinematic effects that would describe their results more adequately. Such a method, in hindsight, was quickly found. The slowing was calculated for both groups of clocks - the transported and the laboratory ones - from the individual velocities of the two groups in the geocentric non-rotating frame of reference. From this "point of view", not only the transported group was moving; the laboratory group was moving too, owing to the daily rotation of the Earth. Accordingly, the accumulated kinematic "lags" were calculated for both groups, and the difference of these "lags" was taken as the detectable kinematic effect. These calculations gave quite acceptable agreement with experiment: the prediction of the total effect for the eastern circumnavigation was -40±23 ns, and for the western one +275±21 ns.
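The structure of that calculation is easy to sketch (ours; the flight parameters are rough illustrative averages, not the actual flight logs):

    import math

    c = 299792458.0
    OMEGA = 7.292e-5   # Earth's rotation rate, rad/s
    R = 6.371e6        # Earth's radius, m

    def kinematic_lag(v_ground_frame, lat_deg, t_flight):
        """Accumulated kinematic difference (flying clock minus ground
        clock) from the individual speeds of the two clocks in the
        geocentric non-rotating frame: dt = (v_lab^2 - v_air^2)/(2c^2) * t."""
        v_lab = OMEGA * R * math.cos(math.radians(lat_deg))  # ground clock
        v_air = v_lab + v_ground_frame    # + for eastward, - for westward
        return (v_lab**2 - v_air**2) / (2 * c**2) * t_flight

    # Illustrative averages: ~230 m/s ground speed, ~31 deg mean latitude,
    # ~41 h of eastward and ~49 h of westward flight:
    print(kinematic_lag(+230.0, 31.0, 41 * 3600))   # ~ -1.9e-7 s (lag)
    print(kinematic_lag(-230.0, 31.0, 49 * 3600))   # ~ +1.3e-7 s (gain)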
Now recall that the speeds of the clocks in the geocentric non-rotating frame of reference are, in this case, their local-absolute speeds (1.6). It turns out that the Hafele-Keating experiment plainly demonstrated the unworkability of the concept of relative velocities and, conversely, the workability of the concept of local-absolute velocities. Hafele and Keating seem to have half-guessed this, judging by their reasoning that the reference frame tied to the USNO laboratory is non-inertial because of its participation in the Earth's daily rotation, whereas the non-rotating geocentric frame is inertial - which is why the calculations were performed in it. Excuse me: how can a reference frame possessing centripetal acceleration in its orbital motion around the Sun be inertial? Or are some reference frames more inertial than others?! Whoever believes so should take an even "more inertial" frame of reference - one tied to the Sun - and redo the calculation of the Hafele-Keating experiment in it. That calculation will turn out monstrously wrong. The beauty of the quadratic Doppler effect is that it is quadratic in the speed. Because of this, for each specific problem there is only one frame of reference in which the "true" speeds must be taken and squared to obtain correct predictions. And these "true" speeds are precisely the local-absolute ones.

1.14 How the GPS and TIMATION satellites "confirmed" the theory of relativity.
With the onset of the "GPS era", the unquestionable thesis was drummed into the mass consciousness that this navigation system, as it works, confirms with great accuracy - daily, hourly and by the minute - the predictions of SRT and GTR concerning the change in the rate of time on board the satellites. But, strangely, exactly how these predictions are confirmed was hidden from the public. Thus, in one of the best-known books on the foundations of GPS [T3], the author says not a word about exactly how relativistic and gravitational effects are taken into account in GPS operation. This contrasts so strongly with the breadth of coverage and the detail of the exposition in [T3] that a question involuntarily arises: why is the evidence of Einstein's genius being hidden from us?
And the answer is simple: because there is no such evidence - because the concept of relative speeds quite obviously does not work in the case of GPS either. Look: let Vasya, a user of a GPS navigator, receive signals from several GPS satellites. Each satellite of this working constellation has its own speed relative to Vasya's navigator. By the logic of relative speeds, for Vasya the on-board clock of each of these satellites should experience a quadratic-Doppler slowing according to that satellite's speed relative to Vasya. How is the on-board clock to know these speeds? Besides, Vasya is not alone; there are other users of GPS navigators - Petya, for example. If the velocities of the same satellites relative to Petya are not the same as relative to Vasya, then the quadratic-Doppler slowings of the on-board clocks should be "not the same" as for Vasya. This is beyond all bounds - for experience shows that the rates of the on-board GPS clocks are unambiguous. The on-board clock could not care less about Vasya, Petya and the millions of other users: it "ticks" the same for all of them. The GPS satellite tracking stations, scattered over different longitudes, testify that the rate of each on-board clock is constant - to within small random fluctuations, corrections for the small deviations of the GPS orbits from circular ones, and the periodic adjustments of these rates. Only thanks to the almost constant rates of the on-board GPS clocks is it possible to fulfill one of the main points of the technical specification: to keep the GPS time scale within a small difference from the scale of Coordinated Universal Time (UTC). At the dawn of the GPS era this difference was not to exceed ±100 ns, then ±50 ns; today, if we are not mistaken, it must not exceed ±20 ns. Thus GPS operation is based on the almost synchronous running of the GPS scale, generated by the on-board clocks, and the UTC scale, generated by ground clocks. How is this possible if, relative to the ground clocks, the on-board clocks experience relativistic and gravitational effects?
The answer is this. With the help of the first experimental GPS satellites it was verified that the combined action of these two effects does take place [X2]. After that, "satellite clocks are adjusted to such a rate before launch as to compensate for these... effects" [F1]. This terrible secret has by now been disclosed even in official textbooks [O1]. Strictly speaking, what is adjusted is the output frequency not of the on-board standard but of the on-board synthesizer - but no matter. The fact that unambiguous corrections are made for the gravitational and relativistic effects is plain. So much for the notorious "clock paradox"!
However, Van Flandern believes that, in the case of GPS, "we can say with confidence that the predictions of the theory of relativity are confirmed with high accuracy" [F1]. He tries to convince us that the onboard GPS clocks are in perfect agreement with Einstein's predictions: "General Relativity predicts... that atomic clocks at the orbital altitudes of GPS satellites run faster by about 45,900 ns/day, because they are in a weaker gravitational field than atomic clocks on the Earth's surface. Special Relativity (SRT) predicts that atomic clocks moving at the orbital speed of GPS satellites run slower by about 7,200 ns/day than stationary terrestrial clocks" [F1]. Excuse me - where did SRT predict that the relativistic slowdown of an onboard clock is the same with respect to all "stationary terrestrial clocks"? After all, the velocity of each onboard clock is different with respect to different "stationary terrestrial clocks" - and even changes periodically! The sameness of the relativistic correction for all the satellites, and its independence from time, mean that it is determined by one and the same constant speed - namely, the linear speed of the orbital motion of the GPS satellites. And, indeed, the working GPS reference frame is the geocentric non-rotating one [T3]. Taking into account the above (1.6), we state: the quadratic Doppler slowdown of the onboard GPS clocks is determined only by their local-absolute velocities, which are approximately the same for all GPS satellites. Thus, the operation of GPS does not confirm the concept of relative velocities but, on the contrary, demolishes it. Moreover, while in the Hafele-Keating experiment (1.13), which gave a similar result, the magnitude of the measured effect exceeded the measurement error by only a few times, in the case of GPS the margin of accuracy is almost four orders of magnitude.
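For the reader who wants to check these figures, here is a minimal Python sketch that reproduces them from textbook constants. The values of GM, the Earth radius and the GPS orbital radius below are our own assumptions (standard reference figures, not data from [F1]); with them the two rates come out close to Van Flandern's 45,900 and 7,200 ns/day.
```python
# A sketch of the two rate offsets quoted above, assuming textbook
# values for GM, the Earth radius and the GPS orbital radius.
GM  = 3.986004e14        # m^3/s^2, geocentric gravitational constant
c   = 2.99792458e8       # m/s, speed of light
R_E = 6.371e6            # m, mean Earth radius
r   = 2.656e7            # m, GPS orbital radius (~20,200 km altitude)
DAY = 86400.0            # s

v2   = GM / r                                 # orbital speed squared (circular orbit)
grav = GM / c**2 * (1.0 / R_E - 1.0 / r)      # gravitational rate gain
kin  = -v2 / (2.0 * c**2)                     # quadratic Doppler rate loss

print(f"gravitational: {grav * DAY * 1e9:+9.0f} ns/day")        # ~ +45700
print(f"kinematic:     {kin  * DAY * 1e9:+9.0f} ns/day")        # ~  -7200
print(f"net:           {(grav + kin) * DAY * 1e9:+9.0f} ns/day")
# The net fractional offset (~ +4.46e-10) is what is dialed out of the
# onboard synthesizer before launch: the widely published figure is
# 10.23 MHz set to about 10.22999999543 MHz.
```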
But that is not all. The relativistic and gravitational changes in the rates of onboard satellite clocks are indisputable facts. But are these changes in rate consequences of time dilation? No, they are not. There are facts, also indisputable, which indicate that time dilation is NOT at issue here. Indeed, such a fundamental phenomenon as time dilation would affect the rates of all physical processes without exception. In particular, the output frequencies of generators of very different designs would change identically in relative terms. But this is not so: unlike the frequencies of quantum standards, the frequencies of quartz oscillators experience no relativistic and gravitational shifts!
Thus, in May 1967 and September 1969 the United States launched the first pair of satellites of the low-orbit navigation system TIMATION (see, for example, [I1]). On board were precision quartz oscillators, whose frequencies were monitored with an accuracy of no worse than 10^-11 [I1]. For the TIMATION satellites, with an orbital altitude of 925 km, the combined relativistic and gravitational effect would be -2.1×10^-10 [G2]. In absolute value, this figure is 20 times larger than the frequency-monitoring accuracy just mentioned. Therefore, if the frequencies of the quartz oscillators on board TIMATION had experienced relativistic and gravitational shifts, their sum would certainly have been detected. Moreover, this discovery would have been a sensation - the first confirmation of SRT and GTR by onboard satellite clocks. However, the sensation did not materialize. It was staged later, after the launch of the first experimental GPS satellites with quantum frequency standards on board.
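The same arithmetic for the TIMATION orbit - again a sketch with assumed textbook constants - reproduces, to rounding, the figure cited from [G2]:
```python
# Net relativistic + gravitational frequency shift for a 925 km orbit,
# assuming the same textbook constants as in the GPS sketch above.
GM, c, R_E = 3.986004e14, 2.99792458e8, 6.371e6
r = R_E + 9.25e5                              # m, orbital radius at 925 km altitude

grav = GM / c**2 * (1.0 / R_E - 1.0 / r)      # gravitational gain
kin  = -GM / r / (2.0 * c**2)                 # quadratic Doppler loss
net  = grav + kin

print(f"net fractional shift: {net:+.2e}")              # ~ -2.2e-10
print(f"times 1e-11 accuracy: {abs(net) / 1e-11:.0f}")  # ~ 20x
```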
These facts are deadly for SRT and GTR. The frequencies of quantum oscillators experience relativistic and gravitational shifts, while the frequencies of quartz oscillators do not! This means that, in the case of quantum generators, these shifts are not due to time dilation at all - which, as we remember, would affect all physical processes. We will discuss the causes which, in our opinion, produce these shifts in Section 4.7. Very briefly: by the logic of the "digital" world, the point is software manipulations that control the positions of the quantum energy levels in matter. These software manipulations affect the frequencies of quantum generators directly, but the frequencies of classical generators only indirectly. The difference is that the natural frequency of a classical generator is determined not so much by the frequencies of the quantum pulsators from which it is built as by the laws of the structural organization of matter that provide this construction. This is why relativistic and gravitational shifts of quantum energy levels, transformed to the structural level of a classical generator, can lead to quite different resulting shifts of its frequency [G2].
The fact remains: the quartz oscillators on board the TIMATION satellites revealed no relativistic and gravitational frequency shifts, although the accuracy was quite sufficient for that. On specialized Internet forums, where we brought up the TIMATION satellites, the relativists became hysterical. Guided by the principle "Deny everything!", they put forward the most ridiculous objections: that there were no TIMATION satellites - that, they say, is our invention; that relativistic and gravitational frequency shifts were not discovered there simply because no such task was set; that there are no quartz oscillators with a frequency-monitoring accuracy of 10^-11, this figure supposedly being no better than 10^-8 (although there are already examples with a value of 1.1×10^-12 for this parameter [M2]). Why do the relativists react so inadequately? Because the TIMATION satellites demonstrated all too clearly that relativistic and gravitational time dilation does not exist in nature. No amount of theoretical verbiage can talk this conclusion away. We will, of course, be told that there were experiments in which relativistic and gravitational time dilation was discovered. This is not true: either the experimenters themselves were mistaken, or they deliberately misled you and me, dear reader. We will now analyze the key ones among these "experiments".

1.15 A comedy with the muon lifetime.
There is a well-known myth that some of the historically first evidence of relativistic time dilation was obtained by measuring the lifetime of mu-mesons, or muons. We say "myth" because even in the educational literature and in reviews of experiments the authors hold back the details and try to slip past this slippery place as quickly as possible. Even such a well-known specialist in the experimental foundations of the theory of relativity as U.I. Frankfurt casually gave three bare references on this subject - and not a word more [F2]. In the case of the muons, the crudeness of the fake is too striking.
Here is Professor A.N. Matveev teaching students: "There exist various ways... to measure the path length of a μ-meson between the moment of its birth and the moment of its decay, and to determine its speed independently. Thanks to this, the lifetime of the particle can be found. If the time dilation effect exists, then the lifetime of a meson should be the longer, the greater its speed..." [M3] - and further, that experiment confirmed all this, and that the proper lifetime of the μ+ meson is ≈2×10^-6 s. These teachings are a disgrace - if only because, in the experiments on whose basis the convention about these very two microseconds was adopted, the "moments of birth" of the muons and, accordingly, their "path lengths" were fundamentally unknown!
The fact is that these experiments worked with muons of natural origin, which flew down through the atmosphere, born when cosmic-ray protons hit particles of the air. These protons have high energies, and the muons turn out to be relativistic - with starting speeds close to the speed of light. That muons are unstable was evidenced, for example, by the following fact: the absorption of muons in a layer of air is 1.4 times greater than in a water layer of equivalent mass [F3]. Since the losses due to interaction with matter are practically the same in both cases, and the difference is only in the path lengths traversed, the conclusion about spontaneous decay of the muon suggested itself. Its lifetime was initially determined on the basis of the strange assumption that all muons are born at the same height - somewhere between 15 and 20 km. A muon telescope was used: a pair of scintillators separated by some distance. If a muon flew through both scintillators, it was registered by two flashes in coincidence mode. So, the telescope was tilted at a certain angle from the vertical and the counting rate was measured. Then the telescope was placed vertically, and a dense absorber was placed above it to compensate for the decrease in the mass of the air column traversed by the muons. With the losses due to interaction with matter thus equalized, the counting rates in the two cases were different. Knowing the geometric difference in the paths traversed by the muons, their average lifetime was calculated.
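To make the logic of this method explicit, here is a sketch of the inference with hypothetical numbers: the tilt angle and the count rates are invented for illustration, and only the 15-20 km "birth height" echoes the old work. Note that the inferred lifetime scales directly with the assumed height H - precisely the weak point discussed next.
```python
# A sketch of the tilted-telescope inference. All numbers are
# hypothetical illustrations, not data from the historical experiments.
import math

H      = 18e3                 # m, assumed common birth height of the muons
theta  = math.radians(30.0)   # tilt of the telescope from the vertical
v      = 2.994e8              # m/s, near-light muon speed
N_vert = 100.0                # counts/hour, vertical run with compensating absorber
N_incl = 80.0                 # counts/hour, tilted run

dL  = H * (1.0 / math.cos(theta) - 1.0)     # extra path along the slant
tau = dL / (v * math.log(N_vert / N_incl))  # inferred mean lifetime (lab frame)
print(f"extra path {dL/1e3:.1f} km -> inferred lifetime {tau*1e6:.1f} us")
# Double the assumed H and the "measured" lifetime doubles with it.
```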
The weak point here was the unconfirmed assumption that all muons are born at the same height. If this assumption turned out to be wrong, all the results would go to waste. And so it happened: today it is well known that muons are born throughout the entire thickness of the atmosphere penetrated by cosmic-ray protons. But students still perform laboratory work in which they tilt the muon telescope. Now they are told in advance what "birth altitude" of the muons must be taken so that the resulting proper lifetime comes out close to the reference value. Having received top marks for this nonsense, the boys then shout on Internet forums that they "felt with their own hands the increase in the lifetime of muons"!
Where is it, this increase? Here is how the relativists explain it. If the muon's proper lifetime is 2 microseconds then, even moving at the speed of light, it would fly only 600 m - yet it flies many kilometers - which means, only because of an increase in its lifetime! No, do not confuse us. The proper lifetime of a muon is, by your own relativistic standards, the time in the reference frame of the muon itself. But in that reference frame it does not fly kilometers, or even millimeters - because it is at rest there. It is in the laboratory frame that it "flies", and for how long is unknown. What are you comparing, gentlemen, if you take the time in one reference frame and the path in another? Moreover, you apply the relativistic transformation to the time, but not to the path! Can you not do anything without deception? And without deception it is like this: you need to know the lifetime of a muon at rest in the laboratory - then you can estimate how far it would fly in that time. But where would muons at rest in the laboratory come from, when they pierced the telescopes right through?
From this "flight" technique they moved on to a more advanced one - "semi-flight". Two lead absorbers were placed in the telescope: one that slowed muons down and one that stopped them. Scintillators were added, and the coincidence circuits were adjusted so that only those muons were recorded which passed through the first absorber but did not pass through the second. By varying the thickness of the first absorber it was possible to selectively record muons of certain energies - in a "window" whose width was set by the thickness of the second absorber - and thus to obtain data for a fairly wide range of muon energies! However, when working with monoenergetic muons, what was determined was only the ratio of the muon's proper lifetime to its rest mass [F3], and the mass had not yet been precisely established; a strong-willed decision about it had to be made... But a scheme was used that made it possible not to worry about whether all the muons were born at an altitude of 15 or 20 km: the measurements were carried out at two altitudes above sea level, differing by a couple of kilometers, and the corresponding difference in counting rates was interpreted as an indicator of muon decays along this two-kilometer path. All these innovations were applied by Rossi and co-authors [P2]. True, instead of the promised spectrum they for some reason gave only two points, 515 and 972 MeV, for which the proper lifetimes of the muons coincided rather well - which supposedly confirmed "the presence of a relativistic increase in lifetime with increasing energy" [F3]. Was this good coincidence due to the fact that the required difference in counting rates was provided by the corresponding difference in relativistic factors - or simply because there were initially slightly fewer muons with energies of 972 MeV than with energies of 515 MeV? After all, their initial energy distribution was unknown! And the authors did not take into account the birth of muons in the interval between the two altitudes at which the telescope operated... Whichever way you look at it, this problem had far more unknowns than equations. And in such a situation there is no unambiguous solution - a first, a second, a fifth and a tenth all fit. If you like the one that confirms the theory of relativity - choose it!
These highly scientific confirmations by the "flight" and "semi-flight" methods were worthily crowned by the "non-flight" method - with whose help, we are assured, the lifetime of the muon at rest was finally measured. The idea was to use absorbers, in the last of which the muon was guaranteed to get stuck, the end of its life being registered by the escape of a decay electron or positron. As for the moment when the muon began its life... well, no, that was not registered. How can you order it to be registered, if the muon was born God knows where? The only moment that could still be registered was the moment the muon entered the installation, i.e., in effect, the moment it got stuck in the absorber. So statistics were collected on the time intervals between the muon getting stuck in the absorber and the escape of the decay electron or positron. Follow the logic: during this interval the muon, firstly, lived and, secondly, was at rest. This served as the basis for the assertion that what was measured in this way was the lifetime of the muon at rest. Literally, so to speak!
Dear reader, we are not joking. The diagram of the installation and the measurement technique are given not only in the original articles [P2, P3], but also in the same Feinberg [F3] and in the educational literature, for example in [M4] and [L2]. Those interested can verify that everything was done as described above. Let us only clarify that the desired "lifetime" was not found by simple averaging of the recorded time intervals. Statistically, a decreasing exponential dependence of the number of decays on the time interval between entry into the absorber and decay was found - the typical curve describing radioactive decay. So they took the characteristic time over which this exponential falls off by a factor of e, and agreed to call it "the lifetime of the muon at rest". And this value - about 2.2 μs - went into the reference books.
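For illustration, here is a sketch of that statistical step on synthetic data: delays are drawn from an exponential distribution, binned, and the e-folding time is read off a log-linear fit. Nothing here comes from the original articles; it merely shows the procedure just described.
```python
# Extracting an e-folding "lifetime" from stop-to-decay delays.
# Synthetic data standing in for the historical statistics.
import math, random

random.seed(1)
tau_true = 2.2e-6                                   # s, the reference value
delays = [random.expovariate(1.0 / tau_true) for _ in range(100_000)]

# Bin the delays and fit log(count) against t; the slope is -1/tau.
nbins, width = 20, 0.5e-6
counts = [0] * nbins
for t in delays:
    b = int(t / width)
    if b < nbins:
        counts[b] += 1

ts = [(i + 0.5) * width for i in range(nbins)]
ys = [math.log(n) for n in counts]
n = float(nbins)
sx, sy = sum(ts), sum(ys)
sxx = sum(t * t for t in ts)
sxy = sum(t * y for t, y in zip(ts, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(f"fitted lifetime: {-1.0 / slope * 1e6:.2f} us")  # ~ 2.2
```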
All this would be wonderful, were it not for the fact that the muons had lived before they flew into the absorber. If a muon flew down from a height of 20 km then, by the laboratory clock, it covered this path in about 67 μs. Even if we assume that relativistic time dilation exists then, with a relativistic factor of 10, the muon lived "by its own clock" for about 6.7 μs during this flight - i.e., significantly longer than the notorious 2.2 μs. It turns out that the reference value of the lifetime of the muon at rest in no way characterizes the lifetime of the muon "by its own clock". And the results of subsequent experiments - in which, say, with a relativistic factor of 10, a muon lived 22 μs - do not testify to relativistic time dilation at all. These results have no physical meaning whatsoever; their meaning is purely political. The muon was the first unstable particle used to "prove" the existence of relativistic time dilation. After that, lying further became easier.
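The arithmetic of this paragraph, spelled out (the height and the relativistic factor are the illustrative values used in the text):
```python
# Flight time of a muon born at 20 km, by the laboratory clock and
# "by its own clock" under the relativists' assumptions.
c     = 2.998e8        # m/s
H     = 20e3           # m, birth height assumed in the text
gamma = 10.0           # illustrative relativistic factor

t_lab = H / c          # lab-frame flight time at near-light speed
print(f"lab flight time:  {t_lab * 1e6:.1f} us")          # ~ 66.7
print(f"'own clock' time: {t_lab / gamma * 1e6:.1f} us")  # ~ 6.7, already > 2.2
```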
No, really - how is this possible: to assert that a muon lives in the absorber for only 2 microseconds, and that in this time it could not fly far - while knowing full well that the muon spends a quite different, and not at all small, stretch of its life in flight? The theory of relativity is in a very bad way if it has to be "confirmed" with such babble. Truth does not need lies to prop it up. Lies do.

A1. H.E. Ives. Journ. Opt. Soc. Amer., 27, 9 (1937) 305.
B1. A. Brillet, J.L. Hall. Phys. Rev. Lett., 42, 9 (1979) 549.
B2. G. Builder. Australian Journal of Physics, 11 (1958) 279.
V1. S.I. Vavilov. Experimental foundations of the theory of relativity. Collected works, vol. IV, p. 9. M., Publishing House of the USSR Academy of Sciences, 1956.
WEB1. Web resource: martiantime.narod.ru/History/lant1.htm
WEB2. Web resource: epizodsspace.narod.ru/bibl/nk/1992/21/ub-v4.html
WEB3. Web resource: www.incognita.ru/hronik/planet/p_004.htm
G1. A.A. Grishaev. The Michelson-Morley experiment: detection of the local-absolute velocity? Available on the website newfiz.narod.ru
G2. A.A. Grishaev. Are relativistic and gravitational frequency shifts the same for quantum and classical oscillators? Ibid.
I1. R.L. Easton. The role of frequency and time in navigation satellite systems. In the collection "Time and Frequency", M., "Mir", 1973, p. 114. (Translation of Proc. IEEE, 60, 5 (1972), special issue "Time and Frequency".)
K1. V.A. Kotelnikov et al. The radar installation used in the 1961 radar study of Venus. Radio Engineering and Electronics, 7, 11 (1962) 1851.
K2. V.A. Kotelnikov et al. Results of the 1961 radar study of Venus. Ibid., p. 1860.
K3. V.A. Morozov, Z.G. Trunova. The weak-signal analyzer used in the 1961 radar study of Venus. Ibid., p. 1880.
L1. V.I. Levantovsky. Mechanics of space flight in elementary presentation. M., "Science", 1974.
L2. A. Lyubimov, D. Kish. Introduction to experimental particle physics. "Fizmatlit", M., 2001.
M1. A.A. Michelson, E.W. Morley. On the relative motion of the Earth and the luminiferous ether. In the collection "Ethereal Wind", V.A. Atsyukovsky, ed. M., "Energoatomizdat", 1993, p. 17. (The articles of this collection are also available on the Internet: ivanik3.narod.ru)
M2. M. Mourey, S. Galliou, R.J. Besson. Proc. of the 1997 IEEE International Frequency Control Symposium, p. 502. 28-30 May 1997, Orlando, Florida, USA.
M3. A.N. Matveev. Mechanics and the theory of relativity. "Higher School", M., 1976.
M4. K.N. Mukhin. Experimental nuclear physics. Vol. 2. "Atomizdat", M., 1974.
H1. A.I. Naumov. Physics of the atomic nucleus and elementary particles. "Enlightenment", M., 1984.
O1. C. Audoin, B. Guinot. Measuring Time. Fundamentals of GPS. "Technosphere", M., 2002.
P1. J.D. Prestage et al. Phys. Rev. Lett., 54, 22 (1985) 2387.
P1. E. Riis et al. Phys. Rev. Lett., 60, 2 (1988) 81.
P2. B. Rossi et al. Phys. Rev., 61 (1942) 675.
P3. F. Rasetti. Phys. Rev., 59 (1941) 706.
P4. B. Rossi, N. Nereson. Phys. Rev., 62 (1942) 417; 64 (1943) 199.
C1. Web resource: forum.syntone.ru/index.php?act=Print&client=html&f=1&t=14717
T1. J.P. Cedarholm et al. Phys. Rev. Lett., 1 (1958) 342.
T2. T.S. Jaseja et al. Phys. Rev., 133, 5A (1964) 1221.
T3. James Bao-Yen Tsui. Fundamentals of Global Positioning System Receivers: A Software Approach. John Wiley & Sons, Inc., 2000.
F1. Tom Van Flandern. What the Global Positioning System Tells Us about Relativity. metaresearch.org/cosmology/gps-relativity.asp (a Russian translation is available at ivanik3.narod.ru)
F2. U.I. Frankfurt. Special and general theory of relativity. "Science", M., 1968.
F3. E.L. Feinberg. Meson decay. In the collection "Meson", State Publishing House of Technical and Theoretical Literature, M.-L., 1947, pp. 80-113.
X1. D. Hils, J.L. Hall. Phys. Rev. Lett., 64, 15 (1990) 1697.
X2. M.D. Harkins. Radio Science, 14, 4 (1979) 671.
Ch1. D.C. Champeney, G.R. Isaak, A.M. Khan. Phys. Lett., 7, 4 (1963) 241.
E1. L. Essen. Nature, 175, 4462 (1955) 793.
E2. A. Einstein. On the electrodynamics of moving bodies. Collected scientific works, vol. 1. "Science", M., 1965.

Addendum. FINAL PHRASES.

The tragedy of many talented loners who try to rethink, or even amend, the official physical picture of the world is that they do not base their constructions on experimental realities. Talented loners read textbooks, naively believing that textbooks contain facts. Not at all: textbooks present ready-made interpretations of facts, adapted to the perception of the crowd. Worse, these interpretations would look very strange in the light of the genuine experimental picture known to science. Therefore the genuine experimental picture is deliberately distorted - we have given plenty of evidence that the FACTS are partly suppressed and partly distorted. And for what? So that the interpretations look plausible, being in agreement with the official theoretical doctrines. In words, the learned men put it beautifully: we seek, they say, the truth, and the criterion of truth is practice. But in fact their criterion of truth turns out to be the accepted theoretical doctrines. For if the facts do not fit into such a doctrine, it is not the theory that gets redrawn, but the facts. A false theory is confirmed by a false practice. And the pride of the scientists does not suffer: we, they say, have walked the right path, we walk it, and we will keep walking it!

"Ah, so this is just another conspiracy theory!" someone will conclude. "Just think how many scientists, separated by time and space, would have had to collude in order to fool the public like THAT!" This baby talk is familiar to us. No conspiracy whatsoever is needed to fool the public like that. It is simply that every scientist understands: if he "goes against the tide", he puts his reputation, career and funding at risk... "Everything trivial is simple!"

And so representatives of this public ask us: "Why is your new physics needed, instead of the one that exists? After all, everything is fine. Atomic bombs explode! Satellites fly! Mobile phones work!" A caveman probably reasoned in much the same way, warming himself by his fire and roasting his prey on it. "Everything is fine as it is," he thought. "The fire heats! The food gets fried! And never mind that some chemical reactions are going on in that fire!"

The successes of modern technology have almost nothing to do with physical theories. We are all quite familiar with the situation in which something useful could sometimes be done even with buggy, faulty software. It turns out that physical theories can compete with the products of the cool guys from Redmond. Einstein, for example, set physics back a good hundred years with his creations. And the atomic bomb was made not thanks to the theory of relativity, but in spite of it. But the problem is not only Einstein personally - it is also the epigones who, following the master, began vying with one another to impose their far-fetched "axioms" and "postulates" on reality, building "scientific reputations" and making real money on it. Everything is much more serious.

Welcome to the real, that is, “digital” physical world!

Section 1. MAIN CATEGORIES OF THE “DIGITAL” WORLD

1.1. What are we talking about, exactly?

In the history of medicine there was such a clinical case.

Until about the mid-19th century, maternity fever was rampant in obstetric clinics in Europe. In some years, it claimed up to 30 percent or more of the lives of mothers who gave birth in these clinics. Women preferred to give birth on trains and on the streets, rather than end up in a hospital, and when they went there, they said goodbye to their families as if they were going to the chopping block. It was believed that this disease was epidemic in nature; there were about 30 theories of its origin. It was associated with changes in the state of the atmosphere, and with soil changes, and with the location of the clinics, and they tried to treat everything, including the use of laxatives. Autopsies always showed the same picture: death was due to blood poisoning.

F. Pachner gives the following figures: “...over 60 years in Prussia alone, 363,624 women in labor died from maternity fever, i.e. more than during the same time from smallpox and cholera combined... Mortality rate of 10% was considered quite normal, in other words, out of 100 women in labor, 10 died from puerperal fever...” Of all the diseases subjected to statistical analysis at that time, puerperal fever was accompanied by the highest mortality rate.

In 1847, a 29-year-old doctor from Vienna, Ignaz Semmelweis, discovered the secret of puerperal fever. Comparing data in two different clinics, he came to the conclusion that the cause of this disease was the carelessness of doctors who examined pregnant women, delivered babies and performed gynecological operations with unsterile hands and in unsterile conditions. Ignaz Semmelweis suggested washing your hands not just with soap and water, but disinfecting them with chlorine water - this was the essence of the new method of preventing the disease.

Semmelweis’s teaching was not finally and universally accepted during his lifetime; he died in 1865, i.e. 18 years after its discovery, although it was extremely easy to verify its correctness in practice. Moreover, Semmelweis's discovery caused a sharp wave of condemnation not only against his technique, but also against himself (all the luminaries of the medical world of Europe rebelled).

1.2. Sequential or parallel control of physical objects?

Today, even children know something about personal computers. Therefore, as a child’s illustration of the proposed model of the physical world, we can give the following analogy: a virtual reality world on a computer monitor and the software of this little world, which is not on the monitor, but on another level of reality - on the computer’s hard drive. Adhering to the concept of self-sufficiency of the physical world is about the same as seriously asserting that the reasons for the blinking of pixels on the monitor (and how coordinated they blink: pictures fascinate us!) are in the pixels themselves, or at least somewhere between them – but right there, on the monitor screen. It is clear that, with such an absurd approach, in attempts to explain the reasons for these wondrous pictures, one will inevitably have to create illusory entities. Lies will give rise to new lies, and so on. Moreover, confirmation of this stream of lies would seem to be obvious - after all, the pixels, whatever one may say, are blinking!

But, nevertheless, we brought this computer analogy for lack of a better one. It is very unsuccessful, since software support for the existence of the physical world is carried out according to principles, the implementation of which in computers today is prohibitively unattainable.

The fundamental difference here is the following. The computer has a processor that, for each working cycle, performs logical operations with the contents of a very limited number of memory cells. This is called “sequential access mode” - the larger the size of the task, the longer it takes to complete it. You can increase the processor clock frequency or increase the number of processors themselves - the principle of sequential access remains the same. The physical world lives differently. Imagine what would happen in it if the electrons were controlled in sequential access mode - and each electron, in order to change its state, would have to wait until all the other electrons were polled! The point is not that the electron could wait if the “processor clock frequency” was made fantastically high. The fact is that we see: countless numbers of electrons change their states simultaneously and independently of each other. This means that they are controlled according to the principle of “parallel access” - each individually, but all at once! This means that a standard control package is connected to each electron, in which all the envisaged behavior options for the electron are spelled out - and this package, without contacting the main “processor,” controls the electron, immediately responding to the situations in which it finds itself!

Here, imagine: a sentry is on duty. An alarming situation arises. The sentry grabs the phone: “Comrade captain, two big guys are coming towards me!” What to do?" - and in response: “The line is busy... Wait for an answer...” Because the captain has a hundred such slobs, and he explains to everyone what to do. Here it is, “sequential access”. Too centralized control, which turns into a disaster. And with “parallel access,” the sentry himself knows what to do: all conceivable scenarios were explained to him in advance. "Bang!" - and the alarming situation is resolved. Would you say that this is “stupid”? What is "automatic"? But that’s where the physical world stands. Where have you seen an electron decide whether to turn right or left while flying next to a magnet?

Of course, it is not only the behavior of electrons that is controlled by individually connected software packages. The structure-forming algorithms, thanks to which atoms and nuclei exist, also operate in parallel access mode. And even for each quantum of light, a separate channel of the navigator program is allocated, which calculates the “path” of this quantum.

1.3. Some operating principles of physical world software.

The provision of the existence of the physical world with software is a death sentence for many models and concepts of modern theoretical physics, since the functioning of software occurs according to principles, the consideration of which limits the flight of theoretical fantasies.

First of all, if the existence of the physical world is software-supported, then this existence is completely algorithmic. Any physical object is the embodiment of a clear set of algorithms. Therefore, an adequate theoretical model of this object is, of course, possible. But this model can only be based on correct knowledge of the corresponding set of algorithms. Moreover, an adequate model must be free from internal contradictions, since the corresponding set of algorithms is free from them - otherwise it would be inoperative. Likewise, adequate models of various physical objects must be free from contradictions among themselves.

Of course, until we have complete knowledge of the entire set of algorithms that ensure the existence of the physical world, contradictions in our theoretical views on the physical world are inevitable. But a decrease in the number of these contradictions would indicate our progress towards the truth. In modern physics, on the contrary, the number of blatant contradictions only increases with time - and this means that the progress taking place here is not at all towards the truth.

What are the basic principles of organizing the software of existence of the physical world? There are programs that are a set of numbered command statements. The sequence of their execution is determined, starting with the “Start work” operator and ending with the “Finish work” operator. If such a program, while running, does not get stuck in a bad situation like a loop, then it will certainly get to the “end” and stop successfully. As you can see, it is impossible to build software that can function uninterruptedly indefinitely using programs of this type alone. Therefore, the software of the physical world, as one can assume, is built on the principles of event handlers, i.e. according to the following logic: if such and such preconditions are met, then this is what to do. And if other preconditions are met, do this. And if neither one nor the other is met, do nothing, keep everything as it is! Two important consequences follow from this.

Firstly, from the work on preconditions it follows

1.4. The concept of a quantum pulsator. Weight.

To create the simplest digital object on a computer monitor screen, you need, using a simple program, to make a pixel “blink” with a certain frequency, i.e. alternately be in two states - in one of which the pixel glows, and in the other it does not glow.

Similarly, we call the simplest object of the “digital” physical world a quantum pulsator. It appears to us as something that is alternately in two different states, which cyclically replace each other with a characteristic frequency - this process is directly set by the corresponding program, which forms a quantum pulsator in the physical world. What are the two states of a quantum pulsator? We can liken them to logic one and logic zero in digital devices based on binary logic. The quantum pulsator expresses, in its purest form, the idea of ​​being in time: the cyclic change of two states in question is an indefinitely long movement in its simplest form, which does not at all imply movement in space.

The quantum pulsator remains in existence while the chain of cyclic changes of its two states continues: tick-tock, tick-tock, etc. If a quantum pulsator “freezes” in the “tick” state, it falls out of existence. If he “hangs” in the “like this” state, he also falls out of existence!

The fact that a quantum pulsator is the simplest object of the physical world, i.e. an elementary particle of a substance means that the substance is not divisible to infinity. The electron, being a quantum pulsator, does not consist of any quarks - which are the fantasies of theorists. A qualitative transition occurs on a quantum pulsator: from the physical level of reality to the software level.

Like any form of motion, quantum ripples have energy. However, a quantum pulsator is fundamentally different from a classical oscillator. Classical oscillations occur “in a sinusoid”, and their energy depends on two physical parameters - frequency and amplitude - the values ​​of which can vary. For quantum pulsations, obviously, the amplitude cannot change – i.e. it cannot be a parameter on which the energy of quantum pulsations depends. The only parameter on which energy depends

1.5. The unsuitability of the concept of relative velocities for describing the realities of the physical world.

“The speeds of movement of bodies are relative, and it is impossible to say unambiguously who is moving relative to whom, because if body A moves relative to body B, then body B, in turn, moves relative to body A...”

These conclusions, implanted in us since school, look impeccable from a formal logical point of view. But, from a physical point of view, they would only be suitable for an unreal world in which there are no accelerations. It is not without reason that Einstein taught that STR is valid only for reference systems (FR), “moving relative to each other rectilinearly and uniformly” [E1] - however, he did not indicate any such practical reference system. So far there has been no progress on this issue. Isn’t it funny that, for more than a hundred years, the basic theory of official physics has not specified a practical area of ​​applicability?

And the reason for this anecdotal situation is very simple: in the real world, due to physical interactions, acceleration of bodies is inevitable. And then, trampling on formal logic, the movement takes on an unambiguous character: the Earth revolves around the Sun, a pebble falls on the Earth, etc. For example, the uniqueness of the kinematics when a pebble falls on the Earth - i.e., the non-physicality of the situation in which the Earth falls on a pebble - is confirmed based on the law of conservation of energy. Indeed, if when a pebble collides with the Earth, the collision speed is

That is, the kinetic energy that can be converted into other forms is half the product of the square of the speed

the mass of a pebble, but certainly not the mass of the Earth. This means that it was the pebble that gained this speed, i.e. the named case is adequately described in the CO associated with the Earth. But this turn of events did not suit the relativists. In order to save the concept of relative velocities, they agreed to the point that, for the named case, CO associated with a pebble is supposedly no worse than CO associated with the Earth. True, in the CO associated with the pebble, the Earth moves with acceleration

and, picking up speed

Moreover, if we remember that real energy transformations must occur unambiguously (

By the way, the uniqueness of the increments of the kinetic energy of the test body, in accordance with the increments of its “true” speed, would be very problematic if the body were attracted to several other bodies at once and, accordingly, would acquire the acceleration of gravity to several attracting centers at once - such as requires the law of universal gravitation. For example, if an asteroid would experience gravity towards both the Sun and the planets, then what is the “true” speed of the asteroid, the increments of which determine the increments of its kinetic energy? The question is not trivial. And, in order not to suffer with it, it is much easier to delimit the areas of action of gravity of the Sun and planets in space - so that the test body, no matter where it is, always gravitates only towards one attracting center. To do this, it is necessary to ensure that the areas of influence of planetary gravity do not intersect with each other, and that in each area of ​​​​planetary gravity solar gravity is “turned off”. With such an organization of gravity, i.e. according to the principle of its unitary action (

Section 2. ORGANIZATION OF GRAVITY IN THE “DIGITAL” WORLD

2.1. Do you believe that gravity is generated by masses?

The law of universal gravitation, as Newton formulated it, was purely postulate. Based on observations of the movement of celestial bodies and the fall of small bodies to Earth, it was declared that any two masses in the Universe are attracted to each other with a force equal to

Gravitational constant,

Masses attracting each other,

The distance between them. Few people know: from the accelerations of free fall to large cosmic bodies - to the Sun and planets - only the products of the gravitational constant are determined

on the masses of these bodies, but these masses themselves are by no means determined. If the accepted value

would be, say, twice as large, and the accepted masses of the Sun and planets would be half as large (or vice versa) - this would not in any way affect the results of a theoretical analysis of the motion of bodies in the Solar System. That is, the accepted values ​​of the masses of the Sun and planets are dictated by the accepted value of the gravitational constant. But whether these accepted mass values ​​coincide with their true values, corresponding to the amount of matter in the Sun and planets, is still unknown to science.

Why did Newton put the product of masses into formula (2.1.1)? – it’s on his conscience. But it became like this: more mass - stronger attraction to it, less mass - weaker attraction to it, no mass at all - no attraction to it at all... So, what generates this attraction? Of course, by mass - this is purely mathematically clear!

But physically this was not at all clear. Newton did not explain what caused the mutual attraction of massive bodies. All he said about this was that massive bodies act on each other at a distance through some intermediary. But to speculate about the nature of this mediator would mean resorting to hypotheses - and, as Newton believed, he “did not invent hypotheses.”

2.2. How Cavendish and his followers obtained “attraction” between laboratory blanks.

It is believed that the first experiment that proved the existence of gravitational attraction between laboratory discs is the famous Cavendish experiment (1798). It would seem that, in view of the exceptional importance of this experience, its technical and methodological details should be easily accessible. Learn, students, how to conduct fundamental experiments! But it was not there. Students are fed an obscenely adapted version. They say that Cavendish used a torsion balance: a horizontal beam with weights at the ends, suspended from its center on a thin elastic string. It can rotate in a horizontal plane, twisting the elastic suspension. Cavendish allegedly brought a pair of blanks closer to the rocker weights - from opposite sides - and the rocker turned at a small angle, at which the moment of gravitational attraction of the weights to the blanks was balanced by the elastic reaction of the suspension to twisting. That's it, guys! Got it? Well done! Five points for everyone! Don't bother with the details!

But this is strange, damn it! Even in specialized publications like [C1], the details of Cavendish’s experience are not presented! It’s fortunate that we managed to get to them in a book on the history of physics [G1], where a translation of the original source is given - the work of Cavendish himself. This is some kind of wonderful dream. The technique used by Cavendish clearly shows that there was no sign of gravitational attraction of the blanks!

Look: the Cavendish torsion balance is a highly sensitive system that performs long-period and high-quality free oscillations. They are difficult to calm down. Therefore, the idea of ​​the experiment was as follows: after moving the blanks from the far “non-attracting” position to the near “attracting” one, the rocker had to continue its oscillations - turning so that the average positions of the weights approached the blanks.

And how did this idea come to fruition? Yes, I had to puff! Initial position: the rocker arm oscillates, and the blanks are in a distant, “non-attracting” position. If it is expected that, as a result of their movement to the near position, the rocker arm will rotate to a new average position of oscillations, then when should the blanks be moved so that this rotation of the rocker arm appears in its purest form? Of course, when the rocker arm passes the current average position and moves towards the expected turn. That's exactly what was done. And - oh, miracle! – the rocker began to turn. It would seem - wait until a new average position is revealed, and it’s done! But no. Here's what Cavendish wrote:

There is reason to believe that Cavendish’s “secret of success” was associated with microvibrations, under the influence of which the parameters of the torsion balances changed, so that the scales changed their behavior. This change is as follows. Let, when the rocker arm passes the middle position, microvibrations begin - for example, at the bracket to which the rocker arm suspension is attached. The experience of using vibrations in technology [B1] shows that under the influence of microvibrations, the effective rigidity of the suspension should decrease: the string will soften, as it were. And this means that the rocker will deviate from the average position by a significantly greater amount than with free deflection without microvibrations. Moreover, if this increased deviation does not exceed a certain critical value, then another interesting effect will be possible. Namely: if the microvibrations stop before the rocker reaches its maximum deflection, then free vibrations will resume with the same amplitude, but with a shifted average position. Moreover, this effect will be reversible: with a new suitable addition of microvibrations, it will be possible to return the rocker oscillations to their previous average position. Thus, the behavior of Cavendish's torsional balances could well be due to just suitable additions of microvibrations to the torsional vibrations of the rocker arm.

2.3. What does the geoid shape tell us?

If the Earth were a homogeneous ball, then, according to the law of universal gravitation, the gravitational force acting on a test body near the surface of the Earth would depend only on the distance to its center. But the Earth is an oblate ellipsoid, having a so-called “equatorial convexity”. The equatorial radius of the Earth is approximately 6378.2 km, and the polar radius is 6356.8 km [A1]. Due to the fact that the equatorial radius of the Earth is greater than the polar one, the gravitational force at the equator should be slightly less than at the pole. Moreover, it is believed that the shape of the geoid is hydrodynamically equilibrium, i.e. that the equatorial bulge was formed not without the help of centrifugal forces caused by the Earth’s own rotation. If we find the increment Δ

equatorial radius from the condition that the resulting decrease in gravitational acceleration at the equator is equal to the centrifugal acceleration at the equator, then for Δ

we get a value of 11 km [G3]. Note that if the globe turns into an oblate ellipsoid while maintaining its volume, then, in accordance with the formula for the volume of an ellipsoid, an increase in the equatorial radius by 11 km will cause a decrease in the polar radius by the same 11 km. The final difference will be 22 km, i.e. a value close to the actual one. This means that the model of the hydrodynamically equilibrium shape of the geoid is very similar to the truth.

Now let us pay attention to the fact that in our calculations we did not take into account the gravitational effect of the substance located in the volume of the equatorial bulge - this action, if it had taken place, would not be the same in gravimetric measurements at the equator and at the pole. In gravimetric measurements at the pole, the effect of the entire equatorial bulge would be an order of magnitude less than the effect of a small characteristic part of the equatorial bulge adjacent to the measurement point at the equator. Therefore, due to the presence of the equatorial bulge, the force of gravity at the equator would be further increased compared to the force of gravity at the pole - and hence the equilibrium increase in the equatorial radius Δ

Thus, if the equatorial bulge had an attractive effect, then the hydrodynamically equilibrium shape of the geoid would differ markedly from the actual one. But these noticeable differences are not observed. From this we conclude: hundreds of trillions of tons of matter in the equatorial bulge of the Earth do not have an attractive effect.

This amazing, “surface” conclusion has not yet been disputed by anyone. Unless the ballisticians who calculate the movement of artificial Earth satellites assured us that they take into account, in their calculations, the gravitational effect of the equatorial bulge. Well, what can you do? We know that when optimizing many parameters, this is exactly what they do: they take into account non-existent effects. Everything is fine!

2.4. Stunning results of gravimetric measurements.

The surface masses of the Earth are distributed non-uniformly. There are powerful mountain ranges there, with a rock density of about three tons per cubic meter. There are oceans in which the density of water is only a ton per cubic meter - even at a depth of 11 kilometers. There are valleys below sea level - in which the density of matter is equal to the density of air. According to the logic of the law of universal gravitation, these inhomogeneities in the distribution of masses should act on gravimetric instruments.

The simplest gravimetric instrument is a plumb line - when calmed down, it is oriented along the local vertical. Attempts have long been made to detect deviations of the plumb line due to the attraction of, for example, powerful mountain ranges. Only the role of a plumb line here was, of course, not played by a simple weight on a string - for how can one know where and how much it is deflected? And the method used was to compare the geodetic coordinates of the measurement point (obtained, for example, using triangulation) and its coordinates obtained from astronomical observations. Only the second of these methods uses reference to the local vertical, which is implemented, for example, using the mercury horizon at the telescope. Thus, by the difference in the coordinates of a point obtained by these two methods, one can judge the deviation of the local vertical.

So, the resulting deviations in most cases turned out to be much less than those expected due to the action of mountain ranges. Many textbooks on gravimetry (see, for example, [Ts1,Sh1]) mention measurements that were carried out by the British south of the Himalayas in the mid-19th century. Record evasions were expected there, because the most powerful mountain range Earth, and from the south - Indian Ocean. But the detected deviations turned out to be almost zero. A similar behavior of the plumb line is found near the sea coastline - contrary to the expectation that land, which is denser than seawater, will attract the plumb line more strongly. To explain such miracles, scientists adopted the isostasy hypothesis. According to this hypothesis, the effect of inhomogeneities of surface masses is compensated by the action of inhomogeneities of the opposite sign located at a certain depth. That is, under the surface dense rocks there should be loose rocks, and vice versa. Moreover, these upper and lower inhomogeneities should, by joint efforts, everywhere nullify the effect on the plumb line - as if there were no inhomogeneities at all.

You know, when the readers of our articles reached the passages about isostasy, they did not believe the possibility of such babble in modern science, they rushed, for example, to Wikipedia - and were convinced that everything was so. And - as they put it - “the patztuls fell from laughter.” Well, really: the deeper the ocean, the more powerful the dense compensating deposits under its bottom. And the higher the mountains, the more and more loose their foundation they appear on. Moreover, everything is perfect! Even children find it funny! But children do not yet know that the concept of isostasy directly contradicts the realities of the dynamics of the earth’s crust [M1] - otherwise they would laugh even louder.

Note that deviations of the plumb line indicate the horizontal components of the local gravity vector. Its vertical component is measured with gravimeters. The same miracles happen with gravimeters as with plumb lines. But measurements with gravimeters are very numerous. Therefore, so as not to become a laughing-stock, the experts have piled up a terminological and methodological jungle through which it is difficult for the uninitiated to find their way.

2.5. Where is the attractive effect of small bodies of the Solar System?

In the Solar System, the Sun, the planets and the Moon clearly have their own gravity; so too, judging by the presence of an atmosphere, does Titan. As for the remaining satellites of the planets, we find the following.

Firstly, even in the case of the largest satellites (including Titan), the dynamic reaction of their planets - which, in accordance with the law of universal gravitation, should revolve around a common center of mass with the satellite - has not been detected (the expected magnitude of this reaction is estimated in the sketch following this list).

Secondly, the presence of atmospheres would indicate that the planets' satellites have their own gravity. But, with the exception of Titan, no clear signs of an atmosphere have been found on any of them.

Thirdly, not one of the six dozen planetary satellites known to date has been found to have a single satellite of its own. In the light of probability theory, this state of affairs looks rather strange.

Fourthly, there are the so-called dynamic determinations of satellite masses, based on the axiom that the satellites of one planet will certainly disturb each other's motion. If in reality the satellites do not attract each other, then dynamic determinations of their masses are attempts to solve an incorrectly posed problem. And the signs of this are indeed evident: the results of applying this technique turn out to be vague and ambiguous. Here are comments on de Sitter's determination of the masses of the four large satellites of Jupiter, based on the periodic solution he obtained: “The actual orbits of the satellites do not correspond exactly to the periodic solution, but can be obtained from the periodic solution by varying the coordinates and velocity components... the difficulty is the slow convergence of the analytical expansion in powers of mass” [M2]. However, the mass values, “[…]” [D1]. The “most probable” values of satellite masses chosen here - from a set of non-repeating values - can hardly serve […]

Sections 4 and 5 of the book are devoted to this topic. Paragraph 4.1 largely repeats paragraph 1.4, which introduced the concept of the quantum pulsator. It is an elementary electric charge - an electron - oscillating with a frequency f and having the energy E = hf, where h is Planck's constant. This Planck energy is equated to the “intrinsic energy of an elementary particle,” i.e. to the “Einstein formula,” resulting in the “Louis de Broglie formula”: E = hf = mc². The frequency of the quantum pulsations is 1.24·10^20 Hz, if the electron mass is taken to be 9.11·10^–31 kg. The size of the pulsator is determined by the Compton wavelength λ = h/mc, which is about 0.024 angstrom.
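Both numbers follow directly from the quoted relations E = hf = mc² and λ = h/mc; a minimal arithmetic check, with standard values of the constants:

    # Electron "pulsation frequency" and Compton wavelength implied by
    # E = h*f = m*c^2 and lambda = h/(m*c).

    h = 6.626e-34    # Planck's constant, J*s
    c = 2.998e8      # speed of light, m/s
    m_e = 9.11e-31   # electron mass, kg

    f = m_e * c**2 / h    # frequency of the "quantum pulsations", Hz
    lam = h / (m_e * c)   # Compton wavelength, m

    print(f"f      = {f:.3e} Hz")                  # ~1.24e20 Hz
    print(f"lambda = {lam / 1e-10:.4f} angstrom")  # ~0.0243 angstrom

So the arithmetic itself is unobjectionable; the dispute below is entirely about the interpretation.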

Despite the familiar appearance of these formulas, their interpretation according to Grishaev is very different from the one accepted in physics. Comprehensive explanations are given at the beginning of paragraph 1.4: “To create the simplest digital object,” writes Grishaev, “on the screen of a computer monitor, you need, using a simple program, to make a pixel ‘blink’ with a certain frequency, i.e. alternately be in two states - in one of which the pixel glows, and in the other it does not.

Similarly, we call the simplest object of the “digital” physical world a quantum pulsator. It appears to us as something that is alternately in two different states, which cyclically replace each other with a characteristic frequency - this process is directly set by the corresponding program, which forms the quantum pulsator in the physical world.

What are the two states of a quantum pulsator? We can liken them to the logical one and the logical zero in digital devices based on binary logic. The quantum pulsator expresses, in its purest form, the idea of existence in time: the cyclic alternation of the two states in question represents indefinitely long motion in its simplest form, which does not at all imply motion in space.

The quantum pulsator remains in existence while the chain of cyclic changes of its two states continues: tick-tock, tick-tock, and so on. If a quantum pulsator “freezes” in the “tick” state, it falls out of existence. If it “freezes” in the “tock” state, it also falls out of existence!

That the quantum pulsator is the simplest object of the physical world, i.e. an elementary particle of matter, means that matter is not divisible to infinity. The electron, being a quantum pulsator, does not consist of any quarks - which are the fantasies of theorists. At the quantum pulsator, a qualitative transition occurs from the level of physical reality to the level of the program” (1.4).
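Since the author's own illustration is a pixel made to “blink” by a simple program, here is a toy rendering of that picture; the function name and the use of a Python generator are ours, purely for illustration:

    import itertools

    def quantum_pulsator(f_hz):
        """Toy two-state cycle: yields (time, state) pairs, flipping
        between "tick" and "tock" every half-period."""
        half_period = 1.0 / (2 * f_hz)
        for n in itertools.count():
            yield n * half_period, "tick" if n % 2 == 0 else "tock"

    # First few states of a (very slow) 1 Hz pulsator:
    for t, state in itertools.islice(quantum_pulsator(1.0), 4):
        print(f"t = {t:.2f} s: {state}")

A generator that stopped yielding new states would, in this picture, “fall out of existence.”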

So, according to Grishaev, the quantum pulsator is something extremely speculative, in which “a qualitative transition occurs from the level of physical reality to the level of the program.” It thus expresses the idea of time and at the same time represents a physical object with spatial dimensions equal to the Compton wavelength.

Is this possible, the reader will ask? It is, if we are dealing with a religious picture of the world. The program level, as we already know, is the domain of the Lord God. And according to the view just outlined, the Creator enters the real world and controls it through the quantum pulsator.

Divine miracles appear as soon as the sign of a charge is introduced. After all, electricity can be negative and positive. What is the difference? “Positive charges ‘pulse’ in phase with one another,” writes Grishaev, “and negative charges ‘pulse’ in phase with one another, but these two pulsations are shifted in phase by 180° relative to each other” (4.1).

The author explains: “...Quantum pulsations at the electronic frequency - with the phase of a positive or of a negative charge - do not themselves generate any interactions at a distance. These pulsations of a particle are only a label, an identifier, for the software package that controls free charged particles in such a way that the illusion of their interaction with each other is created for us. If a particle carries the identifier of a positive or negative charge, then it is covered by the control of this software package. The algorithms of this control of free charges are, in brief, as follows.

Firstly, move in such a way [the Creator commands the charges] that deviations from the equilibrium spatial distribution of charges are evened out - the distribution in which the average density of positive charges is everywhere equal to the average density of negative charges (though the value of this density may differ from place to place). The equalization of the volume densities of opposite charges is the manifestation of the action of “electric forces.”

Secondly, move in such a way [the Creator again commands the charges] that, as far as possible, the collective movements of the charges are compensated, i.e. that electric currents are compensated. The compensation of collective movements of charges is the manifestation of the action of “magnetic forces.” The electromagnetic phenomena occurring according to these algorithms are energetically provided for by the fact that part of the particles' own energy is converted into their kinetic energy” (1.4).

The orders of the Creator appear immediately once the author of the New Physics has renounced the principle of the self-sufficiency of the physical world, as was mentioned at the very beginning of this critical review. Along with this renunciation, supernatural forces appear, in the form of a software package implementing whatever algorithm for controlling electric charges Grishaev (who here also plays the part of the Lord God) requires.

The picture of the world that opened before the author's eyes was so simple and understandable to him that he readily declared all the other properties inherent in the electron non-existent. For example, it is known that the electron has spin. No, says Grishaev, “electron spin is a joke of theorists” (the title of section 4.2). This characteristic of the elementary charge, introduced by Pauli, has no adequate spatial-mechanical image; therefore, it does not exist. The experiment of Stern and Gerlach was, on this view, interpreted incorrectly by the theorists Goudsmit and Uhlenbeck.

Another error, he claims, arose when, in the experiment of Davisson and Germer, the electron was presented in the form of a wave. This cannot be, says Grishaev; they interpreted the results incorrectly: “Davisson and Germer did not discover any ‘wave properties’ of electrons. Their results appear to be a special case of a phenomenon well known to specialists in low-energy electron diffraction” (4.3). According to the author, the experimenters were misled by additional electrons from secondary emission, which produced a diffraction pattern as if the incident electrons were waves.

A proton, according to Grishaev, is as simple as an electron. “Let quantum pulsations at a frequency f be modulated by interruptions at a frequency B (B < f). Let the duty cycle of the interruptions be 50%, i.e., in each interruption period, during its first half-period, the quantum pulsations at the frequency f take place, and during its second half-period these pulsations are absent. Quantum pulsations modulated in this way, having a frequency f, are in existence only half the time. But their energy is not thereby reduced by half, as it might seem at first glance. According to the unusual laws of the ‘digital’ world, the energy of modulated quantum pulsations is, as we believe, reduced by the energy corresponding to the interruption frequency:

E_mod = hf – hB” (4.6)

These laws are not just unusual, as the author writes; they are pulled entirely out of thin air. Grishaev does not know how to calculate the energy spectrum of an infinite train of rectangular pulses. As already mentioned, the simplicity of the formulas and of the corresponding primitive graphical interpretation shown in Fig. 4.6 (here and below the numbering of figures follows the book) in no way guarantees their truth. Any explanation of physical phenomena (in particular, the mass defect, the creation and annihilation of electron-positron pairs, etc.) by means of these artificial models of elementary particles will look arbitrary and erroneous.

“Unlike the electron and the positron, the proton has two frequencies of quantum pulsations: the nucleonic one, which corresponds almost completely to the mass of the proton, and the electronic one, whose presence means that the proton carries an elementary electric charge - with the phase corresponding to a positive charge. The presence of two components in the spectrum of the proton's quantum pulsations means that it has two corresponding characteristic sizes. But there are no subparticles in the proton: it cannot be said to be a compound of, for example, a massive neutral core and a positron. As you can see, the combination of two characteristic quantities in the proton - a mass almost 2000 times greater than that of the electron, and an elementary charge - is realized in the simplest way possible by the logic of the ‘digital’ world: through the modulation of quantum pulsations. The positive charge here is not attached to a large neutral mass, but is ‘sewn into’ it through modulation” (4.6).
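To make the rule E_mod = hf – hB concrete, here is a minimal numeric sketch; identifying B with the electronic frequency and E_mod with the proton's intrinsic energy is our reading of the passage above, not an explicit formula of the author's.

    # Numeric reading of E_mod = h*f - h*B for the proton, taking B as the
    # "electronic" frequency and E_mod = m_p * c^2 (our interpretation).

    h = 6.626e-34     # Planck's constant, J*s
    c = 2.998e8       # speed of light, m/s
    m_p = 1.673e-27   # proton mass, kg
    m_e = 9.11e-31    # electron mass, kg

    B = m_e * c**2 / h    # electronic frequency, ~1.24e20 Hz
    E_mod = m_p * c**2    # proton's intrinsic energy, J

    # Solving E_mod = h*f - h*B for the nucleonic carrier frequency:
    f = E_mod / h + B

    print(f"B (electronic) = {B:.3e} Hz")
    print(f"f (nucleonic)  = {f:.3e} Hz")
    print(f"f/B = {f / B:.0f}")   # ~ m_p/m_e + 1, roughly 1837

The ratio f/B comes out close to m_p/m_e + 1, consistent with the quoted remark that the nucleonic frequency “almost completely corresponds to the mass of the proton.”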

Just as the gravitational fields of the Earth, the Sun and other celestial bodies were limited by the unitary principle, Grishaev limits in a similar way the action of the electric fields of the electron and the proton. For them he introduces a special “algorithm that forms atomic proton-electron bonds.” This principle “implies that a quantum pulsator can be bound, over a given interval of time, to only one partner.” “Thus, a neutral atom consists of stationary proton-electron bonds,” whose number equals the atomic number. These bonds hold together because the protons are dynamically bound in the nucleus, and “an important role in the dynamic structure of the nucleus is played by neutrons” (4.9). Fig. 4 shows a time diagram of the hydrogen atom.

“Therefore,” explains Grishaev, “we share neither the Rutherfordian approach, according to which atomic electrons revolve around the nucleus, nor the quantum-mechanical approach, according to which they are smeared out into electron clouds. The forces that form atomic proton-electron bonds are not forces of attraction or repulsion: they are forces of retention at a certain distance. We believe that each atomic electron resides in an individual confinement region, in which the above-mentioned mechanism of connecting interruptions acts on it. This confinement region apparently has a spherical shape and a size an order of magnitude smaller than its distance from the nucleus” (4.9).

One may, of course, decline to accept the Bohr-Rutherford planetary model of the atom. Nevertheless, it was on its basis that a formula was obtained for the frequencies emitted or absorbed by the hydrogen atom:

f_mn = (E_n – E_m) / h = R (1/m² – 1/n²), where m < n and R ≈ 3.29·10^15 Hz is the Rydberg frequency.

[Diagram: the electron energy levels in the hydrogen atom, consistent with the formula above; more on these matters in the sections “Bohr atom model” and “Schrödinger equation”.]
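For comparison, here is what any model of the hydrogen atom is expected to reproduce: the first Balmer lines, computed from the formula above. R is the standard Rydberg frequency; the loop bounds are chosen simply to show the first four lines.

    # First lines of the Balmer series (transitions n -> m = 2), from
    # f_mn = R * (1/m^2 - 1/n^2), with R the Rydberg frequency.

    R = 3.29e15   # Rydberg frequency, Hz
    c = 2.998e8   # speed of light, m/s
    m = 2         # lower level of the Balmer series

    for n in range(3, 7):
        f = R * (1 / m**2 - 1 / n**2)   # emitted frequency, Hz
        wavelength_nm = c / f * 1e9
        print(f"n = {n} -> m = {m}: {wavelength_nm:.0f} nm")

These are the familiar lines at roughly 656, 486, 434 and 410 nm.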

Based on the Grishaev model (Fig. 4.6), how can the energy spectra - for example, the Balmer series - be explained? Answer: in no way! And this is impossible precisely because of the model's primitiveness, i.e. its vaunted simplicity. However, let us continue to quote the author of the digital theory.

“The neutron, in our opinion,” writes Grishaev, “is precisely a compound, but a compound whose membership is forcibly renewed cyclically: the pair ‘proton plus electron’ is replaced by the pair ‘positron plus antiproton’, and vice versa. Fig. 4.10 schematically shows the ‘tracks’ of the resulting quantum pulsations, taking into account their phase relationships. The envelope of one of these tracks defines a positive electric charge, and the envelope of the other a negative one. The high-frequency filling, i.e. the nucleon pulsations, is thrown from one envelope to the other with a frequency half the electronic one. During those periods of the electronic frequency when the nucleon pulsations are in the ‘positive track’, the pair composing the neutron is a proton and an electron, and during those periods when the nucleon pulsations are in the ‘negative track’, a positron and an antiproton” (4.9).

“Fig. 4.12 schematically illustrates the optimal phase relationships in the interruptions of the pulsations of a proton and of the two neutrons with which it is bound” (4.12).

“When the duty cycle shifts to one side or the other of the central value, a charge arises, owing to the dominance in existence of a charge of one sign or the other. The approach outlined is schematically illustrated in Fig. 5.1.1, where for each period of the interruptions connecting a proton and an electron the corresponding duty cycle is indicated, as a percentage” (5.1).

Fig. 5.4 shows one period of the “thermal oscillations” in a valence bond.

The further content of the “new physics” comes down to tying known physical phenomena to this program-based representation of the electron, the proton and the neutron. As the reader dives deeper and deeper into this strange science, he understands better and better how the author has become a hostage of his own starting principles. Moreover, if the facts contradict the Creator's control algorithms, then, the author believes, so much the worse for the facts.

Remember, Grishaev wrote: “if the facts do not fit into such an [official] doctrine, then it is not the theory that is redrawn, but the facts” (Add.). Now he himself performs the same execution on defenseless facts. His digital theory seems simple and consistent to him. And if experiments contradict it, then, the author assures us, they were carried out or interpreted with violations.

Conclusion: be thrice wary, dear reader, when someone claims that this or that concept is confirmed by experiment or even by practice.
