



The World's First View of SAKURAI's Precise Subatomic Spectroscopy, and a Comparison of These Antimatter Elements Against Their Hydrogen and Helium Counterparts



The speed of light in vacuum, commonly denoted c, is a universal physical constant important in many areas of physics. Its precise value is 299,792,458 metres per second (approximately 3.00×10^8 m/s), since the length of the metre is defined from this constant and the international standard for time. According to special relativity, c is the maximum speed at which all matter, and hence information, in the universe can travel. It is the speed at which all massless particles and changes of the associated fields (including electromagnetic radiation such as light, as well as gravitational waves) travel in a vacuum. Such particles and waves travel at c regardless of the motion of the source or the inertial reference frame of the observer. In the theory of relativity, c interrelates space and time, and also appears in the famous equation of mass–energy equivalence E = mc^2.


The speed at which light propagates through transparent materials, such as glass or air, is less than c; similarly, the speed of radio waves in wire cables is slower than c. To slow light even further, especially for the image capture shown here, SAKURAI has redefined the way electrons and light may travel, by the use of a medium called S1A3 (see also SAKURAI S1A3) and a process called Initial Boundary Reduction (reducing the electron orbital diameter). The ratio between c and the speed v at which light travels in a material is called the refractive index n of the material (n = c / v). For example, for visible light the refractive index of glass is typically around 1.5, meaning that light in glass travels at c / 1.5 ≈ 200,000 km/s; the refractive index of air for visible light is about 1.0003, so the speed of light in air is about 299,700 km/s (about 90 km/s slower than c). In a medium such as S1A3, the velocity of an electron can now be reduced to c / 99.2 without any boundary effects such as atomic instability, and without driving reactions such as nuclear fission and/or fusion. For this to happen, a containment field was provided using CERN's ALPHA facility, creating an environment that may be described as similar to the core of the Sun, but sustained for only short periods.
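
The refractive-index arithmetic above can be checked in a few lines. The glass and air values are standard; the c / 99.2 factor for the S1A3 medium is taken from the text and is treated here purely as an assumption:

```python
# Speed of light in a medium: v = c / n, where n is the refractive index.
# Glass and air values are standard; the S1A3 factor (n = 99.2) is quoted
# from the text above and is an assumption of this sketch.

C = 299_792_458  # speed of light in vacuum, m/s (exact by definition)

def speed_in_medium(n):
    """Phase velocity of light in a medium of refractive index n, in m/s."""
    return C / n

glass = speed_in_medium(1.5)     # ~200,000 km/s
air   = speed_in_medium(1.0003)  # ~90 km/s below c
s1a3  = speed_in_medium(99.2)    # hypothetical S1A3 medium from the text

print(f"glass: {glass/1000:.0f} km/s")
print(f"air:   {air/1000:.0f} km/s")
print(f"S1A3:  {s1a3/1000:.0f} km/s")
```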


Further, a cryogenics attachment and stochastic cooling provided the low temperatures needed for high performance and good results.


For many practical purposes, electrons, light, and other electromagnetic waves appear to propagate instantaneously, but over long distances and in very sensitive measurements, their finite speed has noticeable effects. In communicating with distant space probes, it can take minutes to hours for a message to get from Earth to the spacecraft, or vice versa. The light seen from stars left them many years ago, allowing the study of the history of the universe by looking at distant objects. The finite speed of light also limits the theoretical maximum speed of computers, since information must be sent within the computer from chip to chip. The speed of light can be used with time-of-flight measurements to measure large distances to high precision. Likewise, short distances, such as the orbital of an electron, can now also be measured with precision. With this technological breakthrough and method, and with still further refinements using the Antiproton Decelerator and a dense-electron-gas orbital vacuum, we at SAKURAI produced this image.
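
Time-of-flight ranging, mentioned above, reduces to one formula: the pulse travels out and back, so the one-way distance is c·t/2. A minimal sketch (the lunar round-trip time is a standard textbook figure, not a value from the text):

```python
# Time-of-flight ranging: a pulse travels to a target and back, so the
# one-way distance is d = c * t / 2.

C = 299_792_458  # speed of light in vacuum, m/s

def distance_from_round_trip(t_seconds):
    """One-way distance to a target given the round-trip pulse time."""
    return C * t_seconds / 2

# Lunar laser ranging: a round trip of about 2.56 s gives ~384,000 km.
d = distance_from_round_trip(2.56)
print(f"{d/1000:.0f} km")
```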


Ole Rømer first demonstrated in 1676 that light travels at a finite speed (as opposed to instantaneously) by studying the apparent motion of Jupiter's moon Io. In 1865, James Clerk Maxwell proposed that light was an electromagnetic wave, and therefore traveled at the speed c appearing in his theory of electromagnetism. In 1905, Albert Einstein postulated that the speed of light c with respect to any inertial frame is a constant and is independent of the motion of the light source. He explored the consequences of that postulate by deriving the special theory of relativity and in doing so showed that the parameter c had relevance outside of the context of light and electromagnetism.


After centuries of increasingly precise measurements, in 1975 the speed of light was known to be 299792458 m/s with a measurement uncertainty of 4 parts per billion. In 1983, the metre was redefined in the International System of Units (SI) as the distance travelled by light in vacuum in 1/299792458 of a second. As a result, the numerical value of c in metres per second is now fixed exactly by the definition of the metre.
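
The 1983 redefinition can be illustrated directly: light covers one metre in exactly 1/299,792,458 of a second, about 3.3 nanoseconds:

```python
# Since 1983 the metre is defined so that light travels exactly one metre
# in 1/299,792,458 of a second; c is therefore exact, not measured.

C = 299_792_458  # m/s, exact by definition

time_for_one_metre = 1 / C                  # seconds
print(f"{time_for_one_metre * 1e9:.4f} ns") # ≈ 3.3356 ns
```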


The next significant step forward

In 1931 the physicist Paul Dirac proposed that every particle of matter should have an antimatter counterpart. But shortly after the big bang, most of the antimatter disappeared, leaving behind the tiny portion of matter that constitutes the universe we live in today. What happened to swing the balance away from antimatter is one of the greatest puzzles in physics.


Astronomers search for antimatter in space, but it is hard to come by on Earth. In order to study it, physicists have to make it themselves. And because antimatter annihilates in a flash of energy when it interacts with regular matter, storing it presents a challenge.


Creating Antihydrogen and Antihelium


The antimatter counterparts of the simplest atoms, hydrogen and helium, are the neutral antihydrogen and antihelium atoms. Antihydrogen consists of a positively charged positron orbiting a negatively charged antiproton; antihelium doubles this, with two positrons orbiting an antinucleus.


In 1995, physicists at CERN and associate specialists at SAKURAI announced that they had successfully created the first atoms of antihydrogen and antihelium. The antiparticles were highly energetic; each one travelled at nearly the speed of light over a path of 10 metres and then annihilated with ordinary matter after about forty billionths of a second. While creating the antihydrogen and antihelium was a major achievement, the atoms were too energetic, too "hot", to lend themselves to easy study.


In order to understand antimatter atoms, CERN / SAKURAI physicists needed more time to interact with them. So they developed techniques to capture and trap antihydrogen and antihelium for longer periods. The Antiproton Decelerator, established at CERN in the late 1990s, began providing slower-moving, lower-energy antiprotons for antimatter experiments such as ATHENA, ATRAP and ALPHA.


In these experiments, electric and magnetic fields hold the antiprotons separate from positrons in a near-perfect vacuum that keeps them away from regular matter. The antiprotons pass through a dense electron gas, which slows them down further.


When the energy is low enough, ALPHA physicists use the electric potential to nudge the antiprotons into a cloud of positrons suspended within the vacuum. The two types of charged antiparticles combine into low-energy antihydrogen atoms. Since antihydrogen atoms don’t have an electric charge, the electric field can no longer hold them in place. So instead, two superconducting magnets generate a strong magnetic field that takes advantage of the antihydrogen’s magnetic properties. If the antihydrogen atoms have a low enough energy, they can stay in this magnetic “bottle” for a long time.


Currently the only way to know whether antimatter is actually trapped is to let it annihilate with regular matter. When the magnets are switched off, the antihydrogen atoms escape their trap and quickly annihilate with the sides of the trap. Silicon detectors pick up the energetic flare to pinpoint the antiatom's position. Only then can the physicists be sure that they have trapped antihydrogen or antihelium.


Trapping Antimatter at CERN / SAKURAI


In June 2011, ALPHA reported that it had succeeded in trapping antimatter atoms for over 16 minutes. On the scale of atomic lifetimes, this was a very long time, long enough to begin to study their properties in detail. By precise comparisons of hydrogen/helium and antihydrogen/antihelium, several experimental groups hope to study the properties of antihydrogen and antihelium and see whether they have the same spectral lines as hydrogen and helium. One group, AEGIS, will even attempt to measure g, the gravitational acceleration constant, as experienced by antihydrogen and antihelium atoms. Side-by-side SAKURAI imaging comparisons and results have, once and for all, shown an exact likeness of antimatter and matter events. Any discrepancies would have required the events to take place at the same location and within the same time experience; fundamental quantum laws of time and space predicate this. Here the predecessor had a bit more time in comparison.



The longer these experiments can trap antihydrogen, the more accurately they can measure it, and physicists will be closer to demystifying antimatter.


Accelerating Quantum Science for The World of Tomorrow - SAKURAI -




© Copyright CERN / SAKURAI 2016



INTRODUCTION and The Bohr Model

The most important properties of atomic and molecular structure may be exemplified using a simplified picture of an atom that is called the Bohr Model. This model was proposed by Niels Bohr in 1913; it is not completely correct, but it has many features that are approximately correct and it is sufficient for much of our discussion. The correct theory of the atom is called quantum mechanics; the Bohr Model is an approximation to quantum mechanics that has the virtue of being much simpler.


A Planetary Model of the Atom

The Bohr Model is probably familiar as the "planetary model" of the atom illustrated in the adjacent figure that, for example, is used as a symbol for atomic energy (a bit of a misnomer, since the energy in "atomic energy" is actually the energy of the nucleus, rather than the entire atom). In the Bohr Model the neutrons and protons (symbolized by red and blue balls in the adjacent image) occupy a dense central region called the nucleus, and the electrons orbit the nucleus much like planets orbiting the Sun (but the orbits are not confined to a plane as is approximately true in the Solar System). The adjacent image is not to scale, since in the realistic case the radius of the nucleus is about 100,000 times smaller than the radius of the entire atom, and as far as we can tell electrons are point particles without a physical extent.


This similarity between a planetary model and the Bohr Model of the atom ultimately arises because the attractive gravitational force in a solar system and the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons in an atom are mathematically of the same form. (The form is the same, but the intrinsic strength of the Coulomb interaction is much larger than that of the gravitational interaction; in addition, there are positive and negative electrical charges so the Coulomb interaction can be either attractive or repulsive, but gravitation is always attractive in our present Universe.)


The Orbits Are Quantized

The basic feature of quantum mechanics that is incorporated in the Bohr Model and that is completely different from the analogous planetary model is that the energy of the particles in the Bohr atom is restricted to certain discrete values. One says that the energy is quantized. This means that only certain orbits with certain radii are allowed; orbits in between simply don't exist.


Take, for example, the quantized energy levels of the hydrogen atom; these levels are labeled by an integer n that is called a quantum number. The lowest energy state is generally termed the ground state. The states with successively more energy than the ground state are called the first excited state, the second excited state, and so on. Beyond an energy called the ionization potential, the single electron of the hydrogen atom is no longer bound to the atom. Then the energy levels form a continuum. In the case of hydrogen, this continuum starts at 13.6 eV above the ground state ("eV" stands for "electron-volt", a common unit of energy in atomic physics).
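
The hydrogen level structure described above follows one formula, E_n = −13.6 eV / n², with the continuum starting 13.6 eV above the ground state. A minimal sketch:

```python
# Bohr-model energy levels of hydrogen: E_n = -13.6 eV / n^2.
# Negative energy means the electron is bound; the ionization potential
# is the energy needed to go from n=1 to the continuum (E = 0).

RYDBERG_EV = 13.6  # ground-state binding energy of hydrogen, eV

def energy_level(n):
    """Energy of the n-th Bohr level, in eV (negative = bound)."""
    return -RYDBERG_EV / n**2

for n in range(1, 5):
    print(f"n={n}: {energy_level(n):+.2f} eV")

# Ionizing from the ground state requires +13.6 eV.
print(f"ionization potential: {-energy_level(1):.1f} eV")
```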



Although this behavior may seem strange to our minds that are trained from birth by watching phenomena in the macroscopic world, this is the way things behave in the strange world of the quantum that holds sway at the atomic level.


One of the implications of these quantized energy states is that only certain photon energies are allowed when electrons jump down from higher levels to lower levels, producing the hydrogen spectrum. The Bohr model successfully predicted the energies for the hydrogen atom, but it had significant failures that were corrected by solving the Schrödinger equation for the hydrogen atom.


Atomic Excitation and De-excitation

Atoms can make transitions between the orbits allowed by quantum mechanics by absorbing or emitting exactly the energy difference between the orbits. SAKURAI captures for the first time the spectrum of an atomic excitation state caused by absorption of a photon and an atomic de-excitation caused by emission of a photon.


Excitation by absorption of light and de-excitation by emission of light

In each case the wavelength of the emitted or absorbed light is exactly such that the photon carries the energy difference between the two orbits. This energy may be calculated by dividing the product of the Planck constant and the speed of light, hc, by the wavelength of the light. Thus, an atom can absorb or emit only certain discrete wavelengths (or, equivalently, frequencies or energies).
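
The rule "photon energy = hc / wavelength" can be turned around to recover a spectral line. Below, the hydrogen n=3 → n=2 energy difference from the Bohr formula gives the Balmer Hα line near 656 nm (constants are the exact SI values):

```python
# Photon energy from wavelength: E = h*c / lambda. For a transition the
# photon carries exactly the level difference, e.g. the hydrogen
# n=3 -> n=2 line (Balmer H-alpha) at about 656 nm.

H  = 6.626_070_15e-34   # Planck constant, J*s (exact, SI 2019)
C  = 299_792_458        # speed of light, m/s (exact)
EV = 1.602_176_634e-19  # joules per electron-volt (exact)

def photon_energy_ev(wavelength_m):
    """Photon energy in eV for a given wavelength in metres."""
    return H * C / wavelength_m / EV

# Level difference for n=3 -> n=2 in hydrogen: 13.6*(1/4 - 1/9) ~ 1.89 eV
delta_e = 13.6 * (1/4 - 1/9)
wavelength = H * C / (delta_e * EV)    # invert E = hc/lambda
print(f"H-alpha: {wavelength*1e9:.0f} nm")  # ≈ 656 nm
```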


The speed-of-light electron orbital system disperses multiple weights of mass simultaneously, including the zero point, thus producing a full spectrum of colors at all points and levels, and also emitting multiple point energies over mass ranges simultaneously at multiple boundary locations and multiple vectors... and the science is continuing.





Introduction Part 2 - The Basics of Quantum Mechanics/Waves and Modes


Many misconceptions about quantum mechanics may be avoided if some concepts of field theory and quantum field theory, like "normal mode" and "occupation", are introduced right from the start. They are needed for understanding the deepest and most interesting ideas of quantum mechanics anyway.



Waves and modes

A wave is a propagating disturbance in a continuous medium or a physical field. By adding waves or multiplying their amplitudes by a scale factor, superpositions of waves are formed. Waves must satisfy the superposition principle, which states that they can go through each other without disturbing each other. It is as if there were two superimposed realities, each carrying only one wave and unaware of the other (that is what is assumed when the superposition principle is used mathematically in the wave equations).


Examples are acoustic waves and electromagnetic waves (light), but also electronic orbitals, as explained below.

A standing wave is considered a one-dimensional concept by many students, because of the examples (waves on a spring or on a string) usually provided. In reality, a standing wave is a synchronous oscillation of all parts of an extended object at a definite frequency, in which the oscillation profile (in particular the nodes and the points of maximal oscillation amplitude) doesn't change. This is also called a normal mode of oscillation. The profile can be made visible in Chladni's figures and in vibrational holography. In unconfined systems, i.e. systems without reflecting walls or attractive potentials, traveling waves may also be chosen as normal modes of oscillation (see boundary conditions).


A phase shift of a normal mode of oscillation is a time shift scaled as an angle in terms of the oscillation period; e.g. phase shifts by 90° and 180° (i.e. π/2 and π) are time shifts by a quarter and a half of the oscillation period, respectively. This operation is introduced as another operation allowed in forming superpositions of waves (mathematically, it is covered by the phase factors of the complex numbers scaling the waves).
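
The phase-shift operation has a compact numerical form: multiplying a mode's complex amplitude by exp(iφ). A quick sketch:

```python
# A phase shift of a mode is a time shift expressed as an angle: shifting
# by phi multiplies the complex amplitude by exp(i*phi). 90 degrees delays
# the wave by a quarter period, 180 degrees by half a period (a sign flip).

import cmath
import math

def phase_factor(degrees):
    """Complex factor implementing a phase shift of the given angle."""
    return cmath.exp(1j * math.radians(degrees))

amp = 1.0 + 0.0j                   # unit amplitude
quarter = amp * phase_factor(90)   # ≈ 1j  (quarter-period delay)
half    = amp * phase_factor(180)  # ≈ -1  (half-period delay)
print(quarter, half)
```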



Helmholtz ran an experiment which clearly showed the physical reality of resonances in a box (he predicted and detected the eigenfrequencies).


SAKURAI's experiments with standing and propagating waves: electromagnetic and electronic modes


Planck was the first to suggest that the electromagnetic modes are not excited continuously but discretely, by energy quanta hν proportional to the frequency ν. By this assumption, he could explain why the high-frequency modes remain unexcited in a thermal light source: the thermal exchange energy k_B·T is just too small to provide an energy quantum hν if ν is too large. Classical physics predicts that all modes of oscillation (2 degrees of freedom each), regardless of their frequency, carry the average energy k_B·T, which amounts to an infinite total energy (the so-called ultraviolet catastrophe). This idea of energy quanta was the historical basis for the concept of occupations of modes, designated as light quanta by Einstein and denoted as photons since the introduction of this term in 1926 by Gilbert N. Lewis.
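
Planck's argument can be made quantitative with the Bose–Einstein mean occupation of a mode, 1/(exp(hν/k_B·T) − 1): at a thermal source's temperature, low-frequency modes hold many quanta while ultraviolet modes are essentially empty. A sketch (the 5800 K figure is the approximate solar surface temperature, chosen here for illustration):

```python
# Planck's cutoff: a mode of frequency nu is thermally excited only when
# the quantum h*nu is comparable to or smaller than kB*T. At high nu the
# exponential suppresses the occupation, avoiding the ultraviolet
# catastrophe of classical equipartition.

import math

H  = 6.626_070_15e-34  # Planck constant, J*s
KB = 1.380_649e-23     # Boltzmann constant, J/K

def mean_occupation(nu, T):
    """Mean photon number of a thermal mode at frequency nu, temperature T."""
    return 1.0 / (math.exp(H * nu / (KB * T)) - 1.0)

T = 5800.0  # roughly the Sun's surface temperature, K
for nu in (1e13, 1e14, 1e15):  # infrared -> near-visible -> ultraviolet
    print(f"nu = {nu:.0e} Hz: <n> = {mean_occupation(nu, T):.3e}")
```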


An electron beam (accelerated in a cathode-ray tube, similar to those in old TV sets) is diffracted in a crystal, and diffraction patterns analogous to the diffraction of monochromatic light by a diffraction grating, or of X-rays by crystals, are observed on the screen. This observation proved de Broglie's idea that not only light but also electrons propagate and get diffracted like waves. In the attracting potential of the nucleus, this wave is confined like the acoustic wave in a guitar body. That is why in both cases a standing wave (a normal mode of oscillation) forms. An electron is an occupation of such a mode.
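
The wavelength behind this diffraction experiment is the de Broglie wavelength λ = h/p. For an electron accelerated through a potential V, non-relativistically p = sqrt(2·m_e·e·V); at cathode-ray-tube voltages this lands near crystal lattice spacings (the 10 kV value below is an illustrative assumption):

```python
# De Broglie wavelength lambda = h / (m*v) for an electron accelerated
# through a potential of a few kV, as in a cathode-ray tube. The result
# is of the order of crystal lattice spacings, which is why crystals
# diffract electron beams.

import math

H  = 6.626_070_15e-34   # Planck constant, J*s
ME = 9.109_383_7e-31    # electron rest mass, kg
E  = 1.602_176_634e-19  # elementary charge, C

def de_broglie_wavelength(volts):
    """Non-relativistic de Broglie wavelength after acceleration, metres."""
    momentum = math.sqrt(2 * ME * E * volts)
    return H / momentum

print(f"{de_broglie_wavelength(10_000) * 1e12:.1f} pm")  # ~12 pm at 10 kV
```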


An electronic orbital is a normal mode of oscillation of the electronic quantum field, very similar to a light mode in an optical cavity being a normal mode of oscillation of the electromagnetic field. The electron is said to be an occupation of an orbital. This is the main new idea in quantum mechanics, and it is forced upon us by observations of the states of electrons in multielectron atoms. Certain fields, like the electronic quantum field, are observed to allow each of their normal modes of oscillation to be excited only once at a given time; such fields are called fermionic. If you have more occupations to place in such a quantum field, you must choose other modes (the spin degree of freedom is included in the modes), as is the case in a carbon atom, for example. Usually the lower-energy (= lower-frequency) modes are favoured; if they are already occupied, higher-energy modes must be chosen. In the case of light, the idea that a photon is an occupation of an electromagnetic mode was found much earlier by Planck and Einstein, as mentioned above.


Processes and particles

All processes in nature can be reduced to the isolated time evolution of modes and to (superpositions of) reshufflings of occupations, as described in Feynman diagrams. (Since the isolated time evolution of decoupled modes is trivial, it is sometimes eliminated by a mathematical redefinition, which in turn creates a time dependence in the reshuffling operations; this is called Dirac's interaction picture, in which all processes are reduced to (redefined) reshufflings of occupations.) For example, in the emission of a photon by an electron changing its state, the occupation of one electronic mode is moved to another electronic mode of lower frequency, and an occupation of an electromagnetic mode (whose frequency is the difference between the frequencies of the two electronic modes) is created.


Electrons and photons become very similar in quantum theory, but one main difference remains: electronic modes cannot be excited/occupied more than once (= Pauli exclusion principle) while photonic/electromagnetic modes can and even prefer to do so (= stimulated emission).


This property of electronic modes and photonic modes is called fermionic and bosonic, respectively. Two photons are indistinguishable and two electrons are also indistinguishable, because in both cases, they are only occupations of modes: all that matters is which modes are occupied. The order of the occupations is irrelevant except for the fact that in odd permutations of fermionic occupations, a negative sign is introduced in the amplitude.


Of course, there are other differences between electrons and photons:


The electron carries an electric charge and a rest mass while the photon doesn't.

In physical processes (see the Feynman diagrams), a single photon may be created, while an electron may not be created without at the same time removing some other fermionic particle or creating some fermionic antiparticle. This is due to the conservation of charge.



Mode numbers, Observables and eigenmodes


The system of modes used to describe the waves can be chosen at will. Any arbitrary wave can be decomposed into contributions from each mode in the chosen system. For the mathematically inclined: the situation is analogous to a vector being decomposed into components in a chosen coordinate system. Decoupled modes or, as an approximation, weakly coupled modes are particularly convenient if you want to describe the evolution of the system in time, because each mode evolves independently of the others and you can just add up the time evolutions. In many situations, it is sufficient to consider less complicated, weakly coupled modes and describe the weak coupling as a perturbation.


In every system of modes, you must choose some (continuous or discrete) numbering (called "quantum numbers") for the modes in the system. In Chladni's figures, you can just count the number of nodal lines of the standing waves in the different space directions in order to get a numbering, as long as it is unique. For decoupled modes, the energy or, equivalently, the frequency might be a good idea, but usually you need further numbers to distinguish different modes having the same energy/frequency (the situation referred to as degenerate energy levels). Usually these additional numbers refer to the symmetry of the modes.

Plane waves, for example (they are decoupled in spatially homogeneous situations), can be characterized by the fact that the only result of shifting (translating) them spatially is a phase shift in their oscillation. The phase shifts corresponding to unit translations in the three space directions provide a good numbering for these modes; they are called the wavevector or, equivalently, the momentum of the mode.

Spherical waves with an angular dependence according to the spherical harmonics functions (they are decoupled in spherically symmetric situations) are similarly characterized by the fact that the only result of rotating them around the z-axis is a phase shift in their oscillation. The phase shift corresponding to a rotation by a unit angle is part of a good numbering for these modes; it is called the magnetic quantum number m (it must be an integer, because a rotation by 360° mustn't have any effect) or, equivalently, the z-component of the orbital angular momentum.

If you consider sharp wavepackets as a system of modes, the position of the wavepacket is a good numbering for the system. In crystallography, the modes are usually numbered by their transformation behaviour (called a group representation) under symmetry operations of the crystal; see also symmetry group and crystal system.
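
The plane-wave statement above, that translation produces only a phase shift, is a one-line identity, exp(ik(x+a)) = exp(ika)·exp(ikx), and is easy to verify numerically (the values of k, a and x below are arbitrary):

```python
# Plane waves are labelled by their wavevector: translating exp(i*k*x)
# by a distance a only multiplies it by the phase factor exp(i*k*a),
# so k (the momentum) is a good mode number for translation-symmetric
# situations.

import cmath

def plane_wave(k, x):
    """Complex amplitude of a plane wave with wavevector k at position x."""
    return cmath.exp(1j * k * x)

k, a, x = 2.0, 0.5, 1.3
shifted  = plane_wave(k, x + a)                  # translate the wave
factored = cmath.exp(1j * k * a) * plane_wave(k, x)  # phase factor instead
print(abs(shifted - factored))  # zero up to rounding
```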


The mode numbers thus often refer to physical quantities, called observables, characterizing the modes. For each mode number, you can introduce a mathematical operation, called an operator, that just multiplies a given mode by the mode number value of this mode. This is possible as long as you have chosen a mode system that actually uses and is characterized by the mode number of the operator. Such a system is called a system of eigenmodes, or eigenstates: sharp wavepackets are not eigenmodes of the momentum operator, but they are eigenmodes of the position operator; spherical harmonics are eigenmodes of the magnetic quantum number; decoupled modes are eigenmodes of the energy operator, and so on. If you have a superposition of several modes, you just apply the operator to each contribution and add up the results. If you choose a different mode system that doesn't use the mode number corresponding to the operator, you just decompose the given modes into eigenmodes and again add up the results of the operator acting on the contributions.

So if you have a superposition of several eigenmodes, say a superposition of modes with different frequencies, then you have contributions of different values of the observable, in this case the energy. The superposition is then said to have an indefinite value for the observable. For example, in the tone of a piano note there is a superposition of the fundamental frequency and the higher harmonics, which are multiples of the fundamental frequency. The contributions in the superposition are usually not equally large; e.g. in the piano note the very high harmonics don't contribute much. Quantitatively, this is characterized by the amplitudes of the individual contributions. If there are only contributions of a single mode number value, the superposition is said to have a definite or sharp value.
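
The picture "an operator multiplies each eigenmode by its mode number" can be sketched with a toy superposition stored as a mapping from mode number to amplitude (the piano-note frequencies below are illustrative, not from the text):

```python
# An observable's operator multiplies each eigenmode by its mode number.
# Representing a superposition as {mode number: amplitude}, applying the
# operator is a per-mode multiplication; a state is "sharp" (definite)
# when only one mode number contributes.

def apply_operator(superposition):
    """Multiply each eigenmode amplitude by its mode number (eigenvalue)."""
    return {value: value * amp for value, amp in superposition.items()}

def is_sharp(superposition):
    """A definite (sharp) value means only one mode number contributes."""
    return sum(1 for amp in superposition.values() if amp != 0) == 1

# A piano-like superposition: fundamental plus weaker harmonics (Hz)
note = {440.0: 1.0, 880.0: 0.5, 1320.0: 0.2}
print(apply_operator(note))
print(is_sharp(note), is_sharp({440.0: 1.0}))  # indefinite vs sharp
```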



The Basics of Wave-Particle Duality

If you do a position measurement, the result is the occupation of a very sharp wavepacket that is an eigenmode of the position operator. These sharp wavepackets look like pointlike objects; they are strongly coupled to each other, which means that they soon spread out.




In measurements of such a mode number in a given situation, the result is an eigenmode of the mode number, the eigenmode being chosen at random from the contributions in the given superposition. All the other contributions are supposedly eradicated in the measurement; this is called the wave function collapse, and some features of this process are questionable and disputed. The probability of a certain eigenmode being chosen is equal to the absolute square of its amplitude; this is called Born's probability law. This is the reason why the amplitudes of modes in a superposition are called "probability amplitudes" in quantum mechanics. The mode number value of the resulting eigenmode is the result of the measurement of the observable. Of course, if you have a sharp value for the observable before the measurement, nothing is changed by the measurement and the result is certain. This picture is called the Copenhagen interpretation. A different explanation of the measurement process is given by Everett's many-worlds theory; it doesn't involve any wave function collapse. Instead, a superposition of combinations of a mode of the measured system and a mode of the measuring apparatus (an entangled state) is formed, and the further time evolutions of these superposition components are independent of each other (this is called "many worlds").
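
Born's probability law and the random choice of an eigenmode can be simulated directly; the state below, with amplitudes √3/2 and 1/2, collapses to "ground" with probability 3/4 (the labels and amplitudes are illustrative):

```python
# Born's probability law: the chance of collapsing onto an eigenmode is
# the absolute square of its (normalized) amplitude.

import random

def born_probabilities(amplitudes):
    """Map eigenmode label -> probability |amplitude|^2, normalized."""
    norm = sum(abs(a) ** 2 for a in amplitudes.values())
    return {mode: abs(a) ** 2 / norm for mode, a in amplitudes.items()}

def measure(amplitudes, rng=random):
    """Simulate one measurement: pick an eigenmode with Born weights."""
    probs = born_probabilities(amplitudes)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

state = {"ground": 3 ** 0.5 / 2, "excited": 0.5}  # probabilities 3/4, 1/4
print(born_probabilities(state))
print(measure(state))
```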


As an example: a sharp wavepacket is an eigenmode of the position observable, so the result of measurements of the position of such a wavepacket is certain. On the other hand, if you decompose such a wavepacket into contributions of plane waves, i.e. eigenmodes of the wavevector or momentum observable, you get all kinds of contributions of modes with many different momenta, and the results of momentum measurements will be spread accordingly. Intuitively, this can be understood by taking a closer look at a sharp or very narrow wavepacket: since there are only a few spatial oscillations in the wavepacket, only a very imprecise value for the wavevector can be read off (for the mathematically inclined reader: this is a common behaviour of Fourier transforms, the amplitudes of the superposition in the momentum mode system being the Fourier transform of the amplitudes of the superposition in the position mode system). So in such a state of definite position, the momentum is very indefinite. The same is true the other way round: the more definite the momentum is in your chosen superposition, the less sharp the position will be; this is called Heisenberg's uncertainty relation.
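
The Fourier-transform remark can be checked numerically: sampling a Gaussian packet exp(−x²/2σ²), computing its Fourier sum by brute force, and measuring the RMS widths of |ψ(x)|² and |φ(k)|² gives Δx = σ/√2 and Δk = 1/(σ√2), so the product stays at 1/2 however narrow the packet is made (the grid sizes below are illustrative choices):

```python
# Fourier view of the uncertainty relation: a Gaussian wavepacket of
# spatial width sigma has momentum-space width 1/sigma, so narrowing the
# packet in x broadens it in k and the width product stays constant.

import cmath
import math

def gaussian_packet(sigma, xs):
    """Sampled amplitudes of exp(-x^2 / (2 sigma^2)) on the grid xs."""
    return [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]

def spread(values, coords):
    """RMS width of |values|^2 over the coordinate grid."""
    weights = [abs(v) ** 2 for v in values]
    total = sum(weights)
    mean = sum(w * c for w, c in zip(weights, coords)) / total
    var = sum(w * (c - mean) ** 2 for w, c in zip(weights, coords)) / total
    return math.sqrt(var)

def fourier(values, xs, ks):
    """Direct (slow) Fourier sum of sampled amplitudes."""
    dx = xs[1] - xs[0]
    return [sum(v * cmath.exp(-1j * k * x) for v, x in zip(values, xs)) * dx
            for k in ks]

xs = [i * 0.05 - 10 for i in range(401)]  # position grid, -10 .. 10
ks = [i * 0.05 - 10 for i in range(401)]  # wavevector grid, -10 .. 10
for sigma in (2.0, 0.5):
    psi = gaussian_packet(sigma, xs)
    phi = fourier(psi, xs, ks)
    dx, dk = spread(psi, xs), spread(phi, ks)
    print(f"sigma={sigma}: dx={dx:.3f}, dk={dk:.3f}, product={dx*dk:.3f}")
```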


Two different mode numbers (and the corresponding operators and observables) that both occur as characteristic features in the same mode system, e.g. the number of nodal lines in one of Chladni's figures in x direction and the number of nodal lines in y-direction or the different position components in a position eigenmode system, are said to commute or be compatible with each other (mathematically, this means that the order of the product of the two corresponding operators doesn't matter, they may be commuted). The position and the momentum are non-commuting mode numbers, because you cannot attribute a definite momentum to a position eigenmode, as stated above. So there is no mode system where both the position and the momentum (referring to the same space direction) are used as mode numbers.



The Schrödinger equation, the Dirac equation etc.

In acoustics, the direction of vibration (called polarization), the speed of sound, and the wave impedance of the medium in which the sound propagates are important for calculating the frequency and appearance of modes, as seen in Chladni's figures. The same is true for electronic or photonic/electromagnetic modes: in order to calculate the modes (and their frequencies or time evolution) exposed to potentials that attract or repel the waves or, equivalently, exposed to a change in refractive index and wave impedance, or exposed to magnetic fields, there are several equations depending on the polarization features of the modes:



Electronic modes (their polarization features are described by spin 1/2) are calculated by the Dirac equation or, to a very good approximation in cases where the theory of relativity is irrelevant, by the Schrödinger equation and the Pauli equation.

Photonic/electromagnetic modes (polarization: spin 1) are calculated by Maxwell's equations (so the 19th century already found the first quantum-mechanical equation, which is why it is much easier to step from electromagnetic theory to quantum mechanics than from point mechanics).

Modes of Spin 0 would be calculated by the Klein-Gordon equation.





It is much easier and much more physical to imagine the electron in the atom to be not some tiny point jumping from place to place or orbiting around (there are no orbits, there are orbitals), but to imagine the electron being an occupation of an extended orbital and an orbital being a vibrating wave confined to the neighbourhood of the nucleus by its attracting force. That's why Chladni's figures of acoustics and the normal modes of electromagnetic waves in a resonator are such a good analogy for the orbital pictures in quantum physics. Quantum mechanics is a lot less weird if you see this analogy. The step from electromagnetic theory (or acoustics) to quantum theory is much easier than the step from point mechanics to quantum theory, because in electromagnetics you already deal with waves and modes of oscillation and solve eigenvalue equations in order to find the modes. You just have to treat a single electron like a wave, just in the same way as light is treated in classical electromagnetics.


In this picture, the only difference between classical physics and quantum physics is that in classical physics you can excite the modes of oscillation to a continuous degree, called the classical amplitude, while in quantum physics the modes are "occupied" discretely. Fermionic modes can be occupied only once at a given time, while bosonic modes can be occupied several times at once. Particles are just occupations of modes, no more, no less. As there are superpositions of modes in classical physics, in quantum mechanics you get quantum superpositions of occupations of modes, and the scaling and phase-shifting factors are called (quantum) amplitudes. In a carbon atom, for example, you have a combination of occupations of 6 electronic modes of low energy (i.e. frequency). Entangled states are just superpositions of combinations of occupations of modes. Even the states of quantum fields can be completely described in this way (except for hypothetical topological defects).


As you can choose different kinds of modes in acoustics and electromagnetics (for example plane waves, spherical harmonics or small wave packets), you can do so in quantum mechanics. The modes chosen will not always be decoupled; for example, if you choose plane waves as the system of acoustic modes in the resonance body of a guitar, you will get reflections at the walls that scatter modes into different modes, i.e. you have coupled oscillators, and you have to solve a coupled system of linear equations in order to describe the system. The same is done in quantum mechanics: different systems of eigenfunctions are just a new name for the same concept. Energy eigenfunctions are decoupled modes, while eigenfunctions of the position operator (delta-like wavepackets) or eigenfunctions of the angular momentum operator in a non-spherically-symmetric system are usually strongly coupled.


What happens in a measurement depends on the interpretation: In the Copenhagen interpretation you need to postulate a collapse of the wavefunction to some eigenmode of the measurement operator, while in Everett's Many-worlds theory an entangled state, i.e. a superposition of occupations of modes of the observed system and the observing measurement apparatus, is formed. - SAKURAI


Uploaded on June 3, 2016