Pure Derivation of the Exact Fine-Structure Constant as a Ratio of Two Inexact Metric Constants

String theorists at the July 2000 conference were asked what mysteries remain to be revealed in the 21st century. Participants were invited to help formulate the ten most important unsolved problems in fundamental physics, which were ultimately selected and ranked by a distinguished panel of David Gross, Edward Witten and Michael Duff. No question was more valuable than the first two problems, posed respectively by Gross and Witten: #1: Are all the dimensionless (measurable) parameters that characterize the physical universe calculable in principle, or are some simply determined by incalculable historical or quantum-mechanical accidents? #2: How can quantum gravity help explain the origin of the universe?

A newspaper article on these timeless mysteries offered some interesting comments on question #1. Perhaps Einstein did, in fact, "put it more sharply: Did God have a choice in creating the universe?", which also encapsulates dilemma #2. While the Eternal certainly 'may' have had a 'choice' at Creation, the following arguments will conclude that the answer to Einstein's question is an emphatic "No": precise fundamental physical parameters are demonstrably calculable within a unique dimensionless universal system that naturally embodies a literal "Monolith."

The article also went on to ask whether the speed of light, Planck's constant and the electric charge are determined arbitrarily – "or do the values have to be what they are because of some deep and hidden logic?" These kinds of questions come to a head in a riddle involving a mysterious number called alpha. If you square the charge of the electron and then divide it by the speed of light times Planck's ('reduced') constant (multiplied by 4π times the permittivity of the vacuum), all the metric units of mass, time and distance cancel out, producing the so-called "pure number" alpha, which is a little more than 1/137. But why isn't it precisely 1/137, or some other value altogether? Even mystics have tried in vain to explain why.
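The cancellation just described can be sketched in a few lines of Python. This is only an illustration, assuming the 2006 CODATA values quoted later in this article; the variable names are mine.

```python
# Sketch of the cancellation described above, using the 2006 CODATA
# values quoted later in this article (variable names are mine).
import math

c = 299792458.0           # speed of light, m/s (exact by definition)
hbar = 1.054571628e-34    # reduced Planck constant, J*s
e = 1.602176487e-19       # elementary charge, C
mu0 = 4 * math.pi * 1e-7  # vacuum permeability (pre-2019 SI, exact)
eps0 = 1 / (mu0 * c**2)   # vacuum permittivity, F/m

# alpha = e^2 / (4*pi*eps0*hbar*c): every metric unit of mass, time
# and distance cancels, leaving a pure number close to 1/137.036.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # a little more than 137
```

Note that because ε0 and c were fixed by definition in the pre-2019 SI, the only experimentally uncertain inputs here are h-bar and e, which is exactly the point developed below.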

Which is to say that while constants such as the mass of a fundamental particle can be expressed with somewhat more precision as dimensionless ratios relative to the Planck scale, or to some known or available unit of mass, the inverse of the electromagnetic coupling constant alpha is exceptional in being inherently dimensionless: a pure 'fine-structure number' of about 137.036. On the other hand, assuming a single, invariantly discrete or exact fine-structure number exists as a "literal constant", its value has yet to be confirmed empirically, since it is measured as a ratio of two inexactly determinable 'metric constants': h-bar and the electric charge e (the speed of light c has been exactly defined, since the 1983 adoption of the SI convention, as an integer number of meters per second).

So while this riddle has been profoundly perplexing almost from its inception, my impression on reading this article in a morning paper was one of utter astonishment that a numerological question of invariance should receive such distinction from eminent modern authorities. For I had been vicariously obsessed with the fine-structure number in the context of my colleague A. J. Meyer's model for several years, but had come to accept its experimental determination in practice, periodically musing to no avail over the dimensionless issue. Gross's question served as a catalyst against my complacency; I recognized a unique position as the only one who could provide a categorically complete and consistent answer in the context of Meyer's principal fundamental parameter. Still, my pretentious instincts led me into two months of stupid intellectual posturing until I sensibly repeated a simple procedure I had explored a few years earlier. I merely checked the result against the CODATA 1998–2000 value of alpha, and the following solution hit immediately with full heuristic force.

For the fine-structure ratio effectively quantizes (via h-bar) the electromagnetic coupling between a discrete unit of electric charge (e) and a photon of light, in the same sense that the integer 241 is discretely 'quantized' in contrast to the 'fractional continuum' between it and 240 or 242. One can easily see what this means by considering another integer, 203, from which we subtract the base-2 exponential of the square of 2π. Now add the inverse of 241 to the resulting number, and multiply the result by the natural logarithm of 2. It follows that this pure calculation of the fine-structure number equals exactly 137.0359996502301... – given here (/100) to 15 places, but calculable to any number of decimal places.

By comparison, given the experimental uncertainty in h-bar and e, the NIST evaluation wanders up or down around the '965' in the invariant sequence defined above. The following table lists the values of h-bar and e, the ratio α⁻¹ calculated from them, and NIST's actual choice of α⁻¹ for each year in their archives, as well as the 1973 CODATA recommendation, where the two-digit ± standard experimental uncertainty is given in parentheses.

year ... ħ = Nh·10⁻³⁴ J·s ....... e = Ne·10⁻¹⁹ C ...... α⁻¹ = 10⁷·ħ/(c·e²) ... NIST value ±(SD):

2006: 1.054 571 628(053) ... 1.602 176 487(040) ... 137.035 999 661 ... 137.035 999 679(094)

2002: 1.054 571 680(180) ... 1.602 176 530(140) ... 137.035 999 062 ... 137.035 999 11(46)

1998: 1.054 571 596(082) ... 1.602 176 462(063) ... 137.035 999 779 ... 137.035 999 76(50)

1986: 1.054 572 66(63) ...... 1.602 177 33(49) ...... 137.035 989 558 ... 137.035 989 5(61)

1973: 1.054 588 7(57) ....... 1.602 189 2(46) ....... 137.036 043 335 ... 137.036 04(11)
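As a consistency sketch (my own check, not part of the original analysis), the calculated α⁻¹ column can be recomputed from each year's listed ħ and e, using the pre-2019 SI relations μ0 = 4π·10⁻⁷ and ε0 = 1/(μ0c²), which reduce α⁻¹ = 4πε0ħc/e² to 10⁷·ħ/(c·e²):

```python
# Recompute the alpha^-1 column from each year's listed h-bar and e.
# Uses the (pre-2019) exact SI relations mu0 = 4*pi*1e-7 and
# eps0 = 1/(mu0*c^2), so alpha^-1 = 4*pi*eps0*hbar*c/e^2 = 1e7*hbar/(c*e^2).
c = 299792458.0  # m/s, exact by definition since 1983

values = {  # year: (hbar in J*s, e in C), as listed in the table
    2006: (1.054571628e-34, 1.602176487e-19),
    2002: (1.054571680e-34, 1.602176530e-19),
    1998: (1.054571596e-34, 1.602176462e-19),
    1986: (1.054572660e-34, 1.602177330e-19),
    1973: (1.054588700e-34, 1.602189200e-19),
}

inv_alpha_by_year = {
    year: 1e7 * hbar / (c * e**2) for year, (hbar, e) in values.items()
}
for year in sorted(inv_alpha_by_year, reverse=True):
    print(year, f"{inv_alpha_by_year[year]:.6f}")
```

Running this reproduces the third column above to within the rounding of the quoted digits.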

So it seems that NIST's choice is roughly determined by the measured values for h-bar and e alone. However, as explained at http://physics.nist.gov/cuu/Constants/alpha.html, in the 1980s interest shifted toward a new approach that provides a direct determination of α⁻¹ by exploiting the quantum Hall effect, corroborated independently by both theory and experiment on the electron's magnetic-moment anomaly, thus reducing its already finer uncertainty. However, it took twenty years before an improved measurement of the electron's magnetic-moment g/2 factor was published in mid-2006, where the first estimate from this group (led by Gabrielse at Harvard) for α⁻¹ was (A:) 137.035 999 710(096) – explaining the greatly reduced uncertainty in the new NIST listing compared with that of h-bar and e. More recently, however, a numerical error was discovered in the initial (A:) QED calculation, which (in what we will call the second, B:, paper) changed the value to (B:) 137.035 999 070(098).

Although it reflects an almost identically small uncertainty, this assessment clearly falls outside the NIST value that agrees with the estimates for h-bar and the elementary charge, which are determined independently by various experiments. NIST has three years to sort this out, but in the meantime faces an embarrassing irony in that at least the '06 choices for h-bar and e appear slightly biased toward the expected fit for α⁻¹! For example, fitting the last three digits of the '06 data for h-bar and e to our pure fine-structure number produces a negligible adjustment only for e, in the ratio h628/e487.065. Had the QED error been corrected before the actual NIST publication in 2007, the pair could easily have been uniformly tuned to h626/e489, though that would call into question the consistency of the last three digits of α⁻¹ with the comparable '02 and '98 data. In any case, much greater improvements in multiple experimental designs, yielding a comparable reduction in the error of h-bar and e, will be required to settle this issue definitively.

But again, even then, no matter how 'precisely' the metric measurement is made, it still falls infinitely short of 'literal exactness', whereas our pure fine-structure number fits the current h628/e487 values quite precisely. In the former sense, I recently discovered that a mathematician named James Gilson (see http://www.maths.qmul.ac.uk/%7Ejgg/page5.html) also devised a pure numerical value, 137.0359997867..., closer to the revised 1998–2001 standard. Gilson further contends that he has calculated numerous parameters of the Standard Model, such as the dimensionless ratio between the masses of the weak-gauge Z and W bosons. But I know that he could never construct a single proof using equivalences capable of deriving the Z and/or W masses per se from the precisely confirmed masses of the heavy quarks and the Higgs field (see the referenced essay in the resource box), which in turn result from a single primordial dimensionless tautology. For it is the numerical discreteness of the fraction 1/241 that allows the construction of physically significant dimensionless equations. Taking Gilson's numerology instead, or the refined empirical value of Gabrielse et al., for the fine-structure number would destroy this discreteness, the precise self-consistency, and the very ability to write a meaningful dimensionless equation! Conversely, it is perhaps not too surprising that after I literally 'found' the integer 241 and obtained the exact fine-structure number from the resulting 'Monolith Number', it took me only about two weeks to calculate all six quark masses using real dimensionless analysis and various fine-structure relations.

But since we are now really talking no more about the fine-structure number per se than about the integer 137, the result definitively answers Gross's question. For those "dimensionless parameters that characterize the physical universe" (including alpha) are ratios between selected metric parameters that otherwise lack a single unified dimensionless mapping system from which metric parameters such as particle masses could be calculated by established equations. The 'Standard Model' provides a single set of parameters, but no means to calculate or predict any or all of them within a single system; the experimental parameters are therefore entered arbitrarily by hand.

Final irony: I am doubtless doomed to be dismissed as a 'numerologist' by 'experimentalists' who continually fail to recognize a solid empirical test for the masses of quarks, Higgs or hadrons – one that can be used to calculate exactly the most precisely known, and heaviest, mass standard in high-energy physics (the Z). So, to the contrary, foolish demons: empirical confirmation is just the final cherry the chef puts on top before presenting a 'proof of the pudding' that no sentient being could resist merely because he didn't assemble it himself, and the mess made in its place that imitates the real thing doesn't even resemble it. For the base of this pudding is made from melons I call Mumbers, which are really just numbers, pure and simple!
