The Wondrous Design and Non-random Character of Chance Events

Robert A. Herrmann

(3 Dec. 1998; last revision 9 May 2004)

Abstract:

In this article, it is shown specifically that natural system chance events as represented by theory predicted (a priori) probabilistic statements used in such realms as modern particle physics, among others, are only random relative to the restricted language of the theory that predicts such behavior. It is shown that all such "chance" natural events are related one to another by a remarkably designed, systematic and wondrous collection of equations that model how the natural laws specifically yield such natural events. A second result shows theoretically that all such "chance" behavior is caused by the application of well-defined ultralogics. These results show specifically that the fundamental underlying behavior associated with all natural systems that comprise our universe is controlled internally by processes that cannot be differentiated from those that mirror the behavior of an infinitely powerful mind.

Before I start this article, I must make an important disclosure. All the Herrmann models mentioned and the results they predict utilize a few common everyday human experiences that can be verified within a laboratory setting. Except for the use of principle (1) and the most basic mathematical axioms used throughout all of science, NO additional mathematical or physical axioms or presuppositions are required for the construction of these models. This must always be kept in mind, for there will arise the myth that the results have somehow or other been included within the axioms or presuppositions. This myth is entirely and utterly false.

From Nobel Laureate Louis de Broglie, comes the statement:

. . . the structure of the material universe has something in common with the laws that govern the workings of the human mind. (March, 1963, p. 143)

This quotation presents a rather obvious fact that should come as no surprise. What Feynman and all other materialistic scientists do during their lectures, within their textbooks, and in journal papers is to **describe**, in words, diagrams, computer images, and the like, natural laws that they claim will lead to a natural event or a change in a natural system. In this article, the term "natural law" means a description for natural-system behavior under specified conditions. For humanly comprehensible "descriptions," as broadly defined, the following is fully discussed in Herrmann (1994) and is denoted as *Principle (1)*: **How nature combines natural laws to produce the moment-to-moment "evolution" (i.e. development or changes in the behavior) of a natural system is modeled (i.e. mirrored, mimicked, imitated) by certain aspects of human mental activity.** One of the most fundamental deductive procedures that models the development of a natural system is, itself, modeled by the "consequence" operator. Such operators are discussed in more detail later in this article, and many interesting properties appear in Herrmann (1987).

In the appendix, Principle (1) is illustrated, where electromagnetic radiation is being emitted by a collection of atomic structures as the collection travels further and further into an ever-increasing gravitational potential. There appear to be 12 natural laws being combined using 9 human deductive processes. It is shown that even determining the correct structure to which this natural law applies, and producing the stated outcome, additionally requires some type of "natural law" that also mimics human mental procedures. When all of this has been accomplished and this illustrated natural law is applied to a collection of excited hydrogen atoms, the results that are observed are but "images" that display the changes in the "color" of the emitted photons. No actual descriptions of natural laws are displayed anywhere within "nature" itself. The natural laws and their regions of application are constructed by means of human mental processes. Indeed, the usual machines used to "verify," by various modes of measurement, various natural laws do not exist in "nature" until they are built by application of human thought processes. If natural laws exist at all, they must be hidden as actual objects from either direct or indirect human observation. Possibly this is what the following, attributed to Hermann Weyl, is attempting to convey in the form of a question.

Is it conceivable that immaterial factors having the nature of images, ideas, 'building plans' also intervene in the evolution of the world as a whole?

Principle (1), although clearly restrictive and completely materialistic and secular in character, is verified, relative to deductive science, by the largest amount of empirical evidence that could ever exist, since whenever a scientist predicts natural-system behavior from a collection of hypotheses, deductive human mental activity is applied. This even includes the application of principles used to gather and analyze evidence that would tend to verify a scientific hypothesis. Thus, the secular scientist is using modes of "description" to tell the world "how" nature "works," as well as describing experiments and machines to verify the claimed "how." Since no one has ever observed such "laws" as some type of object within nature itself, even those claimed "laws" describing the "how" of the "unobserved" world of particle physics, a **cautious** choice for secular science would be to admit that there is "something" going on within nature, called natural laws, that is forcing objects to behave in a predicted manner. And whatever these "things" are, they are merely being "modeled" by human modes of "description" and logical deduction. These last two sentences depict the actual character of what science terms "natural laws." [This notion as to what constitutes "natural laws" is not necessary in the "Solution to the General Grand Unification Problem . . ." (Herrmann 1994), for there the natural laws are replaced by a much more general idea - the developmental paradigm. This allows the actual solution method to be applied to infinitely many different "universes," not just the one in which we dwell.]

During his 1998 American tour, while visiting the President at the White House, Stephen Hawking stated that in a very few years scientists will be able to "describe" all the natural laws that govern the behavior of every natural system that exists within the universe and use these laws to predict correctly such behavior. Once again, Hawking emphasizes the basic philosophy of science that drives the secular scientist. It is immaterial whether or not the Hawking philosophy is correct, for the driving force behind **basic research** is that the human brain (let's call it the mind instead) is, at the least, capable of describing and comprehending all of the so-called natural laws that govern the workings of our universe. This does not alter the above remarks as to what the phrase "natural law or process" actually signifies. Let's call this Hawking philosophy *Strong Principle (2)*.

I point out that during recent history strong principle (2) was not accepted by all scientists. For example, Nobel Laureate Max Planck wrote that:

Nature does not allow herself to be exhaustively expressed in human thought. (1932, p. 2)

Of course, Max Planck is one of the founders of "Quantum Mechanics." Since strong principle (2) is not as yet established, it is better to consider (2) as but a partial statement. Thus, we modify (2) to include the word "probably," and accept that some of the present day humanly constructed theories are "probably" correct predictors for objective reality. [Philosophically, the theories themselves should be considered as probable in character (Cohen & Nagel, 1934, p. 393).] When theories are considered, they will be tacitly assumed to be among the ones that are "probably" correct predictors.

In a course on Quantum Information and Computers given at Cal. Tech. by John Preskill, we are told in the textbook he wrote for the course that:

. . . fundamentally the universe is quantum mechanical. . . . For example, clicks registered in a detector that monitors a radioactive source are described by a truly random Poisson process. In contrast, there is no place for true randomness in deterministic classical dynamics (although of course a complex (chaotic) classical system can exhibit behavior that is in practice indistinguishable from random). (Preskill, 1997, p. 4)

*Principle (3)* is the notion that **fundamental natural system behavior is probabilistic in character.**

**[A]** Clearly, Preskill requires his students to take a specific stance in the philosophy of science, a stance that cannot be scientifically verified in any manner. Although there are deterministic mathematical descriptions for behavior that cannot be distinguished from a pure statistical distribution that is claimed to be produced by the "pure" random behavior of a natural system, an individual must not accept these deterministic models as reality. I repeat that, according to Preskill and his quantum view, you must not accept these deterministic models as mirroring reality, although there is no scientific method that can distinguish such deterministic design for natural-system behavior from the claimed non-design displayed by pure random behavior. Of course, we do have the very great mystery of why there is, indeed, a Poisson mathematical design displayed when a large number of clicks is considered. But relative to the behavior of each individual click, we are to believe that there is no possible intelligent relationship between the occurrence of one click and the occurrence of the very next click. It is this apparent "non-relation" between successive clicks that appears to make them "random" in character.
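
Preskill's parenthetical remark about chaotic deterministic systems can be made concrete with a small computational sketch. The fragment below is entirely illustrative (the generator constants and click rate are my own choices, not anything from Preskill's text): a completely deterministic linear congruential generator produces inter-click waiting times whose aggregate statistics match the exponential waiting times of a "truly random" Poisson click process.

```python
import math

# A deterministic linear congruential generator: every value is fixed
# by the previous state, yet the outputs look uniformly "random".
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # value in [0, 1)

rate = 2.0       # assumed mean click rate (my choice, for illustration)
gen = lcg(seed=42)

# Inverse-transform sampling: u -> -ln(1-u)/rate yields exponential
# waiting times, the inter-click distribution of a Poisson process.
waits = [-math.log(1.0 - next(gen)) / rate for _ in range(100_000)]

mean_wait = sum(waits) / len(waits)
# The deterministic stream reproduces the Poisson-process statistics:
print(f"empirical mean wait {mean_wait:.3f} vs. theoretical {1/rate:.3f}")
```

No statistical test applied to the click stream alone can reveal that the "randomness" here is fully designed.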

First, no one can truly force you to accept this philosophy. You can study chaos theory and other mathematical areas and accept that there are deterministic and highly designed natural world processes being modeled by the mathematics and these natural world processes only give the appearance of random behavior. Of course, I suppose that you would not voice your opinion while a student in such courses at Cal. Tech. But more importantly, does the basic idea of "random" imply lack of design? Is the apparent random character of quantum mechanics the true underlying feature that describes natural system behavior at all levels of human comprehension?

The particle physicists claim that all natural system behavior is fundamentally controlled by the "invisible" and individually undetectable entities that abound within the realm of quantum physics and, especially, within a nonempty physical background called the "vacuum." The major evidence that such entities might exist in objective reality is that various theories predict that under certain hypotheses gross matter, which **can** be detected by a machine or a human sensor, will behave in a certain way. This is called **indirect evidence**. But the entities being described need only be objects within an **analog model**, a collection of fictitious entities that are used solely to aid the human mind to comprehend and describe processes that predict an experimental outcome although the entities and processes need not exist in objective reality.

Many times the subatomic entities start out as imaginary. In the original paper in which Einstein gives his model for the photoelectric effect, he called the photon an "imaginary" particle-like entity. But Richard Feynman, in his book "QED: The Strange Theory of Light and Matter" (1985) [QED = quantum electrodynamics], insists that light particles exist in objective reality. However, in this same book, in order to explain partial reflection, Feynman states, relative to the direction of an arrow that will measure a probability:

To determine the direction of each arrow, let's imagine that we have a stopwatch that can time a photon as it moves. This imaginary stopwatch has a single hand that turns around very, very rapidly. When a photon leaves a source, we start the stopwatch. As long as the photon moves, the stopwatch hand turns . . . ; when the photon ends up at the photomultiplier, we stop the watch. The hand ends up pointing in a certain direction. This is the direction we will draw the arrow. (Feynman, 1985, p. 27.)

I point out that not only is the stopwatch imaginary, at least at the present time, but the vectors, the arrows, also do not exist in objective reality. Such descriptions form but an analog model for such behavior and nothing more than that. Indeed, some physicists who describe QED properties use such phrases as "a photon is absorbed" or "emitted by an electron" without ever considering any description as to how an "electron" absorbs or emits anything. An assumed process such as the directly unobserved interaction between a photon and an electron is an essential part of this QED theory. But, notwithstanding these obvious logical faults, I will consider as another principle of modern science *Principle (4)*, **the principle of indirect verification**, as fundamental to modern research. This principle states that if assuming the existence of undetectable entities or processes yields correct predictions for the behavior of gross matter, then these entities or processes should be assumed to exist in objective reality. Of course, principle (4) immediately implies that principle (1) is empirical fact. Also note that, from a theoretical point of view, a theory **T** is often considered better than another theory if **T** uses fewer hypotheses.

Assuming principles (2) and (4), human beings describe laws and processes of nature as part of human theories and predict correctly from these theories natural-system behavior, even if all such predictions are probabilistic in character. Consequently, the following should hold. Let G be a set of hypotheses for a scientific theory. The facts are that G need not contain all of the natural laws that lead to a prediction. The set G need only contain a description for the physical entities to which these natural laws are applied. Then the human brain takes G and performs upon G
some sort of mental process that yields a prediction P. This prediction is then verified in the laboratory or, similar to the Big Bang cosmology, is **assumed** to have occurred sometime in the past. These, at present, unknown human mental processes are indicated by the **turnstile** symbol |--.
If G does not contain all of the applicable natural laws, then these laws or processes become part of the human mental process symbolized by |--.
In either case, this simple sequence of events is symbolized by

G |-- P.   [1]

In all that comes next, the fundamental hypotheses will include the previously mentioned four principles. These principles are re-stated as follows:

(1) How nature combines natural laws to produce the moment-to-moment "evolution" (i.e. development or changes in the behavior) of a natural system is modeled or mirrored by certain aspects of human mental activity.

(2) Some of the present day humanly constructed theories are "probably" correct predictors for objective reality. When theories are considered, they will be tacitly assumed to be among the ones that are "probably" correct predictors.

(3) Predictions of correct fundamental natural system behavior will always require the behavior to be associated with a statistical statement that implies that natural system behavior originates from behavior that is always probabilistic in character.

(4) Although natural entities and processes described by means of a language using "physical" terms may not be observed directly, they will be considered as existing in objective reality if their continued use yields correct predictions.

It is self-evident, according to principle (2), that [1] displays an absolutely fundamental process that occurs within the human brain. Expression [1] corresponds to principle (1) in that it displays a fundamental relationship between human mental processes and natural system behavior. Expression [1] and the next expression [2] can also be related to the **informational content** of a theory, and Preskill claims that information is "physical" (Preskill, 1997, p. 4) in character. Thus Preskill and many others, myself included, claim that there exists "something" within the natural world, called information, that cannot be observed either directly or indirectly. However, information is a notion that corresponds to natural laws or processes in the sense that it also is modeled by "language" and "descriptions."

[There is a simple (if not trivial) analogue model for specific information, which shows how it is related to logical processes. The basis of this is the image notion, as displayed by a TV or computer screen, as a representation for all of the sensory perception that will be achieved via the notion of virtual reality. One can devise a displayable program that will allow the screen to show pigs flying over Washington DC. The program represents the instructions or laws that would require this behavior. Although the program language is fixed, an alteration in the program leads to an alteration in pig behavior. However, it is the inner logical workings of the computer that translate this program into the images on the screen. It is the inner logical workings, restricted to specific instructions, that are guided by the "specific information" contained within the programs, and this leads to the displayed images. The reason I wrote "contained within the programs" is that these inner workings would not function in this manner unless the instructions were presented in a translatable program language and, of course, translated in such a manner that yields specific actions. You could consider the "translation into appropriate action" process and the results of this translation process as an analogue model for the "operational content" of the specific information contained within the programs. (Of course, although not completely necessary, it can be rationally assumed that specific information is contained in the "mind" of the programmer and is simply represented by the specific "written" program.) Thus, following this logical analogy, in the natural world, a consequence operator applied to a collection of symbol strings yields the operational content for the specific information contained within that collection. Thus, the notion of specific information is understood not as you would a material entity but rather operationally.]

Mathematicians study patterns associated with human mental activity in various ways. There are general properties associated with these mental patterns and these mental properties can be modeled by means of a mathematical object called a **consequence** operator C
rather than the turnstile |--. The turnstile relation defines a **consequence operator** as follows:
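
For readers who prefer a concrete illustration, the defining closure properties of a consequence operator (every premise set X is contained in C(X), and C(C(X)) = C(X)) can be checked mechanically. The toy rule set below is my own invention for illustration; it is not drawn from Herrmann (1987).

```python
# Toy rule set (my own, for illustration): each entry reads
# "premises |-- conclusions".
RULES = {
    frozenset({"G"}): {"a1"},    # G |-- a1
    frozenset({"a1"}): {"a2"},   # a1 |-- a2
}

def C(X):
    """Close X under the rules: everything derivable from X via |--."""
    closed = set(X)
    changed = True
    while changed:
        changed = False
        for premises, conclusions in RULES.items():
            if premises <= closed and not conclusions <= closed:
                closed |= conclusions
                changed = True
    return closed

X = {"G"}
print(C(X))              # the closure of {G}
assert X <= C(X)         # X is contained in C(X)
assert C(C(X)) == C(X)   # C is idempotent
```

Whatever the rule set, the closure map built this way satisfies these operator properties; that is the pattern the turnstile induces.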

Expression [2] displays, in the simplest possible manner, the possible existence of a natural process that is being mirrored by the properties of operator C. Since the consequence operator properties are fundamental to all physical science theory and principle (2) states that some of the probably correct humanly generated theories are known and these theories predict the behavior of natural systems within the universe, then using principle (4) we should accept that it is absolutely true in objective reality that the behavior of operator C is mirroring the most fundamental of all natural processes.

One of the major characteristics of a modern scientific community is that it is exceptionally myopic when it deals with scientific theory. A scientist learns and then concentrates upon a very complex, often mathematically based, theory and predicts verified results. But, usually, the scientist cannot step back and observe the actual essence of what has been accomplished through such efforts. What is actually happening in nature itself is almost always hidden among thousands of pages of symbolic representations. This also happens when mathematicians study the patterns presented by consequence operators as models for what, using principle (1), must be natural processes.

A consequence operator is defined upon all of the subsets of a language L and L can be very broad in character and not only include the ordinary strings of symbols one associates with a language such as those strings displayed in this article but can include audio and visual impressions as well. Indeed, as mentioned, it can include all representations for all modes of human sensory perception via the notion of virtual reality.
You need not start with a turnstile operator |-- as your basic operator. It has been customary to include in the set G not only a specific source event (the input) that leads to a specific outcome but also all of the assumed natural laws that the human mind would need to predict the probability that the outcome will occur. For example, the natural source event for radioactive decay is the presence of a "decaying" radioactive material. It should be possible to apply certain general natural laws that explain such decay and, of course, application of human mental processes, and predict the probability, with respect to time intervals, of the occurrence of the detector "clicks."

As another example, using the Feynman-described theory for partial reflection, the actual source is a "photon generator" and the detectors are photomultipliers. As Feynman states it on page 17 of the above-mentioned book, QED predicts that, for a glass surface and a supply of photons of a fixed frequency, say red light, on average 4 photons out of 100 will be reflected by the surface at a specific angle and be counted by a specially placed photomultiplier. The input or **trials** are the 100 photons emitted from the source, and the output events are the 4 photons detected by the photomultiplier.

This customary approach can be continued. However, as mentioned, since the entire collection of all natural laws could be included in G, the only differences would be in the source event information. Thus we need only consider G, for the radioactive case, as a single description for the radioactive material and, for the photon case, a description for the photon source. All the natural laws can be considered as what actually generates the consequence operator relation, a binary relation that has the set G as a first coordinate and the set of all P as the second coordinate.

With respect to these event predictions, the essence of the concept of random is that the occurrence of one of these events has no influence upon the occurrence of the "next event" (Olkin, et al. 1994, p. 7). The basic problem is what the phrase "no influence" means. One of the aspects of this new material is that the phrase "no influence" does not mean no design. Further, it will be shown that the phrase "no influence," relative to predictions produced by any physical theory, means **[B] no influence that can be described using the theory's restricted language (Theory Randomness)**, with the exception of trivial identity-styled relations. But, although principle (3) is used by most secular scientists to mean "no influence" by means of any natural law or process, which is the concept called **absolute randomness**, we show that there is no such scientific concept as absolute randomness. Indeed, absolute randomness is a false notion. We show that the myopia of the scientific community has prevented it from "seeing," so to speak, the actual wondrous design and the actual basic event influences that must be present before any so-called random events can occur.

In explaining the following new research findings, certain philosophical statements made by Feynman will be upheld. Relative to theoretic constructs, Feynman writes:

The . . . reason that you might not understand what I am telling you is, while I am describing *how* Nature works, you won't understand *why* Nature works that way. But you see, nobody understands that. (1985, p. 10)

From my experience, another aspect of the Feynman philosophy is correct with the exception of, at the least, one notable research discovery.

Finally, there is the possibility: after I tell you something, you just can't believe it. You can't accept it. A little screen comes down and you don't listen anymore. I'm going to describe to you how Nature is -- and if you don't like it, that's going to get in the way of your understanding it. It's a problem that physicists have learned to deal with: They've learned to realize that whether they like a theory or they don't like a theory is *not* the essential question. Rather, it is whether or not the theory gives predictions that agree with experiment. It is not a question of whether a theory is philosophically delightful, or easy to understand, or perfectly reasonable . . . . (1985, p. 10)

The notable exception is a **theory of everything** (Herrmann, 1994) that solves the General Grand Unification problem. This theory uses processes that satisfy, in general, principle (1) and, since it predicts the behavior of all of the natural systems that exist within our universe, it should, by principle (4), be accepted by all of the scientific community. In general, the model shows, with respect to principle (4), that our universe was created by processes that mirror the processes one would associate with an infinitely powerful mind. The model is called the GGU-model and uses the processes and concepts associated with a mathematical entity called the Nonstandard Physical World that also, according to principle (4), should be accepted as existing in objective reality. The fact that many in the scientific community reject this model is a counterexample to the last sentence in the above Feynman quotation. Since the GGU-model does depend upon the existence of a "background" or "substratum" universe that is the domain for the universe-creating operators, then, counter to principle (4) and the Feynman statement, this model is further rejected based upon the philosophical stance that no such undetectable substratum exists. However, postulating away the existence of such a background universe is not sufficient to eliminate the conclusion that natural systems within our universe are, indeed, controlled by processes that mirror exceptionally remarkable and wondrous mental processes.

As final reinforcement of the above four principles, where I have added remarks between [ and ], Feynman states:

I am not going to explain how the photons actually "decide" when to bounce back or go through: this is not known . . . . I will only show you how to calculate the correct [Indeed, as perfect as one wishes] *probability* that light will be reflected from the glass of a given thickness, because that's the only thing physicists know how to do! What we do to get the correct answer to *this* problem is analogous to the things we have to do to get the answers to *every other* problem explained by quantum electrodynamics. (1985, p. 24)

The theory [QED] describes *all* of the phenomena of the physical world except the gravitational effects . . . and radioactive phenomena [nuclear physics] . . . [Note: recent research tends to uphold, somewhat, the original Hawking principle in that procedures similar to those used for QED seem to predict some gravitational and most nuclear effects.] Most phenomena we are familiar with involve such *tremendous* numbers of electrons that it's hard for our poor minds to follow that complexity. [But, in theory, a computer, which follows the specific logical rules of the propositional logic, should be able to make such calculations.] (1985, pp. 7-8)

The following discussion and results, although applicable to various physical scenarios, are being restricted to the realm of "particle physics."
All probability statements can be re-expressed in terms of the number n of describable **trials** and specific describable natural events **A** that can occur during these specific trials.
Since all of these theories use mathematical procedures to predict a probability p, the basic requirement for a probability statement is that the trials form a
*random sequence* under a given set of conditions. If during n trials, where n is a large number, m events **A** occur, then the event **A** occurs with a probability p approximated by m/n. The approximation becomes more certain as the number of trials increases "without limit." With the exception that one might claim, as Preskill has done, that certain events seem to follow a mathematical distribution, the "random sequence" statement cannot be established within a laboratory setting with certainty. Also, as pointed out in the dictionary by James and James (1968, pp. 285, 307), *such definitions have either logical or empirical difficulties.*
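
The m/n approximation can be illustrated with a simple simulation. The sketch below is mine, not part of any cited theory; it uses Feynman's 4% partial-reflection figure together with a pseudo-random generator merely to show how the relative frequency m/n settles toward p as the number of trials grows.

```python
import random

random.seed(0)      # fixed seed so the illustration is reproducible
p = 0.04            # Feynman's ~4% partial-reflection figure

def estimate(n):
    """Run n photon trials; return m/n, the observed fraction of A events."""
    m = sum(1 for _ in range(n) if random.random() < p)
    return m / n

# The relative frequency m/n settles toward p as n grows "without limit":
for n in (100, 10_000, 1_000_000):
    print(n, estimate(n))
```

Of course, no finite run establishes the "random sequence" claim itself; the simulation only displays the limiting behavior the probability statement asserts.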

Of course, any description for the source that produces the trials, the trials themselves, and a description for the events requires a specific scientific language. Note that some trial descriptions are relative solely to "time." One calculates that over certain time intervals the probability that the detector will click is such and such. On the other hand, you might have a collection of different events that can be described in the appropriate language. For example, suppose that for each of 100 photons the probability that it will be "reflected" at the measured angle is 0.04 (i.e. event **A** occurs) and the probability that it will be "scattered," or not reflected at the measured angle, is 0.96 (i.e. event **A** does not occur). Each trial n and each event produced by a trial must correspond to some sort of "counting label" or else you would have no such probability statement.

Since mental activity is used to predict probability notions, the n trials can be modeled by means of n very simple consequence operators, one for each of the counting labels and the n possible outcomes, of which m are the **A** events. Let G be a fixed description for the source, such as "A photon from S." Let each b in B, where B contains more than one member, correspond to a description for the event outcome produced by each of the n trials. There need to be n applications of a mentally produced statement such as "G yields a member of B." In the case of **A** events, m of these trials correspond to a fixed b in B that describes the **A** events. Using the concept called "a power set map," the following set of statements models the essence of this probabilistic scenario, where if b is in B, then there is an i such that 1 <= i <= n and b = a_i.

where the counting label is represented by the n applications of the operator H (as denoted by the H_1, . . . ,H_n symbols) as H is associated with the labeled a_1, . . . , a_n events.

To indicate the intuitive ordering of any sequence of events, the set T of Kleene styled "tick" marks, with a spacing symbol, is used (Kleene, 1967, p. 202) as they might be metamathematically abbreviated by symbols for the non-zero natural numbers. Using the philosophy of science concept of simplicity, assume that our language L = {G} U B U T. In 1987, it was discovered that there exists a set of consequence operators

that satisfies the requirements of [3] and is minimal with respect to L and [3] (the Occam Razor requirement). (Herrmann, 1987, p. 2, Definition 2.4 (i). Note: This 1987 paper, unfortunately, contains numerous typographical printing errors.) In order to maintain the "no order" requirement and, hence, the prediction that **[C] no two or more of the trial events b be in a recognizable order (an aspect of theory randomness)**, principle (1) requires that any two or more members of Ç in expression [4], when combined together, be a simple type of consequence operator such that the corresponding event outputs of this combination maintain this requirement. The appropriate combination that is absolutely necessary is called the **union** consequence operator and is denoted by C' = C_1 V C_2 V · · · V C_n (i.e. C'(X) = C_1(X) U · · · U C_n(X)). The union consequence operator does not usually exist, but in the case of the set of operators in Ç such an operator defined on L does exist and is minimal in the sense of being the weakest consequence operator (mental activity) that is stronger than each C_i and, hence, satisfies the Occam Razor requirement for models (Herrmann, 1987, p. 5). These consequence operators are the most simplistic entities one can use to model the actual rational processes that yield the probabilistic predictions and theory requirements.
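
As an illustration only (the outcome labels a1, a2, a3 below are hypothetical stand-ins for the members of B), the trial operators C_i and their union C' can be realized as set maps, and the union can be checked to behave as a consequence operator on this toy language:

```python
# Toy illustration: G is the source description; a1, a2, a3 are
# hypothetical outcome labels standing in for the members of B.
G = "G"
B = ["a1", "a2", "a3"]

def make_Ci(a_i):
    """Trial operator C_i: adjoin the single outcome a_i whenever G is present."""
    def C_i(X):
        return set(X) | ({a_i} if G in X else set())
    return C_i

ops = [make_Ci(a) for a in B]

def C_union(X):
    """The union consequence operator C'(X) = C_1(X) U ... U C_n(X)."""
    out = set()
    for C_i in ops:
        out |= C_i(X)
    return out

result = C_union({G})
print(result)                        # G together with every outcome, unordered
assert {G} <= result                 # X is contained in C'(X)
assert C_union(result) == result     # C' is idempotent
```

Note that C' yields the outcomes as an unordered set, which is precisely the "no recognizable order" requirement.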

There is another minimal mental-like operator that is applied after each C_i. In human communication, one of the most significant processes is the selection, from a vast collection of words and phrases, of a particular collection that faithfully describes an event. This operator is called the **finite human choice** operator. It mirrors the natural process known as the **realism** operator and might be considered a slight generalization of the natural selection operator used throughout secular evolutionary theories. However, this operator is usually considered but an integral part of each C_i. This operator, denoted by R, is restricted to members of Ç as they are applied to {G} and to such operators as C'. Define R(C({G})) = C({G}) - {G}. This yields, for each trial, R(C_i({G})) = {a_i} and R(C'({G})) = B.
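The action of R is simple enough to state in two lines of code. The sketch below (operator names are illustrative assumptions) checks that R applied to a trial operator's output discards the source description and leaves only the event, i.e., R(C_1({G})) = {a_1}:

```python
def C1(Y):
    # Minimal trial operator: adjoin the event a1 exactly when G is present.
    return frozenset(Y) | {"a1"} if "G" in Y else frozenset(Y)

def R(Y):
    # Realism / finite human choice operator: discard the source description G,
    # leaving only the described event outcome(s).
    return frozenset(Y) - {"G"}

G = frozenset({"G"})
assert R(C1(G)) == frozenset({"a1"})  # R(C_1({G})) = {a_1}
```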

A source G_1 can have numerous associated but different sets B_j, 1 <= j <= k, of **A**-described events probabilistically predicted by a specific theory. For example, simply consider the source as sunlight. Again, these probabilistic predictions correspond to a set of consequence operators Ç_1, each defined on a language L_1 = {G_1} U B_1 U · · · U B_k U T. Again, these consequence operators must satisfy the union operator requirement. The simplest set of such consequence operators that contains Ç_1 and satisfies the union operator property forms one of the most wondrous and beautifully designed of all mathematical objects: a complete distributive lattice of finitary consequence operators (Herrmann, 1987, p. 5). In the appendix zip file is a short PDF file that shows that if {G_1} is replaced by L, then, even more remarkably, this structure is a complete Boolean algebra. [For reference purposes, the set is H = {C(X,{G_1}) | X subset L_1}, where if G_1 in Y, then C(X,{G_1})(Y) = Y U X; if G_1 not in Y, then C(X,{G_1})(Y) = Y.]
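One lattice ingredient can be checked exhaustively for a small L_1. The sketch below (a toy two-outcome language; names are illustrative assumptions) verifies that the pointwise union of any two members C(X_1,{G_1}) and C(X_2,{G_1}) of H is again a member of H, namely C(X_1 U X_2,{G_1}), so H is closed under the join used throughout the article:

```python
from itertools import combinations

L1 = frozenset({"G1", "b1", "b2"})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def C(X):
    # C(X, {G1}): adjoin the set X exactly when the source description G1 is present.
    def op(Y):
        return Y | X if "G1" in Y else Y
    return op

# The pointwise union of two members of H is again a member of H: the join of
# C(X1,{G1}) and C(X2,{G1}) is the operator C(X1 U X2, {G1}).
for X1 in powerset(L1):
    for X2 in powerset(L1):
        assert all(C(X1)(Y) | C(X2)(Y) == C(X1 | X2)(Y) for Y in powerset(L1))
```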

Usually, the probability statement is considered to be more accurate only if the number of trials n is increased without limit (i.e., n -> infinity). In order to avoid the philosophical difficulties one might have with the "infinity" concept, one can use the "potential" infinite notion. In this case, one can describe the process by saying, "Pick any natural number you wish; then the necessary number of trials for an accurate prediction is greater than your choice." The conclusion that the set of all such consequence operators has this remarkable mathematical property is not dependent upon the number of trials.

Such sets of consequence operators have a much more startling property, however. The members of any set of consequence operators, such as those in [4], that satisfies the requirement that C' is a consequence operator are related one to another. In the appendix is a new nontrivial result which shows that, for any nonempty set of consequence operators {C_i | 1 <= i <= n, 1 < n}, each of which is defined on a language L and for which C' is a consequence operator defined on L, it follows that

C_i(C_1 V C_2 V · · · V C_n) = C_j(C_1 V C_2 V · · · V C_n), 1 <= i < j <= n.   [5]

Although there may not be a relation between the individual events a_i that can be described in terms of the restricted language used for the predictive physical theory, there is the relation [5] between each of the individual operators described above for a probabilistic model that are needed to predict each a_i. Using principle (1), the consequence operators C_i represent the combined theory-dependent natural laws that yield the events as predicted probabilistically. Hence, the natural laws modeled by the C_i are individually related by the n(n - 1)/2 equations that appear in [5]. Expression [5] is what one might describe as being in a symmetric form. But, actually, there are many more such equations, since the operator V is commutative and, hence, any permutation of the operators between the ( and ) will also lead to many more such equations. Consequently, the events a_i that are produced by the necessary consequence operators C_i are not absolutely random in character when the processes are described by the theory of consequence operators. This also indicates that the concept of "randomness" should be considered as only relative to the original theory-restricted language that predicts the probabilistic statement.
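One relation of this kind can be checked mechanically: whenever the union C' is itself a consequence operator, each C_i fixes every output of C', so the compositions C_i(C'(X)) agree for all pairs i, j. The sketch below (two trial operators over a toy language; names are illustrative assumptions) verifies this for every subset X of L:

```python
from itertools import combinations

L = frozenset({"G", "a1", "a2"})

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def make_C(a):
    # Trial operator C_i: adjoin the event a_i exactly when G is present.
    def C(Y):
        return Y | {a} if "G" in Y else Y
    return C

C1, C2 = make_C("a1"), make_C("a2")
Cp = lambda Y: C1(Y) | C2(Y)  # the union consequence operator C'

for X in powerset(L):
    # Each C_i fixes C'(X); hence C_1(C'(X)) = C_2(C'(X)) for every X.
    assert C1(Cp(X)) == Cp(X) == C2(Cp(X))
```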

It is self-evident from paragraph **[A]** on page 2, the theory randomness statement **[B]**, and statement **[C]** relative to recognizable order that such "randomness" need not be considered a hypothesis within such probabilistic theories as QED. Indeed, theory randomness is a statement about, and hence exterior to, such theories and, as such, is a statement within the employed philosophy of science. For this reason, expression [5] is falsifiable: if it can be demonstrated that **[C]** is false, then this will falsify expression [5].

The GGU-model (Herrmann, 1994) shows that it is rational to assume that an external mental-like consequence operator and an external object that behaves like an actual collection of symbols, a word, are the underlying entities that produce and sustain not only our universe in its evolutionary development but that can also be used to create and sustain other "universes." The particle physics community tends to accept the original Hawking principle, especially since its contention is that the probabilistic rules and processes of quantum physics will shortly be shown to govern completely the four fundamental interactions produced by what are often called the electromagnetic, strong, weak, and gravitational "forces." Due to the often-stated requirement that probabilities be considered as related to potentially infinitely many trials, many would consider the n that appears in display [5] to be potentially infinite in character. Notwithstanding this last possible requirement, the n mental-like consequence operators that appear in [4] and [5] must apply to every predictive quantum physical scenario in the vicinity of every spacetime location throughout our universe, as assumed by particle physics. Hence, the fundamental behavior that governs our universe can be described, in general, in the exact same terms as those used to describe the GGU-model conclusions if there exists an ultralogic that is the underlying control. If such is the case, then all natural-system behavior for all of the natural systems that comprise our universe is controlled, both internally and externally, by processes that mirror the behavior of an infinitely powerful mind.

The research result, Theorem 2 in the appendix, gives a very significant signature for the fundamental control of all natural-system behavior. Relative to the quantum physical behavior of photons and partial reflection, as QED predicts the probability that an event will occur, Feynman writes:

I am not going to explain how the photons actually "decide" whether to bounce back or go through; that is not known. (Probably the question has no meaning.) (1985, p. 24)

Feynman uses at this point in his lecture the terms "bounce back" and "go through" for what he later terms new photons (emitted from electrons) that reach the detector or photons that are "scattered" by electrons. The probability that an observed event will occur depends upon the

The above Feynman statement may, indeed, be correct for the language of QED and quantum mechanics in general, but it is a false statement relative to the theory of ultralogics and the physical-like behavior of ultralogics. Relative to theory-predicted probabilistic behavior, Theorem 2 in the appendix shows explicitly that for any sequence of relative frequencies m/n that converges to a predicted probability p that an individual event will occur, whether the convergence be fast or slow, there is a physical-like ultralogic P_p that does, indeed, force the event to occur or not to occur in such a manner that the relative frequency sequence is duplicated exactly. Except for satisfying the basic consequence operator properties for what are called internal sets, it is significant that the ultralogic P_p that forces a specific natural system to so comply with such a probabilistic pattern is **not** related to any known human deductive process. It is also significant that P_p is a member of the nonstandard extension *H of the set H.
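Theorem 2 itself is a nonstandard result, but its finite content (that a prescribed relative-frequency sequence m/n can be reproduced exactly by a deterministic rule rather than by "chance") can be illustrated with a short sketch. The function name and the example sequence below are illustrative assumptions, not from the source:

```python
from fractions import Fraction

def forced_events(counts):
    """Given the cumulative success counts m_1, m_2, ..., emit the unique
    0/1 event sequence whose running relative frequencies are m_n/n."""
    events, prev = [], 0
    for m in counts:
        events.append(m - prev)  # the event occurs exactly when the count must increase
        prev = m
    return events

# Target relative frequencies 1/1, 1/2, 2/3, 2/4, 3/5, i.e. counts 1, 1, 2, 2, 3.
evs = forced_events([1, 1, 2, 2, 3])
assert evs == [1, 0, 1, 0, 1]

# The running relative frequencies of the forced sequence duplicate the target exactly.
freqs = [Fraction(sum(evs[:n + 1]), n + 1) for n in range(len(evs))]
assert freqs == [Fraction(1, 1), Fraction(1, 2), Fraction(2, 3),
                 Fraction(1, 2), Fraction(3, 5)]
```

The point of the illustration is only that exact duplication of a frequency pattern requires no appeal to randomness; the theorem's further content, that such forcing is carried out by a member of *H, is a nonstandard result and is not captured by finite code.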

Preskill's (1997, p. 4) quoted remark is about a probability distribution, the Poisson distribution. The fact is that all such distributions that are obtained from a frequency (mass, density) function follow patterns dictated by an ultralogic generated by a finite set of ultralogics P_p(i). Relative to Theorem 2, the only difference is that the "source" statement G may be more complex in that it describes a particular physical scenario, and the "event" statement **E** may also be more complex. Under the view that equivalent descriptions represented by G and **E** contain all the necessary information that depicts natural-system behavior for a particular physical scenario, a frequency function f(x) is used to obtain the probability that the scenario will occur. A product consequence operator (an ultralogic) is generated by finitely many of the chosen P_p that appear in Theorem 2 and yields or sustains the appropriate sequence of events that satisfies the required distribution.

Using these event sequence ultralogics, one would conclude that the so-called "unregulated or random" behavior that one often associates with quantum mechanics and the behavior of entities within our universe is actually one of the most powerful signatures that processes descriptively represented by an infinitely powerful mind are controlling all aspects of natural system behavior.

The appendix mathematical theorems have been composed in PDF format and can be obtained from the following: chance.zip. This file can be viewed with the Adobe Reader at 125% with use of the zoom-in tool. The screen view for this particular PDF file is not completely satisfactory; however, printing at 300 - 600 DPI gives excellent results.

Cohen, M. R. and E. Nagel. 1934. An introduction to logic and scientific method. Harcourt, Brace and Co., New York.

Feynman, R. 1985. QED: The Strange Theory of Light and Matter. Princeton University Press, Princeton.

Herrmann, R. A. 1987. Nonstandard Consequence Operators. *Kobe. J. Math.* 4:1-14. http://www.arxiv.org/abs/math.LO/9911204

Herrmann, R. A. 1994. Solutions to the "General Grand Unification Problem," and the Questions "How Did Our Universe Come Into Being?" and "Of What is Empty Space Composed?" Presented before the MAA, at Western Maryland College, 12 Nov. http://www.arxiv.org/abs/astro-ph/9903110

James and James. 1968. Mathematics Dictionary. D. Van Nostrand Co., New York.

Kleene, S. 1967. Mathematical Logic. John Wiley & Sons, New York.

March, A. and I. M. Freeman. 1963. The New World of Physics. Vintage Books, New York.

Olkin, I. et al. 1994. Probability Models and Applications. Macmillan College Publishing, New York.

Planck, M. 1932. The Mechanics of Deformable Bodies. Vol. II, Introduction to Theoretical Physics. Macmillan, New York.

Preskill, J. 1997. Quantum Information and Quantum Computing. Physics 229, Advanced Mathematical Methods in Physics. Cal. Tech.

Mathematics Department, U. S. Naval Academy, 572 Holloway Rd., Annapolis, MD 21402-5002