My rather complex theorem on probability models, most easily found at the reference below, is concerned only with the probability that an event will or will not occur. No probability notions, including "randomness," are used to obtain standard objects that have exactly the same properties as a converging sequence of relative frequencies for Bernoulli trials, where 0 signifies that an event does not occur and 1 that it does. Many with some knowledge of this area will find the proof hard going, since it actually reverses the usual probabilistic way of describing such a Bernoulli trials model. Further, I show how to pre-design such a sequence. From the viewpoint of a higher-intelligence, however, all such occurrences are deterministic in character; they are not randomly produced in the sense of having no cause.
Randomness and Patterns
Robert A. Herrmann Ph.D.
The results of this theorem also apply to distributions that measure the probability of occurrence of an event over a range of a parameter, and even to such things as the collapse of the wave function. An immense problem for the randomness notion is the considerable confusion between its no-cause and indeterminacy facets. Further, these notions are often discussed at different levels of observation. For the area of my concern, I consider a combined definition for "randomness" that eliminates this confusion.
No causes: For certain physical events, whether the events occur or do not occur cannot be traced to causal, scientifically described entities or processes. (Single events are included.)
Or probabilistic indeterminacy: For certain physical events, the occurrence or non-occurrence of a finite collection of these events does not scientifically determine an exact prediction that another such event will or will not occur. However, the occurrence of an event does follow the laws of probability. (By "scientific" I mean using an accepted language for scientific discourse and accepted modes of investigation.)
In the book, relative to "theory" randomness, I specifically state a definition for randomness that corresponds to these two notions, where a cause corresponds, at least, to a theory's prediction for the occurrence or non-occurrence of an event. But I do not delve into the often long and tiring philosophic ramifications this definition yields, since that is not necessary. The higher-intelligence results counter both of these "randomness" notions.
In my book, I give examples where individuals can produce, within a material environment, behavior of objects that would be classified as random under an observational definition. In one example, the random nature of the described motion of a collection of leaves is not considered to be caused by the human process that led to the behavior. The random behavior occurs after the process is applied, and it applies to certain aspects of the patterns being displayed. It is claimed that each leaf undergoes "random fluctuations" over small periods of time, a behavior analogous to Brownian movement that would even satisfy a type of Heisenberg indeterminacy expression at that level of observation. The production process itself is not a random process, but aspects of the results are claimed to be random.
One needs to go down a level or two for the indeterminacy part of this randomness notion. Then one needs to go down to atomic levels, probably, to reach the no-cause aspect. But is this the last level? David Bohm proposed that it is not. He hypothesized a sub-quantum level that causes and determines atomic behavior, although at this lower level one still has a major "randomness" problem. He states: "let us note that in our model we have not insisted on a purely causal theory, for we have also utilized the assumption of random fluctuations originating at a deeper level."
Bohm claims, however, that the laws at this sub-quantum level are considerably different from those of the quantum level. Yet his new laws for the sub-quantum domain do not eliminate the requirement for "random fluctuations" in the intensity, so to speak, of a field. His proposal only eliminates the discreteness (quanta) aspect, something I also did many years ago, and replaces it with a sub-quantum "continuous" field, still with "random fluctuations." I point out that another field has also been proposed - the zero-point radiation field (ZPF) - that hides in the particle-physics vacuum. This field behaves classically and produces the apparent discrete quantum-physical behavior in many cases. However, one must still postulate "random field fluctuations."
For the leaf illustration, the fact that an individual is the cause of a behavior at one level, such as a finite choice from a finite collection, makes such a choice process not in itself a truly random process from the mathematical-model viewpoint. The individual is the absolute cause for the choice of a finite collection of leaves. But further aspects of the choice may lead to "random" behavior as defined here. The idea is that the behavior is random as observed; there is indeterminacy at other observed or imagined levels.
I mentioned above the notion of finite choice. Such a notion would seem to predict physical-system behavior if a finite choice were used as a guide to that behavior, each member of the chosen set being used to determine physical-system behavior. Mathematically, and I suppose philosophically, it is not enough simply to consider finite choice; it appears that, at least, the potential infinite is necessary. From this viewpoint, it is claimed that, at least with respect to how our universe probabilistically behaves, finite choice is not a form of determinacy that would contradict true random behavior. That is, if true random behavior comes first, so to speak, then finite choice does not actually contradict this belief.
(Mathematically this is so since the convergence of a sequence of real numbers, say, cannot be determined by examining a finite collection of finite subsequences. I note that the law of large numbers must apply to the sequences generated in my theorem. Mathematically, however, the law of large numbers does not actually indicate how many terms of such a sequence one must have before one recognizes the value to which the sequence converges. It does say that, as the number of terms increases, it is "more" probable that the sequence terms will more closely approximate the actual probability. But, in actual practice, such determinations seem to occur more readily. (A higher-intelligence may be responsible for this.) There are deterministic sequences of event occurrences that would appear to be Bernoulli in character yet actually do not converge to any real number. See the example below.)
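As an aside, the convergence behavior that the law of large numbers describes is easy to watch numerically. The sketch below is my own illustration, not part of the theorem: it simulates Bernoulli trials with success probability p and records the running relative frequencies. The later terms tend to cluster near p, yet no finite prefix of the sequence can certify its limit.

```python
import random

def relative_frequencies(p, n, seed=0):
    """Simulate n Bernoulli trials (1 = event occurs, 0 = it does not)
    and return the running relative frequencies
    f_k = (occurrences among the first k trials) / k."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    successes = 0
    freqs = []
    for k in range(1, n + 1):
        if rng.random() < p:
            successes += 1
        freqs.append(successes / k)
    return freqs

freqs = relative_frequencies(p=0.3, n=10000)
# Later terms approximate p = 0.3 ever more closely "in probability,"
# but examining any finite number of them proves nothing about the limit.
```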
The higher-intelligence model for the claimed, scientifically modeled randomness is not merely a cause-and-effect statement for the production of the claimed random behavior under the no-cause part of the definition. In all cases, this higher-intelligence deterministically produces the actual patterns as well. It does this, in general, by hyperfinite choice. This observation is developed after the existence of the higher-intelligence operator is obtained mathematically, and it appears in the referenced arxiv.org version of my published paper on ultralogics and probability models. The choice determines the actual and exact occurrence or non-occurrence of an event relative to the trial notion. Hence, these higher-intelligence results counter both of the above-defined aspects attributed to "randomness."
For the so-called random fluctuations of a leaf, the operator determines whether the leaf exhibits one set of specific movement characteristics required by the higher-intelligence or does not exhibit them. Of course, a mere human cannot make such a "prediction," since a human cannot make such a choice. Thus, for such higher-intelligence operators, there is no way for a human to make such individual behavioral predictions. This is why I consider the scientific community's insistence upon observed or imagined random behavior as important indirect evidence for the existence of such operators with their higher-intelligence signatures. Of course, the "higher-intelligence" terms used above can be removed, and one then obtains a pure GGU-model interpretation.
For the theorem mentioned above, see: Herrmann, Robert A., "Probability model and ultralogics" (arxiv.org).
The following sequence of rational numbers is generated deterministically and does not converge to any real number. However, it could be used to generate event occurrences via the numerator values. See if you can guess how it is generated.
0/1, 1/2, 1/3, 2/4, 2/5, 2/6, 3/7, 4/8, 4/9, 4/10, 4/11, 4/12, 5/13, 6/14, 7/15, 8/16, 8/17, 8/18, 8/19, 8/20, 8/21, 8/22, 8/23, 8/24, 9/25, 10/26, 11/27, 12/28, 13/29, 14/30, 15/31, 16/32, 16/33, . . ., 16/48, . . .
There is an actual deterministic statement you can write down that will specifically generate each term of this "infinite" sequence. I note that for any natural number n, there will be terms with index greater than n having the values 1/2 and 1/3; hence, the sequence cannot converge. Obviously, this can be related to the occurrence of events. As is done in the proof of the referenced theorem, the same type of ultralogic can produce the events that would lead to such a sequence of relative frequencies. Thus, we have intelligently designed patterns for the occurrence or non-occurrence of events, where each occurrence is controlled by a higher-intelligence. But the occurrences are not probabilistic in character.

Notice that you start with 0/1 and that the denominator increases by 1 at each term. You increase the numerator by 1 at each term until you reach a term equal to 1/2. Then you repeat the numerator until you reach a term equal to 1/3. Then you again increase the numerator by 1 until you reach 1/2, and so on. The only thing one needs to show is that the points at which you alter or repeat the numerators will always occur after a finite number of steps.
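Under my reading of that rule (raise the numerator until the term equals 1/2, hold it until the term equals 1/3, repeat), the generation can be sketched as follows; the function name is my own label, and terms are kept as (numerator, denominator) pairs so the fractions are not reduced.

```python
def pattern_sequence(n_terms):
    """Deterministically generate the non-convergent sequence above.

    Start at 0/1; the denominator rises by 1 each term.  While 'raising',
    the numerator also rises by 1; once a term equals 1/2, the numerator
    is held fixed until a term equals 1/3, and then raising resumes."""
    terms = [(0, 1)]
    num, den, raising = 0, 1, True
    while len(terms) < n_terms:
        den += 1
        if raising:
            num += 1
        terms.append((num, den))
        if raising and 2 * num == den:        # term equals 1/2: hold numerator
            raising = False
        elif not raising and 3 * num == den:  # term equals 1/3: resume raising
            raising = True
    return terms

# The values 1/2 and 1/3 recur at ever-larger indices, so the
# corresponding relative frequencies cannot converge.
```

Each switch point arrives after finitely many steps, since a held numerator over a growing denominator must eventually reach 1/3, and a numerator rising by 1 per term must eventually catch up to half the denominator.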