Simplifying the Technical Aspects of the GD-world, GGU-model, GID-model and the Rapid Formation Model

Robert A. Herrmann Ph. D.
Mathematics Department
U. S. Naval Academy
572 Holloway Rd.
Annapolis, MD 21402-5002
1996. Last revision 9 FEB 2016.

1. Introduction, Language Usage and Other Stuff.

Before I start this informally presented article, I must make one very, very important disclosure. All of the GD-world, the (General Grand Unification Model) GGU-model, the (General Intelligent Design) GID-model results are PREDICTED from but a few common everyday human experiences that can be verified within a laboratory setting. Except for the most basic mathematical axioms used throughout all of science, NO additional mathematical or physical axioms or presuppositions are assumed. This must always be kept in mind, for there will arise the myth that the results have somehow or other been included within the axioms or presuppositions. This myth is entirely and utterly false.

The discussion presented here does not contain the most recent significant results and simplifications that appear in my book "Science Declares Our Universe IS Intelligently Designed" (Xulon Press) and as they appear at the mathematics and physics archives. But this article can help one to comprehend aspects of these models as well as new material not presented there. The most recent attempts at simplifying the GGU and GID models are at the link Introduction. The book method is compared with this new approach at the end of this article.

Section 8 of this article is a revision from the method discussed in the above book. That book is still viable for the notion that a universe is a collection of fundamental physical-systems. However, a new approach that applies to universes in general has been developed recently. This new approach allows for a greater in-depth analysis.

From Nobel prize winner Louis deBroglie, comes the statement:

. . . the structure of the material universe has something in common with the laws that govern the workings of the human mind. [March, 1963, p. 143]

Then from the viewpoint not of a scientist but a philosopher, recall what C. S. Lewis writes:

. . . that events in the remotest parts of space appear to obey the laws of rational thought. . . . According to it what is behind the universe is more like a mind than it is anything else we know. [Lewis, 1978]

In this article, I'll present the intuitive and foundational concepts needed to establish scientifically these two conjectures, but this will be done in a simplified manner. Much of the seemingly incomprehensible material that appears in various technical papers will be removed by showing that, in reality, the foundations are easily understood. The great difficulty only comes when the modern methods of mathematical modeling are employed in an attempt to predict new results. First, we need a definition as to what is the most common feature of all of scientific discourse.

Science, "at the least," is composed of systematized and reasoned descriptions.

By the way, one term in this definition has been altered. The usual term "knowledge" has been replaced by the more innocuous term "descriptions" since a "description" need not mean this elusive thing we call "fact." The term "reasoned" means a specific type of reasoning called scientific reasoning. Such reasoning is, usually, a combination of classical logic and the somewhat weak process of induction.

Is there a common feature associated with my basic definition of a science that's so obvious that it may have been overlooked? Having systematized and reasoned descriptions is somewhat worthless if such descriptions can't be communicated to others. "Obviously," it's assumed that such descriptions will be communicated, somehow or other, to other individuals working within a particular discipline. The ability to communicate ideas is one of the crowning achievements of the human mind and shouldn't be underestimated. Let's consider some of the more basic aspects of this common feature -- basic aspects that apply to all disciplines, in general.

In order to transmit information, oral communication is translated into written symbolism. Such communication is comprised of words, phrases, sentences, paragraphs, books, and libraries of written expressions. Along with these written expressions are the common rules - the rules of grammar - that guide us in combining such expressions into collections so that they have meaning to a particular object with which you wish to communicate. Notice that I just used the word "object" since, today, we tend to communicate with, what is still classified as, nonhuman objects. You may have been taught to use a special language to communicate with a computer or an automated teller machine. You discover quickly the trouble that occurs if these special languages aren't used. Such experiences have certainly increased our understanding of what exact communication means.

Clearly, all of the symbolic means employed to communicate haven't been discussed as yet. There are diagrams, figures, and drawings that one might classify as symbols for something or other. But then what would you do with the great visual communicators that are photographs, motion pictures, television, television tape, the DVD and other devices that may reproduce sensory information? Since photographs, motion pictures, television pictures, audio and sensory information associated with virtual reality can all be stored on devices such as the DVD, only such devices need to be considered. Would you consider the information contained on a DVD as expressible in a written form?

Modern technology is used to produce an exact string of symbols that will reproduce, via translation devices, visual images, sound and other sensory information stored on DVD. Today, information is "digitized." Using the older type of cathode-ray (CR) tube as an example, each small fluorescent region on a TV screen is given a coded location. For simplicity, consider the intensity of the beam and the like as coded in a series of binary digits. The computer software then decodes this information and the beam sweeps out a glowing picture on a "CR TV-monitor." At the next sweep, a different decoded series of digits yields a slightly different picture. And, after many hundreds of these sweeps, the human brain coupled with the eye's persistence of vision yields a faithful mental motion picture. Each of these single glowing pictures can also be faithfully reproduced in a written language. All one needs to do is to state that at a particular moment of atomic-clock time a picture element (pixel) has a particular intensity and a particular "color." The entire description composed of such statements for every pixel will correspond to a single CR TV-image and when "translated" by the electronic hardware, that can also be completely described, becomes the desired visual image. Other such sensory-replicating devices are also excited via digitally-coded information.
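The idea that a single TV image can be rewritten as one long symbol string can be illustrated with a toy sketch. The encoding scheme below is invented purely for illustration (it is not any actual broadcast or DVD standard): each pixel statement records a location, an intensity, and a "color," and the whole image becomes one string that can be faithfully decoded back.

```python
# A toy illustration: a tiny "image" rewritten as one long symbol string.
# The statement format is a hypothetical encoding, invented for this sketch.

def encode(image):
    """Turn a grid of (intensity, color) pixels into a single symbol string."""
    statements = []
    for row, line in enumerate(image):
        for col, (intensity, color) in enumerate(line):
            statements.append(f"pixel({row},{col})=intensity:{intensity},color:{color}")
    return ";".join(statements)

def decode(text, rows, cols):
    """Recover the pixel grid from the symbol string."""
    image = [[None] * cols for _ in range(rows)]
    for stmt in text.split(";"):
        coords, value = stmt.split("=")
        row, col = map(int, coords[len("pixel("):-1].split(","))
        intensity_part, color_part = value.split(",")
        image[row][col] = (int(intensity_part.split(":")[1]),
                           color_part.split(":")[1])
    return image

picture = [[(255, "red"), (0, "blue")],
           [(128, "green"), (64, "red")]]
string_form = encode(picture)
assert decode(string_form, 2, 2) == picture  # the round trip is faithful
```

The point of the sketch is only that nothing is lost: the string form carries the entire picture, which is the sense in which a visual image "can also be faithfully reproduced in a written language."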

Did you know that when you make a telephone call that the major telephone communication networks digitize the sounds and then faithfully reassemble the numerical codes into a reproduction of what the listener believes is "your voice"? Record companies digitize music in order to improve upon the quality. Schematics for the construction of equipment can be faithfully described in words and phrases if a fine enough map-type grid is used. Thus, the complete computer software expressed in a computer language, the digitized or other types of inputs along with schematics of how to build the equipment to code and decode digitized information, taken together, can be considered as an enormous symbol string the exact content of which will be what you perceive on TV, hear from your CD or DVD player, or other such devices.

Today scientists use these mechanical devices in the place of human observation whenever possible since human observation is often not considered a trusted observational procedure. Thus, in all that follows, the notion of a "string of symbols" as a representation for scientific information will always include the idea that portions of such a string also communicate sensory information to the human brain. But, since it's communicated to the human brain, the notion of the content also reveals itself.

All of the effects that such a string of symbols has upon the human brain is the content contained within that string of symbols.

Now the exact content of a string of symbols, certainly, depends upon an individual's personal experiences and technical training for the same written term or visual image often conveys a different content relative to different disciplines. Thus, the content is determined by the rules or standards accepted by specific communities such as science-communities. Under this constraint, much of what science considers to be perception may be replaced by a long exact string of symbols (i.e. symbol strings). This leads to what is truly the common feature shared by all sciences.

The common feature is communication by means of strings of symbols of one sort or another.

Members of a specific discipline create what is termed a technical dictionary that contains strings of symbols that are accepted by its members. [Recall that this will always include visual images, audio impressions and, if you wish, tactile or olfactory descriptive information.] For the present discussion, you don't need to consider this dictionary as containing any basic meanings for the individual strings of symbols. What's required is a relatively fixed set of rules for combining different symbol strings into longer, more complex collections that all members of a discipline use. The technical symbol dictionary and all the accepted combinations dictated by these rules form what is termed a discipline or technical language.

Although the content of a particular set of symbols may be described as all of the effects it has upon an individual's brain, the simplicity of this definition stops at the point where you analyze its properties. Content has many diverse characteristics that often lead to the difficulties humans experience in communicating ideas. Content is exquisitely related to human experience, and it's particularly significant for what is called a description.

Individuals experience the results of mental impressions, whether such impressions are internally or externally generated.

A description is a combination of symbol strings (including those representing visual or audio information etc.) that when considered by an individual evokes mental impressions.

First, an individual's mental impressions lead to the formation of a description. Secondly, a description is intended to evoke within the same individual, and others, approximately the same mental impressions which originally determined the description.

A description may only be an approximate reproduction of the original mental impressions. As a child, I had a great deal of difficulty communicating orally. I was very frustrated for I didn't have a large enough vocabulary to communicate my ideas. This difficulty in communicating mental impressions continues to plague any scientist who discovers a "truly" new concept. The new concept requires new terms. And, when these new terms are created, they need to be defined by descriptions using previous dictionary terms. The combinations of these old dictionary terms, if possible, should not occur in previous definitions so that these new terms will definitely have content different from all the other technical terms. Unfortunately, even if I'm careful in term-definitions, my experience has been that a description using these new terms never seems to convey all of the content I intend. This becomes more obvious when I ask other scientists to re-describe the content of my description in their own words, so to speak. Their partial descriptions often fall far short of what I perceive to be the true content of the new terms. Of course, in actuality, there's no absolute way to determine that the content a description conveys to one individual is the exact same content that it conveys to another individual since one would need an independent form of communication to make such a determination.

Using machines to record sensory information does not completely remove this content problem between different science-communities. Science, today, still requires linguistic descriptions to be communicated faithfully within the community. One basic assumption is that a faithful description does exist. What does the display of descriptive information have to do with the deBroglie and Lewis statements? It is not the description itself that is significant; it's what comes after such a description that's the basis of modern science.

Suppose that a description conveys information about the behavior of a natural (i.e. physical) object at a particular time or place. The modern scientist takes such a description, couples it with additional statements called natural laws, and deduces scientifically a new description that is claimed to represent the behavior of a physical object at another time or place. It's the notion of scientific deduction that ties together descriptions and the deBroglie and Lewis statements. The important point is that the rules and processes used for scientific deduction are constructed in such a way that they are not related to the content of a description.

2. Predictions.

(Note: The word "physical" as defined by a specific science-community is used in the following and corresponds to the word "natural" in much of my older writings and the above mentioned book.) First, consider the following very useful definition.

A physical-system is an arrangement of physical objects that are so related as to form an identifiable unity. Usually, scientists will declare that a particular arrangement forms a physical-system and give such a system a name.

Thus the sun, the earth, the solar system and a virus are physical-systems.

Suppose that you're conducting an experiment with a named physical-system. Let's call a linguistic-like object a description if it describes, using a language, images, or other sensory information, an observed or assumed real physical phenomenon. Let the actual observed or assumed real physical phenomenon itself, the one assumed to exist in objective reality (outside of the brain or mind of the observer), be called an event. Often, where there is no confusion, the term "event" is used for a description. Consider event E(1) as an observed or assumed real physical phenomenon. Suppose that your experience and training leads you to conclude that a second observation of an event E(2) is closely related to the first observation. The relation you decide upon, the physical law, contains one or more described processes termed the cause that changes event E(1) into E(2). This change is the effect of application of this law. However, for the GGU-model there are no "cause and effect" statements used for physical behavior internal to a universe since a possible universe need not exhibit any such relation between any two or more events.

As an example, using the simplified "force" language, when a knife slides off a table, E(1), it falls, E(2), towards the floor. One might state that "the force of gravity" causes the knife to fall towards the floor. For the GGU-model, this is not the correct idea. When the step-by-step frames presented by a DVD are played back and show this behavior, there does not appear to be any such "force" that yields the changes in position. To repeat, what a physical law actually does is to present a description that represents a relation between events. For the GGU-model, this description need not be invariant, that is, universal in time or place. Thus you have a type of sequential event concept. In this case, you have the knife just starting its fall - event E(1). The Newton law of gravity (this includes all rational implications) relates such parameters as distance and time. This law allows us to predict, with a good approximation, the time it takes the knife to hit the floor. Hopefully, there is enough time for you to move your foot in order to avoid it.
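The kind of prediction meant here can be made concrete with elementary free-fall kinematics. The sketch below uses the relation s = (1/2)g t² with an assumed g of 9.8 m/s² and an assumed table height of 0.75 m (both values are illustrative, not taken from the article) to predict the time between event E(1) and event E(2):

```python
import math

# Free-fall sketch: predict when the knife (event E(1), start of fall)
# reaches the floor (event E(2)). Height and g are assumed for illustration.
g = 9.8          # gravitational acceleration, m/s^2
height = 0.75    # assumed table height, m

# From s = (1/2) * g * t**2, solve for the fall time t.
fall_time = math.sqrt(2 * height / g)
print(f"predicted fall time: {fall_time:.3f} s")
```

Note the point being made in the text: the computation relates measured parameters of E(1) and E(2); it does not, in the GGU-model view, describe any "force" producing the change.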

In other cases, an assumed law does other things that appear to be more cause and effect describable. For example, it is claimed that the "vibration" of a quantum field E(3) yields electron properties - an event E(4). One can say that the "field vibration" E(3) causes the effect E(4). So, from this view, E(3) causes effect E(4). From the GGU-model viewpoint, event E(4) is but a prediction based upon E(3).

As mentioned, being a cosmogony, the GGU-model cannot be generated by physical laws since at least one universe would not possess such notions from a description viewpoint. Consider any event E(1). Any other event E(2) is not related, in any way describable or not, to event E(1). Event E(1) does not cause, produce, yield or predict E(2), and this is not due to an inability to describe such possibilities. The fact is that for the GGU-model the generation of a universe need not be governed by knowable or unknowable relations, and this leads to the rejection of a concept that has been assumed for more than 2400 years. Due to our training in all things within physical science, this is a difficult but necessary assumption to accept.

For the GGU-model, given any universe-wide frozen-frame (UWFF) E'(1), the "before event" (i.e. a universe as it is at a fixed moment in its development), then no other universe-wide frozen-frame E'(2), an "after event," is produced by any physical processes or entities associated with any physical law. What happens is that E'(1) and E'(2) simply "might" satisfy or verify describable physical laws. There are no "causes" or "effects" for physical behavior; there are only relations between them. However, most significantly, such described regulations do yield models that allow us to predict future events E(t) contained in a future E'(t). In this prediction sense, a physical law is said to be "verified" or "satisfied" if the predictions occur in the physical-world. (What is physical is defined by a list of entities.)
In Herrmann (2003), it is shown how to obtain operator U, the best possible unification for the collection of all physical laws and accepted physical theories. This means that when U is applied to a collection of physical events E(1) a collection of predicted physical events E(2) satisfies U. (Mathematically, the operator U is not, in general, considered as restricted to these sets of events. In the application being made here only specific sequential restrictions are being considered.) This is symbolized by the following:

(1) U(E(1)) => E(2).

This is a standard mathematics symbolism and is read "U applied to E(1) predicts E(2)" and other similar expressions.

(i) From a symbol string point of view, you have one set of symbols "E(1)" coupled or associated with symbol "E(2)" by the symbol "U".

(ii) From the physical point of view, you have one real physical phenomenon (an event) physically related to another physical phenomenon by description U. This is a most remarkable fact that relates language to physical events.

(iii) Hence, you have two coupled concepts: symbol strings on one hand and what they represent on the other. Unfortunately, the terms applied to such strings of symbols can take on two different meanings and are confused. For example, writing s = Ct x t is but a set of symbols. If they represent nonzero real numbers, then we can move them around and write (s/C) = t x t. But, very often the symbol "s" is called distance and the symbol "t" time. Actually, for the physical world and a model for the behavior of a falling knife, one needs to add that s is "the measure of" distance and t is "the measure of the lapsed" time.

These dual characteristics associated with scientific communication are very significant for the prediction concept. Why did members of E(2) occur? Did the unification U "force" E(2) to occur based upon the occurrence of E(1)? How can a mere description do this? Significantly, for this to occur, classical logic is applied and the events in E(2) are predicted. So, however this is done, the step-by-step generation of our universe, at the least, follows the rules of classical logic. One can assume that this is mere chance and our brains have "simply" evolved in some unknown way so as to replicate this process when we use a descriptive language. One may ask relative to our earthly home, "Why us and not other living creatures?" That is another rather remarkable fact.

So, it appears that for humankind to comprehend a physical law, we assume that Nature is "putting together" members of E(1) to yield the members of E(2) via processes that parallel human rules for deduction. Indeed, scientists have stated thousands upon thousands of times that they know "how" Nature is "doing it." This "how" requires deductive thought. But, in a pure physical sense, they do not know "why Nature does it this way?" Notice that logical processes are the absolute basis for all computer simulations for physical-system behavior.

Let's discuss and illustrate more fully exactly what is actually being done. There are certain laws and procedures that have come down to us from Aristotle and others as to how we should put strings of symbols together and arrive at another string of symbols. These symbols, of course, correspond to spoken statements. How this needs to be done is codified by the classical laws of human logic. These laws correspond to our everyday linguistic experiences where strings of symbols are used to describe those physical events that we perceive by our senses. These logical laws are also those accepted by the majority of humanity as the proper way "to think" since these logical laws are used to communicate and construct the material things we use in everyday life. Indeed, we cannot even move from one place to another unless we mentally apply such laws.

If we can't perceive a physical event, we simply assume that the same logical laws hold. However, the laws of human logic are more general in character and can be applied to any string of symbols not just a string that appears to describe a physical event. It's the pen and paper process - a game to some - of writing down strings of symbols and combining them together by use of certain accepted laws of human logic that yields a scientific theory.

One of the accepted patterns for human thought as applied to strings of symbols is easily illustrated. Since this law of logic applies to various distinct strings of symbols, one probably shouldn't write down a specific string of symbols to illustrate this pattern of human thought. So, to simplify matters, let the bold face symbols X, Y, and Z denote statements taken from a language. Let & denote the word "and." Now consider a new string of symbols which, in two equivalent ways, is expressed explicitly as follows:

"(logically) implies or (logically) predicts" is symbolized by =>.

The idea of understanding intuitively a new explicitly expressed string of symbols shouldn't be new to you if you ever had a class in high school geometry. After all, to what does a straight line correspond? It's anything, using the textbook language for geometry, that can be classically deduced using a set of basic "straight line" properties. Notice also that in the "gravity example" logical deduction is used to declare that it is the "force" of gravity that alters event E(1). The prediction is the "alters" statement; the position E(1) of the knife is altered and becomes E(2). There are many other predictions as well.

The above could be written symbolically as

(2) X & U => Y & U.

The unification U is considered the simplest part of our deductions. It predicts itself by a one-step deduction via writing down the members of U. It can be suppressed; it is "understood." The form (2) is a type of logic-system representation. However, applying a very basic rule of logic one has A & B => A. So, assuming this rule holds for physical events, we have that X & U => Y & U => Y as well. One can add this to the expressions that follow under this assumption.

Now let E(1) be the event described by X and E(2) the event described by Y. Then we have the pattern

(3) E(1) & U => E(2) & U,

where our interest is in the event pattern.

Let's put the three patterns together for reference purposes.

(1) U(E(1)) => E(2)
(2) X & U => Y & U.
(3) E(1) & U => E(2) & U.

Note the following important fact. The U is not part of an actual "observed" event; it is part of the "implies" content. The event is simply described. Thus, the U is like an axiom system that is always present and can be used at any moment. The actual descriptions for the observed events have the pattern

(4) E(1) => E(2).

It matters not whether the U is considered as part of the description for a physical event, or suppressed entirely and considered as part of the logical rules. Expression (4) uses actual "observations" for physical-system behavior and is the basis for the actual logical patterns. A repeated pattern (4) is what leads to a search for physical laws that show a comprehensible relation between the two events. Now one of the most basic properties for "logically implies," using a fixed U and fixed rules of inference, is illustrated by the following set of directions. Write down a set of statements:

X => Y,
(5) Y => Z.

The classical laws of human logic now allow you to write down the statement:

(6) X => Z.

(You can "sort of" erase the Y.)

Letting E(3) be the event described by Z, we have the event patterns

E(1) => E(2)
E(2) => E(3)
(7) E(1) => E(3).
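The chaining step from pattern (5) to patterns (6) and (7) can be mimicked mechanically. The sketch below (a minimal illustration, not the GGU-model's actual logic-system) closes a set of implication rules under transitive chaining and so "erases the Y" exactly as described:

```python
# Close a set of implications X => Y under transitive chaining,
# mimicking the step from pattern (5) to pattern (6).

def transitive_closure(rules):
    """rules: a set of (antecedent, consequent) pairs representing X => Y."""
    closure = set(rules)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z))  # from X => Y and Y => Z, write down X => Z
                    changed = True
    return closure

rules = {("E(1)", "E(2)"), ("E(2)", "E(3)")}
derived = transitive_closure(rules)
assert ("E(1)", "E(3)") in derived  # pattern (7): the intermediate E(2) is "erased"
```

This is the same elementary pattern that, as the text notes, underlies the construction of every modern computer.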

Such patterns are accepted within science as correct in that it's assumed that this represents how logically comprehensible physical events, relative to U, are actually being combined together within a physical-system. Of course, this is a very elementary illustration for the deBroglie and Lewis statements. By the way, if you want an immediate "proof" for a portion of this last remark, consider the classical propositional logic and a fixed U. These are the exact patterns used to construct each and every modern computer. However, not all physical-systems need follow all the patterns for classical logic. But, all physical-systems do follow a certain weak portion of the rules for classical logic and this weak portion is the actual set of rules used to obtain the GD-world, and GGU-model results.

What can be gleaned from observing the absolute similarity between the linguistic patterns (5) and (6) and the event patterns (7) and the exact same similarity between more complex linguistic patterns associated with human thought and event patterns is that

the behavior patterns for certain physical mechanisms is being mirrored or "modeled," or "represented" by linguistic patterns associated with mental processes.

Although controversial language using the "cause and effect" phrase is not employed, the GGU-model does answer one such question: it yields an "ultimate cause" for each physical event that occurs within our universe.

3. What does the term model mean?

I published a long article discussing the different meanings for this term but in the present context it means one and only one thing. A "model" (i.e. an analogue model) need have no "resemblance" to the actual physical objects being modeled. After we have assigned to each physical object its corresponding object within the model, all that such a model does is to rationally represent behavior and properties where the objects used for the representation may be different from the actual physical objects being investigated. This correspondence is called an interpretation. One example should suffice.

In the Big Bang theory for how our universe behaves, we have the concept of the expanding universe. One basic way to model this is by the balloon analogy. Take a rubber balloon. Using small spots of glue, glue pennies on this balloon while it's somewhat inflated. The correspondence is that the pennies correspond to galaxies. Now begin blowing up this balloon. What do you see? Well, one of the things you see is that the pennies are moving slowly away from each other. This behavior is supposed to illustrate, or be a model for, the expanding universe. But, it doesn't do so in any fashion except one.

In this balloon model, you aren't supposed to notice the balloon. You are to suppress the fact that it's the rubber that's stretching. You aren't supposed to notice that it's being blown up by a lot of hot air. You aren't supposed to notice that you are observing the entire event from "outside" of the so-called universe as a god might. You are only supposed to "see" the pennies (galaxies) moving away from one another, and you are also to conclude that this motion isn't produced by any force produced by the individual galaxies themselves nor produced by any other "perceivable" material within the universe. This is a model for this and only this behavior. To model other cosmological events, dust particles take the place of star elements and the motion of galaxies is modeled via fluid dynamics.

The actual physical way that the balloon produces this behavior does not correspond to anything in reality that produces the expansion of space. Indeed, what is the reality? The reality is that the balloon represents a "nothingness" or a complete emptiness from the viewpoint of observational astronomy. Although there have been various ad hoc attempts to describe this nothingness, only the GGU-model predicts from actual very simple physical considerations the entities that may actually comprise this nothingness. But, this "nothingness" still can't be directly perceived by us or by any machines constructed by any "intelligent" life form.

Let's expand slightly upon the concept of "behavior." In the context of this strict definition for a model, the behavior means a description for a relationship between objects within a model where you substitute for the modeling objects the corresponding physical objects. Thus, in the balloon case, we have that the pennies = galaxies are moving away from each other not due to any forces or properties of the individual pennies = galaxies themselves. Although this might lead one to ask a question such as how is this possible, one should suppress this desire. Why? Well, just because we can ask such questions, doesn't mean that there's an answer that can be expressed in any human language or that we could comprehend the answer if it were expressed by means of some string of symbols. After all, our comprehension is based upon how our minds (brains if you wish) function.

4. Modeling Thinking.

Patterns such as those that appear in expressions (5) and (6) and much more complex ones have been studied by philosophers for thousands of years. Mathematicians began studying such patterns in the middle 1800s and have created the applied mathematics discipline called Mathematical Logic. In the 1930s, the mathematician Tarski abstracted three of the most basic properties relative to how the human mind puts together various strings of symbols and logically deduces new strings of symbols. The mathematical "thing" or operator that's used to express these three relations is called a (finitary) consequence operator (or operation). Each such operator is equivalent to a logic-system, where specific rules of inference are displayed.

Various terms are used for the objects that this operator "operates" upon. Tarski calls them sentences. They are also called readable sentences as well as words or readable words. Whatever they are intuitively, they are sets of finitely long strings of symbols. To be consistent, from this point on and when no confusion will result, the term word will be used for a finitely long string of symbols. Some are very long words such as the string of symbols that starts at the S in the title of this article and ends with the period symbol . that appears at the end of the last reference just after the symbols NY. Due to our modern language constructions, you can also use a special symbol for the spaces used within this somewhat long word. The consequence operator "models" the actual process the brain goes through when it deduces a conclusion by considering words as inputs and what words one produces mentally as outputs. But, is this the same concept as the balloon model? Well, it's almost the same.
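Tarski's defining properties for a consequence operator C acting on sets of words A are commonly stated as: A is contained in C(A); C(C(A)) = C(A); and, for the finitary case, C(A) is the union of the C(F) over the finite subsets F of A. As a toy sketch (the deduction rules here are invented for illustration and are not Herrmann's operator), one can build a tiny C as deductive closure under a fixed rule set and check the first two properties:

```python
# A toy consequence operator: deductive closure under a fixed rule set.
# The rules below are invented purely for illustration.

RULES = {
    frozenset({"X"}): "Y",   # from the word X deduce the word Y
    frozenset({"Y"}): "Z",   # from the word Y deduce the word Z
}

def C(words):
    """Close a set of words under RULES: a toy consequence operator."""
    closure = set(words)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES.items():
            if premises <= closure and conclusion not in closure:
                closure.add(conclusion)
                changed = True
    return frozenset(closure)

A = frozenset({"X"})
assert A <= C(A)         # property 1: A is contained in C(A)
assert C(C(A)) == C(A)   # property 2: C is idempotent
```

The operator knows nothing about the meaning of "X", "Y", or "Z"; it manipulates only the symbol strings, which is precisely the point made in the text about deduction being independent of content.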

You know that a door is unlocked. So you push on it, but it doesn't open; so you immediately pull the handle. Why did you do this? Your brain has gone through a very rapid process and deduced that this is what you should immediately attempt. The same thing happens thousands of times a day. In reality, science doesn't actually know what your brain has done internally although there are various theories. Now there are some individuals who would continue to push the door. For some reason, their brain doesn't process the information in the same way as, say, 99.9% of humankind processes the information. But our human society is built upon the ability for each of us to process this information so that we all, well almost all, arrive at this same simple conclusion. Individuals who can't do this can't survive very long within our society and are usually confined to an institution where they will not harm themselves or others.

Suppose you write down a statement X about a physical process. You have also learned a so-called physical law that applies to this process. For some reason, your brain processes all this information and you conjecture that statement X implies Y. Scientists, usually, will not accept your conjecture in this fashion; they want a derivation or proof that what you conjecture, Y, does follow "logically" from the physical laws and the application of X. What you must do is to show how the members of U can be combined using accepted logical laws so that, at the end of the informal deduction, the final thing you write down is the Y. But, in informal work, you don't actually write down the names of the laws of logic used. You might not even know that such named laws of logic exist.

Since each step in the derivation is small enough, you hope that other scientists will simply "feel" that everything is okay logically. Indeed, it may be impossible to convince all individuals that what you have derived is logically correct since you express the derivation at the logical and experiential level of the audience you are intending to convince AND, usually, everyone in your audience assumes the same unstated presuppositions. That is, you don't express everything. In mathematical logic, mathematicians attempt to model many aspects of how the brain functions but without knowing how it functions biologically. That is, they model, among other things, the logical procedures that don't appear in the informal argument the scientist gives, the procedures that would simply appear to be correct to another individual without any further justification. But, this modeling process isn't exactly the same as the balloon model.

Besides the general consequence operator properties, certain fixed and very simple rules have been discovered that allow anyone who can apply the rules to take the X and the U and, in theory anyway, deduce the Y. Indeed, in some cases, a machine can actually apply the rules and determine whether the X logically implies Y. Different types of logical deduction correspond to different sets of rules. Of course, a certain amount of "thinking" must be done in order to apply the rules. But, the rules are statements written in words and sentences; they aren't biological processes that the brain uses. Thus, there need be no correspondence between these rules and the processes that the brain uses to combine all this stuff together. This is the great difference in modeling the logical processes that take a word from a language and produce another string of symbols. All we know is that we get the same result by application of these rules.

What is important in this model is the pattern of the information to which the logical process is applied and the pattern of the results, among other interesting patterns. Suppressing the actual rules for the logical process is, in this case, equivalent to suppressing the rubber as the stuff for the expanding universe, since we don't know how the brain actually does the deduction. Of course, you can spend your entire life deducing predictions without ever knowing of the existence of these sets of strict logical rules.

Two examples should suffice as to how this model is used to investigate thinking. Suppose that you have two individuals, Bob and Ray. Both Bob and Ray use a word X as a hypothesis, and supposedly both have studied the same set U. Now Bob deduces, in one hour, 35 consequences, conclusions or predictions; the Ys. But Ray deduces, in one hour, 135 consequences, conclusions, etc., and these include Bob's 35. Indeed, every time this mental experiment is done, Ray always deduces Bob's conclusions and many, many more. One might conclude that Ray is a more powerful thinker than Bob. This is exactly the definition that is applied to Ray's deductive processes: Ray's deductive processes are called greater than, or better than, or more powerful than Bob's. We don't know anything else about Bob and Ray's actual thinking processes. But, we model these unknown processes by the number of symbol strings obtained.
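This "more powerful thinker" comparison can be made concrete. The following is only my own minimal sketch, not the article's construction: the rule lists and the names Bob and Ray's conclusions carry are illustrative assumptions. "More powerful" is modeled exactly as the text says, by set inclusion of the conclusions obtained.

```python
# A minimal sketch: "deductive power" measured only by the sets of
# conclusions produced, as in the Bob/Ray example.  The one-step rules
# (premise -> conclusion) are illustrative assumptions.

def conclusions(hypothesis, rules):
    """Close a set of words under one-step rules until nothing new appears."""
    derived = set(hypothesis)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

X = {"X"}
bob_rules = [("X", "Y1"), ("Y1", "Y2")]
ray_rules = bob_rules + [("Y2", "Y3"), ("X", "Y4")]  # Ray reproduces all of Bob's work

bob = conclusions(X, bob_rules)
ray = conclusions(X, ray_rules)

# Ray's process is "more powerful" exactly when his conclusions form a superset.
print(bob <= ray)          # True
print(len(bob), len(ray))  # 3 5
```

Nothing about the biology of either thinker appears anywhere: only the input word and the output words are compared, which is the whole point of the model.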

In the first example, we simply looked at the mental inputs and the outputs as words, but we also have the additional refined process of looking at specific rules of logic. Let's restrict Bob and Ray to the specific written rules for their logical approach. When the rules are applied, one makes a list of the actual rules used. Each time a rule is used, a step must be written down on a piece of paper and the specific rule used must be stated. You give both Bob and Ray the same hypothesis X and you tell each of them that it's possible to deduce, using the rule approach, a conclusion Y. Well, after one hour, Ray has written down 36 steps, with reasons for each step, and his last step is Y. Bob, after one hour, hasn't reached, by application of the rules, the Y. He is "stuck" and has only 8 steps. Indeed, it's known that no one can apply fewer than 30 steps and obtain the result Y. Hence, the fact that Ray could carry out the much longer application of the rules and Bob couldn't, within the allotted time, would indicate that Ray has "stronger" mental powers.

By the way, although I can "think" in pictures any time I wish to do so, most of the time, when I'm conscious of my thinking, I think in words. These words are spoken by what I call my own mental voice; my "reading to myself voice." I assume this is the case with most individuals. If so, then language is, indeed, of exceptional significance.

How do we model the above examples? There are various ways to do this modeling. The most basic is to model only a few words that appear in the symbol strings, such as the words "and, or, not, implies" by means of special symbols and use letters, such as P, Q, R, X, Y, etc. to represent or correspond to the other portions of the symbol strings. We study the patterns produced by these new symbols when the logical rules are applied and relations between these new symbols.
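As a concrete illustration of this most basic modeling step, here is a toy sketch of my own (not the article's formalism): only the word "implies" is given a special treatment, letters such as P, Q, R stand for the remaining symbol strings, and the single logical rule applied is modus ponens.

```python
# A toy illustration: only "implies" is modeled specially; P, Q, R stand
# for arbitrary symbol strings.  The one rule applied is modus ponens:
# from P and ("implies", P, Q), produce Q.

def modus_ponens(words):
    """Repeatedly apply modus ponens until no new word is produced."""
    derived = set(words)
    changed = True
    while changed:
        changed = False
        for w in list(derived):
            if isinstance(w, tuple) and w[0] == "implies" and w[1] in derived:
                if w[2] not in derived:
                    derived.add(w[2])
                    changed = True
    return derived

premises = {"P", ("implies", "P", "Q"), ("implies", "Q", "R")}
print(modus_ponens(premises) >= {"P", "Q", "R"})  # True
```

The interest lies in the pattern: from P and the two implications, the strings Q and R are produced, regardless of what actual sentences P, Q and R abbreviate.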

For consequence operators, the basic approach is a weak set-theory, where we correspond a specific set of words to another possibly different set of words using three consequence operator defining properties. These properties are the common features of all known forms of logical deduction used by science-communities. Your author has shown that these operators correspond to actual rules of inference (a logic-system) and conversely, and that to obtain one from the other a very simple, mentally applied algorithm is used. This is the most basic algorithm needed to write a "formal proof" using the rules individuals learn when they study mathematical logic. Of course, to count things such as the number of steps in a logical derivation, the number of specific symbols in a given word and things like that, basic number theory is used. Number theory is the most empirically consistent of the mathematical theories for the "numbers" used today and has the longest history. But, for the GID, GD-world and GGU-models, an additional step is used, a step very loosely associated with a Gödel modeling notion.
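Two of Tarski's defining properties can be checked mechanically. In the standard statement, a consequence operator C on sets of words satisfies (1) A is a subset of C(A) and (2) C(C(A)) = C(A); the finitary version adds that C is determined by its values on finite subsets. The sketch below (the specific inference rules are my own illustrative assumptions) builds C as the closure under a logic-system and verifies (1) and (2) on a sample set.

```python
# Closure under a finite list of inference rules (a "logic-system") always
# yields a consequence operator: (1) A <= C(A) and (2) C(C(A)) == C(A).
# The rules here are illustrative assumptions, each a pair
# (finite set of premises, conclusion).

RULES = [({"p"}, "q"), ({"p", "q"}, "r")]

def C(words):
    """The consequence operator generated by RULES."""
    derived = set(words)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

A = {"p"}
print(A <= C(A))        # property (1): True
print(C(C(A)) == C(A))  # property (2): True
```

Passing from the rule list RULES to the operator C, and back again, is the "simple, mentally applied algorithm" the text mentions: the rules generate the operator, and the operator's one-step productions recover rules.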

First, each and every distinct word is given a unique natural number name. Then a model is created so that each string of symbols is modeled via the usual method of expressing a symbol string one portion at a time, say by writing it from left-to-right using single symbols or combinations of symbols. For example, the word "atom" is considered as constructed in the forms a t o m, at o m, a to m, a t om, at om, ato m, a tom, atom. A given word is thus composed of all of the possible strings of symbols that can be put together in this fashion to produce the given string. The use of number theory yields what is technically called a numerical coding. This allows the modeling to be structured by entities from a very empirically consistent mathematical theory. Further, the process allows for the counting of all the symbols that make up a specific word, and a specific mathematical entity allows the mathematician to study the behavior of a particular symbol located at a particular place within a word. Of course, computers do the same thing for the single members of a symbolic alphabet. These are the ASCII code numbers.
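The eight forms for "atom" are simply all the ways of cutting the word into contiguous left-to-right blocks; a word of n symbols has 2^(n-1) such constructions. A short sketch (mine, for illustration) enumerates them:

```python
# Enumerate every way of writing a word as a left-to-right sequence of
# contiguous blocks.  A word of n symbols has 2**(n-1) such constructions,
# so "atom" has exactly 8.

from itertools import combinations

def constructions(word):
    n = len(word)
    result = []
    for k in range(n):                       # k = number of cut positions
        for cuts in combinations(range(1, n), k):
            pieces, prev = [], 0
            for c in list(cuts) + [n]:
                pieces.append(word[prev:c])  # one contiguous block
                prev = c
            result.append(tuple(pieces))
    return result

forms = constructions("atom")
print(len(forms))             # 8
print(("a", "tom") in forms)  # True
```

In the model, the word "atom" is identified with the full collection of these eight constructions, not with any single one of them.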

The logical processes are modeled by relations between the corresponding numerical codes, where the eight constructions that yield a word such as "atom" are modeled. Suppose that the numerical codes for "a, t, o, m" are 1, 2, 3, 4, respectively, and the code for "atom" is 15. Then the "(partial) sequences" f = (0,1), (1,2), (2,3), (3,4) and F = (0,15) represent two ways of constructing the word "atom" as follows: you write the symbols represented by (0,1), (1,2), (2,3), (3,4) from left-to-right in the 0, 1, 2, 3 order. This yields "atom". But there is another representation, (0,15), which also yields "atom". There are six other such "sequences" that yield "atom". (Writing symbols from left-to-right or from right-to-left in a special way is a fundamental requirement in mathematical logic. For the actual construction used, the numbering is reversed.) You construct all 8 partial sequences, then put them into a finite set X, which represents the single word "atom."
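The partial sequences f and F can be written out directly. The codes 1, 2, 3, 4 for "a", "t", "o", "m" and 15 for "atom" come from the text; the codes I assign to the other blocks ("at", "to", and so on) are purely illustrative, since any unique numbering serves.

```python
# Build the "(partial) sequences" of the text: each block of a construction
# is paired with its left-to-right position and its numerical code.
# Codes 1,2,3,4 and 15 follow the text; the rest are illustrative.

CODE = {"a": 1, "t": 2, "o": 3, "m": 4,
        "at": 5, "to": 6, "om": 7, "ato": 8, "tom": 9, "atom": 15}

def partial_sequence(blocks):
    """Pair each block, in order, with (position, code)."""
    return tuple((i, CODE[b]) for i, b in enumerate(blocks))

f = partial_sequence(("a", "t", "o", "m"))
F = partial_sequence(("atom",))
print(f)  # ((0, 1), (1, 2), (2, 3), (3, 4))
print(F)  # ((0, 15),)
```

Collecting the partial sequences for all eight constructions into one finite set gives the set X that represents the single word "atom" within the numerical model.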

Mostly, the actual mathematics uses the theory of the natural numbers as its foundation. The consequence operators are represented in the mathematical model by special collections of sets such as X. For example, suppose that the hypothesis is X and Y is a consequence operator conclusion. Then these two sets are related by the operator. That is, the relation contains ({X},{Y}). To determine what "Y" means, one goes backwards using the numerical codes and determines the actual word to which it corresponds.

This special numerical coding is the first step used to obtain the standard model for the GD-world, GGU and GID models. The standard model includes all of the additional properties that are obtained by means of mathematical reasoning. There's, however, one more very important concept. If we stayed with language and only with language concepts, then what we have is a model for linguistic behavior, a model for how human beings think with strings of symbols and an immediate correspondence of a symbol string to the behavior that the string may be describing. This process is generalized to include representations for all forms of (human) sensory data. Now suppose that a word is a description for an event. Are there physical notions that can be modeled by relations between corresponding symbol strings?

5. Nature and Logic.

The meaning of the last statement in section 2 should now be somewhat more easily understood. We don't need to know how the brain biologically puts information together in order to proceed from X to Y. However, we do know of a set of written rules that will yield the same result. From my view point, we also don't know how a secular Nature puts together very specific members of U and only those and takes an event described by X and yields an event described by Y. Does Nature simply say, "Event E(1) you will, indeed, you must follow these and only these members of U and nothing else and event E(2) must be the result"? It will serve no further purpose to discuss this aspect of Nature, at present. Most scientists simply assume that whatever "forces" entities to follow certain specific members of U is whatever "forces" entities to follow certain specific members of U.

Let's illustrate what comes next by the art of cartoon animation. An animator has decided that he wants to make a cartoon of flying pigs. In our immediate physical environment, pigs don't seem to fly. The animator has no idea as to how he would adjust physical laws so that pigs could fly. He just puts his plan into action and starts to draw hundreds and hundreds of images of how he believes the pigs would appear as they fly over a lake. When one flips through the drawings at just the correct speed, then, due to persistence of vision, the pigs actually appear to fly, and their flight doesn't seem chaotic but is rather smooth or, should we say, logical in character. If he wants to reproduce these drawings on motion picture film, using the 30 frames per second mode, he needs 300 drawings for 10 seconds of pig flight. Technically, each drawing might be called a frozen frame or freeze frame (like the "still" or "pause" control on a VCR or a DVD device), or simply a frame.

Now each drawing can be made to correspond to a specific string of symbols, a word, that is called a frozen segment in order to correlate it to the freeze frame idea. The entire set of all such symbols can be made to correspond to a special single string of symbols, a special standard word. What is interesting is that there's a very simple logical process A, a set of actual rules we can all apply (I hope), and a specific logic-system V(300), that yields every single pig flight "image" (i.e. the symbol string that corresponds to an image) in the exact same order needed for the animation process to yield the flying pigs. The combined process A(V(300)) models the "flipping through process" when it is applied to the single element, the stack of drawings. In other words, one could say that the combined process A(V(300)) "forces" the pigs to appear to fly when this process is applied.
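The combined process A(V(300)) can be sketched very simply. The construction below is my own illustration, not the article's formal definition: the logic-system V(n) is taken as a chain of rules, each frozen segment implying the next, and A is the process that applies the rules in order, reproducing the "flipping through" of the stack.

```python
# A minimal sketch: the logic-system V(n) is a chain of rules
# frame_i -> frame_(i+1); the process A applied to the first frozen
# segment deduces every frame in exactly the order the animation needs.
# The frame names are illustrative.

def V(n):
    """Rules of inference linking n frozen frames in sequence."""
    return [(f"frame_{i}", f"frame_{i+1}") for i in range(n - 1)]

def A(rules, start="frame_0"):
    """Apply the rules one after another: the 'flipping through' process."""
    order = [start]
    current = start
    for premise, conclusion in rules:
        if premise == current:
            order.append(conclusion)
            current = conclusion
    return order

frames = A(V(300))
print(len(frames))   # 300
print(frames[:2])    # ['frame_0', 'frame_1']
```

Nothing in A "knows" anything about pigs or flight; the apparent flight is forced purely by the order in which the rules yield the frames.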

But, clearly, 300 drawings are not enough for a sustained flight of any significance. So, he makes four more sets of drawings: V(350), V(400), V(600) and V(1000). These are each placed in a tray. The symbolic form "V(350) and V(400) and V(600) and V(1000)" can be considered as a "set of" V(n). Each drawing, each image, is a member of a language L. The set "V(350) and V(400) and V(600) and V(1000)" = W represents the tray. There is a logic-system S and a logical process A that, when applied to W, yields each of the V(n) individually. This process replicates the process we use to view the tray as a single entity and then view each object in the tray individually. The next step is to select a specific V(n).

The animator selects from the tray the stack of drawings V(300) and shows his animation to another animator who will continue the flying pigs cartoon and asks, "Can you figure out how I've altered the physical laws so that pigs now fly?" Determining this might take a great deal of scientific experimentation with the animator's flying pigs. But, the facts are that the animator did not modify any physical laws. He just imagined the scenario and reproduced his thoughts. Indeed, for the observer, how the drawings are made and stacked in piles is not significant. An observer can assume that an intelligent agent constructs the stacks and that is all that is necessary. This is done all the time in science with the notion of the "primitive," where one does not consider a primitive as constructed from more elementary members, but rather simply assigns properties to it.

A basic aspect of the scientific method is the assumption that there's a describable cause for each alteration in the behavior of a physical-system. Secular science attempts to discover the physical processes that seem to lead to such alterations and using such processes the scientist can create a series of freeze frames that will yield the moment-by-moment behavioral changes where each change is an actual event. From our previous discussion, the collection U can be considered as such a "cause." One must also apply this fixed correspondence between each frozen segment and a specific event. Since this correspondence is always assumed as part of the process, the mentioning of it is often suppressed. But, there's an additional difficulty. How refined, how many freeze frames, are needed to express properly the actual alterations in physical-system behavior? For a flying pig and motion picture film reproduction, you only need one drawing per film frame. Is this all you need for a refined description for how physical-systems alter their behavior?

Relative to the method used by Richard Feynman to describe how electromagnetic radiation interacts with physical objects, as mentioned, he writes, ". . . while I am describing to you how Nature works, you won't understand why Nature works that way. But you see, nobody understands that. I can't explain why Nature behaves in this peculiar way" [Feynman, 1985, p. 10]. Relative to one implication, I must disagree with Feynman.

Scientists do not know how Nature works, in this case. Feynman has only "modeled" Nature by little probability arrows on a piece of paper, stop watches, rules for combining arrows, rules for calculating probabilities, and the so-called paths taken by photons. He has also applied logical deduction. I respectfully submit that Nature probably doesn't do geometric diagramming with probability arrows, Nature probably doesn't do the mathematics and other such human paper and pencil activities. His geometric approach is, I submit, but a model for how Nature works and nothing more AND as with models like the balloon model, except for the behavior between entities, many aspects of the Feynman approach need not correspond directly to anything in objective reality. Indeed, certain predicted aspects of the GGU-model can replace various aspects of these Feynman diagrams.

The science of the invisible microscopic world is the science of models as I have defined them. Scientists don't really know how or why Nature does something in many, many cases. Because of all of these uses of models, it shouldn't be difficult to comprehend the GGU-model. But what about the paths of motion for photons? Considering the mathematics used, you can't use a finite collection of freeze frames as a complete refined description for such behavior. Thus, in such cases, there doesn't seem to exist an actual standard V(n) such that, when A is applied, it yields the refined alterations in photon behavior that produce its path of motion, if the motion is actually "continuous," which it need not be. Of course, one could allow infinitely long standard words, but other methods are used that predict that there logically exist "words" that do essentially yield all of the refined photon path behavior. Indeed, they exist, in the same manner, for all paths used in quantum dynamics.

When they exist, words such as W might be described as the images, or building plans for how physical-systems will evolve since it's the force-like process that would put them together to form the actual development. After the concepts associated with "W" were discovered, there was also discovered the following quotation attributed to the exceptionally brilliant scientist Hermann Weyl.

Is it conceivable that immaterial factors having the nature of images, ideas, "building plans" also intervene in the evolution of the world as a whole?

As will be seen in the remainder of this article, such immaterial factors are indeed conceivable.

Finally, logic-system S, when coupled with A, yields a consequence operator. It's a simple matter to show that Tarski's three properties for the consequence operators, expressed in event-language, and hundreds of other properties seem to "model the behavior" of any evolving physical-system. To accommodate the four possible modes for universe development and for more refined details, S and A are replaced with the predicted *S and *A. The properties for *S and *A are similar to those of S and A, respectively, but differ in some essential features.

It is rather easy to show that the accepted physical laws or processes and the logical procedures used by members of any science-community in order to predict physical-system behavior generate specific consequence operators. Of course, this is the entire idea behind theoretical science, and much of science in general, that "logical" behavior governs the development of any physical-system. Whether or not such logical behavior is simply a requirement for human perception and comprehension, or Nature actually behaves in this fashion is a philosophic question, the discussion of which may depend upon considerations that need not be scientific in character.

6. The Ultra-stuff.

Although not originally constructed in this manner, the GGU-model uses processes that satisfy modeled human mental processes, processes we use to construct physical objects. The results of these processes are describable and observable. The results duplicate how physical-systems behave. These processes are embedded into a mathematical structure which automatically extends them. The GID-model is the "intelligent agent" interpretation. It is after this embedding that the ultra-stuff appears. This "ultra-stuff" is interpreted for the GID-model as signatures for a higher-intelligence. But, what is this ultra-stuff?

The mathematical theory that correlates directly to the model for "thinking" in section 4 is also called a mathematical structure. In the 1960s, Abraham Robinson discovered a general method to expand such mathematical structures so that the new structure would essentially include the old structure and often a lot more. Now this is a general approach. No new mathematical axioms are used, and no logical or physical presuppositions are added. The new structure obtained is usually one that can be simply termed as a nonstandard model. It's within this new structure that the ultra-stuff is predicted to exist mathematically. The word mathematically is very important here. It simply means that we have mathematical proofs using mathematical reasoning that state that "For each logic-system S, there exists an ultra-object *S with such and such properties." Now recall that the properties of the original S are really properties about collections of natural numbers. BUT, when interpreting the linguistic properties of *S, the numerical codes are suppressed, just like the rubber in the balloon model.

The same suppression of the numerical codings is used for the ultra-stuff associated with the material discussed in section 5, with an additional suppression. Objects such as *S have linguistic-like properties similar to those of the S. These *-objects, and others, can be interpreted using similar linguistic terms. This linguistic correspondence is suppressed in the GGU-model and the physical-like behavior being described by the "event, force-like and object" interpretation is used. Although *S is "intelligently designed," it can be considered as a physical-like primitive to which *A is applied and the individual *V(n) produced. Further, each of these *V(n), although designed, can be considered as a primitive. Scientifically, this occurs in Quantum Field Theory. The primitive fields yield primitive atomic entities.

Each altered frozen frame is interpreted as a (physical) event. The A (or *A) is the force-like process that ultimately yields the events in their proper order. There are a few more physical-like interpretations associated with the GGU-model that correspond to other suppressed linguistic concepts. Thus, for the GGU-model, the actual process starts with mathematical relations between "numerically" coded things that behave like strings of symbols. Then the linguistic interpretation is suppressed and the physical-like interpretation brought to the forefront. Such an interpretation scheme isn't foreign to theoretical science, since this is done within the subject of quantum logic and was done, in the early 1970s, in an attempt to answer the pre-geometry cosmological problem.

Often the same terms are used for the linguistic and physical-like interpretations. But, the context of a discussion always implies which of these two interpretations is to be used.

It's a remarkable fact that mathematicians are able to "see" mentally the relations between both the original S and the new *S. Indeed, the entire idea of Nonstandard Analysis is that the mathematician is exterior to these processes and can make such observations. Unfortunately, there were no linguistic-like terms that could be used to discuss such a relation for GD-model comparisons and no physical-like terms for GGU-model comparisons. New terms had to be invented and this is difficult to do.

The difficulty in selecting terms is that they should be mentally related to the previous terms, yet different from these terms and, if possible, not be the same as any other scientific terms. The prefix "super" was chosen first. But, it was discovered that another scientist had also chosen this term for an entirely different purpose in discussing logical operators. Hence, the prefix "ultra" is employed and you will see exactly how it is used. It should be pointed out that within the actual mathematics itself, the "ultra" term need not be the actual general term used. Recall that we have the original interpretation for such linguistic terms as a string of symbols, the word W, and A, a mental processor, etc. What we now have is an additional interpretation for similar but different objects within the nonstandard model that carry an "ultra" prefix. But what do I mean by similar?

One of the most significant properties of this mathematical approach is that if the properties of S are characterized mathematically, then *S has these same properties with respect to standard and other types of W, if one is careful how one expresses these properties. But, *S has new and startling properties that S doesn't have under the new interpretation. Moreover, these new properties are predicted by the nonstandard model all by itself. All one has to do is to find them. Nothing is ever added by the mathematician except for the names given to these new objects.

It's very important, as these relations are discussed, that the concept of the model always be kept in mind. The discussion is only about behavior and not about the stuff of which the model is composed. The discussion is about new objects that are predicted logically by mathematical means. At no time does this ever imply that these new objects exist in objective reality. But, it's scientifically rational to assume they exist. Such an assumption is much more rational in character than many of the assumptions within theoretical science, since this assumption is based entirely upon observable behavior as mathematically modeled and upon prediction. Within subatomic physics and cosmology, assumptions may only be partially based upon mathematical structures. In these sciences, the belief that certain objects or processes exist in objective reality is a philosophic assumption that cannot be directly verified. But the GD-world, GID-model and GGU-models are based upon observable behavior that predicts unobservable behavior that is indirectly verified. For this reason, a stronger form of the scientific method is being applied than is applied for much of today's physical science.

7. The G and GD-world Model.

What is an intelligent life-form? In this work, an intelligent life-form is defined as any entity within the physical universe that (1) did, does or will communicate by strings of symbols in the manners discussed previously and that (2) uses mental deductive processes that can be modeled mathematically by methods similar to those previously discussed. In order to correspond to later work, intelligent life-forms, as defined, will also be called standard intelligent life-forms. Note that the methods used in this analysis are methods that can be applied at any time and to any such intelligent life-form. This should always be understood. What has been developed uses the simple methods used to compare the mental power and other such concepts for various intelligent life-forms; but, as will be seen, the conclusions refer to ANY standard intelligent life-form at any moment during its development, not just humankind. Of course, other such life-forms need not exist.

The Grundlegend model (G-model) is a rather different model from the GD-world, GGU and GID-models. This term is used for the actual mathematical theory termed a structure. The term "model" is a technical term from the abstract mathematical subject "Model Theory." The GD-world (or model) is the first application of the G-model and is a model, as described above, for behavior. This special application uses words to discuss what one might mean by one attribute being better than or greater than a similar attribute. The method used is a form of mental deduction not previously analyzed, called adjective reasoning. If you consider the word "good" as a measure of a defined set of behaviors, then we tend to consider the phrase "very good" as somehow or other better than just being good. The simple deduction used is that "very good" isn't only stronger than "good" but implies "good" as well. There's also the concept of "reasoning from the perfect." If one defines "perfect" as meaning a certain collection of attributes where each is of a certain strength, then such reasoning will take the word "perfect" and deduce each of these attributes. It indicates that the collection of attributes is complete, that is, no attribute can be added to it.

These two types of mental processes can be modeled mathematically. That is, they are modeled by a standard mathematical structure and this structure becomes the standard model. Then everything is embedded into another mathematical structure and we get a nonstandard model. (Of course, the word "nonstandard" doesn't mean that there's something wrong with the model. This is but a technical term. This model is a "sub-model" in a model for set-theory.)
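The two reasoning forms just described can be sketched in a few lines. This is my own toy illustration, not the formal standard model: the particular strength ordering and the particular attribute set are illustrative assumptions.

```python
# A toy sketch of the two reasoning forms.  "Adjective reasoning" deduces
# the weaker form from the stronger; "reasoning from the perfect" deduces
# each member of a complete, fixed attribute set.  Both the ordering and
# the attribute set below are illustrative assumptions.

STRONGER = {"very good": "good"}      # "very good" implies "good"
PERFECT = {"good", "wise", "just"}    # complete: nothing can be added

def adjective_deduce(phrase):
    """Deduce the phrase together with every weaker form it implies."""
    conclusions = {phrase}
    while phrase in STRONGER:         # walk down the strength chain
        phrase = STRONGER[phrase]
        conclusions.add(phrase)
    return conclusions

print("good" in adjective_deduce("very good"))  # True: "very good" implies "good"
print("wise" in PERFECT)                        # True: "perfect" yields each attribute
```

The point of the model is only the deductive behavior: the stronger phrase yields the weaker one, and "perfect" yields each attribute of its fixed, complete collection.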

Remember that, technically, entities in the GD-world, GGU and GID models are mathematical objects associated with the natural numbers. But, the relations and terms are interpreted consistently in terms of certain general aspects associated with linguistics, logical discourse and behavior for the GD-world and GID models, and physical-like terms for the GGU-model. The GID-model is translated (interpreted) solely in terms of intelligent agency via the "signatures" being displayed by the generating operators. (There are many papers on this website that simplify the GID-model notions and these will not be repeated here.)

For the GGU-model, it's the relations between translated terms that give rational descriptions for behavior. The simplest way to illustrate this is with a simple example. Let P denote the set of standard words, as defined previously, used throughout the world at the very moment you read this sentence. Due to how science-communities describe notions from their disciplines, P is technically not finite. Thus P is the standard language used by, we hope, standard intelligent life-forms. One great confusion is that there is another language, "the metalanguage," used to discuss P. This is like the language one uses to discuss how to use the "hypertext markup language" (html). It contains the html symbols used. (The language analyzed, P, can be written in one color and the metalanguage in another.) The mathematical model predicts, via mathematical reasoning, that since P exists, then rationally there exists another set of stuff *P, generally called a nonstandard language, with certain properties. Whether *P exists in some "reality" is a philosophic stance, but indirect evidence indicates that its existence is a definite possibility.

First, members of P are members of *P. This is also stated as "P is a subset of *P." But what properties does *P have? Well, members of *P have many of the same properties as do members of P. Indeed, if certain properties for P are stated in a fixed but special way called a first-order statement, then certain members of *P have these "same" properties. For example, given any natural number, say  n = 1,000, there's in P a word with  n = 1,000  symbols counting left-to-right, with repetition. This property can be more generally stated as follows: given any natural number, there exists in P a word with length equal to that natural number. Further, there's an intuitive relation called the "length" relation that assigns to every word the number of symbols, with repetition, contained in that string of symbols as you might count them from left-to-right. The above statement gives a property for this length relation.

Mathematically, the length relation is also termed a function or map or mapping. What this length function does is to take a given member of P, call it  q, and assign to q its length  n. This might be denoted as L(q) = n. More formally, the above property says that

(A) given any natural number  n, there's some q in P such that  L(q) = n.
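Property (A) and the length function can be sketched in a few lines of Python. The two-symbol alphabet and the witness-building helper are assumptions made only for this illustration; the article's P is, of course, the worldwide set of standard words.

```python
# A minimal sketch of the length function L and property (A).
# ALPHABET and word_of_length are illustrative assumptions.
ALPHABET = "ab"

def L(q: str) -> int:
    """The length relation: the number of symbols in the word q,
    counted left to right, with repetition."""
    return len(q)

def word_of_length(n: int) -> str:
    """Property (A): for any natural number n, exhibit some q in P
    with L(q) = n."""
    return ALPHABET[0] * n

print(L(word_of_length(1000)))  # 1000
```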

To understand properly how the length function behaves, you might write down a few statements about counting the number of symbols from left-to-right, such as: if there's a word of length 1,000, then adjoining a symbol at the rightmost end will give a word of length 1,001. (Notice that mathematics has as part of its basis human experience. We know what it means to "adjoin a symbol to the rightmost letter of a finite string of symbols.") Now does *P have these same properties? The answer is yes, if you translate these properties correctly. For example, one translation would formally look like the following:

(*A) given any *natural number  n, there's some p in *P such that  *L(p) = n.

In mathematics, it's said that (*A) "holds" in our nonstandard model. This notion of "holds" has two meanings. First, a mathematical proof has (*A) as a conclusion. Then, using a specific definition, the statement satisfies, in a set-theoretic manner, relations and objects that are members of the nonstandard "model." (Knowing this definition is not necessary for this article.) How would this formal statement with the "star" notation be translated into a statement about symbol strings, about words?

In technical work, the "star" takes on various names such as "hyper" or simply "star." But if the starred object had a name prior to its being starred, then that name is also included. [The reason for this will be illustrated below. Further, for certain objects, the "hyper" is replaced by the prefix "ultra."] Thus "*natural numbers" might be read "hypernatural numbers." In this example, there are two new entities to examine, the hypernatural numbers and the *L.

How do the hypernatural numbers behave? Well, the same translation method is used with the properties of the natural numbers. Please note that the interest is in behavior patterns. Also, to look at some interesting properties, we adjoin the concepts of the addition of two numbers and  <  (less than) to the natural number basic properties. Thus, we know that given any natural number  n, then  n + 1  is a natural number and further  n < n+1. This property translates to the following: given any hypernatural number  n, then  n *+ 1  is a hypernatural number and further  n *< n *+ 1. Note that the exact same behavior pattern is being displayed and holds mathematically for the hypernatural numbers. This is the reason the standard name is retained as part of the nonstandard name. (Note: Technically, if n is considered as a specific natural number, then the inequality would be written as  *n *< *n *+ *1. However, by a special construction, the *n and *1 are considered as representing the numbers n and 1, respectively.)

We also have the apparently new objects  *+ and  *<. But to learn how these new objects behave, one writes down in the required special way the properties for  +  and  <  and then interprets these properties in terms of the new objects  *+  and  *<. Again the exact same behavior patterns appear.

By the way, the process of translating into the star notation is called "star-transfer" or simply "transfer." Now if the set of hypernatural numbers behaved exactly like the set of natural numbers in all respects, then hypernatural numbers would be of no additional interest. But the fact is that in certain respects the hypernatural numbers don't behave exactly like the natural numbers, and it's only because the mathematician can stand "outside" or "external" to this star-transfer process and make comparisons that it's known that the behavior is somewhat different, or that the hypernatural numbers behave in a nonstandard way. That is, they have properties that the natural numbers don't have.

It's not the purpose of this article to discuss in-depth the mathematical differences between the hypernatural numbers and the natural numbers. These few mentioned differences are just being used as illustrations. However, when a comparison is made, it's discovered that the relation  <  and the relation  *<  have different properties. For any set of natural numbers, <  has a special property. But, for certain sets of hypernatural numbers, *<  does not have this same special property. It is the "well-ordered" property. One of the conventions used in nonstandard analysis is to consider <  as a restriction of *<  to the natural numbers, which, again by convention, can be considered as a subset of the hypernatural numbers. Thus the * on *<  is omitted. This can be confusing to those who do not know about such conventions. Thus <, when restricted to the natural numbers, has this property, but for the hypernatural numbers it does not. For the GID-model, what about the more interesting, at the least for this investigation, length function  *L?
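The well-ordering difference can be stated precisely. The following display is a standard fact of nonstandard analysis, added here only as an illustration: every nonempty set of natural numbers has a least member, yet the nonempty set of infinite hypernatural numbers does not.

```latex
\forall A \subseteq \mathbb{N}\ \bigl(A \neq \varnothing \;\Rightarrow\; \exists m \in A\ \forall n \in A\ (m \le n)\bigr),
\qquad\text{yet}\qquad
{}^{*}\mathbb{N} \setminus \mathbb{N} \neq \varnothing \ \text{has no least member.}
```

Indeed, if a hypernatural number λ is infinite, then so is λ − 1, so no least infinite hypernatural number can exist.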

It's customary to use a positive language when discussing objects that exist mathematically in the sense that the statement being made is a conclusion of a mathematical proof. This customary procedure will be followed throughout the remainder of this article. This is illustrated by the next statement. First, it's important to note that there are (i.e. there exist mathematically) pure nonstandard words in *P and these pure nonstandard words are not members of P. Since this application of these mathematical techniques is relative to linguistic concepts, some of the more significant of these pure nonstandard *words are, in general, called ultrawords. Finite sets of these nonstandard words have the same basic properties as the words that appear in P. But, they have other properties that they don't share with any member of P.

A simple analysis shows that some of these nonstandard words have lengths that are greater than, in the usual sense, any standard word. A deeper analysis shows the remarkable fact that some of the ultrawords actually contain objects that would correspond to a hyperalphabet. That is, objects that behave like the alphabet from which the words in P were constructed but which can't be members of any standard alphabet. If you take from P only those strings of symbols that, intuitively, "have meaning" and characterize what one might define by the phrase "have meaning" for a standard intelligent life-form, then there are ultrawords that have ultrameaning and they would, in general, behave in a similar fashion. Although this article is designed only to give a basic illustration of how these models behave, it's useful to illustrate one of the greatest differences between ultrawords and standard words.

As mentioned in section 6, symbols such as S and A, the one used next, can be used as part of a mathematical model for how standard intelligent life-forms think. Further, for a given logic-system C, the ultra-logic-system *C has the exact same *-transferred behavior patterns as does C. But, *C actually differs greatly in its overall behavior. Just a few examples will suffice. Let w be a standard meaningful word upon which the logical process A(C) is applied. The logical process being modeled by this combination is the simplest one that human beings apply thousands of times a day, the propositional logic. These are the same logical patterns used to construct modern computer microprocessors.

Well, *A(*C) also applies to w. Comparing the two results of these processes, we find that every standard conclusion obtained when A(C) is applied to w is also obtained when *A(*C) is applied to w. But numerous pure nonstandard words are also obtained. Thus it's ALWAYS the case that *A(*C) is more powerful than A(C) under our definition of such a concept.

Analyzing any of our measures for the strengths of a logic-system C leads to the conclusion that *A(*C) is stronger than A(C) in each and every case. Further, there are *words to which *A(*C) can be applied, but A(C) can't be applied. (As mentioned, ultrawords are special *words.) It's also interesting that when *C is applied to certain *words, then standard words may be part of the "logical" conclusions so obtained. These results and many more generate the GD and GID models. Finally, it's very important to note again that the significant GD-model results are all PREDICTED and they can be applied to any discipline that might need such scientifically obtained conclusions.
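The idea of a logical process that, applied to a set of words, yields all of its conclusions can be sketched with a toy deduction operator. Representing implications as tuples and closing under modus ponens is an assumption made only for this sketch; it is not the article's formal A(C), though it shares the characteristic behavior of a consequence operator.

```python
# Toy consequence operator: close a set of statements under modus
# ponens.  Plain statements are strings; an implication p -> q is
# the tuple ("imp", p, q).  This representation is an illustrative
# assumption.
def consequence(premises):
    """Return the deductive closure of `premises` under modus ponens."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for s in list(known):
            # if "imp" p q is known and p is known, conclude q
            if isinstance(s, tuple) and s[0] == "imp" and s[1] in known:
                if s[2] not in known:
                    known.add(s[2])
                    changed = True
    return known

X = {"w", ("imp", "w", "p"), ("imp", "p", "q")}
closure = consequence(X)
```

Applied to X, the operator returns a set that contains X and is a fixed point: applying it again adds nothing, which is the idempotence expected of a consequence operator.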

8. The GGU-model.

Approximately 90 years ago, Einstein introduced the idea of the energy elements (photons) as an "imaginary" physical entity that carried energy. It carries the light (electromagnetic radiation) energy among other properties. How does this object behave? Can we truly relate this to anything within our human experience? Is a photon a particle or a manifestation of a wave, a wave produced within a medium? Thomas Barnes (and many others) states that "Light is a wave." [Barnes, 1983, p. 59] Nobel Prize winner Feynman states, "The first important feature about light is that it appears to be particles. . ." [Feynman, 1985, p. 36] Indeed, if one doesn't assume that it behaves like a particle, one would probably have a difficult time passing a modern physics course. But, does a photon actually behave like a particle?

What is a particle? Usually, it's a very small material object. If this is the case, then a photon doesn't behave like anything that we have ever experienced within our material world. Feynman's major contribution to physical science is in the area of quantum electrodynamics (QED) and this theory uses as its basic entity the photon. But Feynman states, "The theory of quantum electrodynamics describes Nature as absurd from the point of view of common sense." [Feynman, 1985, p. 10] My dictionary states that absurd means "clearly untrue or unreasonable." I suppose that Feynman means "unreasonable" in the sense that the behavior being described is actually counter to our experiences. Of course, it cannot be contrary to a logically obtained theory since if it were QED would not exist.

When a photon interacts with an assumed material object such as an electron we are told that, "The laws of conservation of momentum and conservation of energy are then applied to this interaction as they would be to the collision of one billiard ball with another." [Barnes, 1983, p. 66] But a photon doesn't behave like a billiard ball since the first billiard ball doesn't vanish after interaction while the specific photon ceases to display its presence after interaction. Further, the photon doesn't have a mass. Depending upon how one expresses the relation, the actual energy and momentum appear to depend upon the frequency or wave-length of the photon. Indeed, the velocity component of the photon isn't transferred to the electron except that the product of the frequency and wave-length is the velocity of the photon.

Since the entity described as a photon seems to vanish, there's another way to describe this interaction. Duff states that the electron "absorbs" the photon and, when an electron accelerates, it also "emits" photons [Duff, 1986, p. 26]. But an electron is supposed to have no internal structure. How does something with no internal structure do such things as absorb or emit a photon? (Of course, as one would expect, there are rather imaginary ways to give it an internal structure, where a point electron is surrounded by a "cloud" of photons.) I mentioned that a photon has no mass, but it's affected by a gravitational field as if it had some mass. Every particle of matter that has ever been observed to be ejected from some source will add its velocity to the velocity of the source from which it was ejected. But, for our physical world, this isn't the case with the photon.

Feynman calls the entire theory he uses "crazy" and "strange," and it's clearly distinct from anything that humans have ever experienced within their observed environment. But, Feynman was able to introduce "picture thinking" into this theory, the Feynman diagrams. So, something that doesn't follow any of our human experiences does follow, according to Feynman, a set of geometric diagrams. This means that individuals who wish to apply this theory to predict behavior must learn all the rules and procedures as specifically stated, apply a fixed scientific logic and not give any further thought as to the obvious "strange" behavior presented or the complete lack of any experiential intuition. Further, don't ask certain questions. Just do as you are told to do. But, why have I brought up this "absurd" theory as a major illustration?

(Note: The metamorphic-anamorphosis model (MA-model) forms a portion of the GGU-model that deals with special and "sudden" alterations in physical components or behavior. In this article, the term "subparticle" was previously used. To prevent incorrect mental images as to models for subparticles, the term "properton" replaces the term "subparticle." Without visualizing, a properton is an entity characterized only by a list of properties.)

The General Grand Unification Model (GGU-model) has a few things in common with the basic philosophy of science used in the theory of quantum electrodynamics. The fact is that the GGU-model describes the behavior of new physical-like entities, entities that don't follow human experiential intuition as physical entities but do follow human linguistic experiences. If one can learn how photons are supposed to behave within quantum electrodynamics, then it shouldn't be difficult to learn how the GGU-model behaves. If one accepts the strange, absurd, or crazy behavior of photons as described within quantum electrodynamics, then it shouldn't be difficult to accept the behavior of ultrawords, ultralogics, ultra-logic-systems and propertons within the GGU-model without having any further in-depth or additional comprehension. Indeed, we have more knowledge about such things as an ultraword or ultra-logic-system than we have about photons since their internal structure can be analyzed. But, of course, it would be nonsense, would it not, to consider a photon as having an internal structure or that it's composed of yet other fundamental stuff? Of course, one might think of it as composed of a "field" vibration.

Well, the GGU-model predicts that a photon is composed of other fundamental stuff and that everything within our universe is composed of the exact same fundamental stuff. How this new stuff is put together to produce such things as photons, electrons, quarks, etc. is much, much simpler than putting together the words found in this particular sentence. Human behavior is the basis for the process employed. It is the gathering "together" of basic building materials. For example, all one needs to do is to bundle together collections of ultra-propertons, apply a type of glue and what you get is an electron. Ultra-propertons, at the least, carry each defined physical property as an individually encoded infinitesimal.

Remember that the GGU-model uses some of the same terms as used within the GD-model; but, like a very, very intelligent animator, the GGU-model assigns to these entities physical-like behavior. But, first and foremost, the GGU-model is a scientific cosmogony. Such a cosmogony gives processes and entities that can produce many different cosmologies, where a cosmology is a theory for how a particular universe is formed or evolves over time. On the other hand, such a cosmogony can't be what is called a tautological description. It should, at the least, be "logically falsifiable." This is the case with the GGU-model. There are certain statements about GGU-model processes that are speculation. On the other hand, simply due to the existence of the GGU-model, there are statements that are general conclusions about the GGU-model and they are fact. Before presenting some of the GGU-model speculations, these facts are stated.

(1) Any scientific description that describes a developing physical-system and that includes any statements relative to mechanisms that at any time T, in the past, could produce or sustain such a development has the GGU-model as an absolute alternative.

(2) For a given scientific theory that you might select as consistent with your personal philosophy, whether it be a Big Bang, biological evolution or otherwise, the GGU-model can reproduce the same scientific theory predictions.

(3) There are infinitely many scientific descriptions for how a specific physical-system may have developed prior to T including all of the standard theories. Further, there may never be a scientific language that can describe in any detail how a specific physical-system developed prior to T.

(4) Scientifically, the GGU-model predicts that there are infinitely many different scenarios, different "beginnings," and the like that could have produced the universe in which we dwell, the earth on which we live and everything about it, and even you yourself. This includes an actual beginning for a possible cyclic universe that may appear to "infinitely repeat itself" or other describable "eternal" universes. Each of these infinitely many scenarios could produce the exact same results, the exact same images on photographic plates, the exact same readings on data gathering machines as we observe these machines, today, within every laboratory. On the other hand, among these scenarios there may be one that gives a "simpler" explanation for evidence found today within our solar system and, especially, for seemingly anomalous evidence.

(5) It's possible logically for a physical-system to appear suddenly in complex and structured form. Such a physical-system can also vanish suddenly. These physical-systems need not be subatomic.

How are all of the above conclusions obtained? In this article, only the most basic and least technical aspects of the GGU-model are discussed. As mentioned in section 5, we have descriptions (i.e. symbol strings) that yield information on how a physical-system evolves over a period of time. Such a sequence of descriptions, like the animator's drawings, follow from members of U that the physical-system is somehow or other required to follow. Further, the generation of these descriptions from known members of U follows the basic rules of logic. The descriptions exist even if the appropriate members of U haven't as yet been applied properly and specifically to such a development by a standard intelligent life-form.

Of course, the actual development might vary somewhat depending upon the intervention of other processes. But, we, at the least, have a sequence of descriptions that yields an ideal behavior. Each member of this sequence of descriptions corresponds to a physical event. Thus we also have an actual event sequence as well. A particular sequence of such descriptions is called a developmental paradigm. Recall that such a sequence includes representations for visual, audio, etc. impressions. Hence, from a standpoint of what is allowed within science, the developmental paradigm represents all that science can "know" about the actual appearance and development of a physical-system.

In what follows, recall the Richard Feynman statement. Often science is interested in patterns that illustrate how Nature behaves and such patterns will not tell us why Nature behaves as illustrated by such patterns. The claim is that there may be no possible standard way to know why Nature behaves in such a way. Also recall the Hermann Weyl description for a possible physical-like object that gives "images, ideas or building plans" for an evolving physical-system.

Originally, as discussed in the above mentioned book, ultrawords and methods that appear in Herrmann (1994), and elsewhere, are coupled with "choice" processes to yield a physical-system, where the description is directly related to an event. For this new approach, the entity defined as an ultraword is not directly employed, although equivalent ultrawords can be defined. An ultimate ultraword is definitely not employed. However, the actual objects used do appear to model the "building plans" notion, where the images are part of the plans. In general, these predicted objects can be called ultra-sets. (For the more formal term a specific value is substituted for the term "ultra.") In this new approach, the idea of "instructions" that represent simple substratum "laws" is employed. However, the notion of descriptions is still employed in that the instructions, when applied, yield entities that correspond to the descriptions. For this article, for the most part, only the new approach for descriptions is illustrated.

For the description approach, what is modeled corresponds to mental-like entities and processes that are actually composed of specific collections of *developmental paradigm entities. The GGU-model predicts that for any universe that is described by a developmental paradigm there exists an ultra-set W that contains ultra-logic-systems. Then, when another mental-like combined process *A(*S) is applied to W, an appropriate *logic-system *V(n) is selected. The "appropriate" one depends upon the type of universe being constructed. Then the same *A type process is applied to *V(n) and an entire developmental paradigm is produced in the exact moment-by-moment sequence that corresponds to the exact moment-by-moment sequence of the actual events.

This description approach does not correspond to the pure physical-like generation of a universe, a method that can be considered secular if the "magic" term "random" is employed. For this "instructions" approach, the term "instructions" is substituted for each entity in a developmental paradigm and this yields the instruction paradigm. Such instructions correspond within the substratum to the scientist's idea of a physical law. The same scheme as for the developmental paradigm is employed, with two more processes adjoined. The predicted ultra-propertons are "gathered" into sets of sets of sets of . . . as we do when packing objects for shipment. These are produced by either a higher-intelligence or by "random" behavior. The last step is physical realization that is related to selection of numerical-like quantities. Even if one selects the "random" approach, the higher-intelligence signature is not eliminated although it can be ignored.

For the description approach, the selected *V(n) can be analyzed. What is discovered is that there are *deduced members that can be described as hyper-images or hyper-descriptions. These are like additional drawings in the animator's stacks of drawings. These are predicted and not part of the original "drawings." They correspond to hyper-instructions that produce substratum entities, substratum events, that cannot be eliminated. These events have various interpretations. But, what type of events are these?

They have any of the *-transferred properties one might associate with standard physical events. Depending upon the three possible types of original descriptions, distinct, repeated or empty, there can be hyper-descriptions that can't correspond to any standard physical event. Hence, they correspond to *events. Indeed, the actual symbols that appear in a *description for these nonstandard events use nonstandard alphabet symbols. This means that no standard intelligent life-form can express or comprehend completely a description for such nonstandard events. It just wouldn't "register," so to speak, on the brain as an event. For these reasons, such events that might be associated with these nonstandard images are called ultranatural events.

As an event sequence (analogue) illustration, consider a DVD and a machine that plays back the images via an old-style cathode-ray TV tube. Start the DVD and at the first displayed image push the pause-button. Now go forward one image (i.e. frame) at a time. The step-by-step collection of all the images is a standard event sequence, where the events are the images on the TV tube. After the first frame, the next frame yields an image that is slightly different from the first image. Theories for subatomic behavior claim that to produce the second image it may require millions of physical events to occur within the TV tube. These events are, at least, controlled by mathematically described statements. But, none of these events are recorded on the DVD. You cannot sense that any of these events has actually occurred. If you accept the predictions of subatomic physics, then you are often forced to accept that the second image is indirect evidence for the physical existence of all of these "unsensed" events.

Now consider a *DVD and a *event sequence. You use the machine, and it will yield the standard event sequence in the exact same step-by-step manner. Consider the first image (or frame) followed by the second image (or frame). Independent from any physical theory, there exist on the *DVD possibly millions of *events that have occurred between the two images you sense. These *events are not necessarily in the tube. These *events are mathematically predicted to occur and cannot be sensed in the usual physical way. To differentiate them from the standard images you sense, the term ultranatural event is used. Using the same rules used in subatomic physics, the second image is indirect evidence for the existence of an ultranatural event.

What good are these ultranatural events? Well, the GGU-model need not exist without these ultranatural events and within the GGU-model every physical event sequence has associated with it these ultranatural events. Further, there can be ultranatural events even without there being any standard physical events. For what behavioral purpose could they be used? They can simply be repeated events. But, in many cases, they represent additional and necessary physical-like "stuff" that is needed to sustain and hold together, so to speak, all of the physical events. Indeed, if they were not present throughout the development of a physical-system, then the physical-system, from the GGU-model viewpoint, need not exist. For certain cosmologies, like the present eternally expanding Big Bang, such nonempty nonstandard *events are predicted to exist.

The simplest way to approach "participator alterations" is via the distinct developmental and instruction paradigms and a collection of primitives {*S}, one for each altered universe. This is similar to quantum fields, where there is one type for each particle.

9. The Nothingness.

In this section, relative to the expanding universe and other concepts, it's shown how the GGU-model actually predicts the composition of a basic nothingness. Unfortunately, this may be the most difficult aspect for a practicing physical scientist to comprehend. Physical scientists often, but not always, apply a certain sequence of mental processes when mathematical models are constructed. When a student first begins describing the behavior of physical-systems, the technical dictionary terms are combined together. For example, a photon with a certain amount of energy interacts with an electron and the electron is ejected from a piece of metal.

After this statement is made, a mathematical model is chosen from which additional predictions can be made. For example, what would be the relative velocity of the ejected electron? The model says that if f is the frequency of the photon, h is Planck's constant, v is the electron's maximum relative velocity, m is the apparent electron mass and W is something called the work function that represents the amount of energy needed to remove the electron from the metal, then the mathematical model states, assuming that this event actually occurs, that these numbers are all related by the equation hf = (1/2)mv^2 + W. Thus, knowing everything but v, the relative velocity can be calculated. In an important way, the GGU-model reverses this process. The GGU-model can actually "talk," so to speak, and create its own physical theory stated in terms of a technical dictionary of its own.
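The photoelectric relation hf = (1/2)mv^2 + W can be turned into a short computation. The physical constants are the standard values; the specific frequency and the sodium work function of roughly 2.28 eV used below are illustrative assumptions.

```python
import math

# Solve h*f = (1/2)*m*v**2 + W for the electron's maximum speed v.
# The sample frequency and the sodium work function (~2.28 eV)
# are illustrative assumptions.
H = 6.626e-34     # Planck's constant, J*s
M_E = 9.109e-31   # electron mass, kg
EV = 1.602e-19    # joules per electron-volt

def max_velocity(f_hz: float, work_function_ev: float) -> float:
    """Maximum speed (m/s) of an electron ejected by a photon of
    frequency f_hz from a metal with the given work function."""
    excess = H * f_hz - work_function_ev * EV
    if excess <= 0:
        return 0.0    # photon energy below the work function
    return math.sqrt(2.0 * excess / M_E)

# Ultraviolet light at 1.5e15 Hz on sodium:
v = max_velocity(1.5e15, 2.28)
```

Note that below the threshold frequency (hf < W) no electron is ejected at all, which is the feature of the photoelectric effect that the particle picture of light was introduced to explain.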

One of the statements associated with electromagnetic radiation is a statement about its energy spectrum. In order to express this concept in words, one can write down a large collection of actual symbol strings that state the various energies that such radiation can acquire. The set of all these statements yields what is called a general paradigm. Each of these "words" can be coded with natural number relations as discussed previously and embedded into the mathematical structure used. After this, it's observed that there are nonstandard words that are members of the embedded general paradigm. These aren't the same things as the previous ultrawords and these new nonstandard words can be analyzed carefully. When this is done, it's discovered that they describe the exact same behavior as for such electromagnetic radiation but they have one symbol missing from each description.

Having this one symbol missing doesn't detract from our comprehension of the behavior of these possible new entities. A new or different symbol is selected and inserted into the place where a symbol is missing. This inserted symbol is used to represent a new mathematical measure called an infinitesimal and would have the same strange behavior as the set of all mathematical infinitesimals. It's from considering these general paradigms and what the model is actually "saying" that the theory of propertons has evolved. In other words, the theory of propertons is NOT an ad hoc construction; but, it's based entirely upon what the GGU-model is trying to communicate.

Before discussing a very simple illustration for properton behavior, I mention the quantum physical concept of the virtual particle. According to many scientists, these are real particles; but, they can't be detected by any laboratory process. [Feynman, 1983, p. 95] Some of, or often all of, their characteristics just pop into existence from the nothingness and very quickly these characteristics disappear into the nothingness. As a theory, quantum electrodynamics can't exist without virtual photons and since this theory predicts actual verifiable laboratory results so well, then it's claimed that they must exist. But, where do virtual particles come from and where do they go? Of course, this nothingness is believed to be a quantum field composed of what? One does not ask that question since it's a primitive. (Unless one invents another primitive, the "string.")

The GGU-model predicts the existence of a dense classical field of ultra-propertons. Based entirely upon the experiential process of choosing a finite set of objects from a bag of objects, the GGU-model gives an actual refined mechanism that combines these identical ultra-propertons together in such a way that the end result is the complex collections of configurations that will correspond to the physical systems that are present within each universe-wide frozen-frame. This configuration is called an "info-field." The final step in this process is the application of the well known mathematical operator, the standard part operator, an operator that has been recently shown to satisfy the axioms for a finite consequence operator. This operator takes the combinations of ultra-propertons and activates them and the result is a realized universe-wide frozen-frame.
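As a toy illustration only, the behavior of a standard part operator can be pictured with a dual-number stand-in for hyperreals: a pair a + b·eps, where eps is a nonzero infinitesimal. This stand-in is an assumption made for this sketch and is far simpler than the actual nonstandard construction; the point is only that "realization" discards the infinitesimal component and leaves an ordinary value.

```python
# Toy model of the standard part operator st.  A "hyperreal" is
# sketched as a pair (std, inf) representing std + inf*eps; this
# dual-number picture is an illustrative assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class Hyper:
    std: float   # standard part a
    inf: float   # infinitesimal coefficient b (of eps)

    def __add__(self, other):
        return Hyper(self.std + other.std, self.inf + other.inf)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, dropping eps^2
        return Hyper(self.std * other.std,
                     self.std * other.inf + self.inf * other.std)

def st(x: Hyper) -> float:
    """Standard part: realize a finite hyperreal as an ordinary real."""
    return x.std

x = Hyper(3.0, 1.0)    # 3 + eps
y = Hyper(2.0, -4.0)   # 2 - 4*eps
print(st(x * y))       # 6.0
```

Arithmetic on the pairs behaves like ordinary arithmetic (the transfer of behavior patterns), yet st collapses infinitesimally close values to the same realized result.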

In quantum physics, there are also such operators, but they are applied to the nothingness and not to a describable and predicted object. Further, due to this properton mechanism, the virtual particles of quantum physics can be eliminated from the subject as real entities and replaced by this properton mechanism. Of course, the virtual particle notion can still be used as a model for behavior.

One physical object is scientifically differentiated from others by its properties and relative location. Given an info-field, the standard part operator is turned on. Then each physical system is realized as an event described by the physical properties generated by its properton configuration. As long as the standard part operator is applied, the physical system continues to appear within the physical world. When the standard part operator is removed or turned off, the info-field returns to its non-physical-world state, mere combinations of ultra-propertons. But, there are other info-fields that are used for the next member of the event sequence. Via a participator choice, the standard part operator is then turned on and the next universe-wide frozen-frame is physically realized. By the way, theologically, God is a participator.

10. A Powerful Mind.

In this section, I expand upon the notion of a higher-intelligence. Obviously, the GGU-model can be discussed in linguistic terms - the General Intelligent Design (GID) model. This gives an enhanced view of what the concept of a model truly signifies. All of the standard aspects from which the GD-world and GGU-model are obtained are finite in character. The term finite is to be understood intuitively. Indeed, the dictionary definition that a finite thing has measurable or definable limits is quite adequate for our purposes. Mathematically, it's possible to model such "measurable limits." But, discussing such a model within this article is unnecessary. For our purposes, the term infinite simply means "not finite."

The "words" (descriptions) deduced from a standard logic-system have various finite characteristics such as the length of the word and the number of alphabet symbols used within its construction. In the GGU-model and for all cases discussed, various *words deduced from ultra-logic-systems, when externally described, are not finite with respect to their characteristics. A difficulty occurs when this external view is not used. In this case, these *words have the same behavior patterns with respect to the finite concept as do the standard words. The term hyperfinite is used to indicate this mathematical fact. Various *words used in the GGU-model are hyperfinite in character; they behave as if they have finite characteristics but, in comparison to the finite character of the standard words, they are infinite. The ultra-logic-system *S also behaves, in certain respects, as a hyperfinite object. Certain properties of the standard S are finite in character. These same properties for *S are hyperfinite in character. For certain nonstandard objects to which *S applies, when a comparison is made, these hyperfinite properties have an infinite character.
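As an aside for the mathematically inclined, the term hyperfinite can be made concrete by a standard example from nonstandard analysis; this illustration is mine and is not drawn from the GGU-model papers themselves:

```latex
% Fix an infinite hypernatural N \in {}^{*}\mathbb{N} \setminus \mathbb{N}.
% The internal set
T = \left\{ \tfrac{k}{N} \;:\; k \in {}^{*}\mathbb{N},\ 0 \le k \le N \right\}
% is hyperfinite: internally it has "cardinality" N + 1 and behaves
% exactly as a finite set does, yet, externally compared with any
% standard finite set, it is infinite.
```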

The mathematical fact is that *A is more powerful than A in that, from a deduction viewpoint, the application of *A yields infinitely many deductions over a small time interval, whereas A can yield only finitely many deductions. Coupling this with the above information on finite versus infinite, there seems to be no other general linguistic way to describe the GGU-model than to say that it represents, as a model, an infinitely powerful mind - the GID-model interpretation. Thus, the deBroglie and Lewis quotations given above are justified scientifically from the viewpoint of the GGU-model. Of course, the GID-model interpretation need not be made. However, it's interesting to note that linguistic terms describing mechanisms that produce physical-systems or alter their behavior can be found in many ancient documents.

11. The Rapid Formation Model and My Personal Belief Statements

In the book that contains The Theory of Infinitesimal Light-Clocks [Herrmann, 1995], I derive certain physical "metrics" that can be used to increase or decrease physical time-rates of change. However, application of members of U for these purposes is rather unnecessary. The most direct way that any such alterations and other significant behavior can be made is via distinct event sequences. That is, the set U does not come first.

The pure event sequence approach is the one I now take as the basis for alterations in physical-system behavior that do not correspond to those obtainable from members of U as this set is accepted today by most science-communities. Moreover, I do not accept as real physical entities such things as quantum fields, virtual photons, etc. I accept these notions as imaginary constructs for behavior that we cannot otherwise comprehend. They form a vast part of U and are used to predict. This is termed an instrumentalist philosophy for these aspects of physical science. This does not mean that others need to reject the reality of such entities, and the entities can exist from the GGU-model viewpoint. However, their existence is not physically necessary.

For a theological interpretation, a very strict Biblical creationary scenario is produced by application of the Rapid Formation Model (RFM). In general, this mode of development is independent of the cosmology chosen. That is, any of the known cosmologies can be employed, and not only does the RFM produce the universe as described by such a cosmology, but it does so in such a manner that a strict Genesis interpretation is maintained. Indeed, this approach also yields present-day evidence that satisfies both an ancient-earth and a young-earth interpretation without producing contradictions.

As discussed in my personal belief statement for a strict Genesis creation scenario, during what could be described as past history, physical-system behavior obtained by a general event sequence can contradict behavior generated by members of U as they are, at present, verified. This is only a paradox, however, and is resolved by this approach in that what we perceive today as physical law has been such only since the Flood.

12. A Comparison of Approaches

In the original approach to the GGU-model, as it appears in the above book, Herrmann 1994 and elsewhere, the theory of finite consequence operators is employed. In particular, one operator, which I denote here by S', is used. A developmental paradigm, d, (or now an instruction paradigm) is employed. Mathematically, an "ultraword" W is predicted. This is a mathematical object that behaves, in general, like a written or spoken word (a finite list of symbols, drawings or images). When S' is applied to W, many entities are *deduced. From these, choices must be made.
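A finite consequence operator, in Tarski's sense, is a mapping C on sets of formulas satisfying X ⊆ C(X), C(C(X)) = C(X), and the finitary condition that C(X) is the union of the C(F) taken over the finite subsets F of X. The following runnable sketch shows such an operator over a toy rule set; the rule set and all names are my own illustration and are not the GGU-model's operator S':

```python
def consequence(premises, rules):
    """Deductive closure of a premise set under a finite list of
    production rules, each of the form (antecedent_set, conclusion)."""
    closed = set(premises)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in rules:
            # A rule fires when all of its antecedents are already deduced.
            if antecedents <= closed and conclusion not in closed:
                closed.add(conclusion)
                changed = True
    return closed

# A toy rule set: from "a" deduce "b"; from "a" and "b" deduce "c".
rules = [({"a"}, "b"), ({"a", "b"}, "c")]

closure = consequence({"a"}, rules)
print(sorted(closure))  # ['a', 'b', 'c']

# The defining consequence-operator properties hold for this example:
assert {"a"} <= closure                        # X is contained in C(X)
assert consequence(closure, rules) == closure  # C(C(X)) = C(X)
```

Since every rule has a finite antecedent set, each deduced formula already follows from a finite subset of the premises, which is exactly the finitary condition.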

Consider the finite set of rational numbers {1.23, 6.78, 5.345, 1.22}. We have learned how to apply some unknown mental process and place these numbers in the order, 1.22 < 1.23 < 5.345 < 6.78. The same type of process must be done for a hyperfinite infinite set of numbers in order to select the appropriate members of the developmental paradigm. Of course, this is but a model for behavior, a model for some process that selects the appropriate members of the developmental paradigm. The complete approach does not employ such a technically vague ordering process, but rather each member of a developmental paradigm is *deduced in the correct order. The ultraword approach requires many choices to be made from a vast collection of different types of entities. This new approach requires that one identifiable choice be made from similar entities.
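For the finite case, the ordering process just described can be sketched mechanically. This is a toy illustration only; as stated above, the selection mechanism for the hyperfinite case is merely modeled by such behavior:

```python
# The finite set of rational numbers from the example above.
numbers = [1.23, 6.78, 5.345, 1.22]

# A simple selection procedure: repeatedly choose the least remaining
# number, mirroring "selecting the appropriate member next."
ordered = []
remaining = list(numbers)
while remaining:
    least = min(remaining)
    ordered.append(least)
    remaining.remove(least)

print(ordered)  # [1.22, 1.23, 5.345, 6.78]
```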

Both approaches require the same rationally describable properton "gathering process" and the rational application of the "physical reality operator" (the standard part operator) that yields physical reality. In the new approach, these two operators are applied in the instruction paradigm case. Although not mentioned in the book, the exact same original processes can be applied to an instruction paradigm and it is these results to which the last two operators need to be applied.

One advantage of the original approach is that, for participator alterations in physical behavior, distinct universes are designed. These can all be incorporated into one ultimate ultraword, and the operator S' rationally yields each ultraword from which specific altered universes are obtained by another application of S'. This ultimate ultraword method has not, as yet, been applied to this new method. Thus, this new scheme is applied individually to the required pre-designed altered universes.

In this article, I have attempted to explain in "simple" terms the basic intuitive concepts needed to more easily comprehend various aspects of the GD-world, GGU and GID models. I may have failed to do so. However, it's sincerely hoped that the basic notion of a "model" for behavior is fully understood and that the behavior exhibited by these models constitutes the primary predictions to be considered.

For the GGU-model, linguistic entities are replaced consistently by physical-like entities. I repeat that such replacements aren't foreign to theoretical science since this has been done within the subject of quantum logic and was done, in the early 1970s, in an attempt to answer the pre-geometry cosmological problem. The GID-model corresponds to certain notions associated with logical discourse and is a re-interpretation of the GGU-model in terms of intelligent agency.

These models have yielded answers to many perplexing questions within science, philosophy and theology. As mentioned, the intuitive foundations for these models follow from our everyday experiences. If you have the requisite training, then you should now be prepared to undertake an in-depth study of the more complex aspects of these models as they are discussed in various, but more formal, journal publications - for example, in Herrmann (1994) and in many other articles and monographs.


References

Barnes, T., 1983. Physics of the Future, ICR, El Cajon CA.

Duff, B. G., 1986. Fundamental Particles: an introduction to quarks and leptons, Taylor and Francis, London.

Feynman, R., 1985. QED, the Strange Theory of Light and Matter, Princeton University Press, Princeton.

Herrmann, R. A., 2014. The GGU-model and generation of developmental paradigms.

Herrmann, R. A. 2003. The best possible unification for any collection of physical theories.

Herrmann, R. A., 1995. The Theory of Infinitesimal Light-Clocks; also available as the free book "Einstein Corrected."

Herrmann, R. A., 1994. Solutions to the "General Grand Unification Problem. . . . "

Herrmann, R. A., 1983. Mathematical philosophy and developmental processes, Nature and System, 5(1/2):17-36.

Lewis, C. S., 1978. Miracles, MacMillan Paperback, NY, p. 32; and 1960. Mere Christianity, MacMillan Paperback, NY, p. 32.

March, A. and I. M. Freeman, 1963. The New World of Physics, Vintage Books, NY.

Original 1996, revised 24 DEC 2000, 10 MAY 2008, 24 NOV 2009, 21 SEP 2011, 11 JUN 2012.
