Footnote MAs.118

MAs.118. Though, as the Bradley-Russell debate shows, relations constitute an imperfect vehicle for representing human reality, they are nevertheless integral to human communication.

I regard a relation as a verb-like construct that pairs two noun-like constructs. A noun-like construct can represent an abstract picture of some ponderable object or of an "action." An action is abstracted from a number of specific events. For example, there is no such thing as running devoid of runners. We may say that "runs" relates some object, usually some animal, to something else.

Joe runs

doesn't seem to be a relation, though.

In such a case, there are two ways to deal with the issue:

We may say that for aRb (or R(a,b)), b = ∅. Or we may note that "Joe runs" abbreviates the picture "Joe applies a running motion to his body." Here the relation is the truncated predicate "applies a running motion."

But infinity is a difficulty, for there is nothing to stop an infinite regress type of relation, as in aR(bS(cT(...))) and so on. In fact such relations are typical in mathematics. The philosophical problem arises in trying to define a relation in terms of another relation.
Or, we can write aRb, b = cSd, and so on.
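The chained relations aRb, b = cSd can be given a concrete sketch. In the toy model below (the names `rel`, `Term`, and `depth` are my own illustrative inventions, not part of the text), the right-hand term of one relation may itself be another relation, which is exactly what leaves the door open to regress: nothing in the formalism caps the nesting depth.

```python
from typing import Union

# A term is either a bare name or a nested relation triple.
Term = Union[str, tuple]

def rel(a: Term, R: str, b: Term) -> tuple:
    """Model the relation aRb as an ordered triple (a, R, b)."""
    return (a, R, b)

# aRb where b is itself the relation cSd:
nested = rel("a", "R", rel("c", "S", "d"))
print(nested)  # ('a', 'R', ('c', 'S', 'd'))

def depth(t: Term) -> int:
    """Count how deeply relations are nested; the formalism imposes no limit."""
    if isinstance(t, tuple):
        return 1 + max(depth(x) for x in t)
    return 0

print(depth(nested))  # 2
```

Nothing stops one from wrapping `nested` in yet another `rel` call, ad infinitum, which is the regress in miniature.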

The regress problem, in fact, has much of Russell's paradox about it, at least insofar as the self-referencing aspect is concerned.

Then there is aR(aRb), which also brings up the self-referencing issue.

Despite these cautions, the way we ordinarily communicate with each other is well expressed as a system of relations, both empirical and logical/formal.

We may take every word in the Oxford dictionary as the set A and form the pairings A × A. We then take every word in that dictionary -- along with a certain subset of words teamed with auxiliary verbs and pseudo-auxiliary verbs like "get" -- that has even a vague possibility of representing some action-type notion, and call each such word a relation.
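The construction above can be sketched with a hypothetical miniature "dictionary" standing in for the full Oxford word list (the word list and the choice of which words count as relations are my own stand-ins for illustration):

```python
from itertools import product

# A tiny stand-in for the set A (the dictionary's words).
words = ["Joe", "runs", "dog", "chases", "cat"]

# All ordered pairings A x A.
pairs = list(product(words, words))
print(len(pairs))  # 5 words yield 25 ordered pairs

# A hypothetical subset of verb-like words treated as relations.
relations = {"runs", "chases"}

# Candidate propositions aRb: each relation teamed with an ordered pair of nouns.
nouns = [w for w in words if w not in relations]
propositions = [(a, R, b) for R in relations for a, b in product(nouns, nouns)]
print(len(propositions))  # 2 relations x 9 noun pairs = 18 candidates
```

Most of the 18 candidates ("cat chases cat," "Joe runs dog") are gibberish at present, which anticipates the point made below: only a small subset represents meaningful propositions.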

Among subsets of relations are aRk and kRb, where a and b represent variables and k is held constant.

All such relations represent propositions or assertions.

Empirical assertions are judged true or false according to whether we believe they accord with a purported objective reality.

Logical propositions follow rules laid down by logicians that reflect commonly accepted means of human reasoning. Such formal propositions are true or false based on their internal consistency. A∧B true requires that both A and B be adjudged true, and that this judgment stem from axioms (though even that proviso is not altogether so, as Goedel showed). A∧~A is false by almost universal agreement.
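The two judgments just mentioned can be checked mechanically. This minimal sketch (the function name `conj` is my own) runs conjunction over every truth assignment: A∧B holds only on the single row where both conjuncts are true, while A∧~A fails on every row.

```python
def conj(a: bool, b: bool) -> bool:
    """Logical conjunction: true only when both conjuncts are adjudged true."""
    return a and b

# All four assignments of truth values to A and B.
assignments = [(a, b) for a in (True, False) for b in (True, False)]

# A AND B is true on exactly one row of the truth table.
print([conj(a, b) for a, b in assignments])  # [True, False, False, False]

# A AND NOT-A is false on every row: a contradiction.
print(all(not conj(a, not a) for a in (True, False)))  # True
```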

I don't wish to rehearse all the arcana of the theory of logic here. My point is that the notion of sets of relations covers the field of logic very well.

Well, yes, it is true that many relation pairs are gibberish -- at present -- though someone may pair two words/images/ideas in some technical or poetic way that would be meaningful to a subset of hearers/viewers. Thus, we may say that most if not all pairs and their relation potentially are meaningful. By meaningful, we are saying that a binary truth value is applicable to the specific relation/proposition.

But it follows that only some small subset C of A × A would, for any particular human generation or two, represent meaningful propositions.

In a sense, can we not say that the formalism of relations and the generally accepted rules of logic show, without further ado, that the human brain/mind is wired with a primary in-built grammar? What human language cannot be cast in the form of ordered pairs of relation sets? This is not to say that the work of Noam Chomsky et al. is pointless. But it does seem that Chomsky's youthful quest was long prefigured by the work of the logicians and philosophers.

Consider how computers operate. The most basic task of any computer is to compare (check for a match) binary strings. It then follows a procedure (algorithm) for further "decisions."
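The comparison-then-procedure step described above can be shown in a toy form (the function name `decide` and the procedure labels are my own illustrative inventions): the machine checks two binary strings for a match and then branches to one procedure or another based on the result.

```python
def decide(pattern: str, data: str) -> str:
    """The most basic computer 'decision': do two binary strings match?"""
    if pattern == data:
        return "match: run procedure X"
    return "no match: run procedure Y"

print(decide("1011", "1011"))  # takes the match branch
print(decide("1011", "1100"))  # takes the no-match branch
```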

Perhaps a more convincing example is the modern large language model computing system. As fancy as AI systems are, they remain computing systems. Yet AI can learn what one might term the "architecture" of any human language (or machine language, for that matter). If the universal Turing machine -- a pre-electronic conception of the computer -- implies the large language model, then Turing machines are the skeletal blueprints for AI. You might say they are the building blocks of all AI grammars.

Yet, as noted elsewhere, computation cannot account for the "non-scientific" activity of intuition.

Critique of Chomsky's hard-wiring views
https://www.scientificamerican.com/article/evidence-rebuts-chomsky-s-theory-of-language-learning/

Chomsky's syntactic structures (Wikipedia)
https://en.wikipedia.org/wiki/Syntactic_Structures


Footnote dgh.754

FN dgh.754. Science and Human Behavior by B.F. Skinner (Macmillan 1953).