I.Q.

© Paul Cooijmans

Explanation

In the context of "I.Q. Tests for the High Range", "I.Q." is an abbreviation of "Intelligence Quantifier", and is meant to approximate where a particular score belongs on the scale of adult intelligence. The word "Quantifier" is used instead of the common "Quotient" because I.Q., as currently computed, is in no way a "quotient", that is, an answer to the question "how often" (does the one fit into the other). The word "Quantifier" therefore fits the meaning better: a number quantifying an amount, either continuous or discrete.

This I.Q. has been derived directly from the proportion of high-range candidates outscored, using a table such that the resulting I.Q. is comparable to an adult deviation I.Q. when the general population standard deviation is set at 15. This table is based on past experience from the period when high-range tests were anchor-normed to other tests via reported prior scores, and has replaced that method. The table may be adjusted when future studies show this to be necessary. It may be noted that this way of deriving I.Q. does not assume a normal distribution. Future adjustments of the protonorm-I.Q. relation may employ a different method, such as one that is absolute (as opposed to depending on the current population of high-range candidates, which should be allowed to change without affecting the meaning of particular I.Q. values, after all).
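For illustration only, the sketch below shows how such a table could be applied: a proportion of high-range candidates outscored is converted to an I.Q. by linear interpolation between table entries. The table values and the function name are invented for this example and do not reproduce the actual norming table; note that nothing in this procedure assumes a normal distribution.

    # Illustrative sketch only; the (proportion outscored, I.Q.) pairs below
    # are invented and are not the actual norming table.
    from bisect import bisect_right

    TABLE = [
        (0.05, 120),
        (0.25, 130),
        (0.50, 136),
        (0.75, 143),
        (0.95, 156),
        (0.99, 166),
    ]

    def iq_from_proportion_outscored(p):
        """Map a proportion of high-range candidates outscored to an I.Q.
        by linear interpolation in the lookup table."""
        props = [row[0] for row in TABLE]
        iqs = [row[1] for row in TABLE]
        if p <= props[0]:
            return iqs[0]
        if p >= props[-1]:
            return iqs[-1]
        i = bisect_right(props, p) - 1
        frac = (p - props[i]) / (props[i + 1] - props[i])
        return iqs[i] + frac * (iqs[i + 1] - iqs[i])

    # A candidate outscoring 60 % of high-range candidates:
    print(iq_from_proportion_outscored(0.60))  # about 138.8 with these invented values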

The aim is to make I.Q. approximately intervallic (linear) and objective, independent of age, sex, or population. I.Q. is not a true ratio scale in that it does not have an absolute and meaningful zero. It is attempted, however, to ensure that a given I.Q. corresponds to the same intelligence level across candidates and across time (years, decades, centuries, eras).

Inherent in the present method for deriving I.Q. is that higher I.Q.s are rarer than lower ones (upward of the high-range mode in the low 130s). Some worry that this makes it impossible for a suspected "bump" in the "gifted" range to show up; such a bump is present in traditional childhood scores. The reply to these worries must be that the notion of a "bump" in the I.Q. distribution is meaningless as long as we do not have a true physical absolute scale for I.Q. (as we have for distance, mass, et cetera), and therefore cannot know whether such a bump exists, or what the distribution looks like altogether. We only know the ranking of scores, and do our best to construct an intervallic scale underneath it. The bump in childhood scores most likely results from the fact that the method for computing those scores - dividing mental age by biological age - was an inferior method. An indication that adult deviation I.Q., at least in the below-average to average range, is indeed intervallic, is the virtually linear relation between the physical measure of brain volume and I.Q. when both are averaged across populations.

It is important not to confuse I.Q. with childhood scores (either mental/biological age ratio scores or standard scores by age group), with age-corrected or age-based scores for adults, with estimated or quoted "I.Q."s of famous people, or with self-assumed "I.Q."s of megalomaniacs, each of which tends to be much higher than real I.Q.s. While it is true that "I.Q." started out as the ratio of a child's mental and biological age, this concept is meaningless and impossible for adults - mental age stops rising in late adolescence, so the quotient would only fall with further biological age - and it was abandoned even for children decades ago.

- [More statistics explained]
