Study 2: Logical Reasoning

Sep 21, 2024

We conducted Study 2 with three goals in mind.

First, we wanted to replicate the results of Study 1 in a different domain, one that focused on intellectual rather than social abilities.

We chose logical reasoning, a skill central to the academic careers of the participants we tested and one that is called on frequently.

We wondered if those who do poorly relative to their peers on a logical reasoning test would be unaware of their poor performance.

Second, examining logical reasoning enabled us to compare perceived and actual ability in a domain less ambiguous than the one we examined in the previous study.

It could reasonably be argued that humor, like beauty, is in the eye of the beholder.²

The imperfect interrater reliability among our group of professional comedians suggests that there is considerable variability in what is considered funny even by experts. This criterion problem, or lack of uncontroversial criteria against which self-perceptions can be compared, is particularly problematic in light of the tendency to define ambiguous traits and abilities in ways that emphasize one’s own strengths (Dunning et al., 1989).

Thus, it may have been the tendency to define humor idiosyncratically, and in ways favorable to one’s tastes and sensibilities, that produced the miscalibration we observed, not the tendency of the incompetent to miss their own failings. By examining logical reasoning skills, we could circumvent this problem by presenting students with questions for which there is a definitive right answer.

Finally, we wanted to introduce another objective criterion with which we could compare participants’ perceptions. Because percentile ranking is by definition a comparative measure, the miscalibration we saw could have come from either of two sources. In the comparison, participants may have overestimated their own ability (our contention) or may have underestimated the skills of their peers. To address this issue, in Study 2 we added a second criterion with which to compare participants’ perceptions. At the end of the test, we asked participants to estimate how many of the questions they had gotten right and compared their estimates with their actual test scores. This enabled us to directly examine whether the incompetent are, indeed, miscalibrated with respect to their own ability and performance.

Method

Participants. Participants were 45 Cornell University undergraduates from a single introductory psychology course who earned extra credit for their participation. Data from one additional participant were excluded because she failed to complete the dependent measures.

Procedure. Upon arriving at the laboratory, participants were told that the study focused on logical reasoning skills. Participants then completed a 20-item logical reasoning test that we created using questions taken from a Law School Admissions Test (LSAT) test preparation guide (Orton, 1993).

Afterward, participants made three estimates about their ability and test performance. First, they compared their “general logical reasoning ability” with that of other students from their psychology class by providing their percentile ranking. Second, they estimated how their score on the test would compare with that of their classmates, again on a percentile scale. Finally, they estimated how many test questions (out of 20) they thought they had answered correctly. The order in which these questions were asked was counterbalanced in this and in all subsequent studies.

² Actually, some theorists argue that there are universal standards of beauty (see, e.g., Thornhill & Gangestad, 1993), suggesting that this truism may not be, well, true.

Results and Discussion

The order in which specific questions were asked did not affect any of the results in this or in any of the studies reported in this article and thus receives no further mention.

As expected, participants overestimated their logical reasoning ability relative to their peers. On average, participants placed themselves in the 66th percentile among students from their class, which was significantly higher than the actual mean of 50, one-sample t(44) = 8.13, p < .0001. Participants also overestimated their percentile rank on the test, M percentile = 61, one-sample t(44) = 4.70, p < .0001. Participants did not, however, overestimate how many questions they answered correctly, M = 13.3 (perceived) vs. 12.9 (actual), t < 1. As in Study 1, perceptions of ability were positively related to actual ability, although in this case, not to a significant degree. The correlations between actual ability and the three perceived ability and performance measures ranged from .05 to .19, all ns.
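As a minimal sketch of the statistic reported above: a one-sample t-test compares a sample mean against a fixed null value, here the 50th percentile that an unbiased group should average. The study does not report the sample standard deviation, so the value used below (~13.2) is back-derived purely so the arithmetic reproduces the reported t; treat it as hypothetical.

```python
import math

def one_sample_t(mean, mu0, sd, n):
    """One-sample t statistic: t = (mean - mu0) / (sd / sqrt(n))."""
    return (mean - mu0) / (sd / math.sqrt(n))

# Reported values: mean perceived percentile = 66, null value = 50, n = 45.
# sd = 13.2 is a back-derived illustration, not a reported statistic.
t = one_sample_t(66, 50, 13.2, 45)
print(round(t, 2))  # ≈ 8.13, matching the reported t(44)
```

With df = n − 1 = 44, a t this large corresponds to p < .0001, as reported.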

What (or rather, who) was responsible for this gross miscalibration? To find out, we once again split participants into quartiles based on their performance on the test. As Figure 2 clearly illustrates, it was participants in the bottom quartile (n = 11) who overestimated their logical reasoning ability and test performance to the greatest extent. Although these individuals scored at the 12th percentile on average, they nevertheless believed that their general logical reasoning ability fell at the 68th percentile and their score on the test fell at the 62nd percentile.
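The quartile split described above can be sketched as follows. The scores here are invented for illustration (the study's raw data are not given), and `numpy` is simply one convenient way to compute percentile ranks and quartile cut points.

```python
import numpy as np

# Hypothetical test scores (out of 20) for 12 participants -- illustrative only.
scores = np.array([6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 19])

# Percentile rank of each participant: share of the group scoring below them.
pct_rank = np.array([(scores < s).mean() * 100 for s in scores])

# Split into performance quartiles (0 = bottom, 3 = top).
cuts = np.quantile(scores, [0.25, 0.5, 0.75])
quartile = np.searchsorted(cuts, scores, side="right")

# Mean actual percentile within each quartile.
for q in range(4):
    print(q, round(pct_rank[quartile == q].mean(), 1))
# bottom quartile mean percentile ≈ 8.3, top quartile ≈ 83.3
```

Comparing each quartile's mean perceived percentile against its mean actual percentile, as Figure 2 does, is what exposes the bottom quartile's overestimation.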

Their estimates not only exceeded their actual percentile scores, ts(10) = 17.2 and 11.0, respectively, ps < .0001, but exceeded the 50th percentile as well, ts(10) = 4.93 and 2.31, respectively, ps < .05. Thus, participants in the bottom quartile not only overestimated themselves but believed that they were above average. Similarly, they thought they had answered 14.2 problems correctly on average, compared with the actual mean score of 9.6, t(10) = 7.66, p < .0001. Other participants were less miscalibrated. However, as Figure 2 shows, those in the top quartile once again tended to underestimate their ability. Whereas their test performance put them in the 86th percentile, they estimated it to be at the 68th percentile and estimated their general logical reasoning ability to fall at only the 74th percentile, ts(12) = -3.55 and -2.50, respectively, ps < .05. Top-quartile participants also underestimated their raw score on the test, although this tendency was less robust, M = 14.0 (perceived) versus 16.9 (actual), t(12) = -2.15, p < .06.

Figure 2. Perceived logical reasoning ability and test performance as a function of actual test performance (Study 2). The figure plots perceived ability, perceived test score, and actual test score across the bottom, 2nd, 3rd, and top quartiles.

Summary

In sum, Study 2 replicated the primary results of Study 1 in a different domain. Participants in general overestimated their logical reasoning ability, and it was once again those in the bottom quartile who showed the greatest miscalibration. It is important to note that these same effects were observed when participants considered their percentile score, ruling out the criterion problem discussed earlier. Lest one think these results reflect erroneous peer assessment rather than erroneous self-assessment, participants in the bottom quartile also overestimated the number of test items they had gotten right by nearly 50%.
