Much of the reading that we do occurs near our hands. Previous research has revealed that spatial processing is enhanced near the hands, potentially benefiting several processes involved in reading; however, it is unknown whether semantic processing, another critical aspect of reading, is affected near the hands. While holding their hands either near to or far from a visual display, our subjects performed two tasks that drew on semantic processing: evaluation of the sensibleness of sentences, and the Stroop color-word interference task. We found evidence for impoverished semantic processing near the hands in both tasks. These results suggest a trade-off between spatial processing and semantic processing for the visual space around the hands. Readers are encouraged to be aware of this trade-off when choosing how to read a text, since both kinds of processing can be beneficial for reading.
Thanks to the Internet, we have access to virtually unlimited quantities of reading material. We bounce back and forth among the news, blogs, product reviews, and e-mail, all with a couple of clicks of the mouse. And yet, when it comes time to really read an electronic document (i.e., one of importance that we truly want to absorb), many people would rather print it out to read than read it in its electronic form. There are, of course, functional advantages to a hard copy: It is portable, and it can be written on. Yet in our informal polling, these advantages are rarely invoked to justify the preference. Rather, a common response seems to be, "I just like to hold it," as if having the text in one's hands somehow fundamentally alters the way in which it is read.
There is indeed good reason to suspect that visual processes that do not inherently involve the hands (e.g., reading) could nevertheless be affected by the hands. Several studies have shown that the actions we perform with our hands can influence how we see (e.g., Bekkering & Neggers, 2002; Fagioli, Hommel, & Schubotz, 2007; Vishton et al., 2007; Wohlschläger, 2000) and how we perform complex visual transformations like mental rotations (e.g., Wexler, Kosslyn, & Berthoz, 1998; Wohlschläger & Wohlschläger, 1998). These studies have revealed an intimate relationship between perception and action, in particular the capacity for the latter to affect the former. Furthermore, there are known interactions between the hands and language processing, as can be observed through gesturing in prelinguistic children (Iverson & Goldin-Meadow, 2005) and through sign language (Goldin-Meadow, 2006). Finally, comprehension of action language may rely on the activation of the comprehender's own motor system (Fischer & Zwaan, 2008; Holt & Beilock, 2006). Not only can hand actions affect what we see, therefore; they may also play a key role in language acquisition, comprehension, and communication.
Recent research has also revealed a more direct manner in which the hands may alter the visual processing of objects that are nearby, such as handheld material that is being read. In particular, several results have suggested that visual processing may be biased toward the space around the hands. For example, Schendel and Robertson (2004) described a patient with considerable left visual field loss (a probe-detection rate of approximately 15%) who was able to substantially improve his detection of probes in the damaged field simply by holding his hand near the left side of the display. Reed, Grubb, and Steele (2006) showed a similar enhancement in detection of stimuli near an outstretched hand in non-neurologically compromised individuals. In their study, subjects performed a basic covert visual attention task. On each trial, one of two boxes flanking a central fixation cross was cued. On most trials (i.e., noncatch trials), a target subsequently appeared in either the cued or the uncued box (70% cue validity), and subjects were to respond via mousepress as quickly as possible once they detected the target. The primary manipulation was the proximity of the nonresponding hand to the stimulus display: Subjects held their hand either directly next to the box on the same side of the display, or in their lap (the responding hand remained on the table in front of them, holding the mouse). Subjects were faster to respond to targets that were presented next to the hand than to those that occurred away from the hand (on the opposite side of the screen), regardless of whether the target was validly or invalidly cued. Taken together, the results of Reed et al. and Schendel and Robertson suggest that visual attention may be drawn to objects that are near the hands.
In addition to the effect that the hands may have on the locus of visual attention, recent research has shown that visual stimuli near the hands may be processed fundamentally differently from those farther away. For example, Abrams, Davoli, Du, Knapp, and Paull (2008) tested subjects in three basic visual attentional paradigms: (1) visual search ("find the H or the S amongst distractor letters"); (2) spatial cuing ("respond to the location of a target following a variable cue-target interval"); and (3) rapid serial visual presentation ("following the termination of a stream of characters, respond to the parity of the lone digit [T1], and to the identity of a subsequent, but temporally variable, A or B [T2]"). Subjects held their hands either near the display (on either side of the monitor) or far from the display (on their laps) while performing the tasks. In all tasks, subjects exhibited delayed disengagement of attention from objects near their hands: slower rates of search in the first task; reduced inhibition-of-return from the cued location in the second task; and a more pronounced attentional blink for T2 in the third task. The delayed disengagement would force a more thorough evaluation of stimuli near the hands (though it has consequences for the processing of subsequent events). Importantly, it is clear that the effects of hand proximity were not due to kinesthetic changes (Abrams et al., 2008; Davoli & Abrams, 2009; Schendel & Robertson, 2004) or to differences in the visibility of the hands between the hands-near and hands-far postures (Abrams et al., 2008; Davoli & Abrams, 2009; Reed et al., 2006). Rather, the proximity of the hands to visual stimuli appears to have been sufficient to produce changes in visual processing.
There also exist neurophysiological data that support the conclusion that vision of the space around the hands is special. In particular, there appear to be distinct brain mechanisms that are dedicated to the representation of the space immediately surrounding the body (known as peripersonal space), including that near the hands. For example, single-cell recordings in monkeys have identified bimodal neurons with tactile receptive fields on the hand that also respond to visual stimulation from objects that are in close proximity (within 20-50 cm) to the hand (see, e.g., Graziano, Hu, & Gross, 1997). Humans also have brain areas that respond specifically to objects near the hand, as evidenced by functional magnetic resonance imaging (fMRI; Makin, Holmes, & Zohary, 2007). Finally, evidence from patient populations corroborates the dissociation between representations of peripersonal and extrapersonal space in the human brain: Following damage to the right hemisphere, some patients have demonstrated extinction (di Pellegrino, Làdavas, & Farnè, 1997; Làdavas, di Pellegrino, Farnè, & Zeloni, 1998) or neglect (Halligan & Marshall, 1991) that varies in severity depending on the proximity of a stimulus to the hand.
The effects of hand proximity reviewed thus far suggest that spatial processing is enhanced near the hands, as evidenced through tasks that emphasize sensitivity to low-level visual signals and to movements of visual attention through semantically shallow displays (Abrams et al., 2008; Reed et al., 2006; Schendel & Robertson, 2004). Certainly, reading is a process that requires spatial processing. In particular, efficient reading requires the precise control of movements of attention and the eyes through the text, as well as spatial memory to help retain one's place on the page (see, e.g., Stein, 2003). It is thus quite possible that the preference for holding one's reading material may be attributable in part to the enhancement of spatial processing that occurs near the hands; however, reading also requires semantic processing: the extraction of meaning from the text. At present, it is not known whether or how hand proximity might affect such processing. We conducted the first test of that question in the present study.
In our experiments, we manipulated the proximity of subjects' hands to a visual display while they performed two tasks that drew on semantic processing. In Experiment 1, subjects read sentences and judged whether they were sensible. In Experiments 2 and 3, subjects performed the Stroop color-word interference task. There exist several possibilities regarding the manner in which hand proximity might affect semantic processing. Just as spatial processing is enhanced near the hands, perhaps semantic processing also receives a boost. Such a result would reflect an overall increase in the resources devoted to visual attentional processes near the hands. Conversely, it is possible that the enhanced spatial processing that has been reported is achieved at the expense of semantic processing, reflecting a trade-off between the two that is altered by hand proximity. In that case, semantic processing would be expected to be poorer near the hands. Finally, it remains possible that hand proximity does indeed enhance visuospatial processing for the reasons identified earlier, but that it has no effect on semantic processing. To anticipate the results, in all experiments we found evidence for impoverished semantic processing near the hands.
EXPERIMENT 1
We had subjects read sentences presented on a computer monitor while they held their hands either on the sides of the monitor (near the stimuli; proximal posture) or on their laps (far from the stimuli; distal posture) (see Figure 1). We presented two kinds of sentences, sensible and nonsensical, and asked subjects to make speeded judgments of the sensibleness of each one. For our purposes, we defined a nonsensical sentence as one that followed the conventions of English grammar but contained one word that did not belong according to the context otherwise established by the sentence. For example, a sensible sentence might read: Tim carried his suitcase to the car. A nonsensical version of this same sentence might read: Tim typed his suitcase to the car. Because all sentences followed the conventions of English grammar, subjects needed to process the semantic content of each sentence in order to evaluate its sensibleness.
Method
Subjects. Twenty experimentally naive Washington University undergraduates each participated in one 30-min session. Subjects were compensated with course credit.
Apparatus. The apparatus used in this study was the same as that of Abrams et al. (2008). Subjects sat at a desk facing an 18-in. CRT display on which all stimuli were presented. Viewing distance was maintained at 42.5 cm by use of a chinrest. In the proximal posture (see Figure 1A), subjects rested their arms on pillows and held their hands on two response buttons fixed to either side of the monitor. The buttons were aligned with the vertical center of the display. In the distal posture (Figure 1B), subjects supported a lightweight board on their laps to which the response buttons were fixed, and they held one hand on each button.
Stimuli, Procedure, and Design. Sentences were obtained from Unsworth, Heitz, Schrock, and Engle (2005) and adapted for consistency by S. Hale (personal communication, September 19, 2007). Prior to beginning the experiment, subjects were shown examples of sensible and nonsensical sentences. All stimuli were presented in black on a white background in the middle of the display. Each experimental trial began with presentation of a central cross. After 1,000 msec, a sentence replaced the cross. Subjects indicated whether the sentence was sensible or nonsensical by pressing the corresponding response button. The assignment of buttons to responses was counterbalanced across subjects. If subjects did not respond within 5,000 msec or responded incorrectly, a 1,000-msec error message appeared on-screen. There was a 500-msec intertrial interval.
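For concreteness, the trial sequence just described can be sketched in a few lines of code. The sketch below is purely illustrative and is not the authors' software: it assumes the open-source PsychoPy library, uses the keyboard keys 'f' and 'j' as stand-ins for the two physical response buttons, and uses a hypothetical two-sentence stimulus list.

from psychopy import core, event, visual

win = visual.Window(color='white', units='pix')
fixation = visual.TextStim(win, text='+', color='black')
clock = core.Clock()

# Hypothetical stimuli; the real experiment used 96 sentences per posture.
trials = [('Tim carried his suitcase to the car.', 'sensible'),
          ('Tim typed his suitcase to the car.', 'nonsensical')]

for sentence, answer in trials:
    fixation.draw()
    win.flip()
    core.wait(1.0)                               # 1,000-msec fixation cross
    visual.TextStim(win, text=sentence, color='black').draw()
    win.flip()
    clock.reset()
    keys = event.waitKeys(maxWait=5.0, keyList=['f', 'j'],
                          timeStamped=clock)     # 5,000-msec response deadline
    # 'f' = "sensible" here; button assignment was counterbalanced.
    correct = (keys is not None and
               (keys[0][0] == 'f') == (answer == 'sensible'))
    if not correct:                              # timeout or wrong button
        visual.TextStim(win, text='Error', color='black').draw()
        win.flip()
        core.wait(1.0)                           # 1,000-msec error message
    win.flip()                                   # blank screen
    core.wait(0.5)                               # 500-msec intertrial interval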
We employed a 2 (posture: proximal, distal) × 2 (semantic sensibleness: sensible, nonsensical) within-subjects design. Subjects performed one half of the experiment with their hands in the proximal posture and the other half with their hands in the distal posture, with posture order counterbalanced across subjects. In each posture, subjects completed a block of trials containing 48 sensible and 48 nonsensical sentences, for a total of 192 experimental trials. The order of trials for each posture was assigned randomly. Across subjects, each sentence was used with each posture equally often.
Results and Discussion
One subject was excluded from the analysis because of very low accuracy (~65% correct); however, this subject's removal did not change the overall pattern of results.
Mean response accuracy is shown in Figure 1C for each posture and sentence type. In the distal posture, subjects were equally good at classifying sensible and nonsensical sentences as such. In the proximal posture, however, subjects were poorer at classifying the nonsensical sentences correctly [posture × sensibleness interaction: F(1,18) = 5.24, p < .05, ηp² = .225]. Neither posture nor sensibleness had a main effect on accuracy [Fs(1,18) < 1]. Importantly, response time did not differ across conditions (all Fs < 1; see Table 1), suggesting that it was not simply more difficult to read and respond in the proximal posture.
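The accuracy analysis just reported is a standard 2 × 2 repeated measures ANOVA and is straightforward to reproduce on data in long format. The sketch below is illustrative only: it assumes the pandas and statsmodels libraries, and the per-subject accuracies are synthetic values chosen to mimic the reported pattern, not the actual data.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subj in range(1, 20):                # the 19 subjects retained for analysis
    for posture in ('proximal', 'distal'):
        for sense in ('sensible', 'nonsensical'):
            # Placeholder cell means: lower accuracy only for nonsensical
            # sentences in the proximal posture, as reported above.
            mean_acc = 0.91 if (posture, sense) == ('proximal', 'nonsensical') else 0.95
            rows.append({'subject': subj, 'posture': posture,
                         'sensibleness': sense,
                         'accuracy': mean_acc + rng.normal(0, 0.03)})

df = pd.DataFrame(rows)
# The posture x sensibleness interaction is the term of interest
# [reported above as F(1,18) = 5.24, p < .05].
print(AnovaRM(df, depvar='accuracy', subject='subject',
              within=['posture', 'sensibleness']).fit())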
The present results thus suggest that semantic processing is impoverished near the hands. This occurred despite the known spatial-processing enhancements that have been reported near the hands. On the surface, the results also appear consistent with a simple bias to respond that sentences are more sensible when one's hands are near the stimuli; however, the bias could also arise from a reduction in semantic processing. This is possible because all sentences that were presented to subjects were grammatically correct (e.g., nouns and verbs were in the right place, nouns and verbs agreed, etc.). If semantic processing was impoverished near the hands, then it is possible that subjects instead relied on other aspects of the sentences-in this case, their grammatical structure-to determine their sensibleness. Because all sentences were grammatically correct, subjects might have been more likely to respond "sensible" when semantic processing was impaired, yielding the observed results.
In order to more directly examine the extent to which the extraction of meaning may be impoverished near the hands, we employed a stronger test of semantic processing in a second experiment, using the same postures.
EXPERIMENT 2
The traditional Stroop interference effect (Stroop, 1935) is a cognitive phenomenon in which the meaning of a color word (e.g., WHITE) interferes with a response to the color in which it appears when the color (e.g., black) is incongruent with the meaning (see Egeth, Blecker, & Kamlet, 1969). This effect is thought to reflect the relative speed with which words are read compared with that with which colors are named (see MacLeod, 1991, for a review), and it thus constitutes a strong test of semantic processing. If semantic processing is in fact impoverished near the hands, we would expect to find a reduced Stroop interference effect when subjects hold their hands near the color words.
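In quantitative terms, the interference effect analyzed below is simply the difference between mean response times in the two critical conditions:

Stroop interference = mean RT(incongruent) - mean RT(congruent),

so a smaller value indicates that the word's meaning intruded less on color identification.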
Method
Subjects. Thirty new experimentally naive Washington University undergraduates each participated in one 30-min session. Subjects were compensated with course credit.
Apparatus. All details of the apparatus were identical to those in Experiment 1.
Stimuli, Procedure, and Design. All stimuli were presented against a black background. Each trial began with a white fixation cross (0.64° wide × 0.64° high) presented at the center of the screen. After 500 msec, a letter string replaced the cross. Subjects were instructed to identify as quickly and as accurately as possible the color (green or red) in which the letter string appeared, and they indicated their response by pressing the corresponding response button. The assignment of responses to buttons was counterbalanced across subjects. Subjects were also instructed that they should try to ignore all other aspects of the letter string, including its meaning. Subjects heard an error tone if they pressed the wrong button or did not respond within 1,500 msec. There was a 1,500-msec intertrial interval.
There were three kinds of letter strings: Congruent strings spelled out color words that were consistent with the color in which they appeared (e.g., RED appearing in red); incongruent strings spelled out color words that were inconsistent with the color in which they appeared but consistent with the alternative (i.e., incorrect) response (e.g., RED appearing in green); and neutral strings consisted of a series of Xs in red or green. Half of the neutral strings were three Xs in length (XXX) and the other half were five Xs long (XXXXX), matching the lengths of the strings RED and GREEN. Half of the neutral strings of each length were presented in green, and the other half were presented in red. All letter strings were 0.86° high × approximately 2.14° or 3.42° wide (for the three- and five-letter strings, respectively).
We employed a 2 (posture: proximal, distal) × 3 (congruency: neutral, congruent, incongruent) within-subjects design. Subjects performed one half of the experiment in one posture and the other half in the other posture, with posture order counterbalanced across subjects. There were four blocks of trials in each posture. Each block consisted of 12 neutral, 12 congruent, and 12 incongruent trials, for a total of 288 experimental trials. The order of trials for each posture was assigned randomly. There was a brief break between blocks.
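To make the 12/12/12 composition of each block concrete, the following sketch builds one 36-trial block. It is a hypothetical reconstruction rather than the authors' code; the field names ('word', 'ink', 'cond') and the make_block helper are invented for illustration.

import random

COLORS = ('red', 'green')

def make_block():
    """Build one shuffled block: 12 congruent, 12 incongruent, 12 neutral trials."""
    trials = []
    for ink in COLORS:
        other = 'green' if ink == 'red' else 'red'
        for _ in range(6):
            trials.append({'word': ink.upper(), 'ink': ink, 'cond': 'congruent'})
            trials.append({'word': other.upper(), 'ink': ink, 'cond': 'incongruent'})
        # Neutral strings: half three Xs, half five Xs, in each ink color.
        for length in (3, 5):
            for _ in range(3):
                trials.append({'word': 'X' * length, 'ink': ink, 'cond': 'neutral'})
    random.shuffle(trials)
    return trials                        # 36 trials; 4 such blocks per posture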
Results and Discussion
To more easily illustrate the influence of posture on Stroop interference, we report the results of a 2 (posture: proximal, distal) × 2 (congruency: congruent, incongruent) repeated measures ANOVA, excluding data from the neutral trials.1 The mean response times are shown in Figure 2. We obtained a typical Stroop interference effect, in which the to-be-ignored meaning of the color words influenced the speed with which subjects could name their colors. Specifically, subjects were on average 19.7 msec slower to respond in the incongruent condition than in the congruent condition [main effect of congruency: F(1,29) = 11.04, p < .005, ηp² = .276]; however, we found dramatically reduced interference from incongruent stimuli in the proximal posture (M = 8.93 msec, SD = 24.2) compared with the distal posture (M = 30.44 msec, SD = 52.5) [posture × congruency interaction: F(1,29) = 5.63, p < .03, ηp² = .162]. There was no main effect of posture on response time [F(1,29) = 3.09, p = .09]. Analysis of the error rates did not indicate the presence of a speed-accuracy trade-off.2
The Stroop interference effect has been shown to be highly reliable (MacLeod, 1991), and, indeed, we obtained it here. Nevertheless, we found that the magnitude of the effect was dramatically reduced when subjects held their hands near the stimuli. Reduced interference near the hands could be attributable to impoverished semantic processing, or it could be attributable to enhanced color processing. If the latter was true, we would expect to find faster response times to neutral stimuli in the proximal posture than in the distal posture; however, response times to neutral stimuli in the proximal (M = 384.86 msec, SD = 48.0) and distal (M = 396.53 msec, SD = 69.4) postures were not reliably different from one another [t(29) = 1.37, p = .182]. Interestingly, however, we did find a positive correlation between the difference in the size of the Stroop effect across postures and the difference in neutral-trial response times across postures (r = .65, p < .001). That is, the subjects who showed the largest reduction in Stroop interference when in the proximal condition also showed the largest response time advantage on neutral trials in the proximal condition. Although enhanced color processing does not appear to be responsible for our overall effect, therefore, there is some evidence of enhanced color processing in some subjects.
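The correlational analysis amounts to computing, for each subject, the across-posture difference in the Stroop effect and the across-posture difference in neutral-trial response time, and then correlating the two. A minimal sketch, assuming scipy and synthetic per-subject values (the random placeholder data below will not reproduce the reported r = .65):

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 30                                       # subjects in Experiment 2

# Placeholder per-subject values in msec; real values would come from the task.
stroop_prox = rng.normal(8.93, 24.2, n)      # Stroop effect, proximal posture
stroop_dist = rng.normal(30.44, 52.5, n)     # Stroop effect, distal posture
neutral_prox = rng.normal(384.86, 48.0, n)   # neutral-trial RT, proximal
neutral_dist = rng.normal(396.53, 69.4, n)   # neutral-trial RT, distal

stroop_reduction = stroop_dist - stroop_prox     # interference reduction near hands
neutral_advantage = neutral_dist - neutral_prox  # neutral-trial RT advantage near hands
r, p = pearsonr(stroop_reduction, neutral_advantage)
print(f'r = {r:.3f}, p = {p:.3f}')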
Another potential limitation to our interpretation of the results is the possibility that the proximal posture simply made reading more difficult, perhaps by virtue of being less comfortable or less natural than the distal posture. If that was the case, our results would not be due to impoverished semantic processing near the hands, but instead to a sort of dual-task interference between maintenance of an uncomfortable posture (the proximal posture) and performance of the (semantic) tasks. In order to rule out that possibility, and to further explore the color-processing enhancement that some subjects exhibited in the proximal posture, we conducted a third experiment.
EXPERIMENT 3
Up to this point, we have explained our effects in terms of the proximity of the hands to the stimuli; however, it is possible that we have confounded hand proximity with the relative awkwardness or unnaturalness of maintaining each posture. In the third experiment, therefore, we had subjects perform the same Stroop task as before with a similar manipulation of hand proximity, but we selected a distal posture that was as awkward and unusual as the proximal posture.
We were also interested in examining more closely the correlation found in Experiment 2 between Stroop-effect difference and neutral-trial difference across postures. Although the overall pattern of results did not suggest that our effect was caused by enhanced color processing near the hands, the possibility remained open that enhanced color processing played a role. The extent to which we found a similar correlation in the present experiment could shed light on the mechanism through which Stroop interference is reduced.
Method
Subjects. Sixteen new experimentally naive Washington University undergraduates each participated in one 30-min session. Subjects were compensated with course credit.
Apparatus. Stimuli were presented on a 15-in. LCD flat-panel display within an area 28.1° high × 36.9° wide. Viewing distance was fixed at 45.7 cm by a chinrest. In the distal posture, subjects sat with their hands held behind their backs and positioned over buttons that were fixed to either side of the lower back of their chair. In the proximal posture, the response buttons were affixed to either side of the back of the flat-panel display. In this posture, subjects rested their arms on pillows, wrapped their hands around either side of the display, and rested their fingers on the response buttons.
Stimuli, Procedure, and Design. All details of the stimuli, procedure, and design were identical to those in Experiment 2, with the following exceptions: The fixation cross was 0.40° wide × 0.40° high; all letter strings were 0.40° high × approximately 1.19° or 2.19° wide (for the three- and five-letter strings, respectively); and there was a full (36-trial) practice block prior to beginning the experimental blocks for each posture. The practice blocks were included primarily to allow subjects to acclimate themselves to buttonpressing while in the potentially unusual postures.
Results and Discussion
We again report the results of a 2 (posture: proximal, distal) × 2 (congruency: congruent, incongruent) repeated measures ANOVA, excluding data from the neutral trials.3 The mean response times are shown in Figure 3. We obtained a typical Stroop interference effect, in which the to-be-ignored meaning of the color words influenced the speed with which subjects could name their colors. Specifically, subjects were on average 20.7 msec slower to respond in the incongruent condition than in the congruent condition [main effect of congruency: F(1,15) = 14.93, p < .0025, ηp² = .499]; however, we found dramatically reduced interference from incongruent stimuli in the proximal posture (M = 10.73 msec, SD = 17.2) compared with the distal posture (M = 30.70 msec, SD = 34.8) [posture × congruency interaction: F(1,15) = 5.42, p < .035, ηp² = .265]. There was no main effect of posture on response time [F(1,15) = 1.05, p = .32]. Analysis of the error rates did not indicate the presence of a speed-accuracy trade-off.4
In Experiment 2, some subjects showed a pattern of results suggesting that their reduced Stroop interference effect was due in part to enhanced color processing instead of impoverished semantic processing. In the present experiment, however, the correlation between Stroop-effect difference and neutral-trial difference across postures was nonexistent (r = .009, p = .974). Importantly, however, we continued to find a marked reduction in the Stroop interference effect while subjects were in the proximal posture. At the present time, it is unclear how to account for the difference between experiments, but it is clear that the effects of posture on Stroop interference did not require an enhancement in the processing of color information.
The results of the present experiment also provide strong confirmation that our earlier effects cannot be explained by difficulties in maintaining the proximal posture. Here, the distal posture was similarly awkward, if not more so! In fact, at the conclusion of the study, all of our subjects acknowledged that the distal posture was less comfortable than the proximal posture. In support of those self-reports, the data revealed that subjects were less accurate in the distal posture. If our effect was driven by postural discomfort, then we would have expected to see a reduction in Stroop interference in the distal condition compared with the proximal condition in the present experiment. Instead, we replicated the findings of Experiment 2, in that the Stroop interference effect was markedly reduced when the hands were held near the stimuli. This finding strongly supports the notion that proximity of the hands was the primary factor that influenced semantic processing. More specifically, semantic processing near the hands did indeed seem to be impoverished.
GENERAL DISCUSSION
Much of the reading that we do occurs in our hands, and, indeed, many readers even express a preference for holding their reading material. Previous research has revealed enhanced spatial processing for the area near the hands (Abrams et al., 2008; Reed et al., 2006; Schendel & Robertson, 2004), and it seems possible that the preference for holding reading material could stem in part from those enhancements. Up until now, however, it was not known whether semantic processing, also critical for reading, is affected by the proximity of the hands. The present experiments have shown that there is a reduction in semantic processing in the region near the hands.
Before proceeding with the rest of the discussion, there is one alternative interpretation worth considering. Perhaps the unusualness of the proximal posture, and not hand proximity, was responsible for the observed effects. Importantly, several elements of our results can be used to rule out that explanation. In Experiment 3, we had subjects adopt a distal posture that was more awkward than the proximal posture, yet we found the same pattern of results as in Experiment 2. Next, if the proximal posture affected Stroop performance because the posture was difficult to maintain, then we would have expected to observe a main effect of posture on response time in all experiments (that is, a consistent decrement in performance in the proximal posture reflecting its inherent difficulty), which we did not. (Indeed, subjects were actually somewhat faster, but not significantly so, in the proximal conditions of Experiments 2 and 3.) Finally, we found no differences between the two postures in the time required to read and respond to the sentences in Experiment 1, which would also not be expected if reading was more difficult in one of the postures.
Additionally, the evidence just discussed is in accordance with findings from previous studies. Abrams et al. (2008), using the exact same postural manipulation we used in Experiments 1 and 2, also reported the absence of main effects of posture on their tasks of spatial and temporal attention. Meanwhile, Davoli and Abrams (2009) had their subjects simply imagine themselves in a proximal or distal posture, thus completely eliminating the need for physical postural differences. They nevertheless reported the same effects on rate of search through a visual display as were reported in Abrams et al. Our findings thus appear to arise from the proximity of the hands to the stimuli being viewed, rather than from the postures themselves.
Our results are consistent with the presence of a trade-off between semantic processing and spatial processing that can be altered by hand proximity: The enhanced spatial processing that has been observed near the hands (Abrams et al., 2008; Reed et al., 2006; Schendel & Robertson, 2004) might be achieved at the expense of semantic processing (the present study). Indeed, it seems plausible that visual processing near the hands would be biased toward the spatial properties of objects and away from semantic ones. Objects near the hands may be critically important because they might be objects that need to be grasped or obstacles that should be avoided. And processing that is biased toward spatial attributes may be more likely to facilitate the necessary motor responses in such cases. The bias may not always be beneficial for tasks like reading, however.
It is worth noting that our findings are not the only example of a trade-off between spatial and semantic processing. Results from studies on the processing of emotional stimuli have revealed a similar trade-off. In particular, viewing an emotionally arousing image has been shown to produce enhanced spatial processing near the image (Phelps, Ling, & Carrasco, 2006) but impaired semantic processing of subsequently presented words (Ihssen, Heim, & Keil, 2007). Despite the overt similarities, it remains to be seen precisely how these results are related to those reported in the present article.
Although we have suggested that our results reflect impoverished semantic processing near the hands, an alternative account would attribute the findings instead to an enhancement of cognitive control near the hands. This account is especially appropriate in explaining the results from the Stroop task. To perform the Stroop task, one must suppress the automatic response of reading the word and instead attend to its color, a process that relies critically on cognitive control. Indeed, there is precedent for interpreting the Stroop effect as a measure of cognitive control. For example, Koch, Holland, Hengstler, and van Knippenberg (2009) recently found a marked reduction in Stroop interference when subjects took a step backward prior to performing the task, compared with when subjects took steps forward or to the side. Koch et al. suggested that stepping backward induced an avoidance mindset, a state in which it would be important to have boosted cognitive control. Further work will be needed to determine which explanation is most appropriate.
Although we began this article with the example of reading on a computer screen versus reading a hard copy, we tested subjects on a computer screen only. To be sure, our primary interest was exploring not the potential effects of medium on reading, but rather how reading changes near to (as often occurs when reading on paper) versus far from (as often occurs when reading on a monitor) the hands. The broader notion of reading near the hands is more conventionally accomplished with a hard copy, however: A printed article may be read on a desk near the hands, and when we sit down with a good book, we may hold it comfortably in our laps. And although more and more electronic reading is occurring near the hands with the profusion of mobile handheld devices, the influence of hand proximity on reading via hard copy remains an important and as yet unanswered question. Related to that issue, it is possible that many prefer to hold a hard copy rather than read on a computer monitor because of expertise in reading in this manner. Might the practice that some have with reading in a certain medium outweigh the potential effects of hand proximity?5 The resolution of all these issues will require further study.
Do the results of the present study mean that reading should never occur near the hands? Certainly not. In many cases, this kind of reading is unavoidable and may even be beneficial. As has been discussed, spatial processing, which is essential for successful reading, is enhanced near the hands. It appears, however, that this enhancement might have an adverse impact on the extraction of meaning, which is also clearly important for reading. Because spatial and semantic processing cannot both be optimized, it appears that readers may be faced with a difficult choice to make when choosing how to read a document. We suggest that readers be aware of the advantages and disadvantages that come with reading near or far from the hands. Depending on the type of text that is being read and the ultimate goals that the reader has, one type of processing may be more favorable than the other. The bottom line, however, is that when meaning really matters, look but don't touch.
NOTES
1. When the neutral condition was included in the ANOVA, we obtained the same Stroop interference effect as before: Subjects were slower to respond in the incongruent condition than in the neutral and congruent conditions [main effect of congruency: F(2,58) = 10.13, p < .0005, ηp² = .259]. Importantly, we continued to find dramatically reduced interference from incongruent stimuli in the proximal posture [posture × congruency interaction: F(2,58) = 3.85, p < .03, ηp² = .117]. There was again no main effect of posture on response time [F(1,29) = 2.96, p = .10].
Facilitation effects for each posture were calculated by subtracting the mean response time on neutral trials from the mean response time on congruent trials. There was no significant facilitation in either the proximal [M = 22.99 msec, SD = 15.01 msec; t(29) = 1.09, n.s. compared with zero] or the distal [M = -1.56 msec, SD = 32.38 msec; t(29) = 0.264, n.s. compared with zero] posture, nor were the facilitation effects for each posture different from one another [t(29) = 0.230, n.s.]. The absence of facilitation in the Stroop task is not uncommon, especially when using a nonword control like a string of Xs, as we did here (MacLeod, 1991).
2. The overall error rate was 3.5%. Subjects made more errors on incongruent trials (4.9%) than on congruent (2.5%) or neutral (3.0%) trials [F(2,58) = 10.8, p < .001, ηp² = .272]. Subjects also made more errors in the distal posture (4.2%) than in the proximal posture (2.8%) [F(1,29) = 6.6, p < .02, ηp² = .186]. Posture and congruency did not interact [F(2,58) = 1.4, p = .25]. Error rates for the distal posture for congruent and incongruent trials were 3.7% and 5.7%, respectively. Error rates for the proximal posture for congruent and incongruent trials were 1.4% and 4.2%, respectively.
3. When the neutral condition was included in the ANOVA, we obtained the same Stroop interference effect as before: Subjects were slower to respond in the incongruent condition than in the neutral and congruent conditions [main effect of congruency: F(2,30) = 12.94, p < .001, ηp² = .463]. Importantly, we continued to find dramatically reduced interference from incongruent stimuli in the proximal posture [posture × congruency interaction: F(2,30) = 4.93, p < .015, ηp² = .247]. There was again no main effect of posture on response time [F(1,15) = 0.26, p = .62].
4. The overall error rate was 3.4%. There was no reliable effect of congruency on error rate [F(2,30) = 1.87, p = .17]. Subjects made more errors in the distal posture (3.9%) than in the proximal posture (2.9%) [F(1,15) = 4.6, p < .05, ηp² = .234]. Posture and congruency did not interact [F(2,30) = 0.21, p = .82]. Error rates for the distal posture for congruent and incongruent trials were 4.3% and 4.4%, respectively. Error rates for the proximal posture for congruent and incongruent trials were 3.4% and 2.9%, respectively.
5. We thank Sian Beilock for bringing this possibility to our attention.
REFERENCES
Abrams, R. A., Davoli, C. C., Du, F., Knapp, W. H., III, & Paull, D. (2008). Altered vision near the hands. Cognition, 107, 1035-1047.
Bekkering, H., & Neggers, S. F. W. (2002). Visual search is modulated by action intentions. Psychological Science, 13, 370-374.
Davoli, C. C., & Abrams, R. A. (2009). Reaching out with the imagination. Psychological Science, 20, 293-295.
di Pellegrino, G., Làdavas, E., & Farnè, A. (1997). Seeing where your hands are. Nature, 388, 730.
Egeth, H. E., Blecker, D. L., & Kamlet, A. S. (1969). Verbal interference in a perceptual comparison task. Perception & Psychophysics, 6, 355-356.
Fagioli, S., Hommel, B., & Schubotz, R. I. (2007). Intentional control of attention: Action planning primes action-related stimulus dimensions. Psychological Research, 71, 22-29.
Fischer, M. H., & Zwaan, R. A. (2008). Embodied language: A review of the role of motor system in language comprehension. Quarterly Journal of Experimental Psychology, 61, 825-850.
Goldin-Meadow, S. (2006). Talking and thinking with our hands. Current Directions in Psychological Science, 15, 34-39.
Graziano, M. S. A., Hu, X. T., & Gross, C. G. (1997). Visuospatial properties of ventral premotor cortex. Journal of Neurophysiology, 77, 2268-2292.
Halligan, P. W., & Marshall, J. C. (1991). Left neglect for near but not far space in man. Nature, 350, 498-500.
Holt, L. E., & Beilock, S. L. (2006). Expertise and its embodiment: Examining the impact of sensorimotor skill expertise on the representation of action-related text. Psychonomic Bulletin & Review, 13, 694-701.
Ihssen, N., Heim, S., & Keil, A. (2007). The costs of emotional attention: Affective processing inhibits subsequent lexico-semantic analysis. Journal of Cognitive Neuroscience, 19, 1932-1949.
Iverson, J. M., & Goldin-Meadow, S. (2005). Gesture paves the way for language development. Psychological Science, 16, 367-371.
Koch, S., Holland, R. W., Hengstler, M., & van Knippenberg, A. (2009). Body locomotion as regulatory process: Stepping backward enhances cognitive control. Psychological Science, 20, 549-550.
Làdavas, E., di Pellegrino, G., Farnè, A., & Zeloni, G. (1998). Neuropsychological evidence of an integrated visuotactile representation of peripersonal space in humans. Journal of Cognitive Neuroscience, 10, 581-589.
MacLeod, C. M. (1991). Half a century of research on the Stroop effect: An integrative review. Psychological Bulletin, 109, 163-203.
Makin, T. R., Holmes, N. P., & Zohary, E. (2007). Is that near my hand? Multisensory representation of peripersonal space in human intraparietal sulcus. Journal of Neuroscience, 27, 731-740.
Phelps, E. A., Ling, S., & Carrasco, M. (2006). Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychological Science, 17, 292-299.
Reed, C. L., Grubb, J. D., & Steele, C. (2006). Hands up: Attentional prioritization of space near the hand. Journal of Experimental Psychology: Human Perception & Performance, 32, 166-177.
Schendel, K., & Robertson, L. C. (2004). Reaching out to see: Arm position can attenuate human visual loss. Journal of Cognitive Neuroscience, 16, 935-943.
Stein, J. (2003). Visual motion sensitivity and reading. Neuropsychologia, 41, 1785-1793.
Stroop, J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643-662.
Unsworth, N., Heitz, R. P., Schrock, J. C., & Engle, R. W. (2005). An automated version of the operation span task. Behavior Research Methods, 37, 498-505.
Vishton, P. M., Stephens, N. J., Nelson, L. A., Morra, S. E., Brunick, K. L., & Stevens, J. A. (2007). Planning to reach for an object changes how the reacher perceives it. Psychological Science, 18, 713-719.
Wexler, M., Kosslyn, S. M., & Berthoz, A. (1998). Motor processes in mental rotation. Cognition, 68, 77-94.
Wohlschläger, A. (2000). Visual motion priming by invisible actions. Vision Research, 40, 925-930.
Wohlschläger, A., & Wohlschläger, A. (1998). Mental and manual rotation. Journal of Experimental Psychology: Human Perception & Performance, 24, 397-412.
CHRISTOPHER C. DAVOLI, FENG DU, JUAN MONTANA, SUSAN GARVERICK, AND RICHARD A. ABRAMS
Washington University, St. Louis, Missouri
AUTHOR NOTE
Correspondence concerning this article should be addressed to C. C. Davoli, Department of Psychology, Washington University, Campus Box 1125, One Brookings Drive, St. Louis, MO 63130 (e-mail: [email protected]).