Original research report

The Fragility of the Near-Hand Effect

Authors: Jill A. Dosso and Alan Kingstone


Recent literature has demonstrated that hand position can affect visual processing, a set of phenomena termed Near-Hand Effects (NHEs). Across four studies we looked for single-hand NHEs on a large screen when participants were asked to discriminate stimuli based on size, colour, and orientation (Study 1), to detect stimuli after a manipulation of hand shaping (Study 2), to detect stimuli after the introduction of a peripheral cue (Study 3), and finally to detect stimuli after a manipulation of screen orientation (Study 4). Each study failed to find an NHE. Further examination of the pooled data using a Bayesian analysis also failed to reveal positive evidence for faster responses or larger cueing effects near a hand. These findings suggest that at least some NHEs may be surprisingly fragile, which dovetails with the recent proposition that NHEs may not form a unitary set of phenomena (Gozli & Deng, 2018). These results imply that visual processing may be less sensitive to hand position across measurement techniques than previously thought, and point to a need for well-powered, methodologically rigorous studies on this topic in the future.

Keywords: near hand effect, attention, embodiment, hand posture, cueing effect
DOI: http://doi.org/10.1525/collabra.167
Submitted on 27 Apr 2018; Accepted on 02 Jul 2018

Recent evidence suggests that one’s hands can bias attention and perception for nearby items: the Near-Hand Effect (NHE). Reed and colleagues studied such effects in a healthy population by measuring target detection speed for hand-near and hand-far locations on a screen (Reed, Grubb, & Steele, 2006). They found that target items were detected more rapidly when located near a hand, regardless of the validity of a preceding cue. These findings have subsequently been extended to the functional surfaces of recently used tools, suggesting that they represent a change in attention or stimulus processing that is action-related (Brockmole, Davoli, Abrams, & Witt, 2013; Reed, Betz, Garza, & Roberts, 2010). In addition to the finding that near-hand stimuli are detected more quickly (Reed et al., 2006), data suggest that near-hand stimuli are biased towards being perceived as figure rather than ground (Cosman & Vecera, 2010), that the hands can effectively “shield” attention from distractors (Davoli & Brockmole, 2012), and that involuntary, reflexive shifts of attention are sensitive to hand position while voluntary shifts of attention are not (Le Bigot & Grosjean, 2016). Higher-level cognitive processes, such as semantic processing (Davoli, Du, Montana, Garverick, & Abrams, 2010), the processing of emotional stimuli (Du, Wang, Abrams, & Zhang, 2017), and the maintenance of short-term memory contents (Tseng & Bridgeman, 2011), have also been shown to be sensitive to nearby hands. Thus, a number of different cognitive processes are sensitive to hand position and can be considered under the umbrella of NHE(s).

Two primary theories have been advanced to explain NHEs. The original theory, sometimes called the attentional prioritization theory, claims that near-hand space is prioritized by attention (Reed et al., 2006). Reed et al. suggest that spatial attention could be biased based on peripersonal space representations. Bimodal neurons in frontal and parietal regions, including the ventral premotor cortex, have receptive fields that are sensitive to tactile stimulation of a particular body part (e.g. a hand, the face) as well as visual information originating near that body part (Graziano & Gross, 1998). Crucially, the visual receptive field of such neurons is body-part centred. So, a hand-centred bimodal neuron would respond to tactile stimulation of the hand and to visual stimuli occurring near the hand (regardless of the hand’s position in space). Reed and colleagues proposed that the distribution of spatial attention may be biased towards the near-hand space by the activity of these bimodal neurons (Reed et al., 2006). More recently, Gozli and colleagues have advanced a visual pathway theory of the NHE based on the subcortical structure of the lateral geniculate nucleus (Gozli, West, & Pratt, 2012). They propose that near-hand stimuli are preferentially processed by the magnocellular visual pathway at the expense of processing by the parvocellular visual pathway. This is supported by data showing differential effects of near-hands on task features that are thought to rely on each stream. Evidence for this theory includes data showing that holding a display with two hands facilitates temporal acuity and impairs spatial acuity (Gozli et al., 2012) and increases memory for orientation information while reducing memory for colour information (Kelly & Brockmole, 2014).

Recently, evidence has suggested that dual- and single-hand tasks might produce different patterns of results. In one study, two hands near stimuli were found to improve temporal acuity at the expense of spatial acuity, perhaps by biasing processing towards the magnocellular pathway. However, a single hand placed near the display showed the opposite pattern: increased sensitivity for the spatial task, consistent with a parvocellular pathway bias (Bush & Vecera, 2014). In addition, colour discrimination (thought to be parvocellular) was enhanced by the presence of a single near hand (Dufour & Touzalin, 2008, Study 4). Note however that these single-hand data are contradicted by neural recording work performed with rhesus monkeys which found that a single near hand sharpened orientation tuning in V2, which would be consistent with a bias towards the magnocellular pathway (Perry, Sergio, Crawford, & Fallah, 2015).

To address this discrepancy, the current work directly compared the impact of single near hands on discrimination performance of three stimulus features that are thought to be preferentially processed by the magnocellular or parvocellular visual pathways. Based on the work of Bush and Vecera (2014), one would predict that single near hands should improve performance on a (parvocellular) colour discrimination task (Derrington & Lennie, 1984; Lee, Pokorny, Smith, Martin, & Valberg, 1990; Livingstone & Hubel, 1987). The magnocellular pathway is thought to carry low spatial frequency information like orientation (Bar, 2003; Kelly & Brockmole, 2014; Maunsell, Nealey, & DePriest, 1990), so one would predict that single near hands would impair performance on orientation discrimination. Finally, based on the higher spatial acuity of the parvocellular pathway one would predict that single near hands should enhance discrimination of stimuli based on their size (Leonova, Pokorny, & Smith, 2003).

Study 1

Study 1 investigated the potential contributions of single near hands to rapid discrimination of three stimulus features: colour, orientation, and size. Furthermore, Study 1 evaluated whether any such near-hand effects follow the hand as it is placed across the midline (Lloyd, Azañón, & Poliakoff, 2010).

Materials and Methods

Participants were recruited from a pool of undergraduate students compensated with course credit and a paid pool of members of the public and gave informed consent prior to participating. Ethical approval for this and all work reported in this manuscript was obtained from the Behavioural Research Ethics Board of the University of British Columbia. Thirty-one participants were tested. One was excluded due to a failure to follow task instructions. Demographic data were missing for one other participant. Participants were classified (Oldfield, 1971) as right-handed (n = 23), left-handed (n = 2), or ambidextrous (n = 4). Mean age was 23.4 years (SD = 5.3 years). Participants’ self-reported gender was female (n = 26) or male (n = 3), and their ethnicity was Asian (n = 17), Caucasian (n = 10), Multiethnic (n = 1), or not disclosed (n = 1).

The task was programmed and delivered using PsychoPy 1.82.00 (Peirce, 2007). Participants were seated at a large (42-inch, 60 Hz) touchscreen in a horizontal, table-like configuration. Responses were made as keypresses on a small keypad affixed to a black lap-desk (a small tray attached to a cushion) that was placed in their lap. Stimuli were small colourful lines (Figure 1) that varied in orientation (3.5°, 176.5°), size (2.4 cm, 3.0 cm), and colour. The RGB colour values were coded as [.84, .45, –.48] and [.84, .29, –.48]. Values represent points on each colour channel between the minimum (–1) and maximum (1), expressed relative to grey.
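These colour coordinates follow PsychoPy's signed "rgb" colour space, in which each channel runs from –1 to 1 around mid-grey at 0. As a point of reference, the sketch below converts such values to conventional 0–255 RGB; the function name is our own, and this is an illustration of the documented mapping, not code from the study.

```python
def signed_rgb_to_255(channels):
    """Map PsychoPy signed rgb values (-1..1) to conventional 0-255 RGB."""
    return tuple(round((v + 1) / 2 * 255) for v in channels)

# The two stimulus colours reported for Study 1.
colour_a = signed_rgb_to_255([0.84, 0.45, -0.48])  # (235, 185, 66)
colour_b = signed_rgb_to_255([0.84, 0.29, -0.48])
```

Both colours differ only on the green channel, consistent with a subtle colour discrimination.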

Figure 1 

Sample stimuli (not shown to size).

Each trial began with a central fixation cross, which remained on throughout the trial. The stimulus appeared at a distance of 22 cm either to the left or the right of fixation after a delay of 800, 1200, or 1600 ms, and was displayed for 250 milliseconds. The participants’ task was to discriminate the relevant stimulus feature. There were six positions, presented in a randomized order: uncrossed-right, uncrossed-left, crossed-right, crossed-left, far-right, and far-left. In the crossed positions (Figure 2), the hand rested flat on the screen immediately below the contralateral stimulus location (at a distance of 4.5 cm). In the uncrossed positions, the hand rested flat below the ipsilateral stimulus location (4.5 cm). In the far positions, the hand rested in the lap at midline (a distance of 39.3 cm from either stimulus location). Responses were made with the non-specified hand on two buttons of a response keypad located in the centre of a lapdesk placed on the lap at the edge of the screen and 16 cm below the height of the screen surface. The mappings between left and right response keys and stimulus features were randomized across participants. For each position, 48 randomized trials were performed (stimuli were balanced across left and right locations, colour, size, and orientation). This procedure was repeated three times, once for each stimulus feature (colour, orientation, and size, presented in a randomized order) resulting in a total of 18 blocks of trials. When a new feature was introduced, there were two practice blocks of 16 sample trials, one for each hand. An example sequence of events would be as follows: a subject would be instructed to press the left key when they saw a 2.4 cm-long stimulus, and the right key when they saw a 3.0 cm-long stimulus (i.e. to discriminate stimuli based on the feature of size). 
They would practice this task for 32 trials divided equally between a block of 16 trials for the left hand and a block of 16 trials for the right hand. Then, they would perform six blocks of this task, one per hand position. This would be followed by a comparable set of two practice blocks and six experimental blocks each for the colour and orientation features. The entire experiment took approximately one hour.
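The block structure described above implies the following per-participant trial counts; this is simple arithmetic from the reported design, shown here as a quick check.

```python
# Design parameters reported for Study 1.
positions = 6          # uncrossed/crossed/far, each left and right
trials_per_block = 48  # randomized trials per position
features = 3           # colour, orientation, size

blocks = positions * features                     # 18 experimental blocks
experimental = blocks * trials_per_block          # 864 experimental trials
practice = 2 * 16 * features                      # 96 practice trials (2 blocks of 16 per feature)
```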

Figure 2 

An example of the crossed-hand posture. On-screen stimuli are not shown; black portion of the image represents a dark curtain that was drawn in front of participants during testing.


Data for all studies are available at https://osf.io/xhu7z/. Data were removed if reaction times (RTs) were under 150 milliseconds or over 2400 milliseconds (approximately 2.5 standard deviations above the mean reaction time). Data from the colour trials were excluded for two participants because of technical issues during testing, and data from the orientation trials were excluded for one participant for the same reason. Data from the orientation trials were additionally excluded from five participants because they failed to discriminate the targets’ orientations above 55% accuracy.
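The reaction-time trimming rule described above amounts to a simple filter. The sketch below is illustrative only (the function name and example RTs are hypothetical), applying the Study 1 bounds of 150–2400 ms:

```python
def trim_rts(rts_ms, lower=150, upper=2400):
    """Keep only reaction times (ms) inside the inclusion window."""
    return [rt for rt in rts_ms if lower <= rt <= upper]

# Hypothetical RTs: anticipations (<150 ms) and lapses (>2400 ms) are removed.
kept = trim_rts([120, 430, 515, 2600, 980])  # [430, 515, 980]
```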

Accuracy data were analysed using a repeated-measures ANOVA (Figure 3). Feature (colour, orientation, size), non-responding (NR) hand (left, right), position (uncrossed, crossed, lap), and target location (left, right) were included as within-subjects independent factors. Error rate was the dependent variable. There was an interaction between position and target location (F(2, 42) = 4.70, p = .01, partial η2 = .18). More errors were made for right-side as compared to left-side targets when uncrossed hand positions were used (F(1, 22) = 8.30, p = .009, partial η2 = .27). No other main effects or interactions reached significance. Crucially, the three-way interaction between non-responding hand, position, and target location was non-significant (F(2, 42) = .25, p = .78, partial η2 = .01), as was the four-way interaction between feature, non-responding hand, position, and target location (F(4, 84) = .40, p = .81, partial η2 = .02).
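For reference, the partial η² values reported throughout follow from each F statistic and its degrees of freedom via the standard identity η²p = (F · df_effect) / (F · df_effect + df_error). The sketch below applies this identity; it reproduces the reported values but is not code from the authors' analysis.

```python
def partial_eta_squared(f, df_effect, df_error):
    """Recover partial eta-squared from an F statistic and its degrees of freedom."""
    return (f * df_effect) / (f * df_effect + df_error)

# Position x target location interaction above: F(2, 42) = 4.70 -> partial eta^2 ~ .18
interaction = round(partial_eta_squared(4.70, 2, 42), 2)  # 0.18
# Follow-up for uncrossed positions: F(1, 22) = 8.30 -> ~ .27
follow_up = round(partial_eta_squared(8.30, 1, 22), 2)    # 0.27
```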

Figure 3 

Error rates for each position, feature, and target location. Means and SEs are shown.

Next, RTs (for correct responses only) were analysed using a repeated-measures ANOVA (Figure 4). Feature (colour, orientation, size), non-responding hand (left, right), position (uncrossed, crossed, lap), and target location (left, right) were included as within-subjects independent factors, and RT was the dependent variable.

Figure 4 

Reaction Times for Study 1, mean ± SE, for each position, feature, and target.

First, there was a main effect of feature (F(2, 42) = 52.20, p < .001, partial η2 = .71). Reaction time was slower for the orientation feature than the colour and size features. This effect simply means that the three discrimination tasks were not equally difficult, i.e. orientation judgements were more difficult. Second, there was a main effect of non-responding hand (F(1, 21) = 12.29, p = .002, partial η2 = .37). The main effect of non-responding hand likely reflects the fact that our (predominantly right-handed) sample responded more quickly when the right hand was used to make the response. Again, the critical predicted interactions were not significant. Specifically, there was no three-way interaction between non-responding hand, position, and target location (F(2, 42) = 1.61, p = .21, partial η2 = .07), nor was there a four-way interaction including feature (F(4, 84) = .38, p = .82, partial η2 = .02).

Finally, to correct for any speed-accuracy trade-offs, efficiency score data were calculated as reaction time per participant per condition divided by proportion of accurate responses (Figure 5). A repeated-measures ANOVA was performed, as above. There was a main effect of feature (F(2, 42) = 32.0, p < .001, partial η2 = .60); responses were most efficient when participants judged target colour. There was also an interaction between position and target location (F(2, 42) = 5.7, p = .007, partial η2 = .21). For the uncrossed positions only (left hand on the left or right hand on the right), the left target location produced more efficient responses than right target locations (F(1, 22) = 6.0, p = .02, partial η2 = .21). No other effects or interactions were found. As before, there was no three-way interaction between non-responding hand, position, and target location (F(2, 42) = .02, p = .98, partial η2 = .001), nor was there a four-way interaction including feature (F(4, 84) = .34, p = .85, partial η2 = .02).
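The efficiency (inverse efficiency) score described above combines speed and accuracy into a single measure: mean RT divided by proportion correct, so that lower scores are better. A minimal sketch with hypothetical values:

```python
def efficiency_score(mean_rt_ms, proportion_correct):
    """Inverse efficiency score: mean RT / proportion correct (lower = better)."""
    if proportion_correct <= 0:
        raise ValueError("proportion correct must be positive")
    return mean_rt_ms / proportion_correct

# A 600 ms mean RT at 90% accuracy scores worse (higher) than 600 ms at 100%.
score = efficiency_score(600, 0.9)  # ~666.7 ms
```

Dividing by accuracy penalizes conditions in which fast responding was bought at the cost of errors.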

Figure 5 

Efficiency scores for Study 1, mean ± SE.


This study failed to obtain the crucial hand by position by target location interaction for any of the three target features (colour, orientation, size) that participants were asked to discriminate. Thus, evidence was not found for a magnocellular enhancement near the hand in the form of reduced colour discrimination performance or enhanced orientation or size discrimination, as would have been predicted by the work of Perry et al. (2015). Moreover, evidence was not found for the reverse effect: a parvocellular enhancement near the hand in the form of enhanced colour discrimination or reduced orientation or size discrimination (per Bush & Vecera, 2014; Dufour & Touzalin, 2008).

Despite many reports of near-hand effects in the literature, the current work is not the first to fail to find evidence of an NHE. Some investigations report significant changes in performance only for the left or right hand, or fail to detect NHEs when a large number of hand positions are tested (Langerak, La Mantia, & Brown, 2013; Le Bigot & Grosjean, 2016; Lloyd et al., 2010; Schultheis & Carlson, 2013; Thomas, 2013), and more recently others have failed to find predicted near-hand effects on visual memory, visual search, and change detection across multiple experiments (Andringa, Boot, Roque, & Ponnaluri, 2018; Sahar & Makovski, 2017). We should also note that across 41 near-hand effect experiments in 15 papers (2006 to 2016), the average sample size was 30.5 participants per group (median = 28.0), and the use of undergraduate samples was common. Thus, the present sample is not unusual for this literature. However, it is possible that there are some methodological differences between the current study and previous work that, until now, have not been thought to be critical to obtaining the NHE. Study 2 attempted to localize this putative methodological factor. This is important for theoretical reasons, as identifying a factor that can extinguish the NHE should shed light on its underlying mechanism. In addition, the use of geometric stimuli, which may not map cleanly onto the magnocellular and parvocellular pathways, limits the interpretation of this study (and, in principle, other studies as well, e.g. Colman, Remington, & Kritikos, 2017; Lloyd et al., 2010; Weidler & Abrams, 2014). The subsequent studies therefore rely on target detection rather than discrimination paradigms, in order to more closely match early work studying the Near-Hand Effect, setting aside the question of a potential visual pathway mechanism.

Study 2

Because Study 1 failed to replicate the single-hand NHEs found in prior work, the methodological differences between Study 1 and the existing literature were considered. While the colour discrimination component of our task was similar to Study 4 of Dufour and Touzalin (2008), these authors did not include cross-midline conditions, and they did not test multiple stimulus features within the same experiment. Moreover, Schultheis & Carlson (2013) found that employing more than two positions in the same experiment tended to eliminate the NHE, albeit using a two-hand paradigm. To address these potential methodological factors, the number of hand positions used in Study 2 was reduced. Additionally, while the task used in Study 1 matched some prior work (Dufour & Touzalin, 2008), it is relatively rare within the NHE literature to employ a paradigm in which the hand rests flat against a horizontal surface on which stimuli are displayed. Therefore, the potential contributions of hand shaping were directly investigated in this second study.

Hand shaping has been shown to interact with the NHE in important ways. Specifically, surrounding the display with two hands in a power grasp position facilitates temporal sensitivity while forming precision grasps in an otherwise identical body position facilitates spatial sensitivity (Thomas, 2015). Similarly, power grasps facilitate speeded target detection while precision/pincer grasps do not (Thomas, 2013). Interestingly, these effects can be transferred to unusual parts of the hands (the back of the hand, the tips of the little fingers) through grasp training (Thomas, 2016). Thus, the affordance of a hand posture for action seems to be responsible for alterations in target processing (Thomas, 2016). In addition, target detection is speeded for near-palm items but not near-back-of-hand items (Reed et al., 2010), so both hand posture and the spatial relationship between the target and the hands’ surfaces seem to dictate performance for near-hand targets.

In Study 1, hands were placed directly onto the horizontal screen in a flat hand posture, and targets appeared above the fingertips. In the reaching literature, this is termed a “stationed” hand position (Whishaw, Sacrey, Travis, Gholamrezaei, & Karl, 2010). In the classic reaching sequence, a stationed (extended) hand is held flat against a surface with the fingers extended. This position maximizes postural support, and provides the starting point of the hand action. From this position, the limb can be lifted in a targeted reach. As the hand is lifted from a stationed position, it takes on a “collected” posture. Collection is a hand posture in which the hand is at rest with the fingers slightly curved and closed (Figure 6). Collection occurs during the aiming phase while the hand is transported during a reach, during crawling, during climbing, and even during speech-accompanying gestures (Dosso & Whishaw, 2012; Sacrey & Whishaw, 2010; Whishaw et al., 2010). As the hand approaches its target, the fingers open and extend in a task-specific grasping and manipulating posture. Manipulation depends on the size, orientation, and intentions associated with the target, while stationing and collecting postures are consistent across a wide range of hand actions. Collection is also frequently observed when hands are at rest (Sacrey & Whishaw, 2010), and has been described as “set[ting] the hand in a starting position from which subsequent skilled movements are initiated” (Sacrey & Whishaw, 2010). This station-collect-manipulate sequence can be seen in non-human primates and even in stepping and skilled reaching movements in the laboratory rat. This suggests that this sequence has been conserved across evolutionary time, and may reflect a shared neural representation of skilled hand actions (Iwaniuk & Whishaw, 2000; Sacrey, Alaverdashvili, & Whishaw, 2009). 
In all but small adjustments of the manipulating hand, collection tends to occur repeatedly between successive hand actions, suggesting that it serves a special role in the preparation and aiming of reaching and grasping.

Figure 6 

Examples of stationed (left) and collected (right) hand postures.

It was predicted that collected (curved) hands would elicit a stronger NHE than stationed (extended) hands due to their special role in skilled reaching and grasping. This is consistent with recent theories of the NHE which emphasize the action-relevance of the hand as a key determinant of the effect (Brockmole et al., 2013; Festman, Adam, Pratt, & Fischer, 2013; Thomas, 2013, 2015, 2016). The original work by Reed et al. employed collected hand postures, with the hand resting on its side (Reed et al., 2006). However, subsequent work has featured hands stationed against response buttons (Abrams, Davoli, Du, Knapp, & Paull, 2008; Gozli et al., 2012) or on a different plane than the targets (Lloyd, 2009). To our knowledge, only one set of studies has examined the effect of stationed hands in the same plane as the targets, and this group was successful in finding improved visual sensitivity and target detection for targets that appeared just past the tips of the stationed hand, akin to the design of our Study 1 (Dufour & Touzalin, 2008).

Materials and Methods

Twenty-nine new participants were recruited from the same set of sources. One participant was excluded because their mean reaction time exceeded that of the rest of the sample by more than 2.5 standard deviations. The average age of the sample was 24.4 years (SD = 5.9 years). Participants’ self-reported gender was female (n = 26) or male (n = 3), and their ethnicity was Asian (n = 15), Caucasian/White (n = 7), Hispanic/Latin American/South American (n = 6), and Middle Eastern (n = 1). Their handedness was classified as right-handed (n = 25), ambidextrous (n = 3), or left-handed (n = 1), per Oldfield, 1971.

The task was a very simple target detection task presented on the same horizontal screen used in Study 1. The stimuli were the same coloured, rotated lines. Stimuli appeared in either left or right locations (22 cm on either side of fixation) after a delay of 800, 1200, or 1600 milliseconds. On each trial, participants had to press the single response button as soon as a target appeared. First, there were two practice blocks of 16 trials each, performed in the Lap Left and Lap Right positions (described below). There were four positions: Screen Left (left hand placed on the left side of the screen), Screen Right (right hand placed on the right side of the screen), Lap Left (left hand placed in the lap), and Lap Right (right hand placed in the lap). Matching Study 1, in the hand-near (Screen) conditions, the hand was located 4.5 cm below the ipsilateral stimulus location. In the hand-far (Lap) conditions, the hand was located 39.3 cm from both stimulus locations and 16 cm below the screen surface. In all positions a keypad was held in the lap, and participants were instructed to press the designated response key (consistent for all trials) with their non-specified hand. There were 50 trials per block (48 visible targets and 2 catch trials). The four positions were performed in random order twice: once as collected (curved) hand postures, and once as stationed (extended) hand postures (order counterbalanced across participants). For three participants, a single block was omitted from the analysis due to equipment failure.


Data were excluded if reaction time was less than 150 ms or greater than 1000 ms. Responses were made on only 2.7% of catch trials.

A within-subjects ANOVA was performed with hand posture (station, collect), position (on screen, in lap), non-responding hand (left, right), and target location (left, right) as within-subject independent variables, and reaction time as the dependent variable. It was predicted that position, non-responding hand, and target location would interact or would appear in a four-way interaction with hand posture, indicating speeded target detection near the hand. Data are shown in Figure 7. First, there was an interaction between hand posture and position (F(1, 24) = 5.93, p = .02, partial η2 = .20). Follow-up analyses revealed that the in-lap positions produced marginally slower reaction times than the on-screen positions when the hands were collected (curved) (F(1, 25) = 4.26, p = .05, partial η2 = .15) but not stationed (extended) (F(1, 26) = .19, p = .67, partial η2 = .007). Next, there was an interaction between non-responding hand and target location (F(1, 24) = 7.60, p = .01, partial η2 = .24). When participants responded with their left hand, responses were marginally slower to right-side as compared to left-side targets (F(1, 26) = 4.23, p = .05, partial η2 = .24). There was no difference in response speed between the two target locations when the right hand was used to respond (F(1, 25) = .23, p = .64, partial η2 = .009). Lastly, an unpredicted three-way interaction between hand posture, non-responding hand, and target location was found (F(1, 24) = 4.69, p = .04, partial η2 = .16); the two target locations elicited different response times only when a collected (curved) right hand was the non-responding hand (F(1, 26) = 6.21, p = .02, partial η2 = .19). Critically, the analysis failed to find the predicted interaction between position, non-responding hand, and target location (F(1, 24) = 3.14, p = .09, partial η2 = .12), nor did these factors interact with hand posture to determine response time (F(1, 24) = .35, p = .56, partial η2 = .01).

Figure 7 

Reaction times in Study 2 (mean ± SE).


In this study, speeded target detections near the hand were not found, regardless of whether the near-target hand posture was collected or stationed. Hand shaping manipulations have been shown to interact with the NHE in the prior literature (Reed et al., 2010; Thomas, 2013, 2015). What these previous studies seem to have in common is that the relationship between the hand and the target was action-relevant (e.g. palm versus back of hand, precision versus power grasp). It was therefore expected that collected (curved) hand postures would facilitate the NHE to a greater extent than stationed (extended) hand postures because of their special relationship with reaching actions. However, the evidence did not support this hypothesis.

Nevertheless, some unpredicted effects of hand posture, hand position, non-responding hand, and target location were found on reaction time. Specifically, on-screen hands produced marginally faster reaction times than in-lap positions, but only when the hands were collected (curved). Also, for the left hand only, responses to right-side targets were slower than responses to left-side targets. Finally, a collected right hand, regardless of hand location, slowed responses to right-side as compared to left-side targets. Taken together, these results suggest that slowed responding to right-side targets is related to the responding left hand/non-responding right hand configuration, especially with collected hand positions and especially when the right hand is located in the lap.

Study 1 found faster responding with the right hand than with the left hand, regardless of target location. Here in Study 2, this difference is seen only for right-sided stimuli. However, one hesitates to put much weight on these trends for two reasons: 1) they do not survive correction for multiple comparisons, and 2) similar work has not produced the same effects (Dufour & Touzalin, 2008; Lloyd et al., 2010; Reed et al., 2006).

Taking Studies 1 and 2 together, evidence of NHEs was not found when participants were asked to make uncued discrimination (Study 1) and detection responses (Study 2). A number of studies in the past, however, have employed tasks that involved a predictive cue (e.g. Le Bigot & Grosjean, 2016; Lloyd, Azañón, & Poliakoff, 2010; Reed et al., 2006 but see Dufour & Touzalin, 2008; Le Bigot & Grosjean, 2012 for examples without a peripheral cue). Reed et al. (2006) employed a target-detection task and found that near-hand targets were detected more rapidly regardless of the validity of a predictive peripheral cue. In contrast, a number of researchers have employed target discrimination tasks and found that near-hands magnify the effect of the cue rather than causing a straightforward decrease in response time (Colman et al., 2017; Le Bigot & Grosjean, 2016; Lloyd et al., 2010). Since a visual cue is frequently used in NHE studies, but the evidence regarding the relationship between the NHE and cue presence is mixed, Study 3 directly investigated the relationship between cue presence and the NHE.

Study 3

In this study, performance on target detection without a cue (Study 2) was compared to target detection with a predictive peripheral cue in order to evaluate the potential contributions of the cue to the NHE and in order to evaluate the possibility that, in this task, a peripheral cue is necessary to instantiate the effect. The results of Reed et al. (2006) would predict speeded responding near the hand regardless of cue validity, while a replication of other researchers' findings would predict a larger cueing effect near the hand (Colman et al., 2017; Le Bigot & Grosjean, 2016; Lloyd et al., 2010). The present study employed a procedure matched to that used by Reed et al. in order to determine whether a near hand has a cue-dependent or cue-independent effect on stimulus processing. As in Studies 1 and 2, a large horizontal “tabletop” surface was employed. Furthermore, after the failure to find an effect of hand posture in Study 2, the collected hand posture was eliminated, thus further reducing the number of conditions tested, a factor which may (as mentioned earlier) reduce the size of the NHE (Schultheis & Carlson, 2013). Thus, Studies 3 and 4 were intended to allow us to answer two questions: how does the NHE interact with the presence of a peripheral cue, and is the NHE different for horizontal and vertical surfaces?

Materials and Methods

Forty new participants were recruited from the pool of undergraduate students receiving course credit. Their handedness was classified as ambidextrous (n = 3), left-handed (n = 1), or right-handed (n = 36) per Oldfield, 1971. The average age was 20.6 years, SD = 1.8 years. Their gender was male (n = 9) or female (n = 31) and their ethnicity was Asian (n = 36), Middle Eastern (n = 2), and Caucasian/White (n = 2). Two participants were excluded for technical errors during testing. Five participants were excluded for responding on more than 25% of catch trials.

In this study, we assessed whether a near hand enhanced the effect of a valid (as opposed to invalid) peripheral cue. Participants sat at the horizontal screen and responded with a keypad located in their lap, as before. They were instructed to fixate on a cross in the centre of the screen. Two white boxes were located 22 cm away from the fixation cross, one on either side. A cue (the brightening of one of these two potential target locations) preceded the target by 200 ms and remained on-screen with the target until the trial ended. Participants were instructed to press the single designated response key as quickly as possible once a target appeared. The target (a 5 cm white square) could stay on screen for up to 3000 ms, but disappeared once the keypress occurred. The interval between trials ranged from 1500–3000 ms. The trial breakdown was 35 valid, 10 invalid, and 5 catch (no target) trials per block, presented in a randomized order. Each subject performed four position blocks of 50 trials each: Screen Left (left hand near the left location on the screen), Screen Right (right hand near the right location on the screen), Lap Left (the left hand located in the lap near the responding right hand), and Lap Right (the right hand located in the lap near the responding left hand). Matching the previous two studies, in the hand-near (Screen) conditions the hand was located 4.5 cm below the ipsilateral stimulus location, and in the hand-far (Lap) conditions the hand was located 39.3 cm from both stimulus locations and 16 cm below the screen surface.
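The block and trial structure just described can be sketched in code. The following is a minimal Python illustration; the function and field names are our own, not those of the original experiment scripts:

```python
import random

def build_block(n_valid=35, n_invalid=10, n_catch=5, seed=None):
    """Build one randomized 50-trial block with the breakdown described
    above: 35 valid, 10 invalid, and 5 catch (no-target) trials, each
    with a random 1500-3000 ms inter-trial interval."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_valid):  # valid trials: cue and target on the same side
        side = rng.choice(["left", "right"])
        trials.append({"cue": side, "target": side, "valid": True})
    for _ in range(n_invalid):  # invalid trials: cue opposite the target
        side = rng.choice(["left", "right"])
        other = "right" if side == "left" else "left"
        trials.append({"cue": side, "target": other, "valid": False})
    for _ in range(n_catch):  # catch trials: cue but no target
        trials.append({"cue": rng.choice(["left", "right"]),
                       "target": None, "valid": None})
    rng.shuffle(trials)
    for t in trials:
        t["iti_ms"] = rng.randint(1500, 3000)  # inter-trial interval
    return trials
```

Running build_block() once per position block (Screen Left, Screen Right, Lap Left, Lap Right) reproduces the 200-trial session described above.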


Results

Data were cleaned to exclude responses <150 ms or >1000 ms (per Colman et al., 2017). Responses were made on 12.1% of catch trials. A repeated-measures ANOVA was performed with non-responding hand (left, right), hand position (screen, lap), cue validity (valid, invalid), and target location (left, right) as within-subject factors and reaction time as the dependent variable (Reed et al., 2006). Data are presented in Figure 8.
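The trimming rule and the cueing-effect measure used here can be expressed compactly. A minimal Python sketch follows (illustrative only; the published analyses were conducted in R, and all names here are our own):

```python
def trim_rts(rts, low=150, high=1000):
    """Exclude reaction times (ms) outside the 150-1000 ms window,
    the cleaning rule adopted from Colman et al. (2017)."""
    return [rt for rt in rts if low <= rt <= high]

def cueing_effect(valid_rts, invalid_rts):
    """Cueing effect in ms: mean invalid-cue RT minus mean valid-cue RT.
    Positive values mean that valid cues speeded responding."""
    valid, invalid = trim_rts(valid_rts), trim_rts(invalid_rts)
    return sum(invalid) / len(invalid) - sum(valid) / len(valid)
```

For example, cueing_effect([320, 340, 2000], [380, 400, 90]) trims the 2000 ms and 90 ms responses and returns 60.0.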

Figure 8 

Reaction times for Study 3. Means and SEs are shown.

There was a main effect of cue validity (F(1, 32) = 132.55, p < .001, partial η2 = .81). Responses following valid cues were performed more quickly than responses following invalid cues. No other effects were detected. Evidence of the original finding from Reed et al. (2006), a three-way interaction between non-responding hand, position, and target location without an effect of cue validity, was not found (F(1, 32) = .006, p = .94, partial η2 < .001), nor was there a significant four-way interaction between non-responding hand, position, target location, and cue validity (F(1, 32) = 3.74, p = .06, partial η2 = .11).

To investigate whether there was evidence for larger cueing effects near the hand (Colman et al., 2017; Le Bigot & Grosjean, 2016; Lloyd et al., 2010), an ANOVA was performed with cueing effect magnitude as the dependent variable and position (lap/screen), non-responding hand (left/right), and target location (left/right) as within-subject factors (Figure 9). This analysis produced no significant effects or interactions. The three-way interaction between position, non-responding hand, and target location was marginal (F(1, 32) = 3.74, p = .06, partial η2 = .11). A follow-up analysis examining each of the four hand positions separately indicated that right-side targets elicited larger cueing effects than left-side targets when the hand was located on the right side of the screen (t(32) = –2.1, p = .04); however, this effect did not survive correction for multiple comparisons.

Figure 9 

Cueing effects seen in Study 3. Means and SEs are shown.


Discussion

This study investigated whether the inclusion of predictive peripheral cues might instantiate the NHE. It was intended as a direct replication of Reed et al. (2006), albeit performed on a large horizontal surface rather than a smaller vertical display. Evidence was absent both for a straightforward NHE regardless of cue validity (per Reed et al., 2006) and for larger cueing effects near the on-screen hand (per Colman et al., 2017; Le Bigot & Grosjean, 2016; Lloyd et al., 2010).

The results of this study suggest that the failure to find a straightforward NHE in Study 2 was not due to the lack of peripheral cues, because in this study the inclusion of peripheral cues did not “turn on” the NHE. However, because this experiment and that of Reed et al. (2006) differed in screen orientation, the possibility remains that the use of a large horizontal, table-style screen is responsible for the discrepancy between our results and theirs.

Study 4

Studies 1–3 were all conducted with participants placing their hands directly on the surface of a large horizontal screen, and all failed to find reliable evidence of a NHE. Examination of the NHE literature reveals that small vertical screens are most commonly used (e.g. Lloyd et al., 2010; Reed et al., 2006, among many others). Relatively few studies of the NHE involve a horizontal plane (but see Dufour & Touzalin, 2008; Festman, Adam, Pratt, & Fischer, 2013; Lloyd et al., 2010), and no direct comparisons between horizontal and vertical screen orientations have been made with respect to the NHE. Prior work investigating screen-directed behaviour has found that horizontal and vertical screens differ in the types of interpersonal collaboration and manual actions that these displays elicit (Hardy, 2012; Rogers & Lindley, 2004), suggesting that screen orientation can be a salient factor driving behaviour.

Study 4 replicated the procedure used in Study 3 but placed the large screen in a vertical configuration. When in the near-hand position, participants’ hands were placed outside the target locations with the palm facing inward. The vertical configuration was chosen for two reasons: 1) by employing the vertical plane, the task matches the procedure used by Reed et al. (2006), albeit with different screen dimensions, and 2) the comparison of Study 3 (horizontal) with Study 4 (vertical) sheds light on potential differences in the NHE related to screen orientation.

Materials and Methods

Forty-three new undergraduate participants were tested. Mean age was 20.3 years, SD = 2.6 years. Their handedness was classified as right-handed (n = 38) and ambidextrous (n = 5). Their gender was female (n = 33) and male (n = 10). Their ethnicity was Asian (n = 32), Caucasian/White (n = 9), First Nations (n = 1), and Middle Eastern (n = 1). Data from two participants were excluded due to failure to follow task instructions (n = 1) and poor vision (n = 1). Ten additional participants were excluded for responding on more than 25% of catch trials, leaving a final sample of 31 participants.

The task was identical to Study 3, but the participants placed their hands on cloth-covered blocks on a small horizontal table under the large screen, which was placed in a vertical configuration (Lloyd et al., 2010; see Figure 10). In the hand-near conditions, the hand was located 1.5 cm to the outside of the target box location. In the hand-far conditions, the hand was located 49.5 cm from both stimulus locations. As before, reaction times less than 150 ms or greater than 1000 ms were excluded.

Figure 10 

The seating arrangement used in Study 4.


Results

Responses were made on 10.6% of catch trials. A repeated-measures ANOVA was performed with cue validity (valid, invalid), non-responding hand (left, right), position (lap, screen), and target location (left, right) as within-subjects factors and reaction time as the dependent variable (Figure 11). First, this analysis produced a main effect of cue validity (F(1, 30) = 168.15, p < .001, partial η2 = .85), with responses made more rapidly to targets appearing at validly rather than invalidly cued locations. In addition, there was a main effect of target location (F(1, 30) = 13.90, p = .001, partial η2 = .32); on the whole, responses were made more quickly to targets appearing on the right side of space than to targets appearing on the left side of space. No other effects or interactions were found. Again, this analysis failed to find evidence of the predicted three-way interaction between position, non-responding hand, and target location (F(1, 30) = .69, p = .41, partial η2 = .02), nor did it find a four-way interaction including cue validity (F(1, 30) = .06, p = .81, partial η2 = .002).

Figure 11 

Reaction time data for Study 4. Means and SEs shown.

In addition, an ANOVA was conducted on the data illustrated in Figure 12, which concerned cueing effect magnitude as a function of position (lap versus screen), non-responding hand (left hand versus right hand), and target location (left versus right). This analysis produced no significant main effects or interactions, indicating that cueing effect magnitude was independent of hand position, target position, and hand used to respond. Consistent with all other analyses, we found no three-way interaction between position, non-responding hand, and target location (F(1, 30) = .06, p = .81, partial η2 = .002).

Figure 12 

Cueing effects for Study 4. Means and SEs shown.


Discussion

As in Studies 1–3, there was no reliable evidence for speeded responding for targets near the hands, nor was there evidence of larger cueing effects at these locations. This study employed the same procedure as Study 3 but used a vertical rather than horizontal screen orientation, and was intended to closely match Study 1 of Reed et al. (2006). The consistent failure to replicate the NHE in this study suggests that the lack of effect in Study 3 was not due solely to the use of a horizontal screen orientation. Collectively, the findings suggest that the NHE may be more fragile than previously thought.

Cross-Study Analysis

In four studies, compelling evidence of the NHE was not found. To address the possibility that this was due to insufficient statistical power, the evidence for the NHE versus evidence for the null hypothesis was evaluated for each study individually and in a pooled sample using Bayesian analyses. Bayesian statistics allow one to directly compare the strength of evidence for the null and alternative hypotheses by computing Bayes Factors (Masson, 2011). The larger the Bayes Factor (BF10), the stronger the evidence in support of the alternative hypothesis. In these analyses, the alternative hypothesis is that a given measure of the NHE is non-zero.

Materials and Methods

For each participant, multiple indices of the NHE were calculated using the following method: average RT for a particular location when the hand was in the lap minus average RT for the same location when the hand was nearby, present, and uncrossed. This analysis focused on conditions that were uncued (Studies 1 and 2) and validly cued (Studies 3 and 4) in order to compare the results most directly with the existing literature. For example, Dufour and Touzalin (2008; Experiment 4) found a speeded NHE in the absence of a cue, and Reed et al. (2006) found faster responding near the hand regardless of cue validity. Furthermore, when cueing effects could be examined, the difference in cueing effect magnitude for hand-far minus hand-near conditions was calculated in the same manner (e.g. Colman et al., 2017; Le Bigot & Grosjean, 2016; Lloyd et al., 2010). These calculations produced indices of the NHE in milliseconds (Table 1). For each index, the corresponding Bayes Factor was calculated in R 3.3.1 using the function ttestBF from the package BayesFactor (Morey & Rouder, 2014; R Core Team, 2016).
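The per-participant index calculations described above can be sketched as follows (a Python illustration with our own names; the published indices were computed in R):

```python
def nhe_index(lap_rts, near_rts):
    """NHE index (ms) for one screen location: mean RT with the hand in
    the lap minus mean RT with the hand stationed near that location.
    Positive values indicate faster responding near the hand."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(lap_rts) - mean(near_rts)

def nhe_cueing_index(cueing_effect_far, cueing_effect_near):
    """Cueing-effect index (ms), computed hand-far minus hand-near
    following the convention described in the text."""
    return cueing_effect_far - cueing_effect_near
```

Each participant thus contributes one value per index, and the resulting distributions of difference scores are what the Bayesian one-sample tests evaluate.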

Table 1

Values (milliseconds) associated with indices of the NHE across studies.

Study Task Screen Feature^a Hand Cue Location NHE 95% CI BF10

1 Discrim Horiz Colour Station None Left 11.9 [–15.5, 39.3] .27 *
1 Discrim Horiz Colour Station None Right 5.6 [–167.7, 178.9] .21 *
1 Discrim Horiz Size Station None Left 17.0 [–132.9, 166.9] .38
1 Discrim Horiz Size Station None Right –5.3 [–195.3, 184.7] .20 *
1 Discrim Horiz Orientation Station None Left 18.6 [–163, 200.2] .33
1 Discrim Horiz Orientation Station None Right 45.1 [–141.6, 231.8] 2.0
2 Detect Horiz Onset Collect None Left 3.4 [–37.3, 44.1] .28 *
2 Detect Horiz Onset Collect None Right 10.3 [–53.2, 73.8] .68
2 Detect Horiz Onset Station None Left –1.7 [–46, 42.6] .22 *
2 Detect Horiz Onset Station None Right 5.3 [–31.1, 41.7] .54
3 Detect Horiz Onset Station Valid Left 5.2 [–59.5, 69.9] .27 *
3 Detect Horiz Onset Station Valid Right 11.1 [–68, 90.2] .57
3 Detect Horiz CE Station CE Left –6.5 [–94.4, 81.4] .26 *
3 Detect Horiz CE Station CE Right –5.8 [–112.4, 100.8] .22 *
4 Detect Vertical Onset Rotated Valid Left 3.6 [–40.4, 47.6] .28 *
4 Detect Vertical Onset Rotated Valid Right 7.7 [–65.3, 80.7] .35
4 Detect Vertical CE Rotated CE Left 5.6 [–98.4, 109.6] .22 *
4 Detect Vertical CE Rotated CE Right 6.1 [–85.6, 97.8] .24 *
2–4 Detect Pooled Onset Pooled Pooled Left 2.6 [–2.1, 7.3] .17 *
2–4 Detect Pooled Onset Pooled Pooled Right 8.2 [2.3, 14.1] 1.5

^a CE = cueing effect.

* Indicates a Bayes Factor less than 0.33, i.e. positive evidence for the null hypothesis.

In addition, to maximize power, data were pooled from Studies 2–4 for conditions in which targets were detected (uncued or following a valid cue) in the presence or absence of a stationed hand. Left- and right-side targets were examined separately. If hand presence speeds responding or increases the magnitude of the cueing effect for near-hand targets, indices should be meaningfully, positively different from zero. Furthermore, the evidence in support of each index is given by its corresponding Bayes Factor. Again, Bayes Factors indicate the strength of evidence for the alternative hypothesis (that hand-associated changes in RT or cueing effect magnitude are non-zero) versus the null hypothesis (that NHEs are 0, i.e. that response times and cueing effects do not differ across hand positions). By convention, a Bayes Factor of 3–20 is considered “positive evidence” (Kass & Raftery, 1995) for the alternative hypothesis. Conversely, a Bayes Factor of 0.05–0.33 indicates positive evidence for the null hypothesis (with smaller BF10 values indicating progressively greater evidence for the null).
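To illustrate the logic of a one-sample Bayes factor, a BF10 can be approximated from raw difference scores via the BIC method (cf. Masson, 2011). Note that this unit-information approximation will not reproduce the JZS-prior values that ttestBF yields in Table 1; it is a sketch of the idea, not the published analysis:

```python
import math

def bic_bayes_factor(diffs):
    """Approximate BF10 for the hypothesis that the mean of `diffs`
    (e.g., per-participant NHE indices in ms) is non-zero, using the
    BIC approximation BF01 = sqrt(n) * (1 + t^2/(n-1)) ** (-n/2)
    (cf. Masson, 2011). BF10 > 1 favours a non-zero effect."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)                        # one-sample t statistic
    bf01 = math.sqrt(n) * (1 + t ** 2 / (n - 1)) ** (-n / 2)
    return 1 / bf01
```

By this approximation, small and noisy difference scores yield BF10 values well below 1 (evidence for the null), mirroring the pattern of asterisked entries in Table 1, while large and consistent differences yield BF10 values far above 3.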


Results

The values for the calculated indices of the NHE are shown in Table 1. Note that most un-pooled indices of the NHE (14 out of 18) were numerically positive, indicating that responses were faster, or cueing effects larger, for near-hand targets. However, these effects were quite small, with an average value of 7.6 ms. Moreover, none of these indices was supported by positive evidence in the Bayesian analyses; in fact, for 11 of the 18 un-pooled indices there was positive evidence in favour of the null hypothesis instead. Both pooled indices were also numerically positive (i.e. response times were numerically faster for hand-near as opposed to hand-far conditions), but neither was supported even by positive evidence, despite a pooled sample size of approximately 90 participants for each measure.

General Discussion

Recent research has increasingly embraced the claim that body position can play a role in shaping perception and attention. In four studies, the potential contributions of hand position to near-hand target discrimination, detection, and susceptibility to cueing effects were investigated. In Study 1, an uncued target discrimination task was performed in the horizontal plane across a number of hand positions. In Study 2, uncued target detection on the horizontal plane was examined with a reduced number of hand positions and two types of hand posture (collected and stationed). In Study 3, cues that preceded the targets were introduced. Study 4 examined whether screen orientation (horizontal versus vertical) might contribute to the NHE. Surprisingly, statistically significant support for the NHE was absent in all studies. Even with a pooled sample size of over 90 participants, evidence for speeded target detection near the hands was not compelling: for the left hand, evidence for the null hypothesis was 5.9 times stronger than evidence for such an effect, while for the right hand evidence for a non-zero NHE was 1.5 times stronger than evidence for the null hypothesis, with the effect estimated at 8.2 ms. This is numerically consistent with the idea that single near hands may provide a magnocellular advantage (increased temporal acuity) at the expense of processing via the parvocellular pathway. However, these effects were surprisingly fragile. This fragility has both theoretical and methodological implications.

Methodologically, an important consideration for the present work, and for this literature more broadly, is statistical power. Per the Discussion in Study 1, the present samples of approximately 30 individuals per study are typical of the literature. Inadequate power to detect small effects is a known issue in other, similar paradigms in cognitive psychology, such as the Joint Simon Effect (Karlinsky, Lohse, & Lam, 2017). However, inadequate power does not seem to explain the present failure to find an effect across four studies: in a Bayesian analysis of the findings of our studies, no index of the NHE was supported even by what is considered to be “positive evidence” (Table 1).

A wide range of methods are used to study NHEs, including target detection paradigms in the presence (Langerak et al., 2013; Reed et al., 2006; Sun & Thomas, 2013) and absence of cues (Abrams & Weidler, 2013; Dufour & Touzalin, 2008; Le Bigot & Grosjean, 2012), as well as measures of cueing effect magnitude (Colman et al., 2017; Le Bigot & Grosjean, 2016; Lloyd et al., 2010). This could be taken as a demonstration of the robustness of NHEs to variations in methodology; however, it is unclear at this point whether one- and two-handed NHEs constitute a single family of effects (Bush & Vecera, 2014; Tseng & Bridgeman, 2011). Others have recently argued, in fact, that hand-proximity effects on vision do not constitute a unitary group because the assumption of context-invariance is not met (Gozli & Deng, 2018; Kingstone, Smilek, & Eastwood, 2008). It is quite likely that some NHE paradigms produce stable and reliable effects (e.g. the use of power and precision grips; Thomas, 2015), while others are more fragile. The assumption that all NHE phenomena represent a single underlying mechanism is not well supported.

NHEs are sensitive to a number of moderating factors, including participant handedness (Colman et al., 2017), use of the left versus right hand (Langerak et al., 2013; Le Bigot & Grosjean, 2016; Lloyd et al., 2010), and the use of one- versus two-handed manipulations (Bush & Vecera, 2014; Tseng & Bridgeman, 2011). This heterogeneity, while theoretically interesting, also provides an opportunity for researchers to (perhaps unintentionally) exploit a number of researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011; Wicherts et al., 2016). Moreover, this literature may be susceptible to publication bias (Francis, 2012), and predicted NHEs are not always found (Andringa et al., 2018; Sahar & Makovski, 2017; Schultheis & Carlson, 2013), suggesting that the published literature might over-represent the ease with which these effects can be reliably produced in the laboratory. While this situation in and of itself does not indicate that questionable research practices are being used, pre-registration of future studies could bolster the evidentiary value of NHE findings. To our knowledge, no pre-registered studies of NHEs have yet been published.

NHEs have been proposed to have wide-ranging implications for real behaviour, including shaping real-world attention to objects, modifying neural representations of near-hand (and near-tool) space, facilitating the detection of affordances and the avoidance of danger, biasing processing of near-hand stimuli to specific neural circuitry, and even interacting with representations of the social world (Abrams et al., 2008; Gozli & Brown, 2011; Langerak et al., 2013; Reed et al., 2010, 2006; Sun & Thomas, 2013). While all of these claims are supported by some experimental evidence, the present work suggests that caution is needed. NHEs do not occur reliably even in controlled laboratory tasks; thus, extension of the NHE to interpretations of uncontrolled real-world interactions with objects and people should be treated with caution.

Data Accessibility Statement

All the stimuli, presentation materials, participant data, and analysis materials for this manuscript are available at https://osf.io/xhu7z/.


Acknowledgements

The authors would like to thank Jane J. Kim and Natalie T. W. Wong for assistance in testing participants.

Funding Information

Funding was provided by the Natural Sciences and Engineering Research Council of Canada (NSERC; RGPIN 170077) and the Social Sciences and Humanities Research Council of Canada (SSHRC; 435-2013-2200) to AK.

Competing Interests

The authors have no competing interests to declare.

Author Contributions

  • Conception and design: JD, AK
  • Coordinated data acquisition: JD
  • Analysis and interpretation of data: JD
  • Drafted and revised article: JD, AK
  • Approved submitted version for publication: JD, AK


References

  1. Abrams, R. A., Davoli, C. C., Du, F., Knapp, W. H., & Paull, D. (2008). Altered vision near the hands. Cognition, 107(3), 1035–1047. DOI: https://doi.org/10.1016/j.cognition.2007.09.006 

  2. Abrams, R. A., & Weidler, B. J. (2013). Trade-offs in visual processing for stimuli near the hands. Attention, Perception & Psychophysics, 76, 383–390. DOI: https://doi.org/10.3758/s13414-013-0583-1 

  3. Andringa, R., Boot, W. R., Roque, N. A., & Ponnaluri, S. (2018). Hand proximity effects are fragile: A useful null result. Cognitive Research: Principles and Implications, 3(1), 7. DOI: https://doi.org/10.1186/s41235-018-0094-7 

  4. Bar, M. (2003). A cortical mechanism for triggering top-down facilitation in visual object recognition. Journal of Cognitive Neuroscience, 15(4), 600–609. DOI: https://doi.org/10.1162/089892903321662976 

  5. Brockmole, J. R., Davoli, C. C., Abrams, R. A., & Witt, J. K. (2013). The world within reach: Effects of hand posture and tool use on visual cognition. Current Directions in Psychological Science, 22(1), 38–44. DOI: https://doi.org/10.1177/0963721412465065 

  6. Bush, W. S., & Vecera, S. P. (2014). Differential effect of one versus two hands on visual processing. Cognition, 133(1), 232–237. DOI: https://doi.org/10.1016/j.cognition.2014.06.014 

  7. Colman, H. A., Remington, R. W., & Kritikos, A. (2017). Handedness and graspability modify shifts of visuospatial attention to near-hand objects. PloS One, 12(1), e0170542. DOI: https://doi.org/10.1371/journal.pone.0170542 

  8. Cosman, J. D., & Vecera, S. P. (2010). Attention affects visual perceptual processing near the hand. Psychological Science, 21(9), 1254–1258. DOI: https://doi.org/10.1177/0956797610380697 

  9. Davoli, C. C., & Brockmole, J. R. (2012). The hands shield attention from visual interference. Attention, Perception, & Psychophysics, 74(7), 1386–1390. DOI: https://doi.org/10.3758/s13414-012-0351-7 

  10. Davoli, C. C., Du, F., Montana, J., Garverick, S., & Abrams, R. A. (2010). When meaning matters, look but don’t touch: The effects of posture on reading. Memory & Cognition, 38(5), 555–562. DOI: https://doi.org/10.3758/MC.38.5.555 

  11. Derrington, A. M., & Lennie, P. (1984). Spatial and temporal contrast sensitivities of neurons in lateral geniculate nucleus of macaque. Journal of Physiology, 357, 219–240. DOI: https://doi.org/10.1113/jphysiol.1984.sp015498 

  12. Dosso, J. A., & Whishaw, I. Q. (2012). Resting hand postures: An index of what a speaker may do next. Gesture, 12(1), 84–95. DOI: https://doi.org/10.1075/gest.12.1.05dos 

  13. Du, F., Wang, X., Abrams, R. A., & Zhang, K. (2017). Emotional processing is enhanced in peri-hand space. Cognition, 165, 39–44. DOI: https://doi.org/10.1016/j.cognition.2017.04.009 

  14. Dufour, A., & Touzalin, P. (2008). Improved visual sensitivity in the perihand space. Experimental Brain Research, 190(1), 91–98. DOI: https://doi.org/10.1007/s00221-008-1453-2 

  15. Festman, Y., Adam, J. J., Pratt, J., & Fischer, M. H. (2013). Both hand position and movement direction modulate visual attention. Frontiers in Psychology, 4, 657. DOI: https://doi.org/10.3389/fpsyg.2013.00657 

  16. Francis, G. (2012). Publication bias and the failure of replication in experimental psychology. Psychonomic Bulletin and Review, 19(6), 975–991. DOI: https://doi.org/10.3758/s13423-012-0322-y 

  17. Gozli, D. G., & Brown, L. E. (2011). Agency and control for the integration of a virtual tool into the peripersonal space. Perception, 40(11), 1309–1319. DOI: https://doi.org/10.1068/p7027 

  18. Gozli, D. G., & Deng, W. (2018). Building blocks of psychology: On remaking the unkept promises of early schools. Integrative Psychological and Behavioral Science, 52(1). DOI: https://doi.org/10.1007/s12124-017-9405-7 

  19. Gozli, D. G., West, G. L., & Pratt, J. (2012). Hand position alters vision by biasing processing through different visual pathways. Cognition, 124(2), 244–250. DOI: https://doi.org/10.1016/j.cognition.2012.04.008 

  20. Graziano, M. S. A., & Gross, C. G. (1998). Spatial maps for the control of movement. Current Opinion in Neurobiology, 8(2), 195–201. DOI: https://doi.org/10.1016/S0959-4388(98)80140-2 

  21. Hardy, J. (2012). Experiences: A year in the life of an interactive desk. Proceedings of the Designing Interactive Systems Conference on – DIS ’12, 679. DOI: https://doi.org/10.1145/2317956.2318058 

  22. Iwaniuk, A. N., & Whishaw, I. Q. (2000). On the origin of skilled forelimb movements. Trends in Neurosciences, 23(8), 372–376. DOI: https://doi.org/10.1016/S0166-2236(00)01618-0 

  23. Karlinsky, A., Lohse, K. R., & Lam, M. Y. (2017). A meta-analysis of the joint Simon effect. In: Annual Meeting of the Cognitive Science Society, 2377–2382. 

  24. Kass, R. E., & Raftery, A. E. (1995). Bayes Factors. Journal of the American Statistical Association, 90(430), 773–795. DOI: https://doi.org/10.1080/01621459.1995.10476572 

  25. Kelly, S. P., & Brockmole, J. R. (2014). Hand proximity differentially affects visual working memory for color and orientation in a binding task. Frontiers in Psychology, 5, 318. DOI: https://doi.org/10.3389/fpsyg.2014.00318 

  26. Kingstone, A., Smilek, D., & Eastwood, J. D. (2008). Cognitive Ethology: A new approach for studying human cognition. British Journal of Psychology, 99(3), 317–340. DOI: https://doi.org/10.1348/000712607X251243 

  27. Langerak, R. M., La Mantia, C. L., & Brown, L. E. (2013). Global and local processing near the left and right hands. Frontiers in Psychology, 4, 793. DOI: https://doi.org/10.3389/fpsyg.2013.00793 

  28. Le Bigot, N., & Grosjean, M. (2012). Effects of handedness on visual sensitivity in perihand space. PloS One, 7(8), e43150. DOI: https://doi.org/10.1371/journal.pone.0043150 

  29. Le Bigot, N., & Grosjean, M. (2016). Exogenous and endogenous shifts of attention in perihand space. Psychological Research, 80(4), 677–684. DOI: https://doi.org/10.1007/s00426-015-0680-y 

  30. Lee, B. B., Pokorny, J., Smith, V. C., Martin, P. R., & Valberg, A. (1990). Luminance and chromatic modulation sensitivity of macaque ganglion cells and human observers. Journal of the Optical Society of America A, 7(12), 2223–2236. DOI: https://doi.org/10.1364/JOSAA.7.002223 

  31. Leonova, A., Pokorny, J., & Smith, V. C. (2003). Spatial frequency processing in inferred PC- and MC-pathways. Vision Research, 43(20), 2133–2139. DOI: https://doi.org/10.1016/S0042-6989(03)00333-X 

  32. Livingstone, M. S., & Hubel, D. H. (1987). Psychophysical evidence for separate channels for the perception of form, color, movement, and depth. Journal of Neuroscience, 7(11), 3416–3468. DOI: https://doi.org/10.1523/JNEUROSCI.07-11-03416.1987 

  33. Lloyd, D. M. (2009). The space between us: A neurophilosophical framework for the investigation of human interpersonal space. Neuroscience and Biobehavioral Reviews, 33(3), 297–304. DOI: https://doi.org/10.1016/j.neubiorev.2008.09.007 

  34. Lloyd, D. M., Azañón, E., & Poliakoff, E. (2010). Right hand presence modulates shifts of exogenous visuospatial attention in near perihand space. Brain and Cognition, 73(2), 102–109. DOI: https://doi.org/10.1016/j.bandc.2010.03.006 

  35. Masson, M. E. J. (2011). A tutorial on a practical Bayesian alternative to null-hypothesis significance testing. Behavior Research Methods, 43(3), 679–690. DOI: https://doi.org/10.3758/s13428-010-0049-5 

  36. Maunsell, J. H. R., Nealey, T. A., & DePriest, D. D. (1990). Magnocellular and parvocellular contributions to responses in the middle temporal visual area (MT) of the macaque monkey. Journal of Neuroscience, 10(10), 3323–3334. DOI: https://doi.org/10.1523/JNEUROSCI.10-10-03323.1990 

  37. Morey, R. D., & Rouder, J. N. (2014). BayesFactor: Computation of Bayes factors for common designs (R package version 0.9.9). Retrieved from: http://cran.r-project.org/package=BayesFactor. 

  38. Oldfield, R. C. (1971). The assessment and analysis of handedness: The Edinburgh inventory. Neuropsychologia, 9(1), 97–113. DOI: https://doi.org/10.1016/0028-3932(71)90067-4 

  39. Peirce, J. W. (2007). PsychoPy-Psychophysics software in Python. Journal of Neuroscience Methods, 162(1–2), 8–13. DOI: https://doi.org/10.1016/j.jneumeth.2006.11.017 

  40. Perry, C. J., Sergio, L. E., Crawford, J. D., & Fallah, M. (2015). Hand placement near the visual stimulus improves orientation selectivity in V2 neurons. Journal of Neurophysiology, 113(7), 2859–2870. DOI: https://doi.org/10.1152/jn.00919.2013 

  41. R Core Team. (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. URL: https://www.R-project.org/. 

  42. Reed, C. L., Betz, R., Garza, J. P., & Roberts, R. J. (2010). Grab it! Biased attention in functional hand and tool space. Attention, Perception & Psychophysics, 72(1), 236–245. DOI: https://doi.org/10.3758/APP.72.1.236 

  43. Reed, C. L., Grubb, J. D., & Steele, C. (2006). Hands up: Attentional prioritization of space near the hand. Journal of Experimental Psychology. Human Perception and Performance, 32(1), 166–177. DOI: https://doi.org/10.1037/0096-1523.32.1.166 

  44. Rogers, Y., & Lindley, S. (2004). Collaborating around vertical and horizontal large interactive displays: Which way is best? Interacting with Computers, 16(6), 1133–1152. DOI: https://doi.org/10.1016/j.intcom.2004.07.008 

  45. Sacrey, L.-A. R., Alaverdashvili, M., & Whishaw, I. Q. (2009). Similar hand shaping in reaching-for-food (skilled reaching) in rats and humans provides evidence of homology in release, collection, and manipulation movements. Behavioural Brain Research, 204(1), 153–161. DOI: https://doi.org/10.1016/j.bbr.2009.05.035 

  46. Sacrey, L.-A. R., & Whishaw, I. Q. (2010). Development of collection precedes targeted reaching: Resting shapes of the hands and digits in 1–6-month-old human infants. Behavioural Brain Research, 214(1), 125–129. DOI: https://doi.org/10.1016/j.bbr.2010.04.052 

  47. Sahar, T., & Makovski, T. (2017). Grab that face, hammer, or line: No effects of hand position on visual memory. Poster presented at the Annual Workshop on Object Perception, Attention, and Memory (OPAM), Vancouver, CA. 

  48. Schultheis, H., & Carlson, L. A. (2013). Determinants of attentional modulation near the hands. Frontiers in Psychology, 4, 858. DOI: https://doi.org/10.3389/fpsyg.2013.00858 

  49. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. DOI: https://doi.org/10.1177/0956797611417632 

  50. Sun, H.-M., & Thomas, L. E. (2013). Biased attention near another’s hand following joint action. Frontiers in Psychology, 4, 443. DOI: https://doi.org/10.3389/fpsyg.2013.00443 

  51. Thomas, L. E. (2013). Grasp posture modulates attentional prioritization of space near the hands. Frontiers in Psychology, 4, 312. DOI: https://doi.org/10.3389/fpsyg.2013.00312 

  52. Thomas, L. E. (2015). Grasp posture alters visual processing biases near the hands. Psychological Science, 26(5), 625–632. DOI: https://doi.org/10.1177/0956797615571418 

  53. Thomas, L. E. (2016). Action experience drives visual-processing biases near the hands. Psychological Science, 28(1), 124–131. DOI: https://doi.org/10.1177/0956797616678189 

  54. Tseng, P., & Bridgeman, B. (2011). Improved change detection with nearby hands. Experimental Brain Research, 209(2), 257–269. DOI: https://doi.org/10.1007/s00221-011-2544-z 

  55. Weidler, B. J., & Abrams, R. A. (2014). Enhanced cognitive control near the hands. Psychonomic Bulletin & Review, 21(2), 462–469. DOI: https://doi.org/10.3758/s13423-013-0514-0 

  56. Whishaw, I. Q., Sacrey, L.-A. R., Travis, S. G., Gholamrezaei, G., & Karl, J. M. (2010). The functional origins of speech-related hand gestures. Behavioural Brain Research, 214(2), 206–215. DOI: https://doi.org/10.1016/j.bbr.2010.05.026 

  57. Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid P-hacking. Frontiers in Psychology, 7, 1832. DOI: https://doi.org/10.3389/fpsyg.2016.01832 

Peer review comments

The author(s) of this paper chose the Open Review option, and the peer review comments are available at: http://doi.org/10.1525/collabra.167.pr