In much of modern experimental psychology research, participants view stimuli on a computer and provide responses by pressing buttons. For instance, recently published studies have had participants use button or key presses to indicate their judgment of whether something is to the left or right (von Hecker, Klauer, Wolf, & Fazilat-Pour, 2015), whether something is present or absent (Schwarz & Miller, 2016), which way a stimulus is oriented (Atas, San Anton, & Cleeremans, 2015), and whether a stimulus is the same as or different from one recently presented (Rac-Lubashevsky & Kessler, 2016). Experimental psychology’s common approach of using computers with standard input devices capitalizes on the flexible human ability to map meaningful judgments to meaning-free symbols and actions. The approach is particularly well-suited to a view of mind that treats the body and its actions as outputs for mental processes but not an integral part of them. However, there is growing concern in social psychology, cognitive science, and neuroscience that such a view of mind underestimates the role of the body in cognition. The idea of embodied mind is that the body and its actions are important components of cognitive and social information processing (for reviews, see Niedenthal, 2007; Wilson, 2002; Wilson & Golonka, 2013). From this perspective, limiting human behavior to button presses risks limiting the kind of cognition we aim to study. Rather than conforming behavior to the input devices we currently have available, researchers in this area have an interest in finding tools that allow for a wider variety of motoric responses. Here we present a way to use an electromyographic (EMG) signal as a computer input device, enabling participants to control a computer by contracting muscles that are not usually used for that purpose, but which may play a role in conceptual and affective processing that researchers seek to examine.

EMG has long been used in psychophysiology and social neuroscience research, although not as a means of controlling a computer. It is well established that positive and negative affect tend to produce activation of the muscles used for smiling (zygomaticus major) and for brow furrowing (corrugator supercilii) respectively (Larsen, Norris, & Cacioppo, 2003; Tassinary, Cacioppo, & Vanman, 2007), and this effect emerges even when participants are instructed to keep their facial muscles relaxed (Dimberg, Thunberg, & Grunedal, 2002). As a result, researchers use activation of these facial muscles as an indicator of positive and negative affect, allowing them to investigate the role of affect in social cognition. For example, studies have shown that participants who score higher on measures of racism also tend to show increased corrugator activation in response to African American faces, a finding that addresses the affective components of racism (Vanman, Ryan, Pederson, & Ito, 2013). In addition, participants with post-traumatic stress disorder show robust brow EMG responses to trauma-related cues, leading Pole (2007) to suggest that the face may hold “unappreciated diagnostic information” for clinicians. Because EMG can detect subtle muscle activation that may happen automatically and without awareness, it can be useful in capturing aspects of a response that participants may be unable or unwilling to express verbally.

EMG has been especially valuable for research on embodied simulation and mimicry, suggesting a role for the motor system in social interaction. Several studies have used EMG to show that people subtly and spontaneously mimic the motor movements of others. Hofree, Urgen, Winkielman, and Saygin (2015) observed EMG activation in participants’ arms while they watched others wave, even when they were not supposed to perform the action themselves, and this effect held whether they observed humans or robots waving. Furthermore, a large literature indicates that people spontaneously mimic the facial expressions of others. In a series of experiments, Dimberg (1997, 2007; Dimberg & Karlsson, 1997; for general discussion see also Dimberg, 1988) measured EMG activity from subjects’ corrugator and zygomaticus muscles as they viewed images of human facial expressions. In these studies, subjects showed greater activity over the corrugator supercilii when viewing angry faces than when viewing happy faces, and greater activity over the zygomaticus major when viewing happy faces than when viewing angry faces. Dimberg, Thunberg, and Elmehed (2000) elicited similar responses from subjects who were exposed to facial expressions unconsciously, using a backward-masking procedure that displayed emotional expressions for only 30 ms. The same effects were found in a group of participants who were directly instructed not to respond to the faces they saw (Dimberg et al., 2002). Further evidence of the automaticity of facial mimicry and its importance to social interaction comes from research comparing the behavior of typically developing individuals with that of individuals who have autism spectrum disorder (ASD).
McIntosh, Reichmann-Decker, Winkielman, and Wilbarger (2006) found that participants with and without ASD were able to mimic facial expressions when instructed to do so, but only those without ASD did so spontaneously (see also Beall, Moody, McIntosh, Hepburn, & Reed, 2008).

These examples illustrate the usual use of EMG in research: participants engage in some psychological task while EMG is passively recorded. To our knowledge, psychologists have not previously used the EMG signal as a way for participants to control a computer deliberately. Such an approach would make it possible to design experiments in which participants can respond not only with button presses, but with a wide range of effectors. Because deliberate muscle contractions generate large signals, it is possible to detect them even without sophisticated EMG systems. Here we describe a simple, inexpensive method for using EMG to collect reaction-time data on deliberate muscle contractions. We then present the results of an experiment that demonstrates the efficacy of the system as a research tool.

System Description

We used two MyoWare Muscle Sensors by Advancer Technologies and a MaKey MaKey Classic microcontroller by MaKey MaKey, as shown in Figure 1. Developed for gamers and robotics hobbyists, the MyoWare is an inexpensive (~$38 USD), two-inch-long wearable board that attaches directly to the skin with standard adhesive-backed, snap-on electrodes. The MaKey MaKey Classic is a modified Arduino-based development board that can be programmed to send keypress events to the computer. The MyoWare boards are connected to the MaKey MaKey, from which they are powered at 5 volts and to which they send their amplified, rectified, and integrated EMG signals. The outgoing MyoWare EMG envelope signal ranges from 0 to 4.8 volts, which the MaKey MaKey’s analog-to-digital converter maps to a scale of 0 to 999 (about 4.9 mV per unit). We programmed the MaKey MaKey to sample EMG activity from the sensors every 50 ms and to compare each sample to a predetermined threshold (thresholding procedure described below). If the sample is over threshold, the MaKey MaKey sends a keypress event to the computer, which responds as it would to an actual keyboard event. The thresholded EMG signal thus replaces the keyboard as an input device, and events are stored in the resulting data file as they would be in a typical keyboard-based experiment. Detailed assembly instructions and program code are provided in Crawford and Vavra (2016).

Figure 1 

MyoWare sensors with MaKey MaKey microcontroller.
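The sampling-and-threshold logic described above can be sketched in a few lines. The following is an illustrative Python model only, not the actual firmware (the real program is Arduino code provided in Crawford & Vavra, 2016); the function names and example values are our own:

```python
# Illustrative model of the firmware logic: map the 0-4.8 V EMG envelope onto
# the MaKey MaKey's 0-999 ADC scale, then emit a keypress for every 50-ms
# sample that exceeds the response threshold.

ADC_MAX = 999   # MaKey MaKey analog-to-digital output range
V_MAX = 4.8     # maximum MyoWare envelope voltage

def volts_to_adc(volts):
    """Map the 0-4.8 V envelope signal to 0-999 ADC units (~4.9 mV per unit)."""
    return round(volts / V_MAX * ADC_MAX)

def keypress_events(samples, threshold):
    """For each sample, decide whether a keypress event would be sent:
    True whenever the sample exceeds the response threshold."""
    return [s > threshold for s in samples]

# Example: a brief deliberate contraction crossing a threshold of 333 ADC units
readings = [volts_to_adc(v) for v in (0.1, 0.4, 2.5, 3.0, 0.2)]
events = keypress_events(readings, threshold=333)
# events -> [False, False, True, True, False]
```

In the actual system, each True sample results in the MaKey MaKey sending a keypress to the host computer, which the experiment software records like any keyboard response.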


We tested the system in an experiment in which participants judged whether presented faces were happy or angry and communicated these judgments via contractions of their cheek and brow muscles. On some trials (congruent condition), participants contracted the cheek to indicate that the face looked happy and the brow to indicate that it looked angry; on other trials (incongruent condition), they used the opposite mapping of muscle to judgment. We adopted this paradigm to test the system because clear predictions can be derived from the facial mimicry literature: responses should be faster in the congruent than in the incongruent condition (see Dimberg et al., 2002). This is because in the congruent case, the automatic muscle activation triggered by mimicry facilitates the deliberate muscle contraction that the task requires, whereas in the incongruent case, the automatic response must be overridden in order to generate the required response.


Method

Participants

This research was conducted with approval from the University of Richmond’s Institutional Review Board and in accordance with APA’s code of ethics. Seventeen male participants were recruited from the University of Richmond Psychology participant pool. Two participants were dropped because the system failed to consistently collect responses, leaving incomplete data.

Materials

The stimuli were digital photographs from the NimStim Face Stimulus Set (Tottenham et al., 2009) showing happy and angry expressions made by thirty-six models. The experiment was presented on a Dell computer using E-Prime software (Psychology Software Tools, Pittsburgh, PA) with the two-channel EMG system described above and in Crawford and Vavra (2016).

Procedure

Participants were told that during this experiment we would be putting sensors on their face, allowing us to record activation of the muscles directly under the sensors. After informed consent procedures, the experimenter showed participants the areas above the left eyebrow and along the left cheek where the sensors would be placed, and participants wiped these areas thoroughly with a disposable electrode prep pad. The experimenter then placed one MyoWare sensor above the left brow and one along the left cheek, approximating the placement recommended by Cacioppo and colleagues (see Cacioppo, Tassinary, & Fridlund, 1990; Fridlund & Cacioppo, 1986; Tassinary et al., 2007). Reference electrodes for each sensor were placed as close together as possible on the temple.

To set response thresholds, the experimenter read the EMG values from the serial monitor while repeatedly instructing participants to fully contract their eyebrows, to reduce their effort by 50%, and to relax the muscle completely. The experimenter set the initial threshold so that it would be higher than the maximum value recorded while participants relaxed, lower than the minimum value recorded while they contracted with 100% effort, and approximately at the level recorded when they used 50% effort. The experimenter then tested the threshold by asking participants to use brow contractions to try to start and stop typing the letter B several times, and asked them to judge how well they could control it. If the participant had trouble getting the letter to appear, the threshold was adjusted downward; if the letter appeared without their intention, it was adjusted upward. This process was iterated until the participant reported that they could type with a deliberate (but not difficult) contraction of the muscle and did not find themselves typing accidentally. The procedure was then repeated for the cheek sensor. Throughout this procedure, no mention was made of “smiling” or “furrowing.” The average threshold for the signal from the cheek was 333.33 (SD = 69.86), and the average threshold for the signal from the brow was 466.67 (SD = 91.94).
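The calibration rule above can be expressed as a small helper. This is a hedged sketch, not the procedure's actual implementation (thresholds were set manually by the experimenter); the function names and the adjustment step size are our assumptions:

```python
# Sketch of the threshold calibration described above, in ADC units (0-999).
# The experimenter actually did this by eye from the serial monitor.

def initial_threshold(relaxed, half_effort, full_effort):
    """Choose a starting threshold: above the maximum relaxed reading, below
    the minimum full-effort reading, and as close as possible to the mean
    50%-effort reading."""
    lo = max(relaxed) + 1          # stay above resting activity
    hi = min(full_effort) - 1      # stay below a deliberate contraction
    mid = sum(half_effort) / len(half_effort)
    return max(lo, min(mid, hi))   # clamp the 50%-effort level into (lo, hi)

def adjust_threshold(threshold, step=25, too_hard=False, accidental=False):
    """One tuning iteration: lower the threshold if the participant cannot
    trigger keypresses, raise it if keypresses occur unintentionally.
    The step size of 25 units is an illustrative assumption."""
    if too_hard:
        return threshold - step
    if accidental:
        return threshold + step
    return threshold
```

Iterating `adjust_threshold` until the participant reports comfortable, intentional control mirrors the manual tuning loop described above.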

The experiment consisted of two blocks of 72 trials each, comprising the happy and angry expressions of the 36 models. Participants were told to judge as quickly as possible whether the presented face was happy or angry. In the congruent block, they were instructed to indicate that the face was angry by contracting their brow muscle and that it was happy by contracting their cheek muscle, using the same muscle contractions they had been shown during the thresholding procedure. In the incongruent block, they were instructed to indicate their judgments using the opposite pairing of facial muscles to responses. The blocks were randomly ordered, with a short break between them during which the new instructions were explained, and within each block trials were randomly ordered. Each block began with eight practice trials.

Results and Discussion

The median reaction time (RT) of each participant’s responses was calculated separately for the congruent and incongruent blocks. For every participant, the median RT was faster in the congruent than in the incongruent condition. The average difference between the two response conditions was almost a quarter of a second (congruent condition mean: 679.8 ms, SE = 36.18, 95% credible interval = 602.2–757.4; incongruent condition mean: 899.9 ms, SE = 66.05, 95% credible interval = 758.2–1041.5; see Figure 2). Using JASP (2016) software with a standard prior on the effect size (δ ~ Cauchy(.707)), we calculated a Bayes factor of 433.864, indicating that the data are far more likely under the alternative hypothesis that the conditions differ than under the null hypothesis that they do not. All data and materials are available for download at

Figure 2 

Slanted colored lines indicate individual participants’ median reaction times. Horizontal black bars indicate average by condition and vertical black bars indicate 95% credible intervals.
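The data reduction described above (one median RT per participant per condition) is straightforward; a minimal sketch, using invented example data rather than the study’s:

```python
from statistics import median

def median_rts(trials):
    """Reduce trial-level data to one median RT per (participant, condition).
    `trials` is a list of (participant_id, condition, rt_ms) tuples."""
    by_cell = {}
    for pid, cond, rt in trials:
        by_cell.setdefault((pid, cond), []).append(rt)
    return {cell: median(rts) for cell, rts in by_cell.items()}

# Invented data for one participant in each condition:
trials = [(1, "congruent", 650), (1, "congruent", 700), (1, "congruent", 720),
          (1, "incongruent", 880), (1, "incongruent", 910), (1, "incongruent", 950)]
meds = median_rts(trials)
# meds[(1, "congruent")] == 700; meds[(1, "incongruent")] == 910
```

The resulting per-participant medians are what entered the Bayesian paired comparison reported above.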

As predicted, the speed with which participants could indicate the expression of presented faces depended on which muscles they used to respond. They were substantially faster when instructed to use the cheek to indicate that the face was happy and the brow to indicate that it was angry than when instructed to use the opposite mapping. These results are consistent with prior studies of facial mimicry (e.g., Dimberg et al., 2002; Korb, Grandjean, & Scherer, 2010) showing automatic mimicry of presented faces. The results suggest that such automatic muscle activations facilitate deliberate responses made with congruent muscles and slow responses made with incongruent muscles. We note that another contributing factor may be that the concepts of “angry” and “happy” are associated with these muscles, so the effects observed here may also stem from the compatibility or incompatibility of the required motoric response with the decision being conveyed. These possibilities are not mutually exclusive, and it is likely that both facial mimicry and conceptual compatibility contribute to the reaction time difference observed here. The relative contribution of each could be discerned in future research using both face and non-face stimuli.

Here we demonstrate that it is possible to use an EMG–computer interface to collect reaction time data from muscle contractions, forgoing the usual keyboard or button-box input device. We have not assessed whether this system would be effective for detecting subtle changes in muscle activation; we have focused on deliberate muscle contractions, which create large signal changes that are easily detected. This approach may be useful not only for collecting reaction time data, but for any task in which participants interact with a computer. Researchers can use it to have participants control the movement or location of an object, or change an object’s shape, size, or color, and this can be done using any muscle over which it is feasible to place the sensors. In addition, it may prove useful when a participant’s hands cannot be used to collect responses, perhaps due to disability or because the hands are occupied with a concurrent task. The system described here expands the methodological possibilities for examining stimulus–response compatibility effects, mimicry, and simulation. This approach is well suited to psychology’s growing interest in the embodiment of mind.

Data accessibility statement

Data and analysis files are available on Open Science Framework.

Reaction Time Data:

Data reduction to reaction time medians by subject and condition:

Bayes factor analysis file: