Accepted: 16 May 2024 / Published online: 12 June 2024
© The Psychonomic Society, Inc. 2024
Abstract
Memory for words that are drawn or sketched by the participant, rather than written, during encoding is typically superior. While this drawing benefit has been reliably demonstrated in recent years, there has yet to be an investigation of its neural basis. Here, we asked participants to either create drawings, repeatedly write, or list physical characteristics depicting each target word during encoding. Participants then completed a recognition memory test for target words while undergoing functional magnetic resonance imaging (fMRI). Behavioural results showed memory was significantly higher for words drawn than written, replicating the typical drawing effect. Memory for words whose physical characteristics were listed at encoding was also higher than for those written repeatedly, but lower than for those drawn. Voxel-wise analyses of fMRI data revealed two distributed sets of brain regions more active for items drawn relative to written, the left angular gyrus (BA 39) and bilateral frontal (BA 10) regions, suggesting integration and self-referential processing during retrieval of drawn words. Brain-behaviour correlation analyses showed that the size of one's memory benefit for words drawn relative to written at encoding was positively correlated with activation in brain regions linked to visual representation and imagery (BA 17 and cuneus) and motor planning (premotor and supplementary motor areas; BA 6). This study suggests that drawing benefits memory by coactivating multiple sensory traces. Target words drawn during encoding are subsequently remembered by re-engaging visual, motoric, and semantic representations.
Keywords Drawing effect · fMRI · Encoding technique · Memory · Neuroimaging
Introduction
Researchers have long been interested in documenting the effectiveness of various encoding techniques in influencing recall of to-be-remembered information. Numerous studies have demonstrated enhancements in episodic memory performance from techniques such as deep level of processing (Craik & Lockhart, 1972), generation (Slamecka & Graf, 1978), enactment (Engelkamp & Krumnacker, 1980), and production (MacLeod et al., 2010), to name a few. These all reliably increase the amount of information that individuals can later recall, in comparison with more passive encoding strategies such as reading or even writing out to-be-remembered information.
Drawing is another means of encoding information, and it has been shown to enhance memory for words in younger adults (Meade et al., 2019; Wammes et al., 2016, 2017, 2019; Wammes, Meade, et al., 2018a; Wammes, Roberts, et al., 2018b) and cognitively healthy older adults (Meade et al., 2018), as well as those with probable dementia (Meade et al., 2020). Specifically, when asked to encode common nouns by either drawing a picture or writing out the word, both younger and older adults demonstrate superior recall and recognition performance for drawn information. Past work has also demonstrated that drawing a sketch, compared to writing during encoding, improves memory for to-be-remembered pictures (Wammes et al., 2016), academic terms (Wammes et al., 2017), and autobiographical events (Tran et al., 2022). One account of this effect is that drawing during encoding promotes the integration of various means of representing information: pictorial, motoric, and semantic (Wammes et al., 2019).
Such an account for the drawing benefit incorporates aspects of the dual-coding theory put forth by Paivio (1971) to explain the well-known picture superiority effect - the finding of better memory for information when it is presented in picture than word format. Paivio (1971) theorized that pictures are likely processed visually, in terms of the image, as well as verbally, in terms of the label automatically given to the image when it is viewed. It is the dual nature of such representations, verbal and visual, that is believed to confer the memorial benefit documented for pictorial information. This explanation highlights that encoding techniques promoting incorporation of additional means of representation of to-be-remembered material can enhance later recall.
It has been suggested (Meade et al., 2019) that older adults benefit from drawing because this task incorporates elements of encoding techniques that are known to enhance memory in this population, namely semantic generation (Craik & McDowd, 1987), inclusion of pictorial information (Ally et al., 2008; Cherry et al., 2012; Luo et al., 2007; Luo & Craik, 2008; Skinner & Fernandes, 2009; Winograd et al., 1982), and motoric enactment (Feyereisen, 2009). The finding that information presented in picture format is better remembered in older adults and those with probable dementia provides insight into how, from a neural perspective, drawing might be conferring a benefit to memory. As dementia of the Alzheimer's type progresses, neurons in the hippocampus and entorhinal cortex within the medial temporal lobes atrophy and lose efficiency due to accumulation of plaques and tangles (Gómez-Isla et al., 1996). In contrast, the primary visual areas and ventral visual pathway remain relatively intact, at least until more severe stages of the disease (Braak & Braak, 1991). In line with this, fMRI studies have demonstrated that while individuals with dementia exhibit poorer activation within the medial temporal lobes during visual memory tasks, the activation patterns in occipital regions remain similar to healthy controls (Golby et al., 2005; Koenig et al., 2008). Given this, one account for why drawing is so beneficial to memory, even in patients with dementia, is that it relies on the relatively preserved posterior regions of the brain involved in visual perceptual processing to mediate performance (Meade et al., 2020). This account is similar to an explanation put forward to explain why the picture superiority effect is also preserved in such populations (Ally, 2012; Ally et al., 2008; Ally & Budson, 2007).
It seems likely that reliance on visuo-perceptual representations is a key factor underlying the benefit of drawing to memory. Indeed, recent fMRI studies have demonstrated that drawing, as a complex visuomotor activity, evokes neural activity in V1 and V2 as well as the lateral occipital cortex, parietal sites, pre-central gyrus, and motor regions (Fan et al., 2020; Vinci-Booher et al., 2019). Drawing has also been associated with broader activation of the cerebellum, somatosensory regions, frontal regions, and the dorsal visual stream (Fan et al., 2020; Griffith & Bingman, 2020; Vinci-Booher et al., 2019). Thus, the neural basis of the drawing effect likely stems from the creation of a visuo-spatial representation of to-be-remembered information (i.e., a target word) that supplements a verbal one.
Behavioural studies, however, suggest that the drawing effect is not due simply to a picture superiority effect, as the benefit to memory is greater than that elicited by viewing target pictures or engaging in mental imagery during encoding (Wammes et al., 2019). Similarly, the drawing effect cannot be fully accounted for by engagement of a deeper level of processing (as in Craik & Lockhart, 1972) of materials at encoding, as the memory benefit is larger than when semantic elaborative processing is engaged at encoding (Wammes et al., 2017, 2019). Wammes and colleagues put forward a 'component-integration' account for the drawing effect. They suggest that drawing not only engages visual, semantic, and motoric processing during encoding, but that the act of drawing seamlessly integrates these traces. It is this multi-modal memory trace that allows for greater and more reliable performance on a subsequent retrieval test. The trace is more easily reactivated during retrieval, perhaps because there are multiple routes to remembering the information, or because its strength is overall greater compared to when information is encoded with fewer modes of representation.
Memory reactivation
There is evidence to suggest that the patterns of brain activity engaged during encoding will become re-activated again when remembering that experience. In fact, a feature of vivid remembering is the reactivation of the same cortical areas engaged during the initial perception of an encoded stimulus (Buckner et al., 2001). In general, memory retrieval is believed to be accompanied by similar sensory-specific cortical activation as that produced during the initial encoding of an item or event (Norman & O'Reilly, 2003; Rubin & Greenberg, 1998; Woodruff et al., 2005). For example, studies in which participants associate words with pictures, sounds, or faces during encoding and are subsequently asked to make memory decisions to only the word stimulus, have found that secondary visual, secondary auditory, and face processing regions of the brain are activated during retrieval, depending on which stimulus was associated with the word during encoding (Khader et al., 2005; Nyberg et al., 2000; Vaidya et al., 2002; Wheeler et al., 2000).
Here we aimed to determine the neural processing that supports retrieval of memories for words that were drawn, written, or semantically elaborated upon during encoding. In so doing we can not only specify the neural basis of the drawing effect during memory retrieval, but also examine whether there is neural support for the idea that drawing engages multiple forms of representation. A similar logic was applied in other studies. For example, studying pictures results in subsequent reactivation of occipital regions involved in visual perceptual processing even when the retrieval test presents these targets as words (Vaidya et al., 2002). Along the same lines, encoding information using enactment (engaging in physical movement that depicts the to-be-remembered information during encoding) results in reactivation of motor processing regions during subsequent retrieval of enacted words (Krönke et al., 2013; Macedonia & Mueller, 2016; Mayer et al., 2015; Roberts et al., 2022). A similar principle can also be seen during retrieval of information initially encoded using semantic elaboration. During retrieval, neural activation is higher in inferior frontal regions implicated in semantic processing compared to when information is encoded more shallowly (Otten et al., 2001; Poldrack et al., 1999; Staresina et al., 2009; Wagner et al., 1998). As such, examining neural activation at the time of retrieval should illuminate the processing involved in encoding and representing drawn information in memory.
Current study
In the current study we aimed to determine what neural activity supports memory for drawn information relative to words encoded by writing or semantic elaboration (listing descriptive characteristics pertaining to the to-be-remembered word). We asked participants to encode words outside of the scanner, and to later make recognition memory decisions to words presented while their brains were scanned using functional magnetic resonance imaging (fMRI).
We expected that recognizing drawn words would recruit brain regions involved in visual, motor, and semantic processing to a greater extent than when making memory decisions to words that were written or those for which physical characteristics were listed at encoding. We also expected greater activity following drawing in the bilateral extrastriate visual cortex, namely fusiform, lingual, middle occipital, and inferior temporal gyri, based on previous work examining memory for pictures relative to words (Vaidya et al., 2002).
Given that complex motor plans would be required by drawing during encoding, we expected to observe activity in the primary motor cortex and sensorimotor networks (such as the supplementary motor area; SMA) during later retrieval, as these areas have been linked to memory following enactment of actions (Krönke et al., 2013; Macedonia & Mueller, 2016; Macedonia et al., 2011; Straube et al., 2009). We also reasoned that words written or described physically at encoding might engage these regions as well because all three tasks require physical motor control of a pencil. The motor processing engaged while drawing, however, is arguably more unique than for writing, and as such may engage additional supplementary motor regions involved in meaningful motoric elaboration (Macedonia et al., 2011; Straube et al., 2009). Given that semantic processing may be engaged when planning or elaborating on an image to be drawn during encoding, we reasoned that activity may be greater in inferior frontal and medial temporal regions, posterior parahippocampal gyrus (Brewer et al., 1998; Wagner et al., 1998), and left medial temporal lobe (Köhler et al., 2000), which have been found to be recruited for deep semantic, relative to shallow, processing.
Method
Participants
Of the recent studies that explored the benefit of drawing at encoding on recognition memory, the smallest effect size (memory for draw > write) was d = 0.67 (Wammes et al., 2016, Experiment 5). We therefore performed a power analysis (matched pairs, two-tailed, α = .05, d = 0.67) using the pwr package (v. 1.3.0; Champely et al., 2020) for R (R Core Team, 2020), which indicated a required sample size of 20 participants to achieve 80% statistical power.
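The reported sample-size calculation can be sanity-checked without the pwr package. The sketch below (in Python rather than R, purely for illustration) estimates power for a two-tailed paired t-test by Monte Carlo simulation, using the effect size, sample size, and alpha level given above.

```python
import math
import random
import statistics

def paired_t_power(d=0.67, n=20, sims=4000, seed=1):
    """Monte Carlo power estimate for a two-tailed paired t-test.

    Simulates paired difference scores from Normal(d, 1), so the true
    standardized effect equals d, then counts how often the t statistic
    exceeds the two-tailed critical value.
    """
    rng = random.Random(seed)
    t_crit = 2.093  # critical t for alpha = .05 two-tailed, df = n - 1 = 19
    hits = 0
    for _ in range(sims):
        diffs = [rng.gauss(d, 1.0) for _ in range(n)]
        t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
        if abs(t) > t_crit:
            hits += 1
    return hits / sims

print(paired_t_power())  # an estimate near the targeted 0.80
```

Note that the critical value is hard-coded for df = 19, so this sketch only applies to the n = 20 design described here.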
A total of 20 undergraduate students (16 female; Mage = 22 years, SD = 2.31) were recruited through email invitation and poster advertisements at the University of Waterloo. All participants met with the researcher on campus in advance of their study participation date to receive detailed information about the study procedure and to complete an MRI checklist to ensure eligibility for scanning prior to participating. On the day of the study, participants gave written informed consent prior to their participation, and were remunerated $25 CAD following the study. One additional participant took part in the study, but their data were not analyzed due to an experimenter error that resulted in data loss.
Materials
Word list
Ninety words were selected from the verbal labels for Snodgrass images (Snodgrass & Vanderwart, 1980) to ensure that all words could be readily drawn. Words ranged in frequency from 2.48 to 6.01 (M = 4.14, SD = 0.61) using the wordfreq Python library (Speer et al., 2018), in length from three to 12 letters (M = 5.5, SD = 1.19), and in number of syllables from one to four (M = 1.64, SD = 0.83). All words were common nouns of objects deemed by the researchers to be highly familiar and prevalent in everyday life (e.g., table, apple, bird).
Paper and pencil materials
Each participant completed the drawing, writing, and listing encoding tasks using a 4-in. × 6-in. pad of paper and a pencil.
Tone identification task
Sound files representing low-, medium- and high-pitched tones were created using Audacity software (Audacity Team, 2021), such that each sine wave tone was exactly 500 ms in duration, at frequencies of 350, 500, and 650 Hz, respectively. This tone identification task was used as a filler task in between encoding and the recognition test, matching prior work on the drawing effect (e.g., Wammes et al., 2016).
Neuropsychological evaluations
Mill Hill Vocabulary Scale Participants were administered set B of the Mill Hill Vocabulary Scale (Raven et al., 1976), in which one must select the correct synonym from a set of six alternatives. On average, participants responded correctly to 54% (SD = 12%; range 30% to 79%) of the items, indicating fluency in the English language (Raven et al., 1976).
NART The National Adult Reading Test (NART; Nelson, 1982) was used to provide an estimate of intelligence. The NART consists of a 50-item list of irregularly pronounced English words, which participants read aloud. On average, participants correctly pronounced 62% (SD = 16%) of the words, indicating IQ scores within a typical range (defined as NART scores above or around 50%; Bright et al., 2018).
Program and presentation equipment
Stimulus presentation during encoding was accomplished using E-Prime software (v. 3.0.3.60; Psychology Software Tools, 2016) presented on a Windows laptop. The stimuli in the retrieval task were presented using an Avotec Silent Vision (Model SV-7021) fiber-optic visual presentation system with binocular projection glasses controlled by a computer running E-Prime software synchronized to trigger pulses from the magnet. Responses on the recognition test were made with participants' right index and middle finger using a Lumina SRB Model 200A MRI response pad with SRBox input to E-Prime.
Procedure
The entirety of the study took place at Grand River Hospital, located in Kitchener, Ontario. A small side room next to the MRI suite was used for the initial encoding tasks, as well as for completion of neuropsychological questionnaires and debriefing following the scanning session. The procedures and materials for this study were approved by the Office of Research Ethics at the University of Waterloo and the Tri-Hospital Research Ethics Board at Grand River Hospital. All data, analysis code, experiment programs, and other materials are available on the Open Science Framework (OSF; https://osf.io/74mdc/).
fMRI scanning parameters
At the beginning of the scanning session, a whole-brain T1-weighted anatomical image was collected for each participant (TR = 7.5 ms; TE = 3.4 ms; voxel size = 1 × 1 × 1 mm³; FOV = 240 × 240 mm²; 150 slices; no gap; flip angle = 8 degrees). Each recognition test run was scanned using an event-related design. Functional data were collected using gradient echo-planar T2*-weighted images acquired on a Philips 1.5 Tesla machine (TR = 2000 ms; TE = 30 ms; slice thickness = 5 mm with no gap; 26 slices; FOV = 200 × 200 mm²; voxel size = 2.75 × 2.75 × 5 mm³; flip angle = 70 degrees). Each of the six experimental runs took 120 s to complete and had 26 slices per volume, 60 volumes total, consisting of five target words from each encoding task (15 total), 15 lure words, and 19-24 fixation-cross baseline trials (the number of fixation crosses varied due to their random presentation times of 1-6 s). Before each experimental run began, there was an 8-s steady-state period during which a fixation cross was presented but no functional data were recorded.
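As a quick consistency check, the acquisition parameters above fit together arithmetically; a minimal Python sketch (for illustration only, not part of the analysis pipeline):

```python
# Functional run length: 60 volumes at a TR of 2 s
TR_s = 2.0
volumes_per_run = 60
run_length_s = volumes_per_run * TR_s
assert run_length_s == 120.0  # matches the stated 120-s run length

# Word trials per run: 5 targets from each of 3 encoding tasks, plus 15 lures
word_trials = 5 * 3 + 15
assert word_trials == 30

# Acquisition voxel volume (anisotropic, before resampling)
voxel_mm3 = 2.75 * 2.75 * 5.0
print(voxel_mm3)  # 37.8125
```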
Encoding phase
The encoding phase was completed outside of the scanner, while the participant was seated in a chair at a table. Prior to beginning, participants gave written informed consent. At the beginning of the encoding phase, participants completed a brief practice session to familiarize them with the encoding and retrieval tasks and were encouraged to ask questions for clarification. Practice consisted of three encoding trials, one for each condition, followed by a six-item old-new recognition test; none of the words in this practice phase were included in the experimental phase. The duration of the encoding phase was approximately 20 min including the practice session.
From the master stimuli list of 90 words, 30 were randomly selected to be drawn, 30 written, and 30 listed, with the presentation of the encoding trial types intermixed. On each encoding trial, the prompt appeared in the center of the screen above the target word. Participants then had 10 s to perform either the drawing, writing, or listing task, depending on which was indicated by the prompt. A 500-ms tone alerted them to stop performing the task and prepare for the next target word and prompt. A blank screen was then presented for 4 s to give participants time to flip their sheet of paper to the next page before the next word and prompt appeared. Before the practice phase, participants were informed of the time constraints for each item and that they would hear a tone to indicate the end of the trial and the appearance of the ensuing prompt/target word for the next trial.
Participants were told that depending on the prompt presented during each individual trial, they were to either 'draw', 'write', or 'list' in response to the target word on the pad of paper provided. For the 'draw' prompt, participants were instructed to draw a picture illustrating the object that the word on the screen represents, and to continue adding detail for the full duration of the trial. For the 'write' prompt, participants were instructed to clearly and carefully write out the word multiple times. For the 'list' prompt, participants were instructed to write out a list of physical descriptive characteristics for the object the word represents, and were given the example that for 'mouse' they might list words like 'furry', 'grey', 'long tail', 'small', etc.
Retention interval
Following the end of the encoding phase, participants immediately completed a 2-min tone classification filler task to prevent immediate rehearsal of encoded words. In this task, participants were presented three tones (350, 500, or 650 Hz), one at a time for 500 ms each, in a random order, and were asked to indicate via a button press whether it was a low-, medium-, or high-pitched tone. Afterwards, participants changed into MRI-compatible clothing, completed an MRI safety screening assessment form with the technician, and were situated on the MRI scanner bed. Fiber-optic binocular projection glasses were adjusted within the scanner to ensure that participants had a clear view of a sample word presented on the screen. An anatomical scan was then completed, lasting approximately 8 min. Following the anatomical scan, participants were again given instructions on the format of the recognition test, how they should make their responses, and a reminder to limit their head movement as much as possible. The total duration of the retention period between the end of the encoding phase and the beginning of the recognition test was approximately 20 min.
Retrieval phase
The recognition phase began once participants verbally indicated they were comfortable and understood the instructions. Word stimuli were presented through the binocular projection glasses, controlled by a computer running an E-Prime program. In total, the recognition test consisted of 180 words that were presented in an intermixed order, with 90 words from the encoding phase (30 drawn, 30 written, 30 listed) and 90 new/lure words. The recognition test was divided into six separate runs of equal length, each lasting 2 min. Each run contained five words of each encoding type (drawn, written, and listed) and 15 lures, with each word presented for 2 s. Recognition response trials only progressed after 2 s had elapsed, and did so regardless of whether a key-press was made earlier. The onset of word presentation in each run was pseudo-randomized using OptSeq2 (Dale, 1999; Greve, 2006), a tool for automatically scheduling the order and timing of events for rapid-presentation event-related fMRI experiments. In addition to words, there were also fixation crosses presented in a random order, intermixed with the words. Fixation crosses were displayed for a random length of time between 1 and 6 s. To ensure a consistent neuroimaging scan time, the duration of fixation crosses in each run was determined by the number of fixation crosses presented, which ranged from 19 to 24. Because fixation crosses were randomly dispersed throughout the retrieval phase, gaps between stimulus presentations ranged from 0 s in the case of back-to-back word presentations to 1-6 s if a fixation cross was presented in between words (the latter being much more frequently the case). There was a brief break between each run (maximum 30 s) during which the researcher asked the participant over a speaker if they were comfortable and ready to begin the next run. The total duration of the recognition test, including all six runs, was a maximum of 15 min.
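The run structure described above (30 two-second word trials plus 19-24 fixation crosses filling a 120-s run) can be illustrated with a toy scheduler. The actual study used OptSeq2 to optimize event timing; the Python sketch below only demonstrates that the stated budget is satisfiable, and it assumes integer 1-6 s cross durations as a simplification.

```python
import random

def build_run_schedule(n_words=30, word_s=2, run_s=120, seed=0):
    """Toy sketch of one retrieval run: 30 word trials of 2 s each, with
    19-24 fixation crosses of 1-6 s filling the remaining time.
    Illustration only; the study used OptSeq2, not this procedure."""
    rng = random.Random(seed)
    fix_budget = run_s - n_words * word_s  # 60 s of fixation per run
    while True:  # rejection-sample cross durations until they fill the budget
        k = rng.randint(19, 24)
        durs = [rng.randint(1, 6) for _ in range(k)]
        if sum(durs) == fix_budget:
            break
    events = [("word", word_s)] * n_words + [("fix", d) for d in durs]
    rng.shuffle(events)  # intermix words and fixation crosses
    return events

sched = build_run_schedule()
assert sum(dur for _, dur in sched) == 120  # total run length in seconds
```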
Participants were instructed to respond as quickly and accurately as possible to each word presented on the screen by indicating if the word was 'old', meaning they saw it in the encoding phase, or 'new', meaning they did not see the word in the encoding phase. Responses were made using two buttons indicating either 'old' or 'new' on a Lumina MRI response pad. All responses were also manually recorded by the researcher on a sheet of paper as backup. Following completion of the recognition test runs, participants were brought back to the small side room and completed set B of the Mill Hill Vocabulary Scale and the NART with the researcher. Finally, they were debriefed, and given remuneration for their time.
fMRI data preprocessing and analyses
fMRI data were preprocessed and analyzed using AFNI software (v. 20.3.01; Cox, 1996). To begin, each subject's fMRI data were converted from raw DICOM files to seven separate three-dimensional (3D) datasets using the to3d command, with the first representing the anatomical volume and the latter six representing functional data. Each subject's anatomical image was then entered into AFNI's @SSwarper (v. 2.6; Cox, 2022; Saad et al., 2009) tool, which performed skull-stripping and non-linear alignment to the Talairach-Tournoux atlas template (TT_N27_SSW; Talairach & Tournoux, 1988). The results of this process were checked manually via automatically generated quality-control images provided by the program.
Next, data from each subject underwent a customized version of AFNI's afni_proc.py pipeline (v. 7.49; Taylor et al., 2018). It included slice-timing alignment on volumes, alignment of functional data to the anatomical dataset, warping of anatomical data to Talairach standard space, volume registration, and whole-brain masking. Blurring was then performed using a Gaussian kernel with a 6-mm full width at half maximum (FWHM) to increase the signal-to-noise ratio. The original anisotropic 2.75 × 2.75 × 5 mm voxels were resampled to isotropic 2.75 × 2.75 × 2.75 mm voxels with a volume of 20.797 mm³ each. Motion artefacts were detected and censored at a 0.3-mm cut-off, and outlier datapoints were also censored at 5%. Each run of functional data was scaled to a mean of 100 before undergoing a regression analysis to build a model that included timing data from all four possible recognition test response outcomes: correct hits for target items, false alarms for lure items, correct rejections for lure items, and misses for target items. For two participants, neuroimaging data from a single functional run each were missing, but their remaining data were included in analyses.
After each participant's data had undergone this same preprocessing pipeline, all data were entered into a repeated-measures ANOVA using the 3dANOVA2 command (v. 23.0.00; Ward, 2023). In this ANOVA, only correct hits on old target items and correct rejections of lure items were considered. Contrasts were then conducted to compare each of the three target conditions. We also examined contrasts between hits in each condition and correct rejections of lure items (e.g., Draw - New). As a result, voxel-wise analyses were performed for trials in which correct recognition responses were made to items belonging to the Draw (M = 28 hits), List (M = 26 hits), Write (M = 16 hits), or New (M = 83 correct rejections) conditions.
Following the ANOVA, 3dmask_tool was used to form an intersection mask representing voxels present across at least 70% of participants. Next, to calculate the minimum number of voxels needed for cluster-size thresholding, we used the -Clustsim option in 3dttest++ (Cox et al., 2017) for each individual contrast of interest. This modern nonparametric method of determining cluster thresholds uses a non-Gaussian spatial autocorrelation function (ACF) and runs 10,000 simulations to reveal the minimum number of voxels that would be needed for each cluster to achieve significance at the recommended p < .001 level (Woo et al., 2014) while maintaining the false-positive rate at p < .05 (Cox et al., 2017). The minimum number of voxels for significant cluster formation in each contrast varied from 9 to 33, and clusters were formed assuming bi-sided tests with NN = 2.
Finally, the 3dTcorr1D command was used to calculate brain-behaviour Spearman rank correlation maps for each contrast (i.e., Draw vs. Write, Draw vs. List, and List vs. Write). In these analyses, the proportion of words correctly recognized in each condition was tabulated and subtracted from that of the other condition (e.g., Draw - Write). The value of each of these difference scores was then correlated with differences in brain activity for that specific contrast (e.g., Draw vs. Write). Finally, an aggregate group-level correlation map was formed across the whole brain. No brain-behaviour correlations survived when significance was set at p < .001 with a minimum cluster-size threshold of 20 voxels. In the spirit of conducting exploratory analyses that can inform future investigations, the significance threshold was then lowered to p < .01 while maintaining a 20-voxel minimum cluster-size.
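The brain-behaviour statistic here is just a Spearman rank correlation between per-participant difference scores and per-voxel activation differences. A stdlib Python sketch of that computation, using made-up values for five hypothetical participants (not data from the study):

```python
def rank(xs):
    """Return average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical per-participant values: behavioural difference scores
# (Draw hit rate minus Write hit rate) and one voxel's activation difference.
behav = [0.40, 0.15, 0.30, 0.05, 0.25]
bold = [0.80, 0.20, 0.60, 0.10, 0.50]
print(spearman(behav, bold))  # 1.0 here: the two rank orders match exactly
```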
In the voxel-wise and correlational analyses, significant clusters were localized to anatomical brain regions within 2 mm and 3 mm, respectively, of the peak voxel coordinate in the cluster, as determined by the Talairach Daemon atlas (Lancaster et al., 2000) through AFNI's whereami function. In Figs. 2 and 3, functional data are overlaid on a Talairached version of the Colin 27 anatomical dataset (i.e., the TT_N27 template; Holmes et al., 1998). For fMRI images using transparent thresholding (Taylor et al., 2022), see our page on the OSF (https://osf.io/74mdc/).
Results
Behavioural memory performance
To examine memory performance across conditions, a one-way repeated-measures ANOVA was conducted using the afex package (v. 1.2-1; Singmann et al., 2023) for R, with Condition as the independent variable with three levels (Draw, List, and Write) and hit rate on the recognition test as the dependent measure. Mauchly's test indicated that the assumption of sphericity had been violated, W = 0.63, p = .016, ε = .731; therefore, a Greenhouse-Geisser correction was applied to the ANOVA. Results revealed a significant main effect of Condition, F(1.46, 27.79) = 74.40, MSE = 0.02, p < .001, ηp² = .80, BF₁₀ = 3.04e+10. Paired-samples t-tests with Bonferroni adjustments showed that memory was higher in the Draw relative to both the Write condition, t(19) = 9.44, p < .001, d = 2.11, 95% CI [1.30, 2.90], BF₁₀ = 9.66e+5, and the List condition, t(19) = 3.51, p = .007, d = 0.79, 95% CI [0.27, 1.28], BF₁₀ = 17.54 (see Fig. 1). Items from the List condition were also better recognized than those in the Write condition, t(19) = 8.95, p < .001, d = 2.00, 95% CI [1.22, 2.76], BF₁₀ = 4.43e+5 (see Table 1). In each case, large values for the Bayes factors indicate strong or extreme evidence in favor of the alternative model.
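For reference, the paired-samples statistics reported above follow the standard repeated-measures formulas: t is the mean difference divided by its standard error, and Cohen's d is computed here as the mean difference divided by the SD of the differences (one common convention for paired designs; the study's exact formula is not specified here). A small Python sketch with hypothetical hit rates, not the study's data:

```python
import math
import statistics

def paired_t_and_d(x, y):
    """Paired-samples t statistic and a repeated-measures Cohen's d
    (mean difference / SD of the differences)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    md = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    t = md / (sd / math.sqrt(n))
    d = md / sd
    return t, d

# Hypothetical hit rates for four participants (Draw vs. Write):
draw = [0.95, 0.90, 1.00, 0.85]
write = [0.55, 0.60, 0.50, 0.65]
t, d = paired_t_and_d(draw, write)
print(round(t, 2), round(d, 2))  # 5.42 2.71
```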
fMRI results
Using a significance threshold of p < .001 while controlling family-wise error at p < .05, and with minimum cluster sizes set separately for each contrast (ranging from 9 to 33 voxels), brain activity was examined when correct memory responses were made to items belonging to the Draw, List, and Write conditions (see Table 2). Words from the Draw and List conditions elicited neural activity distinct from correct-rejection responses to New items, whereas those from the Write condition did not produce any significant clusters that survived thresholding.
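Cluster-extent thresholding of this kind can be illustrated compactly: voxels are first thresholded at an uncorrected p value, then clusters of contiguous suprathreshold voxels smaller than a minimum extent are discarded. The sketch below uses Python and scipy.ndimage on a synthetic statistic map; the actual analysis used AFNI's cluster-simulation tools to set the per-contrast minimum sizes.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
stat_map = rng.normal(0.0, 1.0, (30, 30, 30))   # synthetic z-statistic map
stat_map[5:9, 5:9, 5:9] = 6.0                    # a genuine 4x4x4 (64-voxel) cluster
stat_map[20, 20, 20] = 6.0                       # an isolated suprathreshold voxel

# Step 1: voxel-wise threshold (z > 3.29 roughly corresponds to p < .001)
suprathreshold = stat_map > 3.29

# Step 2: label contiguous clusters, then drop those below the minimum extent
labels, n_clusters = ndimage.label(suprathreshold)
sizes = ndimage.sum(suprathreshold, labels, index=range(1, n_clusters + 1))
keep = np.where(sizes >= 20)[0] + 1              # 20-voxel minimum; labels are 1-based
surviving = np.isin(labels, keep)

# The isolated voxel is removed; only the large cluster survives
print(int(surviving.sum()), bool(surviving[20, 20, 20]))
```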
When comparing brain activity for correctly recognized words from the Draw relative to Write encoding conditions, significant clusters of positive activity peaked in the left and right medial frontal gyri and the left middle temporal gyrus (see Fig. 2). Activity for correctly recognized words from the List relative to Write condition was significantly higher in a single cluster that also peaked in the left middle temporal gyrus.
To assess whether differences in neural activation between conditions correlated with differences in behavioural memory performance, we calculated Spearman rank correlations for each contrast. Here, we focus on brain-behaviour correlations for our main contrast of interest: Draw - Write. Four distinct positive correlation clusters were significant, with peaks in the right inferior and superior frontal gyri, left cuneus (i.e., area V1), and left middle frontal gyrus (see Table 3).³ Activity in the right superior frontal and left middle frontal gyri together present as bilateral clusters peaking in Brodmann area (BA) 6, revealing widespread brain-behaviour correlations in both the premotor and the supplementary motor areas (see Fig. 3). No negative brain-behaviour correlation clusters survived thresholding in this contrast.
For the sake of completeness, the other two contrasts revealed only negative brain-behaviour correlations, which we do not interpret further: a single negative correlation cluster in the Draw - List contrast (35 voxels centered on x = -41, y = -51, z = -7), and two negative correlation clusters in the List - Write contrast (32 voxels peaking at x = -25, y = -51, z = -18, and 20 voxels peaking at x = 28, y = -21, z = 57; see files on OSF for more details).
Discussion
Drawing as an encoding strategy has been shown to enhance memory in younger adults (Wammes et al., 2016, 2017, 2019; Wammes, Meade, et al., 2018a; Wammes, Roberts, et al., 2018b), as well as in cognitively healthy older adults (Meade et al., 2020; Tran et al., 2022). Here, we investigated the neural regions underlying the drawing-induced memory benefit. We asked participants to either draw, write, or list physical characteristics of target words during encoding, and later scanned participants' brains while they performed a recognition memory test for the studied words. Behaviourally, memory was significantly higher for words drawn than written, replicating the typical drawing effect. Indeed, this pattern was present in 19 of 20 participants (with the remaining participant showing equivalent memory in the two conditions; see Fig. 1). Memory for words encoded by listing physical characteristics was also higher than for those written, but lower than for those drawn. Voxel-wise analyses of fMRI data revealed a distributed set of brain regions active during recognition of words drawn relative to written at encoding, highlighting integration (angular gyrus) and self-referential processing (anterior prefrontal cortex). Brain-behaviour correlational analyses showed that memory for words drawn at encoding increased with activation in premotor and supplementary motor areas, as well as area V1 and the cuneus.
Wammes and colleagues (Wammes et al., 2016) have proposed that the drawing effect arises from engagement of motoric, elaborative semantic, and pictorial coding, all integrated seamlessly when one draws a to-be-remembered word during encoding (see also Fernandes et al., 2018, for a review, and Wammes et al., 2019). Moreover, it has been demonstrated previously that the act of drawing evokes neural activity in occipital (V1, V2, and lateral occipital cortex), parietal, and motor regions (Fan et al., 2020; Vinci-Booher et al., 2019). Given this, the neural basis of the drawing effect likely stems from the creation of visuospatial and motor representations of to-be-remembered information that supplement a semantic one, as well as perhaps the integration of these various modes of representing target words.
Interpretation of fMRI results
Multimodal integration
When comparing memory for words that were drawn relative to written during encoding, significant activation was found in the left angular gyrus (AG; BA 39). This selective activation of the angular gyrus is somewhat unsurprising as it is a known hub for multimodal integration. Given that the AG sits at the junction of areas associated with sensory input, motor plans, comprehension, and emotion, researchers have suggested that it is particularly well situated anatomically to enable mental integration (Binder & Desai, 2011). In fact, meta-analyses (Binder et al., 2009; Kim, 2010), qualitative reviews (Binder et al., 2009; Humphreys et al., 2021), and primary research articles (Bonnici et al., 2016; Shannon & Buckner, 2004; Wagner et al., 2005; Yazar et al., 2014, 2017) have all highlighted the AG as a major centre for multimodal integration during episodic retrieval, leading some researchers to propose a 'contextual integration model' for the AG in the human cortex (Ramanan et al., 2018). That activity in this well-known hub for integration was greater during retrieval of words drawn relative to written at encoding suggests that the act of remembering a drawn word involves some degree of integration of multimodal representations.
Going beyond the draw versus write comparison, we also saw that listing physical characteristics of a word during encoding led to greater subsequent activation in the AG during recognition, compared to words that were written. The semantic characteristics that participants generated during the 'list' encoding task were often concrete visual attributes of the item. Thus, deep elaboration (listing semantic characteristics) may also engage the AG as it has been shown to play a role in integrating different pieces of semantic knowledge relating to an item (Kim et al., 2011; Noonan et al., 2013).
While we did not identify any significant clusters in the Draw-List contrast, there was a significant difference in behavioural memory performance. This disparity suggests that Draw and List words are represented similarly in the brain during recognition, but that there may be something extra driving performance for Draw items that we were unable to detect in our neuroimaging analyses, perhaps due to insufficient statistical power to detect what is likely a small effect. Indeed, when the statistical threshold for this contrast was lowered to p < .01, significant positive clusters emerged around bilateral primary motor and somatosensory areas. We refrain from making conclusions based on these results due to the lower threshold used for detection, but these analyses seem to point to statistical power as the main reason for not detecting significant clusters at p < .001 in this particular contrast.
Goal maintenance and self-referential processing
By far, the largest cluster of activation observed in the critical draw versus write contrast was activation of the right anterior medial prefrontal cortex (aPFC). This region has been previously linked to self-referential processing (D'Argembeau et al., 2007; Kelley et al., 2002; Meyer & Lieberman, 2018; Nejad et al., 2013; van der Meer et al., 2010), and the ability to maintain multiple parallel goals (Charron & Koechlin, 2010; Euston et al., 2012; Koechlin et al., 1999). The medial aPFC has also been implicated when one attends to one's own emotions and mental states (Gilbert et al., 2006). Given this, it may be that drawing facilitates subsequent memory because it enables greater self-referential processing during retrieval, ultimately aiding the retrieval process.
This same brain area is also known to act as an early unconscious decision-maker (Elliott et al., 1999) that collects information, makes a decision, and sends that decision to the SMA (Soon et al., 2008), which in turn outputs the decision as a motor response (i.e., a button press). Thus, the vast bilateral activation seen in the aPFC could be indicative of simultaneous goal pursuits to form mental imagery of one's own drawing during the recognition test, culminating in a final response decision that gets forwarded to motor areas to be carried out via button-press.
Brain-behaviour correlations
Positive brain-behaviour correlations were only found in the draw versus write comparison. Within this contrast, four significant clusters were observed. We found that activity in bilateral supplementary motor areas (SMAs) correlated positively with the size of the drawing effect. Beginning with Penfield's early cortical stimulation explorations of the human motor cortex in the 1950s, the SMA has been recognized for its role in internal generation of movement plans (Penfield & Welch, 1951). Subsequently, considerable evidence has implicated the SMA in complex motor sequences and responses on visuomotor tasks both in humans (e.g., Gerloff et al., 1997; Jenkins et al., 1994; Lee & Quessy, 2003; Roland et al., 1980; Shibasaki et al., 1993) and in macaque monkeys (e.g., Mushiake et al., 1990). If the function of the SMA is to form complex movements, why would it be preferentially activated during a recognition test that requires only a simple button press? One explanation can be found in related work that considered the SMA (and other motor cortices) as key regions mediating retrieval of actions in monkeys (Passingham, 1989; Shima & Tanji, 1998) and humans (Gaymard et al., 1990, 1993). Neuroimaging work has similarly confirmed that the SMA is instrumental in not just motor planning for real actions, but for imagined actions as well (Cunnington et al., 1996; Dechent et al., 2004; Mehler et al., 2019; Ueno, 2003). Hence the bilateral SMA activation seen here could indicate that participants were re-imagining the motor sequences they used to form their previously created drawings. Indeed, a related literature on memory for enacted words has come to similar conclusions, supporting the notion that motoric re-activation during recall of previously performed actions can enhance memory (Roberts et al., 2022).
A number of voxels clustering in the inferior frontal gyrus (BA 47) were also correlated with memory performance for words drawn at encoding and may hint at a more nuanced story. Prior work suggests that BA 47 may be implicated in processing of stimuli unfolding over time (Levitin & Menon, 2003; Vuust et al., 2006), spatial working memory (Jonides et al., 1993), and episodic memory retrieval (Cabeza et al., 2002). If the SMA activity highlighted earlier is truly due to motor imagery at retrieval, then perhaps activity in BA 47 is evidence of a spatiotemporal dynamic in working memory whereby the sequence of hand movements used to draw the word is reinstated as part of the motor imagery experience.
Importantly, brain-behaviour correlational analyses also provided evidence that recognition of words drawn relative to written at encoding invoked brain regions involved in visual imagery (left cuneus, also known as area V1, primary visual cortex, or BA 17). A plethora of studies have concluded that area V1 is involved in mental imagery in the absence of a visual stimulus (Chen et al., 1998; Klein et al., 2000; for a review, see Pearson et al., 2015). Indeed, activation of area V1 is reportedly similar when viewing a stimulus and imagining it (Reddy et al., 2010), and even maintains a retinotopic map for imagined content (Klein et al., 2004). That memory for drawn items increased as activity in this region increased suggests participants were re-imagining the drawing they made, or that the image of that drawing came to mind when making a recognition memory decision to the word it represents. It seems plausible, therefore, that participants were actively trying to recall their drawings by imagining them during the recognition test, in order to determine whether an item was previously studied. In fact, the angular gyrus (AG) - which had shown preferential activation during recognition of drawn items in our voxel-wise analyses - has also been shown to play a role in semantic matching of visual stimuli (Seghier et al., 2010), furthering the notion that correctly recognizing previously drawn items may be due to participants attempting to re-imagine their previously created drawings.
Overall, brain-behaviour correlations revealed that the size of one's memory benefit for words drawn relative to written at encoding was positively correlated with activation in brain regions linked to motor imagery, mental visualization, and spatiotemporal ordering. While speculative, it seems plausible that this group of brain areas is indicative of a multidimensional cognitive effort to re-experience a drawing at retrieval. If a drawing can be re-instated in some capacity, we reason that a participant is more likely to correctly endorse the item as a previously studied word.
Drawing as a multimodal heuristic
Because drawing at encoding is thought to be a multimodal activity, there are many ways in which target words could be later remembered. One can rely on recalling the generative process used to form a drawing, the mental image of a drawing they made, the motor sequences used to create it, or as we suggest here, all three simultaneously. Therefore, insofar as drawing evokes multimodal encoding during the study phase, that same memory can be more easily retrieved on the recognition test. This could be why drawn words are better remembered relative to those in the list condition, and why listing semantic characteristics in turn leads to better memory than writing the word over and over. When writing a target word repeatedly, however, a less diverse mental representation is formed. As such, participants have less prior information upon which to base their later memory decision. Having such little information to rely on during the recognition test, participants may be more likely to incorrectly classify previously written words as 'new'.
When listing semantic attributes in the 'list' condition, one may not have access to a motor sequence used to draw the item, but one still arguably maintains benefits from mental generation and, perhaps to a lesser extent, visual imagery. When listing physical characteristics of a noun, one likely pictures the item in their mind's eye before listing visual attributes of the item. For example, when given the word 'dog', one first imagines an image of a dog and then writes down 'tail', 'paws', 'four legs', 'furry', etc. As suggested previously, this could be why we see AG activation in the list condition: Mental imagery could be integrated with semantic knowledge in order to generate this list of attributes. The list can then be re-generated at retrieval, facilitating endorsement of the target word as previously studied.
While our explanation of the drawing benefit manifesting at retrieval is speculative, it seems to align with the notion of re-living a drawing. Taking advantage of the multimodal nature of a drawn item in memory and re-experiencing it are potent ways to determine whether an item has been previously studied or not. That brain areas preferentially activated for correctly endorsed drawn words involved aspects of movement, mental imagery, and semantic integration supports this notion. Furthermore, that activation of brain regions reasonably associated with re-living a drawing (SMA, V1, etc.) correlated positively with the size of the drawing benefit suggests that using the simple heuristic of 'Did I draw that?' allows one to take advantage of the rich multimodal representations formed at encoding (or lack thereof) in order to make a memory decision.
Limitations and future directions
This study was the first to investigate the neural basis of the drawing benefit on memory. As such, it has its limitations. First, our experimental paradigm consisted of 30 words per condition. Because fewer correct responses were made for targets from the Write relative to the Draw and List conditions, there were fewer trials available to contribute to the neural analysis. Ultimately this means the fMRI signal for the Write condition was less reliable. Future fMRI work could address this limitation by removing one of the comparator conditions (write or list) in the experiment, allowing more hits to occur for the two remaining conditions.
It is also worth noting that while in this study we have inferred the presence of specific cognitive processes from observed brain activation patterns, these are reverse inferences being made in an exploratory context. We are not suggesting that a given brain region performs a cognitive function exclusively; rather, we base our discussion and conclusions on findings and suggestions from multiple other published studies that implicate a particular brain region in a given cognitive function. Here we aimed to document the network of regions that are present during retrieval of drawn relative to written or listed words. Overall, our results suggest drawing benefits memory by coactivating multiple sensory traces. Target words drawn during encoding are subsequently remembered by re-engaging visual, motoric, and semantic representations.
Our work suggests that encoding strategies that encourage creation and integration of multiple representations during encoding create a more robust memory trace. However, we did not assess whether variations in ability to integrate such representations limits the drawing benefit to memory. We are currently conducting a study in children to consider whether limited frontal-lobe-mediated executive functioning, required for 'integration', limits the benefit of drawing on memory. Those data would arguably be better suited to supporting an integration account of drawing's benefit. A corollary to this would be that targets that are drawn at encoding should be better able to withstand the negative effects of multi-tasking. That is, if one representation is blocked, memory can be sustained by the others. An obvious future experiment, therefore, is to compare retrieval of words drawn versus written at encoding when items are encoded or retrieved under divided-attention conditions.
Finally, while we had a priori predictions, we did not preregister our predictions, and we chose to conduct a voxel-wise analysis rather than a region of interest (ROI) analysis, or pattern-based investigations like multi-voxel pattern analysis (MVPA) or representational similarity analysis (RSA). We made this decision because we felt that the first foray into brain regions supporting retrieval of words drawn at encoding should be exploratory. Subsequent work can specifically target ROIs found here to refine our conclusions. An ideal follow-up study would allow for multi-voxel pattern analysis to directly address the claims put forward here that there is a distributed network activated during retrieval of drawn words, involved in integration and self-referential processing (angular gyrus, anterior prefrontal cortex), and another implicated in mental/visual imagery (cuneus).
Conclusions
The goal of this study was to explore neural mechanisms driving the large and reliable memory improvements following drawing as an encoding strategy. Behavioural data demonstrated a clear and reliable benefit of drawing over and above both listing and writing as encoding techniques. Listing semantic characteristics of words also improved memory relative to writing. Voxel-wise analyses of fMRI data revealed significant activation clusters in two distinct brain regions that were preferentially activated when remembering drawn relative to written words. These clusters peaked in right medial frontal cortex and left angular gyrus. Further brain-behaviour correlations suggested that the magnitude of the drawing effect on memory correlated positively with activity in bilateral supplementary motor areas, right prefrontal cortex, and left primary visual cortex. Overall, we found that retrieval of words drawn at encoding invoked activation of brain regions involved in integration, self-referential processing, and mental/visual imagery. We suggest that during recognition of drawn words participants are likely using a simple heuristic - 'Did I draw that?' - to elicit multimodal memory retrieval in an effort to weigh evidence for a recognition memory decision.
Acknowledgments We thank Nicole Stuart for their devotion and help with data cleaning and pre-processing whilst in a global pandemic. Special thanks to Paul Taylor and Rick Reynolds at the NIMH for advice and guidance with our neuroimaging analyses.
Funding This research was supported by Natural Sciences and Engineering Research Council (NSERC) of Canada Postgraduate Scholarships to BRTR and MEM, and by NSERC Discovery Grant 202003917 to MAF.
Availability of data, code, and materials All data, analysis code, experiment programs, and other materials are listed on the Open Science Framework (OSF; https://osf.io/74mdc/).
Declarations
Competing interests Not applicable.
Ethics approval The procedures and materials for this study were approved by the Office of Research Ethics at the University of Waterloo and the Tri-Hospital Research Ethics Board at Grand River Hospital.
Consent to participate Informed consent was obtained from all individual participants included in the study.
Consent for publication Not applicable.
1 Due to experimenter error, we did not counterbalance the words assigned to each encoding task. However, there were no significant differences between the words in each encoding task in terms of frequency, F(3, 176) = 0.53, p = .66, number of letters, F(3, 176) = 0.38, p = .77, and proportion of words representing animate vs. inanimate objects, F(3, 176) = 0.88, p = .45.
2 Bayes factors were calculated using the BayesFactor package (version 0.9.12-4.4; Morey et al., 2011) for R, enlisting a default Jeffreys-Zellner-Siow (JZS) prior with a Cauchy distribution (center = 0, r = 0.707). This package compares the fit of various linear models. In the present case, Bayes factors for the alternative (BF₁₀) are in comparison to intercept-only null models. Interpretations of Bayes factors follow the conventions of Lee and Wagenmakers (2013). Bayes factors in favor of the alternative (BF₁₀) or null (BF₀₁) models are presented in accordance with each preceding report of NHST analyses (i.e., based on a p < .05 criterion) such that BF > 1.
3 A leave-one-out cross-validation analysis confirmed that the effect sizes at the peak voxel coordinate in each cluster for the Draw - Write contrast were stable across 20 leave-one-out datasets and matched the effect sizes seen in the full data set, as reported in Table 3: inferior frontal gyrus (mean ρ = .80, 95% CI = [.79, .81]), superior frontal gyrus (mean ρ = .75, 95% CI = [.74, .76]), cuneus (mean ρ = .77, 95% CI = [.76, .78]), and middle frontal gyrus (mean ρ = .73, 95% CI = [.72, .75]).
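The leave-one-out check described in this footnote amounts to recomputing the peak-voxel Spearman correlation once per participant with that participant removed, to verify the effect is not driven by any single subject. An illustrative Python sketch on synthetic data (our own variable names and values, not the authors' code):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n = 20  # participants

# Synthetic Draw - Write behavioural difference scores and matched
# activation differences at a hypothetical peak voxel
behav = rng.normal(0.25, 0.1, n)
peak_voxel = 5.0 * behav + rng.normal(0.0, 0.3, n)

# Recompute rho n times, dropping one participant each time
loo_rhos = [spearmanr(np.delete(behav, i), np.delete(peak_voxel, i))[0]
            for i in range(n)]

print(f"mean rho = {np.mean(loo_rhos):.2f}, "
      f"range = [{min(loo_rhos):.2f}, {max(loo_rhos):.2f}]")
```

A narrow range of leave-one-out correlations, centred on the full-sample value, is what "stable" means in the footnote above.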
References
Ally, B. A. (2012). Using pictures and words to understand recognition memory deterioration in amnestic mild cognitive impairment and Alzheimer's disease: A review. Current Neurology and Neuroscience Reports, 12(6), 687-694. https://doi.org/10.1007/s11910-012-0310-7
Ally, B. A., & Budson, A. E. (2007). The worth of pictures: Using high density event-related potentials to understand the memorial power of pictures and the dynamics of recognition memory. NeuroImage, 35(1), 378-395. https://doi.org/10.1016/j.neuroimage.2006.11.023
Ally, B. A., Waring, J. D., Beth, E. H., McKeever, J. D., Milberg, W. P., & Budson, A. E. (2008). Aging memory for pictures: Using high-density event-related potentials to understand the effect of aging on the picture superiority effect. Neuropsychologia, 46(2), 679-689. https://doi.org/10.1016/j.neuropsychologia.2007.09.011
Audacity Team. (2021). Audacity: Free audio editor and recorder (Version 3.0.0) [Computer software]. Audacity Team. https://www. audacityteam.org/
Bedini, M., & Baldauf, D. (2021). Structure, function and connectivity fingerprints of the frontal eye field versus the inferior frontal junction: A comprehensive comparison. European Journal of Neuroscience, 54(4), 5462-5506. https://doi.org/10.1111/ejn.15393
Binder, J. R., & Desai, R. H. (2011). The neurobiology of semantic memory. Trends in Cognitive Sciences, 15(11), 527-536.
Binder, J. R., Desai, R. H., Graves, W. W., & Conant, L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex, 19(12), 2767-2796. https://doi.org/10.1093/cercor/bhp055
Bonnici, H. M., Richter, F. R., Yazar, Y., & Simons, J. S. (2016). Multimodal feature integration in the angular gyrus during episodic and semantic retrieval. The Journal of Neuroscience, 36(20), 5462-5471. https://doi.org/10.1523/JNEUROSCI.4310-15.2016
Braak, H., & Braak, E. (1991). Neuropathological stageing of Alzheimer-related changes. Acta Neuropathologica, 82(4), 239-259. https://doi.org/10.1007/BF00308809
Brandt, S. A., & Stark, L. W. (1997). Spontaneous eye movements during visual imagery reflect the content of the visual scene. Journal of Cognitive Neuroscience, 9(1), 27-38. https://doi.org/10.1162/jocn.1997.9.1.27
Brewer, J. B., Zhao, Z., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. (1998). Making memories: Brain activity that predicts how well visual experience will be remembered. Science, 281(5380), 1185-1187. https://doi.org/10.1126/science.281.5380.1185
Bright, P., Hale, E., Gooch, V. J., Myhill, T., & van der Linde, I. (2018). The National Adult Reading Test: Restandardisation against the Wechsler Adult Intelligence Scale-Fourth edition. Neuropsychological Rehabilitation, 28(6), 1019-1027. https://doi.org/10.1080/09602011.2016.1231121
Buckner, R. L., Wheeler, M. E., & Sheridan, M. A. (2001). Encoding processes during retrieval tasks. Journal of Cognitive Neuroscience, 13, 406-415. https://doi.org/10.1162/08989290151137430
Cabeza, R., Dolcos, F., Graham, R., & Nyberg, L. (2002). Similarities and differences in the neural correlates of episodic memory retrieval and working memory. NeuroImage, 16(2), 317-330. https://doi.org/10.1006/nimg.2002.1063
Cabeza, R., & Nyberg, L. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1), 1-47. https://doi.org/10.1162/08989290051137585
Campana, G., Cowey, A., Casco, C., Oudsen, I., & Walsh, V. (2007). Left frontal eye field remembers "where" but not "what." Neuropsychologia, 45(10), 2340-2345. https://doi.org/10.1016/j.neuropsychologia.2007.02.009
Champley, S., Ekstrom, C., Dalgaard, P., Gill, J., Weibelzahl, S., Anandkumar, A., Ford, C., Volcic, R., & De Rosario, H. (2020). Package 'pwr': Basic functions for power analysis (Version 1.30) [Computer software]. CRAN. https://CRAN.R-project.org/ package=pwr
Charron, S., & Koechlin, E. (2010). Divided representation of concurrent goals in the human frontal lobes. Science, 328(5976), 360-363. https://doi.org/10.1126/science.1183614
Chen, W., Kato, T., Zhu, X.-H., Ogawa, S., Tank, D. W., & Ugurbil, K. (1998). Human primary visual cortex and lateral geniculate nucleus activation during visual imagery. NeuroReport, 9(16), 3669.
Cherry, K. E., Silva Brown, J., Jackson Walker, E., Smitherman, E. A., Boudreaux, E. O., Volaufova, J., & Michal Jazwinski, S. (2012). Semantic encoding enhances the pictorial superiority effect in the oldest-old. Neuropsychology, Development, and Cognition, Section B, Aging, Neuropsychology and Cognition, 19(1-2), 319-337. https://doi.org/10.1080/13825585.2011.619645
Cox, R. W. (1996). AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, an International Journal, 29(3), 162-173. https://doi.org/10.1006/cbmr.1996.0014
Cox, R. W. (2022). @SSwarper (Version 2.6) [Computer software]. National Institute of Mental Health. https://afni.nimh.nih.gov/ pub/dist/doc/program_help/@SSwarper.html
Cox, R. W., Chen, G., Glen, D. R., Reynolds, R. C., & Taylor, P. A. (2017). fMRI clustering in AFNI: False-positive rates redux. Brain Connectivity, 7(3), 152-171. https://doi.org/10.1089/brain.2016.0475
Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11(6), 671-684. https://doi.org/10.1016/S0022-5371(72)80001-X
Craik, F. I. M., & McDowd, J. M. (1987). Age differences in recall and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13, 474-479. https://doi.org/10.1037/0278-7393.13.3.474
Cunnington, R., Iansek, R., Bradshaw, J. L., & Phillips, J. C. (1996). Movement-related potentials associated with movement preparation and motor imagery. Experimental Brain Research, 111(3), 429-436. https://doi.org/10.1007/BF00228732
D'Argembeau, A., Ruby, P., Collette, F., Degueldre, C., Balteau, E., Luxen, A., Maquet, P., & Salmon, E. (2007). Distinct regions of the medial prefrontal cortex are associated with self-referential processing and perspective taking. Journal of Cognitive Neuroscience, 19(6), 935-944. https://doi.org/10.1162/jocn.2007.19.6.935
Dale, A. M. (1999). Optimal experimental design for event-related fMRI. Human Brain Mapping, 8(2-3), 109-114. https://doi.org/10.1002/(SICI)1097-0193(1999)8:2/3%3C109::AID-HBM7%3E3.0.CO;2-W
Dechent, P., Merboldt, K.-D., & Frahm, J. (2004). Is the human primary motor cortex involved in motor imagery? Cognitive Brain Research, 19(2), 138-144. https://doi.org/10.1016/j.cogbrainres.2003.11.012
Elliott, R., Rees, G., & Dolan, R. J. (1999). Ventromedial prefrontal cortex mediates guessing. Neuropsychologia, 37(4), 403-411. https://doi.org/10.1016/S0028-3932(98)00107-9
Engelkamp, J., & Krumnacker, H. (1980). Image- and motor-processes in the retention of verbal materials. Zeitschrift für Experimentelle und Angewandte Psychologie [Journal of Experimental and Applied Psychology], 27(4), 511-533.
Euston, D. R., Gruber, A. J., & McNaughton, B. L. (2012). The role of medial prefrontal cortex in memory and decision making. Neuron, 76(6), 1057-1070. https://doi.org/10.1016/j.neuron.2012.12.002
Fan, J. E., Wammes, J. D., Gunn, J. B., Yamins, D. L. K., Norman, K. A., & Turk-Browne, N. B. (2020). Relating visual production and recognition of objects in human visual cortex. Journal of Neuroscience, 40(8), 1710-1721. https://doi.org/10.1523/JNEUROSCI.1843-19.2019
Fernandes, M. A., Wammes, J. D., & Meade, M. E. (2018). The surprisingly powerful influence of drawing on memory. Current Directions in Psychological Science, 27(5), 302-308. https://doi.org/10.1177/0963721418755385
Feyereisen, P. (2009). Enactment effects and integration processes in younger and older adults' memory for actions. Memory, 17(4), 374-385. https://doi.org/10.1080/09658210902731851
Fletcher, P. C., Frith, C. D., Baker, S. C., Shallice, T., Frackowiak, R. S. J., & Dolan, R. J. (1995a). The mind's eye: Precuneus activation in memory-related imagery. NeuroImage, 2(3), 195-200. https://doi.org/10.1006/nimg.1995.1025
Fletcher, P. C., Frith, C. D., Grasby, P. M., Shallice, T., Frackowiak, R. S. J., & Dolan, R. J. (1995b). Brain systems for encoding and retrieval of auditory-verbal memory. An in vivo study in humans. Brain: A Journal of Neurology, 118(Pt 2), 401-416. https://doi.org/10.1093/brain/118.2.401
Fletcher, P. C., Shallice, T., Frith, C. D., Frackowiak, R. S. J., & Dolan, R. J. (1996). Brain activity during memory retrieval: The influence of imagery and semantic cueing. Brain, 119(5), 1587-1596. https://doi.org/10.1093/brain/119.5.1587
Gaymard, B., Pierrot-Deseilligny, Ch., & Rivaud, S. (1990). Impairment of sequences of memory-guided saccades after supplementary motor area lesions. Annals of Neurology, 28(5), 622-626. https://doi.org/10.1002/ana.410280504
Gaymard, B., Rivaud, S., & Pierrot-Deseilligny, C. (1993). Role of the left and right supplementary motor areas in memory-guided saccade sequences. Annals of Neurology, 34(3), 404-406. https://doi.org/10.1002/ana.410340317
Gerloff, C., Corwell, B., Chen, R., Hallett, M., & Cohen, L. G. (1997). Stimulation over the human supplementary motor area interferes with the organization of future elements in complex motor sequences. Brain, 120(9), 1587-1602. https://doi.org/10.1093/brain/120.9.1587
Gilbert, S. J., Spengler, S., Simons, J. S., Steele, J. D., Lawrie, S. M., Frith, C. D., & Burgess, P. W. (2006). Functional specialization within rostral prefrontal cortex (area 10): A meta-analysis. Journal of Cognitive Neuroscience, 18(6), 932-948. https://doi.org/10.1162/jocn.2006.18.6.932
Golby, A., Silverberg, G., Race, E., Gabrieli, S., O'Shea, J., Knierim, K., Stebbins, G., & Gabrieli, J. (2005). Memory encoding in Alzheimer's disease: An fMRI study of explicit and implicit memory. Brain: A Journal of Neurology, 128(Pt 4), 773-787. https://doi.org/10.1093/brain/awh400
Gómez-Isla, T., Price, J. L., McKeel, D. W., Morris, J. C., Growdon, J. H., & Hyman, B. T. (1996). Profound loss of layer II entorhinal cortex neurons occurs in very mild Alzheimer's disease. Journal of Neuroscience, 16(14), 4491-4500. https://doi.org/10.1523/JNEUROSCI.16-14-04491.1996
Greve, D. N. (2006). Optseq2 (Version 2.12) [Computer software]. Martinos Center for Biomedical Imaging, Harvard University. https://surfer.nmr.mgh.harvard.edu/optseq/
Griffith, F. J., & Bingman, V. P. (2020). Drawing on the brain: An ALE meta-analysis of functional brain activation during drawing. The Arts in Psychotherapy, 71, 101690. https://doi.org/10.1016/j.aip.2020.101690
Gurtner, L. M., Hartmann, M., & Mast, F. W. (2021). Eye movements during visual imagery and perception show spatial correspondence but have unique temporal signatures. Cognition, 210, 104597. https://doi.org/10.1016/j.cognition.2021.104597
Holmes, C. J., Hoge, R., Collins, L., Woods, R., Toga, A. W., & Evans, A. C. (1998). Enhancement of MR images using registration for signal averaging. Journal of Computer Assisted Tomography, 22(2), 324-333. https://doi.org/10.1097/00004728-199803000-00032
Humphreys, G. F., Lambon Ralph, M. A., & Simons, J. S. (2021). A unifying account of angular gyrus contributions to episodic and semantic cognition. Trends in Neurosciences, 44(6), 452-463. https://doi.org/10.1016/j.tins.2021.01.006
Jenkins, I. H., Brooks, D. J., Nixon, P. D., Frackowiak, R. S. J., & Passingham, R. E. (1994). Motor sequence learning: A study with positron emission tomography. Journal of Neuroscience, 14(6), 3775-3790. https://doi.org/10.1523/jneurosci.14-06-03775.1994
Jonides, J., Smith, E. E., Koeppe, R. A., Awh, E., Minoshima, S., & Mintun, M. A. (1993). Spatial working memory in humans as revealed by PET. Nature, 363, 623-625. https://doi.org/10.1038/363623a0
Kelley, W. M., Macrae, C. N., Wyland, C. L., Caglar, S., Inati, S., & Heatherton, T. F. (2002). Finding the self? An event-related fMRI study. Journal of Cognitive Neuroscience, 14(5), 785-794. https://doi.org/10.1162/08989290260138672
Khader, P., Burke, M., Bien, S., Ranganath, C., & Rösler, F. (2005). Content-specific activation during associative long-term memory retrieval. NeuroImage, 27, 805-816. https://doi.org/10.1016/j.neuroimage.2005.05.006
Kim, H. (2010). Dissociating the roles of the default-mode, dorsal, and ventral networks in episodic memory retrieval. NeuroImage, 50(4), 1648-1657. https://doi.org/10.1016/j.neuroimage.2010.01.051
Kim, K. K., Karunanayaka, P., Privitera, M. D., Holland, S. K., & Szaflarski, J. P. (2011). Semantic association investigated with functional MRI and independent component analysis. Epilepsy & Behavior, 20(4), 613-622. https://doi.org/10.1016/j.yebeh.2010.11.010
Klein, I., Paradis, A.-L., Poline, J.-B., Kosslyn, S. M., & Le Bihan, D. (2000). Transient activity in the human calcarine cortex during visual-mental imagery: An event-related fMRI study. Journal of Cognitive Neuroscience, 12, 15-23. https://doi.org/10.1162/089892900564037
Klein, I., Dubois, J., Mangin, J.-F., Kherif, F., Flandin, G., Poline, J.-B., Denis, M., Kosslyn, S. M., & Le Bihan, D. (2004). Retinotopic organization of visual mental images as revealed by functional magnetic resonance imaging. Cognitive Brain Research, 22(1), 26-31. https://doi.org/10.1016/j.cogbrainres.2004.07.006
Koechlin, E., Basso, G., Pietrini, P., Panzer, S., & Grafman, J. (1999). The role of the anterior prefrontal cortex in human cognition. Nature, 399, 148-151. https://doi.org/10.1038/20178
Koenig, P., Smith, E. E., Troiani, V., Anderson, C., Moore, P., & Grossman, M. (2008). Medial temporal lobe involvement in an implicit memory task: Evidence of collaborating implicit and explicit memory systems from fMRI and Alzheimer's disease. Cerebral Cortex, 18(12), 2831-2843. https://doi.org/10.1093/cercor/bhn043
Köhler, S., Moscovitch, M., Winocur, G., & McIntosh, A. R. (2000). Episodic encoding and recognition of pictures and words: Role of the human medial temporal lobes. Acta Psychologica, 105, 159-179. https://doi.org/10.1016/S0001-6918(00)00059-7
Krönke, K.-M., Mueller, K., Friederici, A. D., & Obrig, H. (2013). Learning by doing? The effect of gestures on implicit retrieval of newly acquired words. Cortex, 49(9), 2553-2568. https://doi.org/10.1016/j.cortex.2012.11.016
Lancaster, J. L., Woldorff, M. G., Parsons, L. M., Liotti, M., Freitas, C. S., Rainey, L., Kochunov, P. V., Nickerson, D., Mikiten, S. A., & Fox, P. T. (2000). Automated Talairach atlas labels for functional brain mapping. Human Brain Mapping, 10(3), 120-131. https://doi.org/10.1002/1097-0193(200007)10:3<120::AID-HBM30>3.0.CO;2-8
Lee, D., & Quessy, S. (2003). Activity in the supplementary motor area related to learning and performance during a sequential visuomotor task. Journal of Neurophysiology, 89(2), 1039-1056. https://doi.org/10.1152/jn.00638.2002
Lee, M. D., & Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. Cambridge University Press. https://doi.org/10.1017/CBO9781139087759
Levitin, D. J., & Menon, V. (2003). Musical structure is processed in "language" areas of the brain: A possible role for Brodmann Area 47 in temporal coherence. NeuroImage, 20(4), 2142-2152. https://doi.org/10.1016/j.neuroimage.2003.08.016
Lundstrom, B. N., Petersson, K. M., Andersson, J., Johansson, M., Fransson, P., & Ingvar, M. (2003). Isolating the retrieval of imagined pictures during episodic memory: Activation of the left precuneus and left prefrontal cortex. NeuroImage, 20(4), 1934-1943. https://doi.org/10.1016/j.neuroimage.2003.07.017
Luo, L., & Craik, F. I. (2008). Aging and memory: A cognitive approach. The Canadian Journal of Psychiatry, 53(6), 346-353. https://doi.org/10.1177/070674370805300603
Luo, L., Hendriks, T., & Craik, F. I. M. (2007). Age differences in recollection: Three patterns of enhanced encoding. Psychology and Aging, 22, 269-280. https://doi.org/10.1037/0882-7974.22.2.269
Macedonia, M., & Mueller, K. (2016). Exploring the neural representation of novel words learned through enactment in a word recognition task. Frontiers in Psychology, 7, 953. https://doi.org/10.3389/fpsyg.2016.00953
Macedonia, M., Müller, K., & Friederici, A. D. (2011). The impact of iconic gestures on foreign language word learning and its neural substrate. Human Brain Mapping, 32(6), 982-998. https://doi.org/10.1002/hbm.21084
MacLeod, C. M., Gopie, N., Hourihan, K. L., Neary, K. R., & Ozubko, J. D. (2010). The production effect: Delineation of a phenomenon. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(3), 671-685. https://doi.org/10.1037/a0018785
Mast, F. W., & Kosslyn, S. M. (2002). Eye movements during visual mental imagery. Trends in Cognitive Sciences, 6(7), 271-272. https://doi.org/10.1016/s1364-6613(02)01931-9
Mayer, K. M., Yildiz, I. B., Macedonia, M., & von Kriegstein, K. (2015). Visual and motor cortices differentially support the translation of foreign language words. Current Biology, 25(4), 530-535. https://doi.org/10.1016/j.cub.2014.11.068
Meade, M. E., Wammes, J. D., & Fernandes, M. A. (2018). Drawing as an encoding tool: Memorial benefits in younger and older adults. Experimental Aging Research, 44(5), 369-396. https://doi.org/10.1080/0361073X.2018.1521432
Meade, M. E., Wammes, J. D., & Fernandes, M. A. (2019). Comparing the influence of doodling, drawing, and writing at encoding on memory. Canadian Journal of Experimental Psychology, 73(1), 28-36. https://doi.org/10.1037/cep0000170
Meade, M. E., Ahmad, M., & Fernandes, M. A. (2020). Drawing pictures at encoding enhances memory in healthy older adults and in individuals with probable dementia. Aging, Neuropsychology, and Cognition, 27(6), 880-901. https://doi.org/10.1080/13825585.2019.1700899
Mehler, D. M. A., Williams, A. N., Krause, F., Lührs, M., Wise, R. G., Turner, D. L., Linden, D. E. J., & Whittaker, J. R. (2019). The BOLD response in primary motor cortex and supplementary motor area during kinesthetic motor imagery based graded fMRI neurofeedback. NeuroImage, 184, 36-44. https://doi.org/10.1016/j.neuroimage.2018.09.007
Meyer, M. L., & Lieberman, M. D. (2018). Why people are always thinking about themselves: Medial prefrontal cortex activity during rest primes self-referential processing. Journal of Cognitive Neuroscience, 30(5), 714-721. https://doi.org/10.1162/jocn_a_01232
Morey, R. D., Rouder, J. N., Jamil, T., Urbanek, S., Forner, K., & Ly, A. (2011). BayesFactor: Computation of Bayes factors for common designs (Version 0.9.12-4.4) [Computer software]. CRAN. https://CRAN.R-project.org/package=BayesFactor
Mushiake, H., Inase, M., & Tanji, J. (1990). Selective coding of motor sequence in the supplementary motor area of the monkey cerebral cortex. Experimental Brain Research, 82(1), 208-210. https://doi.org/10.1007/BF00230853
Nejad, A., Fossati, P., & Lemogne, C. (2013). Self-referential processing, rumination, and cortical midline structures in major depression. Frontiers in Human Neuroscience, 7, 666. https://doi.org/10.3389/fnhum.2013.00666
Nelson, H. E. (1982). National Adult Reading Test (NART): For the assessment of premorbid intelligence in patients with dementia: Test manual. NFER-Nelson.
Noonan, K. A., Jefferies, E., Visser, M., & Lambon Ralph, M. A. (2013). Going beyond inferior prefrontal involvement in semantic control: Evidence for the additional contribution of dorsal angular gyrus and posterior middle temporal cortex. Journal of Cognitive Neuroscience, 25(11), 1824-1850. https://doi.org/10.1162/jocn_a_00442
Norman, K. A., & O'Reilly, R. C. (2003). Modeling hippocampal and neocortical contributions to recognition memory: A complementary-learning-systems approach. Psychological Review, 110, 611-646. https://doi.org/10.1037/0033-295X.110.4.611
Nyberg, L., Habib, R., McIntosh, A. R., & Tulving, E. (2000). Reactivation of encoding-related brain activity during memory retrieval. Proceedings of the National Academy of Sciences of the United States of America, 97(20), 11120-11124. https://doi.org/10.1073/pnas.97.20.11120
Ogiso, T., Kobayashi, K., & Sugishita, M. (2000). The precuneus in motor imagery: A magnetoencephalographic study. NeuroReport, 11(6), 1345.
Otten, L. J., Henson, R. N., & Rugg, M. D. (2001). Depth of processing effects on neural correlates of memory encoding: Relationship between findings from across- and within-task comparisons. Brain: A Journal of Neurology, 124(Pt 2), 399-412. https://doi.org/10.1093/brain/124.2.399
Paivio, A. (1971). Imagery and verbal processes. Holt, Rinehart, and Winston.
Passingham, R. E. (1989). Premotor cortex and the retrieval of movement. Brain, Behavior and Evolution, 33(2-3), 189-192. https://doi.org/10.1159/000115927
Pearson, J., Naselaris, T., Holmes, E. A., & Kosslyn, S. M. (2015). Mental imagery: Functional mechanisms and clinical applications. Trends in Cognitive Sciences, 19(10), 590-602. https://doi.org/10.1016/j.tics.2015.08.003
Penfield, W., & Welch, K. (1951). The supplementary motor area of the cerebral cortex: A clinical and experimental study. A.M.A. Archives of Neurology & Psychiatry, 66(3), 289-317. https://doi.org/10.1001/archneurpsyc.1951.02320090038004
Poldrack, R. A., Wagner, A. D., Prull, M. W., Desmond, J. E., Glover, G. H., & Gabrieli, J. D. (1999). Functional specialization for semantic and phonological processing in the left inferior prefrontal cortex. NeuroImage, 10(1), 15-35. https://doi.org/10.1006/nimg.1999.0441
Psychology Software Tools. (2016). E-Prime (Version 3.0.3.60) [Computer software]. Psychology Software Tools, Inc. https://www.pstnet.com
R Core Team. (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.r-project.org/
Ramanan, S., Piguet, O., & Irish, M. (2018). Rethinking the role of the angular gyrus in remembering the past and imagining the future: The contextual integration model. The Neuroscientist, 24(4), 342-352. https://doi.org/10.1177/1073858417735514
Raven, J. C., Court, J. H., & Raven, J. (1976). Manual for Raven's progressive matrices and vocabulary scales. Lewis.
Reddy, L., Tsuchiya, N., & Serre, T. (2010). Reading the mind's eye: Decoding category information during mental imagery. NeuroImage, 50(2), 818-825. https://doi.org/10.1016/j.neuroimage.2009.11.084
Roberts, B. B. T., MacLeod, C. M., & Fernandes, M. A. (2022). The enactment effect: A systematic review and meta-analysis of behavioral, neuroimaging, and patient studies. Psychological Bulletin, 148(5-6), 397-434. https://doi.org/10.1037/bul0000360
Roland, P. E., Larsen, B., Lassen, N. A., & Skinhoj, E. (1980). Supplementary motor area and other cortical areas in organization of voluntary movements in man. Journal of Neurophysiology, 43(1), 118-136. https://doi.org/10.1152/jn.1980.43.1.118
Rorden, C., & Brett, M. (2000). Stereotaxic display of brain lesions. Behavioural Neurology, 12(4), 191-200. https://doi.org/10.1155/2000/421719
Rubin, D. C., & Greenberg, D. L. (1998). Visual memory-deficit amnesia: A distinct amnesic presentation and etiology. Proceedings of the National Academy of Sciences of the United States of America, 95(9), 5413-5416.
Saad, Z. S., Glen, D. R., Chen, G., Beauchamp, M. S., Desai, R., & Cox, R. W. (2009). A new method for improving functional-to-structural MRI alignment using local Pearson correlation. NeuroImage, 44(3), 839-848. https://doi.org/10.1016/j.neuroimage.2008.09.037
Schall, J. D. (2004). On the role of frontal eye field in guiding attention and saccades. Vision Research, 44(12), 1453-1467. https://doi.org/10.1016/j.visres.2003.10.025
Seghier, M. L., Fagan, E., & Price, C. J. (2010). Functional subdivisions in the left angular gyrus where the semantic system meets and diverges from the default network. Journal of Neuroscience, 30(50), 16809-16817. https://doi.org/10.1523/JNEUROSCI.3377-10.2010
Shannon, B. J., & Buckner, R. L. (2004). Functional-anatomic correlates of memory retrieval that suggest nontraditional processing roles for multiple distinct regions within posterior parietal cortex. Journal of Neuroscience, 24(45), 10084-10092. https://doi.org/10.1523/JNEUROSCI.2625-04.2004
Shibasaki, H., Sadato, N., Lyshkow, H., Yonekura, Y., Honda, M., Nagamine, T., Suwazono, S., Magata, Y., Ikeda, A., Miyazaki, M., Fukuyama, H., Asato, R., & Konishi, J. (1993). Both primary motor cortex and supplementary motor area play an important role in complex finger movement. Brain, 116(6), 1387-1398. https://doi.org/10.1093/brain/116.6.1387
Shima, K., & Tanji, J. (1998). Both supplementary and presupplementary motor areas are crucial for the temporal organization of multiple movements. Journal of Neurophysiology, 80(6), 3247-3260. https://doi.org/10.1152/jn.1998.80.6.3247
Singmann, H., Bolker, B., Westfall, J., Aust, F., Ben-Shachar, M. S., Højsgaard, S., Fox, J., Lawrence, M. A., Mertens, U., Love, J., Lenth, R., & Christensen, R. H. B. (2023). afex: Analysis of factorial experiments (Version 1.2-1) [Computer software]. CRAN. https://CRAN.R-project.org/package=afex
Skinner, E. I., & Fernandes, M. A. (2009). Age-related changes in the use of study context to increase recollection. Aging, Neuropsychology, and Cognition, 16, 377-400. https://doi.org/10.1080/13825580802573052
Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory, 4(6), 592-604. https://doi.org/10.1037/0278-7393.4.6.592
Snodgrass, J. G., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory, 6, 174-215. https://doi.org/10.1037/0278-7393.6.2.174
Soon, C. S., Brass, M., Heinze, H.-J., & Haynes, J.-D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543-545. https://doi.org/10.1038/nn.2112
Speer, R., Chin, J., Lin, A., Jewett, S., & Nathan, L. (2018). LuminosoInsight/wordfreq (Version 2.2) [Computer software]. Zenodo. https://zenodo.org/record/1443582
Staresina, B. P., Gray, J. C., & Davachi, L. (2009). Event congruency enhances episodic memory encoding through semantic elaboration and relational binding. Cerebral Cortex, 19(5), 1198-1207. https://doi.org/10.1093/cercor/bhn165
Straube, B., Green, A., Weis, S., Chatterjee, A., & Kircher, T. (2009). Memory effects of speech and gesture binding: Cortical and hippocampal activation in relation to subsequent memory performance. Journal of Cognitive Neuroscience, 21(4), 821-836. https://doi.org/10.1162/jocn.2009.21053
Talairach, J., & Tournoux, P. (1988). Co-planar stereotaxic atlas of the human brain: 3-D proportional system: An approach to cerebral imaging (1st ed.). Thieme.
Taylor, P. A., Chen, G., Glen, D. R., Rajendra, J. K., Reynolds, R. C., & Cox, R. W. (2018). fMRI processing with AFNI: Some comments and corrections on "Exploring the impact of analysis software on task fMRI results". bioRxiv. https://doi.org/10.1101/308643
Taylor, P., Reynolds, R., Calhoun, V., Gonzalez-Castillo, J., Handwerker, D., Bandettini, P., Mejia, A., & Chen, G. (2022). Highlight results, don't hide them: Enhance interpretation, reduce biases and improve reproducibility. bioRxiv. https://doi.org/10.1101/2022.10.26.513929
Totten, E. (1935). Eye movement during visual imagery. Comparative Psychology Monographs, 11(3), 46-46.
Tran, S. H. N., Beech, I., & Fernandes, M. A. (2022). Drawing compared to writing in a diary enhances recall of autobiographical memories. Aging, Neuropsychology, and Cognition. Advance online publication. https://doi.org/10.1080/13825585.2022.2047594
Ueno, T. (2003). An fMRI study during finger movement tasks and recalling finger movement tasks in normal subjects and schizophrenia patients. Kyushu Shinkei Seishin Igaku, 49(3-4), 141-147.
Vaidya, C. J., Zhao, M., Desmond, J. E., & Gabrieli, J. D. E. (2002). Evidence for cortical encoding specificity in episodic memory: Memory-induced re-activation of picture processing areas. Neuropsychologia, 40, 2136-2143. https://doi.org/10.1016/S0028-3932(02)00053-2
van der Meer, L., Costafreda, S., Aleman, A., & David, A. S. (2010). Self-reflection and the brain: A theoretical review and meta-analysis of neuroimaging studies with implications for schizophrenia. Neuroscience & Biobehavioral Reviews, 34(6), 935-946. https://doi.org/10.1016/j.neubiorev.2009.12.004
Vinci-Booher, S., Cheng, H., & James, K. H. (2019). An analysis of the brain systems involved with producing letters by hand. Journal of Cognitive Neuroscience, 31(1), 138-154. https://doi.org/10.1162/jocn_a_01340
Vuust, P., Roepstorff, A., Wallentin, M., Mouridsen, K., & Østergaard, L. (2006). It don't mean a thing...: Keeping the rhythm during polyrhythmic tension, activates language areas (BA47). NeuroImage, 31(2), 832-841. https://doi.org/10.1016/j.neuroimage.2005.12.037
Wagner, A. D., Schacter, D. L., Rotte, M., Koutstaal, W., Maril, A., Dale, A. M., Rosen, B. R., & Buckner, R. L. (1998). Building memories: Remembering and forgetting of verbal experiences as predicted by brain activity. Science, 281(5380), 1188-1191. https://doi.org/10.1126/science.281.5380.1188
Wagner, A. D., Shannon, B. J., Kahn, I., & Buckner, R. L. (2005). Parietal lobe contributions to episodic memory retrieval. Trends in Cognitive Sciences, 9(9), 445-453. https://doi.org/10.1016/j.tics.2005.07.001
Wammes, J. D., Meade, M. E., & Fernandes, M. A. (2016). The drawing effect: Evidence for reliable and robust memory benefits in free recall. Quarterly Journal of Experimental Psychology, 69(9), 1752-1776. https://doi.org/10.1080/17470218.2015.1094494
Wammes, J. D., Meade, M. E., & Fernandes, M. A. (2017). Learning terms and definitions: Drawing and the role of elaborative encoding. Acta Psychologica, 179, 104-113. https://doi.org/10.1016/j.actpsy.2017.07.008
Wammes, J. D., Meade, M. E., & Fernandes, M. A. (2018a). Creating a recollection-based memory through drawing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(5), 734-751. https://doi.org/10.1037/xlm0000445
Wammes, J. D., Roberts, B. B. T., & Fernandes, M. A. (2018b). Task preparation as a mnemonic: The benefits of drawing (and not drawing). Psychonomic Bulletin & Review, 25(6), 2365-2372. https://doi.org/10.3758/s13423-018-1477-y
Wammes, J. D., Jonker, T. R., & Fernandes, M. A. (2019). Drawing improves memory: The importance of multimodal encoding context. Cognition, 191, 103955. https://doi.org/10.1016/j.cognition.2019.04.024
Ward, B. D. (2023). 3dANOVA2 (Version 23.0.00) [Computer software]. National Institute of Mental Health. https://afni.nimh.nih.gov/pub/dist/doc/program_help/3dANOVA2.html
Wheeler, M. E., Petersen, S. E., & Buckner, R. L. (2000). Memory's echo: Vivid remembering reactivates sensory-specific cortex. Proceedings of the National Academy of Sciences of the United States of America, 97(20), 11125-11129. https://doi.org/10.1073/pnas.97.20.11125
Winograd, E., Smith, A. D., & Simon, E. W. (1982). Aging and the picture superiority effect in recall. Journal of Gerontology, 37, 70-75. https://doi.org/10.1093/geronj/37.1.70
Woo, C. W., Krishnan, A., & Wager, T. D. (2014). Cluster-extent based thresholding in fMRI analyses: Pitfalls and recommendations. NeuroImage, 91, 412-419. https://doi.org/10.1016/j.neuroimage.2013.12.058
Woodruff, C. C., Johnson, J. D., Uncapher, M. R., & Rugg, M. D. (2005). Content-specificity of the neural correlates of recollection. Neuropsychologia, 43, 1022-1032. https://doi.org/10.1016/j.neuropsychologia.2004.10.013
Yazar, Y., Bergström, Z. M., & Simons, J. S. (2014). Continuous theta burst stimulation of angular gyrus reduces subjective recollection. PLOS ONE, 9(10), e110414. https://doi.org/10.1371/journal.pone.0110414
Yazar, Y., Bergström, Z. M., & Simons, J. S. (2017). Reduced multimodal integration of memory features following continuous theta burst stimulation of angular gyrus. Brain Stimulation, 10(3), 624-629. https://doi.org/10.1016/j.brs.2017.02.011