Abstract
When observers make rapid, difficult perceptual decisions, their response time is highly variable from trial to trial. In a visual motion discrimination task, it has been reported that human accuracy declines with increasing response time, whereas rat accuracy increases with response time. This is of interest because different mathematical theories of decision-making differ in their predictions regarding the correlation of accuracy with response time. On the premise that perceptual decision-making mechanisms are likely to be conserved among mammals, we seek to unify the rodent and primate results in a common theoretical framework. We show that a bounded drift diffusion model (DDM) can explain both with variable parameters: trial-to-trial variability in the starting point of the diffusion process produces the pattern typically observed in rats, whereas variability in the drift rate produces the pattern typically observed in humans. We further show that the same effects can be produced by deterministic biases, even in the absence of parameter stochasticity or parameter change within a trial.
Introduction
One might expect decision-making by humans to be quite different from that of rats. In decisions with wide-reaching long-term consequences, we expect (or at least wish) that humans would avail themselves of abstract conceptual thought, logical reasoning, and culturally accumulated knowledge unavailable to a rat. Yet all organisms face a continuous challenge of selecting among available actions in order to pursue goals. To select an action, sensory information, internal knowledge, and goals must be combined to evaluate the likely outcomes of possible actions relative to survival needs. Often there is not enough time to acquire the evidence necessary to determine with certainty the optimal course of action, so an action must be selected despite unresolved or unresolvable uncertainty. Some mechanism is needed to ensure timely commitment and to optimize outcomes on average, and this mechanism must adapt flexibly to the prevailing sensory context, shifting goal priorities, the urgency of action, and the severity of the consequences of errors. When it comes to the continuous sensory guidance of moment-by-moment actions, decisions about sensory evidence are made in a fraction of a second. We speculate that in this case, mechanisms are largely conserved across mammals.
A now-classic series of studies in humans and non-human primates introduced the use of a stochastic visual motion task to study decision making (Britten et al., 1992, 1993, 1996; Shadlen et al., 1996; Shadlen and Newsome, 1996; Gold and Shadlen, 2001, 2007; Shadlen and Newsome, 2001; Roitman and Shadlen, 2002; Mazurek et al., 2003; Huk and Shadlen, 2005; Palmer et al., 2005). In each trial a visual stimulus provides information regarding which of two available actions is associated with reward and which is associated with non-reward or penalty. Stimulus strength is modulated by the motion coherence, which is defined as the fraction of the dots in the display that are “signal” (moving toward the rewarded response side). The remaining dots are “noise” (moving in random directions). As stimulus strength increases, accuracy increases and response time decreases for both monkeys (Roitman and Shadlen, 2002) and humans (Palmer et al., 2005). This is parsimoniously explained by drift diffusion models, which postulate that noisy sensory evidence is integrated over time until the accumulated evidence reaches a decision threshold (Stone, 1960; Ashby, 1983; Busemeyer and Townsend, 1993; Gold and Shadlen, 2001, 2007; Usher and McClelland, 2001; Ratcliff and Tuerlinckx, 2002; Palmer et al., 2005; Brown and Heathcote, 2008; Ratcliff and McKoon, 2008; Ratcliff et al., 2016). Although this class of model is highly successful, more data are needed to test model predictions and differentiate among competing versions of the model and alternative model classes (Wang, 2002; Ratcliff and McKoon, 2008; Pleskac and Busemeyer, 2010; Purcell et al., 2010; Rao, 2010; Heathcote and Love, 2012; Tsetsos et al., 2012; Huang and Rao, 2013; Usher et al., 2013; Scott et al., 2015; Ratcliff et al., 2016; Sun and Landy, 2016; White et al., 2018).
For example, when monkeys or humans perform this task, among trials of the same stimulus strength the interleaved trials with longer response times are more likely to be errors (Roitman and Shadlen, 2002; Palmer et al., 2005). In its simplest form the drift diffusion model does not explain this result; therefore the observation has been an important constraint for recent theoretical efforts. The result can be explained if the decision bound is not constant but instead decays as a function of time (Churchland et al., 2008; Cisek et al., 2009; Bowman et al., 2012; Drugowitsch et al., 2012). A collapsing decision bound can be rationalized as an optimal strategy under some task constraints (Rao, 2010; Hanks et al., 2011; Huang and Rao, 2013; Tajima et al., 2016) though this argument has been challenged by others (Hawkins et al., 2015; Boehm et al., 2016). There are alternative ways to explain the data within the sequential sampling model framework without positing an explicit urgency signal or decaying bound (Ditterich, 2006a,b; Ratcliff and McKoon, 2008; Ratcliff and Starns, 2013).
When rats performed the same random dot motion task, however, the opposite effect was found: their later decisions were more likely to be accurate (Reinagel, 2013b; Shevinsky and Reinagel, 2019). The same has also been reported for image discriminations in rats (Reinagel, 2013a), for visual orientation decisions in mice (Sriram et al., 2020), and in humans in some other tasks (McCormack and Swenson, 1972; Ratcliff and Rouder, 1998; Long et al., 2015; Stirman et al., 2016). This result is not readily explained by some of the models suggested to explain the late errors of primates [reviewed in Heitz (2014), Ratcliff et al. (2016), and Hanks and Summerfield (2017)]. Here, we explore a stochastic variant of the drift-diffusion model (Ratcliff and Tuerlinckx, 2002; Ratcliff and McKoon, 2008) for its ability to explain these problematic findings in both species.
Results
In a basic drift diffusion model (DDM), the relative sensory evidence in favor of a decision (e.g., “motion is rightward” vs. “motion is leftward”) is accumulated by an internal decision variable, resulting in a biased random walk, i.e., diffusion with drift (Figure 1A). The average drift rate is determined by the sensory signal strength (e.g., the coherence of visual motion). When the decision variable reaches either decision threshold, the agent commits to a choice. The time at which the decision variable crosses a threshold (response time), and the identity of the decision threshold that is crossed (correct vs. incorrect), vary from trial to trial. The model parameters are the starting point z, threshold separation a, drift rate v, and non-decision time t (in Figures 1A–E, z = 0, a = 2, t = 0, v = 0.7).
FIGURE 1
An interesting feature of this model is that for any set of parameters, the errors and correct responses have identical response time distributions (Figures 1B–E, red vs. green). Therefore errors are on average the same speed as correct responses – even if the signal is so strong that errors are very rare.
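This property is easy to verify numerically. The sketch below is our illustration, not the authors' code: it simulates the basic DDM by Euler–Maruyama integration, assuming symmetric thresholds at ±a/2 and the parameter values listed above, and compares the mean response times of correct and error trials.

```python
import numpy as np

rng = np.random.default_rng(1)

def basic_ddm_trial(v=0.7, a=2.0, z=0.0, t_nd=0.0, sigma=1.0, dt=1e-3):
    """One trial of the basic DDM, integrated with the Euler-Maruyama method.
    Decision thresholds are assumed at +a/2 (correct) and -a/2 (error);
    z is the starting point and t_nd the non-decision time."""
    x, elapsed = z, 0.0
    while abs(x) < a / 2:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        elapsed += dt
    return elapsed + t_nd, x > 0  # (response time in s, correct?)

results = [basic_ddm_trial() for _ in range(5000)]
rt = np.array([r for r, _ in results])
ok = np.array([c for _, c in results])
print(f"accuracy: {ok.mean():.3f}")
print(f"mean RT correct: {rt[ok].mean():.3f} s; mean RT error: {rt[~ok].mean():.3f} s")
# The two means agree up to sampling noise, even though errors are much rarer.
```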
We note that this may at first appear to contradict two other facts, but does not. First, responses to stronger stimuli tend to be both more accurate and faster, which in this model is explained by a higher drift rate v. In this sense response time is negatively correlated with accuracy – but only when comparing trials of differing stimulus strengths. Second, conservative subjects tend to take more time to respond and are more accurate, which in this model is explained by a greater threshold separation a. In this sense response time is positively correlated with accuracy – but only when comparing blocks of trials with different overall degrees of caution. Both observations are consistent with the basic DDM, in which response time and accuracy are uncorrelated within a block of fixed overall caution, comparing among trials of the same stimulus strength.
However, both humans and rats deviate systematically from the prediction that correct and error trials have the same response time distribution (Shevinsky and Reinagel, 2019). In the random dot motion discrimination task, for example, correct trials of rat subjects tend to have longer response times than errors (e.g., Figure 1F, cf. 1E). We quantify this effect by comparing the response times of individual correct trials to nearby (but not adjacent) error trials of the same stimulus strength (Figure 1I). This temporally local measure is robust to data non-stationarities that could otherwise produce a result like that shown in Figure 1F artefactually (Shevinsky and Reinagel, 2019). Humans also violate the basic DDM prediction, but in the opposite way: for humans, errors tend to have longer response times (e.g., Figure 1J, cf. 1E; summarized in 1M). Our goal is to find a unified framework that accounts for both of these deviations.
Drift Diffusion Model With Variable Parameters
It was previously shown that adding noise to the parameters of a bounded drift diffusion model can differentially affect the error and correct response time distributions (Ratcliff and Tuerlinckx, 2002; Ratcliff and McKoon, 2008). The version we implemented has three additional parameters: variability in starting point σz, variability in non-decision-time σt, and variability in drift rate σv (Figure 2A). We are able to find parameter sets that produce behavior qualitatively similar to either a rat (Figures 2B–E, cf. 1F–I) or a human (Figures 2F–I, cf. 1J–M). Notably, this model can replicate the shift in the response time distribution of correct trials to either later or earlier than that of error trials (Figure 2B, cf. 1F; and Figure 2F, cf. 1J), unlike the standard DDM (Figure 1E). The model also replicates the fact that the amplitude of this effect increases with stimulus strength (Figure 2E, solid blue symbols, cf. Figure 1I; and Figure 2I, cf. 1M). Removing the drift rate variability and starting point variability from these simulations improved accuracy (Figures 2C,G open symbols), increased the response time for ambiguous stimuli (Figures 2D,H), and eliminated the difference between average correct and error response times (Figures 2E,I).
FIGURE 2
We systematically varied the parameters of this model (Figures 3A–C) to determine all the conditions under which the mean RT of correct trials can be greater or less than the mean RT of error trials, using parameter ranges from the literature (Ratcliff and Tuerlinckx, 2002; Wagenmakers et al., 2007; Ratcliff and McKoon, 2008). Like the basic DDM, the simulations with σz = 0, σv = 0 showed no difference between correct and error RT for any drift rate (black curves are on y = 0 line), in spite of the addition of non-decision time variability σt. We never observed a positive RT difference in this model unless the starting point was variable (the dark blue or black curves, σz = 0, lie entirely on or below the abscissa). Whenever σz > 0 and σv = 0, the RT difference was positive (thin lines other than black). We never observed a negative RT difference in the absence of drift rate variability (thin lines, σv = 0, lie entirely on or above the abscissa). Whenever σv > 0 and σz = 0, the RT difference was negative (dark blue curves). Holding other parameters constant, the RT difference always increased (more positive, or less negative) with increasing σz (blue → red) and decreased with increasing σv (thin → thick).
FIGURE 3
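The qualitative structure of these results can be reproduced with a compact simulation. In the sketch below (ours; the grid values are illustrative, not those used for Figure 3), the starting point is redrawn uniformly over a range σz and the drift rate from a normal distribution with standard deviation σv on each trial, and ⟨RTcorrect⟩ − ⟨RTerror⟩ is tabulated over a small σz × σv grid.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_rt_diff(v, sz, sv, a=2.0, sigma=1.0, dt=1e-3, n=20000, max_steps=10000):
    """Vectorized DDM with trial-to-trial variability in starting point (sz)
    and drift rate (sv); returns mean RT(correct) - mean RT(error)."""
    x = rng.uniform(-sz / 2, sz / 2, n) if sz else np.zeros(n)
    drift = rng.normal(v, sv, n) if sv else np.full(n, v)
    rt = np.full(n, np.nan)
    hit_upper = np.zeros(n, bool)
    alive = np.ones(n, bool)
    for step in range(1, max_steps + 1):
        idx = np.flatnonzero(alive)
        if idx.size == 0:
            break
        x[idx] += drift[idx] * dt + sigma * np.sqrt(dt) * rng.standard_normal(idx.size)
        done = idx[np.abs(x[idx]) >= a / 2]
        rt[done] = step * dt
        hit_upper[done] = x[done] > 0
        alive[done] = False
    finished = ~np.isnan(rt)
    correct, error = hit_upper & finished, ~hit_upper & finished
    return rt[correct].mean() - rt[error].mean()

for sz in (0.0, 0.6, 1.2):
    for sv in (0.0, 0.4, 0.8):
        print(f"sz={sz:.1f} sv={sv:.1f}  <RTc>-<RTe> = {mean_rt_diff(0.7, sz, sv):+.3f} s")
# Expected pattern, as in the parameter sweeps: the difference is positive only
# when sz > 0, negative only when sv > 0, and the two trade off when both act.
```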
When both starting point variability and drift rate variability are present simultaneously, these opposing effects trade off against one another quantitatively, such that many parameter combinations are consistent with any given sign and amplitude of effect. This explains why parameter fits to data are generally degenerate. Taken together, the simulations show that the human-like pattern is associated with dominance of σv and the rat-like pattern with dominance of σz. The non-decision time t and its variability σt were explored in separate simulations and did not impact the effect of interest (not shown).
It is difficult or impossible to recover the true parameters of this model by fitting data (Boehm et al., 2018). Nevertheless, we fit published human and rat datasets to identify example parameters of the model consistent with the observed data. The parameters obtained from fitting are not guaranteed to be the optimal solutions of the model, nor accurate measures of noise in the subjects. Bearing these caveats in mind, the distributions of parameters we obtained (Figures 3D,F,H) were consistent with the parameter sweeps. Parameters fit to humans and rats overlapped substantially, but the human distributions were shifted toward values that favor late errors (higher σv, higher a, and lower σz), and the rat distributions toward values that favor early errors (higher σz, lower a, and lower σv). Trials simulated using the fitted parameters reproduced the sign of the effect (Figures 3J,L). In a subset of examples defined by low bias, the differences between the parameter distributions of rats and humans were less pronounced (Figures 3E,G,I). Experiments with low bias still exhibited the species difference, and models fit to those examples still reproduced it (Figures 3K,M).
This analysis does not prove that humans and rats have trial-by-trial variability in drift rate and starting point, much less provide an empirical measure of that variability. What it does show is that if starting point and drift rate vary from trial to trial, that alone could be sufficient to produce the effects previously reported in either species. Only subtle differences in the relative dominance of drift rate vs. starting point variability would be required to explain the reported species difference.
Variability Need Not Be Random
Random trial-to-trial variability in parameters can cause differences between correct and error response times. But “variability” does not have to be noise. Systematic biases in the starting point or drift rate would also vary from trial to trial, and therefore would produce similar effects. We tested whether bias alone could produce results resembling those of Shevinsky and Reinagel (2019).
First, recall that we have defined decision thresholds as “correct” vs. “error” rather than “left” vs. “right” (Figure 2A). Therefore it is impossible for the mean starting point z to be biased, because the agent cannot know a priori which side is correct. If a subject’s starting point were systematically biased to one response side, the starting point would be closer to the correct threshold on half the trials (when the preferred side was the correct response), but further on the other half of the trials (when the non-preferred response was required). Thus the mean starting point would be z=0, but the distribution of z would be binary (or at least bimodal), and thus high in variance. This could mimic a model with high σz, even in the absence of stochastic parameter variability.
We demonstrate by simulation that adding a fixed response-side bias to the starting point of an otherwise standard DDM is sufficient to produce a response bias (Figure 4A). Response accuracy is higher when the correct response (“target”) is on the preferred side, and can fall below chance for weak stimuli to the non-preferred side (Figure 4B). At any given coherence, response times are faster for targets on the preferred side (Figure 4C). For targets on the preferred side, reaction times of correct trials are faster than errors, whereas for targets on the non-preferred side, correct trials are slower than errors (Figure 4D). This is because when the target is on the preferred side, the starting point is closer to the correct threshold, such that correct responses cross threshold faster than error responses, while the opposite is true for targets on the non-preferred side. Thus both the left and right target trials violate the expectation of ⟨RTc⟩ = ⟨RTe⟩, but in opposite directions.
FIGURE 4
If the left-target and right-target trials are pooled in a single analysis – even if exactly equal numbers of both kinds are used – these opposite effects do not cancel out (Figure 4E). On average, correct trials would have longer RT than error trials, to an increasing degree as coherence increases (Figure 4E), just as commonly seen in rodents (e.g., Figure 1I; see Shevinsky and Reinagel, 2019). The reason for this, in brief, is that the side with the starting point nearer the error threshold is responsible for the vast majority of the errors, and these errors have short RTs. The imbalance of contributions to correct responses is less pronounced. Although the side with the starting point nearer the correct threshold contributes the majority of correct responses, and those have short RTs, the drift rate ensures that both sides contribute substantial numbers of correct trials. For a more detailed account see Supplementary Figure 1 in Supplementary Materials. Mechanisms aside, the important point is that if a response bias is present (Figure 4A) and an effect like that in Figure 4E is obtained from an analysis that pools left- and right-target trials, starting point bias toward the preferred side should be considered as a possible cause. Either the analysis shown here (Figure 4D) or one that separates left-side from right-side choices (see Supplementary Figure 2D in Supplementary Materials) can be used to reveal the contribution of starting-point bias to early errors.
What if response bias arose, not from a shift in the starting point of evidence accumulation, but rather from an asymmetry in the drift rate for leftward vs. rightward motion: vR ≠ vL? Again, there would be an excess of responses to the preferred side (Figure 4F), and preferred target trials would be more accurate (Figure 4G) and faster (Figure 4H). If left and right targets were analyzed separately, each on its own would have a fixed drift rate, and therefore would behave as predicted by the basic DDM: correct trials and errors would have the same mean reaction time (Figure 4I).
But if left- and right-target trials were pooled, v would be biased toward or away from the correct response in different trials with equal probability, resulting in a binary or bimodal distribution in v. Thus the standard deviation of v (σv) would be large, producing effects equivalent to high drift rate variability σv (Figure 4J), just as commonly seen in primates (e.g., Figure 1M; see Shevinsky and Reinagel, 2019). The reason for this is that pooling left- and right-target trials is equivalent to mixing together trials from high- and low-coherence stimuli: the slower RTs over-represent the low-coherence (slow, inaccurate) trials while faster RTs over-represent the high coherence (fast, accurate) trials, such that errors are on average slower than correct trials (see Supplementary Figure 1 in Supplementary Materials). Therefore, if a response bias is present (Figure 4F) and an effect like that in Figure 4J is observed in a pooled-trial analysis, drift rate bias is a candidate mechanism. Either the analysis shown here (Figure 4I) or one that separates left-side from right-side choices (see Supplementary Figure 2I in Supplementary Materials) can be used to clarify the contribution of drift rate bias to late errors.
Finally, note that rats and humans with the same degree of response-side bias could have opposite effects on ⟨RTcorrect⟩−⟨RTerror⟩ (Figures 4E vs. J), if the starting point were more biased in rats and drift rate more biased in humans.
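The pooled-analysis argument can be checked directly. The following sketch is ours, with arbitrary bias magnitudes: it works in left/right coordinates, gives an otherwise standard DDM a fixed preference for one side via either a starting-point offset or a drift-rate asymmetry, and reports the pooled ⟨RTcorrect⟩ − ⟨RTerror⟩ over equal numbers of left- and right-target trials.

```python
import numpy as np

rng = np.random.default_rng(4)

def biased_trial(target, v=0.7, a=2.0, z_bias=0.0, v_bias=0.0, sigma=1.0, dt=1e-3):
    """DDM in left/right coordinates: +a/2 is the 'right' response threshold,
    -a/2 the 'left'. target is +1 (right correct) or -1 (left correct).
    z_bias shifts the starting point toward 'right'; v_bias adds a fixed
    rightward drift component. Returns (RT in s, correct?)."""
    x, elapsed = z_bias, 0.0
    drift = target * v + v_bias
    while abs(x) < a / 2:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        elapsed += dt
    choice = 1 if x > 0 else -1
    return elapsed, choice == target

def pooled_rt_diff(**bias):
    """Pooled <RT correct> - <RT error> over equal numbers of L/R targets."""
    rts, oks = [], []
    for _ in range(5000):
        rt, ok = biased_trial(int(rng.choice([-1, 1])), **bias)
        rts.append(rt)
        oks.append(ok)
    rt, ok = np.array(rts), np.array(oks)
    return rt[ok].mean() - rt[~ok].mean()

print("starting-point bias:", pooled_rt_diff(z_bias=0.4))  # expect > 0, rat-like
print("drift-rate bias:    ", pooled_rt_diff(v_bias=0.4))  # expect < 0, human-like
```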
We belabor the effects of response-side bias in order to make a broader generalization. The results just shown (Figures 4A–J) require only that a bias to one side (L or R) exists in each trial. It does not matter whether that bias is fixed or varies from trial to trial, only that it is uncorrelated with the correct response. If the starting point or drift rate were biased in individual trials based on the recent trial history, for example, this would also bias the decision toward or away from the correct response with equal probability in each trial. Therefore history-dependent bias can also mimic either high σz (Figure 4O) or high σv (Figure 4T). But in this case, there would be no overall left or right side bias (Figures 4K,P), and even after conditioning the analysis on the target side, the “early error” (Figure 4N) or “late error” (Figure 4S) phenotypes would persist. By analogy to the case of response-side bias, one could test for this specific kind of bias by conditioning the analysis on the location of the previous trial’s reward.
In principle, therefore, biases due to trial history or other contextual states could also be sufficient to explain the observed difference between error and correct response times in both species, even in the absence of overt side bias, random variability of parameters, or within-trial parameter change. Again, the difference between rats and humans does not require that historical or contextual biases are stronger in either species, only that when present, they have a stronger effect on drift rate in humans and a stronger effect on starting point in rats.
In real data, however, the observed effects could be explained by a combination of response-side bias, history-dependent bias, contextual modulation, and noise, impacting both starting point and drift rate in both species. Therefore, conditioning the analysis on discrete trial types is not a practical way to detect (or rule out) bias effects in most data sets. Other new modeling approaches show promise for dissecting such mixed effects, however (Urai et al., 2019; Ashwood et al., 2020).
Discussion
The impetus for this study was an observed difference between primate and rodent decision-making: for primates correct decisions are on average faster than errors, whereas for rodents correct decisions are on average slower than errors. Both observations violate the predictions of the standard drift diffusion model. In one study this species difference was seen even when the sensory task was matched such that rats were just as accurate and just as fast as humans in the task, and even among subjects with low bias or lapse and comparable accuracy and speed (Shevinsky and Reinagel, 2019).
We do not presume that the difference in response time of correct vs. error trials is functionally significant for either species; the difference is small and accounts for a small fraction of the variance in response time. The effect is interesting because it places constraints on the underlying decision-making algorithms, and in particular because it is inconsistent with the DDM in its basic form.
Decreasing accuracy with response time has been widely reported in both humans and non-human primates (Roitman and Shadlen, 2002; Palmer et al., 2005) and has been explained by a number of competing models (Ditterich, 2006a,b; Ratcliff and McKoon, 2008; Rao, 2010; Hanks et al., 2011; Huang and Rao, 2013; Ratcliff and Starns, 2013; Tajima et al., 2016). It was only recently appreciated that accuracy increases with response time in this type of task in rats (Reinagel, 2013a,b; Shevinsky and Reinagel, 2019; Sriram et al., 2020), and it remains unclear which of those models can accommodate this observation as well. In this study we showed that either parameter noise (Ratcliff and Tuerlinckx, 2002; Ratcliff and McKoon, 2008) or systematic parameter biases could explain the observed interaction between response time and accuracy in either species. Similar effects might be found in other related decision-making models.
In the models explored here, greater variability in the starting point of evidence accumulation would produce the effect seen in rats, whereas greater variability in the drift rate of evidence accumulation would produce the effect seen in humans. We do not know why rodents and primates should differ in this way. It could be, for example, that drift rate is modulated by top-down effects arising in cortex, while starting point is modulated by bottom-up effects arising subcortically, and species differ in the relative strength of these influences. Or perhaps some kinds of bias act on starting point while others act on drift rate, and species differ in which kinds of bias are stronger.
Can Context Account for Variability?
Although stochastic trial-by-trial variability of parameters could explain the effects of interest (Figure 3), systematic variations can also do so. We demonstrate this for simple cases of response side bias or history-dependent bias (Figure 4). Response bias is more prevalent in rats than in humans, but correct trials have longer RT than errors even in rats with no bias (Shevinsky and Reinagel, 2019). In any case, these simulations show that response side bias would only produce the rat-like pattern if that bias impacted starting point to a greater degree than drift rate.
It is known that decisions in this type of task can be biased by the previous trial’s stimulus, response, and outcome in mice (Busse et al., 2011; Hwang et al., 2017; Roy et al., 2021), rats (Lavan et al., 2011; Roy et al., 2021), non-human primates (Sugrue et al., 2004), and humans (Goldfarb et al., 2012; Roy et al., 2021), reviewed in Fründ et al. (2014). Such history-dependent biases can be strong without causing an average side preference or an observable lapse rate (errors on strong stimuli). Species differ in the strength of such biases (Roy et al., 2021), but a difference in strength of bias does not determine whether the effect will be to make error trials earlier or later (Figures 4N,O vs. S,T). This requires a difference in the computational site of action of bias.
In support of this idea, recent studies have traced variability to bias and history-dependent effects in both rodents and primates. In a go-nogo task, choice bias (conservative vs. liberal) in both mice and humans could be explained by bias in drift rate (de Gee et al., 2020). In another study, choice history bias (repeat vs. alternate) was specifically linked to drift rate variability in humans (Urai et al., 2019).
Fluctuations in arousal, motivation, satiety or fatigue could conceivably modulate decision thresholds or drift rates from trial to trial, independently of either response side or trial history. [Note that in the model of Figures 2 and 3, fluctuations in the threshold separation parameter a are subsumed by the starting-point variability parameter σz (Ratcliff and Tuerlinckx, 2002; Ratcliff and McKoon, 2008).] Such sources of variation may or may not be correlated with other measurable states, such as alacrity (e.g., latency to trial initiation, or in rodents the number or frequency of request licks), arousal (e.g., assessed by pupillometry), fatigue (the number of trials recently completed), satiety (amount of reward recently consumed), or frustration/success (fraction of recent trials penalized/rewarded). As models continue to include more of these effects, it will be of interest to determine how much of the observed behavioral variability is reducible to such deterministic components in each species, and whether those effects can be attributed differentially to starting point vs. drift rate in either decision-making models or neural recordings.
Is Parameter Variability a Bug or a Feature?
To the extent that parameter variability is attributable to systematic influences rather than noise, a separate question would be whether this variability is adaptive or dysfunctional, in either species. It is possible that non-sensory influences shift the decision-making computation from trial to trial in a systematic and reproducible fashion that would be functionally adaptive in the context of natural behavior, even though we have artificially broken natural spatial and temporal correlations to render it maladaptive in our laboratory task.
For example, in nature some locations may be intrinsically more reward-rich, or very recent reward yields may be informative about the expected rewards at each location. In the real world, recently experienced visual motion might be highly predictive of the direction of subsequent motion stimuli. Therefore biasing either starting point or drift rate according to location or recent stimulus or reward history may be adaptive strategies under ecological constraints, for either or both species.
Consistent with this suggestion, decision biases of mice have been modeled as learned, continuously updated decision policies (Ashwood et al., 2020). Although the policy updates did not optimize expected reward in that study, the observed updates might still reflect hard-wired learning rules that would be optimal on average in natural contexts.
Conclusion
It has been argued that neural computations underlying sensory decisions could integrate comparative information about incoming sensory stimuli (e.g., left vs. right motion signals), internal representations of prior probability (frequency of left vs. right motion trials) and the expected values (rewards or costs) associated with correct vs. incorrect decisions, in a common currency (Gold and Shadlen, 2001, 2007). On the premise that basic mechanisms of perceptual decision-making are likely to be conserved (Cesario et al., 2020), fitting a single model to data from multiple species – especially where they differ in behavior – is a powerful way to develop and distinguish among alternative computational models (Urai et al., 2019), and enables direct comparison of species.
Materials and Methods
The data and code required to replicate all results in this manuscript are archived in a verified replication capsule (Reinagel and Nguyen, 2022).
Experimental Data
No experiments were reported in this manuscript. Example human and rat data were previously published (Shevinsky and Reinagel, 2019; Reinagel and Shevinsky, 2020). Specifically, Figure 1F used rat fixed-coherence epoch 256 of that repository; Figures 1G–I used rat psychometric epoch 146. Figure 1J used human fixed-coherence epoch 63, and Figures 1K–M used human psychometric epoch 81 from that data set. Figures 3D–M used the “AllEpochs” datasets.
To summarize the experiment briefly: the task was random dot coherent motion discrimination. When subjects initiated a trial, white dots appeared at random locations on a black screen and began to move. Some fraction of the dots (“signal”) moved at the same speed toward the rewarded response side. The others (“noise”) moved at random velocities. Subjects could respond at any time by a left vs. right keypress (human) or lick (rat). Correct responses were rewarded with money or water; errors were penalized by a brief time-out. Stimulus strength was varied by the motion coherence (the fraction of dots that were signal dots). Other stimulus parameters (e.g., dot size, dot density, motion speed, contrast) were chosen for each species to ensure that accuracy ranged from chance (50%) to perfect (100%) and response times ranged from ∼500 to 2,500 ms for typical subjects of the species.
Computational Methods
The drift diffusion process was simulated according to the equation X(t) = X(t − 1) ± δ, with probability p of increasing and (1 − p) of decreasing. Here t indexes discrete time steps of duration τ (in seconds); δ = σ·√τ is the step size, where σ is the standard deviation of the Gaussian white noise of the diffusion; and p = 0.5·(1 + v·√τ/σ), where v is the drift rate. The values of τ and σ were fixed at 0.1 ms and 1, respectively. On each trial, the process starts at a starting position z, sampled from a uniform distribution of range σz; assumes a constant drift rate v, sampled from a normal distribution of standard deviation σv; and continues until X(t) exceeds either threshold boundary. The non-decision time t, sampled from a uniform distribution of range σt, is added to the elapsed time to obtain the final RT for that trial.
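A direct transcription of this procedure into Python might look like the following (a sketch; variable names are ours, and thresholds are assumed at ±a/2):

```python
import numpy as np

rng = np.random.default_rng(3)

TAU = 1e-4     # time step in seconds (0.1 ms)
SIGMA = 1.0    # standard deviation of the diffusion noise

def simulate_trial(v=0.7, a=2.0, t_nd=0.3, sz=0.0, sv=0.0, st=0.0):
    """Random-walk DDM as described above: X steps by +/-delta, with
    P(step up) = 0.5*(1 + v*sqrt(tau)/sigma) and delta = sigma*sqrt(tau).
    Returns (RT in seconds, correct?). Note that p stays within [0, 1]
    provided |v|*sqrt(tau) <= sigma."""
    delta = SIGMA * np.sqrt(TAU)
    z = rng.uniform(-sz / 2, sz / 2) if sz else 0.0   # per-trial starting point
    drift = rng.normal(v, sv) if sv else v            # per-trial drift rate
    p_up = 0.5 * (1.0 + drift * np.sqrt(TAU) / SIGMA)
    x, steps = z, 0
    while abs(x) < a / 2:
        x += delta if rng.random() < p_up else -delta
        steps += 1
    t0 = t_nd + (rng.uniform(-st / 2, st / 2) if st else 0.0)  # non-decision time
    return steps * TAU + t0, x > 0

rt, correct = simulate_trial(sz=1.2)  # e.g., one trial with starting-point noise
```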
We measured the interaction between accuracy and response time using the temporally local measure ⟨RTcorrect – RTerror⟩ introduced in Shevinsky and Reinagel (2019). This method is preferred for real data because it is robust to non-trending non-stationarities that are commonly present in both human and rat data, are not detected by traditional stationarity tests, and could confound estimation of the effect of interest. The response time of each error trial is compared to a temporally nearby correct trial of the same coherence, requiring a minimum distance of >3 trials to avoid sequential effects and a maximum distance of 200 trials to avoid confounds due to long-range non-stationarity. For simulated data, where stationarity is guaranteed, the temporally local measure ⟨RTcorrect – RTerror⟩ and the global measure ⟨RTcorrect⟩ – ⟨RTerror⟩ are numerically equivalent.
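In outline, the pairing procedure can be implemented as follows (our reading of the published method; details such as tie-breaking and reuse of correct trials may differ):

```python
import numpy as np

def local_rt_difference(rt, correct, coherence, min_sep=4, max_sep=200):
    """Temporally local <RT_correct - RT_error>: each error trial is paired
    with the nearest correct trial of the same coherence that is more than
    3 and at most 200 trials away; pair differences are then averaged."""
    rt = np.asarray(rt, float)
    correct = np.asarray(correct, bool)
    coherence = np.asarray(coherence)
    diffs = []
    for e in np.flatnonzero(~correct):
        candidates = np.flatnonzero(correct & (coherence == coherence[e]))
        sep = np.abs(candidates - e)
        valid = candidates[(sep >= min_sep) & (sep <= max_sep)]
        if valid.size:
            partner = valid[np.argmin(np.abs(valid - e))]
            diffs.append(rt[partner] - rt[e])
    return float(np.mean(diffs)) if diffs else np.nan
```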
We fit parameters of the model shown in Figure 2A to published human and rat datasets (Reinagel and Shevinsky, 2020) using the Hierarchical Drift Diffusion Model (HDDM) package (Wiecki et al., 2013). We emphasize that fitting the parameters of this model is problematic (Boehm et al., 2018). Our interpretation of the parameters (Figures 3D–M) is limited to asserting that these example parameters can produce human-like or rat-like effects, to the extent demonstrated. For further details of fitting, including scripts, raw output files and summary statistics of parameters, see Supplementary Materials.
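In outline, a fit with the HDDM package looks like the sketch below. The file name and column names are placeholders, and the sampler settings are illustrative rather than the settings used for the reported fits.

```python
import hddm

# Expected columns: 'rt' (s), 'response' (1 = correct, 0 = error),
# 'coherence' (stimulus strength), 'subj_idx' (subject identifier).
data = hddm.load_csv('rat_epochs.csv')           # placeholder file name

# Include inter-trial variability in drift (sv), starting point (sz), and
# non-decision time (st); let the drift rate depend on stimulus coherence.
model = hddm.HDDM(data,
                  include=('sv', 'sz', 'st'),
                  depends_on={'v': 'coherence'})
model.find_starting_values()                      # optimize before sampling
model.sample(5000, burn=1000)                     # illustrative MCMC settings
model.print_stats()                               # posterior summaries
```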
Data Availability Statement
Publicly available datasets were analyzed in this study (Reinagel and Shevinsky, 2020). These data can be found at doi: 10.7910/DVN/ATMUIF.
Author Contributions
QN proposed and implemented the model, produced the reported results, and edited earlier drafts of the manuscript. PR provided direction and oversight, generated the figures, and wrote the final manuscript. Both authors contributed to the article and approved the submitted version.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnins.2022.794681/full#supplementary-material
Supplementary Figure 1 | Intuitions for the pooling effects.
Supplementary Figure 2 | Alternative analysis of simulated bias.
Supplementary Table 1 | Summary of fit parameters.
Supplementary Statistics | Parameters fit to data.
Supplementary Methods | Model fitting.
Scripts Folder | Python scripts to run with HDDM.
Outputs Folder | Output files generated by HDDM.