A defining characteristic of cervical dystonia (CD) is deviated head posture. Clinical trials of new treatments to improve head posture in CD require outcome measures that quantify its severity. Head posture severity is most commonly quantified with the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS),1,2 or its updated version, the TWSTRS-2.3 However, correct application of these scales requires substantial training and experience with CD. Furthermore, like most clinical rating scales, ratings with these scales are intrinsically subjective, influenced by clinician training, experience, and judgment. Thus, the TWSTRS and TWSTRS-2 are susceptible to intra- and inter-rater variability. Truly objective methods for measuring CD motor severity could reduce reliance upon experience and scale-specific training and circumvent the variability intrinsic to subjective rating scales. Calls for objective characterization of CD date back more than 30 years,4–6 and leveraging new technologies for this purpose continues to be identified as a research priority in dystonia.7
There have been numerous efforts to develop objective methods for assessing CD motor severity. Most of the approaches involve some type of instrumentation. Early attempts included a protractor collar that the patient wore around the neck and a wall chart for measuring head deviation from neutral in each of three axes (the Cervical Dystonia Severity Scale8). Other approaches monitor muscle activity either with electromyography9 or with ultrasound.10 More commonly, instrumented methods have captured 3D orientation of the head in CD using various motion capture technologies. These have included, for example, (1) electromagnetic-based sensors,11–13 (2) sets of multiple reflective markers and either optoelectronic5 or infrared14 cameras, (3) inertial measurement units usually combining accelerometers and gyroscopes,15 (4) wearable direct sensors, such as a combination of inclinometer and torsiometer,16 or (5) wireless thin-film accelerometers.17 These systems typically operate with high spatial and temporal resolution, and some low cost options have emerged. However, all of these approaches involve placing devices on the patient's neck or head. Because of CD's sensory abnormalities and alleviating maneuvers, the devices may modulate the very CD motor phenomenon we wish to capture.
Noncontact alternatives have been developed with specialized video cameras that also use infrared light and sensors to directly capture depth. When combined with custom algorithms, they can estimate the 3D orientation of the head in neurologically normal adults.18,19 This approach has been incorporated in a semiautomated interactive system for use in CD, demonstrating correlations with some items on the TWSTRS.20 Although the system is inexpensive and portable, there is no guidance on how the software's many parameters should be tuned for use in CD and the system is not widely available in movement disorders clinics.
All of the aforementioned methods require specialized equipment and expertise, impose variable demands on space, and take time for setup, calibration, and use. These factors are probably at least part of the reason previous studies using those methods have usually been limited to single centers with cohorts of fewer than 20 patients. Alternatively, quantification of head posture from standard video recordings would provide a digital method requiring only a conventional video camera, widely available in movement disorders centers and pervasive in mobile personal devices. This strategy was recognized over 30 years ago, when investigators used a marker on the nose and standard video recordings to quantify CD motor symptoms.4 They manually annotated every frame and quantified deviations in the 2D plane corresponding to the pitch and yaw axes, graphically depicting improvements after neurectomy and rhizotomy procedures for two CD patients. Even very early video reviews of generalized dystonia, dating back to the 1940s, involved similar frame-by-frame analyses21 and helped inform the argument that dystonia had a neurologic, rather than psychiatric, basis.22
Although conventional video recordings do not directly provide 3D information, the computer vision field has been developing methods to estimate the 3D angular orientation of the head (“head pose estimation”; see Fig. 1) from 2D digital images. We are extending those advances to develop a system that captures motor manifestations of dystonia from conventional video recordings (the Computational Motor Objective Rater; CMOR). We have previously used CMOR to quantify head tremor severity in CD.23 In this study we employ CMOR to estimate head posture severity in CD. Our objectives were twofold: first, to use CMOR to quantify the multi-axis directionality of predominant posture in CD; and second, to evaluate convergent validity between CMOR and clinicians for quantifying head posture severity.
We retrospectively analyzed clinical data and video recordings from 206 CD patients enrolled across 10 North American academic centers in a cross-sectional rating scale validation study conducted by the Dystonia Coalition.
The overall workflow for CMOR-based video processing is illustrated in Figure 2. Our analyses were based on a segment of the video examination protocol in which CD patients typically exhibit their most severe head deviation: they were instructed to close their eyes and let their head drift to its natural dystonic position for approximately 10 sec. The segment was identified as the intersection of annotations by two video annotators using ELAN 4.9.4.27 Both annotators were instructed to mark the beginning and end times of the segment, operated independently, and were blind to the clinical severity ratings.
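As a concrete illustration of this segment-selection step, the sketch below computes the temporal intersection of the two annotators' segment boundaries. It is a minimal example under the assumption that each annotator's ELAN annotation reduces to a single (start, end) pair in seconds; the function name is illustrative, not the study's tooling.

```python
# Minimal sketch: intersect two annotators' (start, end) marks for the
# eyes-closed segment. Assumes times are in seconds; not the study's own code.
def annotation_intersection(a, b):
    """Return the overlapping (start, end) of two annotated segments, or None if disjoint."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

# Example: annotator A marks (12.0, 23.5), annotator B marks (12.8, 24.1) -> (12.8, 23.5)
segment = annotation_intersection((12.0, 23.5), (12.8, 24.1))
```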
All videos underwent a quality control review by three independent reviewers also blind to the clinical ratings. Quality control issues were considered relevant if reported by at least two out of the three reviewers. Two aspects of video recording quality were noted and used to assess how robust CMOR's results would be to such quality issues: “dark” and “unstable”. Videos were deemed “dark” if the illumination on the face was considered sufficiently low to make it difficult to discern facial features upon which CMOR relies in order to estimate head posture. Videos were deemed “unstable” if they involved excessive panning and/or rotation of the video camera. Five other types of transient video issues were noted if: (1) the camera was not frontal relative to the participant, (2) the participant made intentional head turns that were not reflective of their natural dystonic position, (3) other faces were visible in the video frame, (4) the video was flipped sideways, and (5) the camera's zoom cropped out part of the participant's head. Identification of any one of these five issues by at least two out of the three reviewers was used to exclude that participant from further analyses.
CMOR's current computer vision engine (CVE) is OpenFace 2.0,28 an open-source computer vision tool that estimates head pose for each video frame. It uses a deep neural network29 to estimate the 3D projection of facial landmarks. The landmarks are then used with a generalized direct least-square method30 to infer the three angles of rotation that specify head pose. OpenFace has been validated for head pose estimation against a publicly available dataset (ICT-3DHP), which in turn provides ground truth from a combination of Polhemus Fastrak and Microsoft Kinect sensors.31 Head pose is most commonly specified as the angle of rotation from centered in each of three orthogonal rotational axes: pitch, roll, and yaw. The sign for each is specified from the participant's perspective, such that positive is up for pitch, left for roll, and left for yaw. We chose these rotational axes not only because they are the most common convention in the computer vision field, but also because they correspond to clinical convention in dystonia, with TWSTRS-2 items as illustrated in Figure 1: pitch (antero/retrocollis), roll (laterocollis, also referred to as “tilt”), and yaw (rotation, also referred to as “torticollis”). Video frames were filtered for CVE confidence: the longest contiguous run of frames with confidence above 0.7 (out of 1.0) was retained for further processing. CMOR's head posture severity metrics were calculated as the mean angle of deviation (in degrees) for each axis.
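To make the confidence filtering and per-axis averaging concrete, the sketch below computes these metrics from an OpenFace 2.0 output CSV. It is a minimal illustration rather than the authors' pipeline; the "confidence" and "pose_Rx/Ry/Rz" column names and the mapping of Rx/Ry/Rz to pitch/yaw/roll are assumptions based on OpenFace's standard output format.

```python
# Minimal sketch of the per-video posture metrics described above (not the study's code).
import numpy as np
import pandas as pd

def posture_metrics(csv_path, conf_min=0.7):
    """Mean head deviation per axis over the longest confident run of frames."""
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()              # OpenFace pads column names with spaces
    ok = (df["confidence"] > conf_min).to_numpy()

    # Longest contiguous run of frames above the confidence minimum.
    best_start, best_len, start = 0, 0, None
    for i, flag in enumerate(np.append(ok, False)):  # trailing False closes any final run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    if best_len == 0:
        return None                                  # no usable frames; participant excluded

    seg = df.iloc[best_start:best_start + best_len]
    # Convert radians to degrees; Rx/Ry/Rz -> pitch/yaw/roll is an assumed mapping.
    return {
        "pitch_deg": float(np.degrees(seg["pose_Rx"]).mean()),
        "yaw_deg":   float(np.degrees(seg["pose_Ry"]).mean()),
        "roll_deg":  float(np.degrees(seg["pose_Rz"]).mean()),
    }
```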
We quantified participants' predominant postures in terms of the direction in each axis and the mix of axes involved. We defined deviations from neutral as angles with absolute values outside the CVE's mean absolute error, which is 3.5, 3.1, and 3.1 degrees for pitch, roll, and yaw, respectively.28 We evaluated directionality using CMOR's signed head posture metrics. Categorical indicators based on sign (up vs. down, left vs. right) were evaluated with two-sided Chi-square tests under the null hypothesis that the directions were evenly divided across the whole cohort. For quantifying the mix of axes involved, we followed clinical convention by retaining sign in pitch (i.e., anterocollis and retrocollis) and collapsing sign in roll and yaw.
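The sketch below illustrates one way to implement the neutral-band classification and the directionality test for a single axis, assuming the signed per-participant angles from the earlier sketch; the function name and data layout are illustrative, not the study's code.

```python
# Minimal sketch of the directionality analysis described above.
from scipy.stats import chisquare

# CVE mean absolute error (degrees) used as the neutral band for each axis.
MAE = {"pitch_deg": 3.5, "roll_deg": 3.1, "yaw_deg": 3.1}

def direction_test(angles, axis):
    """Chi-square test of an even split between the two directions on one axis.

    angles: signed mean angles (degrees), one per participant.
    """
    deviated = [a for a in angles if abs(a) > MAE[axis]]  # outside the neutral band
    n_pos = sum(a > 0 for a in deviated)                  # e.g., up in pitch, left in roll/yaw
    n_neg = len(deviated) - n_pos
    stat, p = chisquare([n_pos, n_neg])                   # expected: even split
    return n_pos, n_neg, stat, p
```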
We evaluated convergent validity between CMOR and clinical ratings of severity using Spearman correlations. To compare CMOR's metrics with the corresponding clinical severity ratings, a single pitch axis clinical rating was formulated by subtracting each participant's retrocollis rating from their anterocollis rating, producing ordinal scores in the ranges of −10 to 10 for the GDRS and −4 to 4 for the TWSTRS-2. For the roll and yaw axes, for which the clinical ratings of severity are non-negative, the absolute value was used as the CMOR metric. In all statistical tests we used an alpha of 0.05 to determine significance after correcting for multiple comparisons.
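A minimal sketch of this convergent-validity computation follows, assuming per-participant arrays of CMOR angles and clinical ratings; the function and argument names are illustrative, not the study's code.

```python
# Minimal sketch of the Spearman correlation analysis described above.
from scipy.stats import spearmanr

def convergent_validity(cmor_pitch, cmor_roll, cmor_yaw,
                        antero, retro, clin_roll, clin_yaw):
    """Spearman correlations between CMOR metrics and clinical ratings.

    cmor_*: signed mean angles in degrees, one per participant.
    antero, retro: clinical anterocollis / retrocollis ratings.
    clin_roll, clin_yaw: non-negative clinical roll / yaw ratings.
    """
    clin_pitch = [a - r for a, r in zip(antero, retro)]       # signed composite pitch rating
    return {
        "pitch": spearmanr(cmor_pitch, clin_pitch),           # signed vs. signed
        "roll":  spearmanr([abs(x) for x in cmor_roll], clin_roll),
        "yaw":   spearmanr([abs(x) for x in cmor_yaw], clin_yaw),
    }
```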
Results
Participant exclusions and demographics
Of 206 participants, two were removed because they had nonstandard video recordings. Of the remaining participants, four were removed because the camera was not frontal relative to the participant; seven were removed because the participant made intentional head turns that were not reflective of their natural dystonic position; four were removed because other faces were visible in the video frame; one was removed because the video was flipped sideways; and one was removed because the camera's zoom cropped out part of the participant's head. Some participants' videos exhibited multiple issues. In summary, 17 were excluded because of data collection issues.
Of 206 participants, a different subset of four was excluded because CMOR could not reliably compute metrics. In these cases, no video frames passed the CVE's confidence minimum, and post hoc review suggested that the most consistent reason was a combination of insufficient illumination of the face and a camera zoomed out so far that the participant's head comprised less than 10% of the width of the video frame. The union of video recording issues and CMOR issues excluded a total of 21 participants, yielding a final cohort for all subsequent analyses of N = 185. Table 1 summarizes their demographics and overall motor severity. The median video segment duration was 13 sec (range 5–61 sec, SD 8).
Table 1 Patient characteristics (N = 185).
The directionality of participants' predominant postures for each of the three axes is depicted in Figure 3. In post hoc review, we found that for participants deemed to have scores of zero for both anterocollis and retrocollis, the mean angle of pitch was −13 degrees, well outside the range of the CVE's mean absolute error of 3.5 degrees in pitch. This was likely because participants were instructed to close their eyes during this segment of the examination. In contrast, for participants deemed to be clinically neutral in roll and yaw, the absolute mean angles for those axes were less than the CVE's mean absolute error of 3.1 degrees. Thus, all subsequent predominant posture analyses reported here reflect correcting the pitch angle by 13 degrees. At the population level, there was no significant tendency for rotation in one direction over the other in any of the three axes: 49% up in pitch, 45% left in roll, and 59% left in yaw (Chi-sq(df = 1) = 0.07, 1.22, and 4.31, respectively, all p > 0.05 after correcting for multiple comparisons). The distribution of axial involvement is reported in Table 2. In summary, 62 participants exhibited anterocollis, 59 retrocollis, 139 roll, and 145 yaw. Only five participants (3%) exhibited no deviation in any axis. Of the remaining 180 participants, there was involvement of only one axis for 35 (19.4%), two axes for 65 (36.1%), and all three axes for 80 (44.4%). Scatterplots showing the heterogeneity of the combination of axes across participants—in 3D and each of the three unique pairwise combinations of two axes—are provided in Figure S1.
Table 2 Axial involvement distribution.
| Pitch | Roll | Yaw | n | % |
|---|---|---|---|---|
| – | – | – | 5 | 2.7 |
| – | – | Yes | 15 | 8.1 |
| – | Yes | – | 12 | 6.5 |
| – | Yes | Yes | 32 | 17.3 |
| Retro | – | – | 6 | 3.2 |
| Retro | – | Yes | 8 | 4.3 |
| Retro | Yes | – | 11 | 5.9 |
| Retro | Yes | Yes | 34 | 18.4 |
| Antero | – | – | 2 | 1.1 |
| Antero | – | Yes | 10 | 5.4 |
| Antero | Yes | – | 4 | 2.2 |
| Antero | Yes | Yes | 46 | 24.9 |
CMOR's video-based head posture severity metrics correlated with clinical ratings of severity for all three axes of rotation (see Fig. 4). CMOR's metrics correlated with the GDRS, with Spearman's rho varying from 0.66 to 0.68 (all p < 0.001). CMOR's metrics also correlated with the TWSTRS-2, with Spearman's rho varying from 0.59 to 0.62 (all p < 0.001). In post hoc analyses, these correlations were not markedly different for participants with versus without comorbid head tremor (Table S1). However, the correlations did exhibit strong differences across individual recruiting sites in the pitch axis, with Spearman's rho closer to 0.80 for three sites and in the range of 0.3–0.4 for two sites, with the latter being non-significant after correction for multiple comparisons (Table S2).
Of the 185 participants, 11 had “dark” videos, and 21 had “unstable” videos. No participants had videos that were both “dark” and “unstable”. The influence of excluding either or both of these groups from the analyses of CMOR's correlations with the GDRS and TWSTRS-2 is provided in Table 3.
Table 3 CMOR's robustness to dark and/or unstable videos.
Values in the six rightmost columns are Spearman's rho for correlations between CMOR and the indicated scale and axis.

| Include dark? | Include unstable? | N | GDRS pitch | GDRS roll | GDRS yaw | TWSTRS-2 pitch | TWSTRS-2 roll | TWSTRS-2 yaw |
|---|---|---|---|---|---|---|---|---|
| – | – | 153 | 0.72 | 0.67 | 0.68 | 0.60 | 0.56 | 0.65 |
| – | Yes | 174 | 0.67 | 0.66 | 0.68 | 0.57 | 0.59 | 0.64 |
| Yes | – | 164 | 0.70 | 0.67 | 0.68 | 0.62 | 0.58 | 0.63 |
| Yes | Yes | 185 | 0.66 | 0.66 | 0.68 | 0.59 | 0.60 | 0.62 |
This study demonstrates that CMOR's measures of head posture severity in CD exhibit convergent validity with clinical severity ratings in all three axes of rotation. The strength of the results is consistent with prior convergent validity between CMOR's predecessor and clinical ratings of severity of blepharospasm.32 The results lay the foundation for CMOR's potential future clinical utility. Like other instrumented measures, it quantifies motor severity objectively. Thus it prevents subjective measurement variability from confounding variability intrinsic to the patient and their treatment response. This would reduce sample size estimates, increase sensitivity, and decrease cost in future prospective studies including clinical trials. Compared to instrumented measures, the approach is more clinically efficient: it does not involve body-worn sensors, requires only conventional video recordings, and needs only a brief, 10 sec demonstration in which the patient is instructed to let their head drift to its natural dystonic position. Because CMOR quantifies severity and the mix of involved axes on an individualized basis, it also facilitates rational, objectively based personalized medicine.33 For example, clinical studies could determine whether CMOR outputs used to tailor muscle selection and dosing for BoNT injections would improve outcome, as has been proposed with kinematic measures of CD.16 Such personalized treatment could reduce the number of cycles required for patients to achieve optimal benefit from BoNT. Because CMOR's underlying computer vision technology can run in real time without the need for a separate GPU, it could ultimately also be incorporated into real time biofeedback for physical therapy-style rehabilitation.
Like most instrumented methods, CMOR uses deterministic algorithms: a given input always produces the same output, so CMOR's measures of severity have zero “intra-rater” variability. Unlike other instrumented measures, CMOR requires only conventional video recordings. It does not require specialized equipment or expertise and can be used outside of laboratory settings. This dramatically extends its potential future clinical utility compared to other objective measures. For clinical research, including clinical trials, most movement disorders clinics already have video recording capability. With a few simple guidelines, no additional equipment or expertise is required to conduct a simple examination and make a brief video recording. Severity assessments would not have to rely solely upon clinician expertise in CD and their training on the TWSTRS-2. With additional software development, including a simple user interface that provides instructions and instant feedback about video quality issues, we could streamline the otherwise nonautomated process developed in this study to field an automated version of CMOR. Once fielded, CMOR could rate video recordings much faster than human raters. All of these factors will enable a CMOR-based assessment to be deployed in large scale, multisite clinical trials.
CMOR's assessment exhibits robustness in three regards. First, CMOR's metrics exhibited convergent validity with clinical ratings from both a single rater (as in the case of the GDRS as applied in the current study) as well as multiple raters (as in the case of the TWSTRS-2 ratings from each of 10 different raters). The level of agreement between CMOR and the TWSTRS-2 was consistently slightly lower than between CMOR and the GDRS. This may be a natural consequence of differences in the design of the two scales. Lower correlations are common when comparing continuous valued measures with less granular ordinal scales.34 The TWSTRS-2 is less granular, with 5 levels, than the GDRS with 11 levels. Thus the TWSTRS-2 may be less of an “interval” scale than the GDRS. Despite the TWSTRS-2’s anchors for head posture, its application may be more likely to exhibit a superlinear relationship to objective measures because of the natural log-scale properties of human perception.35 This was evident when assessments from inertial measurement units were compared to the original TWSTRS, though with only eight subjects.15 The lower agreement with the TWSTRS-2 may also arise from inter-rater variability among the 10 raters applying the TWSTRS-2. Nevertheless, the significant agreements for all axes for both scales suggest that, regardless of the specific rater and rating system against which they might be compared, CMOR provides valid estimates of head posture severity.
Second, CMOR's metrics for head posture were robust to two forms of video quality degradation: poor illumination (“dark” videos) and unstable camera orientation (“unstable” videos). Including these cases had minimal if any negative effect on overall agreement between CMOR and both rating scales. Importantly, it also increased the number of participants that could be retained in the analysis by about 21% (from 153 to 185). Most of the “unstable” videos were from only one of the 10 sites which did not use a tripod during recording. Although camera stability and participant illumination relative to backgrounds can be improved in future recordings, our results suggest that CMOR's assessments are robust to these aspects of poor video quality. Third, CMOR exhibited convergent validity with clinical severity ratings regardless of whether or not the CD patient had comorbid head tremor.
CMOR also enabled us to objectively quantify the mix of axes involved in CD head posture. Deviations in each direction of three axes were represented within our participant cohort. The majority of participants (80.5%) had involvement of more than one axis, and almost half (44.4%) had involvement of all three axes of rotation. CMOR's assessment of head posture also enabled us to determine whether CD patients tend to have head deviations more common in one direction than the other in each of the three axes of rotation. With the exception of pitch (anterocollis vs. retrocollis), this directionality information is lost in clinical rating scales. Yet the directionality of pitch has been associated with likelihood of comorbid head tremor in CD36 and in turn head tremor subtype is differentially associated with pain severity.37 In our cohort, we found that there was no bias toward anterocollis versus retrocollis, left versus right in laterocollis (“tilt”, roll), and left versus right in torticollis (“rotation”, yaw). Another study with 120 CD patients38 found that retrocollis was more common than anterocollis, there was a trend toward more patients tilting right than left in laterocollis, and more patients turning left than right in torticollis. However, they did not report how directionality in each of these axes was assessed and they report only prevalence for each direction without statistical analyses. Interestingly, however, their results are consistent with the (non-significant) trends in our data for laterocollis and torticollis. The reasons for potential trends in direction are unclear. One hypothesis is that the left turning torticollis is slightly more common because of handedness or lifelong laterally asymmetric behavioral patterns such as phone use or driving, or some combinations thereof. The hypothesis about driving would be relevant for only those patients whose CD onset occurred after they started driving. This is the case for the overwhelming majority of patients with CD. The hypothesis could be tested with carefully designed studies identifying the side of the road on which patients have done most of their driving prior to developing CD. Objective methods like CMOR that can easily scale to large studies with many patients can also be combined with studies demonstrating lateral asymmetry in pathophysiology39–43 and enable us to begin to address these questions about the etiology and pathophysiology of directional biases in CD.
The approach used in this study has a few limitations. First, some aspects of video recordings that are problematic for the current implementation of CMOR do not pose problems for humans. For example, we excluded from analyses participants in which the video recording exhibited various issues. In some cases—such as when other faces were visible in the video frame, if the video was flipped sideways, the camera was not oriented frontal relative to the participant, or if the camera's zoom cropped out part of the participant's head—a human may be able to infer the participant's true head posture, albeit with possibly less accuracy. In still other cases—such as when a participant makes what looks like an intentional head turn unrelated to their natural dystonic position—the human assessment depends on context. Do they have knowledge of the relative location of other parties in the room? Can they infer from the simultaneously recorded audio whether dialog during the examination may induce participants to orient their heads in a different direction or include non-verbal “yes” or “no” head movements in response to questions? Our current CMOR implementation does not take into account these subtle but important details of the examination protocol and associated video recording. But in principle these factors can be addressed with improved protocol and recording adherence and/or additional computer vision and AI technology. Second, CMOR's assessments are based on camera coordinates. So if a participant's torso is not square to the camera, CMOR will over- or under-estimate deviations in head posture. This issue could be addressed in future studies with an examination protocol that enforces that the trunk be frontal to the camera, as has been done in some studies,12,13 or by adding to CMOR other computer vision technology that also infers the orientation of the torso.44 Third, CMOR's underlying CVE was trained on videos and simultaneously recorded motion capture sensor data from neurologically normal adults. Although the mapping from images to head pose estimates would likely remain relatively unchanged, the CVE's training could be expanded to include individuals with neurological disorders. Fourth, as with all assessments of only overt motor symptoms, CMOR does not directly assay other aspects of CD that contribute to disability and health-related quality of life.45 Those aspects include important non-motor symptoms such as anxiety and pain that are better assessed with patient reports. Nevertheless, TWSTRS ratings are significantly related to dystonia non-motor symptoms,46 so CMOR's motor assessments may provide an indirect link to non-motor features of CD.
Based on the present application of CMOR to CD head posture and our prior results using CMOR to quantify head tremor severity in CD,23 we are extending CMOR in multiple directions that will expand the scope of focal dystonia motor symptoms whose severity it can assess. In CD, we are applying CMOR to evaluate range of motion and head tremor subtypes. We are also extending our previous results with CMOR's predecessor to quantify motor severity for another common form of focal dystonia, blepharospasm.32 By quantifying both CD and blepharospasm, which together comprise over 80% of isolated dystonia phenotypes,47 CMOR will ultimately be relevant to a diverse array of motor symptoms for the majority of dystonia patients. In future work, we plan to evaluate CMOR's ability to differentiate dystonia patients from both neurologic and non-neurologic controls. We also plan to prospectively evaluate CMOR's ability to detect changes in response to treatments. We hypothesize that objective measures like CMOR, in conjunction with patient reports of adverse effects, will help to provide a rational basis for optimizing the tradeoff between maximizing treatment efficacy and minimizing adverse effects including dysphagia.48 Computer vision applications in areas of medicine beyond neurology are expanding widely49 and there are ongoing efforts to enable them to run real time on resource-limited mobile platforms.50,51 Given the maturity of video recording technology on mobile personal devices, CMOR could also ultimately be fielded in support of telemedicine and remote assessment. Combined with secure cloud connectivity, this scenario could enable more frequent and sustained assessments in patients' daily lives, untether patients from the limits of clinical expertise in their geographic locale, and facilitate health care cost reduction.52
Acknowledgments
The authors thank the patients who participated in this study. The authors also gratefully acknowledge assistance from the WUSM Biorepository team, including Laura Wright for managing video recording and clinical data intake, and Matt Hicks for technical support.
Conflict of Interest
The authors report no conflict of interest.
Data availability statement
Original participant data are available from the Dystonia Coalition upon reasonable request.
© 2022. This work is published under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Abstract
Objective
Deviated head posture is a defining characteristic of cervical dystonia (CD). Head posture severity is typically quantified with clinical rating scales such as the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS). Because clinical rating scales are inherently subjective, they are susceptible to variability that reduces their sensitivity as outcome measures. The variability could be circumvented with methods to measure CD head posture objectively. However, previously used objective methods require specialized equipment and have been limited to studies with a small number of cases. The objective of this study was to evaluate a novel software system—the Computational Motor Objective Rater (CMOR)—to quantify multi‐axis directionality and severity of head posture in CD using only conventional video camera recordings.
Methods
CMOR is based on computer vision and machine learning technology that captures 3D head angle from video. We used CMOR to quantify the axial patterns and severity of predominant head posture in a retrospective, cross‐sectional study of 185 patients with isolated CD recruited from 10 sites in the Dystonia Coalition.
Results
The predominant head posture involved more than one axis in 80.5% of patients and all three axes in 44.4%. CMOR's metrics for head posture severity correlated with severity ratings from movement disorders neurologists using both the TWSTRS-2 and an adapted version of the Global Dystonia Rating Scale (rho = 0.59–0.68, all p < 0.001).
Conclusions
CMOR's convergent validity with clinical rating scales and reliance upon only conventional video recordings supports its future potential for large scale multisite clinical trials.
Affiliations
1 Institute for Neural Computation, University of California, San Diego, La Jolla, California, USA
2 Department of Computer Science, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
3 Department of Pediatrics, University of California, La Jolla, California, USA
4 Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California, USA
5 Parkinson's Disease Center and Movement Disorders Clinic, Department of Neurology, Baylor College of Medicine, Houston, Texas, USA
6 Department of Neurology, Emory University School of Medicine, Atlanta, Georgia, USA
7 Department of Neurological Sciences, Rush University Medical Center, Chicago, Illinois, USA
8 Department of Neurology, University of Rochester, Rochester, New York, USA
9 Department of Neurology, Washington University School of Medicine, St. Louis, Missouri, USA; Departments of Radiology, Neuroscience, Physical Therapy, and Occupational Therapy, Washington University School of Medicine, St. Louis, Missouri, USA
10 Department of Neurology, Emory University School of Medicine, Atlanta, Georgia, USA; Departments of Human Genetics, Emory University School of Medicine, Atlanta, Georgia, USA
11 Department of Neurology, Virginia Commonwealth University, Richmond, Virginia, USA
12 Department of Neurology, University of New Mexico Health Sciences Center, Albuquerque, New Mexico, USA; Neurology Service, New Mexico Veterans Affairs Health Care System, Albuquerque, New Mexico, USA
13 Institute for Neural Computation, University of California, San Diego, La Jolla, California, USA; Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California, USA