Real-world data is enormously complex. We developed a mathematical framework for transforming qualitative and quantitative real-world observations into mathematical shapes, called manifolds. When we force these manifolds to be 2D or 3D, we can turn complex datasets into visualizable shapes. This turns the problem of finding patterns in complex data into a problem that our visual systems solve every moment of every day: identifying structure in visual patterns and shapes. We are applying this framework to a wide variety of questions. A particularly exciting direction for this work has been to model people's cognitive and psychiatric states, which we are using to explore cognitive markers of mental illness.
A. Heusser, K. Ziman, L. Owen, J. Manning, HyperTools: A Python Toolbox for Gaining Geometric Insights into High-Dimensional Data. Journal of Machine Learning Research 18 (2018) 1-6.
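The core idea of embedding high-dimensional observations as low-dimensional, visualizable shapes can be sketched with a simple linear projection. The snippet below is an illustrative stand-in, not the HyperTools implementation: it projects synthetic 50-dimensional data onto its top three principal components, the kind of 2D/3D embedding the framework visualizes.

```python
import numpy as np

def embed_3d(data):
    """Project high-dimensional observations onto their top three
    principal components -- a linear stand-in for the manifold
    embeddings described above."""
    X = np.asarray(data, dtype=float)
    X = X - X.mean(axis=0)                # center each feature
    # SVD yields the principal axes; rows of Vt are the components
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:3].T                   # (n_samples, 3) coordinates

# 200 hypothetical observations in a 50-dimensional space
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 50))
shape3d = embed_3d(cloud)
print(shape3d.shape)  # (200, 3)
```

Each row of `shape3d` is one observation's coordinates in the 3D embedding, ready to plot as a point cloud.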
Our research focuses on modeling fast-timescale whole-brain networks, with the goal of better understanding how our brains support complex cognition. We developed a family of computational approaches that we apply to recordings from electrodes implanted in the brains of neurosurgical patients with drug-resistant epilepsy, and to fMRI data from healthy individuals. A common theme underlying our approaches is to use machine learning methods to stitch together insights across individuals, which we then use to build detailed maps of how brain regions interact as people process and remember their experiences.
Owen, L.L.W., Muntianu, T.A., Heusser, A.C., Daly, P., Scangos, K., Manning, J.R. (2020, June) A Gaussian process model of human electrocorticographic data. Cerebral Cortex: in press. https://academic.oup.com/cercor/advance-article/doi/10.1093/cercor/bhaa115/5851264
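As an illustration of the basic Gaussian process machinery (not the full electrocorticographic model, which is considerably richer), the sketch below computes a GP posterior mean and variance under a squared-exponential covariance; the one-dimensional data and the hyperparameters are hypothetical.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between 1-D point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy "recording": a smooth signal observed at 20 time points
x = np.linspace(0, 5, 20)
y = np.sin(x)
xs = np.linspace(0, 5, 50)
mu, var = gp_posterior(x, y, xs)   # interpolated signal + uncertainty
```

The posterior variance is what lets a GP model say where its reconstruction of unobserved activity is trustworthy.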
We present a novel haptic and audio feedback device that allows blind and visually impaired (BVI) users to understand circuit diagrams. TangibleCircuits lets users interact with a 3D-printed tangible model of a circuit that plays audio tutorial directions as it is touched. Our system comprises an automated parsing algorithm that extracts a 3D-printable model, as well as an audio interface, from a Fritzing diagram. We found that BVI users were better able to understand geometric, spatial, and structural circuit information using TangibleCircuits, and that they enjoyed learning with our tool.
Josh Urban Davis, Te-Yen Wu, Bo Shi, Hanyi Lu, Athina Panotopoulou, Emily Whiting, and Xing-Dong Yang. 2020. TangibleCircuits: An Interactive 3D Printed Circuit Education Tool for People with Visual Impairments. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1–13. DOI:https://doi.org/10.1145/3313831.3376513
The central U.S. is one of the most agriculturally productive regions on Earth and the only land region that did not warm significantly over the 20th century (i.e., a "warming hole"). Here, we investigate the impact of this warming hole on U.S. corn production by developing an empirical crop model that uses a machine learning algorithm to capture the effect of climate variability on yield. We then create counterfactual climate scenarios without the warming hole, which drive both our empirical model and an established biophysical crop model. The warming hole has increased U.S. corn yields by approximately 5-10% per year through two complementary mechanisms: a prolonged maturation time and lower drought stress. Our results underscore the relative lack of climate change impacts on central U.S. corn production, and the compounded challenge that a collapse of the warming hole combined with climate change would create for farmers.
Partridge, Trevor F., Winter, J. M., Liu, L., Kendall, A. D., Basso, B., & Hyndman, D. W. (2019). Mid-20th century warming hole boosts US maize yields. Environmental Research Letters, 14(11). https://doi.org/10.1088/1748-9326/ab422b
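The empirical-model-plus-counterfactual workflow can be illustrated with a deliberately toy yield model; every coefficient and scenario below is invented for illustration and is not fitted to the study's data.

```python
import numpy as np

def yield_model(temp_c, drought_index):
    """Toy stand-in for an empirical crop model: yield (arbitrary
    units) falls with growing-season warmth and drought stress.
    Coefficients are illustrative only."""
    return 10.0 - 0.4 * (temp_c - 22.0) - 1.5 * drought_index

rng = np.random.default_rng(1)
observed_temp = rng.normal(22.0, 1.0, size=50)   # climate with warming hole
counterfactual_temp = observed_temp + 1.0        # warming hole removed
drought = rng.uniform(0.0, 1.0, size=50)

y_obs = yield_model(observed_temp, drought)
# Counterfactual: warmer and (in this toy) correspondingly drier
y_cf = yield_model(counterfactual_temp, drought + 0.2)
boost = (y_obs.mean() - y_cf.mean()) / y_cf.mean() * 100
print(f"warming-hole yield benefit: {boost:.1f}%")
```

The point of the pattern is that the same fitted model is driven by two climate inputs, and the yield difference is attributed to the warming hole.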
Synthetic aperture radar (SAR) is a day-or-night, any-weather imaging modality that has become an important tool in remote sensing. We introduce a sampling-based approach to SAR image formation that goes beyond single-image estimates to quantify the certainty with which an estimate should be trusted. A hierarchical Bayesian model is constructed using conjugate priors that directly incorporate coherent imaging and the problematic speckle phenomenon known to degrade image quality. Utilizing a non-uniform fast Fourier transform, an efficient Gibbs sampler draws samples of the image, speckle, and noise parameters. The resulting collection of samples is used to derive point estimates and confidence information.
ProQuest: https://search.proquest.com/docview/2404647188?pq-origsite=gscholar&fromopenview=true
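The sample-then-summarize pattern behind this approach can be sketched with a far simpler conjugate model than the hierarchical imaging model above: a normal likelihood with unknown mean and precision. A Gibbs sampler alternates between the two conditional distributions, and the retained samples yield both a point estimate and a credible interval. All data and priors below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(3.0, 2.0, size=200)    # synthetic "observations"
n = len(data)

mu, prec = 0.0, 1.0                      # init mean and precision (1/var)
samples = []
for it in range(2000):
    # Conditional for mu given precision: conjugate normal update
    # (a nearly flat normal prior on mu, precision 1e-6)
    post_prec = n * prec + 1e-6
    post_mean = prec * data.sum() / post_prec
    mu = rng.normal(post_mean, 1.0 / np.sqrt(post_prec))
    # Conditional for precision given mu: conjugate Gamma(1, 1) update
    shape = 1.0 + n / 2
    rate = 1.0 + 0.5 * np.sum((data - mu) ** 2)
    prec = rng.gamma(shape, 1.0 / rate)
    if it >= 500:                        # discard burn-in
        samples.append(mu)

samples = np.array(samples)
est = samples.mean()                     # point estimate
lo, hi = np.percentile(samples, [2.5, 97.5])  # 95% credible interval
```

The credible interval is the "confidence information" that a single maximum-likelihood image estimate cannot provide.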
Wearable sensors were leveraged to develop two methods for computing hip joint angles and moments during walking and stair ascent that are more portable than the gold standard. The Insole-Standard (I-S) approach replaced force plates with force-measuring insoles and achieved results that match the curvature of results from similar studies. Peaks in the I-S kinetic results are high due to error induced by applying the ground reaction force at the talus. The Wearable-ANN (W-A) approach combines wearables with artificial neural networks to compute the same results. Compared against the I-S approach, the W-A approach performs well (average rRMSE = 18%, R² = 0.77).
Chapman, RM, McCabe, MV, & Van Citters, DW. What are Patients Doing Outside the Clinic? Categorizing Activities using Remotely Captured Wearable Data and Machine Learning. ASME J Biomech Eng. Under Review.
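The agreement metrics reported above can be computed as follows. Note that rRMSE conventions vary; normalizing RMSE by the range of the reference signal is an assumption here, as are the synthetic signals standing in for lab-based and wearable-derived joint moments.

```python
import numpy as np

def rrmse(reference, estimate):
    """RMSE as a percentage of the reference signal's range
    (one common rRMSE convention; an assumption here)."""
    rmse = np.sqrt(np.mean((reference - estimate) ** 2))
    return 100.0 * rmse / (reference.max() - reference.min())

def r_squared(reference, estimate):
    """Coefficient of determination between reference and estimate."""
    ss_res = np.sum((reference - estimate) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

t = np.linspace(0, 1, 100)
gold = np.sin(2 * np.pi * t)             # e.g., lab-based joint moment
wearable = gold + 0.05 * np.cos(7 * t)   # wearable estimate with error
print(rrmse(gold, wearable), r_squared(gold, wearable))
```

Lower rRMSE and R² closer to 1 both indicate closer agreement with the gold-standard waveform.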
Around 90% of individuals with autism experience sensory sensitivities, but our understanding of these symptoms is limited by past studies' unrealistic experimental designs and unreproducible results. We use a novel combination of virtual reality, eye tracking, and convolutional neural networks to model the stages of visual processing that predict differences in visual attention between individuals with and without autism. We find that even the earliest stages of the model can predict differences in gaze behavior between autistic individuals and controls. This suggests that visual processing differences in autism are not principally driven by the semantically meaningful features within a scene, but instead emerge from differences in early visual processing.
Histological classification of colorectal polyps plays a critical role in both screening for colorectal cancer and care of affected patients. An accurate, automated system for classifying colorectal polyps on digitized histopathology slides could benefit clinicians and patients. In this study, we developed a deep neural network for classification of four major colorectal polyp types based on digitized histopathology slides from the Dartmouth-Hitchcock Medical Center (DHMC). The neural network achieved performance comparable with pathologist diagnoses made at the point-of-care. If confirmed in clinical settings, our model could assist pathologists by improving the diagnostic efficiency, reproducibility, and accuracy of colorectal cancer screenings.
Jason W. Wei, Arief A. Suriawinata, Louis J. Vaickus, Bing Ren, Xiaoying Liu, Mikhail Lisovsky, Naofumi Tomita, Behnaz Abdollahi, Adam S. Kim, Dale C. Snover, John A. Baron, Elizabeth L. Barry, Saeed Hassanpour, "Evaluation of a Deep Neural Network for Automated Classification of Colorectal Polyps on Histopathologic Slides", JAMA Network Open, 3(4):e203398, 2020.
To improve user productivity in virtual reality (VR), annotation systems allow users to capture insights and observations while in VR sessions. I propose VR-Notes, a design for an annotation system in VR that captures the annotator's perspective for "doodle" and audio annotations, aiming to provide a richer viewing experience of these annotations at a later time. Early results from my experiment showed that the VR-Notes doodle method required less movement and rotation in the headset and controllers when compared to a popular freehand drawing method. Users also preferred and scored the VR-Notes doodle method higher than the freehand drawing method.