Winners of Neukom Graduate Fellowships have been announced for the 2014-2015 academic year. Fellowships will provide a full year of funding, including stipend and benefits, to Ph.D. students engaged in faculty-advised research in the development of novel computational techniques as well as the application of computational methods to problems in the Sciences, Social Sciences, Humanities, and the Arts.
The 2014-2015 winners are:
During the past decade, the most profound discovery in the study of galaxy evolution is that at the center of almost every large galaxy sits a black hole as massive as a million to a few billion times the mass of the Sun. An even more intriguing puzzle is that larger galaxies are observed to host larger supermassive black holes (SMBHs). This strongly indicates that galaxies and SMBHs follow similar cosmic evolution histories despite the vast difference in their physical sizes.
Based on Einstein's theory of general relativity, black holes are among the most elusive objects in the universe, since not even light can escape their powerful gravitational grasp. However, black holes are not entirely "black": a black hole growing through accretion of gas is extremely powerful, converting the rest mass of the infalling gas to energy at a rate roughly 1000 times more efficient than that of a nuclear bomb. For an SMBH the size of our solar system, the energy released by accretion can outshine its entire host galaxy of more than 100 billion stars.
These powerful growing black holes are known as "quasars". Today, astronomers believe that the parallel evolutionary tracks of galaxies and their central SMBHs are built on these energetic events, in which the birth of stars in a galaxy fuels the SMBH and the powerful energy output from the SMBH in turn regulates the growth of stars. However, no conclusive evidence for this coevolution picture has been found, largely because half of the quasar population cannot be seen directly with optical telescopes: the dust that fuels SMBH accretion often obscures the light from the black hole itself. Since galaxies and SMBHs are believed to grow from the same fuel, the key to unveiling the connection between SMBH and galaxy growth may lie within these hidden quasars.
This project utilizes observations of more than 500,000 galaxies in the Boötes constellation, including data obtained with NASA's Chandra, Spitzer, and Wide-field Infrared Survey Explorer space telescopes. This wealth of panchromatic observations allows us to track down the obscured quasars that cannot be seen with ground-based telescopes. To identify obscured quasars, we will develop machine-learning algorithms that separate quasars from normal galaxies based on their peculiar mid-infrared spectral shape. The algorithms will also be able to disentangle spectra in which galaxy and black hole energy outputs are complexly mixed, allowing us to measure the growth rates of quasars and their host galaxies. With the large number of quasars in the Boötes survey region, we will be able to statistically determine whether galaxies and their SMBHs do grow concurrently during this dust-enshrouded quasar phase. This will provide insight into the cosmic evolution of galaxies.
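As a toy illustration of the classification step, the sketch below separates mock sources into quasars and galaxies using a simple nearest-centroid rule in two invented mid-infrared colors. Every number here is made up for illustration; real classifiers would be trained on the actual Boötes photometry with more sophisticated algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock two-band mid-infrared colors (arbitrary invented values): quasars'
# hot dust yields a redder, power-law-like spectral shape than the stellar
# emission that dominates normal galaxies.
n = 1000
galaxies = rng.normal([0.3, 0.2], 0.15, size=(n, 2))
quasars = rng.normal([0.9, 0.8], 0.15, size=(n, 2))
X = np.vstack([galaxies, quasars])
y = np.array([0] * n + [1] * n)          # 0 = galaxy, 1 = quasar

# Split the catalog into training and test halves.
idx = rng.permutation(2 * n)
train, test = idx[:n], idx[n:]

# Nearest-centroid classifier: assign each source to the closer class mean.
centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y[test]).mean()
```

Because the two invented populations are well separated in color space, even this minimal classifier recovers them reliably; the project's real challenge is that observed populations overlap and blend, which motivates the more powerful methods described above.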
Recent advances in robotics, cameras, and fiber-optic devices have made robot-assisted minimally invasive surgery (RAMIS) a reality. RAMIS refers to procedures in which robotically actuated instruments and a laparoscope are inserted into the anatomy through small incisions (typically <12 mm in diameter) and are operated by a surgeon via a console. By limiting the size of the incisions, RAMIS significantly reduces patient trauma, blood loss, scarring, and recovery time following surgery, making it a popular choice with both patients and surgeons. As a result, RAMIS is used for a range of procedures, including partial and radical tumor resections, transoral surgeries, and cardiac surgeries, among others. In this project, I propose to accomplish two specific goals to enhance the RAMIS experience for the surgeon.
First, I'll work on developing tools for 3D modeling of the tissue surface using stereo-laparoscopic images. There are several surgical procedures where preoperative and intraoperative registration of images is highly desirable but has been difficult to achieve due to limitations of existing technologies. For example, in a partial nephrectomy, a surgeon resects only the tumorous portion of the kidney, which is localized preoperatively by CT or MR imaging. An intraoperative rendering of the tissue surface and tumor boundaries would assist the surgeon in accurately delineating the tumorous region to be resected. To address this challenge, I propose to develop computational algorithms that use images acquired by the stereo-laparoscope of the surgical system to recreate the three-dimensional (3D) structure of the surgical site in real time. This will enable the surgeon to perform patient-specific preoperative and intraoperative tissue volume registration for guidance during surgery. Second, I plan to develop tools for intraoperative multimodal image fusion for an enhanced surgical view. A shortcoming of existing data visualization tools for RAMIS is that multimodal images are displayed separately. This limitation requires the surgeon to assimilate the multimodal data and mentally register subsurface features (from an ultrasound image) with the optical image. To this end, I plan to develop a computational algorithm that registers multimodal images and accurately fuses them to facilitate intraoperative multimodal imaging.
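The geometric core of the stereo 3D modeling described above is triangulation: a tissue point's depth is inversely proportional to its disparity, the horizontal pixel offset of that point between the left and right laparoscope images. A minimal sketch, with invented calibration values in place of a real stereo-laparoscope calibration:

```python
import numpy as np

# Hypothetical stereo calibration (values invented for illustration):
f = 800.0         # focal length, in pixels
baseline = 0.005  # separation of the two camera centers, in meters (5 mm)

# Tiny invented disparity map, in pixels: the horizontal offset of each
# tissue point between the rectified left and right images.
disparity = np.array([[40.0, 50.0],
                      [80.0, 100.0]])

# Triangulation: depth Z = f * baseline / disparity, per pixel.
depth = f * baseline / disparity   # meters; larger disparity = closer tissue
```

A real pipeline must first rectify the images and solve the hard part, dense stereo matching on deformable, specular tissue, before this per-pixel formula can be applied in real time.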
This suite of computational tools will significantly enhance data visualization while boosting surgeon comfort and reducing anxiety during demanding surgical procedures, and can potentially improve surgical outcomes as well. These tools can become a platform for an enhanced surgical experience for surgeons and researchers in urology, gynecology, ENT, cardiac surgery, and the several other areas where the RAMIS approach is increasingly being adopted.
Unlike other highly social species that enact social behavior by forming loose aggregations (e.g., swarms, herds), humans form stable social networks bound by relatively complex relationships. The demands of surviving and reproducing in these groups are thought to have been a driving force in human brain evolution, shaping much of what is unique about human brain structure and function. Additionally, how we perceive and interact with others depends not only on factors like their personality and appearance, but also on their position relative to us in the social networks we inhabit. Yet very little is known about how our brains encode information about the social networks in which we are embedded.
This project will characterize how information contained in the pattern of ties in a real-world social network (e.g., social distance between pairs of individuals; network centrality/influence of particular individuals) is represented in the brains of its members. To do this, we will combine the approaches of information-based mapping of functional magnetic resonance imaging (fMRI) data and social network analysis (SNA). After reconstructing the social network of first-year students in a large Dartmouth graduate program, a subset of individuals from the network will be recruited for an fMRI study where they will view several of their classmates' faces. Model similarity structures will be constructed that describe the individuals in each participant's stimulus set in terms of social network metrics of interest (e.g., distance from perceiver, network centrality). In local neighborhoods throughout each participant's brain, spatial response patterns to each classmate's face will be extracted to generate a local neural similarity structure. Each local neural similarity structure will be modeled as a linear combination of the model similarity structures constructed based on social network metrics. This will allow us to map the entire brain in terms of how much models based on particular aspects of social network position capture the information contained in local cortical neighborhoods, and thus, to elucidate the aspects of social network position that are spontaneously encoded, and the brain systems involved. Additionally, we will probe the information content of a particular brain area, the anterior right temporoparietal junction (aRTPJ), which we have previously found to encode coarse social distance information. The degree of correspondence between aRTPJ similarity structures and similarity structures based on network distance will be assessed, and multidimensional scaling (MDS) will be used to visualize participants' aRTPJ similarity structures. 
Participants will be shown MDS plots generated from their own neural data amid randomly generated plots, and will indicate which plot best matches their intuitions about the structure of their social network.
This project capitalizes on the ability of SNA to quantify behaviorally relevant aspects of others' positions in a real-world social network, and on the ability of fMRI to characterize representations contained in multiple brain regions simultaneously, including those that drive behavior but may not match verbal reports. Characterizing how the brain encodes the structure of a social network in which it is embedded is an important first step towards a deeper understanding of how we perceive the social networks we inhabit.
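The comparison between model and neural similarity structures described above can be sketched as follows. Both matrices here are randomly generated stand-ins: the "model" matrix plays the role of a social-network metric (e.g., pairwise network distance), and the "neural" matrix plays the role of pattern dissimilarities computed from fMRI responses to each classmate's face.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical social-distance model for 6 classmates (values invented):
# entry [i, j] stands in for the network path length between persons i and j.
n = 6
d = rng.uniform(1, 4, size=(n, n))
model_rdm = (d + d.T) / 2          # symmetrize: distance is mutual
np.fill_diagonal(model_rdm, 0)

# Simulated "neural" dissimilarity structure for the same pairs: the model
# structure plus noise, standing in for dissimilarities between spatial
# response patterns in a local cortical neighborhood.
noise = rng.normal(0, 0.3, size=(n, n))
neural_rdm = model_rdm + (noise + noise.T) / 2
np.fill_diagonal(neural_rdm, 0)

# Compare only the unique off-diagonal pairs of the two structures: a high
# correlation means that this "brain region" carries social-distance info.
iu = np.triu_indices(n, k=1)
r = np.corrcoef(model_rdm[iu], neural_rdm[iu])[0, 1]
```

Repeating this comparison in local neighborhoods across the whole brain, and with several model matrices at once (e.g., in a linear-combination fit rather than a single correlation), yields the information-based maps the project describes.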
The process by which a single cell divides to create two daughter cells capable of metabolism, growth, and further reproduction is known as the cell cycle. Factors contained within the cytoplasm are known to regulate faithful progression through this cycle, yet in some cells, many nuclei cohabiting a continuous, common cytoplasm are remarkably able to progress through the cell cycle (grow, duplicate their DNA, and divide in two) independently of one another. This is the case in the multinucleate filamentous fungus Ashbya gossypii, in which nuclei only a few microns apart in the same cytoplasm divide independently. We believe that nuclei also behave differently with respect to the genes they transcribe, especially cell cycle regulators, and that these differences may promote asynchronous cell cycle progression in the multinucleate environment. By merging single-molecule imaging techniques with automated image analysis pipelines, statistical analyses, and computational modeling, I am investigating how independent transcriptional activity creates compartments within a continuous cytoplasm and generates functional differences between nuclei.
Gene expression is a fundamental process in biology: it is the process by which the static genetic code is interpreted to synthesize variable amounts of mRNA and protein. Cellular phenotypes are intricately linked to cells' gene expression profiles, and regulation of gene expression allows phenotypic plasticity, so that the same bacterium can live on cold metal surfaces in a hospital as well as in a warm human host. Because of this central role, an enormous amount of information is stored in data measuring how genes are expressed. Researchers have used such measurements to identify disease-related genes, establish tumor subtypes, and delineate personal responses to drugs. However, it remains challenging to directly leverage the entire compendium of gene expression data for an organism: the measurements were made in different labs and hospitals using distinct platforms and technologies, which represents a significant source of experimental noise. Beyond experimental factors, the tissue and cell type origins of most samples in public databases are not well annotated.
To address the challenge of integrating and analyzing entire gene expression compendia, we have begun employing deep learning, which aims to learn layers of representations directly from data. We have applied denoising autoencoders, one type of deep learning algorithm, to expression data from the bacterium Pseudomonas aeruginosa. This approach successfully discovered important features related to organismal biology, for example a feature associated with oxygen levels in the environment (Figure 1).
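A minimal illustration of the denoising autoencoder idea, run on an invented toy "expression" matrix rather than the real P. aeruginosa compendium: corrupt the input, then train a small network to reconstruct the clean data, which forces the hidden layer to capture co-expression structure rather than noise. All sizes, rates, and data below are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "expression compendium" (all values invented): 200 samples x 20 genes,
# scaled to [0, 1], generated from two hidden co-expression programs.
n_samples, n_genes, n_hidden = 200, 20, 5
programs = rng.random((2, n_genes))      # hypothetical co-regulated gene programs
activity = rng.random((n_samples, 2))    # per-sample activity of each program
X = np.clip(activity @ programs + rng.normal(0, 0.05, (n_samples, n_genes)), 0, 1)

# One-hidden-layer denoising autoencoder with tied weights: corrupt the input,
# encode, decode, and train to reconstruct the *uncorrupted* data.
W = rng.normal(0, 0.1, (n_genes, n_hidden))
b_h = np.zeros(n_hidden)                 # hidden-layer bias
b_o = np.zeros(n_genes)                  # output-layer bias
lr = 0.3

def reconstruct(data):
    return sigmoid(sigmoid(data @ W + b_h) @ W.T + b_o)

initial_err = ((reconstruct(X) - X) ** 2).mean()
for epoch in range(300):
    noisy = X * (rng.random(X.shape) > 0.1)   # randomly zero out ~10% of entries
    H = sigmoid(noisy @ W + b_h)              # encode the corrupted input
    R = sigmoid(H @ W.T + b_o)                # decode to a reconstruction
    # Backpropagation of the squared reconstruction error against clean X.
    dR = (R - X) * R * (1 - R) / n_samples    # delta at the output layer
    dH = (dR @ W) * H * (1 - H)               # delta at the hidden layer
    W -= lr * (noisy.T @ dH + dR.T @ H)       # encoder + decoder contributions
    b_h -= lr * dH.sum(axis=0)
    b_o -= lr * dR.sum(axis=0)
final_err = ((reconstruct(X) - X) ** 2).mean()
```

After training, each hidden unit's weight vector over genes can be inspected as a candidate "feature", analogous to the oxygen-level feature recovered from the real bacterial data.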
Given the success of this approach in analyzing the Pseudomonas aeruginosa compendium, we propose to apply our deep-learning-based approach to capture the transcriptional programs active in unannotated gene expression data from human cells. This deep learning approach is completely distinct from the methods that have previously been applied to integrative analysis. If successful, our newly proposed method will be the first to detect tissue- and cell-type-specific signals from the data without any expert knowledge as guidance. Also, because it is unsupervised, it has the potential to discover completely novel recurrent patterns and to raise questions that we have not yet known to ask.
Last Updated: 10/6/14