Our research investigates and exploits the distributed and parallel models of computation to create innovative high-performance, secure, and quality-aware software systems for various real-world applications. We conceive, design, and develop innovative tools, system environments, and concrete prototypes to demonstrate the impact of our research.
Distributed Systems | Software Engineering | Cloud Computing | Knowledge and Data Engineering
eDOTS (Enhanced Distributed Object Tracking System): eDOTS is an active academic research project creating an opportunistic indoor tracking system that provides highly accurate position estimates in indoor environments. Indoor tracking is significantly more difficult than its outdoor counterpart because of the nature of indoor environments (poor wireless signal reception and lack of line of sight) and the lower tolerance for error in the positional estimate. This work focuses on discovering and classifying sensors in previously unknown environments and then applying optimization techniques to find an optimal subset of sensors for tracking. The project also covers multi-sensor data fusion and the use of positioning techniques and algorithms (Wi-Fi, RFID, Bluetooth, vision, NFC, GPS, and inertial sensors) for determining an object's location.
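To illustrate the sensor-subset idea, here is a minimal greedy-selection sketch. The error model, sensor list, and all names are invented placeholders, not eDOTS internals; it only shows the shape of "pick the subset that minimizes expected tracking error."

```python
# Hypothetical illustration: greedy selection of a sensor subset that
# minimizes a toy expected-tracking-error estimate. The error model and
# sensor data are placeholders, not part of the actual eDOTS system.

def expected_error(subset):
    """Toy error model: fusing independent sensors reduces variance.
    Fused variance is the reciprocal of summed precisions, plus a small
    per-sensor cost that penalizes large subsets."""
    if not subset:
        return float("inf")
    fused_variance = 1.0 / sum(1.0 / s["variance"] for s in subset)
    return fused_variance + 0.01 * len(subset)

def greedy_select(sensors, k):
    """Greedily add the sensor that most reduces expected error."""
    chosen = []
    for _ in range(k):
        best = min(
            (s for s in sensors if s not in chosen),
            key=lambda s: expected_error(chosen + [s]),
        )
        if expected_error(chosen + [best]) >= expected_error(chosen):
            break  # adding any further sensor no longer helps
        chosen.append(best)
    return chosen

sensors = [
    {"name": "wifi-ap1", "variance": 4.0},
    {"name": "ble-beacon", "variance": 2.5},
    {"name": "rfid-gate", "variance": 0.5},
    {"name": "camera", "variance": 0.3},
]
picked = greedy_select(sensors, k=3)
print([s["name"] for s in picked])  # most precise sensors first
```

Greedy selection is one of the simplest heuristics for this combinatorial problem; exhaustive search over subsets grows exponentially with the number of discovered sensors.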
TruSSCom (Trustworthy Software Service Selection and Composition): The TruSSCom project is developing a comprehensive framework for the design, selection, and composition of trustworthy distributed systems from existing software services, building on trust models, subjective logic, multi-level specification and matching, the theory of evidence, and machine learning. Unlike prevalent approaches that focus on a single view or a specific application domain, this research creates generic trust models for individual software services and their ensembles by considering internal and external views, associated formalisms, and prediction analysis, and applies them to real distributed systems in the domains of indoor tracking, cyberbullying detection, and vehicle-to-vehicle collaboration.
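As a taste of one formalism the project builds on, here is a minimal sketch of a subjective-logic trust opinion and the standard cumulative fusion operator. The class, numbers, and observer scenario are illustrative only, not TruSSCom's API.

```python
# A minimal subjective-logic sketch: an opinion (belief, disbelief,
# uncertainty, base rate) and cumulative fusion of two independent
# opinions. Illustrative only; not the TruSSCom implementation.

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float         # b
    disbelief: float      # d
    uncertainty: float    # u, with b + d + u = 1
    base_rate: float = 0.5  # a, the prior probability

    def expectation(self):
        """Projected probability E = b + a * u."""
        return self.belief + self.base_rate * self.uncertainty

def cumulative_fuse(o1, o2):
    """Cumulative fusion of two independent opinions."""
    k = o1.uncertainty + o2.uncertainty - o1.uncertainty * o2.uncertainty
    b = (o1.belief * o2.uncertainty + o2.belief * o1.uncertainty) / k
    d = (o1.disbelief * o2.uncertainty + o2.disbelief * o1.uncertainty) / k
    u = (o1.uncertainty * o2.uncertainty) / k
    return Opinion(b, d, u, o1.base_rate)

# Two observers' opinions about one service's trustworthiness:
a = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2)
b = Opinion(belief=0.6, disbelief=0.1, uncertainty=0.3)
fused = cumulative_fuse(a, b)
print(round(fused.expectation(), 3))  # fusing evidence lowers uncertainty
```

Note how the fused uncertainty is smaller than either input's: accumulating independent evidence about a service makes the trust estimate more decisive, which is exactly what composition-time service selection needs.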
ExaFSI (An Exascale Fluid-Structure Interaction Solving Framework): This project designs new scalable algorithms to enable unprecedented large-scale numerical fluid-structure interaction (FSI) simulations. FSI problems are ubiquitous in a wide variety of science, engineering, medical, and biological domains. The research tackles the challenges of minimizing memory accesses, communication cost, synchronization, I/O cost, and load rebalancing, as well as CPU/GPU optimizations. It also studies building cost and performance models, deriving performance lower bounds, and finding optimal ways to reach those bounds.
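For readers unfamiliar with FSI solvers, the following toy sketch shows the partitioned, strongly coupled fixed-point iteration whose per-iteration data exchange and synchronization are exactly the costs that must be minimized at scale. The one-dimensional "solvers" are algebraic stand-ins, not actual CFD/CSD codes.

```python
# A toy partitioned fluid-structure coupling loop with under-relaxation.
# Each iteration exchanges interface data between the two "solvers";
# at scale, these exchanges dominate communication/synchronization cost.
# The linear solvers below are illustrative stand-ins only.

def fluid_solver(displacement):
    """Stand-in fluid: interface load decreases as the wall moves out."""
    return 10.0 - 2.0 * displacement

def structure_solver(load):
    """Stand-in structure: linear spring, displacement = load / stiffness."""
    return load / 5.0

def coupled_solve(omega=0.5, tol=1e-10, max_iters=100):
    d = 0.0  # interface displacement
    for i in range(max_iters):
        load = fluid_solver(d)          # fluid -> structure exchange
        d_new = structure_solver(load)  # structure -> fluid exchange
        if abs(d_new - d) < tol:
            return d_new, i + 1
        d = (1 - omega) * d + omega * d_new  # under-relaxed update
    return d, max_iters

d_star, iters = coupled_solve()
print(d_star, iters)  # converges to the coupled equilibrium 10/7
```

Because every coupling iteration implies a synchronization point between the fluid and structure partitions, reducing the iteration count (e.g., via better relaxation or acceleration) directly reduces communication cost.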
ParTask (A High-Performance and High-Productivity Task Parallelism Model and Library): This project designs a new generic task-based parallel programming library to support different scientific domains such as dense/sparse matrix computations, computational fluid dynamics, big-graph processing, and machine learning. The library extends C with a simple API and achieves both high performance and high productivity through its simplified interface and efficient task-scheduling runtime system. The project has the potential to unify the HPC and Big Data ecosystems via a common programming model and runtime system while achieving high performance.
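The task-with-dependencies programming model can be sketched as follows. This Python toy (the actual library is in C, and these names are not the ParTask API) shows the essence: the programmer declares tasks and their dependencies, and a runtime launches each task as soon as its inputs are ready.

```python
# A sketch of the task-parallel model: declare a DAG of tasks, let a
# runtime schedule ready tasks onto worker threads. The tiny scheduler
# and task names are illustrative, not the ParTask API.

from concurrent.futures import ThreadPoolExecutor

def topo_sort(tasks, deps):
    """Order tasks so every task appears after its dependencies."""
    order, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps.get(name, []):
            visit(dep)
        order.append(name)
    for name in tasks:
        visit(name)
    return order

def run_dag(tasks, deps):
    """Submit tasks in dependency order; each waits on its inputs."""
    order = topo_sort(tasks, deps)
    futures = {}
    with ThreadPoolExecutor() as pool:
        for name in order:
            dep_futs = [futures[d] for d in deps.get(name, [])]
            def wrapper(fn=tasks[name], waits=dep_futs):
                for f in waits:
                    f.result()  # block until each dependency finishes
                return fn()
            futures[name] = pool.submit(wrapper)
        return {n: f.result() for n, f in futures.items()}

# Tiny DAG: C depends on A and B; D depends on C.
results = run_dag(
    tasks={"A": lambda: 1, "B": lambda: 2, "C": lambda: 3, "D": lambda: 4},
    deps={"C": ["A", "B"], "D": ["C"]},
)
print(results)
```

Here A and B may run concurrently while C and D are held back, which is the productivity argument: the programmer states only data dependencies, and the runtime extracts the parallelism.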
DataBroker Computing (Creating a Unified Framework to Integrate Simulation/Modeling with Data Analysis Applications): This project develops new approaches and methodologies to support in-situ, active data analysis in which simulation and analysis are combined in a virtuous circle with significantly increased performance and productivity. It creates novel ways to unify simulations and data analysis and to achieve optimal performance. The research introduces a new data-analysis programming API, efficient runtime systems, and new abstractions for optimizing data movement, data-space management, and resource co-scheduling. It will demonstrate an innovative, high-performance, easy-to-use big-data processing system.
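The in-situ idea can be sketched with a producer/consumer toy: simulation timesteps flow through a shared in-memory buffer directly to an analysis stage, instead of the traditional write-to-disk, read-back-later pipeline. The "simulation" and "analysis" below are stand-ins for illustration only.

```python
# In-situ analysis sketch: a bounded in-memory queue couples a simulation
# producer to an analysis consumer, avoiding file I/O between the stages.
# Both stages here are toy stand-ins, not the DataBroker runtime.

import queue
import threading

data = queue.Queue(maxsize=4)  # bounded buffer between the two stages
SENTINEL = None

def simulation(steps):
    for t in range(steps):
        field = [t * 0.5 + i for i in range(8)]  # fake timestep data
        data.put(field)    # hand data to analysis without touching disk
    data.put(SENTINEL)     # signal end of simulation

results = []

def analysis():
    while True:
        field = data.get()
        if field is SENTINEL:
            break
        results.append(sum(field) / len(field))  # in-situ reduction

producer = threading.Thread(target=simulation, args=(5,))
consumer = threading.Thread(target=analysis)
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # per-timestep means computed while the simulation ran
```

The bounded queue also illustrates resource co-scheduling in miniature: when analysis falls behind, the full buffer throttles the simulation, so the two stages progress together instead of accumulating unbounded intermediate data.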
Distributed Simultaneous Localization and Mapping (SLAM): This project explores methods and algorithms for generating a map of an unknown environment while simultaneously localizing an agent in a distributed computing framework. The goal is to incrementally build a map (i.e., 3D geometry) consisting of stable natural features in the environment as multiple mobile agents move through it. Computer vision methods are used in both building the map and localization. In this framework, mobile agents with cameras (e.g., mobile phones or tablets) can enter the environment, build a local part of the map, and communicate with other agents to contribute to the construction of the global map. These agents can exit the environment, leaving their contributions to the map behind, while other agents can enter and use the map for localization or, when moving in unmapped areas, contribute their own parts of the map.
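A toy sketch of the map-merging step: a landmark observed by two agents fixes the offset between their local coordinate frames, letting one agent's map be folded into the global map. Real SLAM estimates full rigid transforms from matched visual features; this simplification uses translation only, and all names and coordinates are invented.

```python
# Toy landmark-map merging: one shared landmark aligns two agents'
# local 2D frames (translation-only simplification; real SLAM estimates
# full rigid transforms from matched visual features).

def merge_maps(map_a, map_b, shared_id):
    """Express map_b's landmarks in map_a's frame via one shared landmark."""
    ax, ay = map_a[shared_id]
    bx, by = map_b[shared_id]
    dx, dy = ax - bx, ay - by  # frame-B -> frame-A translation
    merged = dict(map_a)
    for lid, (x, y) in map_b.items():
        merged.setdefault(lid, (x + dx, y + dy))  # keep A's copy of shared ones
    return merged

# Agent A and agent B each built a local map; landmark "door" is in both.
map_a = {"door": (2.0, 1.0), "window": (5.0, 3.0)}
map_b = {"door": (0.0, 0.0), "pillar": (1.0, 2.0)}
global_map = merge_maps(map_a, map_b, "door")
print(global_map)  # "pillar" now expressed in A's frame
```

In the distributed setting, this merge is what lets an agent leave its contribution behind: once expressed in the shared frame, its landmarks remain usable by agents that arrive later.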
Automated Detection and Quantification of Liver Biopsy Images in Non-alcoholic Fatty Liver Disease (NAFLD): The goal of this project is to develop image processing and machine learning algorithms that automatically analyze liver biopsy images to assess disease stage in non-alcoholic fatty liver disease. The methods use texture- and shape-based features to detect and quantify various features (e.g., macro- and micro-vesicular steatosis, lobular and portal inflammation, fibrosis) of the liver in different stages of the disease. Collaboration with Dr. Samer Gawrieh, IU School of Medicine, Department of Gastroenterology.
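One basic quantification step can be sketched as follows: threshold a grayscale patch so bright fat vesicles become foreground, then count connected regions and compute an area fraction. Real pipelines add texture/shape features and trained classifiers; the 8x8 "image" below is synthetic, and the function is an illustration only.

```python
# Toy vesicle quantification: threshold, then flood-fill connected
# components to count regions and the foreground area fraction.
# The patch and threshold are synthetic illustrations.

def quantify_vesicles(image, threshold):
    h, w = len(image), len(image[0])
    mask = [[image[r][c] > threshold for c in range(w)] for r in range(h)]
    seen = [[False] * w for _ in range(h)]
    regions, area = 0, 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                regions += 1
                stack = [(r, c)]  # flood-fill one connected component
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return regions, area / (h * w)

patch = [
    [10, 10, 200, 210, 10, 10, 10, 10],
    [10, 10, 205, 215, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10, 180, 10],
    [10, 10, 10, 10, 10, 10, 190, 10],
    [10, 10, 10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10, 10, 10, 10],
]
count, fraction = quantify_vesicles(patch, threshold=128)
print(count, fraction)  # 2 bright regions covering 6 of 64 pixels
```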
Automated Detection and Quantification of Diabetic Retinopathy in Microscopic Images of Retinas: Enumeration of acellular capillaries is used as a marker to assess experimental diabetic retinopathy (DR) and the response to pharmacological treatment. The traditional approach to quantifying acellular capillaries is manual counting, either directly under the microscope or from captured images. The goal of this project is to develop an automated method that improves the quantification of acellular capillaries in rodents by using computer-based image processing algorithms. Collaboration with Dr. Ashay Bhatwadekar, IU School of Medicine, Department of Ophthalmology.
Craniofacial Reconstruction and Recognition from Skulls: Forensic facial approximation is a useful technique for estimating the facial morphology of deceased individuals when other forensic methods have failed to achieve identification. Current practice is to manually and physically construct the face on a physical model of the skull. Faces are rebuilt on skulls using average tissue-depth measurements at a small number of locations on the face, acquired from cadavers and various types of 2D and 3D imaging (e.g., x-rays, ultrasonic echo location, ultrasound, magnetic resonance imaging (MRI), computed tomography (CT), cone beam computed tomography (CBCT)), or by estimating and rebuilding facial musculature. The process relies on the artist's interpretation and experience in rebuilding the face shape, and the methodology for estimating facial appearance has room for substantial improvement. The goal of this project is to use three-dimensional cone beam computed tomography (3D CBCT) images and machine learning methods based on multivariate statistical modeling and a multidimensional shape space to develop a novel, standardized, and accurate method for approximating facial form from unidentified craniofacial remains, with the goal of eliminating individual artistic interpretation from the process and improving replicability. The final method will also involve interactive tools for modifying facial shape based on factors such as age and body mass index (BMI). Collaboration with Dr. Katherine Kula, IU School of Dentistry, Department of Orthodontics and Oral Facial Genetics.
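The multidimensional shape-space idea can be sketched with a tiny PCA model: training face-landmark sets span a low-dimensional space, and any face shape is expressed as the mean shape plus a few principal-mode coefficients. The 2D landmark data below is synthetic, not from any imaging study, and the code is an illustration of statistical shape modeling in general rather than this project's method.

```python
# Toy statistical shape model: PCA over flattened landmark vectors gives a
# mean shape plus principal modes of variation; a new shape is the mean
# plus a weighted sum of modes. Data is synthetic and illustrative.

import numpy as np

# Each row: one training subject's face landmarks, flattened (x1,y1,x2,y2,...)
training = np.array([
    [0.0, 0.0, 1.0, 0.0, 0.5, 1.0],
    [0.1, 0.0, 1.1, 0.1, 0.6, 1.2],
    [-0.1, 0.1, 0.9, 0.0, 0.4, 0.9],
    [0.0, -0.1, 1.0, 0.1, 0.5, 1.1],
])

mean = training.mean(axis=0)
centered = training - mean
# Rows of Vt are the principal modes of shape variation
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

def shape_from_coeffs(coeffs):
    """Mean shape plus a weighted sum of the leading principal modes."""
    k = len(coeffs)
    return mean + np.asarray(coeffs) @ Vt[:k]

new_face = shape_from_coeffs([0.2, -0.1])
print(new_face.round(3))
```

Regressing such shape-space coefficients from skull measurements, rather than hand-sculpting tissue depths, is what makes the reconstruction standardized and replicable.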