VISC

Center for Visual Information Sensing and Computing

Mission, goals, and activities

The Center for Visual Information Sensing and Computing (VISC) has a mission to carry out research in computer-based visual information acquisition, processing, and synthesis, and in their applications.

The center's research spans the major areas of computer vision, computer graphics, artificial intelligence, and multimedia. Its work aims to capture visual information analytically, model and understand it for machine control and decision making, visualize data and models for human perception, and communicate visual media across interfaces and networks. Applications range widely, from medical image recognition and autonomous driving to virtual reality systems and data visualization.

The center is involved in graduate education for PhD and Master's students. Faculty members also teach related undergraduate courses in pattern recognition, computer vision, and multimedia. In the past three years, VISC has graduated more than ten PhD students and many Master's students.

Areas of expertise

  • Image processing
  • Computer vision
  • Pattern recognition
  • Computer graphics
  • Visualization
  • Multimedia
  • Machine interface
  • Virtual reality
  • Artificial intelligence
  • Forensic science
  • Medical imaging
  • Intelligent transportation
  • Autonomous driving

VISC members

Jiang Yu Zheng

Director of VISC
Professor, Computer & Information Science

computer vision, AI, pattern recognition, video processing, intelligent vehicles, multimedia

View Dr. Zheng's profile »

Mihran Tuceryan

Professor and Associate Chair, Computer & Information Science

virtual reality, computer vision, medical image computing

View Dr. Tuceryan's profile »

Shiaofen Fang

Professor and Department Chair, Computer & Information Science

visualization, computer graphics, medical imaging, AI

View Dr. Fang's profile »

Gavriil Tsechpenakis

Associate Professor, Computer & Information Science

computer vision, computational biology and neuroscience, medical image computing, and machine learning

View Dr. Tsechpenakis's profile »

Publications

Jiang Yu Zheng

  1. G Cheng, JY Zheng, Semantic Segmentation for Pedestrian Detection from Motion in Temporal Domain, 25th Int. Conf. on Pattern Recognition, 1-7, 2020 (accepted).
  2. Z Wang, JY Zheng, Z Gao, Detecting Vehicle Interaction in Driving Videos via Motion Profiles, IEEE Int. Conf. on Intelligent Transportation Systems, 1-6, 2020.
  3. MD Sulistiyo, Y Kawanishi, D Deguchi, I Ide, T Hirayama, JY Zheng, H Murase, Attribute-Aware Loss Function for Accurate Semantic Segmentation Considering the Pedestrian Orientations, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 103(1), 231-242, 2020.
  4. K Kolcheck, Z Wang, H Xu, JY Zheng, Visual Counting of Traffic Flow from a Car via Vehicle Detection and Motion Analysis, Asian Conference on Pattern Recognition, 529-543, 2019. Best Paper Award on Safe Vehicle and Road.
  5. G Cheng, JY Zheng, M Kilicarslan, Semantic Segmentation of Road Profiles for Efficient Sensing in Autonomous Driving, 2019 IEEE Intelligent Vehicles Symposium (IV), 564-569, 2019.
  6. Z Wang, G Cheng, JY Zheng, Road Edge Detection in All Weather and Illumination via Driving Video Mining, IEEE Transactions on Intelligent Vehicles 4 (2), 232-243, 2019.
  7. MD Sulistiyo, Y Kawanishi, D Deguchi, T Hirayama, I Ide, JY Zheng, H Murase, Attribute-aware semantic segmentation of road scenes for understanding pedestrian orientations, 21st International Conference on Intelligent Transportation Systems, 2698-2703, 2018.
  8. Z Gao, Y Liu, JY Zheng, R Yu, X Wang, P Sun, Predicting hazardous driving events using multi-modal deep learning based on video motion profile and kinematics data, 21st International Conference on Intelligent Transportation Systems, 3335-3357, 2018.
  9. G Cheng, Z Wang, JY Zheng, Modeling weather and illuminations in driving views based on big-video mining, IEEE Transactions on Intelligent Vehicles 3 (4), 522-533, 2018.
  10. G Cheng, JY Zheng, H Murase, Sparse coding of weather and illuminations for ADAS and autonomous driving, 2018 IEEE Intelligent Vehicles Symposium (IV), 2030-2035, 2018.
  11. M Kilicarslan, JY Zheng, Predict vehicle collision by TTC from motion using a single video camera, IEEE Transactions on Intelligent Transportation Systems 20 (2), 522-533, 2018.

Mihran Tuceryan

  1. Alhakamy, A’aeshah and Tuceryan, M. (2020). Physical Environment Reconstruction Beyond Light Polarization for Coherent Augmented Reality Scene on Mobile Devices. In M. L. Gavrilova, C. J. K. Tan, J. Chang, & N. M. Thalmann (Eds.), Transactions on Computational Science XXXVII: Special Issue on Computer Graphics (pp. 19-38). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-662-61983-4_2
  2. Tuceryan, M., Hemmady, A., Schebler, C., Alex, A., & Bhatwadekar, A. (2020). Automated computer-based enumeration of acellular capillaries for assessment of diabetic retinopathy. SPIE Medical Imaging, 11317, pp. 151-156. https://doi.org/10.1117/12.2543400
  3. Gawrieh, S., Sethunath, D., Cummings, O. W., Kleiner, D. E., Vuppalanchi, R., Chalasani, N., & Tuceryan, M. (2020). Automated quantification and architectural pattern detection of hepatic fibrosis in NAFLD. Annals of Diagnostic Pathology, 47, 151518. doi: 10.1016/j.anndiagpath.2020.151518
  4. Alhakamy, A’aeshah and Tuceryan, M. (2020). Real-time Illumination and Visual Coherence for Photorealistic Augmented/Mixed Reality. ACM Computing Surveys, 53(3), 1-34. doi: 10.1145/3386496
  5. Alhakamy, A’aeshah and Tuceryan, M. (2019). An Empirical Evaluation of the Performance of Real-Time Illumination Approaches: Realistic Scenes in Augmented Reality. In 6th International Conference on Augmented Reality, Virtual Reality and Computer Graphics (AVR 2019), Lecture Notes in Computer Science (Vol. 11614, pp. 179-195). Salento, Italy: Springer International Publishing. doi: 10.1007/978-3-030-25999-0_16
  6. Alhakamy, A’aeshah and Tuceryan, M. (2019). Polarization-Based Illumination Detection for Coherent Augmented Reality Scene Rendering in Dynamic Environments. In: Gavrilova M., Chang J., Thalmann N., Hitzer E., Ishikawa H. (eds) Advances in Computer Graphics. CGI 2019. Lecture Notes in Computer Science, vol 11542. Springer, Cham. doi: 10.1007/978-3-030-22514-8_1
  7. Alhakamy, A’aeshah and Tuceryan, M. (2019, April 11-14). CubeMap360: Interactive Global Illumination for Augmented Reality in Dynamic Environment. 2019 SoutheastCon, pp. 1-8. https://doi.org/10.1109/SoutheastCon42311.2019.9020588
  8. Alhakamy, A’aeshah and Tuceryan, M. "AR360: Dynamic Illumination for Augmented Reality with Real-Time Interaction." 2019 IEEE 2nd International Conference on Information and Computer Technologies (ICICT), 14-17 March 2019, pp. 170-174. doi: 10.1109/INFOCT.2019.8710982
  9. Allam, E., Mpofu, P., Ghoneima, A., Tuceryan, M., and Kula, K. (2018). "The Relationship Between Hard Tissue and Soft Tissue Dimensions of the Nose in Children: A 3D Cone Beam Computed Tomography Study." J Forensic Sci 63(6): 1652-1660. doi: 10.1111/1556-4029.13801
  10. Sethunath D, Morusu S, Tuceryan M, Cummings OW, Zhang H, Yin XM, et al. Automated assessment of steatosis in murine fatty liver. PLoS One. 2018;13(5):e0197242. Epub 2018/05/11. doi: 10.1371/journal.pone.0197242. PubMed PMID: 29746543; PubMed Central PMCID: PMC5945052.
  11. Egodagamage, R. and Tuceryan, M., "Distributed monocular visual SLAM as a basis for a collaborative augmented reality framework," Computers & Graphics, vol. 71, pp. 113-123, 2018. doi: 10.1016/j.cag.2018.01.002

Shiaofen Fang

  1. Keerthika Koka, Shiaofen Fang. Text Visualization for Feature Selection in Online Review Analysis. International Journal of Big Data Intelligence, Vol. 6, Nos. 3/4, 2019, pp. 202-211.
  2. Jingwen Yan, Vinesh Raja V, Zhi Huang, Enrico Amico, Kwangsik Nho, Shiaofen Fang, Olaf Sporns, Yu-chien Wu, Andrew Saykin, Joaquin Goni, Li Shen. Brain-wide structural connectivity alterations under the control of Alzheimer risk genes. Int J Comput Biol Drug Des. 2020;13(1):58-70. doi: 10.1504/ijcbdd.2020.10026789
  3. Zigon B, Li H, Yao X, Fang S, Hasan MA, Yan J, Moore JH, Saykin AJ, Shen L, for the ADNI. (2018) GPU Accelerated Browser for Neuroimaging Genomics, Journal of Neuroinformatics, Apr. 2018. doi: 10.1007/s12021-018-9376-y
  4. Jerry Wang, Shiaofen Fang, Meie Fang, Jeremy Wilson, Noah Herrick, and Susan Walsh. Automatic Landmark Placement for Large 3D Facial Image Dataset. To appear in: 2019 IEEE BigData Conference, Workshop on Big Media Dataset Construction, Management and Applications, December 2019.
  5. Li H, Fang S, Mukhopadhyay S, Saykin A, Shen L. Interactive machine learning by visualization: A small data solution. HMData’18: The 2nd IEEE Workshop on Human-in-the-loop Methods and Human Machine Collaboration in BigData, Seattle, WA, December 10, 2018.
  6. Xie L, Amico E, Salama P, Wu Y, Fang S, Sporns O, Saykin A, Goni J, Yan J, Shen L. (2018) Heritability estimation of reliable connectomic features. CNI’18: MICCAI Workshop on Connectomics in NeuroImaging, Lecture Notes in Computer Science, 11083:58-66, Granada, Spain, September 20, 2018.
  7. Jingwen Yan, Kefei Liu, Huang Li, Enrico Amico, Shannon Risacher, Yu-chien Wu, Shiaofen Fang, Olaf Sporns, Andrew Saykin, Joaquín Goñi, Li Shen. Joint exploration and mining of memory-relevant brain anatomic and connectomic patterns via a three-way association model. 2018 IEEE International Symposium on Biomedical Imaging (ISBI), Apr. 2018. doi: 10.1109/ISBI.2018.8363511
  8. Yan J, Raja V, Huang Z, Enrico A, Nho K, Fang S, Sporns O, Wu Y, Saykin AJ, Goni J, Shen L. (2018) Brain-wide structural connectivity alterations under the control of Alzheimer risk genes. ICIBM’18: Int. Conf. on Intelligent Biology and Medicine, Los Angeles, CA, USA, June 10-12, 2018.

Gavriil Tsechpenakis

  1. R. Radmanesh, Z. Wang, V.S. Chipade, G. Tsechpenakis, and D. Panagou, “LIV-LAM: LiDAR and Visual Localization and Mapping,” IEEE American Control Conference, 2020. doi: 10.23919/ACC45564.2020.9148037
  2. L. He, S. Gulyanon, M. Mihovilovic Skanata, D. Karagyozov, E. Heckscher, M. Krieg, G. Tsechpenakis, M. Gershow, and D. Tracey, “Direction selectivity in Drosophila proprioceptors requires mechanosensory channel TMC,” Current Biology, 29(6):945-956, 2019.
  3. S. Gulyanon, L. He, D. Tracey, and G. Tsechpenakis, “Neuron tracking in calcium image stacks using accordion articulations,” Int'l Symposium on Biomedical Imaging: from Nano to Macro (ISBI), 2019. doi: 10.1109/ISBI.2019.8759386
  4. L. He, S. Gulyanon, M. Mihovilovic Skanata, D. Karagyozov, E. Heckscher, M. Krieg, G. Tsechpenakis, M. Gershow, and D. Tracey, “Proprioceptive neurons of larval Drosophila melanogaster show direction selective activity requiring the mechanosensory channel TMC,” bioRxiv, 2018. doi: 10.1101/463216
  5. Z. Wang and G. Tsechpenakis, “Stream clustering with dynamic estimation of emerging local densities,” Int'l Conf. on Pattern Recognition, 2018. doi: 10.1109/ICPR.2018.8546208
  6. Z. Wang, S. Farhand, and G. Tsechpenakis, “Fading affect bias: improving the trade-off between accuracy and efficiency in feature clustering,” IEEE Winter Conf. on Applications of Computer Vision, 2018. doi: 10.1007/s00138-019-01008-w
  7. S. Gulyanon, N. Sharifai, M.D. Kim, A. Chiba, and G. Tsechpenakis, “Part-Wise Neuron Segmentation Using Artificial Templates,” Int'l Symposium on Biomedical Imaging: from Nano to Macro (ISBI), 2018. doi: 10.1109/ISBI.2018.8363656
  8. S. Gulyanon, L. He, D. Tracey, and G. Tsechpenakis, “Neurite Tracing in Time-lapse Calcium Images using MRF-modeled Pictorial Structures,” Int'l Symposium on Biomedical Imaging: from Nano to Macro (ISBI), 2018. doi: 10.1109/ISBI.2018.8363872

External grants (2018-2020)

Jiang Yu Zheng

  • IUPUI Driving Image Benchmark under All Weather and Illumination Conditions, 2020 Enhancing Computer Vision for Public Safety Challenge, National Institute of Standards and Technology, Phases I and II, PI, 9/15/2020-5/1/2021.

Shiaofen Fang

  • Integrative Bioinformatics Approaches to Human Brain Genomics and Connectomics, National Institutes of Health (NIBIB), R01 EB022574, $1,943,717, 8/1/16-4/30/21. Role: Co-I (PI: Li Shen)

Gavriil Tsechpenakis

  • CAREER: Modeling the Structure and Dynamics of Neuronal Circuits in the Drosophila larvae using Image Analytics (NSF, 2013-2018)