  {"id":9315,"date":"2021-11-09T09:54:24","date_gmt":"2021-11-09T14:54:24","guid":{"rendered":"https:\/\/www.vanderbilt.edu\/vise\/?p=9315"},"modified":"2021-11-09T09:55:07","modified_gmt":"2021-11-09T14:55:07","slug":"vise-on-the-virtual-road-2021-miccai-conference","status":"publish","type":"post","link":"https:\/\/www.vanderbilt.edu\/vise\/vise-on-the-virtual-road-2021-miccai-conference\/","title":{"rendered":"VISE on the Virtual Road:  2021 MICCAI conference"},"content":{"rendered":"<p>Members of five labs affiliated with the 国产原创 Institute for Surgery and Engineering took part in the 24th annual International Conference on Medial Image Computing and Computer Assisted Intervention, sharing their work with like-minded scientists from around the world.<\/p>\n<p>The conference brings together leading biomedical scientists, clinicians, and engineers who focus on medical imaging and computer assisted intervention. The three-day virtual conference included workshops, oral presentations, and poster sessions. 
The labs and presenters were:<\/p>\n<p><strong>Medical-image Analysis and Statistical Interpretation Lab (MASI)<\/strong><\/p>\n<p><img loading=\"lazy\" class=\"wp-image-4930 size-thumbnail alignleft\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/03\/19170933\/30511704281_2c5cc057a3_z-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/03\/19170933\/30511704281_2c5cc057a3_z-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/03\/19170933\/30511704281_2c5cc057a3_z-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/03\/19170933\/30511704281_2c5cc057a3_z-190x190.jpg 190w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/>Postdoctoral scholar <strong>Shunxing Bao<\/strong> proposed a novel multi-channel high-resolution image synthesis approach, called pixN2N-HD, to tackle any possible combination of missing-stain scenarios in Multiplexed Immunofluorescence Imaging (MxIF).<\/p>\n<p>He is lead author on the paper, \u201cRandom Multi-Channel Image Synthesis for Multiplexed Immunofluorescence Imaging.\u201d<\/p>\n<p>\u201cTo our knowledge, this is the first comprehensive study of dealing with the missing stain challenge in MxIF via deep synthetic learning,\u201d said Bao.<\/p>\n<p><img loading=\"lazy\" class=\"alignleft wp-image-7043 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/08\/19170816\/riqiang-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/08\/19170816\/riqiang-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/08\/19170816\/riqiang-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/08\/19170816\/riqiang-190x190.jpg 190w\" 
sizes=\"(max-width: 150px) 100vw, 150px\" \/>Graduate student<strong> Riqiang Gao<\/strong>, first author on the paper, \u201cLung Cancer Risk Estimation with Incomplete Data: A Joint Missing Imputation Perspective\u201d presented during one of the oral sessions.<\/p>\n<p>\u201cWe propose a new adversarial training-based model that imputes one modality combining the conditional knowledge from another modality and achieve state-of-the-art performance in downstream cancer prediction,\u201d, said Gao, who also won a MICCCAI 2021 Travel Award.<\/p>\n<p><img loading=\"lazy\" class=\"alignleft wp-image-8755 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/06\/19170703\/Yucheng_Tang-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/06\/19170703\/Yucheng_Tang-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/06\/19170703\/Yucheng_Tang-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/06\/19170703\/Yucheng_Tang-190x190.jpg 190w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/>Graduate student <strong>Yucheng Tang<\/strong> presented the paper titled \u201cPancreas CT Segmentation by Predictive Phenotyping.\u201d<\/p>\n<p>\u201cWe demonstrate a predictive task to encourage image embedding to the phenotyping cluster with similar patient outcomes for pancreas segmentation of diabetes patients,\u201d Tang said.<\/p>\n<p>\u201cThe integrated imaging phenotyping method could encourage solutions that better respect anatomical variability, especially associated with disease progression or comorbidities.\u201d<\/p>\n<p><strong>the biomedical data Representational and Learning laB (HRLB)<\/strong><\/p>\n<p><img loading=\"lazy\" class=\"alignleft wp-image-9318 size-thumbnail\" 
src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/11\/19203417\/YuankaiHuo-150x150.jpeg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/11\/19203417\/YuankaiHuo-150x150.jpeg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/11\/19203417\/YuankaiHuo-80x80.jpeg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/11\/19203417\/YuankaiHuo-200x200.jpeg 200w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/11\/19203417\/YuankaiHuo-190x190.jpeg 190w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/>Assistant Professor of Electrical and Computer Engineering <strong>Yuankai Huo<\/strong> was a session chair of the Machine Learning in Medical Imaging workshop.<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" class=\"alignleft wp-image-9322 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/05\/19203416\/Quan-Liu-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/05\/19203416\/Quan-Liu-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/05\/19203416\/Quan-Liu-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2021\/05\/19203416\/Quan-Liu-190x190.jpg 190w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/>From his group, graduate student <strong>Quan Liu<\/strong> presented and proposed a simple triplet representation learning (SimTriplet) approach on pathological images. 
The group\u2019s paper, on which Liu is first author, is titled \u201cSimTriplet: Simple Triplet Representation Learning with a Single GPU.\u201d<\/p>\n<p>\u201cThe proposed SimTriplet method takes advantage of the multi-view nature of medical images beyond self-augmentation and maximizes both intra-sample and inter-sample similarities via triplets from positive pairs, without using negative samples,\u201d Liu said.<\/p>\n<p><strong>Neuroimaging and Brain Dynamics Lab (NEURDY)<\/strong><\/p>\n<p>The group\u2019s recent work proposes a multi-task learning framework to estimate the physiological time-series signals directly from fMRI data to aid the many datasets that lack these in-scan measurements.<\/p>\n<p><img loading=\"lazy\" class=\"alignleft wp-image-8747 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/01\/19170703\/roza-bayrack-copy-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/01\/19170703\/roza-bayrack-copy-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/01\/19170703\/roza-bayrack-copy-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2018\/01\/19170703\/roza-bayrack-copy-190x190.jpg 190w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/>\u201cMeasurements of breathing and heart rate gathered during fMRI scans are important for improving data quality and enabling new studies of brain physiology,\u201d said graduate student <strong>Roza Bayrak. 
<\/strong><\/p>\n<p>She is the first author of the paper, \u201cFrom Brain to Body: Learning Low-Frequency Respiration and Cardiac Signals from fMRI Dynamics,\u201d and presented it during an oral session.<\/p>\n<p><strong>Medical Image Processing Lab (MIP)<\/strong><\/p>\n<p><strong><img loading=\"lazy\" class=\"alignleft wp-image-8496 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2016\/05\/19170719\/picture_Jianing-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2016\/05\/19170719\/picture_Jianing-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2016\/05\/19170719\/picture_Jianing-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2016\/05\/19170719\/picture_Jianing-190x190.jpg 190w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/>Jianing Wang<\/strong>\u2019s group proposed an atlas-based method to segment the intracochlear anatomy in the post-implantation CT images of cochlear implant recipients.<\/p>\n<p>\u201cOur method produces results that are comparable to the current state of the art (SOTA) and requires only a fraction of the time needed by the SOTA, which is important for end-user acceptance,\u201d Wang said.<\/p>\n<p>The VISE alumna presented the group\u2019s paper titled \u201cAtlas-based Segmentation of Intracochlear Anatomy in Metal Artifact Affected CT Images of the Ear with Co-trained Deep Neural Networks.\u201d<\/p>\n<p><strong>Medical Image Computing Lab (MedICL)<br \/>\n<\/strong><\/p>\n<p><img loading=\"lazy\" class=\"alignleft wp-image-7694 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/07\/19170748\/Dewei-Hu-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/07\/19170748\/Dewei-Hu-150x150.jpg 150w, 
https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/07\/19170748\/Dewei-Hu-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/07\/19170748\/Dewei-Hu-190x190.jpg 190w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/>Graduate student <strong>Dewei Hu<\/strong> and his team proposed a local intensity fusion encoder (LIFE), a self-supervised method to segment 3D retinal vasculature from OCT angiography. LIFE requires neither manual delineation nor multiple acquisition devices.<\/p>\n<p>\u201cTo the best of our knowledge, it is the first label-free learning method with quantitative validation of 3D OCT-A vessel segmentation,\u201d said Hu.<\/p>\n<p>Hu is first author of the paper, \u201cLIFE: A Generalizable Autodidactic Pipeline for 3D OCT-A Vessel Segmentation,\u201d and presented it during the conference.<\/p>\n<p>The 国产原创 Institute for Surgery and Engineering (VISE) is an interdisciplinary, trans-institutional structure designed to facilitate interactions and exchanges between engineers and physicians. Its goal is to become the premier institute for the training of the next generation of surgeons, engineers, and computer scientists capable of working symbiotically on new solutions to complex interventional problems, ultimately resulting in improved patient care.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Members of five labs affiliated with the 国产原创 Institute for Surgery and Engineering took part in the 24th annual International Conference on Medical Image Computing and Computer Assisted Intervention, sharing their work with like-minded scientists from around the world. 
The conference brings together leading biomedical scientists, clinicians, and engineers who focus on medical imaging and&#8230;<\/p>\n","protected":false},"author":670,"featured_media":4994,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true,"_links_to":"","_links_to_target":""},"categories":[12],"tags":[707,32,175,706,231,64,31,30],"acf":[],"jetpack_featured_media_url":"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2019\/03\/19170933\/VISE-FB-profile-image-1.jpg","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p98pzF-2qf","_links":{"self":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/9315"}],"collection":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/users\/670"}],"replies":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/comments?post=9315"}],"version-history":[{"count":5,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/9315\/revisions"}],"predecessor-version":[{"id":9323,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/9315\/revisions\/9323"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/media\/4994"}],"wp:attachment":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/media?parent=9315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/categories?post=9315"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/tags?post=9315"}],"curies":[{"name":"wp"
,"href":"https:\/\/api.w.org\/{rel}","templated":true}]}}