  {"id":10698,"date":"2023-11-21T11:25:43","date_gmt":"2023-11-21T16:25:43","guid":{"rendered":"https:\/\/www.vanderbilt.edu\/vise\/?p=10698"},"modified":"2023-11-21T11:27:33","modified_gmt":"2023-11-21T16:27:33","slug":"vise-on-the-road-2023-miccai-conference","status":"publish","type":"post","link":"https:\/\/www.vanderbilt.edu\/vise\/vise-on-the-road-2023-miccai-conference\/","title":{"rendered":"VISE on the Road: 2023 MICCAI conference"},"content":{"rendered":"<p><img loading=\"lazy\" class=\"aligncenter size-large wp-image-10700\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/11\/21101524\/Screenshot-2023-11-21-at-7.29.19-AM-650x310.png\" alt=\"\" width=\"650\" height=\"310\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/11\/21101524\/Screenshot-2023-11-21-at-7.29.19-AM-650x310.png 650w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/11\/21101524\/Screenshot-2023-11-21-at-7.29.19-AM-300x143.png 300w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/11\/21101524\/Screenshot-2023-11-21-at-7.29.19-AM-150x72.png 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/11\/21101524\/Screenshot-2023-11-21-at-7.29.19-AM.png 727w\" sizes=\"(max-width: 650px) 100vw, 650px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>Members of six VISE labs affiliated with the 国产原创 Institute for Surgery and Engineering (VISE) traveled to Vancouver to take part in the 26<sup>th<\/sup> International Conference on Medical Image Computing and Computer-Assisted Intervention. Like-minded scientists from around the world gather yearly at the MICCAI conference to share their work with one another on a grand stage.<\/p>\n<p>The annual conference attracts leading biomedical scientists, engineers, and clinicians from multiple disciplines associated with medical imaging and computer-assisted intervention. 
The conference includes oral presentations, poster sessions, workshops, tutorials, and challenges. The VISE labs and presenters were:<\/p>\n<p><strong>Medical-image Analysis and Statistical Interpretation Lab (<a href=\"https:\/\/my.vanderbilt.edu\/masi\/\" target=\"_blank\" rel=\"noopener\">MASI<\/a>)<\/strong><\/p>\n<p><strong>Bennett Landman, PhD, <\/strong>Chair, Department of Electrical and Computer Engineering, co-led the QuantConn challenge with CDMRI, the International Workshop on Computational Diffusion MRI.<\/p>\n<p>\u201cMICCAI is a fantastic venue to share and discuss deep technical innovations,\u201d Landman said.<\/p>\n<p>\u201cIn addition to the large center stage, we have developed a robust satellite community that brings together smaller groups to advance specialized science. This year, we led the QuantConn challenge as a satellite event to move toward using diffusion MRI tractography as biomarkers.\u201d<\/p>\n<p>Graduate student Nancy Newlin, one of the challenge organizers, gave a QuantConn talk titled \u201cIntroducing QuantConn: Overcoming challenging diffusion acquisitions with harmonization.&#8221;<\/p>\n<p>\u201cIt was an honor to work as point person for this project and engage with these researchers firsthand,\u201d she said. 
\u201cI&#8217;m excited to publish the findings later this year!\u201d<\/p>\n<p>Thomas Li, an MD\/PhD student, presented an oral talk titled \u201cLongitudinal Multimodal Transformer Integrating Imaging and Latent Clinical Signatures from Routine EHRs for Pulmonary Nodule Classification.\u201d<\/p>\n<p>During a poster session, graduate student Peter Lee shared his work titled \u201cScaling up 3D Kernels with Bayesian Frequency Re-parameterization for Medical Image Segmentation.\u201d<\/p>\n<p><strong>Biomedical Data Representational and Learning Lab (<a href=\"https:\/\/my.vanderbilt.edu\/huolab\/\" target=\"_blank\" rel=\"noopener\">HRLB<\/a>)<\/strong><\/p>\n<p>Assistant Professor of Electrical and Computer Engineering\u00a0<strong>Yuankai Huo<\/strong> was happy to return to the first in-person MICCAI conference in North America since COVID.<\/p>\n<p>\u201cYou could really feel the excitement in the air \u2013 everyone was so thrilled to be back, sharing ideas face-to-face,\u201d he said.<\/p>\n<p>\u201cAnd the advancements, like in multi-modal and self-supervised learning in medical image analysis, were just the cherry on top. 
It felt like we were not just catching up on lost time but actually leaping forward.\u00a0To be part of this, after such a challenging time, was really something special.&#8221;<\/p>\n<p>Huo shared graduate student <strong>Tianyuan Yao\u2019s<\/strong> work on diffusion harmonization with an oral presentation titled \u201cA unified single-stage learning model for estimating fiber orientation distribution functions on heterogeneous multi-shell diffusion-weighted MRI.\u201d Yao attended virtually.<\/p>\n<p>Graduate student <strong>Ruining Deng <\/strong>won second place for his entry, \u201cKnowledge-Infused Efficient Learning for Giga-Pixel Virtual Microscopy Images,\u201d in the MICCAI Student Board Thesis Madness, a 3-minute PhD thesis competition.<\/p>\n<p>Deng also presented his first-author paper, \u201cDemocratizing Pathological Image Segmentation with Lay Annotators via Molecular-empowered Learning,\u201d during the MICCAI main conference.<\/p>\n<p>Deng enjoyed the in-person opportunity to ask questions and receive advice from experts in the field. \u201cThe biggest takeaway\u00a0is understanding how to define a good question, one that can resolve clinical problems and have clinical value while also driving technical innovation. With such solutions, we can build a knowledge bridge between the medical\/surgical\/clinical fields and engineering,\u201d he said.<\/p>\n<p><strong>Can Cui<\/strong> presented her first-author paper at the MILLAND workshop, titled \u201cFeasibility of Universal Anomaly Detection Without Knowing the Abnormality in Medical Images.\u201d<\/p>\n<p>She said the conference was a fantastic experience and cited connecting with old and new friends as a highlight. 
\u201cI had the opportunity to engage in face-to-face discussions with other researchers and witness the enthusiasm among researchers for various exciting topics in the medical image domain,\u201d she said.<\/p>\n<p><strong>Quan Liu<\/strong> presented his first-author paper (remotely) during the MMMI workshop: \u201cM^2Fusion: Bayesian-based Multimodal Multi-level\u00a0Fusion on Colorectal Cancer Microsatellite Instability\u00a0Prediction.\u201d<\/p>\n<p>\u201cI had the opportunity to immerse myself in a series of captivating talks, each centered around distinct topics such as multimodal learning, efficient learning, and histopathology image analysis,\u201d Liu said. \u201cThe ideas presented were truly inspiring, and the cutting-edge methods discussed have left me feeling not only motivated but also well-equipped for my future research endeavors.\u201d<\/p>\n<p><strong>Medical Image Computing Lab (MedICL)<\/strong><\/p>\n<p>Two students in the MedICL lab, under the direction of Assistant Professor of Computer Science Ipek Oguz, won awards during the conference.<\/p>\n<p><strong>Han Liu<\/strong>&#8217;s team won the CrossMoDA challenge with their project titled \u201cLearning Site-specific Styles for Multi-institutional Unsupervised Cross-modality Domain Adaptation.\u201d In addition, Liu presented \u201cCOLosSAL: A Benchmark for Cold-start Active Learning for 3D Medical Image Segmentation\u201d in a workshop.<\/p>\n<p><strong>David Lu<\/strong> won the outstanding paper award at the AE-CAI workshop. 
His paper is titled \u201cASSIST-U: A System for Segmentation and Image Style Transfer for Ureteroscopy.\u201d This was Lu\u2019s first first-author paper, and he said, \u201cIt felt incredibly encouraging to be recognized.\u201d Lu also presented a workshop talk titled \u201cMAP: Domain Generalization via Meta-Learning on Anatomy-Consistent Pseudo-Modalities\u201d on behalf of Dewei Hu at MedAGI.<\/p>\n<p><strong>Visual Informatics and Engineering Lab (VINE)<\/strong><\/p>\n<p>Assistant Professor of Computer Science<strong> Daniel Moyer <\/strong>organized two satellite events: the Fairness in AI and Medical Imaging (FAIMI) workshop and a tractography challenge with Bennett Landman&#8217;s students and collaborators.<\/p>\n<p><strong>Machine Automation, Perception, and Learning Lab (<a href=\"https:\/\/my.vanderbilt.edu\/maple-lab\/\" target=\"_blank\" rel=\"noopener\">MAPLE<\/a>)<\/strong><\/p>\n<p>Graduate students <strong>John Han, Ayberk Acar, and Jumanh Atoum<\/strong>, along with research assistant <strong>Yinhong Quin<\/strong>, won the Best Methodology Report award in Surgical Tool Localization in Endoscopic Videos, which was part of the EndoVis challenge. 
The MAPLE lab is under the direction of Assistant Professor of Computer Science <strong>Jie Ying Wu<\/strong>.<\/p>\n<p>Graduate student Ayberk Acar presented his first-author paper, titled \u201cTowards Navigation in Endoscopic Kidney Surgery based on Preoperative Imaging,\u201d during the AE-CAI | CARE | OR 2.0 workshop. He also presented additional papers in that workshop for students who were unable to attend.<\/p>\n<p>First authors Ayberk Acar and Jumanh Atoum, paper title: \u201cIntraoperative Gaze Guidance with Mixed Reality\u201d<\/p>\n<p>First authors Guansen Tong and Jiayi Li, paper title: \u201cDevelopment of an Augmented Reality Guidance System for Head and Neck Cancer Resection\u201d<\/p>\n<p><strong>Biomedical Image Analysis for Image Guided Interventions Lab (<a href=\"https:\/\/my.vanderbilt.edu\/bagl\/\" target=\"_blank\" rel=\"noopener\">BAGL<\/a>)<\/strong><\/p>\n<p>Graduate student<strong> Mohammad Khan <\/strong>presented his poster and lab mate Ziteng Liu\u2019s paper during the Simulation and Synthesis in Medical Imaging workshop. Khan\u2019s poster title: \u201cCochlear Implant Fold Detection in Intra-operative CT using Weakly Supervised Multi-Task Deep Learning.\u201d Liu\u2019s paper: \u201cSuper-resolution segmentation network for inner-ear tissue segmentation.\u201d<\/p>\n<p>Khan found MICCAI an enriching experience. \u201cThe opportunity to engage with experts and learn about the latest developments in my field has equipped me with new algorithms and cutting-edge techniques. The experience has expanded my knowledge and will undoubtedly shape the future of my work,&#8221; he said.<\/p>\n<p><strong>Medical Image Processing Lab (MIP)<\/strong><\/p>\n<p>Graduate student <strong>Yubo Fan<\/strong> echoed Khan\u2019s sentiment. 
\u201cAttending MICCAI in person and presenting our work there was a fantastic experience,\u201d he said.<\/p>\n<p>\u201cI was especially impressed by the exceptional quality of cutting-edge research showcased at the conference. It was really nice to see that there are many interesting problems to solve in this field. The opportunity to network with other researchers in academia and industry was also invaluable.\u201d<\/p>\n<p>Fan presented during a workshop and a poster session. Workshop paper title: \u201cA Unified Deep-Learning-Based Framework for Cochlear Implant Electrode Array Localization.\u201d Poster title: \u201cCT Synthesis with Modality-, Anatomy-, and Site-Specific Inference.\u201d<\/p>\n<p>The 国产原创 Institute for Surgery and Engineering (VISE)\u00a0is an interdisciplinary, trans-institutional structure designed to facilitate interactions and exchanges between engineers and physicians. Its goal is to become the premier institute for the training of the next generation of surgeons, engineers, and computer scientists capable of working symbiotically on new solutions to complex interventional problems, ultimately resulting in improved patient care.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; Members of six VISE labs affiliated with the 国产原创 Institute for Surgery and Engineering (VISE) traveled to Vancouver to take part in the 26th International Conference on Medical Image Computing and Computer-Assisted Intervention. 
Like-minded scientists from around the world gather yearly at the MICCAI conference to share their work with one another on a&#8230;<\/p>\n","protected":false},"author":670,"featured_media":10700,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true,"_links_to":"","_links_to_target":""},"categories":[12],"tags":[68,40,41,591,32,454,85,692,706,64,31,30,369],"acf":[],"jetpack_featured_media_url":"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/11\/21101524\/Screenshot-2023-11-21-at-7.29.19-AM.png","jetpack_publicize_connections":[],"jetpack_sharing_enabled":false,"jetpack_shortlink":"https:\/\/wp.me\/p98pzF-2My","_links":{"self":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/10698"}],"collection":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/users\/670"}],"replies":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/comments?post=10698"}],"version-history":[{"count":18,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/10698\/revisions"}],"predecessor-version":[{"id":10717,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/10698\/revisions\/10717"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/media\/10700"}],"wp:attachment":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/media?parent=10698"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/categories?post=10698"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\
/v2\/tags?post=10698"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}