g-factor attention model for deep neural network powered parallel imaging: gANN

Authors

    Jaeyeon Yoon1,2, Doohee Lee1,2, Jingyu Ko1,2, Jingu Lee1, Yoonho Nam3,4, and Jongho Lee1

    1. Seoul National University, Seoul, Korea, Republic of
    2. AIRS medical, Seoul, Korea, Republic of
    3. Department of Radiology, Seoul St. Mary’s Hospital, Seoul, Korea, Republic of
    4. College of Medicine, The Catholic University of Korea, Seoul, Korea, Republic of

    ISMRM 2019

    In this study, we propose a new attention model for deep neural network based parallel imaging. G-factor maps are used to inform the network of locations with a high likelihood of aliasing artifacts, and the network additionally uses coil sensitivity maps and the acquired k-space data to enforce data consistency. Because the g-factor attention network considers both the multi-channel information and the spatially varying aliasing conditions, it successfully removed aliasing artifacts at uniform under-sampling up to an acceleration factor of 6 and outperformed conventional parallel imaging methods. A minimal sketch of these two ingredients is given below.
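
    The following PyTorch sketch illustrates the two ideas the abstract describes: (1) attention weights derived from the g-factor map that emphasize voxels prone to aliasing, and (2) a SENSE-style data-consistency step using coil sensitivity maps and the acquired k-space. This is a minimal illustration under stated assumptions, not the authors' gANN implementation; the module names, layer sizes, and the hard data-consistency formulation are all assumptions.

    ```python
    # Minimal sketch of g-factor attention plus a SENSE-style data-consistency
    # step. Module names, layer sizes, and the data-consistency formulation are
    # illustrative assumptions, not the authors' gANN implementation.
    import torch
    import torch.nn as nn


    class GFactorAttention(nn.Module):
        """Weights CNN features by attention derived from the g-factor map,
        so spatial locations with high aliasing likelihood get more emphasis."""

        def __init__(self, channels: int):
            super().__init__()
            self.to_attention = nn.Sequential(
                nn.Conv2d(1, channels, kernel_size=3, padding=1),
                nn.Sigmoid(),  # per-pixel attention weights in (0, 1)
            )

        def forward(self, features: torch.Tensor, g_map: torch.Tensor) -> torch.Tensor:
            # features: (B, C, H, W) CNN features; g_map: (B, 1, H, W) g-factor map
            return features * self.to_attention(g_map)


    def data_consistency(x, k_acquired, mask, sens):
        """Enforce consistency with the acquired multi-coil k-space data.

        x:          (B, H, W) complex image estimate from the network
        k_acquired: (B, Nc, H, W) complex under-sampled multi-coil k-space
        mask:       (B, 1, H, W) binary sampling mask (1 = acquired line)
        sens:       (B, Nc, H, W) complex coil sensitivity maps
        """
        coil_imgs = sens * x.unsqueeze(1)                # project onto each coil
        k_est = torch.fft.fft2(coil_imgs, norm="ortho")  # image -> k-space
        k_dc = mask * k_acquired + (1 - mask) * k_est    # keep acquired samples
        coil_dc = torch.fft.ifft2(k_dc, norm="ortho")    # k-space -> image
        # SENSE-style coil combination using the sensitivity maps
        return (sens.conj() * coil_dc).sum(dim=1)
    ```

    In this sketch, acquired k-space samples overwrite the network's estimate at the sampled locations, so the multi-channel data constrain the output while the g-factor attention guides the de-aliasing where the noise amplification (and hence the residual aliasing) is expected to be strongest.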