Generative Adversarial Networks: Ultrasound Image Translation

Presenter: Noah Graff
Mentors: Istvan Lauko & Adam Honts, Mathematical Sciences

Low-cost, highly portable ultrasound devices under development are designed to be equipped with low-frequency ultrasound transducers, which provide relatively low imaging quality after reconstruction. There is strong interest in developing software technology to improve such images and to approximate the image quality of high-end devices with high-frequency transducers. With this research, we aim to use deep learning to translate images produced with a low-frequency transducer into the higher-frequency image domain, countering the hardware limitations. The type of deep learning we are experimenting with is the Generative Adversarial Network (GAN), a model built on convolutional neural networks and commonly used for image translation tasks. Specifically, using unpaired ultrasound images from the high- and low-frequency domains, collected from volunteers by the staff of UWM's sonography program, we aim to help better identify cephalic veins through GAN-enhanced imaging. We present results, which could readily translate to other anatomies, showing substantial, structurally stable enhancement of image quality over low-frequency imaging and compensating for the lack of high-end hardware components.
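The abstract describes GAN-based translation between unpaired low- and high-frequency image domains. As a rough illustration only, the sketch below shows what one generator update of an unpaired, CycleGAN-style training step could look like in PyTorch; the network sizes, loss weights, and image shapes are placeholder assumptions and this is not the project's actual model.

```python
# Illustrative sketch of unpaired image-to-image translation (CycleGAN-style).
# All architecture choices and hyperparameters below are assumptions made
# only to keep the example concrete for single-channel ultrasound frames.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.InstanceNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class Generator(nn.Module):
    """Maps images from one frequency domain to the other."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            ConvBlock(channels, width),
            ConvBlock(width, width),
            nn.Conv2d(width, channels, 3, padding=1),
            nn.Tanh(),  # outputs scaled to [-1, 1]
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Patch-level critic: real images of a domain vs. translated ones."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(width, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# Two generators (low->high, high->low) and two discriminators,
# trained with adversarial + cycle-consistency losses on unpaired batches.
G_lh, G_hl = Generator(), Generator()
D_h, D_l = Discriminator(), Discriminator()
adv_loss, cycle_loss = nn.MSELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(
    list(G_lh.parameters()) + list(G_hl.parameters()), lr=2e-4
)

low = torch.randn(4, 1, 128, 128)   # stand-in for low-frequency images
high = torch.randn(4, 1, 128, 128)  # stand-in for unpaired high-frequency images

fake_high = G_lh(low)
fake_low = G_hl(high)
pred_fake_h = D_h(fake_high)
pred_fake_l = D_l(fake_low)

# Generator objective: fool both discriminators and reconstruct the inputs
# after a round trip (cycle consistency), so no paired images are required.
g_loss = (
    adv_loss(pred_fake_h, torch.ones_like(pred_fake_h))
    + adv_loss(pred_fake_l, torch.ones_like(pred_fake_l))
    + 10.0 * cycle_loss(G_hl(fake_high), low)
    + 10.0 * cycle_loss(G_lh(fake_low), high)
)
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```

In a full training loop the discriminators would take their own alternating updates on real versus translated batches; the cycle-consistency terms are what allow training when the low- and high-frequency images are unpaired.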


Comments

  1. Really beautiful job walking through some complex information in a way that was very understandable. I felt like I learned a lot!

  2. This was a very clear presentation. I was impressed by how you anticipated questions and answered them throughout your talk. I thought of many questions while listening, but you addressed everything. Your delivery is very understandable for people outside your field.
