Samuel Pantze, Matthew McGinity, Ulrik Günther (Center for Advanced Systems Understanding, Görlitz, Germany; Helmholtz-Zentrum Dresden-Rossendorf, Germany; IXLAB, Technische Universität Dresden, Germany)
We propose a novel VR-based workflow that uses eye tracking for rapid ground truth generation and proofreading with deep learning-based cell tracking models. Life scientists reconstruct cell lineage trees from 3D time-lapse microscopy images acquired at high spatio-temporal resolution. The reconstruction is computationally expensive and traditionally involves manually annotating the positions of individual cells across all recorded time points, then linking them over time into complete trajectories. Deep learning-based algorithms accelerate this process, yet rely heavily on high-quality, manually annotated ground truth data and curation. Even today, manual annotation is mostly performed with 2D interfaces and visualisations, which greatly limits spatial understanding and navigation. In this work, we bridge the gap between deep learning-based cell tracking software and 3D/VR-based visualisation. We lift the incremental annotation, training and proofreading loop of the deep learning model into the third dimension and apply natural user interfaces such as eye tracking to accelerate the cell tracking workflow for life scientists.
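To make the linking step above concrete, here is a minimal sketch of classical frame-to-frame linking: detected cell centroids at consecutive time points are paired by minimising total displacement via the Hungarian algorithm (scipy.optimize.linear_sum_assignment). The function name link_frames and the gating threshold max_dist are illustrative assumptions, not part of the authors' deep learning pipeline, and real trackers additionally handle divisions, appearances and disappearances.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(centroids_t, centroids_t1, max_dist=10.0):
    """Return (i, j) index pairs linking cells at time t to cells at t+1."""
    # Pairwise Euclidean distances between all 3D centroids (N x M matrix).
    diff = centroids_t[:, None, :] - centroids_t1[None, :, :]
    cost = np.linalg.norm(diff, axis=-1)
    # Globally optimal one-to-one assignment minimising total displacement.
    rows, cols = linear_sum_assignment(cost)
    # Reject links longer than the gating threshold; unmatched cells would
    # need dedicated handling (division, track start or end).
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

# Example: three cells drifting slightly between two 3D time points.
t0 = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0], [10.0, 0.0, 2.0]])
t1 = np.array([[0.5, 0.2, 0.1], [5.4, 5.1, 4.8], [10.2, 0.3, 2.1]])
print(link_frames(t0, t1))  # [(0, 0), (1, 1), (2, 2)]
```

Chaining such pairwise links across all time points yields the complete trajectories from which lineage trees are assembled; it is exactly this annotate-link-curate loop that the proposed VR and eye tracking workflow aims to accelerate.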