
Summary of presentations

Alexandra Fragola, Institut des Sciences Moléculaires d’Orsay (ISMO), Université Paris Saclay, CNRS - Adaptive optics light-sheet microscopy for in vivo imaging

The study of fast biological phenomena at the cellular level has become possible, even deep within biological tissues, in several animal models including rodents, thanks to efficient 3D microscopy techniques. Light-sheet fluorescence microscopy (LSFM) is particularly well suited to imaging relatively transparent specimens such as zebrafish, but image quality deep inside biological tissues remains limited by scattering and by optical aberrations caused by refractive-index inhomogeneity, which degrade both resolution and sensitivity in all microscopy modalities.

A novel approach for fast adaptive optics (AO) in fluorescence light-sheet microscopy

To overcome this difficulty, in particular regarding aberrations, AO has been implemented on several optical-sectioning microscopy setups and now provides reliable live correction of aberrations, enabling subcellular imaging of organelle dynamics in the early zebrafish brain with light-sheet microscopy. Significant gains in spatial resolution and signal intensity have been demonstrated, but the two established approaches, based on direct wavefront sensing or on a sensorless process, still suffer from several limitations: the complex generation or invasive introduction of a guide star inside the sample, or a time-consuming iterative procedure to reach a good correction. A few years ago, we proposed an innovative adaptive-optics strategy for neuroimaging, adapting pioneering work from astronomy to the constraints of fluorescence microscopy. By using an extended-source Shack-Hartmann wavefront sensor (ESSH), our approach preserves the speed and accuracy of direct wavefront sensing while taking advantage of existing labelling methods for biological samples [1]. It relies on the cross-correlation of images of an extended source obtained through a microlens array, and must be coupled to an optical-sectioning method in order to provide a guide plane: it is therefore compatible with various fluorescence imaging techniques already used for 3D imaging, such as two-photon microscopy [2] or light-sheet fluorescence microscopy [3].
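The core measurement in an extended-source Shack-Hartmann sensor is the relative shift between the images formed behind neighbouring microlenses, estimated by cross-correlation. The NumPy sketch below illustrates the principle only (a synthetic random scene and an exact circular pixel shift; not the actual ESSH processing pipeline, which handles subpixel shifts and real subaperture images):

```python
import numpy as np

def estimate_shift(ref, img):
    """Estimate the integer (row, col) shift of img relative to ref from the
    peak of their circular cross-correlation, computed via FFT."""
    xc = np.fft.ifft2(np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)).real
    peak = np.unravel_index(np.argmax(xc), xc.shape)
    # Map peak indices back to signed shifts (FFT wrap-around convention)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xc.shape))

# Synthetic "subaperture" images: a random extended scene and a shifted copy
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
img = np.roll(ref, (3, -2), axis=(0, 1))  # known shift of (+3, -2) pixels
shift = estimate_shift(ref, img)          # recovers (3, -2)
```

In a real sensor, one such shift per microlens gives the local wavefront slope over that subaperture, from which the wavefront is reconstructed.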

AO-enhanced LSFM for biological imaging in depth

We have implemented this original wavefront sensor in home-built light-sheet microscopy setups in order to quantitatively demonstrate the performance of this new, fast closed-loop approach. We developed two home-built AO-LSFM setups: 1) an AO-DSLM setup (DSLM: Digitally Scanned Light-sheet fluorescence Microscopy), with AO in the emission path to correct sample-induced aberrations, and 2) an AO-ASLM setup (ASLM: Axially Swept Light-Sheet Microscopy), also with AO in the emission path, for high-resolution 3D imaging in densely labelled samples. Based on two-colour labelling of the sample, which is routinely used in biology to enable structural and functional imaging, we then demonstrated ESSH-based AO running at 10 Hz with an optimal photon budget, allowing improved imaging of GCaMP7b-labelled neurons deep in the live adult Drosophila brain using AO-DSLM [3]. Recently, we showed that ESSH-based AO-ASLM provides diffraction-limited 3D resolution deep (>300 µm) into the zebrafish brain by correcting sample-induced aberrations, together with a twofold signal increase for large cells, while maintaining the ability to distinguish individual cells even at great depth (see figure). This is key to many studies, since quantifying cell number/density (segmentation) is a major biomarker in many fields (developmental biology, neuroscience, cardiology).
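A closed-loop AO correction like the one described above can be caricatured as a simple modal integrator: at each iteration the sensor measures the residual wavefront and the deformable mirror subtracts a fraction of it. The sketch below uses plain NumPy with made-up modal coefficients (not the actual control law of these setups) to show why the residual shrinks geometrically with the loop gain:

```python
import numpy as np

# Hypothetical static sample-induced aberration, expressed as modal
# coefficients (e.g. Zernike modes) -- illustrative values only
aberration = np.array([0.8, -0.3, 0.5])  # arbitrary units
command = np.zeros_like(aberration)      # deformable-mirror command
gain = 0.5                               # integrator loop gain

for _ in range(20):                      # e.g. 2 s of correction at 10 Hz
    residual = aberration + command      # wavefront seen by the sensor
    command -= gain * residual           # integrator update

# After n iterations the residual scales as (1 - gain)**n
residual = aberration + command
```

With a gain of 0.5, twenty iterations reduce the residual by a factor of about 10^6, which is why a 10 Hz loop converges to a stable correction within seconds.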


Hugues Talbot, CentraleSupélec, Saclay-IA - 3D image analysis in the era of deep learning

Image analysis is the process of extracting meaningful information from image data. Over the last decade or so, it has changed from a specialised domain requiring advanced knowledge of low-level mathematical vision operators into a subdomain of Artificial Intelligence dominated by machine-learning, and particularly deep-learning, methods. These methods use data and examples to learn how to perform a task, such as image classification, instead of following fixed rules. They can produce impressive results, but they also come with challenges.
 
One challenge is obtaining enough data, especially in 3D, where images are harder to acquire than in 2D because they require special equipment and techniques. Another is obtaining expert labels for the data, that is, telling the machine what each part of the image shows. This is essential, but it can be very hard and time-consuming, especially in 3D. Some tasks are even harder than image classification, such as image segmentation: dividing an image into regions that belong to different objects or categories, such as organs or tissues in a medical image. This task requires more detailed labels than image classification, which makes it more difficult for machine learning and deep learning.
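One way to see why segmentation labels are richer than classification labels: a segmentation is itself a 3D image of class assignments, and it is typically evaluated voxel by voxel, for instance with the Dice coefficient. A minimal NumPy sketch (the volume sizes and masks here are invented for illustration):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A n B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / total if total else 1.0

# Toy 4x4x4 volume: a ground-truth cube of 8 voxels vs. a slightly larger guess
truth = np.zeros((4, 4, 4), dtype=bool)
truth[1:3, 1:3, 1:3] = True   # 8 voxels
pred = np.zeros_like(truth)
pred[1:3, 1:3, 1:4] = True    # 12 voxels, 8 of them overlapping
score = dice(pred, truth)     # 2*8 / (12 + 8) = 0.8
```

Every voxel contributes to this score, which is exactly why producing the ground-truth labels is so much more laborious than tagging a whole image with a single class.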
 
In this talk, we will explain some recent methods that use deep learning for 3D image segmentation. We will not assume prior knowledge of machine learning or image processing. We will present some of the basic concepts and demonstrate useful tools for analysing 3D images.
 
 

 
