Wavefront Reconstruction Method Based on Multi-Feature Fusion in Neural Networks
Abstract
The Shack-Hartmann wavefront sensor (SH-WFS) is widely used in adaptive optics systems. To fully exploit the information contained in SH-WFS images, a neural-network-based wavefront reconstruction method called Moment-U-Net is proposed. In addition to the spot centroids used by traditional wavefront reconstruction methods, this approach uses spot intensity and second-order moment features to characterize spot shape, enabling high-precision wavefront reconstruction. Moment-U-Net adopts U-Net as its main architecture and incorporates feature extraction modules such as densely connected DenseBlocks (Dense Convolutional Blocks) and the SEBlock (Squeeze-and-Excitation Block) channel attention mechanism, allowing it to capture higher-order aberration features effectively during training. The model converges well when trained on large-scale simulated atmospheric phase and wavefront image data. Validation on simulated atmospheric turbulence of varying intensities shows that Moment-U-Net achieves root-mean-square reconstruction errors from 0.010 μm to 0.025 μm. The method also reconstructs wavefronts accurately for faint stars, with errors below 0.070 μm for 8th-magnitude stars. These results demonstrate that Moment-U-Net not only attains high wavefront reconstruction precision but also generalizes well across different turbulence intensities and stellar magnitudes, highlighting its potential to enhance the correction capability of adaptive optics systems in practical observations.
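The per-subaperture features named above (total intensity, centroid, and second-order moments) can be sketched as standard image moments of a spot. The function name `spot_features` and the normalization by total intensity are illustrative assumptions, not necessarily the paper's exact definitions:

```python
import numpy as np

def spot_features(spot):
    """Compute features of one SH-WFS subaperture spot image:
    total intensity, centroid (first-order moments), and central
    second-order moments characterizing spot shape."""
    spot = np.asarray(spot, dtype=float)
    total = spot.sum()                     # spot intensity feature
    ys, xs = np.indices(spot.shape)
    cx = (xs * spot).sum() / total         # centroid x
    cy = (ys * spot).sum() / total         # centroid y
    # Central second-order moments: spot width along x/y and tilt (xy)
    mxx = ((xs - cx) ** 2 * spot).sum() / total
    myy = ((ys - cy) ** 2 * spot).sum() / total
    mxy = ((xs - cx) * (ys - cy) * spot).sum() / total
    return total, (cx, cy), (mxx, myy, mxy)

# Example: a symmetric Gaussian spot centered at (5, 5) on an 11x11 grid
y, x = np.indices((11, 11))
g = np.exp(-((x - 5) ** 2 + (y - 5) ** 2) / (2 * 1.5 ** 2))
total, (cx, cy), (mxx, myy, mxy) = spot_features(g)
```

For this symmetric spot the centroid lands at (5, 5), the x and y second moments are equal, and the cross moment vanishes; an elongated or distorted spot would break these symmetries, which is the shape information the network consumes beyond the centroid alone.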