High-throughput spike detection in greenhouse cultivated grain crops with attention mechanisms based deep learning models


Authors

ULLAH Sajid, PANZAROVÁ Klára, TRTÍLEK Martin, LEXA Matej, MÁČALA Vojtěch, ALTMANN Thomas, NEUMANN Kerstin, HEJÁTKO Jan, PERNISOVÁ Markéta, GLADILIN Evgeny

Year of publication: 2024
Type: Article in a professional journal
Journal / Source: Plant Phenomics
Faculty / MU workplace: Faculty of Science

Citation
www https://spj.science.org/doi/10.34133/plantphenomics.0155
DOI http://dx.doi.org/10.34133/plantphenomics.0155
Keywords: spike detection; high-throughput image analysis; attention networks; deep neural networks
Description: Detection of spikes is the first important step towards image-based quantitative assessment of crop yield. However, spikes of grain plants occupy only a tiny fraction of the image area and often emerge amid a mass of plant leaves whose colors closely resemble those of the spike regions. Consequently, accurate detection of grain spikes is, in general, a non-trivial task even for advanced, state-of-the-art deep neural networks (DNNs). To improve spike detection, we propose architectural changes to Faster-RCNN (FRCNN): a reduced set of feature extraction layers and a global attention module. The performance of the extended FRCNN-A was compared with that of the conventional FRCNN on images of different European wheat cultivars, including 'difficult' bushy phenotypes, from two different phenotyping facilities and optical setups. Our experimental results show that the architectural adaptations introduced in FRCNN-A helped to improve spike detection accuracy in inner plant regions. The mAP of FRCNN and FRCNN-A on inner spikes is 76.0% and 81.0%, respectively, while the state-of-the-art Swin Transformer detector reaches 83.0%. As a lightweight network, FRCNN-A is faster than both FRCNN and the Swin Transformer on the baseline as well as the augmented training datasets. On the FastGAN-augmented dataset, FRCNN achieved a mAP of 84.24%, FRCNN-A 85.0%, and the Swin Transformer 89.45%. The increase in mAP of the DNNs on the augmented datasets is proportional to the amount of original and augmented IPK images. Overall, this study indicates the superior performance of attention-based deep learning models in detecting small and subtle features of grain spikes.
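The record describes FRCNN-A only at a high level: a trimmed feature-extraction backbone plus a global attention module. As a minimal, hypothetical sketch, assuming a PyTorch setting and a squeeze-and-excitation style channel attention (the paper's exact design is not specified here, and all names below are illustrative), such a module could reweight backbone feature maps with globally pooled context before they reach the region proposal network:

# Minimal sketch (assumption): a squeeze-and-excitation style "global attention"
# block applied to backbone feature maps of a two-stage detector. The exact
# FRCNN-A design (which layers are removed, how attention is defined) is not
# given in this record.
import torch
import torch.nn as nn


class GlobalAttention(nn.Module):
    """Channel-wise global attention over a feature map."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one value per channel
        self.fc = nn.Sequential(             # excite: learn per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                          # globally reweight feature maps


if __name__ == "__main__":
    feats = torch.randn(2, 256, 50, 50)       # toy backbone feature map
    out = GlobalAttention(256)(feats)
    print(out.shape)                           # torch.Size([2, 256, 50, 50])

The intuition is that globally pooled context lets the network amplify channels that respond to spike texture even when spikes cover only a small fraction of the image; whether the published FRCNN-A uses channel, spatial, or combined attention cannot be determined from this abstract alone.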
Related projects:
