
Automatic Apparent Nasal Index from Single Facial Photographs Using a Lightweight Deep Learning Pipeline: A Pilot Study.

Medicina (Kaunas, Lithuania), 2025, Vol. 61(11). Open Access.
OpenAlex topics: Nasal Surgery and Airway Studies · Reconstructive Facial Surgery Techniques · Cleft Lip and Palate Research

Saravi B, Schorn L, Lommen J, Wilkat M, Vollmer A, Güzel HE, Vollmer M, Schrader F, Sproll CK, Kübler NR, Singh DD

Abstract

Background and Objectives: Quantifying nasal proportions is central to facial plastic and reconstructive surgery, yet manual measurements are time-consuming and variable. We sought to develop a simple, reproducible deep learning pipeline that localizes the nose in a single frontal photograph and automatically computes the two-dimensional, photograph-derived apparent nasal index (aNI), defined as width/height × 100, enabling classification into five standard anthropometric categories.

Materials and Methods: From CelebA we curated 29,998 high-quality near-frontal images (training 20,998; validation 5999; test 3001). Nose masks were manually annotated with the VGG Image Annotator and rasterized to binary masks. Ground-truth aNI was computed from the mask's axis-aligned bounding box. A lightweight one-class YOLOv8n detector was trained to localize the nose; predicted aNI was computed from the detected bounding box. Performance was assessed on the held-out test set using detection coverage and mAP, agreement metrics between detector- and mask-based aNI (MAE, RMSE, R²; Bland-Altman), and five-class classification metrics (accuracy, macro-F1).

Results: The detector returned at least one accepted nose box in 3000/3001 test images (99.97% coverage). Agreement with ground truth was strong: MAE 3.04 nasal index units (95% CI 2.95-3.14), RMSE 4.05, and R² 0.819. Bland-Altman analysis showed a small negative bias (-0.40, 95% CI -0.54 to -0.26) with limits of agreement -8.30 to 7.50 (95% CIs -8.54 to -8.05 and 7.25 to 7.74). After excluding out-of-range cases (aNI < 40.0), five-class classification on n = 2976 images achieved macro-F1 0.705 (95% CI 0.608-0.772) and 80.7% accuracy; errors were predominantly adjacent-class swaps, consistent with the small aNI error. Additional analyses confirmed strong ordinal agreement (weighted κ = 0.71 linear, 0.78 quadratic; Spearman ρ = 0.76) and near-perfect adjacent-class accuracy (0.999); performance remained stable when thresholds were shifted by ±2 NI units and across sex and age subgroups.
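The aNI computation described above reduces to simple arithmetic on the detected bounding box. A minimal sketch: the function names are illustrative, and the five-class cut-offs (55/70/85/100) are the conventional anthropometric nasal-index bands, assumed here since the abstract does not list the paper's exact boundaries.

```python
def apparent_nasal_index(box):
    """Compute aNI = width / height * 100 from an axis-aligned
    bounding box given as (x_min, y_min, x_max, y_max) in pixels."""
    x_min, y_min, x_max, y_max = box
    width = x_max - x_min
    height = y_max - y_min
    if height <= 0:
        raise ValueError("bounding box height must be positive")
    return width / height * 100.0


# Conventional five-class bands (assumed thresholds, not taken
# from the paper): each tuple is (exclusive upper bound, label).
_CLASSES = [
    (55.0, "hyperleptorrhine"),
    (70.0, "leptorrhine"),
    (85.0, "mesorrhine"),
    (100.0, "platyrrhine"),
    (float("inf"), "hyperplatyrrhine"),
]


def classify_ani(ani):
    """Map an aNI value to one of five standard categories."""
    for upper, label in _CLASSES:
        if ani < upper:
            return label
    return "hyperplatyrrhine"


# Example: a detected box 60 px wide and 80 px tall
# gives aNI = 60 / 80 * 100 = 75.0, i.e. mesorrhine.
```

Because adjacent classes share a single threshold, a small aNI error (MAE ≈ 3 units) near a boundary produces exactly the adjacent-class swaps the abstract reports.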
Conclusions: A compact detector can deliver near-universal nose localization and accurate automatic estimation of the nasal index from a single photograph, enabling reliable five-class categorization without manual measurements. The approach is fast, reproducible, and promising as a calibrated decision-support adjunct for surgical planning, outcomes tracking, and large-scale morphometric research.
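The agreement figures reported in the abstract (bias with 95% limits of agreement) follow the standard Bland-Altman construction: bias is the mean of the pairwise differences, and the limits are bias ± 1.96 × SD of those differences. A minimal NumPy sketch with illustrative function names (not from the paper):

```python
import numpy as np


def bland_altman(pred, truth):
    """Bland-Altman agreement between predicted and reference values.

    Returns (bias, lower_loa, upper_loa), where bias is the mean
    difference pred - truth and the limits of agreement are
    bias +/- 1.96 * sample SD of the differences."""
    diff = np.asarray(pred, dtype=float) - np.asarray(truth, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample SD, as is standard for Bland-Altman
    return bias, bias - 1.96 * sd, bias + 1.96 * sd


def mae_rmse(pred, truth):
    """Mean absolute error and root-mean-square error."""
    diff = np.asarray(pred, dtype=float) - np.asarray(truth, dtype=float)
    return np.abs(diff).mean(), np.sqrt((diff ** 2).mean())
```

Applied to the paper's detector- vs. mask-based aNI pairs, this construction yields the reported bias of -0.40 with limits of agreement -8.30 to 7.50.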

MeSH Terms

Humans; Deep Learning; Pilot Projects; Nose; Photography; Face; Anthropometry