MohsPedia/Surgical Technique

AI & Deep Learning in Mohs Surgery: Frozen Section Analysis

Deep learning algorithms applied to Mohs frozen sections have reached pooled sensitivity ~95% and specificity ~94% for BCC margin detection in published internal validation studies. The first whole-slide BCC detection model was published by Campanella et al. in 2020, and a whole-slide margin control workflow followed in 2021 (Kim et al.). cSCC detection has lagged BCC, but a 2024 algorithm achieved a median average precision of 0.904 at 100x magnification for invasive cSCC. As of 2026, no AI tool has FDA clearance for autonomous Mohs frozen section interpretation — every published algorithm is positioned as decision support, not a replacement for surgeon-pathologist judgment, and external prospective validation remains the principal gap before clinical deployment.

By Dr. Yehonatan Kaplan, MD, Fellow ACMS · Published: 2026-04-22 · Updated: 2026-04-22 · Reviewed: 2026-04-22
artificial intelligence · deep learning · digital pathology · Mohs surgery · frozen section · BCC · SCC · machine learning

Key Takeaways

  • A 2024 systematic review and meta-analysis (Lin et al., PMID 38991503) reported pooled sensitivity ~95% and specificity ~94% for AI detection of BCC on Mohs frozen sections in internal validation cohorts.
  • The Campanella et al. 2020 deep learning model (PMID 32590033) was the first published algorithm to detect BCC on whole-slide Mohs frozen sections, followed by a 2021 whole-slide margin control workflow study (Kim et al., PMID 33656186).
  • For cSCC on Mohs frozen sections, the Jiang et al. 2024 algorithm (PMID 37864429) reached a median average precision of 0.904 at 100x magnification — SCC detection still trails BCC due to greater morphologic complexity.
  • Tan et al. 2025 (PMID 39625169) demonstrated automated BCC segmentation on whole-slide Mohs frozen sections, supporting workflow integration where the algorithm flags suspicious regions for surgeon review.
  • As of 2026, no AI algorithm has FDA clearance for autonomous Mohs interpretation. All deployed systems are decision support tools, and external prospective validation remains limited.
  • Aggressive BCC subtypes (infiltrative, morpheaform), inflammatory mimickers, and frozen section artifacts (folding, ink contamination, freezing artifact) are the dominant failure modes that require surgeon override of algorithm output.

Overview: AI in Mohs Surgery

Deep learning algorithms applied to Mohs frozen sections aim to assist surgeons in identifying residual tumor at margins, reducing slide-reading time, and standardizing interpretation across institutions. Current research focuses on whole-slide image analysis using convolutional neural networks (CNNs) trained on labeled frozen section datasets. As of 2026, AI is positioned as a decision support tool, not a replacement for surgeon-pathologist judgment. The clinical question driving AI development in Mohs is binary at every margin: tumor present, or not. Unlike diagnostic dermatopathology, Mohs interpretation does not require subtyping or grading at the margin — only detection. This narrow task is well-matched to current CNN architectures, which is why BCC margin detection has been the most successful published application to date.
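To make the binary framing concrete, the sketch below aggregates hypothetical patch-level CNN probabilities into a single slide-level call. The aggregation rule (a minimum count of high-probability patches) is an illustrative noise-suppression heuristic, not a method taken from any of the cited studies.

```python
def slide_tumor_call(patch_probs, prob_threshold=0.5, min_positive_patches=3):
    """Aggregate patch-level tumor probabilities into one binary margin call.

    patch_probs: per-patch tumor probabilities from a hypothetical CNN.
    The slide is flagged 'tumor present' when at least min_positive_patches
    patches exceed prob_threshold -- a simple heuristic to suppress
    isolated false-positive patches.
    """
    positives = sum(1 for p in patch_probs if p > prob_threshold)
    return positives >= min_positive_patches

# A clean margin: every patch has low tumor probability
print(slide_tumor_call([0.02, 0.10, 0.07, 0.01]))        # False
# A margin with a cluster of high-probability patches
print(slide_tumor_call([0.02, 0.91, 0.88, 0.95, 0.10]))  # True
```

In practice the threshold and minimum patch count would be tuned on a validation set, since they trade sensitivity against false positives at the margin.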

Deep Learning for BCC Detection on Frozen Sections

BCC is the most studied tumor for AI detection in Mohs surgery. Multiple groups have published CNN-based detection algorithms with high sensitivity in internal validation. The published trajectory of BCC AI development: Campanella et al. 2020 (PMID 32590033) published one of the first deep learning models trained specifically on Mohs frozen sections for BCC detection, demonstrating that whole-slide CNN classification was feasible despite the morphologic differences between frozen and FFPE tissue. Lee et al. 2020 (PMID 32926979) reported a deep learning algorithm with high sensitivity for BCC detection on Mohs frozen sections, validating the approach in an independent dataset. Kim et al. 2021 (PMID 33656186) extended the work to a whole-slide margin control workflow, showing that the algorithm could reproduce surgeon margin calls on Mohs slides with high accuracy. Tan et al. 2025 (PMID 39625169) advanced the field from classification to segmentation, producing pixel-level tumor maps overlaid on the whole-slide image — a format that fits more naturally into the surgeon's standard map-and-mark workflow. Reported BCC algorithm sensitivities in internal validation have ranged from 91% to 100%. The principal limitations remaining in 2026 are external validation across institutions with different staining protocols, generalization to rare BCC variants, and integration into the time-constrained Mohs workflow.
Study | Year | Task | Reported Performance | PMID
Campanella et al. | 2020 | BCC detection on Mohs frozen WSI | First published BCC Mohs DL algorithm | 32590033
Lee et al. | 2020 | BCC detection on Mohs frozen sections | High sensitivity (>90%) in internal validation | 32926979
Kim et al. | 2021 | Whole-slide margin control for BCC | Algorithm reproduced surgeon margin calls | 33656186
Tan et al. | 2025 | Automated BCC segmentation on Mohs WSI | Pixel-level tumor maps for workflow integration | 39625169

AI for SCC Detection on Frozen Sections

SCC AI development trails BCC because the morphologic task is harder. SCC detection on Mohs frozen sections requires distinguishing invasive carcinoma from actinic keratosis, SCC in situ, dense inflammation, and reactive epithelial changes — none of which enter the differential for BCC at the same frequency. The Jiang et al. 2024 algorithm published in Experimental Dermatology (PMID 37864429) reported a median average precision of 0.904 at 100x magnification for invasive cSCC on Mohs frozen sections. This is the strongest published cSCC Mohs result to date and establishes that high-magnification CNN detection of invasive squamous nests is technically feasible. Future work in SCC Mohs AI involves multi-stain validation (H&E plus rapid IHC), inclusion of perineural invasion detection, and discrimination of well-differentiated invasive SCC from acantholytic and pseudoglandular variants that can mimic adnexal structures on frozen sections.
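Average precision, the metric reported by Jiang et al., can be computed from ranked detections as sketched below. This is a generic implementation of the standard metric for illustration, not the authors' evaluation code.

```python
def average_precision(scores, labels):
    """Average precision (AP) for a binary detector.

    scores: detector confidence per candidate region (higher = more tumor-like)
    labels: 1 if the region truly contains invasive carcinoma, else 0
    AP accumulates precision at each rank, weighted by the recall
    increment at that rank (step-wise integration of the PR curve).
    """
    ranked = sorted(zip(scores, labels), key=lambda t: t[0], reverse=True)
    total_pos = sum(lab for _, lab in ranked)
    tp, ap, prev_recall = 0, 0.0, 0.0
    for rank, (_, lab) in enumerate(ranked, start=1):
        tp += lab
        recall = tp / total_pos
        ap += (recall - prev_recall) * (tp / rank)
        prev_recall = recall
    return ap

# Perfect ranking: both true tumor regions scored above all negatives
print(average_precision([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
```

An AP of 0.904 therefore means the algorithm's confidence ranking places true invasive-carcinoma regions near the top, but not perfectly, across the test set.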

AI Performance vs Human Mohs Surgeons

The 2024 systematic review and meta-analysis by Lin et al. (PMID 38991503) is the most rigorous summary of AI performance in Mohs and dermatologic surgery to date. The pooled performance for BCC frozen section detection was sensitivity ~95% and specificity ~94% — figures that approach experienced Mohs surgeon-pathologist accuracy in internal validation cohorts. The systematic review also catalogs the principal limitations: most published studies report internal validation only, external prospective trials are rare, dataset sizes are small relative to other deep learning domains (typically a few hundred to a few thousand slides), and head-to-head comparisons with practicing Mohs surgeons under realistic time pressure have not been published. The practical interpretation in 2026 is that AI for Mohs has reached technical readiness for decision support but has not yet generated the prospective evidence required for autonomous deployment.
Metric | AI Algorithms (pooled, internal validation) | Experienced Mohs Surgeon | Gap
Sensitivity for BCC | ~95% (Lin 2024 meta-analysis) | >95% (assumed standard of care) | Comparable in internal validation
Specificity for BCC | ~94% (Lin 2024 meta-analysis) | >95% (assumed standard of care) | Slightly lower; false positives possible
External validation | Limited published data | N/A — surgeon expertise transfers | Major gap in AI literature
Reading time per slide | Seconds (algorithm) + surgeon review | Minutes per slide | AI faster, but surgeon review still needed
Reproducibility | Deterministic given fixed weights | Inter-observer variation | AI more reproducible
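For reference, the two pooled metrics reduce to simple confusion-matrix ratios. The counts below are illustrative only and are not taken from Lin et al.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 190 of 200 tumor-positive margins detected,
# 188 of 200 tumor-free margins correctly called clear.
sens, spec = sensitivity_specificity(tp=190, fn=10, tn=188, fp=12)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=0.95, specificity=0.94
```

The clinical asymmetry matters: a false negative (missed tumor, sensitivity side) risks recurrence, while a false positive (specificity side) costs an unnecessary additional stage.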

Whole-Slide Imaging & Margin Control Workflows

AI for Mohs requires whole-slide imaging (WSI) infrastructure. Frozen sections must be scanned within minutes to fit the surgical workflow, where the typical Mohs cycle is 30-60 minutes per stage and the lab portion of that cycle is the rate-limiting step. Current deployment models for AI-assisted Mohs at academic centers follow five steps:
  1. The histotechnician produces frozen sections per standard protocol.
  2. Slides are scanned on a fast WSI device (target: under 2 minutes per slide at 20x or 40x).
  3. The whole-slide image is sent to a hospital server running the trained CNN.
  4. The algorithm returns a tumor probability map within seconds to a few minutes.
  5. The surgeon reviews both the standard glass slide and the algorithm overlay before calling the margin.
Kim et al. 2021 (PMID 33656186) demonstrated the feasibility of whole-slide margin control with deep learning in this workflow, and Tan et al. 2025 (PMID 39625169) advanced the segmentation output format that maps most naturally onto the Mohs map.
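Steps 3-5 of the workflow can be sketched as a review gate over the CNN's per-region output. The names and threshold below are hypothetical placeholders for site-specific integration code, not a vendor API.

```python
from dataclasses import dataclass

@dataclass
class SlideResult:
    slide_id: str
    region_probs: list  # per-region tumor probabilities from the CNN
    flagged: bool       # True if any region exceeds the review threshold

def analyze_slide(slide_id, region_probs, review_threshold=0.5):
    """Turn a CNN probability map into a prioritized review flag.

    The flag only tells the surgeon which regions to examine first;
    the margin call itself is still made by the surgeon on the glass
    slide (decision support, not autonomous interpretation).
    """
    flagged = any(p > review_threshold for p in region_probs)
    return SlideResult(slide_id, region_probs, flagged)

result = analyze_slide("stage1-sectionA", [0.03, 0.72, 0.10])
print(result.flagged)  # True: one region exceeds the review threshold
```

Keeping the surgeon review step outside the automated path is what makes this a decision support deployment rather than an autonomous one.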

Limitations & Failure Modes

Every Mohs surgeon deploying AI should understand the systematic failure modes documented in the published literature and in early clinical experience.
  • External generalization: Algorithm performance drops on slides from external institutions because of lab-specific staining differences, cryostat differences, and scanner differences. This is the single most important limitation reported across the BCC and SCC literature.
  • Aggressive BCC subtypes: Infiltrative and morpheaform BCCs can be missed at sparse single-cell infiltration, where the algorithm has fewer tumor pixels per region of interest to detect. These are precisely the cases where Mohs is most indicated, so missed detection is clinically consequential.
  • Inflammatory mimickers: Dense lymphocytic infiltrate and granulomatous inflammation can generate false positives because the algorithm has learned to associate cellularity and basophilia with tumor.
  • Frozen section artifacts: Tissue folding, freezing artifact, ink contamination from margin marking, and incomplete sectioning all reduce algorithm confidence and can produce both false positives and false negatives.
  • Underrepresented variants: Most algorithms are trained on common BCC variants. Rare tumors (basosquamous carcinoma, micronodular BCC with sparse infiltration, fibroepithelioma of Pinkus) are underrepresented in training data and predict less reliably.

Augmented Reality & Surgical Navigation

Beyond histology, AI is being integrated into preoperative and intraoperative surgical planning. Augmented reality (AR) systems overlay tumor margin estimates onto the patient during surgery, derived from preoperative dermoscopy, OCT, or photography. An early feasibility series of 106 skin tumor surgeries, including 16 Mohs cases, demonstrated that AR overlay during dermatologic surgery is technically feasible but did not show reduced operative time or improved margin control compared with standard practice. AR for Mohs remains experimental as of 2026 and has not been incorporated into ACMS practice guidelines.

Regulatory & Ethical Considerations

As of 2026, no AI algorithm has FDA clearance for autonomous Mohs frozen section interpretation. Algorithms are research-use-only or marketed as decision support. Liability for the margin call remains with the operating surgeon, regardless of what the algorithm output indicates.

Patient consent for AI-assisted interpretation is not yet standardized. Some academic centers have begun including a sentence in the Mohs consent form noting that algorithmic decision support tools may be used during slide review. ACMS and AAD have not issued formal guidance on consent language as of 2026.

Bias in training datasets is a known concern. Most published Mohs AI training sets are dominated by Fitzpatrick skin types I-III and by anatomical sites where Mohs is most commonly performed (head and neck). Performance on Fitzpatrick V-VI skin and on uncommon Mohs sites (genital, hand, foot) has not been independently validated. Ongoing audit of algorithm performance across patient subgroups is required for any institution deploying AI in Mohs.

Future Directions (2026 and Beyond)

Active research areas in AI for Mohs surgery as of 2026:
  1. Multi-tumor algorithms that detect BCC, SCC, and melanoma in a single model rather than requiring separate algorithms per tumor type.
  2. Perineural invasion detection on frozen H&E and on rapid IHC (S-100, SOX10), where detection is currently surgeon-dependent and inconsistent.
  3. Real-time integration with the cryostat workflow, including automated slide labeling, scanning trigger, and result return without manual handoffs.
  4. AI-augmented IHC interpretation for melanocyte counting on MART-1 and SOX10 slides — an area where reproducibility gains may be larger than for H&E margin assessment.
  5. Predictive models for Mohs stage count from preoperative dermoscopy, clinical photography, and biopsy histology, allowing better operating room scheduling.
Mayo Clinic established a dedicated AI and Innovation fellowship in dermatologic surgery in 2025 with $1M funding, signaling institutional investment in this area at the academic Mohs level.

Practical Recommendations for Mohs Surgeons in 2026

For surgeons considering AI integration into a Mohs practice:
  1. Treat AI output as a screening tool, not a diagnostic standard. The algorithm flags candidate regions; the surgeon makes the call.
  2. Confirm any algorithm-flagged region with traditional H&E review and rapid IHC if the case warrants it (see /mohspedia/mohs-immunohistochemistry).
  3. Track your false-negative rate over time. Log every case where you found tumor the algorithm missed, with subtype, artifact, and stain quality noted.
  4. Stay informed of algorithm updates and validation studies. Vendors push model updates that change behavior; your local performance log lets you detect regressions.
  5. Audit performance quarterly across Fitzpatrick skin types and anatomic sites to detect dataset bias in your patient population.
  6. Do not chart 'algorithm cleared margin' as the basis for any clinical decision. Document the surgeon's independent margin call.
  7. Follow MohsPedia and ACMS for emerging guidance. See related articles at /mohspedia/mohs-lab and /mohspedia/mohs-immunohistochemistry for laboratory and IHC context.
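The false-negative log and quarterly audit described above can be kept in any format; a minimal sketch follows, with illustrative field names rather than any standard schema.

```python
from collections import Counter

# One record per case where the surgeon found tumor the algorithm missed.
false_negative_log = [
    {"subtype": "infiltrative", "artifact": "folding",  "stain": "good"},
    {"subtype": "morpheaform",  "artifact": "none",     "stain": "pale"},
    {"subtype": "infiltrative", "artifact": "freezing", "stain": "good"},
]

def misses_by_subtype(log):
    """Tally algorithm misses per tumor subtype for the quarterly audit."""
    return Counter(rec["subtype"] for rec in log)

print(misses_by_subtype(false_negative_log))
# Counter({'infiltrative': 2, 'morpheaform': 1})
```

A log like this makes vendor model regressions visible: a jump in misses for a subtype after an update is a signal to escalate before trusting the new weights.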

References
  1. Campanella et al. Deep learning for basal cell carcinoma detection on Mohs frozen sections. J Invest Dermatol. 2020. PMID 32590033.
  2. Lee et al. Deep learning algorithm with high sensitivity for basal cell carcinoma detection on Mohs frozen sections. J Am Acad Dermatol. 2020. PMID 32926979.
  3. Kim et al. Whole-slide margin control through deep learning in Mohs micrographic surgery for basal cell carcinoma. Mod Pathol. 2021. PMID 33656186.
  4. Jiang et al. Deep learning algorithm to detect cutaneous squamous cell carcinoma on Mohs frozen sections. Exp Dermatol. 2024. PMID 37864429.
  5. Tan et al. Deep learning for automated segmentation of basal cell carcinoma on Mohs frozen sections. J Invest Dermatol. 2025. PMID 39625169.
  6. Lin et al. Artificial intelligence for Mohs and dermatologic surgery: a systematic review and meta-analysis. Dermatol Surg. 2024. PMID 38991503.

About This Article

Author: Dr. Yehonatan Kaplan, MD, Fellow ACMS

Last Medical Review: 2026-04-22

Audience: Dermatologic Surgeons