HKUST PRET System Detects Lymph Node Metastasis with Only Eight Annotated Slides

HKUST researchers developed PRET, a plug-and-play pathology AI that applies in-context learning to whole-slide images and adapts to new cancer types using only 1-8 annotated slides at inference, without additional training. Validated on 23 international datasets covering 18 cancer types, PRET performs cancer screening, tumor subtyping, tumor segmentation, and lymph node metastasis detection. On the challenging lymph node metastasis task, PRET reached an AUC of approximately 98.71% using eight annotated slides. The system accepts multiple visual prompt types (slide labels, bounding boxes, rough masks, and tumor masks), enabling flexible annotation budgets and strong cross-region generalizability. Partners include Guangdong Provincial People's Hospital and Harvard Medical School. PRET reduces annotation and compute burdens and could accelerate deployment of AI pathology, especially in resource-constrained settings, though prospective clinical validation and workflow integration remain necessary.
What happened
HKUST introduced PRET, a plug-and-play pathology analysis system that brings in-context learning from NLP into whole-slide image (WSI) analysis. The model adapts to new cancer types and multiple diagnostic tasks at inference using only 1-8 annotated slides, with no further training required. The team validated PRET on 23 international benchmark datasets spanning 18 cancer types and reported an AUC of approximately 98.71% on lymph node metastasis detection using eight examples.
Technical details
PRET implements a prompt-driven, few-shot inference paradigm for pathology. Instead of dataset-specific fine-tuning, practitioners provide visual prompts during inference so the system can perform tasks on unseen tumor types. The system supports multiple prompt modalities:
- slide labels
- bounding boxes
- rough masks
- tumor masks
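These modalities trade annotation effort against spatial precision. A minimal sketch of how such a prompt hierarchy might be modeled in code (all class names and the relative cost figures below are hypothetical illustrations, not PRET's actual API or measured effort):

```python
from dataclasses import dataclass
from typing import List

# Hypothetical prompt modalities, ordered from cheapest to most precise.
@dataclass
class SlideLabel:
    label: str                 # whole-slide label only, e.g. "tumor" / "normal"

@dataclass
class BoundingBox:
    x0: int; y0: int; x1: int; y1: int   # coarse region of interest
    label: str

@dataclass
class Mask:
    pixels: List[List[int]]    # binary grid; 1 = tumor
    precise: bool              # True for curated tumor masks, False for rough masks

def annotation_cost(prompt) -> int:
    """Illustrative relative annotation effort for each prompt modality."""
    if isinstance(prompt, SlideLabel):
        return 1
    if isinstance(prompt, BoundingBox):
        return 5
    if isinstance(prompt, Mask):
        return 50 if prompt.precise else 15
    raise TypeError(f"unknown prompt type: {type(prompt)!r}")
```

A deployment site could then pick the cheapest modality that still meets its accuracy target, rather than always producing pixel-level masks.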
PRET covers a range of diagnostic tasks, including cancer screening, tumor subtyping, tumor segmentation, and lymph node metastasis detection. The approach trades offline fine-tuning and large annotation sets for an interactive, sample-efficient inference step. The paper and press coverage emphasize generalizability across institutions and staining variations, with evaluations on datasets from China, the United States, and the Netherlands.
Why it matters
PRET addresses two persistent deployment barriers for pathology AI: the heavy cost of task-specific annotation and the need to retrain models for each new tumor type or diagnostic task. By enabling few-shot adaptation at inference, PRET can shorten development cycles, reduce compute and labeling budgets, and simplify regulatory and operational workflows in settings where data collection is costly or impossible. Achieving an AUC of approximately 98.71% on lymph node metastasis detection with only eight annotated slides is significant because metastasis detection is clinically high-stakes and typically requires curated, labor-intensive datasets.
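For readers calibrating what the reported AUC means: AUC is the probability that a randomly chosen positive slide receives a higher model score than a randomly chosen negative one (ties counted as half). A self-contained reminder of that computation, using made-up scores rather than any PRET output:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the normalized Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Illustrative scores for metastatic (positive) vs. benign (negative) slides.
pos = [0.95, 0.90, 0.80, 0.70]
neg = [0.60, 0.40, 0.30, 0.75]
print(auc_from_scores(pos, neg))  # → 0.9375
```

An AUC near 0.99 therefore means almost every metastatic slide was scored above almost every benign one, which is why the eight-slide result stands out.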
Practical implications
For ML practitioners and pathology teams, PRET suggests a new workflow: curate a small set of high-quality annotated exemplar slides for a target site, then use prompt-based inference to bootstrap diagnostic functionality. This reduces the need for centralized data aggregation and massive reannotation when moving models across hospitals or cancer subtypes. PRET also supports tiered annotation budgets: coarser prompts like rough masks or slide-level labels can be used where pixel-level masks are unavailable.
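That exemplar-driven workflow can be caricatured in a few lines, under strong simplifying assumptions: precomputed embeddings stand in for a slide encoder, and nearest-exemplar matching stands in for PRET's actual inference, which the published materials do not describe at this level of detail. The key property it illustrates is that adaptation happens entirely through the support set, with no weight updates:

```python
import math
from typing import List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def few_shot_predict(support: List[Tuple[List[float], str]],
                     query: List[float]) -> str:
    """Label the query slide by its most similar annotated exemplar.
    `support` holds the 1-8 (embedding, label) pairs supplied at
    inference time; no model parameters are updated."""
    best_label, best_sim = "", -2.0
    for emb, label in support:
        sim = cosine(emb, query)
        if sim > best_sim:
            best_sim, best_label = sim, label
    return best_label

# Two exemplar slides annotated at the target site (toy embeddings).
support = [([1.0, 0.1, 0.0], "metastasis"),
           ([0.0, 0.2, 1.0], "benign")]
print(few_shot_predict(support, [0.9, 0.0, 0.1]))  # → metastasis
```

Swapping the support set swaps the task: the same frozen model could be pointed at a new cancer subtype simply by curating a handful of new exemplars.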
Limitations and caveats
The published materials focus on retrospective benchmark evaluations. Key open questions remain: how PRET performs prospectively on real-world clinical workflows, robustness to unseen staining protocols and scanners, handling of rare tumor morphologies, and regulatory acceptability. Integration with laboratory information systems (LIS), pathologist-in-the-loop UIs, and quality control pipelines will determine real-world impact. Clinical validation across diverse prospective cohorts and head-to-head trials versus human pathologists in real workflows are the logical next steps.
What to watch
Monitor preprints or peer-reviewed papers for methodology and ablation details; watch for prospective clinical validations, commercial partnerships, or open-source releases. If PRET-like prompt paradigms generalize, expect a wave of research adapting prompt engineering and few-shot evaluation to other medical imaging modalities.
Scoring Rationale
This is a significant technical advance: applying `in-context learning` to WSIs and demonstrating high few-shot performance addresses a major bottleneck for clinical deployment. The score is tempered by the need for prospective validation, regulatory approval, and workflow integration before broad clinical impact.