Daily Report

Daily Endocrinology Research Analysis

02/27/2026
3 papers selected
172 analyzed


Summary

A multicenter randomized trial showed that transitioning directly from multiple daily injections to a tubeless automated insulin delivery system significantly improved glycemic control without safety concerns in children and adults with type 1 diabetes. Two AI studies advanced endocrine diagnostics and therapeutics: an explainable deep learning framework for inpatient insulin titration aligned with clinician reasoning and improved junior clinicians’ accuracy, and a privacy-preserving hand-image model that detected acromegaly with AUC 0.96, outperforming specialists.

Research Themes

  • Automated insulin delivery for type 1 diabetes
  • Explainable AI for insulin titration
  • Privacy-preserving AI diagnosis in endocrinology

Selected Articles

1. Tubeless automated insulin delivery versus multiple daily injections in children and adults with type 1 diabetes with elevated HbA1c

82.5 · Level I · RCT
The Lancet Diabetes & Endocrinology · 2026 · PMID: 41747751

In the RADIANT multicenter RCT (n=188), direct transition from multiple daily injections to a tubeless automated insulin delivery system significantly improved glycemic control versus continued injections, without safety concerns, across children and adults with type 1 diabetes. Findings support tubeless AID as a standard-of-care option for those with suboptimal control on injections.

Impact: This is the first randomized trial to directly compare tubeless AID against multiple daily injections in both adults and children with suboptimal control, demonstrating superior efficacy and acceptable safety.

Clinical Implications: Clinicians can consider direct transition from injections to tubeless AID to improve HbA1c and time-in-range in eligible patients, potentially expanding access to automated insulin therapy across age groups.

Key Findings

  • Multicenter RCT randomized 188 participants (125 AID, 63 control) using tubeless AID vs continued multiple daily injections.
  • Tubeless AID produced a greater reduction in HbA1c than injections; improvements were observed in both children and adults.
  • No safety concerns were identified in the AID group during the trial.

Methodological Strengths

  • Multicenter, international, randomized, parallel-group design with inclusion of both pediatric and adult populations
  • Use of continuous glucose monitoring and clinically meaningful endpoints (e.g., HbA1c)

Limitations

  • Open-label design may introduce performance bias
  • Trial follow-up duration and detailed safety event rates are not specified in the abstract

Future Directions: Longer-term, pragmatic studies should evaluate durability of benefits, real-world adherence, cost-effectiveness, and health equity of tubeless AID across diverse populations.

BACKGROUND: Automated insulin delivery (AID) systems have been shown to improve glycaemic outcomes in people with type 1 diabetes managed with insulin pump therapy. No randomised studies have evaluated the benefits of tubeless AID in both adults and children with suboptimal glycaemia compared with multiple daily injections. We aimed to evaluate the safety and efficacy of a tubeless AID system compared with multiple daily injections in this population.

METHODS: RADIANT was a multicentre, international, parallel-group, open-label, randomised, controlled trial done in 19 hospitals in the UK, Belgium, and France. Participants aged 4-70 years with type 1 diabetes managed with multiple daily injections and continuous glucose monitoring, and who had elevated HbA1c, were eligible.

FINDINGS: Between Sept 11, 2023, and April 26, 2024, 188 participants were randomly assigned to the AID group (n=125) or the control group (n=63). The AID group had a greater reduction in HbA1c than the control group.

INTERPRETATION: Results from this trial show the clinical efficacy of direct transition from multiple daily injections to tubeless AID in adults and children with type 1 diabetes, with no safety concerns, supporting AID as a therapeutic option within standard of care for people with type 1 diabetes.

FUNDING: Insulet Corporation.

2. Explainable deep learning framework incorporating medical knowledge for insulin titration in diabetes.

79 · Level III · Cohort
Communications Medicine · 2026 · PMID: 41748914

An expert-guided explainable deep learning framework using Shapley Taylor Interaction Index and a doctor-in-the-loop process improved transparency and clinical alignment for insulin titration in hospitalized T2D patients. The system reduced unreasonable explanations versus other XAI methods and significantly improved junior clinicians’ titration accuracy and confidence.

Impact: This work advances trustworthy AI in endocrinology by integrating interaction-aware explanations with clinician constraints, demonstrating measurable gains in decision quality for insulin dosing.

Clinical Implications: Hospitals can adopt explainable, clinician-aligned AI decision support to augment insulin titration, particularly benefiting less-experienced practitioners while maintaining transparency for audit and training.

Key Findings

  • Two EHR cohorts (internal n=1,275; external n=292) were used to develop and validate an XAI framework for insulin titration.
  • STII-based explanations with doctor-in-the-loop constraints reduced unreasonable explanations and aligned closely with expert rationale.
  • In AI–human collaboration testing, junior clinicians achieved significantly higher insulin titration accuracy with the system; clinician confidence improved across experience levels.
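
The Shapley Taylor Interaction Index named above attributes model behaviour to feature pairs via discrete second derivatives of a value function, averaged over feature subsets with Shapley-style weights. A minimal exact (exponential-time) sketch of the order-2 index for a toy value function; the value functions and the weighting shown are illustrative, not the paper's clinical model:

```python
from itertools import combinations
from math import comb

def stii_pair(v, n, i, j):
    """Exact order-2 Shapley Taylor Interaction Index for the pair (i, j).

    v: value function mapping a frozenset of feature indices to a float.
    n: total number of features (indices 0..n-1).
    Exponential in n; feasible only for small toy examples.
    """
    rest = [f for f in range(n) if f not in (i, j)]
    total = 0.0
    for size in range(len(rest) + 1):
        for subset in combinations(rest, size):
            s = frozenset(subset)
            # Discrete second derivative of v w.r.t. features i and j at S.
            delta = v(s | {i, j}) - v(s | {i}) - v(s | {j}) + v(s)
            total += delta / comb(n - 1, size)
    return (2 / n) * total

# Purely additive value function: no interactions, so the index is 0.
additive = lambda s: sum(2.0 * f for f in s)
print(stii_pair(additive, 4, 0, 1))  # 0.0

# A value function with an explicit product interaction between features 0 and 1.
interacting = lambda s: (3.0 * 5.0) if {0, 1} <= s else 0.0
print(stii_pair(interacting, 4, 0, 1))  # 15.0
```

The two checks mirror the index's intent: additive models yield zero pairwise credit, while a pure pairwise interaction term is recovered in full.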

Methodological Strengths

  • External validation across independent cohort with expert-verified explanation alignment
  • Explicit modeling of feature interactions (STII) and iterative doctor-in-the-loop refinement

Limitations

  • Non-randomized design; clinical outcome impact (e.g., hypoglycemia rates) was not assessed
  • Single external site and inpatient setting may limit generalizability to other care contexts

Future Directions: Prospective multicenter trials should evaluate patient outcomes, workflow integration, and fairness across subgroups; open-sourcing code and explanation audits could strengthen reproducibility and trust.

BACKGROUND: Deep learning has shown promise in diabetes management but faces challenges in real-world application due to its "black-box" nature, characterized by opaque internal decision-making processes. Explainable artificial intelligence (XAI) methods have been proposed to enhance model transparency. However, most current XAI methods applied in the medical field ignore the interaction of features in complex environments and deviate from clinical domain knowledge.

METHODS: Our study used two Electronic Health Record (EHR) cohorts of hospitalized patients with type 2 diabetes (T2DM), including an internal dataset of 1,275 inpatients (mean age 58.5 ± 14.3 years) and an external dataset of 292 patients (mean age 69.3 ± 14.5 years). We introduce an expert-guided XAI framework to improve the transparency and trustworthiness of deep learning models for insulin titration in diabetes management. The framework utilizes a post-hoc XAI model named Shapley Taylor Interaction Index (STII) to capture the impact of feature interactions. Additionally, the model is refined iteratively in a doctor-in-the-loop (DIL) process by encoding clinical constraints to align with medical expertise.

RESULTS: Here we show that our STII-DIL model could explore the interaction factors and reduce unreasonable explanations compared with other explanation models. The final XAI system's explanations demonstrated strong alignment with experts' explanations and increased correctness in expert evaluation. An AI-human collaboration study revealed that insulin titration accuracy significantly improved for junior clinicians with STII-DIL assistance, while senior clinicians showed minimal change. Both junior and senior clinicians reported increased confidence when using the STII-DIL system.

CONCLUSIONS: We present an explainable deep learning framework that combines post-hoc XAI and expert domain knowledge to provide transparent and expert-aligned explanations for insulin titration in type 2 diabetes management. This framework enhances decision-making accuracy and confidence, especially for junior clinicians, and may facilitate broader clinical adoption of AI-assisted decision-making tools.

This study addresses the need for transparent and reliable artificial intelligence (AI) tools in diabetes care. We developed an explainable deep learning system that helps doctors adjust insulin doses by showing how different clinical factors contribute to its recommendations. The system combines an AI explanation method with guidance from medical experts, allowing the model to be refined to better match real clinical reasoning. We found that this approach produced clearer and more accurate explanations and supported better decision-making, especially for junior clinicians. Both junior and senior clinicians also reported greater confidence when using the system. These findings suggest that explainable AI may improve safety and support wider clinical use of AI-assisted diabetes management.

3. Automatic acromegaly detection using deep learning on hand images: a multicenter observational study.

74.5 · Level III · Cohort
The Journal of Clinical Endocrinology and Metabolism · 2026 · PMID: 41757900

In a nationwide multicenter study (n=716; 11,480 images), a ResNet-50–based model using privacy-preserving dorsal hand and fist images detected acromegaly with AUC 0.96, sensitivity 0.89, and specificity 0.91, outperforming endocrinologists. Excluding palms/fingerprints mitigates privacy concerns, enabling deployment in public screening contexts.

Impact: Provides a scalable, privacy-conscious diagnostic approach for a rare endocrine disorder and demonstrates superiority to specialist assessment across multiple centers.

Clinical Implications: Hand-image AI screening could prompt earlier referral and biochemical workup for acromegaly while preserving privacy, potentially deployable in health checkups and community settings.

Key Findings

  • Multicenter dataset from 15 Japanese pituitary centers included 716 patients and 11,480 dorsal/fist images without palm/fingerprint regions.
  • ResNet-50 model achieved AUC 0.96, sensitivity 0.89, specificity 0.91, and F1-score 0.89, outperforming endocrinologists (F1 0.43–0.63).
  • Training/validation and test sets were split by centers (12 vs 3), supporting external generalization across institutions.
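
The reported operating-point metrics all derive from a single binary confusion matrix. A minimal sketch showing how sensitivity, specificity, PPV, NPV, and F1 relate; the counts below are hypothetical illustrations, not the study's actual test-set tallies:

```python
# Hypothetical confusion-matrix counts for a binary acromegaly classifier;
# illustrative only, not the study's actual test-set numbers.
tp, fn = 89, 11   # acromegaly cases: correctly and incorrectly classified
tn, fp = 91, 9    # controls: correctly and incorrectly classified

sensitivity = tp / (tp + fn)   # recall on true cases
specificity = tn / (tn + fp)   # recall on true controls
ppv = tp / (tp + fp)           # positive predictive value (precision)
npv = tn / (tn + fn)           # negative predictive value
f1 = 2 * ppv * sensitivity / (ppv + sensitivity)

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} F1={f1:.2f}")
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence, which matters when moving a model from an enriched referral cohort to general-population screening.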

Methodological Strengths

  • Nationwide multicenter design with institution-wise data split for external testing
  • Privacy-preserving image acquisition (dorsal/fist only) and expert comparison benchmark

Limitations

  • Requires broader validation including healthy general population and other disorders with acromegaloid features
  • Single-country dataset may limit generalizability across ethnicities and care settings

Future Directions: Prospective screening studies in general populations, integration with clinical pathways for confirmatory testing, and assessment of cost-effectiveness and equity.

CONTEXT: Acromegaly poses clinical challenges in terms of early diagnosis and intervention. Therefore, the development of novel diagnostic tools is essential. Although artificial intelligence (AI) models based on external appearance have been proposed, privacy concerns have limited their use.

OBJECTIVE: To develop a privacy-conscious deep learning model for detecting acromegaly using hand images.

METHODS: This nationwide multicenter study enrolled 716 patients (317 with acromegaly and 399 controls) and 11,480 images from 15 Japanese pituitary centers. The inclusion criteria were age ≥18 years and care received at the participating facilities. Hand images focusing on the dorsal and fist sign, excluding the palm/fingerprint regions, were used to develop the model. The data were split into training/validation (12 centers) and test (3 centers) datasets. A ResNet-50-based model was trained using PyTorch with data augmentation and 5-fold cross-validation. For each patient, the predictions were averaged over 4 images. The performance of the model was compared with that of endocrinologists.

RESULTS: The model achieved a sensitivity of 0.89, specificity of 0.91, positive predictive value of 0.88, negative predictive value of 0.93, F1-score of 0.89, and an area under the receiver operating characteristic curve of 0.96, outperforming specialists (F1-score range: 0.43-0.63).

CONCLUSION: This study highlights the utility of dorsal hand and fist sign as diagnostic clues for acromegaly, which the AI model captured more accurately than endocrinologists. Using this privacy-conscious feature, this model can be deployed in public settings like health checkups. Further validation using larger datasets, including healthy individuals and diverse diseases, is necessary.
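
The abstract's patient-level aggregation step (averaging predictions over four images per patient before classifying) can be sketched as follows; the probabilities and the 0.5 cutoff are hypothetical, since the study does not report its operating threshold:

```python
# Hypothetical per-image model probabilities for two patients (4 images each);
# the study averages image-level predictions per patient before classifying.
image_probs = {
    "patient_A": [0.92, 0.85, 0.88, 0.95],
    "patient_B": [0.10, 0.30, 0.22, 0.18],
}

THRESHOLD = 0.5  # illustrative cutoff; the actual operating point is not given

patient_calls = {}
for patient, probs in image_probs.items():
    mean_prob = sum(probs) / len(probs)  # average over the 4 images
    patient_calls[patient] = (mean_prob, mean_prob >= THRESHOLD)

for patient, (p, positive) in patient_calls.items():
    print(f"{patient}: mean prob {p:.2f} -> "
          f"{'acromegaly' if positive else 'control'}")
```

Averaging over several views is a common variance-reduction step: a single atypical photograph is less likely to flip the patient-level call.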