Navigating the challenges of artificial intelligence integration in thoracic surgery
Artificial intelligence (AI) is rapidly being implemented across all aspects of medicine. It is transforming how we diagnose disease, make clinical decisions, and assess outcomes. In thoracic surgery, the momentum is undeniable. AI technologies are enabling earlier and more accurate detection of lung cancer, aiding in preoperative risk assessment, providing real-time intraoperative decision support, and enhancing postoperative care through predictive analytics. These systems extract relevant data from massive amounts of information with remarkable speed and precision, allowing the scaling up of individualized surgical planning with the aim of improving quality and efficiency of care.
However, the path to integration of AI into routine delivery of surgical care is complex and marked by yet unresolved challenges. AI is only as reliable as the data on which it is trained, and current gaps in data diversity, annotation standards, and institutional access may undermine its generalizability and validity. Ethical concerns around patient privacy, infrastructural limitations, lack of algorithmic transparency, and legal ambiguity pose significant barriers to widespread access and use.
To fully realize the potential of AI in thoracic surgery, the surgical community must undertake a rigorous and thoughtful examination of current limitations and barriers.
Data limitations and algorithmic bias
One of the most foundational challenges facing AI in thoracic surgery is the shortage of high-quality, diverse, and well-annotated data (1-4). Unlike fields such as radiology where large, standardized imaging repositories exist, data in thoracic surgery are often fragmented across institutions, inconsistently labeled, and limited in scope (1,5). Operative videos, preoperative imaging, and outcome metrics are shaped by institutional variation, differences in surgeon technique, and regional disparities in patient demographics (3,6,7). This heterogeneity undermines the development of generalizable AI models and contributes to poor reproducibility when algorithms are deployed outside of their training context.
This limitation is more than a technical inconvenience; it is a clinical risk. When algorithms are trained on narrowly representative data, they are prone to biased outputs. If women, minorities, or patients from lower-resourced hospitals are underrepresented, the algorithm’s predictions for those groups may be inaccurate or even harmful (2,8). In thoracic surgery, this could mean disparities regarding cancer staging, risk prediction, or even eligibility for surgery itself. Without intentional strategies to assess and mitigate these biases, AI risks perpetuating and amplifying inequities in care across these different demographic groups. Compounding this issue are persistent challenges in data standardization, which further hinder efforts to ensure equitable and accurate AI implementation. In thoracic oncology, inconsistent definitions, terminology, and data collection practices—such as those observed between the Society of Thoracic Surgeons (STS) and the European Society of Thoracic Surgeons (ESTS) registries—contribute to fragmented datasets that are difficult to harmonize. This fragmentation limits cross-institutional collaboration, restricts large-scale outcomes research, and delays the development of robust, generalizable AI tools. Both STS and ESTS have identified non-uniform data elements as a major barrier to integration, underscoring the urgent need for coordinated standardization efforts to enable collaborative research and equitable innovation (9).
Addressing bias in training data for AI systems, particularly in lung cancer screening, is critical to ensuring equitable performance across diverse populations. Key strategies include curating demographically representative datasets, auditing model performance across subgroups, and involving diverse stakeholders throughout the AI development lifecycle. The literature emphasizes the importance of federating data from multiple institutions to increase heterogeneity, generating synthetic data to fill demographic gaps, and creating datasets that intentionally include underrepresented populations (10-12). Stakeholder engagement at the stages of problem formulation, dataset selection, and model evaluation is essential to prevent the reinforcement of structural disparities and knowledge gaps (11,13). Equally important is rigorous post-training model auditing. AI systems should be systematically evaluated for subgroup performance, particularly for metrics such as false positive and false negative rates, to ensure no population is disproportionately disadvantaged. Ongoing monitoring for data drift, fairness, and clinical validity is necessary after deployment. Both the World Health Organization and the U.S. Department of Health and Human Services have emphasized the need for transparency, explainability, and routine assessments of algorithmic impact on marginalized populations (11).
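The subgroup audit described above can be sketched in a few lines of code. The grouping labels, outcome encoding, and example numbers below are purely illustrative and not drawn from any cited tool; real audits would use validated cohorts and confidence intervals:

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute false-positive and false-negative rates per subgroup.

    records: iterable of (subgroup, y_true, y_pred) with binary labels,
    e.g. 1 = malignant nodule, 0 = benign.
    Returns {subgroup: {"fpr": ..., "fnr": ...}} (None when undefined).
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            if y_pred == 0:
                c["fn"] += 1  # missed cancer: false negative
        else:
            c["neg"] += 1
            if y_pred == 1:
                c["fp"] += 1  # benign flagged as cancer: false positive
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for g, c in counts.items()
    }

# Hypothetical audit of a screening model's outputs in two patient groups
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
]
rates = subgroup_error_rates(records)
# Group B's false-negative rate (1.0) far exceeds group A's (0.5):
# in a real audit, a gap like this would flag the model for review.
```

The point of such an audit is that an aggregate accuracy figure can hide exactly this pattern: a model can look acceptable overall while systematically missing disease in one underrepresented group.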
The annotation of surgical data introduces further complexity. Unlike static imaging, operative videos and intraoperative images require domain-specific labeling by expert surgeons. This is a time-consuming and resource-intensive process. Without standardized annotation protocols and multi-institutional efforts, model training may become inconsistent, limiting scalability (8).
These issues are clearly exemplified by the development of AI-driven three-dimensional reconstruction tools for preoperative planning. While early prototypes have shown potential in improving visualization and guiding surgical strategy in pulmonary segmentectomies, many such tools are trained on small computed tomography (CT) datasets with limited demographic diversity (1). As a result, they may fail when applied to patients with anatomical variants or poor imaging quality (4). Pulmonary vessels under two millimeters in diameter are frequently misidentified, introducing real risk for intraoperative complications such as vessel injury or incomplete resection.
Technical barriers: infrastructure, interpretability, and validation
Integrating AI into thoracic surgery requires more than technical capability. It demands a reconfiguration of how surgical teams operate, make decisions, and interact with the data. For AI to have meaningful clinical impact, it must align seamlessly with the pace and complexity of modern operating rooms. Yet many institutions lack the necessary infrastructure to support these systems (14).
A major obstacle is digital fragmentation. Hospitals often face limited interoperability between electronic health records (EHRs), imaging platforms, and surgical robotics. Real-time data integration, critical for many AI applications, is often hindered by outdated or incompatible systems (7,8). Even advanced AI tools can fail if they are difficult to implement, require excessive manual input, or disrupt workflow (3). Tools such as augmented reality overlays or intraoperative video analytics can enhance surgical precision, but only if they are intuitive and nonintrusive. Interfaces that clutter displays or delay feedback may be abandoned quickly. Effective AI must support surgical flow, not complicate it. In a high-stakes environment such as a real-time surgical procedure, any extra step requiring time or effort may reduce the likelihood of widespread adoption.
A persistent challenge is the opacity of many AI models. Deep learning algorithms often produce accurate results without offering clear explanations (1-3,8,15). In thoracic surgery, where decisions carry serious consequences, this lack of transparency undermines user trust in AI. Surgeons are unlikely to rely on outputs they cannot interpret, especially during complex or unexpected scenarios (2,5,7,16). Strategies such as visual saliency maps or inherently interpretable models can help clinicians understand the rationale behind AI recommendations. These efforts are key to transforming AI from a black box into a trusted clinical tool.
Equally important is validation. Unlike static devices, AI models evolve over time, and their performance can change with new data, institutional differences, or shifting clinical practices (5). A model that performs well in one setting may fail in another. Real-world examples underscore this risk. One AI model, IBM’s Watson for Oncology, recommended bevacizumab for a patient with squamous cell lung cancer, disregarding a known contraindication and risking fatal hemoptysis (17). These cases highlight the potential for serious harm when AI systems are not properly validated.
Validation must extend beyond retrospective studies. Prospective trials and pilot implementations are needed to test AI tools in live surgical environments (15). While publications on AI in thoracic surgery are increasing, few studies evaluate the impact of these tools on actual patient outcomes (16). Without evidence demonstrating improvements in efficiency, safety, or recovery, clinicians and administrators have little motivation to implement them. Furthermore, hospitals may hesitate to invest in tools that lack reimbursement pathways or a clear return on investment. This creates a cycle of stagnation: without clinical adoption, real-world data are scarce. Without real-world data, adoption remains limited. Breaking this cycle will require dedicated support for translational research, including pilot implementation projects, pragmatic trials, and structured post-market evaluations. Variables like blood, smoke, anatomic variation, and time pressure challenge models in ways that static datasets cannot. Algorithms that function flawlessly in controlled settings may fail during actual procedures. Furthermore, validation is not a one-time task. Continuous monitoring, recalibration, and accountability mechanisms must be in place (4). Determining who is responsible for this oversight—developers, regulatory bodies, hospitals, operating surgeons, or some combination of all—is an ongoing challenge. Without structured governance, AI could enter clinical practice prematurely, before its limitations are adequately understood.
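One common way to operationalize the continuous monitoring described above is the population stability index (PSI), which compares the distribution of a model input between the training era and recent practice. The sketch below uses a single scalar feature, synthetic data, and conventional (but illustrative) alert thresholds; production monitoring would track many features and model outputs:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (e.g. training-era inputs)
    and a recent production sample of the same feature.

    Common rule of thumb (illustrative, not a regulatory standard):
    PSI < 0.1 stable; 0.1-0.25 worth watching; > 0.25 likely drift
    warranting recalibration review.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # fall back if all values equal

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(sample)
        # floor each fraction to avoid log(0) on empty bins
        return [max(c / n, 1e-4) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical check: training-era vs recent values of one imaging feature
reference = [0.1 * i for i in range(100)]        # uniform on [0, 9.9]
recent = [0.1 * i + 3.0 for i in range(100)]     # same shape, shifted up
psi = population_stability_index(reference, recent)
# A shifted distribution drives the PSI well above the 0.25 alert level,
# signaling that the deployed model's inputs no longer match its training.
```

A scheduled job computing such indices over recent cases is one concrete form the "ongoing monitoring for data drift" cited above could take; who reviews and acts on the alerts remains the governance question the text raises.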
Financial and operational costs
Another significant yet often underrecognized barrier to AI adoption in thoracic surgery is the financial burden associated with implementation. These costs extend well beyond initial acquisition and include ongoing subscription fees, software licensing, secure digital storage for large imaging and video datasets, and integration with existing hospital systems such as EHRs, imaging platforms, and robotic surgical tools. This integration frequently requires custom engineering and infrastructure upgrades, further compounding costs (18-21). In addition, successful deployment often demands dedicated full-time equivalents (FTEs), such as data scientists, IT specialists, and clinical coordinators, to manage data curation, workflow redesign, algorithm maintenance, and continuous performance monitoring (19,20). These financial barriers are multifactorial, encompassing not only direct costs but also indirect investments in training clinicians and support staff, ongoing model validation, and institutional readiness. For hospitals operating under constrained budgets, the absence of clear reimbursement pathways or robust cost-effectiveness data makes it difficult to justify such investments. As a result, even clinically promising AI tools may face delayed or limited adoption in thoracic surgical practice.
Ethical and legal considerations
The integration of AI into thoracic surgery introduces not only technical challenges but also complex ethical and legal questions. Chief among these is the question of accountability and liability. When an AI-assisted recommendation leads to patient harm, who is responsible? Traditional malpractice frameworks assume human agency, placing responsibility squarely on the clinician (6). However, AI adds a new layer of influence that may not always be transparent or traceable. At present, surgeons are expected to maintain clinical oversight. AI should function as a decision-support tool, not a substitute for medical judgment. Yet as AI systems become more autonomous, especially within robotic platforms and predictive analytics, the boundaries of accountability become less clear (6,16). Regulatory bodies may need to redefine liability structures, potentially assigning some responsibility to AI developers or treating these systems akin to medical devices, where manufacturers can be held accountable for performance failures (2). The current regulatory pathways for AI in surgical practice are risk-based, adaptive, and jurisdiction-specific, with the United States, European Union, and China each employing distinct frameworks. In the United States, the U.S. Food and Drug Administration (FDA) regulates AI-enabled surgical devices primarily under the Software as a Medical Device (SaMD) framework, using a total product life cycle (TPLC) approach. This includes premarket review, post-market surveillance, and ongoing performance monitoring. The FDA applies a risk-based classification: low-risk administrative AI tools are generally not regulated, while high-risk AI embedded in surgical devices (e.g., robotic systems, intraoperative decision support) undergoes rigorous review. Clinical decision support tools are regulated based on their risk and the degree to which their outputs can be independently verified by clinicians.
The FDA has also piloted programs such as the Software Precertification Pilot Program to streamline oversight for rapidly evolving AI technologies, but further statutory authority may be needed for full implementation. The FDA’s regulatory strategy emphasizes collaboration, harmonization of standards, and continuous post-market evaluation to ensure safety and effectiveness in real-world settings (22,23).
Data privacy presents further ethical complexity. AI systems require access to large volumes of patient data, raising questions about ownership, consent, and protection (3,15,16). Although frameworks such as the Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) exist, their enforcement varies, and compliance can be burdensome (5,24). The challenge intensifies when real-time intraoperative data, such as surgical video or biometric parameters, are collected. It remains unclear how this information should be stored, who can access it, and how long it should be retained. Without clear policies, these data may be vulnerable to unauthorized access and misuse. Patients may not even be aware that their data are contributing to AI development. As institutions aggregate data for algorithm development and validation, ensuring secure storage and maintaining anonymity become paramount.
Patients also have the right to understand how AI is involved in their care. If algorithms influence treatment decisions, surgical planning, or risk assessments, this role should be disclosed. Consent in the AI era should include not only data usage but also the potential impact of AI on clinical decisions (6,8). Patients may reasonably want to know how accurate an algorithm is, how it was validated, and whether it has been approved for clinical use. A lack of transparency could erode trust and compromise the therapeutic relationship.
Frameworks for determining when and how to disclose AI use recommend a risk-stratified approach. Key criteria include: the degree of AI autonomy (i.e., whether the AI is making or merely supporting decisions), the clinical risk introduced, the extent to which the AI departs from standard practice, whether the AI is patient-facing, and whether the patient has meaningful agency in the decision (25,26). Patient preference studies indicate that most patients favor physician-supervised AI and value transparency but may experience information overload or distrust if disclosures are overly technical or not well integrated into the clinical encounter (26,27). Patients generally prefer that physicians retain ultimate responsibility and decision-making authority, especially in cases of disagreement between AI and clinician (28). Involving patient representatives in AI governance and consent policy development is recommended to ensure that patient values and concerns are addressed.
The regulatory landscape remains underdeveloped. Agencies like the U.S. FDA are beginning to propose frameworks for AI in medicine, but many issues remain unresolved (29). Unlike traditional devices, AI algorithms often evolve through continuous learning, making them difficult to assess through static approval processes. This fluidity challenges current regulatory models and may delay the adoption of AI tools in clinical settings (30). Until more adaptive and clear regulatory standards are established, health systems may be cautious in deploying AI, and insurers may be reluctant to offer reimbursement.
Gaps between research development and clinical adoption
AI has become a prominent focus in thoracic surgery research, yet much of its progress remains confined to academic publications and laboratory settings (1,8,16,24). A growing disconnect exists between the volume of AI innovations in the literature and the number of tools integrated into daily surgical practice (4,24). Bridging this divide requires more than technological breakthroughs. It demands clinical relevance, workflow integration, regulatory clarity, and cultural readiness within surgical teams (16). While algorithms for lung nodule detection, complication prediction, and surgical guidance often demonstrate high accuracy in retrospective analyses, translation into real-world use remains challenging (31-33). Successful implementation depends not only on model performance but also on robust software engineering, intuitive user interfaces, prospective validation, and, critically, active engagement of clinical end users throughout development (2,31-33).
Notably, over 40 AI algorithms have received FDA clearance for thoracic radiology applications in the United States, primarily focused on the detection, segmentation, and characterization of pulmonary nodules. These tools have shown improved sensitivity in identifying nodules when used to augment radiologist interpretation, with only modest increases in false-positive rates (32,34-36). However, while AI assistance can improve radiologist performance in nodule detection and may reduce reading time, direct evidence of clinical benefit for patients, such as improved morbidity, mortality, or patient-centered outcomes, remains limited. Most studies to date have focused on diagnostic accuracy rather than patient outcomes, and there is a lack of robust, prospective, multi-institutional trials demonstrating that AI implementation leads to earlier cancer detection that translates into improved survival or quality of life. Similarly, while economic modeling suggests potential cost-effectiveness of AI in lung cancer screening settings, especially for high-risk populations, these findings are highly uncertain due to the paucity of real-world data linking AI use to downstream clinical and financial outcomes (31,37).
A recurring issue is misalignment between AI developers and surgical practice. Some tools are designed without a full understanding of the clinical environment. A model may accurately identify a rare complication, but if it addresses a low-priority concern or does not inform decision-making, it will not be adopted. Others may attempt to replicate existing clinical tools, adding little value. These limitations often stem from insufficient collaboration between data scientists and clinicians during the design phase. Effective translation begins with identifying real clinical problems and designing AI tools that fit seamlessly into existing workflows. This requires early and sustained input from thoracic surgeons, not as passive testers but as active co-developers. Prioritizing user-centered design and maintaining open communication throughout development ensures that AI addresses practical needs, such as intraoperative navigation or surgical planning for complex cases.
The role in surgeon training
One important challenge to AI assimilation in surgery is the potential erosion of clinical expertise. The traditional surgical training pathway relies on hands-on experience under the direct supervision of expert senior staff. If surgeons turn to AI tools in lieu of engaging surgical trainees, the conventional transmission of knowledge and experience may be disrupted. Overreliance on AI could also diminish a surgeon's ability to adapt during unforeseen situations (2). This is particularly troubling in thoracic surgery, where decisions are often made in rapidly evolving, high-risk conditions. AI should enhance surgical performance, not displace the skills and intuition that define surgical practice. The surgeon must remain the central figure in patient care, safeguarding both safety and trust.
Advocating AI literacy for surgeons
Even the most sophisticated AI tools will fall short if the surgeons they are designed to support do not understand how they work, lack trust in their outputs, or are unsure how to use them. In this context, education is not a complement to innovation but a prerequisite for adoption. Current surgical training programs in thoracic surgery remain largely unprepared for this shift (2,24). As a result, many practicing surgeons have limited insight into how AI systems function. This knowledge gap fosters skepticism and inhibits confident, responsible use of these tools in the clinical setting (7). Addressing this gap requires the integration of AI literacy across all levels of surgical education. From medical school through residency and continuing professional development, surgeons should be introduced to key concepts such as supervised learning, data labeling, algorithmic bias, and model performance metrics. The objective is not to turn surgeons into programmers but to adequately equip them to critically assess and apply AI tools in practice. This transformation must be supported by institutions, professional societies, and certifying bodies. Educational curricula should incorporate dedicated modules on medical AI, hands-on workshops, and simulation platforms that allow learners to interact with AI in realistic clinical scenarios. Interdisciplinary training, involving collaboration with data scientists and engineers, will help ensure that AI development remains grounded in surgical needs.
Familiarity and understanding build confidence, and confidence builds trust. Surgeons who have participated in the design or validation of AI tools are more likely to use them effectively. Understanding how a recommendation was generated enables clinicians to interpret it within the broader context of patient care. This is particularly important in thoracic surgery, where decisions often depend on subtle anatomical and physiological variables. Trust in AI must be earned through repeated exposure, demonstrable reliability, and transparency in performance.
Future directions and opportunities
Despite significant challenges, the future of AI in thoracic surgery remains promising. AI technologies are positioned to reshape not only surgical techniques but also the way patients are evaluated, monitored, and managed throughout the continuum of care. A key area of advancement is real-time intraoperative AI. With progress in video analysis, computer vision, and augmented reality, AI systems may soon assist surgeons by identifying anatomical landmarks, flagging unsafe maneuvers, and offering workflow-specific guidance. In thoracic surgery, where precision is vital and anatomical variability is common, such tools could reduce intraoperative errors and enhance surgical safety.
Postoperative care also stands to benefit from AI integration. Predictive models could stratify patients by complication risk, enabling targeted follow-up and early intervention. When combined with wearable biosensors and remote monitoring platforms, AI could interpret real-time data on vital signs, respiratory function, and activity levels, alerting clinicians to deviations from expected recovery trajectories and potentially preventing readmissions.
Personalized surgical planning represents another frontier. By synthesizing imaging, comorbidities, performance status, anatomic variants and genomic data, AI can generate tailored operative strategies. These platforms may eventually guide not only procedure selection but also port placement, dissection strategies, and instrument choice, offering support for both expert surgeons and trainees.
AI is also poised to transform surgical education. Machine learning and virtual reality can power adaptive simulation environments that replicate complex cases and provide targeted feedback. In thoracic surgery, where access to high-volume training is limited, these tools may help standardize skill acquisition and reduce variability in technical performance among new graduates.
Beyond technical applications, AI could improve surgical workflow and communication. Natural language processing may assist with perioperative documentation, while AI-enhanced scheduling systems could optimize operating room efficiency. Large language models trained on clinical guidelines and case data may even support shared decision-making by providing structured, evidence-based responses to complex questions.
Realizing these opportunities will require sustained buy-in from all stakeholders, along with investment, interdisciplinary collaboration, and adaptive regulatory frameworks. Surgical societies must help establish standards and ethical guidelines, while institutions must support implementation through infrastructure, training, and policy readiness. AI will not replace thoracic surgeons, but rather, it will extend their capabilities, enabling safer, more precise, and more individualized care. It is incumbent upon thoracic surgeons to seek AI literacy and actively engage with opportunities toward AI development and assimilation into the surgical workflow. The challenge lies in ensuring that innovation respects the fundamental principles of surgical practice: judgment, empathy, accountability, and excellence. If this balance can be achieved, AI will not just augment surgery; it will elevate it.
Acknowledgments
None.
Footnote
Provenance and Peer Review: This article was commissioned by the editorial office, Current Challenges in Thoracic Surgery. The article has undergone external peer review.
Peer Review File: Available at https://ccts.amegroups.com/article/view/10.21037/ccts-25-23/prf
Funding: None.
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://ccts.amegroups.com/article/view/10.21037/ccts-25-23/coif). The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Song Z, Izhar A, Wei B. Artificial intelligence-assisted three-dimensional reconstruction in thoracic surgery: a narrative review. Curr Chall Thorac Surg 2025;7:6.
- Cusumano G, D'Arrigo S, Terminella A, et al. Artificial Intelligence Applications for Thoracic Surgeons: "The Phenomenal Cosmic Powers of the Magic Lamp". J Clin Med 2024;13:3750. [Crossref] [PubMed]
- Aleem MU, Khan JA, Younes A, et al. Enhancing Thoracic Surgery with AI: A Review of Current Practices and Emerging Trends. Curr Oncol 2024;31:6232-44. [Crossref] [PubMed]
- Chen Z, Zhang Y, Yan Z, et al. Artificial intelligence assisted display in thoracic surgery: development and possibilities. J Thorac Dis 2021;13:6994-7005. [Crossref] [PubMed]
- Etienne H, Hamdi S, Le Roux M, et al. Artificial intelligence in thoracic surgery: past, present, perspective and limits. Eur Respir Rev 2020;29:200010. [Crossref] [PubMed]
- Cestonaro C, Delicati A, Marcante B, et al. Defining medical liability when artificial intelligence is applied on diagnostic algorithms: a systematic review. Front Med (Lausanne) 2023;10:1305756. [Crossref] [PubMed]
- Ahmed MI, Spooner B, Isherwood J, et al. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus 2023;15:e46454. [Crossref] [PubMed]
- Abbaker N, Minervini F, Guttadauro A, et al. The future of artificial intelligence in thoracic surgery for non-small cell lung cancer treatment: a narrative review. Front Oncol 2024;14:1347464. [Crossref] [PubMed]
- Fernandez FG, Falcoz PE, Kozower BD, et al. The Society of Thoracic Surgeons and the European Society of Thoracic Surgeons general thoracic surgery databases: joint standardization of variable definitions and terminology. Ann Thorac Surg 2015;99:368-76. [Crossref] [PubMed]
- Viswanathan VS, Parmar V, Madabhushi A. Towards equitable AI in oncology. Nat Rev Clin Oncol 2024;21:628-37. [Crossref] [PubMed]
- Chin MH, Afsar-Manesh N, Bierman AS, et al. Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA Netw Open 2023;6:e2345050. [Crossref] [PubMed]
- Ferryman K, Mackintosh M, Ghassemi M. Considering Biased Data as Informative Artifacts in AI-Assisted Health Care. N Engl J Med 2023;389:833-8. [Crossref] [PubMed]
- Rojas JC, Fahrenbach J, Makhni S, et al. Framework for Integrating Equity Into Machine Learning Models: A Case Study. Chest 2022;161:1621-7. [Crossref] [PubMed]
- Douglas MJ, Callcut R, Celi LA, et al. Interpretation and Use of Applied/Operational Machine Learning and Artificial Intelligence in Surgery. Surg Clin North Am 2023;103:317-33. [Crossref] [PubMed]
- Mumtaz H, Saqib M, Ansar F, et al. The future of Cardiothoracic surgery in Artificial intelligence. Ann Med Surg (Lond) 2022;80:104251. [Crossref] [PubMed]
- Seastedt KP, Moukheiber D, Mahindre SA, et al. A scoping review of artificial intelligence applications in thoracic surgery. Eur J Cardiothorac Surg 2022;61:239-48. [Crossref] [PubMed]
- Ross C, Swetlitz I. IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show. STAT News, 2018.
- Nesa L, Rony MKK, Chowdhury S, et al. Artificial Intelligence in Healthcare: A Scoping Review of Medical Professionals' Acceptance and Institutional Challenges in Implementation. J Eval Clin Pract 2025;31:e70170. [Crossref] [PubMed]
- Kamel Rahimi A, Pienaar O, Ghadimi M, et al. Implementing AI in Hospitals to Achieve a Learning Health System: Systematic Review of Current Enablers and Barriers. J Med Internet Res 2024;26:e49655. [Crossref] [PubMed]
- Nair M, Svedberg P, Larsson I, et al. A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design. PLoS One 2024;19:e0305949. [Crossref] [PubMed]
- Poon EG, Lemak CH, Rojas JC, et al. Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges. J Am Med Inform Assoc 2025;32:1093-100. [Crossref] [PubMed]
- Ayoub NF, Rameau A, Brenner MJ, et al. American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) Report on Artificial Intelligence. Otolaryngol Head Neck Surg 2025;172:734-43. [Crossref] [PubMed]
- Warraich HJ, Tazbaz T, Califf RM. FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine. JAMA 2025;333:241-7. [Crossref] [PubMed]
- Bellini V, Valente M, Del Rio P, et al. Artificial intelligence in thoracic surgery: a narrative review. J Thorac Dis 2021;13:6963-75. [Crossref] [PubMed]
- Rose SL, Shapiro D. An Ethically Supported Framework for Determining Patient Notification and Informed Consent Practices When Using Artificial Intelligence in Health Care. Chest 2024;166:572-8. [Crossref] [PubMed]
- Mello MM, Char D, Xu SH. Ethical Obligations to Inform Patients About Use of AI Tools. JAMA 2025; Epub ahead of print. [Crossref]
- Elendu C, Amaechi DC, Elendu TC, et al. Ethical implications of AI and robotics in healthcare: A review. Medicine (Baltimore) 2023;102:e36671. [Crossref] [PubMed]
- Lennartz S, Dratsch T, Zopfs D, et al. Use and Control of Artificial Intelligence in Patients Across the Medical Workflow: Single-Center Questionnaire Study of Patient Perspectives. J Med Internet Res 2021;23:e24221. [Crossref] [PubMed]
- Naik N, Hameed BMZ, Shetty DK, et al. Legal and Ethical Consideration in Artificial Intelligence in Healthcare: Who Takes Responsibility? Front Surg 2022;9:862322. [Crossref] [PubMed]
- Morris MX, Song EY, Rajesh A, et al. Ethical, Legal, and Financial Considerations of Artificial Intelligence in Surgery. Am Surg 2023;89:55-60. [Crossref] [PubMed]
- Geppert J, Asgharzadeh A, Brown A, et al. Software using artificial intelligence for nodule and cancer detection in CT lung cancer screening: systematic review of test accuracy studies. Thorax 2024;79:1040-9. [Crossref] [PubMed]
- Yoo H, Kim KH, Singh R, et al. Validation of a Deep Learning Algorithm for the Detection of Malignant Pulmonary Nodules in Chest Radiographs. JAMA Netw Open 2020;3:e2017135. [Crossref] [PubMed]
- Wulaningsih W, Villamaria C, Akram A, et al. Deep Learning Models for Predicting Malignancy Risk in CT-Detected Pulmonary Nodules: A Systematic Review and Meta-analysis. Lung 2024;202:625-36. [Crossref] [PubMed]
- Nam JG, Hwang EJ, Kim J, et al. AI Improves Nodule Detection on Chest Radiographs in a Health Screening Population: A Randomized Controlled Trial. Radiology 2023;307:e221894. [Crossref] [PubMed]
- Murchison JT, Ritchie G, Senyszak D, et al. Validation of a deep learning computer aided system for CT based lung nodule detection, classification, and growth rate estimation in a routine clinical population. PLoS One 2022;17:e0266799. [Crossref] [PubMed]
- Milam ME, Koo CW. The current status and future of FDA-approved artificial intelligence tools in chest radiology in the United States. Clin Radiol 2023;78:115-22. [Crossref] [PubMed]
- Ost DE. Artificial intelligence applications for the diagnosis of pulmonary nodules. Curr Opin Pulm Med 2025;31:344-51. [Crossref] [PubMed]
Cite this article as: Gritsiuta AI, Su S, Petrov RV. Navigating the challenges of artificial intelligence integration in thoracic surgery. Curr Chall Thorac Surg 2025;7:31.