Texas Attorney General Settles with AI Developer Over Patient Safety Concerns
In a significant move for healthcare technology regulation, Texas Attorney General Ken Paxton has announced a settlement with Pieces Technologies, a Dallas-based artificial intelligence developer. The settlement resolves allegations that the company overstated the accuracy of its generative AI tools, potentially putting patients at risk. The case is an early test of how regulators will demand transparency and accountability from AI products deployed in healthcare.
The Role of Pieces Technologies
Pieces Technologies specializes in using generative AI to summarize electronic health record (EHR) data in real time. Its software is currently deployed in at least four hospitals across the state, where it summarizes patient conditions and treatments for clinical staff. The Attorney General’s office, however, questioned the accuracy of the company’s marketing claims, in particular its advertised “severe hallucination rate” of less than one per 100,000. A hallucination occurs when an AI-generated output deviates from the facts in the underlying record, a failure that in clinical settings can lead to dangerous misinterpretations.
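Whether such a claim holds depends entirely on how the metric is defined and measured, which is precisely what the settlement requires the company to disclose. As a purely illustrative sketch (the audit schema, field names, and numbers below are hypothetical, not Pieces’ actual methodology), a severe hallucination rate could be computed by having human reviewers label a sample of summaries and scaling the severe-error count to a per-100,000 basis:

```python
from dataclasses import dataclass

@dataclass
class AuditRecord:
    """One human-reviewed AI summary (hypothetical audit schema)."""
    summary_id: str
    hallucinated: bool = False  # output contradicts or invents facts
    severe: bool = False        # error could affect a clinical decision

def severe_hallucination_rate(records: list[AuditRecord]) -> float:
    """Severe hallucinations per 100,000 reviewed summaries."""
    if not records:
        raise ValueError("no audit records to score")
    severe_count = sum(1 for r in records if r.hallucinated and r.severe)
    return severe_count / len(records) * 100_000

# Example: 2 severe errors found in 250,000 reviewed summaries comes to
# 0.8 per 100,000, which would satisfy a "less than one per 100,000" claim.
records = [AuditRecord(f"s{i}") for i in range(250_000)]
records[0].hallucinated = records[0].severe = True
records[1].hallucinated = records[1].severe = True
print(severe_hallucination_rate(records))  # 0.8
```

Even this toy version shows why disclosure matters: the reported rate shifts dramatically depending on what counts as “severe,” who does the labeling, and how the reviewed sample is drawn.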
Settlement Terms and Company Response
While Pieces Technologies denied any wrongdoing or liability, the settlement stipulates that the company must "clearly and conspicuously disclose" the definition of its accuracy metrics and the methodology behind them. Should the company fail to meet these disclosure requirements, it is obligated to engage an independent third-party auditor to evaluate the performance and characteristics of its products and services. Pieces Technologies has agreed to comply with these provisions for a period of five years.
In a statement to Healthcare IT News, the company said the Attorney General’s announcement misrepresents the Assurance of Voluntary Compliance it entered into. Pieces Technologies emphasized its commitment to supporting additional oversight and regulation of clinical generative AI, and said it views the settlement as an opportunity to foster constructive dialogue on these issues.
The Importance of Transparency in AI
The implications of this settlement extend beyond Pieces Technologies. As generative AI becomes increasingly integrated into healthcare systems, questions about the accuracy and transparency of these models are coming to the forefront. A recent study from the University of Massachusetts Amherst and Mendel, an AI company focused on hallucination detection, reported troubling findings about AI-generated medical summaries. Researchers tested two large language models, OpenAI’s GPT-4 and Meta’s Llama-3, on 50 detailed medical notes. Both models produced numerous inaccuracies: GPT-4’s summaries included 21 with incorrect information and 50 with overgeneralizations, while Llama-3’s included 19 errors and 47 overgeneralizations.
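A quick back-of-the-envelope calculation puts those counts in perspective. The sketch below assumes each count is the number of the 50 summaries exhibiting that problem (the study’s exact counting unit is an assumption here, not something stated in this article’s sources):

```python
# Convert the reported study counts into per-summary rates,
# assuming each count refers to distinct summaries out of 50 notes.
NOTES = 50

study_counts = {
    "GPT-4":   {"incorrect information": 21, "overgeneralization": 50},
    "Llama-3": {"incorrect information": 19, "overgeneralization": 47},
}

for model, counts in study_counts.items():
    for category, n in counts.items():
        print(f"{model}: {n}/{NOTES} summaries with {category} ({n / NOTES:.0%})")
```

Under that reading, roughly four in ten summaries contained outright errors, numbers far removed from a “less than one per 100,000” failure rate.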
The Broader Context of AI in Healthcare
The reliability of AI tools that generate summaries from electronic health records remains a contentious issue. Dr. John Halamka, president of the Mayo Clinic Platform, has voiced concerns about the current state of generative AI, stating, "it’s not transparent, it’s not consistent, and it’s not reliable yet." This sentiment underscores the necessity for caution in selecting use cases for AI applications in healthcare.
To address these challenges, the Mayo Clinic Platform has developed a risk-classification system designed to evaluate algorithms before their external use. Dr. Sonya Makhni, the platform’s medical director, highlighted the importance of considering how AI solutions might impact clinical outcomes and the potential risks associated with incorrect or biased algorithms. She stressed that both developers and end-users share the responsibility of framing AI solutions in terms of risk.
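The article’s sources do not describe the rubric itself, but risk-classification schemes of this kind typically score an algorithm on factors such as how autonomously its output is used and how severe the harm would be if it were wrong. A minimal hypothetical sketch follows (the factors, weights, and tiers are illustrative, not the Mayo Clinic Platform’s actual system):

```python
def classify_risk(autonomy: int, harm_severity: int, patient_facing: bool) -> str:
    """Assign a coarse risk tier to a clinical AI algorithm.

    autonomy:      0 = human reviews every output ... 3 = fully autonomous
    harm_severity: 0 = negligible ... 3 = life-threatening if wrong
    (All factors and thresholds here are hypothetical.)
    """
    score = autonomy + harm_severity + (1 if patient_facing else 0)
    if score >= 5:
        return "high risk: independent validation before deployment"
    if score >= 3:
        return "moderate risk: monitored rollout with human review"
    return "low risk: standard quality assurance"

# An EHR-summarization tool whose output clinicians read but do not act
# on autonomously might score as follows under this toy rubric:
print(classify_risk(autonomy=1, harm_severity=3, patient_facing=False))
# -> "moderate risk: monitored rollout with human review"
```

Even a toy rubric like this reflects Dr. Makhni’s point: the same algorithm can warrant very different scrutiny depending on how its output feeds into clinical decisions.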
Regulatory Perspectives on AI in Healthcare
Texas Attorney General Ken Paxton’s statement regarding the settlement with Pieces Technologies reinforces the need for transparency in AI products used in high-risk settings. He emphasized that AI companies must be forthright about the risks, limitations, and appropriate use of their products. Furthermore, he urged hospitals and healthcare entities to carefully evaluate the suitability of AI products and ensure that their staff is adequately trained to use these technologies.
As the healthcare industry continues to embrace AI innovations, the conversation surrounding regulation, oversight, and ethical use of these technologies is more critical than ever. The settlement with Pieces Technologies serves as a reminder of the potential consequences of neglecting these responsibilities and the importance of safeguarding patient safety in an increasingly digital healthcare landscape.