<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Archiving and Interchange DTD v2.3 20070202//EN" "archivearticle.dtd">
<article article-type="methods-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Int J Public Health</journal-id>
<journal-title>International Journal of Public Health</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Int J Public Health</abbrev-journal-title>
<issn pub-type="epub">1661-8564</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1607317</article-id>
<article-id pub-id-type="doi">10.3389/ijph.2024.1607317</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Public Health Archive</subject>
<subj-group>
<subject>Hints and Kinks</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A Methodology for Using Large Language Models to Create User-Friendly Applications for Medicaid Redetermination and Other Social Services</article-title>
<alt-title alt-title-type="left-running-head">Ratna et al.</alt-title>
<alt-title alt-title-type="right-running-head">LLMs for Medicaid Redetermination</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Ratna</surname>
<given-names>Sumanth</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2773744/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Weeks</surname>
<given-names>William B.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2547228/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ferres</surname>
<given-names>Juan Lavista</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Chopra</surname>
<given-names>Aneesh</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Pereira</surname>
<given-names>Mayana</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>
<institution>Department of Computer Science, Yale University</institution>, <addr-line>New Haven</addr-line>, <addr-line>CT</addr-line>, <country>United States</country>
</aff>
<aff id="aff2">
<sup>2</sup>
<institution>Microsoft, AI for Good Lab</institution>, <addr-line>Redmond</addr-line>, <addr-line>WA</addr-line>, <country>United States</country>
</aff>
<aff id="aff3">
<sup>3</sup>
<institution>CareJourney</institution>, <addr-line>Arlington</addr-line>, <addr-line>VA</addr-line>, <country>United States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/943079/overview">Nino Kuenzli</ext-link>, Swiss Tropical and Public Health Institute (Swiss TPH), Switzerland</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1713841/overview">Marlene Joannie Bewa</ext-link>, University of South Florida, United States</p>
<p>One reviewer who chose to remain anonymous</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: William B. Weeks, <email>wiweeks@microsoft.com</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>16</day>
<month>08</month>
<year>2024</year>
</pub-date>
<pub-date pub-type="collection">
<year>2024</year>
</pub-date>
<volume>69</volume>
<elocation-id>1607317</elocation-id>
<history>
<date date-type="received">
<day>25</day>
<month>03</month>
<year>2024</year>
</date>
<date date-type="accepted">
<day>12</day>
<month>08</month>
<year>2024</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2024 Ratna, Weeks, Ferres, Chopra and Pereira.</copyright-statement>
<copyright-year>2024</copyright-year>
<copyright-holder>Ratna, Weeks, Ferres, Chopra and Pereira</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<kwd-group>
<kwd>public health</kwd>
<kwd>access to health services</kwd>
<kwd>health service research</kwd>
<kwd>artificial intelligence (AI)</kwd>
<kwd>Medicaid</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>Background</title>
<p>Following the unwinding of Medicaid&#x2019;s continuous enrollment provision, states must redetermine Medicaid eligibility, creating uncertainty about coverage [<xref ref-type="bibr" rid="B1">1</xref>] and the widespread administrative removal of beneficiaries from rolls [<xref ref-type="bibr" rid="B2">2</xref>].</p>
<p>Existing research demonstrates that Large Language Models (LLMs) can automate clinical trial eligibility query extraction [<xref ref-type="bibr" rid="B3">3</xref>], generation [<xref ref-type="bibr" rid="B4">4</xref>], and classification [<xref ref-type="bibr" rid="B5">5</xref>]. Because Medicaid redetermination follows eligibility rules similar to those used in clinical trials, we hypothesized that LLMs could support Medicaid redetermination as well.</p>
<p>Therefore, using the State of Washington, South Carolina, and North Dakota as examples, we applied LLMs to extract Medicaid rules from publicly available documents and transform those rules into a web application that could allow users to determine whether they are eligible for Medicaid. This paper describes the methodology we used.</p>
</sec>
<sec sec-type="methods" id="s2">
<title>Methods</title>
<p>Using publicly available HyperText Markup Language (HTML) web pages that describe Medicaid eligibility rules as inputs to LLM interactions, we used OpenAI GPT-4o and a rule extraction process to distill those documents into eligibility rules and to convert the rules into Python code embedded in a deployable application (<xref ref-type="fig" rid="F1">Figure 1</xref>). We designed the application so that user-provided personal details trigger the rules that determine eligibility status.</p>
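<p>To make the pipeline&#x2019;s target output concrete, the following is a minimal sketch of the kind of eligibility function GPT-4o was prompted to generate; the function name, criteria, and income thresholds are hypothetical illustrations, not actual Medicaid rules.</p>

```python
# Illustrative sketch only: a hypothetical eligibility function of the kind
# the pipeline asks GPT-4o to generate. The age criterion and income limits
# below are invented for illustration and are NOT actual Medicaid rules.

def check_eligibility(age: int, monthly_income: float, household_size: int) -> bool:
    """Return True if the user meets the (hypothetical) program criteria."""
    income_limit = 1700 + 600 * (household_size - 1)  # hypothetical limit
    return age >= 18 and monthly_income <= income_limit

if __name__ == "__main__":
    print(check_eligibility(age=30, monthly_income=1500.0, household_size=1))
```

<p>In the deployed application, user-provided answers to interactive prompts supply the arguments to functions of this shape, one per Medicaid program.</p>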
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Diagram of our workflow (United States, 2024). The process that we used to develop an application starts with collecting documents available in HTML or other formats (Step 1). We used the text in the HTML as input to ChatGPT prompts; ChatGPT then extracted Medicaid rules from the text and transformed those rules into Python code (Step 2). The interaction with ChatGPT generated deployable Python code as output (Step 3) which, when deployed as an interactive application, can collect user information to determine Medicaid eligibility status (Step 4).</p>
</caption>
<graphic xlink:href="ijph-69-1607317-g001.tif"/>
</fig>
<p>To demonstrate the generalizability of our pipeline, we studied three states: the State of Washington (where each Medicaid program has its own eligibility webpage), North Dakota (where a single webpage broadly describes eligibility for all Medicaid programs in a single section), and South Carolina (where a single webpage defines eligibility for each Medicaid program in order).</p>
<p>We evaluated the quality of the produced code by calculating the average time in minutes (over five attempts) that it took one of us (SR) to make the code functional across several scenarios. Functional code satisfies three properties: scope, meaning the program implements eligibility calculation for all Medicaid programs provided as input; accuracy, meaning the program&#x2019;s eligibility calculations align with the natural language rules provided as input; and specificity, meaning the program identifies which specific Medicaid program the user is eligible for. We discretized the time needed to make each program &#x201c;functional&#x201d; by binning as follows: programs that required no modification received a score of 2; programs that required more than zero but less than 3&#xa0;min of modification received a score of 1; and programs that needed 3&#xa0;min or longer received a score of 0. Higher scores thus correspond to higher-quality LLM outputs.</p>
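<p>The binning and averaging described above can be sketched as a short scoring routine (a minimal sketch; the function names <monospace>bin_score</monospace> and <monospace>average_score</monospace> are our illustrative choices, not part of the published pipeline).</p>

```python
def bin_score(minutes_to_fix: float) -> int:
    """Bin the minutes of manual modification needed into a quality score:
    2 = no modification, 1 = under 3 minutes, 0 = 3 minutes or more."""
    if minutes_to_fix == 0:
        return 2
    if minutes_to_fix < 3:
        return 1
    return 0

def average_score(attempt_minutes: list) -> float:
    """Average the binned scores over repeated attempts (five in this study)."""
    return sum(bin_score(m) for m in attempt_minutes) / len(attempt_minutes)

# Example: five attempts needing 0, 2, 0, 0, and 4 minutes of modification
print(average_score([0, 2, 0, 0, 4]))  # (2 + 1 + 2 + 2 + 0) / 5 = 1.4
```

<p>The cell values reported in Tables 1 and 2 are averages of this form.</p>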
<p>We also studied the effects of two programmer-set parameters on results. The first was temperature, which controls the amount of randomness in the output: a temperature of 0.0 generates more deterministic responses, while a temperature of 1.0 generates more varied responses. The second was &#x201c;top p&#x201d; (nucleus sampling), which restricts token selection to the smallest set of candidates whose cumulative probability exceeds p: a &#x201c;top p&#x201d; of 1.0 permits the full range of responses, while a &#x201c;top p&#x201d; near 0.0 restricts output to the most probable tokens, producing responses that do not elaborate beyond the defined criteria.</p>
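<p>The two parameters define a small grid of experimental settings (three values each, per Tables 1 and 2). A minimal sketch of enumerating that grid follows; the dictionary keys mirror the common sampling-parameter names (<monospace>temperature</monospace>, <monospace>top_p</monospace>), and how they are passed to a particular model API should be checked against that API&#x2019;s documentation.</p>

```python
from itertools import product

# Parameter values taken from the experiment grids in Tables 1 and 2.
TEMPERATURES = (0.0, 0.5, 1.0)
TOP_P_VALUES = (0.0, 0.5, 1.0)

# One settings dict per experiment (a specific temperature/"top p" combination).
experiment_settings = [
    {"temperature": t, "top_p": p} for t, p in product(TEMPERATURES, TOP_P_VALUES)
]
print(len(experiment_settings))  # 3 x 3 = 9 combinations per state
```
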
<p>For North Dakota and South Carolina, we used GPT-4o in a two-step pipeline: first, to extract eligibility rules in natural language; second, to implement those rules in Python 3 as an application for public use.</p>
<p>For the State of Washington, we introduced another variable into the pipeline: the order of rule extraction. One method converted input to rules for each Medicaid category, concatenated the eligibility rule guidelines, and then asked GPT-4o to write Python code implementing those guidelines (&#x201c;Combine to Python&#x201d;); the other method converted input to Python directly, concatenated the Python snippets, and asked GPT-4o to combine the snippets (&#x201c;Python to Combine&#x201d;). We calculated the cost of running our pipeline according to OpenAI&#x2019;s publicly available pricing model.</p>
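<p>The two orderings can be sketched as follows, with the LLM call replaced by a stub so that only the control flow differs; <monospace>call_llm</monospace> is a placeholder for an actual GPT-4o request, and the prompt strings are paraphrases for illustration, not the prompts used in the study.</p>

```python
# Sketch of the two rule-extraction orderings. call_llm is a stand-in for a
# real GPT-4o chat-completion request; prompts are paraphrased, not verbatim.

def call_llm(prompt: str) -> str:
    return f"<LLM output for: {prompt[:40]}...>"  # placeholder stub

def combine_to_python(category_pages: list) -> str:
    """'Combine to Python': extract natural-language rules per category,
    concatenate the rules, then ask for one Python implementation."""
    rules = [call_llm(f"Extract eligibility rules from: {page}") for page in category_pages]
    return call_llm("Write Python implementing these rules: " + "\n".join(rules))

def python_to_combine(category_pages: list) -> str:
    """'Python to Combine': convert each category to Python first,
    then ask for the snippets to be merged into one program."""
    snippets = [call_llm(f"Write Python eligibility code from: {page}") for page in category_pages]
    return call_llm("Combine these Python snippets: " + "\n".join(snippets))
```
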
<p>Our study used publicly available data and was exempt from human subjects review.</p>
</sec>
<sec sec-type="results" id="s3">
<title>Results</title>
<p>For both North Dakota and South Carolina, we found optimal results at a temperature of 0.5 and a &#x201c;top p&#x201d; of 0.0 (<xref ref-type="table" rid="T1">Table 1</xref>). This aligned with our expectations: we anticipated that a moderate temperature would allow creativity in code generation while maintaining accuracy.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Experiment results for North Dakota and South Carolina. (North Dakota and South Carolina, United States, 2024).</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="3" colspan="2" align="left"/>
<th colspan="3" align="center">North Dakota</th>
<th colspan="3" align="center">South Carolina</th>
</tr>
<tr>
<th colspan="3" align="center">Temperature value</th>
<th colspan="3" align="center">Temperature value</th>
</tr>
<tr>
<th align="center">0.0</th>
<th align="center">0.5</th>
<th align="center">1.0</th>
<th align="center">0.0</th>
<th align="center">0.5</th>
<th align="center">1.0</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="3" align="left">&#x201c;top p&#x201d; value</td>
<td align="center">
<bold>1.0</bold>
</td>
<td align="center">1.2</td>
<td align="center">1.2</td>
<td align="center">2.0</td>
<td align="center">1.2</td>
<td align="center">1.6</td>
<td align="center">1.4</td>
</tr>
<tr>
<td align="center">
<bold>0.5</bold>
</td>
<td align="center">1.4</td>
<td align="center">1.4</td>
<td align="center">2.0</td>
<td align="center">1.6</td>
<td align="center">1.6</td>
<td align="center">1.0</td>
</tr>
<tr>
<td align="center">
<bold>0.0</bold>
</td>
<td align="center">2.0</td>
<td align="center">
<bold>2.0</bold>
</td>
<td align="center">1.8</td>
<td align="center">1.8</td>
<td align="center">
<bold>2.0</bold>
</td>
<td align="center">1.6</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>The values show the average of the binned scores across five attempts, where higher scores correspond to fewer minutes needed to modify the resultant Python code into a functional and accurate application for end users. We show results across several values of temperature (which controls the amount of randomness in the output) and &#x201c;top p&#x201d; (which restricts token selection to the most probable candidates). For each state, the optimal result is in bold.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>For the State of Washington, we found that there was no significant difference between &#x201c;Combine to Python&#x201d; and &#x201c;Python to Combine.&#x201d; We found that moderate temperature (at a value of 0.5) and &#x201c;top p&#x201d; (at a value of 0.5) produced high-quality implementations that were specific and required no human corrections (<xref ref-type="table" rid="T2">Table 2</xref>).</p>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Experiment results for the State of Washington. (State of Washington, United States, 2024).</p>
</caption>
<table>
<thead valign="top">
<tr>
<th rowspan="3" colspan="2" align="left"/>
<th colspan="3" align="center">&#x201c;Combine to Python&#x201d;</th>
<th colspan="3" align="center">&#x201c;Python to combine&#x201d;</th>
</tr>
<tr>
<th colspan="3" align="center">Temperature value</th>
<th colspan="3" align="center">Temperature value</th>
</tr>
<tr>
<th align="center">0.0</th>
<th align="center">0.5</th>
<th align="center">1.0</th>
<th align="center">0.0</th>
<th align="center">0.5</th>
<th align="center">1.0</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td rowspan="3" align="left">&#x201c;top p&#x201d; value</td>
<td align="center">
<bold>1.0</bold>
</td>
<td align="center">1.2</td>
<td align="center">1.4</td>
<td align="center">1.0</td>
<td align="center">1.0</td>
<td align="center">1.4</td>
<td align="center">1.0</td>
</tr>
<tr>
<td align="center">
<bold>0.5</bold>
</td>
<td align="center">1.2</td>
<td align="center">1.0</td>
<td align="center">1.4</td>
<td align="center">1.2</td>
<td align="center">1.2</td>
<td align="center">1.4</td>
</tr>
<tr>
<td align="center">
<bold>0.0</bold>
</td>
<td align="center">1.4</td>
<td align="center">
<bold>1.8</bold>
</td>
<td align="center">1.4</td>
<td align="center">1.4</td>
<td align="center">
<bold>1.8</bold>
</td>
<td align="center">1.0</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn>
<p>The values show the average of the binned scores across five attempts, where higher scores correspond to fewer minutes needed to modify the resultant Python code into a functional and accurate application for end users. We show results across several values of temperature (which controls the amount of randomness in the output) and &#x201c;top p&#x201d; (which restricts token selection to the most probable candidates). The optimal result for each approach is in bold.</p>
</fn>
</table-wrap-foot>
</table-wrap>
<p>The cost per experiment (a specific temperature and &#x201c;top p&#x201d; combination) was approximately $0.07 for North Dakota, $0.17 for South Carolina, and $0.23 for the State of Washington.</p>
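<p>Per-experiment cost under per-token pricing reduces to simple arithmetic, sketched below; the per-1,000-token rates and token counts shown are hypothetical placeholders, and current prices should be taken from OpenAI&#x2019;s published pricing page rather than from this example.</p>

```python
# Back-of-the-envelope cost estimate per experiment under per-token pricing.
# The rates and token counts below are hypothetical placeholders; substitute
# the provider's current prices and your measured token usage.

def experiment_cost(input_tokens: int, output_tokens: int,
                    usd_per_1k_in: float, usd_per_1k_out: float) -> float:
    """Total USD cost of one experiment given token usage and per-1k rates."""
    return input_tokens / 1000 * usd_per_1k_in + output_tokens / 1000 * usd_per_1k_out

# e.g., 10,000 input tokens and 2,000 output tokens at hypothetical rates:
print(round(experiment_cost(10_000, 2_000, 0.005, 0.015), 3))
```
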
</sec>
<sec sec-type="discussion" id="s4">
<title>Discussion</title>
<p>For three states, we used publicly available information on Medicaid eligibility criteria and OpenAI GPT-4o to generate Python code that created interactive applications to help determine Medicaid eligibility. We could do so relatively easily and in a replicable way that could improve service delivery efficiency, potentially while reducing errors caused by manual processing. Our pipeline performed well across all three states, suggesting that our methods are generalizable, and was inexpensive.</p>
<p>Overall, the methodology that we describe could readily be used by states to develop easily understood guidance for potential beneficiaries of a variety of state-run programs. To ensure the reliability and trustworthiness of the system, states implementing this process should address the limitations of LLMs, including potential inaccuracies and fabricated content. Further, should they choose to use this method, states should follow ethical guidelines and proper deployment procedures and should rigorously verify the accuracy of the application in determining Medicaid eligibility.</p>
<p>This general framework might be applicable to multiple government eligibility processes, and, if applied widely, could result in services that are more accessible, transparent, and efficient than traditional methods. The methodology facilitates development of applications in several languages, which could particularly benefit beneficiaries with limited English proficiency, who are 5.3 times more likely to lose Medicaid benefits than English-proficient beneficiaries [<xref ref-type="bibr" rid="B6">6</xref>]. At virtually no cost and with little effort, a process like the one described here might be used to integrate LLMs into healthcare decision support [<xref ref-type="bibr" rid="B7">7</xref>], ease the burden on individuals navigating bureaucratic processes in a variety of social services settings, and foster equitable access to health and other benefits.</p>
</sec>
</body>
<back>
<sec id="s5">
<title>Author Contributions</title>
<p>All authors contributed to the conception and design of the study. SR and MP had access to the data and conducted the analyses. WW, SR, and MP drafted the manuscript. All authors contributed to the article and approved the submitted version.</p>
</sec>
<sec sec-type="funding-information" id="s6">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<sec sec-type="COI-statement" id="s7">
<title>Conflict of Interest</title>
<p>WW, JF, and MP were employed by Microsoft, and AC was employed by CareJourney.</p>
<p>The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<label>1.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Haley</surname>
<given-names>JM</given-names>
</name>
<name>
<surname>Karpman</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Kenney</surname>
<given-names>GM</given-names>
</name>
<name>
<surname>Zuckerman</surname>
<given-names>S</given-names>
</name>
</person-group>. <source>Most Adults in Medicaid-Enrolled Families Are Unaware of Medicaid Renewals Resuming in the Future</source>. <publisher-loc>Washington, DC</publisher-loc>: <publisher-name>Urban Institute</publisher-name> (<year>2022</year>). <comment>Available from: <ext-link ext-link-type="uri" xlink:href="https://www.rwjf.org/en/insights/our-research/2022/11/most-adults-in-medicaid-enrolled-families-are-unaware-of-medicaid-renewals-resuming-in-the-future.html#:%7E:text=Most%20adults%20with%20family%20Medicaid%20enrollment%20were%20not,and%2015.7%20percent%20reported%20hearing%20only%20a%20little">https://www.rwjf.org/en/insights/our-research/2022/11/most-adults-in-medicaid-enrolled-families-are-unaware-of-medicaid-renewals-resuming-in-the-future.html&#x23;:&#x223c;:text&#x3d;Most%20adults%20with%20family%20Medicaid%20enrollment%20were%20not,and%2015.7%20percent%20reported%20hearing%20only%20a%20little</ext-link> (Accessed May 10, 2023)</comment>.</citation>
</ref>
<ref id="B2">
<label>2.</label>
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Tolbert</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Ammula</surname>
<given-names>M</given-names>
</name>
</person-group>. <source>10 Things to Know About the Unwinding of the Medicaid Continuous Enrollment Provision</source>. <publisher-loc>San Francisco, CA</publisher-loc>: <publisher-name>Kaiser Family Foundation</publisher-name> (<year>2023</year>). <comment>Available from: <ext-link ext-link-type="uri" xlink:href="https://www.kff.org/medicaid/issue-brief/10-things-to-know-about-the-unwinding-of-the-medicaid-continuous-enrollment-provision/">https://www.kff.org/medicaid/issue-brief/10-things-to-know-about-the-unwinding-of-the-medicaid-continuous-enrollment-provision/</ext-link> (Accessed May 10, 2023)</comment>.</citation>
</ref>
<ref id="B3">
<label>3.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Datta</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>K</given-names>
</name>
<name>
<surname>Paek</surname>
<given-names>H</given-names>
</name>
<name>
<surname>Manion</surname>
<given-names>FJ</given-names>
</name>
<name>
<surname>Ofoegbu</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Du</surname>
<given-names>J</given-names>
</name>
<etal/>
</person-group> <article-title>AutoCriteria: A Generalizable Clinical Trial Eligibility Criteria Extraction System Powered by Large Language Models</article-title>. <source>J Am Med Inform Assoc</source> (<year>2024</year>) <volume>31</volume>(<issue>2</issue>):<fpage>375</fpage>&#x2013;<lpage>85</lpage>. <pub-id pub-id-type="doi">10.1093/jamia/ocad218</pub-id>
</citation>
</ref>
<ref id="B4">
<label>4.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Park</surname>
<given-names>J</given-names>
</name>
<name>
<surname>Fang</surname>
<given-names>Y</given-names>
</name>
<name>
<surname>Ta</surname>
<given-names>C</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>G</given-names>
</name>
<name>
<surname>Idnay</surname>
<given-names>B</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>F</given-names>
</name>
<etal/>
</person-group> <article-title>Criteria2Query 3.0: Leveraging Generative Large Language Models for Clinical Trial Eligibility Query Generation</article-title>. <source>J Biomed Inform</source> (<year>2024</year>) <volume>154</volume>:<fpage>104649</fpage>. <pub-id pub-id-type="doi">10.1016/j.jbi.2024.104649</pub-id>
</citation>
</ref>
<ref id="B5">
<label>5.</label>
<citation citation-type="confproc">
<person-group person-group-type="author">
<name>
<surname>Devi</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Uttrani</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Singla</surname>
<given-names>A</given-names>
</name>
<name>
<surname>Jha</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Dasgupta</surname>
<given-names>N</given-names>
</name>
<name>
<surname>Natarajan</surname>
<given-names>S</given-names>
</name>
<etal/>
</person-group> <article-title>Automating Clinical Trial Eligibility Screening: Quantitative Analysis of Gpt Models Versus Human Expertise</article-title>. In: <conf-name>Proceedings of the 17th International Conference on PErvasive Technologies Related to Assistive Environments</conf-name>. <publisher-name>ACM</publisher-name> (<year>2024</year>). p. <fpage>626</fpage>&#x2013;<lpage>32</lpage>. <pub-id pub-id-type="doi">10.1145/3652037.3663922</pub-id>
</citation>
</ref>
<ref id="B6">
<label>6.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mirza</surname>
<given-names>M</given-names>
</name>
<name>
<surname>Harrison</surname>
<given-names>EA</given-names>
</name>
<name>
<surname>Qui&#xf1;ones</surname>
<given-names>L</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>H</given-names>
</name>
</person-group>. <article-title>Medicaid Redetermination and Renewal Experiences of Limited English Proficient Beneficiaries in Illinois</article-title>. <source>J Immigrant Minor Health</source> (<year>2022</year>) <volume>24</volume>(<issue>1</issue>):<fpage>145</fpage>&#x2013;<lpage>53</lpage>. <pub-id pub-id-type="doi">10.1007/s10903-021-01178-8</pub-id>
</citation>
</ref>
<ref id="B7">
<label>7.</label>
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gottlieb</surname>
<given-names>S</given-names>
</name>
<name>
<surname>Silvis</surname>
<given-names>L</given-names>
</name>
</person-group>. <article-title>How to Safely Integrate Large Language Models Into Health Care</article-title>. <source>JAMA Health Forum</source> (<year>2023</year>) <volume>4</volume>(<issue>9</issue>):<fpage>e233909</fpage>. <pub-id pub-id-type="doi">10.1001/jamahealthforum.2023.3909</pub-id>
</citation>
</ref>
</ref-list>
</back>
</article>