Abstract
Objectives:
The rapid spread of artificial intelligence (AI) in healthcare has increased interest in how the public views and trusts these technologies. However, tools designed to measure these perceptions in the Turkish context remain limited. This study aimed to develop a valid and reliable scale to assess public perceptions of AI in healthcare.
Methods:
An initial set of 41 items was created based on the literature and expert input. Data were collected from 404 adults in Turkey and divided into two groups. Exploratory factor analysis was conducted in the first group, followed by confirmatory factor analysis in the second group to test the factor structure, validity, and reliability of the scale.
Results:
Exploratory factor analysis showed that the scale has a three-factor structure reflecting attitudes and acceptance, trust, and perceived usefulness. This structure explained 74.35% of the total variance. Confirmatory factor analysis supported this model with an acceptable level of fit, and the results also showed that the scale had strong internal consistency.
Conclusion:
The Social Perception of AI in Healthcare Scale (SPAIHS) is a psychometrically sound instrument for assessing public perceptions of AI in healthcare.
Introduction
Artificial intelligence (AI) has recently played a decisive role in the digitalization of healthcare services. AI is used in many areas of healthcare, ranging from medical imaging and clinical decision-support systems to the analysis of large health datasets and population health management. With the help of deep learning, AI can now interpret medical images with an accuracy similar to that of human experts, highlighting its growing role in clinical practice [1].
AI can process health data in digital systems and help predict the likelihood of disease. It can also model different treatment options and their possible outcomes, which improves the overall efficiency of healthcare services [2]. Because of these capabilities, AI is no longer seen as just a technical add-on. It is now viewed as a structural change that shapes how healthcare is delivered, supports clinical decision making, and improves the patient experience [3]. In Turkey, artificial intelligence is increasingly being used in healthcare management, especially in areas such as resource planning, clinical decision support, digital transformation, and public health. At the same time, its growing use has raised concerns about ethics, accountability, transparency, and how the healthcare workforce will adapt [4]. This rapid, AI-driven transformation not only improves the efficiency of healthcare services but also draws attention to how the public perceives and makes sense of these changes [5].
Studies show that people’s views on AI in healthcare are shaped not only by how well it performs in medical decision making but also by concerns about the reduced role of human involvement [6]. Research also suggests that people feel more uneasy when medical decisions are made by machines, and they are often reluctant to accept systems that might replace human judgment, especially in critical or life-saving situations [7]. These findings suggest that even though AI is widely seen as useful and capable, many people are still hesitant to place full trust in it. This hesitation is shaped by ethical concerns, worries about data privacy, and a range of psychological and emotional factors.
When people think about AI and their attitudes toward it, trust stands out as one of the most important issues. Uncertainty about how algorithms work, the possibility of errors, unclear responsibility, potential bias, and concerns about data privacy can all weaken people’s trust in these systems [8]. Research suggests that people are more likely to trust AI when its decisions are easier to understand and when the way the system works is more transparent [9].
However, even when AI produces highly accurate technical results, this alone is not enough to earn the public’s full trust. When it comes to their health, people also want to feel understood and listened to, something they usually expect from human interaction [10]. In this context, even though AI is generally seen as beneficial, building full trust in these systems remains difficult at the moment. Trust, which lies at the heart of this discussion, cannot be understood from a single point of view: it is a multidimensional perception shaped by privacy, transparency, fairness, a sense of control, and the possibility of errors [5].
Research on attitudes toward AI shows that people often experience both positive feelings, such as hope, and negative feelings, such as anxiety, at the same time. For example, one study found that although many people are willing to benefit from the opportunities AI offers in healthcare, they also have serious concerns about fully adopting it, mainly because of unclear issues related to responsibility, ethics, and data privacy [11]. In a similar way, other studies have shown that people’s views of AI are shaped by their past experiences with healthcare. They are also influenced by how much individuals value privacy and confidentiality, their general acceptance of technology, and their concerns about risk [12, 13]. These findings show that public attitudes toward AI are diverse and complex, and they cannot be explained only in technical terms.
Studies examining patients’ attitudes toward AI suggest that trust or distrust is not determined solely by expectations of technological performance. Trust is also shaped by how safe patients feel during their care, whether they worry about possible bias in algorithms, concerns that AI might increase costs, fears about the protection of personal data, and the sense that their autonomy could be threatened [14].
Supporting this view, one study found that patients showed only limited trust in medical decisions made solely by AI. However, their trust increased noticeably when they were told that a physician had reviewed the AI’s recommendations [15]. These findings suggest that people’s views of AI are shaped not only by how well it performs in clinical settings but also by a wider set of psychological and social factors.
International studies [5, 16] have examined attitudes toward AI, including trust and acceptance, across different population groups. However, research that looks at these issues in a multidimensional way at the societal level is still very limited in the Turkish context. This gap in the literature provided the basis for the present study. Accordingly, the study aimed to develop a comprehensive tool to measure how society perceives artificial intelligence in healthcare.
The aim of this study was to develop a valid and reliable scale to measure how people perceive the use of artificial intelligence in healthcare. In particular, the study focused on people’s attitudes toward AI, how much they trust it, and how useful they believe it to be. The study also examined whether the scale reflects these different aspects of public perception in a clear and consistent way.
Methods
Study Design and Participants
This psychometric study was conducted using a cross-sectional research design to examine societal perceptions of artificial intelligence (AI) in healthcare. Data were collected in Hatay, a province located in the Mediterranean Region of Turkey.
Individuals aged 18–60 who were literate, able to speak and understand Turkish, residing in Turkey, and willing to participate voluntarily were included in the study. Based on expert opinion, individuals over the age of 60 were excluded due to potential limitations in technology use.
In scale development studies, it is recommended that the sample size should be at least five to ten times the number of items [17]. For the 41-item draft scale, this corresponded to a target sample size of 205–410 participants. A total of 404 individuals were included in the study, which met this recommended range.
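The rule of thumb above is simple arithmetic; as a minimal illustrative sketch, the target range for a 41-item draft can be computed as:

```python
def sample_size_range(n_items, low=5, high=10):
    """Recommended sample-size range for scale development:
    5 to 10 participants per draft item (rule of thumb)."""
    return n_items * low, n_items * high

lo, hi = sample_size_range(41)
print(lo, hi)  # 205 410
```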
Data Collection Instruments
Personal Information Form
The personal information form consisted of nine items addressing age, gender, marital status, education level, employment status, income evaluation, frequency of digital technology use in daily life, level of knowledge about artificial intelligence, and whether participants had ever used AI-based health technologies such as the e-Nabız personal health record system (Ministry of Health, Turkey), step counters, or smart watches. These questions were designed to provide a detailed overview of participants’ personal and technological profiles in line with the aims of the study.
Social Perception of AI in Healthcare Scale (SPAIHS)
The initial item pool of the scale was developed through a comprehensive review of national and international literature on the use of AI in healthcare. Searches conducted in PubMed, Scopus, Web of Science, and Google Scholar using keywords such as “artificial intelligence,” “AI in healthcare,” “public perception,” “trust in AI,” “privacy concerns,” “technology acceptance,” and “perceived usefulness” revealed key themes related to perceived usefulness, trust and privacy, risk and uncertainty, and attitudes and acceptance toward AI.
In constructing the item pool, empirical findings reported in the literature, conceptual explanations, results from review studies, theoretical frameworks in healthcare AI and digital health, and the researcher’s prior academic experience in this field were taken into consideration. These sources were compared to generate original items representing each theme, differentiate overlapping content, and ensure the use of accessible language appropriate for individuals with varying levels of literacy. As a result, a theoretical pool of 41 items was created.
To assess content validity, the draft scale was reviewed by a panel of six experts, including two associate professors specializing in measurement and evaluation, three assistant professors working in healthcare AI and digital health, and one Turkish language specialist.
The draft scale and an evaluation form based on the Davis method [18] were sent to the experts, who were given 10 days to complete their review. The experts suggested limiting the age range to 60 years, removing unnecessary conjunctions or words, correcting expression errors, and adapting the language to ensure comprehensibility for the general population. The relevant items were revised in line with these recommendations. The I-CVI and S-CVI/Ave values calculated using the Davis method were within acceptable ranges. The scale was designed in a five-point Likert format, scored as 1 = “Strongly Disagree,” 2 = “Disagree,” 3 = “Neutral,” 4 = “Agree,” and 5 = “Strongly Agree.” Following expert evaluation and content validity analysis, the scale items were subjected to factor analysis to assess construct validity.
Data Collection
Data were collected online via Google Forms between 1 August and 15 September 2025 using a cross-sectional design. Convenience and snowball sampling methods were employed. During the data collection process, the survey link was shared through the researcher’s social and professional networks, as well as via WhatsApp and social media platforms, and posted in various online community groups. Participants were asked to forward the survey link to other individuals who might be willing to participate. This approach aimed to reach adult individuals residing in Hatay Province, Turkey.
Eligibility criteria included being 18–60 years of age, living in Turkey, and being able to read and understand Turkish. Participation was voluntary, and respondents could proceed with the survey only after selecting the statement, “I voluntarily agree to participate in this study.”
The survey consisted of two sections: a personal information form and the Social Perception of AI in Healthcare Scale (SPAIHS). To protect participants’ privacy, no identifying information such as IP addresses or personal identifiers was collected. The survey link was shared with approximately 650 individuals, and a total of 404 completed responses were obtained. On average, participants took about 8 minutes to complete the survey.
Data Analysis
IBM SPSS 26 and AMOS 24 were used for the analyses. The total sample of 404 participants was randomly split into two equal groups of 202. The first group was used for exploratory factor analysis to identify the underlying structure of the scale. Before this analysis, sampling adequacy was checked using the Kaiser–Meyer–Olkin (KMO) measure and Bartlett’s test of sphericity [19, 20].
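The random split and the two sampling-adequacy checks described here can be sketched with simulated data. The implementations below follow the standard KMO and Bartlett's-sphericity formulas and are illustrative only, not the study's actual SPSS analysis:

```python
import numpy as np

def bartlett_sphericity(X):
    """Bartlett's test of sphericity:
    chi2 = -[(n-1) - (2p+5)/6] * ln|R|, with df = p(p-1)/2."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    chi2 = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    return chi2, p * (p - 1) // 2

def kmo(X):
    """Kaiser-Meyer-Olkin measure: squared correlations relative to
    squared correlations plus squared partial correlations."""
    R = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                      # anti-image (partial) correlations
    off = ~np.eye(R.shape[0], dtype=bool)   # off-diagonal entries only
    r2, p2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

rng = np.random.default_rng(0)
factor = rng.normal(size=(404, 1))            # one shared latent factor
X = factor + rng.normal(size=(404, 6))        # 6 correlated simulated items
idx = rng.permutation(404)                    # random split, as in the study
g1, g2 = X[idx[:202]], X[idx[202:]]
chi2, df = bartlett_sphericity(g1)
print(round(kmo(g1), 3), round(chi2, 1), df)
```

A KMO value well above 0.80 and a significant Bartlett chi-square would, as in the study, indicate that the correlation matrix is suitable for factoring.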
During the EFA, factor loadings, eigenvalues, explained variance, communalities, and item–factor relationships were examined, and items were revised based on the observed structure. To confirm the factor structure identified through EFA, Confirmatory Factor Analysis (CFA) was conducted on Group 2, an independent subsample. Model fit was evaluated using commonly recommended fit indices (e.g., CMIN/df, IFI, TLI, CFI, RMSEA), and the compatibility of the model with the theoretical structure was assessed.
To support construct validity, Composite Reliability (CR) and Average Variance Extracted (AVE) values were calculated to assess convergent validity. Discriminant validity was examined by comparing the square roots of the AVE values with the correlations between the factors. Internal consistency was evaluated using Cronbach’s alpha for both the overall scale and each subdimension. A significance level of p < 0.05 was used for all item analyses [21].
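As an illustration of how CR and AVE are derived from standardized factor loadings (the loadings below are hypothetical, not the study's), a minimal sketch:

```python
import numpy as np

def cr_ave(loadings):
    """Composite reliability and average variance extracted from
    standardized loadings, with error variance taken as 1 - lambda^2:
    CR  = (sum lam)^2 / ((sum lam)^2 + sum(1 - lam^2))
    AVE = mean(lam^2)"""
    lam = np.asarray(loadings, dtype=float)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    ave = (lam ** 2).mean()
    return cr, ave

# hypothetical standardized loadings for one subdimension
cr, ave = cr_ave([0.92, 0.88, 0.85, 0.80, 0.76])
print(round(cr, 3), round(ave, 3))
```

Values of CR above 0.70 and AVE above 0.50 are the usual thresholds for convergent validity.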
Research Ethics
The study was conducted in accordance with the principles of the Declaration of Helsinki. Prior to data collection, ethical approval was obtained from the Scientific Research and Publication Ethics Committee of Mustafa Kemal University, Social and Human Sciences (meeting date: 01.08.2025; meeting number: 10; decision no: 07). All participants provided informed consent by reading the consent statement and selecting the option “I voluntarily agree to participate in this study” before beginning the survey.
Results
Socio-Demographic Characteristics of the Participants
A total of 404 participants were included in the study. Regarding age distribution, 49.3% were between 18 and 29 years, 38.9% were between 30 and 44 years, and 11.9% were between 45 and 60 years. Of the participants, 55% were women, 52.2% were single, and 59.9% were university graduates. Additionally, 52% reported being employed, and 60.9% described their income level as moderate.
A total of 74.5% of participants indicated that they used digital technologies continuously in daily life, and 32.2% stated that they had an advanced level of knowledge about artificial intelligence. In contrast, 54.5% reported that they rarely used AI-based health technologies (Supplementary Table S1).
Construct Validity: Exploratory Factor Analysis (EFA) (N = 202)
To assess the suitability of the data for factor analysis, the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy and Bartlett’s Test of Sphericity were examined. The KMO value was 0.902, and values above 0.80 are described in the literature as indicating “very good” sampling adequacy. Bartlett’s Test of Sphericity yielded χ2(820) = 8376.904, p < 0.001, demonstrating that the correlations among variables were appropriate for factor analysis. Based on these results, the dataset was considered suitable for conducting the EFA [22].
The results of the EFA indicated that the scale consisted of a three-factor structure, and the factor loadings are presented in Table 1. In naming the factors, the shared conceptual meaning of the items grouped under each dimension, as well as relevant theoretical insights from the literature, were taken into account. The first factor consisted of items A35, A36, A40, A39, A32, A37, A38, and A41, and was labeled “Attitudes and Acceptance.” Factor loadings ranged from 0.679 to 0.955, and this factor alone explained 41.41% of the total variance. The second factor comprised items A26, A28, A29, A27, A25, A23, and A30, and was labeled “Trust,” as it reflected participants’ confidence in AI. Factor loadings ranged from 0.665 to 0.905, explaining 24.77% of the total variance. The third factor included items A2, A5, A6, A10, A9, A7, and A11, and was labeled “Perceived Usefulness,” as it represented perceptions of AI’s contribution to healthcare. Factor loadings ranged from −0.957 to −0.556, accounting for 8.16% of the variance. Together, the three factors explained 74.35% of the total variance, indicating that the factor structure of the scale is consistent with the theoretically proposed model [19].
TABLE 1
| Scale items | CVI | F1 | F2 | F3 |
|---|---|---|---|---|
| A35 | 1.00 | 0.955 | | |
| A36 | 1.00 | 0.942 | | |
| A40 | 1.00 | 0.907 | | |
| A39 | 0.83 | 0.831 | | |
| A32 | 1.00 | 0.790 | | |
| A37 | 1.00 | 0.742 | | |
| A38 | 1.00 | 0.689 | | |
| A41 | 1.00 | 0.679 | | |
| A26 | 1.00 | | 0.905 | |
| A28 | 1.00 | | 0.902 | |
| A29 | 1.00 | | 0.896 | |
| A27 | 0.83 | | 0.859 | |
| A25 | 1.00 | | 0.838 | |
| A23 | 1.00 | | 0.802 | |
| A30 | 0.83 | | 0.665 | |
| A2 | 0.83 | | | −0.560 |
| A5 | 1.00 | | | −0.957 |
| A6 | 1.00 | | | −0.925 |
| A10 | 0.83 | | | −0.718 |
| A9 | 1.00 | | | −0.702 |
| A7 | 1.00 | | | −0.587 |
| A11 | 0.83 | | | −0.556 |
| Eigenvalue | | 9.112 | 5.450 | 1.797 |
| % of variance | | 41.417 | 24.773 | 8.167 |
| Cumulative % | | 41.417 | 66.190 | 74.358 |
Factor loadings and explained variance of the scale (Hatay, Turkey, 2025).
F1, Attitudes and Acceptance; F2, Trust; F3, Perceived Usefulness; CVI, Content Validity Index. Items with factor loadings above 0.40 were retained.
During the EFA, a total of 17 items were removed based on factor loadings, cross-loadings, and theoretical consistency. Two items (A14 and A24) were excluded because their factor loadings were below 0.40. Several items (A18, A21, A22, A17, A33, and A34) were removed because they loaded on more than one factor and blurred the factor structure. The remaining items (A1, A3, A4, A8, A13, A15, A16, A19, and A20) were excluded because they did not fit well with the theoretical framework of the scale.
As a result of these steps, the EFA yielded a 24-item, three-factor structure. In the subsequent CFA, two additional items (A12 and A31) were removed due to high error covariances and modification indices that negatively affected model fit. Thus, a final 22-item, three-dimensional scale that was both statistically and theoretically robust was obtained. No reverse-coded items are included in the scale.
Confirmatory Factor Analysis (CFA) (N = 202)
The validity of the three-factor structure identified through EFA was tested using Confirmatory Factor Analysis (CFA), as shown in Figure 1. The analysis indicated the following model fit indices: χ2/df = 2.877, CFI = 0.920, TLI = 0.909, IFI = 0.921, NFI = 0.883, RMSEA = 0.097, and RMR = 0.066.
FIGURE 1

CFA path diagram of the scale (N:202) (Hatay, Turkey, 2025).
Overall, the CFI, TLI, and IFI values were above 0.90, indicating an acceptable fit of the model [23, 24]. The χ2/df ratio and RMR value also supported this conclusion. Although the NFI value was slightly below the ideal level, this index is known to be sensitive to sample size and model complexity, so it was not considered a sufficient reason to reject the model [25, 26]. The RMSEA value was within the range considered acceptable in the literature [27].
Reliability
The internal consistency reliability of the scale was evaluated using Cronbach’s alpha coefficients. According to commonly used guidelines, Cronbach’s alpha values above 0.70 indicate acceptable reliability, and values above 0.90 indicate very strong internal consistency [28, 29].
In this study, the Cronbach’s alpha coefficient was 0.952 for the first factor, 0.938 for the second factor, and 0.927 for the third factor. The alpha coefficient for the overall scale was 0.917, indicating that the items consistently measure the same construct at a high level. All alpha values were above 0.90, indicating that the three subdimensions of the scale were internally consistent and reliable.
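For reference, Cronbach's alpha can be computed directly from item scores. The sketch below applies the standard variance-ratio formula to simulated data, not the study's dataset:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances /
    variance of the total score)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(1)
factor = rng.normal(size=(202, 1))            # one shared latent factor
X = factor + 0.5 * rng.normal(size=(202, 8))  # 8 highly consistent items
print(round(cronbach_alpha(X), 3))
```

With strongly intercorrelated items such as these, alpha lands above the 0.90 threshold described in the text.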
As shown in Table 2, positive correlations were observed among the three subdimensions of the scale—Attitudes and Acceptance (AA), Trust (TR), and Perceived Usefulness (PU)—in the expected directions. The strong correlation between AA and PU (r = 0.804) suggests that individuals who perceive AI as useful are more likely to adopt it and develop positive attitudes toward its use. In contrast, the low correlations of the Trust subdimension with both AA (r = 0.114) and PU (r = 0.203) indicate that trust operates more independently from the other dimensions.
TABLE 2
| Subdimension | AA | TR | PU | CR | AVE | CA |
|---|---|---|---|---|---|---|
| AA | 0.792 | 0.114 | 0.804 | 0.954 | 0.668 | 0.952 |
| TR | 0.114 | 0.870 | 0.203 | 0.956 | 0.758 | 0.938 |
| PU | 0.804 | 0.203 | 0.801 | 0.927 | 0.609 | 0.927 |
Convergent and discriminant validity statistics (Hatay, Turkey, 2025).
AA, Attitudes and Acceptance; TR, Trust; PU, Perceived Usefulness; CR, Composite Reliability; AVE, Average Variance Extracted; CA, Cronbach’s Alpha.
The √AVE values used to assess discriminant validity showed that each subdimension explains its own items better than it explains the items of other subdimensions. For each factor, the √AVE value was higher than its correlations with the other factors, demonstrating that discriminant validity was achieved [22, 30]. Thus, the subdimensions were confirmed to be conceptually distinct from one another.
To evaluate convergent validity, the AVE values (ranging from 0.60 to 0.76) were examined and found to exceed the acceptable threshold of 0.50. This finding indicates that the items within each subdimension adequately represent their respective constructs [31].
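The Fornell-Larcker comparison described above reduces to a simple matrix check. The AVE values and inter-factor correlations below are illustrative placeholders, not the reported Table 2 figures:

```python
import numpy as np

def fornell_larcker(ave, corr):
    """Fornell-Larcker criterion: sqrt(AVE) of each factor must exceed
    its absolute correlation with every other factor."""
    root = np.sqrt(np.asarray(ave, dtype=float))
    corr = np.asarray(corr, dtype=float)
    ok = True
    for i in range(len(root)):
        for j in range(len(root)):
            if i != j and root[i] <= abs(corr[i, j]):
                ok = False
    return ok

# hypothetical AVE values and inter-factor correlation matrix
ave = [0.67, 0.76, 0.61]
corr = np.array([[1.0, 0.11, 0.60],
                 [0.11, 1.0, 0.20],
                 [0.60, 0.20, 1.0]])
print(fornell_larcker(ave, corr))
```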
The high values observed in both Cronbach’s alpha (CA) and Composite Reliability (CR) analyses (CA: 0.927–0.952; CR: 0.927–0.956) indicate that the scale possesses a highly consistent and reliable structure [20, 32]. The highest reliability values were observed in the “Attitudes and Acceptance” and “Trust” subdimensions, indicating that the items within these dimensions are highly consistent with one another.
Overall, these findings demonstrate that the scale possesses both convergent and discriminant validity and that all three subdimensions support the theoretically proposed structure. Together, these results indicate that the scale shows strong reliability as well as satisfactory convergent and discriminant validity.
As shown in Table 3, participants generally demonstrated positive attitudes toward the use of artificial intelligence in healthcare (X̄ = 3.58; SD = 0.61). Among the subdimensions, the “Attitudes and Acceptance” dimension had the highest mean score (X̄ = 4.20), indicating that participants broadly embraced healthcare-related AI applications and showed a favorable inclination toward their use.
TABLE 3
| Scale and subdimensions | Number of items | Mean | SD | Skewness | Kurtosis |
|---|---|---|---|---|---|
| SPAIHS (total) | 22 | 3.58 | 0.61 | −0.523 | 1.306 |
| Attitudes and acceptance | 8 | 4.20 | 0.84 | −1.113 | 0.619 |
| Trust | 7 | 2.50 | 0.99 | 0.661 | −0.184 |
| Perceived usefulness | 6 | 3.95 | 0.79 | −0.739 | 0.131 |
Descriptive statistics of the scale (Hatay, Turkey, 2025).
The “Perceived Usefulness” dimension also had a high mean score (X̄ = 3.95), suggesting that participants view AI as functional, beneficial, and capable of improving healthcare services. In contrast, the mean score for the “Trust” dimension was considerably lower (X̄ = 2.50). This finding implies that although participants hold positive attitudes toward AI-based health applications, they remain cautious regarding issues such as privacy, data security, and confidentiality.
All Skewness and Kurtosis values for the scale and its subdimensions were within the range of −1.5 to +1.5, indicating that the assumption of normal distribution was met. Overall, these findings show that participants perceive AI applications as useful and acceptable, yet exhibit a more cautious stance regarding the trust dimension [33].
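The normality screen described above amounts to computing moment-based skewness and excess kurtosis and checking them against the ±1.5 rule of thumb. The sketch below uses simulated scores, not the study data:

```python
import numpy as np

def skew_kurtosis(x):
    """Moment-based skewness and excess kurtosis of a score vector."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3

rng = np.random.default_rng(2)
scores = rng.normal(3.6, 0.6, size=404)       # simulated scale totals
sk, ku = skew_kurtosis(scores)
print(all(-1.5 < v < 1.5 for v in (sk, ku)))  # True for near-normal data
```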
Discussion
This study examined the validity and reliability of the SPAIHS, a scale developed to measure societal perceptions of artificial intelligence in healthcare. Participants generally reported positive views about the use of AI in healthcare. Perceived usefulness and acceptance were particularly high. However, trust scores were lower. This shows that seeing AI as beneficial does not necessarily mean trusting it. This pattern is consistent with what has been described as the medical AI paradox, in which people recognize the benefits of AI, such as speed and accuracy, yet remain reluctant to place full trust in it [7, 34].
The three-factor structure of the SPAIHS aligns with previous scale development studies demonstrating that attitudes toward AI are multidimensional. Similar patterns have been observed in studies measuring nursing students’ attitudes toward AI. These studies have identified dimensions such as benefit, perceived risk, and willingness to use AI [35]. Studies among dentistry students report a similar pattern. Although AI is seen as a tool that can accelerate treatment, concerns remain about professional roles and data privacy [36]. These results show that the dimensions of the SPAIHS are both statistically sound and theoretically well grounded.
The high Perceived Usefulness scores observed in this study are consistent with international evidence suggesting that AI contributes to more accurate diagnoses, facilitates patient monitoring, and enhances the speed of healthcare delivery. A recent meta-analysis showed that AI performs especially well in imaging and early diagnostic applications. This has contributed to a more positive public perception [37]. An analysis of social media content similarly reported that the majority of posts related to medical AI were positive and conveyed a sense of optimism [38]. Several studies also show that the public expects AI to accelerate healthcare processes and reduce errors [5]. These results, in line with the existing literature, indicate that participants believe AI to be a functional and practical tool in healthcare.
In contrast, the low scores observed in the Trust dimension suggest ongoing concerns regarding privacy, the possibility that data may be used for purposes beyond their original intent, and uncertainties about data security. Privacy is not merely a technical matter. It is a multilayered construct that includes cultural, social, and ethical dimensions. Several studies show that trust in healthcare-related AI depends on more than technical performance [39]. Transparency, data protection, accountability, and ethical regulation are also important. Trust in AI systems does not rely only on accuracy. People also want clear explanations about how the system works. They want to know who controls the data and how it is used [40]. Studies also show that responsibility is a major concern [41, 42]. Many people are unsure who would be held accountable if an AI system makes a mistake.
When the Turkish context is considered, the reasons behind the prominence of these concerns become clearer. The health informatics system in Turkey has long operated in a centralized structure. This has led to citizens experiencing frequent and obligatory interactions with digital health platforms such as e-Nabız and MHRS. This structure makes it easier to adopt new technologies. However, it also raises concerns about who controls health data, how it is processed, and how it may be used in the future. Recent legal and ethical analyses have emphasized the need for greater clarity in this area. This includes issues such as the secondary use of personal health data, anonymization procedures, and cross-border data transfer [43, 44]. In addition, the fact that trust in government institutions and trust in digital applications are not at the same level in Turkey contributes to individuals approaching the use of artificial intelligence in healthcare with greater caution.
The results of this study show that perceptions of AI in healthcare in Turkey are similar to those in many other countries. They fall between technological optimism and caution driven by privacy concerns. Participants strongly believe that AI will add value to healthcare services; however, they become more apprehensive when the processing of their own health data is involved. As noted in international research, improving AI solely at the technical level is not sufficient to enhance public trust. For individuals to feel confident in AI systems, data processing practices must be clearly explained. The basis of system decision-making should be understandable. Accountability in the event of an error must be defined, and ethical frameworks should be transparent and publicly known [39]. Clearly defining these frameworks and communicating them to the public are critically important for strengthening societal trust and facilitating the integration of AI into healthcare services.
In this study, data were mainly collected through WhatsApp and social media. This likely increased the participation of people who actively use digital technologies. The literature indicates that people who are more immersed in digital tools tend to adopt AI applications more readily [45, 46]. Therefore, the high scores observed in the “Attitudes and Acceptance” and “Perceived Usefulness” dimensions may be related to the digitally advantaged nature of the sample. Individuals who are more enthusiastic about technology may show lower levels of trust. They tend to be more sensitive to issues such as privacy, data security, and the functioning of algorithms [47, 48]. This pattern may also help explain why the Trust dimension showed lower scores in our study. Additionally, research indicates that studies involving older adults or groups with limited digital experience typically report lower levels of technology acceptance [49, 50]. Therefore, caution should be exercised when generalizing these results to the broader population, and future research should include more diverse samples.
The results obtained in this study should be interpreted in light of certain sampling-related considerations. Although participation was not formally restricted to residents of a single province, the data collection process was conducted through social networks and online communities centered in Hatay Province. As a result, the sample may reflect regional and contextual characteristics specific to Hatay Province. Convenience and snowball sampling were used together with online recruitment. This may have increased the number of participants who are more digitally engaged and familiar with technology. These factors may limit the generalizability of the findings to the broader Turkish population. Therefore, the results should be interpreted with caution.
Future studies are encouraged to validate the scale using more diverse samples drawn from different regions and sociodemographic backgrounds.
Strengths and Limitations
One of the main strengths of this study is the systematic development and psychometric evaluation of a scale specifically designed to assess societal perceptions of artificial intelligence in healthcare. The use of a sufficiently large sample and the division of the dataset into independent subsamples for exploratory and confirmatory factor analyses contribute to the robustness of the scale’s factor structure. In addition, the simultaneous reporting of multiple reliability and validity indicators provides strong evidence for the internal consistency and construct validity of the instrument.
Despite these strengths, several limitations should be considered. Data collection was conducted via social networks and online communities largely centered in Hatay Province, which may have introduced a degree of geographic concentration and could limit the generalizability of the findings to the wider Turkish population. Moreover, the reliance on convenience and snowball sampling combined with online recruitment may have resulted in the overrepresentation of individuals with higher levels of digital engagement. Finally, the cross-sectional design precludes the assessment of temporal changes in perceptions of artificial intelligence in healthcare.
Conclusion
This study comprehensively evaluated the psychometric properties of the SPAIHS, a scale developed to measure societal perceptions of artificial intelligence (AI) in healthcare. The three-factor structure of the scale, which includes attitudes and acceptance, trust, and perceived usefulness, was supported by both statistical analyses and theory. The outcomes of the exploratory and confirmatory factor analyses support the scale’s construct validity, while the high Cronbach’s alpha, CR, and AVE values indicate strong internal consistency and convergent validity.
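For readers less familiar with these indices, composite reliability (CR) and average variance extracted (AVE) are conventionally computed from standardized factor loadings following Fornell and Larcker [30]. The sketch below states the standard formulas (not values specific to the SPAIHS), where \(\lambda_i\) is the standardized loading of item \(i\) on its factor and \(k\) is the number of items in that factor:

```latex
% Composite reliability (CR) and average variance extracted (AVE)
% for one factor with k items and standardized loadings \lambda_i;
% (1 - \lambda_i^2) is the error variance of item i.
\mathrm{CR} = \frac{\left(\sum_{i=1}^{k}\lambda_i\right)^{2}}
                   {\left(\sum_{i=1}^{k}\lambda_i\right)^{2}
                    + \sum_{i=1}^{k}\left(1-\lambda_i^{2}\right)}
\qquad
\mathrm{AVE} = \frac{1}{k}\sum_{i=1}^{k}\lambda_i^{2}
```

Values of CR at or above 0.70 and AVE at or above 0.50 are the conventional benchmarks for internal consistency and convergent validity against which the scale's results are described as strong.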
Participants generally expressed positive attitudes toward the use of AI in healthcare, with particularly high scores in the usefulness and acceptance dimensions. However, the comparatively lower trust scores suggest that technological optimism coexists with caution regarding privacy and data security. This pattern highlights the need for more transparent and explanatory policies concerning the processing and protection of health data in the context of Turkey.
The SPAIHS provides a reliable way to assess how the public views and trusts artificial intelligence in healthcare across different dimensions. The scale has the potential to contribute meaningfully to monitoring public opinion, planning educational and awareness initiatives, informing policy development, and guiding strategies aimed at enhancing the acceptance of AI applications in healthcare. Future studies applying the scale across different cultures, age groups, and socio-demographic profiles will enable the production of comparative data and support a broader understanding of the societal impacts of AI in healthcare.
Statements
Data availability statement
The data supporting this study’s findings are available from the corresponding author upon reasonable request.
Ethics statement
The study was conducted in accordance with the principles of the Declaration of Helsinki. Prior to data collection, ethical approval was obtained from the Hatay Mustafa Kemal University Social and Human Sciences Scientific Research and Publication Ethics Committee (meeting date: 01.08.2025; meeting number: 10; decision no: 07). All participants provided informed consent by reading the consent statement and selecting the option “I voluntarily agree to participate in this study” before beginning the survey.
Author contributions
FNKŞ conceived and designed the study, collected the data, performed the statistical analyses, interpreted the findings, and drafted and approved the final manuscript.
Funding
The author(s) declared that financial support was not received for this work and/or its publication.
Conflict of interest
The author declares that they have no conflicts of interest.
Generative AI statement
The author(s) declared that generative AI was used in the creation of this manuscript. The author used GPT-5 (OpenAI, 2025) to improve the clarity of the text. The final content was reviewed and verified by the author.
Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.
Supplementary material
The Supplementary Material for this article can be found online at: https://www.ssph-journal.org/articles/10.3389/ijph.2026.1609194/full#supplementary-material
References
1.
Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, et al. A Guide to Deep Learning in Healthcare. Nat Med (2019) 25:24–9. 10.1038/s41591-018-0316-z
2.
Topol EJ. High-Performance Medicine: The Convergence of Human and Artificial Intelligence. Nat Med (2019) 25:44–56. 10.1038/s41591-018-0300-7
3.
Fahim YA, Hasani IW, Kabba S, Ragab WM. Artificial Intelligence in Healthcare and Medicine: Clinical Applications, Therapeutic Advances, and Future Perspectives. Eur J Med Res (2025) 30:848. 10.1186/s40001-025-03196-w
4.
Kuşcu Şahin FN. Dijital Dönüşümden Yapay Zekâya Yolculuk. J Acad Social Sci Stud (2025) 18:533–50. 10.29228/JASSS.82190
5.
Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, et al. Attitudes and Perception of Artificial Intelligence in Healthcare: A Cross-Sectional Survey Among Patients. Digit Health (2022) 8:20552076221116772. 10.1177/20552076221116772
6.
Li W, Liu X. Anxiety About Artificial Intelligence from Patient and Doctor-Physician. Patient Educ Couns (2025) 133:108619. 10.1016/j.pec.2024.108619
7.
Longoni C, Bonezzi A, Morewedge CK. Resistance to Medical Artificial Intelligence. J Consum Res (2019) 46:629–50. 10.1093/jcr/ucz013
8.
Kauttonen J, Rousi R, Alamäki A. Trust and Acceptance Challenges in the Adoption of AI Applications in Health Care: Quantitative Survey Analysis. J Med Internet Res (2025) 27:e65567. 10.2196/65567
9.
Gaube S, Suresh H, Raue M, Merritt A, Berkowitz SJ, Lermer E, et al. Do as AI Say: Susceptibility in Deployment of Clinical Decision-Aids. NPJ Digit Med (2021) 4:31. 10.1038/s41746-021-00385-9
10.
Kerasidou A. Artificial Intelligence and the Ongoing Need for Empathy, Compassion and Trust in Healthcare. Bull World Health Organ (2020) 98:245–50. 10.2471/BLT.19.237198
11.
Rojahn J, Palu A, Skiena S, Jones JJ. American Public Opinion on Artificial Intelligence in Healthcare. PLoS One (2023) 18:e0294028. 10.1371/journal.pone.0294028
12.
Frost EK, Bosward R, Aquino YSJ, Braunack-Mayer A, Carter SM. Public Views on Ethical Issues in Healthcare Artificial Intelligence: Protocol for a Scoping Review. Syst Rev (2022) 11:142. 10.1186/s13643-022-02012-4
13.
Witkowski K, Dougherty RB, Neely SR. Public Perceptions of Artificial Intelligence in Healthcare: Ethical Concerns and Opportunities for Patient-Centered Care. BMC Med Ethics (2024) 25:74. 10.1186/s12910-024-01066-4
14.
Richardson JP, Smith C, Curtis S, Watson S, Zhu X, Barry B, et al. Patient Apprehensions About the Use of Artificial Intelligence in Healthcare. NPJ Digit Med (2021) 4:140. 10.1038/s41746-021-00509-1
15.
Rodler S, Kopliku R, Ulrich D, Kaltenhauser A, Casuscelli J, Eismann L, et al. Patients’ Trust in Artificial Intelligence-Based Decision-Making for Localized Prostate Cancer: Results from a Prospective Trial. Eur Urol Focus (2024) 10:654–61. 10.1016/j.euf.2023.10.020
16.
Robertson C, Woods A, Bergstrand K, Findley J, Balser C, Slepian MJ. Diverse Patients’ Attitudes Towards Artificial Intelligence (AI) in Diagnosis. PLOS Digit Health (2023) 2:e0000237. 10.1371/journal.pdig.0000237
17.
VanVoorhis CRW, Morgan BL. Understanding Power and Rules of Thumb for Determining Sample Sizes. Tutor Quant Methods Psychol (2007) 3(2):43–50. 10.20982/tqmp.03.2.p043
18.
Davis LL. Instrument Review: Getting the Most from a Panel of Experts. Appl Nurs Res (1992) 5:194–7. 10.1016/s0897-1897(05)80008-4
19.
Tabachnick BG, Fidell LS. Using Multivariate Statistics. 7th ed. Boston, MA: Pearson (2019).
20.
Widaman KF, Helm JL. Exploratory Factor Analysis and Confirmatory Factor Analysis. In: APA Handbook of Research Methods in Psychology: Data Analysis and Research Publication. American Psychological Association (2023). p. 376–410.
21.
Shrestha N. Factor Analysis as a Tool for Survey Analysis. Am J Appl Math Stat (2021) 9:4–11. 10.12691/ajams-9-1-2
22.
Yaşlıoğlu M. Sosyal Bilimlerde Faktör Analizi ve Geçerlilik: Keşfedici ve Doğrulayıcı Faktör Analizlerinin Kullanılması. İstanbul Üniversitesi İşletme Fakültesi Dergisi (2017) 46:74–85.
23.
Byrne B. Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming. 2nd ed. New York, NY: Routledge (2010).
24.
Çapık C. Geçerlik ve Güvenirlik Çalışmalarında Doğrulayıcı Faktör Analizinin Kullanımı. Anadolu Hemşirelik ve Sağlık Bilimleri Dergisi (2014) 17:196–205.
25.
Bayram N. Yapısal Eşitlik Modellemesine Giriş. Bursa: Ezgi Kitabevi (2010).
26.
Şimşek ÖF. Yapısal Eşitlik Modellemesine Giriş: Temel İlkeler ve LISREL Uygulamaları. Ankara: Ekinoks Yayınları (2020).
27.
Meydan CH, Şeşen H. Yapısal Eşitlik Modellemesi – AMOS Uygulamaları. Ankara: Detay Yayıncılık (2015).
28.
Akgül A, Çevik O. Statistical Analysis Techniques: Business Management Applications in SPSS. Ankara (2005).
29.
Özdamar K. Package Programs and Statistical Data Analysis (Multivariate Analysis). Eskişehir, Turkey: Nisan Kitabevi (2001).
30.
Fornell C, Larcker DF. Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. J Mark Res (1981) 18:39–50. 10.2307/3151312
31.
Hair JF, Hult G. Partial Least Squares Structural Equation Modeling (PLS-SEM) Using R: A Workbook. Springer International Publishing (2021). 10.1007/978-3-030-80519-7
32.
Hair JF, Black WC, Babin BJ. Multivariate Data Analysis (2019).
33.
Uzunsakal E, Yıldız D. Alan Araştırmalarında Güvenilirlik Testlerinin Karşılaştırılması ve Tarımsal Veriler Üzerine Bir Uygulama. Uygulamalı Sosyal Bilimler Dergisi (2018) 2:14–28.
34.
Gao S, He L, Chen Y, Li D, Lai K. Public Perception of Artificial Intelligence in Medical Care: Content Analysis of Social Media. J Med Internet Res (2020) 22:e16649. 10.2196/16649
35.
Turan M, Cengiz Z. Nursing Students’ General Attitudes Towards Artificial Intelligence Scale (NGAAIS): A Turkish Validity and Reliability Study. Nurse Educ Pract (2025) 88:104574. 10.1016/j.nepr.2025.104574
36.
Yılmaz C, Erdem RZ, Uygun LA. Artificial Intelligence Knowledge, Attitudes and Application Perspectives of Undergraduate and Specialty Students of Faculty of Dentistry in Turkey: An Online Survey Research. BMC Med Educ (2024) 24:1149. 10.1186/s12909-024-06106-6
37.
Ali O, Abdelbaki W, Shrestha A, Elbasi E, Alryalat MAA, Dwivedi YK. A Systematic Literature Review of Artificial Intelligence in the Healthcare Sector: Benefits, Challenges, Methodologies, and Functionalities. J Innov Knowl (2023) 8:100333. 10.1016/j.jik.2023.100333
38.
Almanaa M. Trends and Public Perception of Artificial Intelligence in Medical Imaging: A Social Media Analysis. Cureus (2024) 16:e70008. 10.7759/cureus.70008
39.
Morley J, Floridi L. An Ethically Mindful Approach to AI for Health Care. Lancet (2020) 395:254–5. 10.1016/S0140-6736(19)32975-7
40.
Vayena E, Salathé M, Madoff LC, Brownstein JS. Ethical Challenges of Big Data in Public Health. PLoS Comput Biol (2015) 11:e1003904. 10.1371/journal.pcbi.1003904
41.
Gerke S, Minssen T, Cohen G. Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare. In: Bohr A, Memarzadeh K, editors. Artificial Intelligence in Healthcare. Academic Press (2020). p. 295–336.
42.
Young AT, Amara D, Bhattacharya A, Wei ML. Patient and General Public Attitudes Towards Clinical Artificial Intelligence: A Mixed Methods Systematic Review. Lancet Digit Health (2021) 3:e599–e611. 10.1016/S2589-7500(21)00132-1
43.
Ağın S, Orbatu D. Legal and Ethical Approaches to the Usage of Blockchain and Artificial Intelligence Technologies in Healthcare in the Scope of Personal Data Protection. J Behcet Uz Children’s Hosp (2025) 15:59–65. 10.4274/jbuch.galenos.2025.84665
44.
Uygun İlikhan S, Özer M, Tanberkan H, Bozkurt V. How to Mitigate the Risks of Deployment of Artificial Intelligence in Medicine? Turkish J Med Sci (2024) 54(3). 10.55730/1300-0144.5814
45.
Işık M, Çamur Ö. Yapay Zekâ ve Dijital Okuryazarlık: Akademik Çabada Yeni Dinamikler. Beykoz Akademi Dergisi (2024) 12:173–97. 10.14514/beykozad.1508294
46.
Dağaşan A. Dijital Okuryazarlık ile Yapay Zekâ Okuryazarlığı Arasındaki İlişkide Bilgi ve İletişim Teknolojilerine Yönelik Tutumun Aracı Rolü. Uluslararası Türkçe Edebiyat Kültür Eğitim Dergisi (2025) 14:238–51. 10.7884/teke.1628023
47.
Vayena E, Blasimme A, Cohen IG. Machine Learning in Medicine: Addressing Ethical Challenges. PLoS Med (2018) 15:e1002689. 10.1371/journal.pmed.1002689
48.
Nong P, Platt J. Patients’ Trust in Health Systems to Use Artificial Intelligence. JAMA Netw Open (2025) 8:e2460628. 10.1001/jamanetworkopen.2024.60628
49.
Knowles B, Hanson VL. Older Adults’ Deployment of ‘Distrust’. ACM Trans Comput-Hum Interact (2018) 25:1–25. 10.1145/3196490
50.
Wong AKC, Lee JHT, Zhao Y, Lu Q, Yang SH, Hui VCC. Exploring Older Adults’ Perspectives and Acceptance of AI-Driven Health Technologies: Qualitative Study. JMIR Aging (2025) 8:e66778. 10.2196/66778
Keywords
artificial intelligence, attitudes, healthcare delivery, psychometric measurement, scale development
Citation
Kuşcu Şahin FN (2026) Development and Examination of the Psychometric Properties of the Social Perception of Artificial Intelligence in Healthcare Scale in the Turkish Context: Evidence From Hatay Province. Int. J. Public Health 71:1609194. doi: 10.3389/ijph.2026.1609194
Received
13 October 2025
Revised
13 January 2026
Accepted
28 January 2026
Published
25 February 2026
Volume
71 - 2026
Edited by
Gabriel Gulis, University of Southern Denmark, Denmark
Reviewed by
Najmaddin A. H. Hatem, Hodeidah University, Yemen
Cornelius G. Wittal, Roche Pharma AG, Germany
Copyright
© 2026 Kuşcu Şahin.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Fatma Nuray Kuşcu Şahin, nuraykuscu@outlook.com
Disclaimer
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.