
Assessing satisfaction in simulation among nursing students: psychometric properties of the Satisfaction with Simulation Experience - Italian Version scale



The Satisfaction with Simulation Experience scale is a 5-point Likert scale that measures students’ satisfaction with medium- and high-fidelity simulation scenarios. This study aims to investigate the psychometric properties of the Satisfaction with Simulation Experience - Italian Version scale.


A multi-centre cross-sectional study was conducted. The scale was administered to a sample of 266 undergraduate nursing students from two Italian universities after they attended a medium- or high-fidelity simulation session between November 2022 and March 2023. Cronbach’s alpha coefficient and item-total correlations were computed to assess internal consistency and reliability. The test-retest method was used to measure the scale’s stability over time, and confirmatory factor analysis was performed to verify construct validity.


The Cronbach’s alpha value was 0.94 for the overall scale, indicating excellent reliability, and it was 0.84 or higher for each subscale, indicating good reliability. A large correlation coefficient of 0.60 or higher was found between each item and its subscale and between each item and the overall scale score. A medium test-retest correlation coefficient was found for most items (r > 0.30). The confirmatory factor analysis confirmed the factorial structure found in the original study.


Satisfaction is an important teaching and learning quality indicator, along with the achievement of learning outcomes, in simulation. The Satisfaction with Simulation Experience - Italian Version scale showed good reliability and validity; therefore, it could be a useful tool to assess the impact of simulation on Italian nursing students. The extensive utilization of the Satisfaction with Simulation Experience scale, along with its various validated versions, could facilitate the assessment of satisfaction with simulation across diverse contexts and enable comparisons of findings across studies in different countries.



Current evidence highlights the potential power of simulation as a technology-based educational strategy in promoting better learning outcomes in students and professionals in the healthcare field [1]. Simulation is defined as “the process by which we are trying to achieve results approximating clinical practice as closely as possible” [2]; it is an educational strategy rather than a technology [2, 3], through which students may experience real-world elements that are observable and therefore assessable by teachers [4, 5].

Following the COVID-19 pandemic [6, 7] and the integration of state-of-the-art technologies [1], the use of simulation in healthcare professionals’ education has increased significantly. Furthermore, simulation has been recognised as a key strategy for acquiring essential skills to work in unpredictable and complex environments where mutual dependence and cooperation with other professions are vital in delivering high-quality care [8, 9].

Moreover, literature reviews show that simulation improves knowledge and skills among undergraduate health students [1, 7, 10]. Additionally, simulation contributes to reducing anxiety and stress and to fostering reflective learning, self-confidence and satisfaction [11,12,13,14,15].

Despite being recognised as an important variable, satisfaction alone does not provide a full picture of the effectiveness of simulation [16]. In the social sciences, students’ learning satisfaction is defined as the impact of the processes that have taken place during a teaching and learning experience. Thus, it may play a crucial role in fostering students’ willingness to continue studying in a life-long learning perspective and in promoting the achievement of learning outcomes [17]. Several literature reviews have reported how students’ satisfaction is related to simulation [7, 11, 18,19,20,21]. The majority of nursing students showed a high level of satisfaction with simulation [22, 23], and qualitative evaluations also revealed that students generally have positive perceptions of their simulation experiences [23, 24]; however, evidence is inconsistent when simulation is compared with traditional methods [25]. Student satisfaction is greater in high-fidelity simulation than in virtual learning [26], and it increases after repeated exposure [27]. High-fidelity simulation achieves higher levels of satisfaction than low-fidelity simulation or paper-based case study activities [28]; in contrast, this is not the case when it is compared with medium-fidelity simulation [29]. Furthermore, in their meta-analysis, Li et al. (2022) reported that high-fidelity simulation is unlikely to increase learning satisfaction in nursing students, although it could prove more beneficial when compared with other teaching methods; this finding may be due to simulation-related factors. Consequently, the authors concluded that nursing educators need to implement evidence-based strategies aimed at improving students’ learning satisfaction [21]. Learners’ satisfaction in simulation can be assessed in a variety of ways, both qualitative and quantitative [16]. Typically, students are asked to respond to a survey based on Likert-type questions.

In 2011, Levett-Jones et al. developed and validated the Satisfaction with Simulation Experience (SSE) scale, a tool designed to compare differences in satisfaction levels among undergraduate nursing students exposed to medium- and high-fidelity simulation sessions in Australia [3]. The SSE scale is based on a reflective model and consists of three sub-scales [30]. The scale has recently been validated in other countries, including Italy [31], Croatia [32] and Turkey [33], and it has been evaluated among healthcare professionals from various disciplines, as well as post-graduate healthcare course students [6, 30, 32, 34,35,36].

The validation process is paramount to ensure that the tool is accurate and reliable [37]. Accurate translation is crucial for the validation process, ensuring the tool aligns with the cultural and linguistic nuances of a different geographical setting.

In fact, the delivery of high-quality educational interventions depends on the accurate assessment and deeper understanding of an individual’s cultural and linguistic background [38].

In Italy, a first validation study of the Satisfaction with Simulation Experience - Italian Version (SSE-ITA) scale was carried out on a sample of 10 undergraduate nursing students and included a content validity assessment [31]. However, as the authors reported, a greater sample size was needed to confirm the psychometric integrity of the newly validated tool. Furthermore, the research team recommended testing the tool in different contexts and cohorts of students with the aim of producing further evidence of reliability and construct validity [31].

Therefore, this study aims to investigate the psychometric properties of the SSE-ITA scale in a larger sample.


A multi-centre cross-sectional study was carried out in 2022–2023 to test the psychometric properties of the SSE-ITA scale among Italian undergraduate nursing students.

Sampling and data collection

A convenience sample of nursing students from two Italian universities was recruited. Specifically, students enrolled in the third year of the Bachelor of Science in Nursing at the University of Modena and Reggio Emilia and first-year students at the University of Parma, who took part in at least one simulation session scheduled in Academic Year 2022–2023, were voluntarily recruited.

Students of the University of Modena and Reggio Emilia filled out the SSE-ITA scale following a high-fidelity simulation session delivered in October and November 2022. The test-retest reliability of the SSE-ITA scale was assessed by administering it to this sample at the end of the simulation session and again, on average, 8 days later (range 4–42 days). Students of the University of Parma filled out the SSE-ITA scale after taking part in a medium-fidelity simulation session arranged in March 2023.


The SSE-ITA scale aims at assessing nursing students’ satisfaction following a high or medium-fidelity simulation session.

As in the original version of the scale [3], the Italian version comprises 18 items grouped into 3 sub-scales exploring different areas of the simulation experience [30], each rated on a 5-point Likert scale (Strongly Disagree, Disagree, Not Sure, Agree, Completely Agree).

The 3 above-mentioned sub-scales focus on the following areas:

  • Sub-scale 1, “Debriefing and Reflections” (9 items), explores participants’ opinions on opportunities for reflection and learning at the debriefing stage;

  • Sub-scale 2, “Clinical Reasoning” (5 items), assesses the effectiveness of simulation in fostering clinical reasoning skills;

  • Sub-scale 3, “Clinical Learning” (4 items), assesses to what extent simulation supports clinical skills development.

In the first Italian validation study, the SSE-ITA showed an Item-Content Validity Index (I-CVI) ≥ 0.80 and a Subscale-Content Validity Index (S-CVI) equal to 0.94. The reliability coefficient (r) was 0.88 and the internal consistency values (Cronbach’s alpha) were: “Debriefing and Reflections” α = 0.74; “Clinical Reasoning” α = 0.69; “Clinical Learning” α = 0.63; overall scale α = 0.71 [31].
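As a concrete illustration, scoring a completed SSE-ITA response amounts to mapping the Likert labels to 1–5 and summing per sub-scale. The sketch below is hypothetical: the text reports only the item counts per sub-scale (9, 5, 4), so the item-to-subscale index assignment used here is a placeholder, not the published item mapping.

```python
# Minimal, hypothetical sketch of SSE-ITA scoring. The item-to-subscale
# assignment below is a PLACEHOLDER: only the item counts (9, 5, 4) are
# reported here, not which item numbers belong to each sub-scale.
LIKERT = {
    "Strongly Disagree": 1, "Disagree": 2, "Not Sure": 3,
    "Agree": 4, "Completely Agree": 5,
}
SUBSCALES = {
    "Debriefing and Reflections": range(1, 10),   # 9 items (hypothetical indices)
    "Clinical Reasoning": range(10, 15),          # 5 items (hypothetical indices)
    "Clinical Learning": range(15, 19),           # 4 items (hypothetical indices)
}

def score_response(answers: dict) -> dict:
    """Sum the 1-5 Likert codes per sub-scale, plus an overall total."""
    scores = {name: sum(LIKERT[answers[i]] for i in items)
              for name, items in SUBSCALES.items()}
    scores["Overall"] = sum(scores.values())
    return scores
```

For example, a respondent answering “Agree” to all 18 items would obtain sub-scale sums of 36, 20 and 16, and an overall score of 72 under this placeholder mapping.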

Simulation sessions

High-fidelity simulation sessions were delivered at the Centre for Advanced Training and Medical Simulation of the University of Modena and Reggio Emilia. The students were divided into ten groups, each of which participated in a high-fidelity simulation session between October and November 2022. The session was conducted by an experienced simulation instructor and structured as follows: briefing (1 h), simulation session (40 min), and debriefing (1 h). The scenario was based on a deteriorating patient in the emergency department, whose clinical condition was designed to indicate impending cardiac arrest. Once cardiac arrest was recognised and confirmed, students had to perform Basic Life Support and Defibrillation (BLSD) according to current guidelines [39]. The expected learning outcomes were: application of the National Early Warning Score (NEWS) scale [40] and of the BLSD algorithm, correct prioritisation of interventions in accordance with the available resources, and effective communication among team members.

University of Parma students took part in a medium-fidelity simulation session focusing on a head-to-toe standardised clinical examination based on the ABCDE algorithm, on calculating the NEWS score, and on reporting clinical conditions for further care via the SBAR tool [41]. The activity took place in the SIMLAB of the Department of Medicine and Surgery. The same structured approach used in Modena and Reggio Emilia was adopted in Parma: the session was delivered by an experienced simulation instructor and included briefing (1 h), simulation session (40 min), and debriefing (1 h). Students were divided into 29 groups of 5 students each.

The expected learning outcomes were: appropriate patient assessment through head-to-toe clinical examination, correct application of the NEWS 2 scale, and the effective use of the SBAR tool.

Data analysis

A participant-to-item ratio equal to or greater than 10:1 was used to define the sample size, following the recommendations of Costello & Osborne (2005) [42]. The characteristics of the sample (gender and age) were analysed through descriptive statistics. Cronbach’s alpha coefficient and item-total correlations were used to assess internal consistency and reliability. Additionally, the test-retest method was used to measure the stability of the scale over time. Cronbach’s alpha values ≥ 0.90 were considered excellent, ≥ 0.80 good, ≥ 0.70 acceptable, and ≥ 0.60 questionable [43]. For the calculation of item-total and test-retest correlations, the assumption of normality was checked, and non-parametric techniques (Spearman’s rho) were used for data that were not normally distributed [44]. Correlation coefficients between 0.29 and 0.90 were deemed acceptable; r-values of 0.10, 0.30 and 0.50 were considered low, medium and high, respectively [45, 46]. SPSS version 28 was used to perform the statistical analyses.
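The two reliability statistics described above can be reproduced with a few lines of Python. This is an illustrative sketch using NumPy and SciPy, not the study’s actual SPSS analysis:

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def item_total_correlations(items):
    """Spearman's rho of each item with the total score -- the non-parametric
    choice used when normality is rejected, as in this study."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return [spearmanr(items[:, j], total)[0] for j in range(items.shape[1])]
```

With perfectly consistent item responses the functions return an alpha of 1.0 and item-total correlations of 1.0, which matches the textbook interpretation of the coefficients.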

Confirmatory Factor Analysis (CFA) was conducted to test the construct validity of the SSE-ITA using Mplus v.6. Before proceeding with this test, the assumptions of normality were checked [47]. First, missing data were examined using Little’s Missing Completely at Random (MCAR) test. Subsequently, the multivariate normality assumption was assessed using the Mardia test, and the most appropriate statistical technique was selected to test the model with three first-order factors (“Debriefing and Reflections”, “Clinical Reasoning”, “Clinical Learning”). Absolute and relative fit indices comparing the reproduced covariance matrix with the empirical data were adopted. The following indices were assessed: Root Mean Square Error of Approximation (RMSEA), Standardized Root Mean Square Residual (SRMR), Comparative Fit Index (CFI), and Tucker-Lewis Index (TLI). Model fit was considered robust if RMSEA and SRMR were < 0.08 and CFI and TLI were > 0.95 [48].
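The fit criteria above reduce to a simple joint check on the four indices. A minimal sketch (the function name is ours, not part of any CFA library):

```python
def model_fit_is_robust(rmsea, srmr, cfi, tli):
    """Apply the cut-offs adopted in the study (Hu & Bentler-style criteria):
    RMSEA and SRMR below 0.08, CFI and TLI above 0.95."""
    return rmsea < 0.08 and srmr < 0.08 and cfi > 0.95 and tli > 0.95
```

For instance, a model with RMSEA = 0.05, SRMR = 0.04, CFI = 0.97 and TLI = 0.96 passes the check, whereas RMSEA = 0.09 alone is enough to fail it.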


Out of 331 students, 266 completed the SSE-ITA scale, resulting in a response rate of 80%. Specifically, 123 students were from the University of Modena and Reggio Emilia and 143 from the University of Parma. The sample was 85.90% female and 14.10% male, and the mean age was 22.69 ± 4 years.

Reliability analysis

Table 1 shows the main results of the reliability analysis on the SSE-ITA scale:

Table 1 SSE-ITA scale: reliability analysis main results (n = 266)

As a measure of reliability, Cronbach’s alpha coefficient was 0.94 for the overall scale, indicating excellent reliability; the sub-scale with the highest Cronbach’s alpha was “Debriefing and Reflections” (α = 0.91), followed by “Clinical Learning” (α = 0.86) and “Clinical Reasoning” (α = 0.84). Removing any given item from the scale did not increase the Cronbach’s alpha coefficient; hence, the item-level correlation analysis led the researchers to conclude that no items needed to be excluded from the scale (Table 1).

In addition, the variables were not normally distributed: the Kolmogorov-Smirnov and Shapiro-Wilk tests were significant for all items, the overall score and the score of each subscale (p < 0.001); therefore, Spearman’s rho was used to test item-total and test-retest correlations. Large correlation coefficients of 0.60 and above were found between each item and its sub-scale and between each item and the overall scale score; all were statistically significant (Table 1).

Table 2 shows the descriptive statistics and the test-retest correlation coefficient of each item.

Table 2 Test-retest reliability results

The test-retest correlation coefficient was low for item 6 (r = 0.10), high for items 13, 14, 15 and 17 (r > 0.50) and medium for the remaining items (r > 0.30). However, the analyses show a high percentage (> 62%) of concordance between test and retest for all items, and the medians at test and retest are the same for 10 items. The high degree of homogeneity of the data does not allow an assessment of the stability of the scale over time under each condition (satisfied and not satisfied), and the correlation coefficients obtained could have been influenced by this homogeneity.

Confirmatory Factor Analysis

CFA was conducted on the SSE-ITA scale to test the three-factor structure of the original scale [3] and of the Italian version [31]. Missing data were less than 4% for each score, and the MCAR test results were non-significant (Chi-square = 52.56, DF = 61, p = 0.71), indicating that data were missing at random without compromising the estimation in the data analyses. The 18 items of the SSE-ITA showed an asymmetry higher than |1.0|, and the Mardia test yielded significant multivariate skewness (M = 25.38, SD = 1.14, p < 0.001) along with kurtosis (M = 357.58, SD = 3.23, p < 0.001); therefore, Maximum Likelihood with Robust standard errors (MLR) was used as the estimator in the subsequent analysis to prevent any negative impact of non-normal data [49].

As shown in Table 3, the CFA conducted on SSE-ITA showed good fit indices confirming the factor structure found in the original study.

Table 3 SSE-ITA: confirmatory factor analysis results (n = 266)

Table 4 shows the structure of the SSE-ITA with item factor loadings. Item loadings range from |0.64| to |0.79|; this means that items are good indicators of their respective sub-scales, as all loadings exceed the 0.45 cut-off set by some factor-loading guidelines [50].

Table 4 Factors loadings resulting from CFA of SSE_ITA (n = 266)


This study aimed at investigating the psychometric properties of the SSE-ITA scale on a larger multi-centre sample of nursing students. The study specifically tested the tool for psychometric properties such as internal consistency and structural validity.

Internal consistency is the degree of interrelatedness among the items and is often based on Cronbach’s alpha coefficient [51]. The study results revealed that the SSE-ITA overall scale, as well as its three sub-scales, exhibits high internal consistency. In fact, the values in this study are higher than those in the first Italian validation study [31]. These results are aligned with those of other validation studies of the SSE. In particular, in this study Cronbach’s alpha is slightly lower than in the original version of the scale for the “Debriefing and Reflections” subscale (α = 0.93 in the original) and the “Clinical Reasoning” subscale (α = 0.85 in the original) [3].

Compared with the Croatian version of the scale (SSE-CRO) [32], Cronbach’s alpha is slightly higher in the SSE-ITA for the first factor (CRO-F1, α = 0.90), the third factor (CRO-F3, α = 0.73) and the overall scale (α = 0.92). For the second factor (CRO-F2), the alpha coefficient (α = 0.84) is consistent with the “Clinical Reasoning” subscale of the SSE-ITA.

Recently, the Turkish version of the scale (SSES-TR) was validated as well [33]. The SSES-TR exhibits lower Cronbach’s alphas than the SSE-ITA for both the overall scale (α = 0.93) and the sub-scales (α = 0.90 for “Debriefing and Reflections”, α = 0.77 for “Clinical Reasoning”, and α = 0.81 for “Clinical Learning”). The CFA conducted on the SSES-TR also indicates acceptable or good fit measures (RMSEA = 0.09, CFI = 0.98, TLI = 0.98, SRMR = 0.09).

Therefore, the results of the CFA suggest that the scale possesses good structural validity; that is, the instrument scores adequately reflect the dimensionality of the satisfaction construct [51]. The main satisfaction indicators in the original scale were drawn from the literature, and a panel of experts subsequently reached consensus on the related items; a large correlation between each item and its subscale, and between the items and the overall scale, was found. Therefore, these variables may contribute to better defining what satisfaction in simulation really means and to promoting research in this area.

Satisfaction holds a primary position in Kirkpatrick’s educational program evaluation model, specifically focusing on perceptions. This aspect becomes crucial when examining the impact of simulation-based teaching programs, particularly in the context of the uneven integration of high-fidelity simulation in Italian educational programs. There are prevailing misconceptions and low expectations among students in this regard [52].

To enhance the generalizability of the validation results for the Italian version of the instrument, our study involved multiple centres and students from both the first and third academic years. This is a notable improvement over the initial validation study, which included only second-year students from a single centre. Future studies using this tool can provide further evidence for its validation.

Using a validated tool for measuring satisfaction is essential not only for the Italian context but also for addressing gaps in the literature. Two recent meta-analyses have reported non-significant results for student satisfaction in high-fidelity simulation, possibly influenced by factors such as simulation-related elements and the number of exposures [13, 20, 53]. Interestingly, repeated exposures have been shown to enhance student satisfaction [27, 52].

A critical consideration is the comparison among different types of simulation and the identification of factors influencing both learning outcomes and satisfaction. Studies assessing these aspects using validated tools are particularly desirable, especially with the advancements in robotics and artificial intelligence, which are reshaping educational standards [52].

In light of our study findings, the SSE-ITA scale proves to be a valuable assessment tool in Italian simulation settings. When translated, the SSE holds potential for international use, enabling comparability. It is worth noting that satisfaction significantly impacts student retention [54] and confidence, influencing nurses’ behaviours in professional settings [55].

For future research, it is crucial to acknowledge the limitations of this study. The homogeneity of the data between test and retest poses a limitation, as it influences the statistical analyses for the time-stability measure of the scale. Additionally, the wide time interval for retests (from 4 to 42 days) may have influenced the results obtained [56]. The previous validation study of the SSE-ITA indicated good test-retest reliability of the scale [31]. Similarly, the test-retest reliability coefficients of the SSES-TR were found to be comparable [33].

Another notable limitation is the absence of measurements for convergent and cross-cultural validity. Convergent validity assesses whether the scores of the tested tool align with expectations when correlated with other tools [51]. For instance, Tüzer et al. [33] compared the SSES-TR with “The Scale of Student Satisfaction and Confidence In Learning” to establish convergent validity. Cross-cultural validity, on the other hand, gauges how well the performance of items on a translated or culturally adapted instrument reflects that of the original version [51]. To address this psychometric property, a combined dataset of scores from Italy and another country with comparable samples could provide valuable insights.


In conclusion, satisfaction plays a pivotal role in achieving learning outcomes in simulation. The SSE-ITA scale, with demonstrated validity and reliability, stands as a valuable tool for assessing simulation in Italian nursing students. Widespread use of this scale and its validated versions can facilitate satisfaction assessments in diverse contexts and support evidence of its psychometric integrity, particularly in cross-cultural validity. This opens avenues for further studies investigating the relationship between satisfaction and learning outcomes in simulation.

Data availability

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.



Abbreviations

SSE-ITA: Satisfaction with Simulation Experience - Italian Version

I-CVI: Item-Content Validity Index

S-CVI: Subscale-Content Validity Index

CFA: Confirmatory Factor Analysis

RMSEA: Root Mean Square Error of Approximation

SRMR: Standardized Root Mean Square Residual

CFI: Comparative Fit Index

TLI: Tucker-Lewis Index

F1: subscale 1 “Debriefing and Reflections”

F2: subscale 2 “Clinical Reasoning”

F3: subscale 3 “Clinical Learning”


  1. McInerney N, Nally D, Khan MF, Heneghan H, Cahill RA. Performance effects of simulation training for medical students – a systematic review. GMS J Med Educ. 2022;39(5).

  2. Gaba DM. The future vision of simulation in health care. Qual Saf Health Care. 2004;13(suppl 1):i2–10.

  3. Levett-Jones T, McCoy M, Lapkin S, Noble D, Hoffman K, Dempsey J, Arthur C, Roche J. The development and psychometric testing of the Satisfaction with Simulation Experience Scale. Nurse Educ Today. 2011;31(7):705–10.

  4. Koukourikos K, Tsaloglidou A, Kourkouta L, Papathanasiou IV, Iliadis C, Fratzana A, Panagiotou A. Simulation in clinical nursing education. Acta Inf Med. 2021;29(1):15–20.

  5. Bertozzi S, Ferri P, Cortini C, Mentasti R, Scalorbi S, Di Lorenzo R, Rovesti S, Alberti S, Rubbi I. Clinical judgment skills assessment in high fidelity simulation: a comparison study in nursing education. In: Kubincová Z, Melonio A, Durães D, Rua Carneiro D, Rizvi M, Lancia L, editors. Methodologies and Intelligent Systems for Technology Enhanced Learning, Workshops, 12th International Conference (MIS4TEL 2022). Lecture Notes in Networks and Systems. 2023;538:133–43.

  6. Stirparo G, Gambolò L, Bellini L, Medioli F, Bertuol M, Guasconi M, Sulla F, Artioli G, Sarli L. Satisfaction evaluation for ACLS training. Acta Biomed. 2022;93(3):e2022260.

  7. Tong LK, Li YY, Au ML, Wang SC, Ng WI. High-fidelity simulation duration and learning outcomes among undergraduate nursing students: a systematic review and meta-analysis. Nurse Educ Today. 2022;116:105435.

  8. Akselbo I, Aune I. 2023. Springer Nature. Accessed 15 July 2023.

  9. Bajpai S, Semwal M, Bajpai R, Car J, Ho AHY. Health professions’ digital education: review of learning theories in randomized controlled trials by the Digital Health Education Collaboration. J Med Internet Res. 2019;21(3):e12912.

  10. Chow KM, Ahmat R, Leung AWY, Chan CWH. Is high-fidelity simulation-based training in emergency nursing effective in enhancing clinical decision-making skills? A mixed methods study. Nurse Educ Pract. 2023;69:103610.

  11. Ayed A, Khalaf I. The outcomes of integrating high fidelity simulation in nursing education: an integrative review. Open J Nurs. 2018;8(5):292–302.

  12. Foronda C, Liu S, Bauman EB. Evaluation of simulation in undergraduate nurse education: an integrative review. Clin Simul Nurs. 2013;9(10):e409–16.

  13. La Cerra C, Dante A, Caponnetto V, Franconi I, Gaxhja E, Petrucci C, Alfes CM, Lancia L. Effects of high-fidelity simulation based on life-threatening clinical condition scenarios on learning outcomes of undergraduate and postgraduate nursing students: a systematic review and meta-analysis. BMJ Open. 2019;9(2):e025306.

  14. Warren JN, Luctkar-Flude M, Godfrey C, Lukewich J. A systematic review of the effectiveness of simulation-based education on satisfaction and learning outcomes in nurse practitioner programs. Nurse Educ Today. 2016;46:99–108.

  15. Dante A, Masotta V, Marcotullio A, Bertocchi L, Caponnetto V, La Cerra C, Petrucci C, Alfes CM, Lancia L. The lived experiences of intensive care nursing students exposed to a new model of high-fidelity simulation training: a phenomenological study. BMC Nurs. 2021;30(1):154.

  16. Prion S. A practical framework for evaluating the impact of clinical simulation experiences in prelicensure nursing education. Clin Simul Nurs. 2008;4(3):69–78.

  17. Wu YC, Hsieh LF, Lu JJ. What’s the relationship between learning satisfaction and continuing learning intention? Procedia Soc Behav Sci. 2015;191:2849–54.

  18. Fegran L, Ten Ham-Baloyi W, Fossum M, Hovland OJ, Naidoo JR, van Rooyen DRM, Sejersted E, Robstad N. Simulation debriefing as part of simulation for clinical teaching and learning in nursing education: a scoping review. Nurs Open. 2023;10(3):1217–33.

  19. Lee J, Lee H, Kim S, Choi M, Ko IS, Bae J, Kim SH. Debriefing methods and learning outcomes in simulation nursing education: a systematic review and meta-analysis. Nurse Educ Today. 2020;87:104345.

  20. Yeoungsuk S, Seurk P. Effectiveness of debriefing in simulation-based education for nursing students: a systematic review and meta-analysis. J Korean Acad Fundam Nurs. 2022;29(4):399–415.

  21. Li YY, Au ML, Tong LK, Ng WI, Wang SC. High-fidelity simulation in undergraduate nursing education: a meta-analysis. Nurse Educ Today. 2022;111:105291.

  22. Guerrero JG, Ali SAA, Attallah DM. The acquired critical thinking skills, satisfaction, and self-confidence of nursing students and staff nurses through high-fidelity simulation experience. Clin Simul Nurs. 2022;64:24–30.

  23. Arrogante O, González-Romero GM, Carrión-García L, Polo A. Reversible causes of cardiac arrest: nursing competency acquisition and clinical simulation satisfaction in undergraduate nursing students. Int Emerg Nurs. 2021;54:1–7.

  24. Demirtas A, Guvenc G, Aslan Ö, Unver V, Basak T, Kaya C. Effectiveness of simulation-based cardiopulmonary resuscitation training programs on fourth-year nursing students. Australas Emerg Care. 2021;24(1):4–10.

  25. Tosterud R, Hedelin B, Hall-Lord ML. Nursing students’ perceptions of high- and low-fidelity simulation used as learning methods. Nurse Educ Pract. 2013;13(4):262–70.

  26. Park SM, Hur HK, Chung CW. Learning effects of virtual versus high-fidelity simulations in nursing students: a crossover comparison. BMC Nurs. 2022;21(1):1–9.

  27. Hung CC, Kao HFS, Liu HC, Liang HF, Chu TP, Lee BO. Effects of simulation-based learning on nursing students’ perceived competence, self-efficacy, and learning satisfaction: a repeat measurement method. Nurse Educ Today. 2021;97:104725.

  28. Willaert WIM, Aggarwal R, Van Herzeele I, Cheshire NJ, Vermassen FE. Recent advancements in medical simulation: patient-specific virtual reality simulation. World J Surg. 2012;36(7):1703–12.

  29. Alconero-Camarero AR, Sarabia-Cobo CM, Catalán-Piris MJ, González-Gómez S, González-López JR. Nursing students’ satisfaction: a comparison between medium- and high-fidelity simulation training. Int J Environ Res Public Health. 2021;18(2):1–11.

  30. Mutairi MA, Alruwaili A, Alsuwais S, Othman F, Ammar A, Baladi Z. Satisfaction level of simulation experience among applied medical sciences students: a cross-sectional study. Published 2021. Accessed January 18, 2023.

  31. Guasconi M, Tansini B, Granata C, Beretta M, Bertuol M, Lucenti E, Deiana L, Artioli G, Sarli L. First Italian validation of the Satisfaction with Simulation Experience scale (SSE) for the evaluation of the learning experience through simulation. Acta Biomed. 2021;92(S2):e2021002.

  32. Smrekar M, Ledinski Fičko S, Kurtović B, Ilić B, Čukljek S, Tomac M, Hošnjak AM. Translation and validation of the Satisfaction with Simulation Experience scale: cross-sectional study. Cent Eur J Nurs Midwifery. 2022;13(2):633–9.

  33. Tüzer H, Kocatepe V, Yilmazer T, Inkaya B, Ünver V, Levett-Jones T. Turkish validity and reliability of the Satisfaction with Simulation Experience scale. Konuralp Tıp Derg. 2022;14(3):461–8.

  34. Kwon HJ, Yoou SK. Validation of a Korean version of the Satisfaction with Simulation Experience scale for paramedic students. Korean J Emerg Med Serv. 2014;18(2):7–20.

  35. Vermeulen J, Buyl R, D’haenens F, et al. Midwifery students’ satisfaction with perinatal simulation-based training. Women Birth. 2021;34(6):554–62.

  36. Williams B, Dousek S. The Satisfaction with Simulation Experience scale (SSES): a validation study. J Nurs Educ Pract. 2012;2(3):74–80.

  37. Psm L, Siew L, Pauline M, Pharm B. Validating instruments of measure: is it really necessary? Malaysian Fam Physician. 2013;8(1). Accessed January 18, 2023.

  38. Gudmundsson E. Guidelines for translating and adapting psychological instruments. Nord Psychol. 2009;61(2):29–45.

  39. Merchant RM, Topjian AA, Panchal AR, Cheng A, Aziz K, Berg KM, Lavonas EJ, Magid DJ; Adult Basic and Advanced Life Support, Pediatric Basic and Advanced Life Support, Neonatal Life Support, Resuscitation Education Science, and Systems of Care Writing Groups. Part 1: executive summary: 2020 American Heart Association guidelines for cardiopulmonary resuscitation and emergency cardiovascular care. Circulation. 2020;142(16):S337–57.

  40. Smith GB, Redfern OC, Pimentel MA, Gerry S, Collins GS, Malycha J, Prytherch D, Schmidt PE, Watkinson PJ. The National Early Warning Score 2 (NEWS2). Clin Med (Lond). 2019;19(3):260.

  41. Achrekar MS, Murthy V, Kanan S, Shetty R, Nair M, Khattry N. Introduction of Situation, Background, Assessment, Recommendation into nursing practice: a prospective study. Asia-Pacific J Oncol Nurs. 2016;3(1):45–50.

  42. Costello AB, Osborne J. Best practices in exploratory factor analysis: four recommendations for getting the most from your analysis. Pract Assess Res Eval. 2005;10(1):7. Accessed June 5, 2022.

    Google Scholar 

  43. Gliem JA, Gliem RR, Calculating, Interpreting, And Reporting Cronbach’s Alpha Reliability Coefficient For Likert-Type Scales. Midwest Research to Practice Conference in Adult, Continuing, and Community Education 2003. Accessed May 18, 2022.

  44. Schober P, Boer C, Schwarte LA. Correlation coefficients: appropriate use and interpretation. Anesth Analg. 2018;126(5):1763–8.

    Article  PubMed  Google Scholar 

  45. Cohen J. Statistical Power Analysis for the behavioral sciences. Routledge; 2013. 1–17,109–139,458.

  46. Drevin J, Kristiansson P, Stern J, Rosenblad A. Measuring pregnancy planning: a psychometric evaluation and comparison of two scales. J Adv Nurs. 2017;73(11):2765–75.

    Article  PubMed  Google Scholar 

  47. Mikkonen K, Tomietto M, Watson R. Instrument development and psychometric testing in nursing education research. Nurse Educ Today. 2022;119:105603.

    Article  PubMed  Google Scholar 

  48. Morin AJS, Marsh HW, Nagengast B. Exploratory structural equation modeling. In: Hancock GR, Mueller RO, editors. Structural equation modeling: a second course. IAP Information Age Publishing; 2013. pp. 395–436.

  49. Li CH. Confirmatory factor analysis with ordinal data: comparing robust maximum likelihood and diagonally weighted least squares. Behav Res Methods. 2016;48(3):936–49.

    Article  PubMed  Google Scholar 

  50. Comrey AL, Lee HB. A first course in factor analysis. 2nd ed. Psychology; 1992.

  51. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, Bouter LM, de Vet HC. The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J Clin Epidemiol. 2010;63(7):737–45.

    Article  PubMed  Google Scholar 

  52. Dante A, La Cerra C, Caponnetto V, Masotta V, Marcotullio A, Bertocchi L, Ferraiuolo F, Petrucci C, Lancia L. Dose-response relationship between high-Fidelity Simulation and Intensive Care nursing students’ learning outcomes: an Italian Multimethod Study. Int J Environ Res Public Health. 2022;19(2):617.

    Article  PubMed  PubMed Central  Google Scholar 

  53. Ozdemir NG, Kaya H. The effectiveness of high-fidelity simulation methods to gain Foley catheterization knowledge, skills, satisfaction and self-confidence among novice nursing students: a randomized controlled trial. Nurse Educ Today. 2023;130:105952.

    Article  Google Scholar 

  54. Cant R, Gazula S, Ryan C. Predictors of nursing student satisfaction as a key quality indicator of tertiary students’ education experience: an integrative review. Nurse Educ Today. 2023;126:105806.

    Article  PubMed  Google Scholar 

  55. Oanh TTH, Hoai NTY, Thuy PT. The relationships of nursing students’ satisfaction and self-confidence after a simulation-based course with their self-confidence while practicing on real patients in Vietnam. J Educ Eval Health Prof. 2021;18:16.

    Article  PubMed  PubMed Central  Google Scholar 

  56. LeBlanc VR, Posner GD. Emotions in simulation-based education: friends or foes of learning? Adv Simul. 2022;7(1):1–8.

    Article  Google Scholar 



Acknowledgements

The authors would like to express their sincere gratitude to all nursing student participants and to the educators who collaborated during the simulation sessions.


Funding

This study received no external funding.

Author information

Authors and Affiliations



Contributions

SA: conceptualization, methodology, formal analysis, data curation, project administration, writing—original draft preparation. MG: conceptualization, methodology, investigation, data curation, writing—review and editing. MB: methodology, formal analysis, data curation, writing—review and editing. GD: conceptualization, investigation, data curation, writing—review and editing. PV: conceptualization, investigation, data curation, writing—review and editing. SR: conceptualization, methodology, resources, writing—review and editing. FM: methodology, investigation, data curation, resources, writing—review and editing. AB: methodology, investigation, data curation, resources, writing—review and editing. PF: conceptualization, methodology, formal analysis, supervision, writing—review and editing. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sara Alberti.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the Local Ethics Committee of Vasta Area Emilia Nord (protocol AOU 00254610 of 6 September 2022) and was conducted in accordance with the principles of the Declaration of Helsinki of the World Medical Association (1964) and the General Data Protection Regulation (Regulation EU 2016/679). Students were informed that participation in the study was voluntary and anonymous and that they were free to withdraw from the study at any time. Numerical codes were used to ensure anonymity. All participants were asked to read and sign the informed consent form prior to the beginning of the study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Alberti, S., Guasconi, M., Bolzoni, M. et al. Assessing satisfaction in simulation among nursing students: psychometric properties of the Satisfaction with Simulation Experience - Italian Version scale. BMC Nurs 23, 300 (2024).
