Validation of a short version of the high-fidelity simulation satisfaction scale in nursing students

Abstract

Background

Clinical simulation provides a practical and effective learning method during the undergraduate education of the health professions. Currently, there is only one validated scale in Spanish to assess nursing students’ satisfaction with the use of high-fidelity simulation; therefore, our objective was to validate a brief version of this scale in undergraduate nursing students with or without clinical experience.

Method

A cross-sectional descriptive study was performed. Between 2018 and 2020, students from all academic years of the Fundación Jiménez Díaz nursing school completed the satisfaction scale at the end of their simulation experiences. To validate this scale, composed of 33 items and eight dimensions, exploratory factor analysis (EFA) of the principal components was performed, internal consistency was assessed using Cronbach’s alpha, and the corrected item-test correlation of each item of the total scale was reviewed.

Results

A total of 425 students completed the scale. After the exploratory factor analysis, a scale consisting of 25 items distributed into six subscales, each containing between two and six items, explained 66.5% of the variance. The Kaiser-Meyer-Olkin (KMO) test yielded a value of 0.938, Bartlett’s sphericity test was significant (p < 0.01), and the Goodness of Fit Index (GFI) was 0.991.

Conclusion

The modified ESSAF scale, reduced from 33 to 25 items and divided into six subscales, is as valid and reliable as the original scale for use in nursing students of different levels, with or without clinical experience.

Introduction

Some of the new demands that the knowledge society has placed on higher education include encouraging students to develop skills to address situations by applying their knowledge [1]. In the field of nursing education, methodologies aimed at integrating theory with practice are fundamental, assessing both knowledge and skills as well as conveying attitudes [2]. In this context, clinical simulation provides a method of learning and training in which knowledge and skills are intertwined, and it can lead to learning outcomes that are not achieved through lectures or through trial and error with real patients [3].

Simulation has always been present in nursing education; however, in recent years it has gained significant popularity [4]. Its growth and dissemination are related to the concern for quality and safety in patient care. Specifically, in Spain, simulation is taking center stage in both undergraduate and postgraduate nursing education, with the creation of multiple simulation spaces within universities, although its implementation and curricular integration remain a challenge [5].

According to Gaba [6], simulation is a learning technique that amplifies real experiences with guided ones that evoke reality in an interactive manner. It has been shown to be effective for acquiring technical skills and for integrating complex clinical knowledge and skills, increasing retention of what has been learned compared with traditional teaching methods [7,8,9]. This type of training is paired with a feedback or debriefing session, in which students and teachers analyze the activity performed, its strengths, and its areas for improvement, accompanied by a phase of reflective-critical thinking to deepen understanding of the process trained [10]. The student assumes an active role in their learning, as the protagonist in the construction of their own knowledge in contexts similar to reality [11].

Several published meta-analyses have concluded that undergraduate simulation programs are effective compared with traditional teaching models [12]. The meta-analysis published by Cook in 2013 reveals the success factors of simulation programs, highlighting the need for debriefing, integration of simulation into the formal curriculum, and individualized simulation practice that is spread over time and exposes students to different variants or clinical contexts [13].

In accordance with these needs, the School of Nursing (Fundación Jiménez Díaz – UAM School of Nursing) has proposed a curricular design in which clinical simulation is not an independent subject but is integrated into the curriculum in a cross-cutting fashion. In the 2018–2019 academic year, simulated clinical experiences were carried out with students in the 1st and 2nd years of the nursing degree within the framework of the subjects Nursing Methodology, Adult Nursing I and II, and Psychosociology of Care. In the 2019–2020 academic year, the same simulated clinical experiences were repeated with 1st- and 2nd-year students and extended to the 3rd and 4th years within the framework of the following subjects: Pediatrics, Psychiatry, and Management of Critical Situations. All these simulation activities were designed following the recommendations for a successful simulation program published in the latest systematic reviews [13, 14].

In simulation training, higher student satisfaction results in better learning outcomes, and the design features of a simulation influence its learning outcomes [15]. It is therefore essential to increase the impact of the simulated experience by designing simulation scenarios appropriate to the students’ level and learning objectives [16].

In addition, the debriefing that takes place after the simulated event also requires prior preparation and should be tied to the completion of the learning process. In our case, each simulation module required at least three multidisciplinary work sessions between the teachers responsible for the course, clinical experts, and simulation experts, with the aim of designing simulation experiences according to the real needs of the students.

Therefore, it is essential that the teacher receive feedback from the student to understand whether the simulated experience has allowed the student to advance in their learning process or whether it has deviated from their real needs to complement their theoretical knowledge base [17].

According to the Standards of Best Practice in simulation [18], teachers should ensure the effectiveness of the overall experience with the goal of identifying aspects of the simulation program that support optimal transfer of knowledge, skills and overall competence into practice. This evaluation of the simulation program should be comprehensive, combining evaluation of activities before, during and after the simulations [19].

In this regard, several instruments have been developed to measure student satisfaction in the field of clinical simulation, teamwork, and decision making, among others [20,21,22,23]. At present, the only scale validated in Spanish is the High-Fidelity Simulation Satisfaction Scale for Students (ESSAF). This is a 33-item questionnaire, validated by Alconero et al. [12], which assesses student satisfaction and evaluates students’ perception of the usefulness of clinical simulation training, among other aspects. The questionnaire was validated with an initial sample of 150 students from a single academic year, and we do not know whether it is valid for students with different levels of experience, since the evidence indicates the importance of adapting the simulation design to the student’s experience [24].

Although this is a valid questionnaire, it comprises 33 items, which makes it lengthy and therefore difficult to implement systematically for evaluating satisfaction with every simulation. The development of a simplified version with the same psychometric characteristics would help universalize this evaluation system. For this reason, the aim of this study was to validate a brief version of the ESSAF questionnaire for application across the different academic years of the nursing degree and in students with or without clinical experience.

Materials and methods

Design

A cross-sectional descriptive study was carried out within the framework of a teaching innovation project funded by the Universidad Autónoma de Madrid (UAM), involving undergraduate nursing students of the Fundación Jiménez Díaz – UAM School of Nursing. The study population comprised 1st- and 2nd-year students of the 2018–2019 academic year and 1st-, 2nd-, 3rd-, and 4th-year students of the 2019–2020 academic year. A total of 425 students completed the satisfaction survey.

Between May and July 2018, the initial simulation program was designed by a multidisciplinary working group over six work sessions. The decision was made to start with first- and second-year students in order to consolidate and apply theoretical knowledge prior to their clinical placements, extending the program to all academic years during the following year.

Sample selection

The criteria for carrying out a factor analysis were used to calculate the sample size. These criteria call for 10 subjects per item [25]; as the original scale has 33 items, a sample of at least 330 participants was needed.
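As a worked illustration (ours, not part of the study’s methods; the helper name is hypothetical), the 10-subjects-per-item criterion reduces to a one-line calculation:

```python
def required_sample_size(n_items: int, subjects_per_item: int = 10) -> int:
    """Minimum sample size under the 10-subjects-per-item rule [25]."""
    return n_items * subjects_per_item

# Original ESSAF scale: 33 items -> at least 330 participants needed
print(required_sample_size(33))  # 330
```

The 425 students recruited therefore exceed this threshold.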

Description of the activity

A total of 32 simulation sessions were carried out throughout the 2018–2019 academic year, and a total of 59 sessions took place during the 2019–2020 academic year. In each simulation session, small groups of 8–10 students participated, with an approximate duration of two to four hours, in which three scenarios were developed. These sessions were recorded on a video system and viewed in real time by the students. The following link shows an example of a scenario carried out by third year students for verbal restraint of a psychiatric patient. https://www.youtube.com/watch?v=b8gM5u2ihsA.

All simulation scenarios were performed with the same teaching design:

  1. Prebriefing or introduction to clinical simulation.

  2. Patient presentation and work environment.

  3. Three simulated clinical scenarios, in which every trainee participated in at least one scenario.

  4. A debriefing following each scenario, using the good judgment approach [10].

To conduct the simulation, a main instructor was in charge of immersion in the clinical simulation and of coordinating the debriefing. In addition, a co-instructor provided support as expert staff for the subject being trained and managed the simulator and video recording systems. On occasion, actors were needed to faithfully recreate the real situation.

Data collection

The ESSAF scale (Additional File 1) was used; this is a self-administered questionnaire that students completed voluntarily and anonymously at the end of the simulation module. The scale contains 33 statements answered on a 5-point Likert-type scale, ranging from 1 (strongly disagree) to 5 (strongly agree). With adequate indicators of factorability, the 33 items are grouped into eight factors or dimensions of student perception of clinical simulation: “Usefulness”, “Characteristics of cases and applications”, “Communication”, “Perceived performance”, “Increased self-confidence”, “Relationship between theory and practice”, “Facilities and equipment” and “Negative aspects”.

Sociodemographic variables such as students’ age, sex and academic year were also collected.

To facilitate data collection and guarantee anonymity, an ad-hoc questionnaire was generated and completed by the students at the end of the simulation practices (Additional File 1. Supplementary material: ESSAF Questionnaire). All students who completed the simulation sessions were included, excluding those who for any reason did not complete the sessions.

Data analysis

All questionnaires were numerically coded using SPSS version 20 statistical software for data collection and analysis.

As baseline quality control, all variables included in the study were checked for missing values and data-recording errors (out-of-range values, incomplete data, and statistical screening for errors or outliers: descriptives, frequencies, means, range).

The distribution of variables was assessed by descriptive analysis and by the use of Q-Q plots, histograms, and box plots; in case of doubt, the Kolmogorov-Smirnov test was used, in which the null hypothesis assumes a normal (Gaussian) distribution of the variable. For variables with a normal distribution, parametric analyses were used; for variables with a non-Gaussian distribution, nonparametric analyses were used.

Descriptive statistics: quantitative variables with a normal distribution were expressed as mean and standard deviation (SD), quantitative variables with a non-Gaussian distribution were expressed as median and interquartile range (IQR), and qualitative variables were expressed as frequency and percentage. Ordinal variables were analyzed as continuous variables and expressed as median and interquartile range. To facilitate data interpretation, both mean (SD) and median (IQR) values are provided, which ensures clarity when some data exhibit a normal distribution while others do not.

In the present study, the option recommended by several authors was used [26, 27]: exploratory factor analysis (EFA) based on polychoric correlations, given that the univariate analysis of the ordinal items showed excess kurtosis and skewness. Robust unweighted least squares (ULS) was used as the factor estimation method [26], parallel analysis (PA) as the factor retention method, and PROMIN as the factor rotation method.

The FACTOR program (version 10.9.02) was used for the EFA. Initially, a descriptive analysis was conducted for each item, assessing the mean, standard deviation, skewness, and corrected item-test correlation. To minimize noise in the subsequent factor analysis, items with correlations below 0.20 were removed, as recommended [27]. We further scrutinized the distribution of the items by evaluating the kurtosis and skewness coefficients [28]. Following Kline’s criteria [29], the corrected item-test correlation was computed for the entire scale, excluding items with values below 0.20.
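The corrected item-test correlation screen described above can be sketched in a few lines of Python (a minimal illustration on simulated data, not the FACTOR program’s implementation; variable names are ours):

```python
import numpy as np

def corrected_item_total(scores: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score of the REMAINING
    items (the 'corrected' item-test correlation)."""
    n_items = scores.shape[1]
    total = scores.sum(axis=1)
    r = np.empty(n_items)
    for j in range(n_items):
        rest = total - scores[:, j]  # total score without item j
        r[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return r

# Simulated example: four items driven by one latent trait, one noise item
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = np.column_stack(
    [latent + rng.normal(scale=0.5, size=200) for _ in range(4)]
    + [rng.normal(size=200)]
)
r = corrected_item_total(items)
keep = r >= 0.20  # items below the 0.20 cutoff [27, 29] would be removed
```

On such simulated data the four trait-driven items show correlations well above 0.20, while the pure-noise item stays near zero.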

To gauge reliability, defined here as the internal consistency of the items measuring a construct, we relied on both the ORION coefficient and Cronbach’s alpha. ORION (an acronym for “Overall Reliability of fully Informative prior Oblique N-EAP scores”) measures the overall reliability of the oblique factor scores [30]. Cronbach’s alpha, which is grounded in the mean correlations between items, remains the most popular statistic for internal consistency, despite some controversies surrounding it [31,32,33].
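Cronbach’s alpha itself is a short formula, k/(k−1) · (1 − Σ item variances / variance of the total score), which can be sketched as follows (our illustration, not the software the authors used):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Sanity check: three perfectly parallel items give alpha = 1
x = np.arange(10.0)
print(round(cronbach_alpha(np.column_stack([x, x, x])), 6))  # 1.0
```

With parallel items the item variances sum to one third of the total variance, so the formula collapses to 3/2 · (1 − 1/3) = 1.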

Ethical considerations

The study was sent for evaluation to the ethics committee of the Universidad Autónoma de Madrid, which ruled that the project did not contradict ethical standards and did not need to be evaluated as it was a satisfaction survey.

All experimental protocols were approved by the ethics committee of the Universidad Autónoma de Madrid on October 1, 2021.

Participants were informed of the study and gave informed consent to participate in the research. All data were treated confidentially and kept strictly inaccessible to unauthorized third parties, in accordance with Organic Law 3/2018 of 5 December on the Protection of Personal Data and Guarantee of Digital Rights and with Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on Data Protection (GDPR). The simulation scenarios were recorded for later analysis during the debriefing; all participants were informed that the recordings would be used exclusively for teaching or research purposes and signed informed consent to the recording.

Results

Sociodemographic characteristics

Of the students who completed the satisfaction survey (N = 425), 88.0% (374) were female, and the mean age was 20.11 years (SD = 5.34) with a median of 19 years (range 18–49). First-year students accounted for 34.5% (147), second-year students for 29.8% (127), third-year students for 16.7% (71), and fourth-year students for 18.8% (80). More detailed information can be found in Table 1.

Table 1 Sample characteristics

Exploratory factor analysis

The Kaiser-Meyer-Olkin (KMO) test yielded a value of 0.938, and Bartlett’s sphericity test was significant (p < 0.01); therefore, we proceeded with the EFA. Six components explained 66.5% of the variance. To determine the number of factors, parallel analysis suggested three factors; however, the theory underlying the development of the questionnaire and the interpretability of the solution found were decisive in retaining the six factors shown. The estimation method was unweighted least squares with PROMIN rotation [34], chosen because of the extreme distribution of the data.
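Parallel analysis, the retention method used here, keeps factors whose observed eigenvalues exceed those expected from random data of the same size. A minimal numpy sketch of Horn’s procedure on Pearson correlations (ours; the FACTOR program works on polychoric correlations, so results can differ):

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 100, seed: int = 0) -> int:
    """Count observed correlation-matrix eigenvalues that exceed the
    mean eigenvalues of random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    random_mean = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.normal(size=(n, p))
        random_mean += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    random_mean /= n_sims
    return int(np.sum(observed > random_mean))

# Simulated example: six items generated from two latent factors
rng = np.random.default_rng(1)
f1, f2 = rng.normal(size=(2, 300))
data = np.column_stack(
    [f1 + rng.normal(scale=0.4, size=300) for _ in range(3)]
    + [f2 + rng.normal(scale=0.4, size=300) for _ in range(3)]
)
print(parallel_analysis(data))  # 2
```

As in the study, the statistical suggestion (here, two factors for two simulated traits) can be overridden on theoretical grounds when naming and interpreting the solution.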

The first model of the analysis showed items with loadings below 0.3 on all six factors considered; these items were removed one by one, repeating the analysis each time. Pursuing the principle of parsimony, in order to obtain the simplest and easiest-to-interpret model, complex items were also removed: those that contribute little to the definition of a factor and do not clearly belong to a single factor. Table 2 shows the factor loadings of the modified ESSAF scale and the correlation matrix.

Table 2 Factor loadings of the modified ESSAF-scale and correlation matrix

Denomination of the factors

Considering the items grouped in each factor and their weights, the six factors were named as shown in Table 3. These six factors encompass the entire clinical simulation training process, evaluating the benefits or impact of the methodology on prior planning (F3); on direct care in its various dimensions, such as care itself (F1), teamwork and critical reasoning (F4), learning, safety, and confidence (F5), and communication with the patient and family (F6); and finally on the subsequent feedback or debriefing (F2).

Table 3 Reliability of the factors, and replicability index

Reliability and replicability of the data collection tool

Table 3 presents the reliability and replicability indices of the modified ESSAF scale. With either index, factor reliability was good (≥ 0.70), with factor 3 showing the lowest, borderline reliability. Moreover, the replicability of the scale factors is shown with the H-index, which evaluates how well a set of items represents a common factor (values > 0.80 suggest a well-defined latent variable that is more likely to be stable across studies); all factors showed good replicability, with values close to unity.

Applicability

Table 4 shows the descriptive analysis of the scale reduced to 25 items, as well as the number of items in each subscale, with the mean and median scores obtained in our sample. Additional File 2 includes a table with the definitive distribution of the items for each of the subscales.

Table 4 Descriptive analysis of ESSAF- adapted

Discussion

The ESSAF scale reduced to 25 items and six factors assesses pre-, intra-, and post-simulation (debriefing) aspects with high reliability, making it a simpler and more reliable tool than the original and facilitating comprehensive evaluation of simulation programs.

This need for comprehensive simulation program evaluation has increased as a result of the development of best practice standards and is a key point for academic and clinical simulation programs to determine if efforts to improve knowledge, skills and/or attitudes have been effective [18, 19]. At the same time, this assessment can be complex and having a simple tool that is applicable to students with different academic backgrounds can help in this evaluation process.

The ESSAF scale presented good internal reliability (α = 0.859) and high replicability indices (H-index close to unity). However, the reliability analysis of the individual dimensions in the present study does not replicate the good reliability found by the original authors [12]; only two of the eight factors of the ESSAF tool presented α ≥ 0.70. The availability of a larger sample, with students from different academic years, made it possible to simplify the scale, eliminate items, and establish a new classification by subscales or factors.

These six factors cover all the key aspects of simulation training [35] and encompass all areas of training described in the literature in a simple and reliable manner for all levels of experience in the nursing curriculum. They are not only focused on the direct assessment of nursing care; the factors also enable assessment of the benefit for the cognitive competencies of reasoning and prior preparation, and of the benefit of feedback or subsequent debriefing, which is currently considered the key to any clinical simulation activity [13, 35]. This aspect is not always evaluated, as reflected in the systematic review by Levett-Jones and Lapkin (2014), which included 10 controlled studies in undergraduate nursing, only two of which addressed the benefit or impact of feedback or subsequent debriefing [36].

Table 4 shows the mean scores of the different factors and, as mentioned in the literature reviewed, the factor referring to debriefing (F2) has the highest scores of all the factors, which reflects that our students have recognized the importance of debriefing to generate new models of thinking and how to apply them in future practice [10].

As for the factors that encompass direct care competencies, we can clearly differentiate the factor focused on care itself (F1) from those that are becoming increasingly important in the curricular design of undergraduate training, such as communication skills with patients and family (F6), teamwork (F4), and safety and confidence (F5), all of which were recently highlighted in a systematic review showing the usefulness of simulation training for acquiring these types of competencies [37].

Finally, the factor related to the benefits or usefulness of prior planning (F3) will allow completion of the comprehensive evaluation described in the literature, in this case focused on the pre-simulation phase. It will help in shaping the narrative of the clinical scenario, which relates to decision making within the scenario and to its level of complexity. The teacher can use this information to guide decisions on the type of information to provide and thus adjust the complexity of the clinical scenario [11, 38].

Limitations

As for the limitations of this work, we began with a limited sample obtained by non-probabilistic convenience sampling. Although various rules are currently used to determine the number of subjects required for validation studies, such as N/p-type criteria and the criterion of 10 times more subjects than items, they are discouraged because they lack a solid basis [27]. In fact, there is no consensus, since the minimum recommended size depends on numerous factors. Logically, the larger the available sample, the more confidence we can have that the solution obtained is stable, especially when communality is low, when many factors may be extracted, or when there are few items per factor. Nonetheless, to evaluate the quality of a test, a sample of at least 200 cases is recommended even under optimal conditions of high communalities and well-determined factors [27]. We opted for the criterion of 10 subjects per item, which yields a sample well above 200 and is therefore adequate for the purpose of the study.

Another limitation is the homogeneity of the sample: there is an imbalance among participant characteristics, since 70.1% had no previous experience in simulation, and there is also a higher percentage of first- and second-year students. However, this could be considered a strength, because the psychometric characteristics were adequate despite the non-homogeneous sample.

Conclusions

Conducting ongoing evaluation of the simulation program provides teachers with the data needed to recognize and implement changes in future simulation experiences.

We have observed that the modified ESSAF scale, divided into six subscales, is as practical and reliable as the original scale for use by nursing students from different academic years and with different degrees of clinical experience. This new classification is very useful for providing teachers with feedback not only on the competencies acquired but also on the design of the simulated clinical experiences and their subsequent analysis or debriefing.

Evaluating each simulation program with different tools can be complex and tiring for teachers and students. This simple and concise tool can be the first step in evaluating a simulation program for nursing students in a comprehensive manner and guide a second, more concise evaluation phase on relevant aspects that have been detected.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

EFA:

Exploratory factor analysis

EU:

European Union

ESSAF:

High-Fidelity Simulation Satisfaction Scale for Students

GDPR:

General Data Protection Regulation

GFI:

Goodness of Fit Index

IQR:

Interquartile range

KMO:

Kaiser-Meyer-Olkin

ORION:

Overall Reliability of fully Informative prior Oblique N-EAP scores

PA:

Parallel analysis

SD:

Standard deviation

SPSS:

Statistical Package for the Social Sciences

ULS:

Unweighted least squares

References

  1. Piña-Jiménez I, Amador-Aguilar R. La enseñanza de la enfermería con simuladores, consideraciones teórico-pedagógicas para perfilar un modelo didáctico. Enfermería Univ. 2015;12(3):152–9. https://doi.org/10.1016/j.reu.2015.04.007.

  2. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63–S67. https://doi.org/10.1097/00001888-199009000-00045.

  3. Aggarwal R, Mytton OT, Derbrew M, et al. Training and simulation for patient safety. BMJ Qual Saf. 2010;19(Suppl 2):i34–i43. https://doi.org/10.1136/QSHC.2009.038562.

  4. Urra Medina E, Sandoval Barrientos S, Irribarren Navarro F. El desafío y futuro de la simulación como estrategia de enseñanza en enfermería. Investig en Educ Médica. 2017;6(22):119–25. https://doi.org/10.1016/J.RIEM.2017.01.147.

  5. Durá MJ, Merino F, Abajas R, Meneses A, Quesada A, González AM. Simulación de alta fidelidad en España: De la ensoñación a la realidad. Rev Esp Anestesiol Reanim. 2015;62(1):18–28. https://doi.org/10.1016/j.redar.2014.05.008.

  6. Gaba DM. The future vision of simulation in health care. Qual Saf Heal Care. 2004;13(suppl1):i2–i10. https://doi.org/10.1136/qhc.13.suppl_1.i2.

  7. Fraser K, Peets A, Walker I, et al. The effect of simulator training on clinical skills acquisition, retention and transfer. Med Educ. 2009;43(8):784–9. https://doi.org/10.1111/j.1365-2923.2009.03412.x.

  8. Jones F, Passos-Neto CE, Freitas O, Braghiroli M. Simulation in Medical Education: brief history and methodology. Princ Pract Clin Res. 2015;1(2). https://doi.org/10.21801/ppcrj.2015.12.8.

  9. Weller JM, Nestel D, Marshall SD, Brooks PM, Conn JJ. Simulation in clinical teaching and learning. Med J Aust. 2012;196(9):1–5. https://doi.org/10.5694/mja10.11474.

  10. Maestre JM, Rudolph JW. Theories and styles of debriefing: the good judgment method as a tool for formative assessment in healthcare. Rev Esp Cardiol (Engl Ed). 2015;68(4):282–5. https://doi.org/10.1016/j.rec.2014.05.018.

  11. León-Castelao E, Maestre JM. Prebriefing in healthcare simulation: Concept analysis and terminology in spanish. Educ Med. 2019;20(4):238–48. https://doi.org/10.1016/j.edumed.2018.12.011.

  12. Alconero-Camarero AR, Gualdrón-Romero A, Sarabia-Cobo CM, Martínez-Arce A. Clinical simulation as a learning tool in undergraduate nursing: validation of a questionnaire. Nurse Educ Today. 2016;39:128–34. https://doi.org/10.1016/j.nedt.2016.01.027.

  13. Cook DA, Hamstra SJ, Brydges R, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach. 2013;35(1):e867–e898. https://doi.org/10.3109/0142159X.2012.714886.

  14. Al Khasawneh E, Arulappan J, Natarajan JR, Raman S, Isac C. Efficacy of Simulation using NLN/Jeffries nursing Education Simulation Framework on satisfaction and self-confidence of undergraduate nursing students in a Middle-Eastern Country. SAGE Open Nurs. 2021;7:1–10. https://doi.org/10.1177/23779608211011316.

  15. Smith SJ, Roehrs CJ. High-fidelity simulation: factors correlated with nursing student satisfaction and self-confidence. Nurs Educ Perspect. 2009;30(2):74–8.

  16. Akhu-Zaheya LM, Gharaibeh MK, Alostaz ZM. Effectiveness of simulation on knowledge acquisition, knowledge retention, and self-efficacy of nursing students in Jordan. Clin Simul Nurs. 2013;9(9). https://doi.org/10.1016/J.ECNS.2012.05.001.

  17. Waxman KT. The development of evidence-based clinical simulation scenarios: guidelines for nurse educators. J Nurs Educ. 2010;49(1):29–35. https://doi.org/10.3928/01484834-20090916-07.

  18. Sittner BJ, Aebersold ML, Paige JB, et al. INACSL Standards of best practice for simulation: past, present, and future. Nurs Educ Perspect. 2015;36(5):294–8. https://doi.org/10.5480/15-1670.

  19. Leighton K, Foisy-Doll C, Mudra V, Ravert P. Guidance for Comprehensive Health Care Simulation Program evaluation. Clin Simul Nurs. 2020;48:20–8. https://doi.org/10.1016/J.ECNS.2020.08.003.

  20. Levett-Jones T, Lapkin S, Hoffman K, Arthur C, Roche J. Examining the impact of high and medium fidelity simulation experiences on nursing students’ knowledge acquisition. Nurse Educ Pract. 2011;11(6):380–3. https://doi.org/10.1016/j.nepr.2011.03.014.

  21. Oh PJ, Jeon KD, Koh MS. The effects of simulation-based learning using standardized patients in nursing students: a meta-analysis. Nurse Educ Today. 2015;35(5):e6–e15. https://doi.org/10.1016/j.nedt.2015.01.019.

  22. Sigalet E, Donnon T, Grant V. Undergraduate students’ perceptions of and attitudes toward a simulation-based interprofessional curriculum: the kidSIM ATTITUDES questionnaire. Simul Healthc. 2012;7(6):353–8. https://doi.org/10.1097/SIH.0B013E318264499E.

  23. Franklin AE, Burns P, Lee CS. Psychometric testing on the NLN Student satisfaction and self-confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses. Nurse Educ Today. 2014;34(10):1298–304. https://doi.org/10.1016/j.nedt.2014.06.011.

  24. Ahn H, Kim HY. Implementation and outcome evaluation of high-fidelity simulation scenarios to integrate cognitive and psychomotor skills for korean nursing students. Nurse Educ Today. 2015;35(5):706–11. https://doi.org/10.1016/J.NEDT.2015.01.021.

  25. De Vet HCW, Adèr HJ, Terwee CB, Pouwer F. Are factor analytical techniques used appropriately in the validation of health status questionnaires? A systematic review on the quality of factor analysis of the SF-36. Qual Life Res. 2005;14(5):1203–18. https://doi.org/10.1007/s11136-004-5742-3.

  26. Lloret-Segura S, Ferreres-Traver A, Hernández-Baeza A, Tomás-Marco I. El análisis factorial exploratorio de los ítems: una guía práctica, revisada y actualizada [Exploratory item factor analysis: a practical guide revised and updated]. An Psicol. 2014;30(3):1151–69. https://doi.org/10.6018/analesps.30.3.199361.

  27. Ferrando PJ, Anguiano-Carrasco C. El análisis factorial como técnica de investigación en psicología [Factor analysis as a research technique in psychology]. Papeles del Psicólogo. 2010;31(1):18–33. https://www.redalyc.org/pdf/778/77812441003.pdf. Accessed August 18, 2021.

  28. Hair JF, Anderson RE, Tatham RL, Black WC. Análisis Multivariante [Multivariate Data Analysis]. Prentice-Hall; 2004.

  29. Kline P. A handbook of test construction: introduction to psychometric design. Methuen; 1986.

  30. Ferrando PJ, Lorenzo-Seva U. A note on improving EAP trait estimation in oblique factor-analytic and item response theory models. Psicologica. 2016;37(2):235–47.

  31. Green SB, Yang Y. Evaluation of dimensionality in the assessment of internal consistency reliability: coefficient alpha and omega coefficients. Educ Meas Issues Pract. 2015;34(4):14–20. https://doi.org/10.1111/emip.12100.

  32. Huysamen GK. Coefficient alpha: unnecessarily ambiguous; unduly ubiquitous. SA J Ind Psychol. 2006;32(4):34–40. https://doi.org/10.4102/sajip.v32i4.242.

  33. Kelley K, Cheng Y. Estimation of and confidence interval formation for reliability coefficients of homogeneous measurement instruments. Methodology. 2012;8(2):39–50. https://doi.org/10.1027/1614-2241/a000036.

  34. Lorenzo-Seva U. Promin: a method for oblique factor rotation. Multivar Behav Res. 1999;34(3):347–65. https://doi.org/10.1207/S15327906MBR3403_3.

  35. Cheng A, Grant V, Robinson T, et al. The Promoting Excellence and Reflective Learning in Simulation (PEARLS) approach to health care debriefing: a faculty development guide. Clin Simul Nurs. 2016;12(10):419–28. https://doi.org/10.1016/j.ecns.2016.05.002.

  36. Levett-Jones T, Lapkin S. A systematic review of the effectiveness of simulation debriefing in health professional education. Nurse Educ Today. 2014;34(6). https://doi.org/10.1016/j.nedt.2013.09.020.

  37. Foster M, Gilbert M, Hanson D, Whitcomb K, Graham C. Use of simulation to develop teamwork skills in prelicensure nursing students: an integrative review. Nurse Educ. 2019;44(5):E7–E11. https://doi.org/10.1097/NNE.0000000000000616.

  38. Wilson RD, Hospital MC, Klein JD. Design, implementation and evaluation of a nursing simulation: a design and development research study. J Appl Instr Des. 2009;2(1):57–68.

Acknowledgements

Not applicable.

Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information

Authors and Affiliations

Authors

Contributions

AM: Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing. JR: Conceptualization, Formal analysis, Methodology, Writing – original draft, Writing – review & editing. EV: Formal analysis, Software, Writing – original draft. PR: Investigation, Writing – original draft, Writing – review & editing, Visualization. AT: Investigation, Writing – original draft, Writing – review & editing, Visualization. AH: Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing, Visualization.

Corresponding author

Correspondence to Julián Rodríguez-Almagro.

Ethics declarations

Ethics approval and informed consent to participate

All methods were carried out in accordance with relevant guidelines and regulations. The study was submitted to the ethics committee of the Universidad Autónoma de Madrid, which ruled on October 1, 2021 that the project did not contradict ethical standards and, as a satisfaction survey, did not require full ethical evaluation. Participants were informed about the study and gave their informed consent to take part in the research. All data were treated confidentially in accordance with Organic Law 3/2018 of 5 December on the Protection of Personal Data and Guarantee of Digital Rights and with Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 (GDPR), keeping them strictly confidential and inaccessible to unauthorized third parties. The simulation scenarios were recorded for later analysis during the debriefing; all participants were informed that the recordings would be used exclusively for teaching or research purposes and signed their informed consent to the recording.

Consent for publication

Not applicable.

Competing interests

No conflict of interest has been declared by the author(s).

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Martínez-Arce, A., Rodríguez-Almagro, J., Vélez-Vélez, E. et al. Validation of a short version of the high-fidelity simulation satisfaction scale in nursing students. BMC Nurs 22, 344 (2023). https://doi.org/10.1186/s12912-023-01515-2

Keywords