
Development of a patient-reported outcome measure of digital health literacy for chronic patients: results of a French international online Delphi study

Abstract

Background

A psychometrically robust patient-reported outcome measure (PROM) to assess digital health literacy for chronic patients is needed in the context of digital health. In previous studies, we defined measurement constructs for a new PROM using a systematic review, a qualitative description of constructs from patients and health professionals, and an item pool identification process. This study aimed to evaluate the content validity of a digital health literacy PROM for chronic patients using an e-Delphi technique.

Methods

An international three-round online Delphi (e-Delphi) study was conducted among a francophone expert panel gathering academics, clinicians and patient partners. These experts rated the relevance, improvability, and self-ratability of each construct (n = 5) and item (n = 14) of the preliminary version of the PROM on a 5-point Likert scale. Consensus was considered strong if ≥ 70% of panelists agreed or strongly agreed. A qualitative analysis of comments was carried out to describe the personal coping strategies in healthcare expressed by the panel. Qualitative results were presented using a conceptually clustered matrix.

Results

Thirty-four experts completed the study (with 10% attrition at the second round and 5% at the third round). The panel mostly comprised nurses working in clinical practice, academics from nursing science, medicine and public health backgrounds, and patient partners. Five items were excluded, and one question was added during the consensus attainment process. Qualitative comments describing the panel's view of coping strategies in healthcare were analysed. Results showed two important themes that underpin most of the personal coping strategies related to using information and communication technologies: 1) questionable patient capacity to assess digital health literacy, 2) digital devices as a factor influencing patients and care.

Conclusion

Consensus was reached on the relevance, improvability, and self-ratability of 5 constructs and 11 items for a digital health literacy PROM. Evaluation of e-health programs requires validated measurement of digital health literacy including the empowerment construct. This new PROM appears as a relevant tool, but requires further validation.


Background

Digital health literacy (DHL) is a concept that aims to improve competencies of patients and communities who are facing problems associated with processing health information from digital devices every day [1]. DHL is defined as “the ability to search, find, understand, evaluate health information from electronic sources and apply the knowledge gained to address or solve a health problem” [2, 3]. It has been estimated that 75.8% of patients have low or problematic DHL in the German population, 72% in the Swiss population and 52.7% in the Portuguese population [4]. De Gani and colleagues point out that in Switzerland, low DHL particularly affects the elderly, people living with a chronic disease, or living in financial deprivation, and those having difficulties with the local language or receiving little social support [5]. People with high DHL report better self-perceived health, are less likely to have chronic diseases or health problems and feel less restricted in their activities if they do suffer from chronic diseases or health problems [6, 7]. Healthier people could also probably have better DHL, so it is essential for nurses and other health professionals taking care of chronic patients to measure DHL [8, 9].

Nurses are uniquely positioned to initiate and facilitate DHL evaluation in clinical practice when using any form of information or communication technologies (ICTs), such as telehealth interventions [10]. Furthermore, there are clinical recommendations for the use of hybrid models that include in-person and virtual care, with the aim of facilitating or maximizing the quality and effectiveness of patient care [11, 12]. Efforts have therefore been made to specify the measurement needed to improve patients’ DHL. Several tools have been developed to assess DHL, such as the eHealth Literacy Scale (eHEALS) [13], the eHealth Literacy Questionnaire (eHLQ) [14], the Digital Health Literacy Instrument (DHLI) [15], and the HL-DIGI of M-POHL 2019 [16].

Existing tools have been criticized for being too long and for having poorly reported psychometric properties [17,18,19]. First, existing tools do not account for patients’ abilities to interact with health professionals about their adaptation and coping processes (e.g., typing, searching for information, sharing opinions and emotions) through digital devices [17], which is indispensable when using e-health. In addition, common techniques to elicit information from adults are questionnaires or online questionnaires [17, 19]. It should also be pointed out that one key limiting factor in enabling patients to engage with digital resources is DHL itself [18]. It is therefore methodologically important to obtain input from the target population and clinicians to provide a clear definition of the DHL concept to be measured. Considering the input of patients and health professionals yields a definition that “is a statement of an understanding of the construct DHL to be measured” in clinical practice, and of measurement constructs such as digital literacy and information literacy [20]. By better understanding individuals’ level of DHL, it is possible to identify the needs of specific groups in order to develop appropriate information or education interventions and ensure equitable access to healthcare for a broader public.

A comprehensive, psychometrically robust patient-reported outcome measure (PROM) to assess DHL for personal health among chronic patients does not yet exist. PROMs are defined as standardized, validated questionnaires (also called instruments) related to patients’ health status that are completed by patients [21, 22]. This multi-phase research project aimed to develop a DHL PROM for chronic patients following the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) recommendations [19]. Using the Roy adaptation model as a framework [23, 24], we previously conducted a qualitative exploration of chronic patients’ and professionals’ (nurses and doctors) understanding and definition of DHL when using ICTs in healthcare. These findings, combined with those of a systematic review of existing DHL measures, informed the development of a preliminary PROM [25]. This study aimed to evaluate the content validity of a digital health literacy PROM for chronic patients using an e-Delphi technique (step 1.4 in Fig. 1).

Fig. 1

Flow-chart of the PROM development process

Methods

Preliminary development (Phase 1, step 1.1 to 1.3)

Following the COSMIN recommendations [26], the first step – conceptual framework – addressed the need for a detailed definition of the constructs of DHL and of chronic patients in the context of ICT use. A systematic review of DHL PROMs [17] and a qualitative analysis of DHL-related outcome constructs relevant to patients and clinicians were performed (steps 1.1 and 1.2 in Fig. 1).

The item pool identification process for each measurement construct described as relevant drew on the systematic review. This process allowed us to list all measurement instruments and a set of relevant items that can be used to measure digital health literacy [25]. Each item from each retained PROM was extracted and listed (steps 1.1 and 1.2 in Fig. 1). A first evaluation of the initial pool of items (n = 67) was carried out with four patient partners from our research committee with different levels of digital health literacy. The items were then confirmed following a cross-check against the qualitative results of the previous step (step 1.2).

A total of 27 items were retained from this first screening. Content for each item was extracted from existing measures, and missing items were formulated. Three researchers then conducted a consensus process for each domain and item according to DeWalt’s criteria of consistency, clarity, applicability and absence of confusing wording [27, p. 4] (step 1.3 in Fig. 1). Finally, 11 items based on an existing questionnaire, the DHLI [15], were translated from English to French using the WHODAS 2.0 Translation package [28,29,30], and three items were formulated by patient partner members of our research committee. A formal request for the use of the DHLI’s questions was sent to the authors. This PROM aimed to measure self-reported improvement in abilities when using personal health information from ICTs.

e-Delphi study (Phase 1, step 1.4)

We used an e-Delphi approach to conduct this study. This manuscript is written in accordance with the CREDES guideline for the reporting of Delphi studies [31]. We followed recommendations to define outcome criteria indicators [32, 33] (see Table 1). The Delphi method is a structured process whose components (anonymity, iteration, controlled feedback, and statistical aggregation) aim to improve the pooling of experts’ opinions [34, 35]. More specifically, the Delphi method structures the communication process through rounds of questionnaires, eliminating or reducing some of the problems often present when experts are directly confronted in face-to-face discussions (e.g., dominating personalities, time constraints), in order to obtain a more reliable group opinion and to identify areas of consensus or dissent [36,37,38]. Given the complex professional and caring relationships between primary care clinicians (doctors, nurses, etc.) and patients, as well as the potential tensions between clinical, experiential and academic expertise, we considered that the Delphi approach offered a safe and rigorous communication format. Furthermore, geographically distant experts could take part in the asynchronous communication process because the questionnaire was accessible online.

Table 1 Definitions, statistical measures of consensus and outcome criteria indicators

In this study, we used a three-round online Delphi (e-Delphi) process to assess the content validity of the constructs and items of a preliminary PROM by assessing experts’ agreement on relevance (R), improvability (I), and self-ratability (S-R). Experts assessed each construct or item according to three criteria [39]: 1) relevance: the construct or item is relevant to assess digital health literacy; 2) improvability: measurement of the construct or item is improvable by a clinical intervention; 3) self-ratability: the construct or item is self-ratable by patients.

Because the first qualitative exploration was conducted in French in Switzerland, this study used a French version of the questionnaire, while this paper is reported in English.

Expert eligibility, recruitment of the panel

Panel size

There is no universal guide to sample size calculation in Delphi studies. According to Belton et al., a panel of 5–20 experts should be used for Delphi studies [40]. Furthermore, a personalized approach is essential when communicating with participants to sustain their engagement and reduce dropouts [38]. According to a systematic review [41] of 80 Delphi studies in healthcare, the median number of Delphi panel members is 17 (interquartile range = 11–31). Considering the heterogeneity of the expert panel pursued in our study (patients, academics, clinicians), we aimed to recruit around 40 participants to obtain a manageable panel of about twice that median size, assuming a 15% loss to follow-up. No specific ratios per type of expert were pre-defined.
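The sizing logic above amounts to a simple back-of-the-envelope calculation. The following sketch illustrates the reasoning (variable names are ours, and the uniform-attrition assumption is an illustration, not a formula taken from the study protocol):

```python
import math

median_panel = 17                      # median panel size reported in [41]
target_completers = 2 * median_panel   # aim for roughly twice that median
attrition = 0.15                       # assumed 15% loss to follow-up

# Recruit enough experts so that ~85% retention still yields the target panel
to_recruit = math.ceil(target_completers / (1 - attrition))
print(to_recruit)  # 40
```

With 40 recruits, a 15% dropout over the three rounds still leaves a panel of about 34 completers, which matches the study's final sample.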

Participant recruitment

A purposive sampling method was used. Experts from three distinct groups were recruited in French speaking countries worldwide if they were aged 18 or above, able to read and write in French and willing to participate:

  • Academics: eligible academics had to have at least two peer-reviewed publications focused on e-health, digital health literacy or self-reported measures indexed in PubMed. They were identified through existing researcher networks and universities.

  • Clinicians: healthcare professionals who used ICTs (technological resources including health websites, health apps, connected objects, telemedicine or telehealth) in clinical practice with chronic patients (e.g., nurses, doctors). Eligible participants had to have a relevant qualification (e.g., a medical or nursing degree) and over six months of experience in clinical practice, including working with low-literacy populations. They were identified through existing professional networks in the research committee.

  • Patient partners: these participants were purposively selected based on their known expertise in the use of ICTs and their chronic condition(s) from officially registered patient partners’ associations in Belgium, France, Canada, and Switzerland. They had to be current or past users of technological resources, either directly or with the support of a relative.

The research project was approved by the Ethics Committee of Laval University (CERUL) in Canada (2021–057, 10th December 2021). Participant recruitment and data collection took place from January 2022 to April 2022.

e-Delphi process

The content of the online questionnaires was based on the preliminary development process (Fig. 1). All rounds were completed electronically and anonymously using Research Electronic Data Capture (REDCap) software [42, 43]. We planned to stop the Delphi process after three rounds, because we considered that additional rounds were unlikely to introduce significant changes and were not worth the risk of increasing attrition due to the repetitive nature of the exercise [44].

All online questionnaires were developed and pretested with a member of the research team and one patient partner to ensure readability and clarity. The first questionnaire, in round one, was used to assess the level of agreement with the DHL constructs. Statements (items) within the DHL constructs were rated in rounds 2 and 3. All questionnaires used a 5-point Likert scale (from 1 = strongly disagree to 5 = strongly agree) and optional open-text comment sections. An example of the assessed statements is presented in Table 2.

Table 2 Sample of survey questions used for empowerment construct

Group opinion was measured as percentages of agreement (%), and interquartile ranges (IQR) were calculated for round-two responses. The percentage of agreement indicates the proportion of panelists who "agreed" or "strongly agreed" that each construct or item met the given assessment criterion. A higher percentage means that the statement was more widely endorsed by the group. The consensus attainment level was defined as strong (≥ 70% agreement), moderate (50–69%) or low (< 50%). Writing a comment was not mandatory, but the questionnaire emphasized that comments were crucial for taking experts’ opinions into account and improving the quality of the PROM’s items. Experts’ comments were then summarized and rephrased in a neutral manner to help orient the next round of consultation and to identify, through the qualitative analysis, any major concern raised by the expert panel [45].

Participants were invited by e-mail. They then received a survey link to an introduction page (Additional file 1). This page included information on the research team, the study, and a consent question, followed by a socio-demographic and clinical questionnaire. Personalized follow-ups were provided to panel members throughout the study, with a reminder sent to nonrespondents after two weeks during each round. If there was still no response after these two weeks, no further communication was sent. Details on data collection and analysis are reported in the following paragraphs on a per-round basis. Qualitative data analyses were conducted using Word and Excel software (Microsoft Corp.), and statistical analyses were performed using Stata 14 (StataCorp LP).

Round one: construct assessment

The first round aimed to assess consensus on the inclusion of 5 constructs in the PROM and their operational definitions (Table 3). Definitions of DHL and constructs were given to ensure that panelists would consider the DHL’s characteristics for patients in daily life. We also provided experts with the initial version of the PROM (14 items), so that they could link the constructs to the PROM’s items. Consensus for construct inclusion was defined as strong agreement (≥ 70%).

Table 3 Measurements constructs and definitions

The results revealed that experts considered the constructs not well suited for self-rating, except for empowerment. Analysis of their comments led us to suspect that a misunderstanding of what was meant by self-ratability may have introduced a systematic error in respondents’ ratings of this criterion. It was therefore clarified in the second-round survey: “Can the construct/item be directly evaluated by the patient, taking into account what the patient believes to be true (perception) and what he/she can do (with coping strategies leading to observable behavior)?” Finally, a major concern raised by the experts was the possibility that patients ask for help from someone else when using ICTs. We therefore chose to incorporate this aspect directly into the relevant PROM questions to be assessed in round 2.

Round two: assessment of items

In round two, experts assessed the PROM’s items (n = 14) to be included or excluded under each construct. At the beginning of the second-round questionnaire, they were invited to consider a summary of comments provided in round one. Interquartile ranges (IQR) were also calculated alongside percentages of agreement (%). The IQR refers to the dispersion of obtained ratings. A smaller range (low dispersion) indicates that the opinions of panelists were more consistent. Consensus for item inclusion was defined as strong agreement (≥ 70%) and low dispersion (IQR < 1) within the expert panel.

Many experts highlighted that the revised formulations of the PROM’s items, which incorporated the possibility of obtaining help from someone else, could be interpreted as double-barreled questions (DBQs). A DBQ is a question that asks about two or more issues but allows only a single answer, forcing respondents to answer two questions (disguised as one) at once. Respondents may therefore understand the stimuli in a DBQ differently and answer based on one of them while disregarding the other, which can adversely affect validity [46]. To avoid this, we opted to return, in the final round, to the original item formulations as assessed in round one, but kept the ratings made by the experts in round two to decide whether these items should be included, excluded, or reassessed. An expert suggested that we look into the Health Assessment Questionnaire (HAQ) by B. Bruce & J.F. Fries [47] as an example of how to include a modality considering the help of another person in a question. We used the question from the French adaptation of the HAQ Disability Index by Guillemin, Briançon & Pourel [48] to formalize the help-from-another-person aspect by adding a separate question in the third round. A formal request for the use of the question was sent to the authors.

Round three: final assessment of items

The third round aimed to assess revised consensus on 7 items that had initially obtained moderate or inconsistent agreement in the second round (Fig. 1) and one added question. Two items were reworded based on experts’ comments and one item was removed because it was seen as redundant with another. At the beginning of the third round, experts received individualized feedback on their round-two ratings, the position of the group for each item (Additional file 2) and a synthesis of comments. Participants were invited to consider the answers of the group to reassess their position and add final comments. Items were presented in descending order based on their percentage of agreement, as recommended in Delphi methodology [49].

Data analysis

Quantitative data analysis

There is no definite criterion to determine consensus in Delphi studies. We chose a percentage of agreement (≥ 70%) for all rounds and added a proportion within a specified range (IQR < 1) in round 2 (item assessment) to measure the level of consensus attainment [50, 51]. Descriptive statistics were performed to assess the convergence towards consensus (percentage, interquartile range, and number of comments), as suggested by von der Gracht [32]. A distributional analysis was done to assess the position of our panel regarding the evaluation of the three criteria for each construct or item.

Thus, as shown in Table 1, a construct or item was considered acceptable for inclusion in the questionnaire if at least 70% of experts agreed or strongly agreed with the statement for all three criteria (relevance, improvability and self-ratability). A statement with agreement between 50 and 69% was resubmitted in the next round. A statement with 49% agreement or less was rejected. For round two only, we additionally required an IQR under 1.00 to indicate consensus for immediate inclusion.
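As a minimal sketch of this decision rule (function names and sample ratings are illustrative, not taken from the study; the handling of mixed agreement across criteria is one plausible reading), the thresholds could be applied to 5-point Likert ratings as follows, with the IQR check enabled only for round two:

```python
import statistics

def pct_agreement(ratings):
    """Share of panelists rating 4 (agree) or 5 (strongly agree), in percent."""
    return 100 * sum(r >= 4 for r in ratings) / len(ratings)

def iqr(ratings):
    """Interquartile range (Q3 - Q1) of the ratings."""
    q1, _, q3 = statistics.quantiles(ratings, n=4)
    return q3 - q1

def decide(ratings_by_criterion, check_iqr=False):
    """Classify one construct/item as 'include', 'resubmit', or 'reject'.

    ratings_by_criterion maps each criterion (relevance, improvability,
    self-ratability) to the panel's ratings; all criteria must pass, so
    the minimum agreement across criteria drives the decision.
    """
    min_agreement = min(pct_agreement(r) for r in ratings_by_criterion.values())
    if min_agreement >= 70:
        if not check_iqr or all(iqr(r) < 1 for r in ratings_by_criterion.values()):
            return "include"
        return "resubmit"  # strong agreement but ratings too dispersed (round 2)
    return "resubmit" if min_agreement >= 50 else "reject"
```

For example, an item rated mostly 4–5 on all three criteria with low dispersion would return "include", whereas agreement between 50 and 69% on any criterion would send the item back to the next round.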

Qualitative data analysis

Qualitative data were analyzed for each round according to the analysis method described by Miles, Huberman & Saldaña [45], and organized by theme. Results of the analysis were presented using a conceptually clustered matrix charting participants’ comments about selected concepts [45]. All comments collected during the e-survey are presented in a way that remains close to the data provided (number of comments, inventory). This feature is crucial in presenting the results of a topical survey according to Sandelowski's typology [52]. From an application perspective, the comments were mapped to the Adaptation Model [23, 24] and the influencing factors that can affect coping strategies: i) personal (beliefs, values, genetics); ii) collective/group (physical facilities, financial resources, interpersonal relationships, social background and culture, decision-making and information systems); iii) technical (processes and methods based on scientific knowledge, used in information management and decision-making systems); iv) policy (health policy: context, infrastructure, evidence-based nursing practice processes and delivery systems) [53, 54]. The purpose of the qualitative analysis was to describe the problems identified with the use of ICTs in healthcare, and the personal coping strategies used, as expressed by the panel.

According to the analysis method described by Miles & Huberman [55], the first step is to use the clustering technique to define specific issues: the analyst identifies the problems (or tensions) that underlie the comments. A similar clustering is then made in relation to "what can be done to solve the problem" and among these solutions as potential coping strategies. A conceptual sorting is then carried out: is the proposed coping strategy of a personal, collective, technical, or political nature? The data entered were essentially short sentences (with the participant's code), and we used the double-confirmation decision rule. We selected a similar proportion (> 50%) of comments from each round to double-code and analyze, ensuring that all DHL constructs were covered. A total of 81 comments (53%) were independently double-coded to ensure the reliability of the analyses. Inferences were then made directly from the data presented: establishing patterns, themes and factoring (i.e., identifying a few general variables underlying many specific variables), which are illustrated by excerpts from the study comments. Full agreement between both researchers was required for the inclusion of statements, with disagreements resolved through discussion [55]. Senior researchers reviewed the data at each stage for feedback and revision prior to dissemination. The data were presented using a conceptually clustered matrix (Additional file 3).

Results

Of the 42 experts invited, two were found to be ineligible (no consent given). The remaining 40 gave their consent to participate. Their characteristics are presented in Table 4. The majority of the panel experts were female (60%). Age was relatively balanced across the panel. Nursing was the most common area of expertise (37.5%). Nine patient partners shared their expertise. Expert panel attrition was 10% during the second round (n = 4: 3 clinicians and 1 academic) and 5% during the third round (n = 2 clinicians). Thirty-four experts completed all e-Delphi rounds.

Table 4 Description of the panel of experts

Results round 1

During round one (n = 40 experts), five potentially important constructs were submitted. Table 5 summarizes the results of the Delphi process for each round. Based on the results obtained from the analysis of the qualitative and quantitative data, experts systematically raised concerns about the self-ratability of all constructs, except for empowerment (R – relevance: 87.2%; I – improvability: 92.3%; SR – self-ratability: 89.7%). The SR percentage of agreement was 67.5% for digital literacy and 41% for reliability of information on the internet, while relevance of information to personal health and privacy were both at 66.7%. The percentage of agreement on the relevance of the privacy construct was 69.2%, very close to our 70% threshold. We decided to keep it because it is a relatively new construct in digital health literacy [15] and it represents a common preoccupation shared by experts. Interestingly, this construct is not present in health literacy studies [56, 57]. Finally, experts raised the need for the PROM to account for the possibility of asking someone for help when using technologies. The 5 constructs and 14 related items were proposed during the second round.

Table 5 Summary of results (R: relevance; I: improvability; SR: self-ratability; + : consensus achieved; ?: sent to next round; × : rejection)

Results round 2

Thirty-six panelists completed round two. During this round, 6 items (3 within digital literacy and 3 within empowerment) achieved a consensus level of 70% or greater and were retained in the final PROM. No item was rejected outright during this round, but one item was removed because experts deemed it redundant. Three items were modified following experts’ comments: “decide whether the information is reliable” and “check different websites to see if they provide the same information” were merged into a single revised item, and “know who can read the message” was revised. The comments received confirmed that experts viewed privacy as an important construct. Furthermore, the notion of trust emerged, linked to the issue of sharing health-related information online with persons who are more or less known.

Seven items that did not reach consensus and one additional item [48] were reassessed in the final round.

Results round 3

Thirty-four panelists assessed 8 items during the third round. Three items did not reach the acceptance threshold and were ultimately excluded from the PROM. The additional item accounting for help from someone else was accepted.

From our starting pool of 5 constructs and 14 items, 5 constructs and 11 items reached consensus to be included in the PROM. The 11-item PROM, named Lisane, is currently only available in its original French version.

Synthesis of qualitative results

The conceptually clustered matrix of results is in Additional file 3. Of the 180 comments made in the e-Delphi, 155 were usable (round 1: n = 44; round 2: n = 84; round 3: n = 27). Unusable comments were excluded because they were redundant remarks on DBQs. The qualitative data extracted and analyzed [45] from the three rounds’ comments showed several problems arising from the use of ICTs. Firstly, most of the problems raised by the experts were formulated with minimal or no coping strategy. The most frequent problems identified related to the difficulty of accessing and using digital resources due to a lack of knowledge, understanding, skills or dexterity. Secondly, experts rarely highlighted a structural or organizational response: the problems were largely treated as an individual learning effort with the support of the group or of technical support, even though the possibility of adaptation to electronic devices may have limits (e.g., related to age, cognitive disorders, understanding and critical thinking for use). Thirdly, two main realities underlie the described problems. The first is the ability to assess specific personal coping strategies and contextual characteristics (stimuli) of DHL that are modifiable. This would involve considering the answers rated as “difficult” or “very difficult” in the questionnaire and asking the patient: “For what reason(s)?” This would make it possible to adapt the way personal needs are discussed and to propose individualized interventions and ICTs adapted to one’s digital literacy level. The second is to consider the digital tool as a contextual factor influencing the care of patients and their families within the health system.

Discussion

This study used an e-Delphi approach to conduct a systematic process to assess consensus on the content of a new digital health literacy PROM. Three evaluation criteria, relevance, improvability and self-ratability, were used by experts to assess each item and construct. Over a three-round process, thirty-four panelists, including patient partners, clinicians and academics, reached consensus on 5 constructs and 11 items. This result reflects the multidimensional nature of DHL outcomes as described in previous work [26]. As for health literacy measurement [58], the way DHL is understood should be closely linked to how it is measured. This means that measures need to follow the evolution of clinical practice with ICTs and the multiple outcomes of chronic patients [38, 59]. Furthermore, using a Delphi technique allowed us to determine which outcomes to measure in clinical practice and in further research [60]. In our e-Delphi study, we captured academics’, clinicians’ and patients’ comprehension of the characteristics of digital health literacy, which had substantial implications for the content validity of our new DHL PROM.

The instrument constructs that achieved consensus among academics, clinicians and patients in this study were digital literacy, reliability of information on the internet, relevance of information to personal health, privacy, and empowerment. These constructs are consistent with previous qualitative research investigating important aspects of chronic patients' skills when using ICTs. A phenomenological study (n = 10) exploring the experience of using telemedicine among people with chronic obstructive pulmonary disease (COPD) identified several themes aligned with our consensus constructs, such as accessibility (health service), support from healthcare professionals (regular follow-up from nurses), enhanced clinical insight (e.g., daily self-measurement of clinical parameters), and mutual language (effective communication) [61]. Another study with a meta-ethnographic design (12 studies) targeting the experience of using telemedicine among patients living with COPD also identified constructs in concordance with our questionnaire. The synthesis revealed three first-order constructs and their second-order constructs: 1) presence: accessibility, digital proximity; 2) transparency: clinical awareness (an overview of patients' health status enabling greater awareness of their individual data), reciprocal dialogue (sharing clinical data and horizontality of clinical language); 3) ambivalence: independent but close (sense of security, control, dignity and independence), restricted but detached [62]. Empowerment, more specifically, is in line with the meta-analysis of Fernandes et al. (2022), who saw it as an important enabler for engaging in telehealth interventions [63]. Our findings reaffirm the prominent role of personal skills and empowerment for patients using ICTs in health care.
Moreover, our study highlighted that structural or organizational responses to problems arising from the use of ICTs were rarely considered, even though personal adaptation to electronic devices has its limits. Many barriers can restrict the use of digital devices: infrastructure barriers (e.g. availability of 4G), financial barriers (e.g. affording an internet-connected smartphone), social attitudes and exclusion (e.g. psychological issues), government support, and education, training and individual support [64,65,66]. It is important that resources be allocated to removing these barriers so that people can access healthcare services.

We used an online survey strategy to recruit French-speaking experts knowledgeable in DHL for chronic patients. The e-Delphi method allowed patients to share their personal points of view about digital health literacy. Other methodologies can be used to reach consensus on the constructs and items of a measure, including nominal group techniques and focus groups. In-person methods allow for richer discussions but are limited by the availability of the experts. Currently, there is no single best method for reaching consensus [59].

The results of this study should interest anyone seeking a better understanding of DHL measurement in chronic patients, such as clinicians working in e-health care environments. The measure developed in this study differs from other validated measures of DHL in that it identified relevant, improvable and self-ratable constructs and items, including empowerment, for chronic patients processing health information with electronic devices.

Further validation of the PROM will be required, considering issues of sample and setting in field-testing, as recommended by Haynes, Richard & Kubany [67]. Since the PROM is self-completed, its usability and validity should be studied from the perspective of people with a wide range of literacy levels. Based on evidence about existing health literacy measurement instruments [19], we suggest that well-developed, validated instruments be selected appropriately for clinical practice.

Strengths and limitations

This research project followed COSMIN recommendations for PROM development [21] and other recommendations for defining outcome criteria and indicators [33, 34]. Several limitations should be noted. First, there were comparatively fewer patient members than academic or clinical members in the panel. Although the Delphi process allowed patients' comments to be fed back anonymously and thus potentially refine clinicians' and academics' opinions, patients may have carried less weight in the overall consensus. Secondly, the change in the focus of the assessment between the initial and subsequent rounds, introduced to familiarize the experts gradually with the underlying constructs of DHL and the PROM, may have limited the number of iterations available to assess the items. Two reworded items were ultimately rejected because only one of the three criteria remained "uncertain". We cannot be sure that an additional round would have resolved this uncertainty one way or the other, but it remains a possibility.

The strengths of the study are the diversity of the experts' profiles and the low attrition rate across the three rounds of the e-Delphi. Patient partners play an important role in the healthcare system, but those in our panel were few and cannot be considered representative of all patients. We used a structured and rigorous communication approach that circumvented important biases in group reasoning. Panelists submitted many rich comments and explanations that strengthened the group communication process. However, despite its numerous advantages, the e-Delphi method presents inherent challenges for patients with little or no digital health literacy. Beyond their limited access to the online communication process, their understanding of and opinions about the three evaluation criteria assessed in this study may have differed markedly from those of other panel members, increasing the difficulty of reaching consensus [68]. Nevertheless, their perspectives on the clarity and usability of the PROM are highly relevant and should be explicitly sought during the next stages of instrument validation.

Implications for research

This multi-phase research project is working towards a new PROM for assessing the digital health literacy of chronic patients who face problems processing health information from digital devices. Current measurement approaches report four main conceptual models and related measures that respect the dynamic context of DHL [27], but do not consider patients' empowerment. Further research would benefit from anchoring DHL assessment in person-centered frameworks [69], such as the Adaptation Model, to ensure that any scale developed encompasses outcomes, values, and patients' preferences for accessing health care. We have yet to evaluate the response process validity and the internal and external validity of the proposed measure. To achieve this, it is essential to ensure that online administration of the questionnaire does not prevent digitally disadvantaged groups from completing it.

Conclusion

This study identified 5 DHL constructs and 11 items on which patients, clinicians, and academics agreed regarding their relevance, improvability, and self-ratability. In addition, we found that experts expressed minimal or no coping strategies regarding difficulty in accessing and using digital resources. Structural or organizational responses were rarely highlighted, even though adaptation to electronic devices may have limits (e.g., related to age). It is therefore critical to consider DHL assessment instruments, their ease of deployment and the application of their outputs in practice as digital healthcare delivery becomes further integrated. The resulting PROM will undergo further evaluation and may help assess chronic patients' abilities when using digital health.

Availability of data and materials

The data that support the findings of this study and the questionnaire Lisane are available from the corresponding author upon reasonable request.

Abbreviations

COSMIN:

COnsensus-based standards for the selection of health measurement instruments

DHL:

Digital health literacy

e-Delphi:

Electronic-Delphi

ICTs:

Information and communication technologies

PROM:

Patient-reported outcome measure

REDCap:

Research electronic data capture system

References

  1. European Commission. Flash Eurobarometer 404 “European citizens' digital health literacy”. 2014. https://doi.org/10.2759/88726.

  2. Novillo Ortiz D. Health: digital health literacy. First Meeting of the WHO GCM/NCD Working Group on Health Literacy for NCDs. 2017 Feb 27–28 [Online]. Retrieved from https://vdocuments.mx/digital-health-literacy-world-health-organization-first-meeting-of-the-who-gcmncd.html?page=1. Accessed 22 January 2023.

  3. Norman CD, Skinner HA. eHealth Literacy: Essential Skills for Consumer Health in a Networked World. J Med Internet Res. 2006;8(2):e9. https://doi.org/10.2196/jmir.8.2.e9.

  4. M-POHL. Publications [Online]. https://m-pohl.net/HLS_Project_Publications_Presentations. Accessed January 22, 2023.

  5. De Gani SM, Jaks R, Bieri U, Kocher JPh. Health Literacy Survey Schweiz 2019–2021. Schlussbericht im Auftrag des Bundesamtes für Gesundheit BAG [Final report commissioned by the Federal Office of Public Health FOPH. In German with an English summary]. Zurich: Careum Stiftung; 2021. Retrieved from https://m-pohl.net/sites/m-pohl.net/files/inline-files/HLS19-21-CH_Schlussbericht_Careum%20Gesundheitskompetenz_Health%20Literacy%20Survey_20210914.pdf. Accessed 22 January 2023.

  6. Pelikan JM, Straßmayr C, Ganahl K. Health Literacy Measurement in General and Other Populations: Further Initiatives and Lessons Learned in Europe (and Beyond). In: Logan GD, Siegel ER, editors. Health Literacy in Clinical Practice and Public Health. Amsterdam: IOS Press; 2020. p. 170–91.

  7. Pelikan JM, Link T, Straßmayr C. The European Health Literacy Survey 2019 of M-POHL: a summary of its main results. European Journal of Public Health. 2021;3:164–497. https://doi.org/10.1093/eurpub/ckab164.497.

  8. Délétroz C, Bou-Malhab P, Bodenmann P, Gagnon MP. Chapitre 1.6. Les spécificités de la littératie en santé numérique des patients à l’heure d’Internet et du numérique. In Bodenmann P, Vu F, Wolff H, Jackson Y (Eds), Vulnérabilités, diversités et équité en santé. Geneva: Planète Santé; 2022.

  9. Drossaert C. Measuring digital health literacy, why and how? Ann Rheum Dis. 2018;77:35. https://doi.org/10.1136/annrheumdis-2018-eular.7798.

  10. Kappes M, Espinoza P, Jara V, Hall A. Nurse-led telehealth intervention effectiveness on reducing hypertension: a systematic review. BMC Nurs. 2023;22(1):19. https://doi.org/10.1186/s12912-022-01170-z.

  11. World Health Organization. Recommendations on digital interventions for health system strengthening. Geneva: WHO; 2019.

  12. Heart and stroke Foundation of Canada. 2021 CSBP-F20-Virtual Care Decision Framework. 2021. Retrieved from https://heartstrokeprod.azureedge.net/-/media/1-stroke-best-practices/csbp-f20-virtualcaredecisionframework-en.ashx?la=en&rev=9db7990386364a1b8253401c0313d634. Accessed 22 January 2023.

  13. Norman CD, Skinner HA. eHEALS: the eHealth literacy scale. J Med Internet Res. 2006;8(4): e27. https://doi.org/10.2196/jmir.8.4.e27.

  14. Kayser L, Karnoe A, Furstrand D, Batterham R, Christensen KB, Elsworth G, Osborne RH. A Multidimensional Tool Based on the eHealth Literacy Framework: Development and Initial Validity Testing of the eHealth Literacy Questionnaire (eHLQ). J Med Internet Res. 2018;20(2): e36. https://doi.org/10.2196/jmir.8371.

  15. van der Vaart R, Drossaert C. Development of the Digital Health Literacy Instrument: Measuring a Broad Spectrum of Health 1.0 and Health 2.0 Skills. J Med Internet Res. 2017;19(1):e27. https://doi.org/10.2196/jmir.6709.

  16. M-POHL. The HLS19-DIGI Instrument to measure Digital Health Literacy. 2022 Jun. Retrieved from https://m-pohl.net/sites/m-pohl.net/files/inline-files/Factsheet%20HLS19-DIGI.pdf. Accessed 22 January 2023.

  17. Délétroz C, Canepa Allen M, Yameogo AR, Sasseville M, Rouquette A, Bodenmann P, Gagnon M-P. Systematic review of the measurement properties of patient-reported outcome measures (PROMs) of eHealth literacy in adult populations. Syst Rev. 2023; [submitted].

  18. Faux-Nightingale A, Philp F, Chadwick D, Singh B, Pandyan A. Available tools to evaluate digital health literacy and engagement with eHealth resources: A scoping review. Heliyon. 2022;8(8): e10380. https://doi.org/10.1016/j.heliyon.2022.

  19. Tavousi M, Mohammadi S, Sadighi J, Zarei F, Kermani RM, Rostami R, Montazeri A. Measuring health literacy: A systematic review and bibliometric analysis of instruments from 1993 to 2021. PLoS ONE. 2022;17(7): e0271524. https://doi.org/10.1371/journal.pone.0271524.

  20. Terwee CB, Prinsen CAC, Chiarotto A, Westerman MJ, Patrick DL, Alonso J, Bouter LM, de Vet HCW, Mokkink LB. COSMIN methodology for evaluating the content validity of patient-reported outcome measures: a Delphi study. Qual Life Res. 2018;27(5):1159–70. https://doi.org/10.1007/s11136-018-1829-0.

  21. De Vet HC, Terwee CB, Mokkink LB, Knol DL. Measurement in medicine: a practical guide. Cambridge: Cambridge University Press; 2011.

  22. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, Bouter LM, de Vet HC. The COSMIN study reached international consensus on taxonomy, terminology, and definitions of measurement properties for health-related patient-reported outcomes. J Clin Epidemiol. 2010;63(7):737–45. https://doi.org/10.1016/j.jclinepi.2010.02.006.

  23. Roy C. The Roy adaptation model. 3rd ed. Upper Saddle River (NJ): Prentice Hall; 2008.

  24. Senesac, P. Roy, C. Chapter 10. Callista Roy’s Adaptation Model. In Smith, M. C. (5th Ed.). Nursing Theories and Nursing Practice. Philadelphia: F. A. Davis; 2020. p. 149–63.

  25. Prinsen CA, Vohra S, Rose MR, Boers M, Tugwell P, Clarke M, Williamson PR, Terwee CB. How to select outcome measurement instruments for outcomes included in a “Core Outcome Set”–a practical guideline. Trials. 2016;17(1):1–10. https://doi.org/10.1186/s13063-016-1555-2.

  26. Délétroz C, Canepa Allen M, Sasseville M, Rouquette A, Bodenmann P, Gagnon MP. Revue systématique des mesures de littératie en santé numérique pour les patients: résultats préliminaires. Scie Nursing Health Practices/Scie infirmière et pratiques en santé. 2022;5:15. https://doi.org/10.7202/1093075ar.

  27. DeWalt DA, Rothrock N, Yount S, Stone AA; PROMIS Cooperative Group. Evaluation of item candidates: the PROMIS qualitative item review. Med Care. 2007 May;45(5 Suppl 1):S12–21. https://doi.org/10.1097/01.mlr.0000254567.79743.e2.

  28. World Health Organization. WHODAS 2.0 Translation Package (Version 1.0) Translation And Linguistic Evaluation Protocol And Supporting Material. Retrieved from https://terrance.who.int/mediacentre/data/WHODAS/Guidelines/WHODAS%202.0%20Translation%20guidelines.pdf. Accessed 15 February 2023.

  29. Kalfoss M. Translation and Adaption of Questionnaires: A Nursing Challenge. SAGE Open Nurs. 2019;23(5):2377960818816810. https://doi.org/10.1177/2377960818816810.

  30. Behr D. Assessing the use of back translation: The shortcomings of back translation as a quality testing method. Int J Soc Res Methodol. 2017;20(6):573–84. https://doi.org/10.1080/13645579.2016.1252188.

  31. Jünger S, Payne SA, Brine J, Radbruch L, Brearley SG. Guidance on Conducting and REporting DElphi Studies (CREDES) in palliative care: Recommendations based on a methodological systematic review. Palliat Med. 2017;31(8):684–706. https://doi.org/10.1177/0269216317690685.

  32. von der Gracht H. Consensus measurement in Delphi studies Review and implications for future quality assurance. Technol Forecast Soc Chang. 2012;79:1525–36. https://doi.org/10.1016/j.techfore.2012.04.013.

  33. Diamond IR, Grant RC, Feldman BM, Pencharz PB, Ling SC, Moore AM, Wales PW. Defining consensus: a systematic review recommends methodologic criteria for reporting of Delphi studies. J Clin Epidemiol. 2014;67(4):401–9. https://doi.org/10.1016/j.jclinepi.2013.12.002.

  34. Foth T, Efstathiou N, Vanderspank-Wright B, Ufholz LA, Dütthorn N, Zimansky M, Humphrey-Murto S. The use of Delphi and Nominal Group Technique in nursing education: A review. Int J Nurs Stud. 2016;60:112–20. https://doi.org/10.1016/j.ijnurstu.2016.04.015.

  35. Watkins RE, Elliott EJ, Halliday J, O’Leary CM, D’Antoine H, Russell E, Hayes L, Peadon E, Wilkins A, Jones HM, McKenzie A, Miers S, Burns L, Mutch RC, Payne JM, Fitzpatrick JP, Carter M, Latimer J, Bower C. A modified Delphi study of screening for fetal alcohol spectrum disorders in Australia. BMC Pediatr. 2013;25(13):13. https://doi.org/10.1186/1471-2431-13-13.

  36. Linstone HA, Turoff M. The Delphi Method. Reading (MA): Addison-Wesley; 1975.

  37. Dalkey N. An experimental study of group opinion: The Delphi method. Futures. 1969;1(5):408–26. https://doi.org/10.1016/S0016-3287(69)80025-X.

  38. Del Grande C, Kaczorowski J, Pomey MP. What are the top priorities of patients and clinicians for the organization of primary cardiovascular care in Quebec? A modified e-Delphi study. PLoS ONE. 2023;18(1): e0280051. https://doi.org/10.1371/journal.pone.0280051.

  39. Murphy M, Hollinghurst S, Salisbury C. Agreeing the content of a patient-reported outcome measure for primary care: a Delphi consensus study. Health Expect. 2017;20(2):335–48. https://doi.org/10.1111/hex.12462.

  40. Belton I, MacDonald A, Wright G, Hamlin I. Improving the practical application of the Delphi method in group-based judgment: A six-step prescription for a well-founded and defensible process. Technol Forecast Soc Change. 2019;147:72–82. https://doi.org/10.1016/j.techfore.2019.07.002.

  41. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS ONE. 2011;6(6): e20476. https://doi.org/10.1371/journal.pone.0020476.

  42. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)–a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81. https://doi.org/10.1016/j.jbi.2008.08.010.

  43. Harris PA, Taylor R, Minor BL, Elliott V, Fernandez M, O’Neal L, McLeod L, Delacqua G, Delacqua F, Kirby J, Duda SN. REDCap Consortium The REDCap consortium: Building an international community of software platform partners. J Biomed Inform. 2019;95:103208. https://doi.org/10.1016/j.jbi.2019.103208.

  44. Rowe G, Wright G. Expert Opinions in Forecasting: The Role of the Delphi Technique. In: Armstrong JS, editor. Principles of Forecasting: A Handbook for Researchers and Practitioners. Boston, MA: Springer US; 2001. pp. 125–144. https://doi.org/10.1007/978-0-306-47630-3_7.

  45. Miles MB, Huberman AM, Saldaña J. Qualitative data analysis: a methods sourcebook. 3rd ed. Thousand Oaks (CA): SAGE Publications; 2014.

  46. Menold N. Double Barreled Questions: An Analysis of the Similarity of Elements and Effects on Measurement Quality. Journal of Official Statistics. 2020;36(4):855–86. https://doi.org/10.2478/jos-2020-0041.

  47. Bruce B, Fries JF. The Health Assessment Questionnaire (HAQ). Clin Exp Rheumatol. 2005 Sep-Oct;23(5 Suppl 39):S14–8. PMID:16273780.

  48. Guillemin F, Briancon S, Pourel J. Measurement of the functional capacity in rheumatoid polyarthritis: a French adaptation of the Health Assessment Questionnaire (HAQ). Rev Rhum Mal Osteoartic. 1991;58(6):459–65.

  49. Paré G, Cameron A-F, Poba-Nzaou P, Templier M. A systematic assessment of rigor in information systems ranking-type Delphi studies. Inf Manag. 2013;50:207–17. https://doi.org/10.1016/j.im.2013.03.003.

  50. Keeney S, Hasson F, McKenna HP. A critical review of the Delphi technique as a research methodology for nursing. Int J Nurs Stud. 2001;38(2):195–200. https://doi.org/10.1016/s0020-7489(00)00044-4.

  51. Keeney S, Hasson F, McKenna, HP. The Delphi Technique in Nursing and Health Research. Oxford: Wiley-Blackwell; 2011. https://doi.org/10.1002/9781444392029.

  52. Sandelowski M, Barroso J. Classifying the findings in qualitative studies. Qual Health Res. 2003;13(7):905–23. https://doi.org/10.1177/1049732303253488.

  53. Russell GE, Fawcett J. The conceptual model for nursing and health policy revisited. Policy Polit Nurs Pract. 2005;6(4):319–26. https://doi.org/10.1177/1527154405283304.

  54. Ducharme F. Le pouvoir infirmier: Des résultats probants ... à la politique. Perspect Infirm. OIIQ. 2013;10(2):31–36. PMID:23539862.

  55. Miles MB, Huberman AM. Analyse des données qualitatives. Brussels: De Boeck Supérieur; 2003.

  56. de Loë RC, Melnychuk N, Murray D, Plummer R. Advancing the State of Policy Delphi Practice: A Systematic Review Evaluating Methodological Evolution, Innovation, and Opportunities. Technol Forecast Soc Chang. 2016;104:78–88. https://doi.org/10.1016/j.techfore.2015.12.009.

  57. Kickbusch I, Pelikan JM, Apfel F, Tsouros AD. Health literacy: the solid facts. Copenhagen: World Health Organization. Regional Office for Europe; 2013. Retrieved from https://apps.who.int/iris/bitstream/handle/10665/326432/9789289000154-eng.pdf. Accessed 15 Feb 2023.

  58. Urstad KH, Andersen MH, Larsen MH, Borge CR, Helseth S, Wahl AK. Definitions and measurement of health literacy in health and medicine research: a systematic review. BMJ Open. 2022;12(2): e056294. https://doi.org/10.1136/bmjopen-2021-056294.

  59. Sasseville M, Chouinard MC, Fortin M. Evaluating the content of a patient-reported outcome measure for people with multimorbidity: a Delphi consensus. Qual Life Res. 2021;30(10):2951–60. https://doi.org/10.1007/s11136-021-02888-0.

  60. Sinha IP, Smyth RL, Williamson PR. Using the Delphi technique to determine which outcomes to measure in clinical trials: recommendations for the future based on a systematic review of existing studies. PLoS Med. 2011;8(1): e1000393. https://doi.org/10.1371/journal.pmed.1000393.

  61. Barken TL, Thygesen E, Söderhamn U. Unlocking the limitations: Living with chronic obstructive pulmonary disease and receiving care through telemedicine-A phenomenological study. J Clin Nurs. 2018;27(1–2):132–42. https://doi.org/10.1111/jocn.13857.

  62. Barken TL, Söderhamn U, Thygesen E. A sense of belonging: A meta-ethnography of the experience of patients with chronic obstructive pulmonary disease receiving care through telemedicine. J Adv Nurs. 2019;75(12):3219–30. https://doi.org/10.1111/jan.14117.

  63. Fernandes LG, Devan H, Fioratti I, Kamper SJ, Williams CM, Saragiotto BT. At my own pace, space, and place: a systematic review of qualitative studies of enablers and barriers to telehealth interventions for people with chronic pain. Pain. 2022;163(2):e165–81. https://doi.org/10.1097/j.pain.0000000000002364.

  64. World Health Organization. Global strategy on digital health 2020–2025. World Health Organization; 2021 Aug 21. Retrieved from https://www.who.int/publications/i/item/9789240020924. Accessed 22 November 2023.

  65. Bhaskar S, Rastogi A, Menon KV, Kunheri B, Balakrishnan S, Howick J. Call for Action to Address Equity and Justice Divide During COVID-19. Front Psychiatry. 2020;3(11): 559905. https://doi.org/10.3389/fpsyt.2020.559905.

  66. Bhaskar S, Nurtazina A, Mittoo S, Banach M, Weissert R. Editorial: Telemedicine During and Beyond COVID-19. Front Public Health. 2021;16(9): 662617. https://doi.org/10.3389/fpubh.2021.662617.

  67. Haynes SN, Richard DCS, Kubany ES. Content validity in psychological assessment: A functional approach to concepts and methods. Psychol Assess. 1995;7(3):238–47. https://doi.org/10.1037/1040-3590.7.3.238.

  68. Chalmers J, Armour M. The Delphi Technique. In: Liamputtong P, editor. Handbook of Research Methods in Health Social Sciences. Springer Singapore; 2019. pp. 715–35.

  69. Bull C, Teede H, Watson D, Callander EJ. Selecting and Implementing Patient-Reported Outcome and Experience Measures to Assess Health System Performance. JAMA Health Forum. 2022;3(4):e220326. https://doi.org/10.1001/jamahealthforum.2022.0326.

Acknowledgements

We acknowledge each panelist: patient partners, clinicians and academics. Carole Délétroz acknowledges the REFLIS research network and HESAV, her employer.

Funding

This study had a funding support of the University of Lausanne and the Université Laval. Funding name: Appel à projet dans le cadre du partenariat privilégié UL-UNIL, Project number 3088.

Author information

Authors and Affiliations

Authors

Contributions

CDE was responsible for the project design and management. CDE, CDG, SAM, MAS, PBO and MPG contributed to the study conceptualization and methodology. CDE and SAM performed data curation. CDE and CDG performed the formal, statistical and qualitative data analyses. CDE and CDG contributed to the interpretation of the data. CDE and CDG wrote the original draft. MAS, MPG and PBO reviewed the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Carole Délétroz.

Ethics declarations

Ethics approval and consent to participate

The research project was approved by the Ethics Committee of Laval University (CERUL) in Canada (2021-057, 10 December 2021) and conducted in accordance with the principles of the Declaration of Helsinki. All methods in this study were performed in accordance with relevant guidelines and regulations. Written informed consent was obtained from each panelist in the e-Delphi process (REDCap).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

e-Delphi_ survey introduction_ ENG

Additional file 2.

MatrixRound-2_Feedback Participant Results

Additional file 3.

Thematic Conceptual

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Délétroz, C., Del Grande, C., Amil, S. et al. Development of a patient-reported outcome measure of digital health literacy for chronic patients: results of a French international online Delphi study. BMC Nurs 22, 476 (2023). https://doi.org/10.1186/s12912-023-01633-x
