Design
This psychometric methodological study was conducted in Australia in two phases: Phase 1, adaptation of the Determinants of Salt-Restriction Behaviour Questionnaire (DSRBQ); and Phase 2, psychometric testing of both the revised Chinese and the translated English DSRBQs (see Fig. 1). The cross-cultural adaptation process was guided by previous psychometric testing studies conducted in the Asia-Pacific region [20,21,22], and the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) checklist for cross-sectional studies was used to guide the preparation of this report [23].
Instrument
The original Chinese DSRBQ consists of three parts. Parts 1 and 2 include 34 items (Table 1):

1. Demographic characteristics (n = 9), including age, gender, ethnicity, education, marital status, employment, income and health conditions.

2. Personal dietary practice (n = 12):
   - 3 items measured using categorical variable scales
   - 5 items measured using 5-point Likert scales with anchors of ‘daily (1) to never (5)’, ‘never (1) to always (5)’ and ‘very light (1) to very salty (5)’
   - 3 items requiring participants to report the number of meals consumed at home and the percentage of food consumed at home versus away from home
   - 1 item measured using a binary (yes/no) scale.

3. Salt-related health education/medical advice that the participant has received (n = 7), measured using a binary (yes/no) scale.

4. Salt-related health knowledge (n = 6), measured using categorical variable scales.
Part 3 includes 47 items forming six subscales:

1. Perceived threat (n = 5)
2. Knowledge/perceived susceptibility to and severity of the disease (n = 6)
3. Perceived benefits of action (n = 3)
4. Perceived benefits of using a measuring spoon (n = 3)
5. Likelihood of following the recommended interventions (n = 10)
6. Perceived barriers (n = 20).
All items are measured using a 5-point Likert scale ranging from strongly disagree (1) to strongly agree (5).
Phase 1: Adaptation of the Determinants of Salt-Restriction Behaviour Questionnaire
Brislin’s model for the translation and validation of instruments was used to guide the translation of the DSRBQ [24]. The original Chinese questionnaire [19] was translated from Chinese to English by an experienced Chinese-English interpreter, and accuracy was confirmed by an author (AC) using the back-translation method. Discrepancies, mostly in terminology such as the word choices for ‘sauce’, ‘condiment’ and ‘paste’, were amended by the author. The revised versions were reviewed by an independent bilingual Mandarin-English layperson to ensure the questionnaires were accurately translated and written at an appropriate level of literacy.
Expert panel review for translation equivalence and content relevance
Translation equivalence and content relevance were evaluated by a panel of experts between September and October 2019. Australia is home to many migrants from different countries and administrative divisions; Chinese people in Australia come from China, Hong Kong, Taiwan, Vietnam, Malaysia and Singapore. To enhance the accuracy of the validation, the expert panel members were Chinese-English bilingual and were purposively invited from different regions in Asia. The panel consisted of one researcher, two nurses and three health care consumers, recruited through the authors’ professional networks. An independent research assistant sent a personalised invitation letter to each potential expert inviting them to participate in the study.
Translation equivalence evaluation and content validity
The panel members were invited to evaluate the translation equivalence and content relevance of each item across the Chinese and English versions. First, the translation equivalence of each item was evaluated using a four-point scale ranging from 1 = not equivalent to 4 = most equivalent. Any item rated 1 or 2 by more than 20% (n = 1) of the panel members was reviewed [21] and revised. Second, the expert panel members used a four-point scale (1 = not relevant, 2 = somewhat relevant, 3 = relevant, and 4 = most relevant) to evaluate the content relevance of each item to the Chinese Australian culture and community. The Content Validity Index (CVI) is the most common approach to content validity in questionnaire development and adaptation [22]. For each item in the questionnaire, an Item-level CVI (I-CVI) was calculated by dividing the number of panel members who scored the item 4 (most relevant) by the total number of panel members (n = 6) [22]. A Scale-level CVI (S-CVI) was then calculated as the mean I-CVI, that is, the sum of all I-CVIs divided by the total number of items (n = 81) [20]. An S-CVI of 0.80 (80%) or higher is considered to indicate good content validity [20,21,22].
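For clarity, the CVI computations described above can be written compactly (the symbols N and k are introduced here for illustration only):

\[
\text{I-CVI}_i = \frac{\text{number of experts rating item } i \text{ as } 4}{N}, \qquad \text{S-CVI} = \frac{1}{k} \sum_{i=1}^{k} \text{I-CVI}_i,
\]

where N = 6 panel members and k = 81 items. For example, if five of the six experts rated an item as ‘most relevant’, its I-CVI would be 5/6 ≈ 0.83.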
Phase 2: Psychometric testing of the revised Chinese and translated English Determinants of Salt-Restriction Behaviour Questionnaires
The reliability and validity of both the Chinese and English versions of the DSRBQ were evaluated through a cross-sectional descriptive study. Participants were invited to complete an anonymous English or Chinese DSRBQ, either online or paper-based, twice at an interval of two weeks. Similar answers should be obtained in the two tests if the tool has high test–retest reliability [25].
Participants and setting
Participants were recruited using a convenience sampling method through social media platforms such as Facebook, Twitter, WeChat and Weibo from January to March 2020 and from July to November 2020. Participants completed either the Chinese or the English questionnaire according to their preference. Data collection was suspended between March and July 2020 because of the coronavirus (COVID-19) pandemic in Australia, when people were overwhelmed by COVID-19-related health information and lockdowns. The inclusion criteria were: a) adults over 18 years old of Chinese ancestry; and b) those who had lived in Australia for at least 6 months. Adults who were unable to read a Chinese or English questionnaire were excluded from the study.
Recommended sample sizes for test–retest reliability vary considerably, ranging from 50 to over 1,000 subjects or an item-to-response ratio of between 1:3 and 1:20 [26]. Perneger et al. [27] noted that a sample size of 30 could achieve a power of 80% to detect a problem that occurs in 5% of the population. In this study, it was assumed that the null hypothesis value was 0.00, and the study aimed to achieve a power of 80% in a two-tailed test; therefore, a minimum sample size of 50 per version was required to detect a Kappa of 0.40 [28]. The same sample size determination was used by Girard et al. [25]. It is important to acknowledge that a small sample size may under-identify problems with a questionnaire [27]. Given that data collection was heavily affected by the first global wave of the COVID-19 pandemic, a target sample size of at least 50 participants per version was the most practical and appropriate option for the Phase 2 study at that time.
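Perneger et al.’s figure [27] can be verified directly: the probability that a problem affecting 5% of the population is observed in at least one of 30 participants is

\[
1 - (1 - 0.05)^{30} \approx 0.79,
\]

that is, approximately 80% power.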
Data analyses
The Statistical Package for the Social Sciences version 25 was used for data analysis. The level of significance was set at 0.05 for all tests. To minimise bias in data analysis, any questionnaires with more than 10% of the items missing were excluded from the analysis [29].
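Although the analyses were conducted in SPSS, the missing-data exclusion rule is simple enough to illustrate with a short Python sketch; the function and variable names below are invented for illustration only:

```python
import pandas as pd

def exclude_incomplete(responses: pd.DataFrame, max_missing: float = 0.10) -> pd.DataFrame:
    """Keep only questionnaires (rows) whose proportion of missing items
    does not exceed max_missing (here, the 10% rule)."""
    missing_rate = responses.isna().mean(axis=1)  # proportion of items missing per questionnaire
    return responses.loc[missing_rate <= max_missing]
```

With 81 items, this rule would exclude any questionnaire in which nine or more items (i.e., more than 10%) were unanswered.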
For test–retest reliability, Pearson correlation and McNemar tests were used to examine the association and consistency of responses to the continuous and nominal variables/items in Parts 1 and 2 of the questionnaires between the first test (T1) at week 0 and the retest (T2) at week 2. The intraclass correlation coefficient (ICC) was used to examine the six Likert subscales in Part 3. Subscales with an ICC of less than 0.5 were considered to have poor reliability [30].
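As an illustrative sketch only (the study itself used SPSS), the three test–retest analyses could be reproduced in Python on simulated data; the variable names and simulated values are assumptions for the example:

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import pearsonr
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n = 50  # illustrative minimum sample size per version

# Continuous item (e.g., number of meals at home): Pearson correlation, T1 vs T2.
t1 = rng.normal(10, 2, n)
t2 = t1 + rng.normal(0, 1, n)                    # retest = test + noise
r, p_pearson = pearsonr(t1, t2)

# Binary (yes/no) item: McNemar test on the 2x2 T1-by-T2 contingency table.
yes_t1 = rng.integers(0, 2, n)
yes_t2 = np.where(rng.random(n) < 0.9, yes_t1, 1 - yes_t1)  # ~90% agreement
result = mcnemar(pd.crosstab(yes_t1, yes_t2), exact=True)

# Likert subscale score: ICC on long-format data (one row per rating occasion).
scores = pd.DataFrame({
    "participant": np.tile(np.arange(n), 2),
    "time": np.repeat(["T1", "T2"], n),
    "score": np.concatenate([t1, t2]),
})
icc = pg.intraclass_corr(data=scores, targets="participant",
                         raters="time", ratings="score")
print(f"Pearson r = {r:.2f} (p = {p_pearson:.3f}); McNemar p = {result.pvalue:.3f}")
print(icc[["Type", "ICC"]])  # an ICC below 0.5 would indicate poor reliability
```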
Internal consistency reliability was measured using Cronbach’s alpha to assess the homogeneity of the items in the questionnaire (Parts 2 and 3). An alpha of 0.65–0.80 was considered satisfactory [31]. In addition, the item-to-total correlation test was used to assess the internal consistency of the knowledge assessment questions in Part 2.
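Both statistics follow directly from standard formulas; a minimal NumPy sketch (the function names are my own) is shown below:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlation(items: np.ndarray) -> np.ndarray:
    """Corrected item-to-total correlation: each item against the sum of the
    remaining items (one common variant of the item-to-total test)."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])
```

For instance, applying cronbach_alpha to an n × 20 matrix of responses to the perceived-barriers subscale would yield a value to compare against the 0.65–0.80 benchmark.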
Ethical considerations
This descriptive cross-sectional study was approved by the Human Research Ethics Committee at the University of Newcastle, Australia, where the study was conducted (approval number: H-2019–0180), and permission to use the questionnaire was obtained from the author of the DSRBQ (Chen et al., 2014). In Phase 1, expert panel members were assured of confidentiality and signed a written informed consent form; all identifiable personal information was removed. In Phase 2, informed consent was implied by completion of the questionnaire. Prior to completing the anonymous questionnaire, participants could choose to receive a paper or electronic copy of the participant information statement, which detailed the purpose and aims of the study. Participants were asked to indicate their consent by ticking a box at the beginning of the questionnaire. On completion of the questionnaire, participants who chose to enter a draw to win one of three $100 gift vouchers provided their contact details in a separate database so that their responses could not be identified.