Core criteria for translation-quality assessment
Does translation of terminology compromise critical appraisal?
The concepts used in table 1 are based on Lincoln and Guba’s (1985) translation of criteria for evaluating the trustworthiness of findings. Acknowledging the difference in terminology does not obviate the rationale or process for critical appraisal: as table 1 demonstrates, there is good congruence between the intended meanings of the corresponding qualitative and quantitative criteria.
Table 1: Criteria to critically appraise findings from qualitative research
| Aspect | Qualitative Term | Quantitative Term |
| --- | --- | --- |
| Truth value | Credibility | Internal validity |
| Applicability | Transferability | External validity or generalisability |
| Consistency | Dependability | Reliability |
| Neutrality | Confirmability | Objectivity |
This scheme outlines some of the core elements to consider when assessing the quality of qualitative research. However, the concept of confirmability may not be applicable to approaches inspired by phenomenological or critical paradigms, in which the researcher’s experience becomes part of the data (Morse, 2002). Critical appraisal instruments should preferably be chosen from those offering a multi-dimensional concept of research quality: apart from methodological rigour, this would also include quality of reporting and conceptual depth and breadth.
What indications are we looking for in an original research paper?
Authors may have included a variety of evaluation techniques in their original reports that facilitate assessment by a reviewer and that apply to a broad range of qualitative approaches. It should be noted, however, that some of the techniques listed apply only to a specific set of qualitative research designs.
·Assessing Credibility: Credibility evaluates whether the representation of the data fits the views of the participants studied, that is, whether the findings hold true.
Evaluation techniques include: validation of findings by participants (member checks) or outside auditors, peer debriefing, attention to negative cases, independent analysis of the data by more than one researcher, verbatim quotes, persistent observation, etc.
·Assessing Transferability: Transferability evaluates whether research findings are transferable to other specific settings.
Evaluation techniques include: providing details of the study participants so that readers can evaluate for which target groups the study offers valuable information, supplying contextual background information and demographics, giving thick description of both the sending and the receiving context, etc.
·Assessing Dependability: Dependability evaluates whether the process of research is logical, traceable and clearly documented, particularly with regard to the methods chosen and the decisions made by the researchers.
Evaluation techniques include: peer review, debriefing, audit trails, triangulation (using different methodological approaches to examine the topic of research), reflexivity (keeping a self-critical account of the research process), calculation of inter-rater agreement, etc.
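Inter-rater agreement is often quantified with a chance-corrected statistic such as Cohen’s kappa, which compares the observed agreement between two coders with the agreement expected by chance. The following is a minimal sketch, not taken from the source; the function name and the sample codings are hypothetical illustrations.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same set of items.

    kappa = (p_observed - p_expected) / (1 - p_expected)
    """
    assert len(coder_a) == len(coder_b), "coders must rate the same items"
    n = len(coder_a)
    # Proportion of items on which the two coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's category frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders assign themes to eight interview excerpts.
a = ["coping", "stigma", "coping", "support", "stigma", "coping", "support", "stigma"]
b = ["coping", "stigma", "support", "support", "stigma", "coping", "support", "coping"]
print(round(cohens_kappa(a, b), 2))  # 0.63: substantial, but not perfect, agreement
```

Values near 1 indicate strong agreement beyond chance, values near 0 indicate agreement no better than chance; reviewers typically look for the coding scheme and the resulting coefficient to be reported.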
·Assessing Confirmability: Confirmability evaluates the extent to which the findings can be confirmed, through the analysis being grounded in the data and through examination of the audit trail.
Evaluation techniques include: assessing the effects of the researcher during all steps of the research process, reflexivity, and providing information on the researcher’s background, education, perspective, school of thought, etc.
The criteria listed may generate an understanding of the basic methodological standard a qualitative study should be able to reach. However, a study may be judged to have followed the appropriate procedures for a particular approach and yet suffer from poor interpretation, offering little insight into the phenomenon at hand. Conversely, another study may be flawed in terms of the transparency of its methodological procedures and yet offer a compelling, vivid and insightful narrative grounded in the data (Dixon-Woods et al., 2004). Defining fatal flaws, and balancing such an assessment against the weight of a study’s message, remains a difficult exercise in the appraisal of qualitative studies. As in quantitative research, fatal flaws may depend on the specific design or method chosen (Booth, 2001). This issue needs further research.