
Educating Pharmacy Students to Improve Quality (EPIQ) in Colleges and Schools of Pharmacy

aThe University of Arizona College of Pharmacy

bPurdue University College of Pharmacy

cSullivan University College of Pharmacy

dSt. John Fisher College School of Pharmacy

eVirginia Commonwealth University School of Pharmacy

fMidwestern University – Chicago College of Pharmacy

gDepartment of Pharmacy Administration, University of Mississippi


Corresponding Author: Adrienne Gilligan, MS, College of Pharmacy-Pulido Center, 1295 N Martin, P.O. Box 210202, Tucson, AZ 85721-0202. Tel: 214-478-7178. Fax: 520-626-7355. E-mail: Adrienne@pharmacy.arizona.edu


Received 2012 Jan 12; Accepted 2012 Mar 9.

Copyright © 2012 American Association of Colleges of Pharmacy

Adrienne M. Gilligan, MS, PhD Candidate,a Jaclyn Myers, PharmD, PhD Student, James D. Nash, PharmD,c Jill E. Lavigne, PhD,d Leticia R. Moczygemba, PharmD, PhD,e Kimberly S. Plake, PhD,b Ana C. Quiñones-Boex, PhD,f David Holdford, PhD,e Donna West-Strum, PhD,g and Terri L. Warholak, PhDa



Objective. To assess course instructors’ and students’ perceptions of the Educating Pharmacy Students and Pharmacists to Improve Quality (EPIQ) curriculum.

Methods. Seven colleges and schools of pharmacy that were using the EPIQ program in their curricula agreed to participate in the study. Five of the 7 collected student retrospective pre- and post-intervention questionnaires. Changes in students’ perceptions were evaluated to assess their relationships with demographics and course variables. Instructors who implemented the EPIQ program at each of the 7 colleges and schools were also asked to complete a questionnaire.

Results. Scores on all questionnaire items indicated improvement in students’ perceived knowledge of quality improvement. The university the students attended, completion of a class project, and length of coverage of material were significantly related to improvement in the students’ scores. Instructors at all colleges and schools felt the EPIQ curriculum was a strong program that fulfilled the criteria for quality improvement and medication error reduction education.

Conclusion. The EPIQ program is a viable, turnkey option for colleges and schools of pharmacy to use in teaching students about quality improvement.

Keywords: quality improvement, medication error, pharmacy education, pharmacy student, assessment, curriculum


More than a decade after the release of the Institute of Medicine's report, To Err is Human, teaching students about patient safety and quality improvement remains a concern for pharmacy educators. The Accreditation Council for Pharmacy Education (ACPE) guidelines require that pharmacy students be able to apply “quality improvement strategies, medication safety and error reduction programs and research processes to minimize drug misadventures and optimize patient outcomes” upon graduation.1 Nevertheless, Holdford and colleagues identified a need to improve how pharmacy students are taught about medication safety and the science underlying it.2

In response to the recognized need for safety and quality improvement curriculum materials developed specifically for pharmacists, the Pharmacy Quality Alliance funded the development of the Educating Pharmacy Students and Pharmacists to Improve Quality (EPIQ) program. The EPIQ program focuses specifically on the knowledge and skills necessary for reducing medication errors and applying quality improvement techniques to ensure patient safety.3

EPIQ, which has been described in detail elsewhere,3 contains curricular content and pedagogy for administration of a 3-credit class. It includes 5 modules: (1) status of quality improvement and reporting in the US health care system; (2) quality improvement concepts; (3) quality measurement; (4) quality-based interventions and incentives; and (5) application of quality improvement to the pharmacy practice setting. Although EPIQ focuses on quality improvement, several modules include more specific issues such as patient safety, medication errors, and adverse drug event reduction. Each module comprises several 50-minute educational sessions, each of which includes a mini-lecture and in-class activities. Supplemental readings, discussion questions, project ideas, and other relevant topic-specific materials are included in an instructor’s guide. The instructor’s guide also provides examples of how faculty members can integrate EPIQ modules into existing course structures if desired. (EPIQ is available free of charge and upon request at http://www.pqaalliance.org/files/EPIQ-Flyer_MAR2010.pdf.)

Prior EPIQ studies have focused on faculty perceptions of program quality and their intent to implement the EPIQ program at their institution.4 However, further program evaluation was needed after EPIQ was implemented at several colleges and schools of pharmacy. Therefore, the purpose of this study was to evaluate EPIQ program implementation in several doctor of pharmacy (PharmD) curricula; and to assess the instructors’ and students’ perceptions of the effectiveness of the EPIQ program.


Methods

Investigators solicited instructors (ie, faculty members who taught medication safety/quality improvement) from a convenience sample of colleges and schools of pharmacy that had implemented the EPIQ program in their PharmD curriculum (n = 19). Of these 19 colleges and schools of pharmacy, 7 agreed to participate in the evaluation of faculty and student perceptions. All faculty members who participated in the program evaluation received institutional review board (IRB) approval from their institutions and were named as co-investigators in this study. Three of the investigators collaborated on the development of the EPIQ program prior to this evaluation.3

Data were collected from the 7 participating institutions. However, due to IRB constraints and timing issues, student data (eg, student questionnaire results and demographics) from only 5 colleges and schools are reported here. This evaluation targeted students enrolled in the EPIQ program (ie, first-year pharmacy students through third-year students depending on where each institution placed EPIQ material in the curriculum).

The investigators developed a retrospective pretest-posttest study to measure students’ perceptions about their knowledge and the importance of quality improvement and medication error reduction. The retrospective pretest and posttest were administered to students from 5 of the 7 institutions after completing the EPIQ program material. The retrospective pretest portion asked subjects to recall how they felt prior to the intervention. A retrospective pretest-posttest is a validated study design that can help to limit construct-shift bias, a phenomenon that may occur when an individual’s interpretation of an internal construct changes over time.5-9 A retrospective pretest-posttest study design was chosen because students’ understanding of the “quality improvement” construct may have changed during the class (ie, at the beginning of a class the typical student may not be aware of what he/she does and does not know).

The student questionnaire, which was adapted from a previous study by Jackson and colleagues, was reliable and valid.10 The first portion (items 1-9) of the retrospective pretest and posttest asked students to assess their perception (weak, fair, good, or very good) of their quality improvement and medication error reduction knowledge before and after taking the EPIQ class, respectively. The second portion (items 10-16) asked the students to report their level of agreement (disagree, somewhat disagree, somewhat agree, or agree) with statements about the importance of quality improvement and medication error reduction education before and after the EPIQ class, respectively. The final portion of the questionnaire collected demographic data on the student’s age, gender, previous quality improvement experience, year in pharmacy school, pharmacy experience and work setting, and whether other family members were health care professionals. (A copy of the student questionnaire is available from the corresponding author upon request.)

Investigators developed a questionnaire for EPIQ instructors, which was e-mailed to each participating college or school. The questionnaire contained both qualitative and quantitative questions regarding implementation of the EPIQ program. Instructors were asked to express their opinion of the EPIQ program (ie, strongest and weakest points), describe how they adapted the program to their curriculum, and make recommendations for improvements. In addition, each instructor was asked to provide their opinions concerning the importance and impact of quality improvement and medication error reduction coverage in pharmacy colleges and schools.

Student data were analyzed using Rasch analysis, a probabilistic technique that tests student responses against what a mathematical model predicts. If the data fit the model, ordinal-level data can be converted to interval-level data, and reliability and validity evidence are obtained. Rasch analysis allows evaluation of individual person measures and each item’s contribution to the overall instrument.11,12 When evaluating pretest-to-posttest measures, Rasch analysis provides an advantage over other statistical methods because it quantifies changes in the attitudes and ability of each student and identifies construct-shift bias, if present. The Wolfe and Chiu procedure for item anchoring of pre- and post-data was used in this analysis,13 which was conducted using Winsteps, version 3.7.2 (MESA Press, Chicago, IL).
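The logic behind converting ordinal ratings to interval-level measures can be illustrated with a short sketch. The snippet below is illustrative only (the study's analysis was run in Winsteps, not hand-coded): it shows the dichotomous Rasch probability function and why the resulting logit scale is interval-level, in that a fixed logit difference always corresponds to the same odds ratio wherever it occurs on the scale.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Probability that a person with ability theta (in logits) endorses
    an item of difficulty b (in logits), under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1.0 - p)

# A more able student (theta = 1.0) is more likely to positively endorse
# a difficult item (b = 0.5) than a less able student (theta = -1.0):
p_high = rasch_probability(1.0, 0.5)
p_low = rasch_probability(-1.0, 0.5)

# Interval property: a 1-logit gap in ability yields the same odds
# ratio at the top of the scale as at the bottom.
ratio_a = odds(rasch_probability(2.0, 0.0)) / odds(rasch_probability(1.0, 0.0))
ratio_b = odds(rasch_probability(0.0, 0.0)) / odds(rasch_probability(-1.0, 0.0))
```

This interval property is what justifies using logit change scores, rather than raw ordinal ratings, as the dependent variable in later analyses.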

The main outcome of interest was the change in each student’s scores from pretest to posttest. Once data were converted to interval-level measures, the Rasch logit change scores for both portions of the student questionnaire (ie, items 1-9 and items 10-16) were used as the dependent variables in multiple linear regression to determine whether demographic characteristics (independent variables) affected student change scores. Independent variables of interest included: gender, previous quality improvement experience, university attended, length of class coverage, and completion of a class quality improvement project. University attended was added to account for variability in teaching styles. SPSS statistical analysis system, version 17.0 for Windows (SPSS Inc, Chicago, IL) was used for regression analysis. An alpha of 0.05 was assumed for all analyses.

A qualitative coding approach was used to categorize comments for faculty questionnaire data as recommended by Richards, including descriptive coding, topic coding, and analytical coding.14 Descriptive coding was used to code participants’ demographic characteristics about the EPIQ program at their university. Topic coding was used to label the responses according to the respondent and consisted of 2 steps: (1) a general classification of categories, and (2) an iterative recoding process to include more subcategories. Finally, analytical coding was used to evaluate potential implications of responses.


Results

Three hundred forty-seven of 530 (66%) students across 5 universities responded to the EPIQ questionnaire. In 4 of 5 universities, the majority (over 96%) of students responding to the questionnaire were in their second year of the PharmD program. Respondents’ work experience ranged, on average, from 2 to 4 years. Respondents’ mean age ranged from 26 to 29 years (depending on the university), and the majority of respondents were female (approximately 65%).

Reliability and validity of the questionnaire were determined via Rasch analysis. The requirements for demonstrating proper rating scale function were met as follows: (1) the number of observations in each category was greater than 10; (2) the average category measures increased with the rating scale categories; (3) INFIT and OUTFIT MNSQ11,12 statistics for measured steps were within the acceptable range; (4) category thresholds increased with the rating scale categories; (5) category thresholds were at least 1.4 logits apart; and (6) the shape of each rating scale distribution was peaked.15 The questionnaire met these 6 criteria, indicating that the measurement tool possessed strong reliability and validity. The group mean student ability logit measures differed significantly from pretest to posttest (dependent student’s t test, p < 0.05).
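These six diagnostics amount to a mechanical checklist over the category statistics a Rasch program reports. The helper below is a hypothetical sketch (not part of Winsteps); the 0.5–1.5 MNSQ window is one common rule of thumb rather than a universal standard, and the example numbers are invented.

```python
def rating_scale_checks(counts, avg_measures, thresholds, outfits):
    """Check several of the rating-scale diagnostics described by Linacre
    for a polytomous Rasch scale. `thresholds` has one fewer entry than
    the number of categories. Returns a dict of named pass/fail results."""
    return {
        # (1) at least 10 observations per category
        "counts_at_least_10": all(n >= 10 for n in counts),
        # (2) average category measures increase monotonically
        "avg_measures_increase": all(a < b for a, b in zip(avg_measures, avg_measures[1:])),
        # (3) step MNSQ fit within a common rule-of-thumb window
        "outfit_in_range": all(0.5 <= m <= 1.5 for m in outfits),
        # (4) category thresholds increase monotonically
        "thresholds_increase": all(a < b for a, b in zip(thresholds, thresholds[1:])),
        # (5) adjacent thresholds at least 1.4 logits apart
        "thresholds_1_4_apart": all(b - a >= 1.4 for a, b in zip(thresholds, thresholds[1:])),
    }

# Example: a 4-category scale (weak/fair/good/very good) that passes.
checks = rating_scale_checks(
    counts=[25, 60, 110, 85],
    avg_measures=[-1.8, -0.4, 0.9, 2.1],
    thresholds=[-2.0, -0.3, 1.6],
    outfits=[0.9, 1.1, 1.0],
)
```

Criterion (6), a peaked category distribution, requires inspecting the category probability curves and is not captured by this checklist.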

The hierarchical ordering of items 1-9 as they relate to students’ perceptions of their quality improvement knowledge is shown in Figure 1. The right side of Figure 1 shows the item hierarchy, with items at the bottom of the hierarchy being the easiest to answer positively and items at the top being the most difficult for students to endorse positively. For example, item 5, “My awareness of the impact of medication errors on patient health,” was the easiest item for students to endorse positively (ie, to give oneself a high rating on). The item hierarchy shows that item 3, “Ability to implement methods to reduce medication errors,” was the most difficult of the 9 items to endorse positively (ie, to assess a high level of ability).

Figure 1.

Expected score map and student normative distributions (Items 1-9).

Student responses to each item can be evaluated using the pretest and posttest normative distributions provided in Figure 1 for items 1-9. For example, the pretest distribution for item 7, “My ability to improve quality in pharmacy practice,” shows that the majority of students rated their ability as weak or fair, whereas on the posttest the majority perceived their ability as good or very good. Results for the other 8 items can be interpreted similarly. Improvement in students’ perceived ability was reported across all 9 items.

Results from the multiple linear regression model indicated that when examining what variables significantly affected the students’ change score, the university the student attended (p = 0.02), the completion of a class project (p = 0.03), and the length of coverage (ie, number of credit hours in the program) (p = 0.01) were positively related to students’ change scores. This indicates that these variables contributed to the improvement in the students’ perceived ability across items 1-9. Gender (p = 0.57) and previous quality improvement experience (p = 0.91) were not significant.

Figure 2 displays the hierarchical ordering of items 10-16 as each relates to students’ perceptions about the importance of quality improvement in pharmacy education. The right side of Figure 2 shows the item hierarchy, with items at the bottom being the easiest to answer positively and items at the top being the most difficult for students to endorse positively. For example, item 14, “Medication errors are a major issue in pharmacy,” was the easiest item for students to endorse positively (ie, to agree with). The item hierarchy shows that item 16, “This class provided information that I can apply in practice,” was the most difficult of the 7 items to endorse positively. However, while item 16 may have been the most difficult item to endorse, the figure shows that the majority of students on both the pretest and posttest somewhat agreed or agreed with these statements, indicating that students’ opinions on these issues were already positive before the class. Previous quality improvement experience (p = 0.04) was positively associated with students’ scores on items 10-16. School (p = 0.34), completion of a class project (p = 0.25), number of credit hours (p = 0.77), and gender (p = 0.86) were not associated with students’ scores.

Figure 2.

Expected score map and student normative distributions (Items 10-16).

Faculty Survey Results

Seven faculty respondents from colleges and schools that had implemented the EPIQ program provided feedback (Table 1). The colleges and schools varied according to the number of years of the curriculum, the school calendar, the year in which EPIQ material was taught in the curriculum, whether the EPIQ program was a required part of the curriculum, and educational methods used to teach the EPIQ curriculum. In 6 of 7 colleges and schools, EPIQ content was added to address ACPE requirements. At the time of the survey, none of the colleges or schools used EPIQ in interprofessional education, introductory pharmacy practice experiences (IPPEs), or advanced pharmacy practice experiences (APPEs).

Table 1.

Characteristics of Seven US Colleges and Schools of Pharmacy That Implemented the Educating Pharmacy Students to Improve Quality Curriculum (EPIQ)

In all 7 colleges and schools, EPIQ was taught using lectures and in-class activities, predominantly as part of a separate course (4 of 7 colleges and schools). A typical class session included a mini-lecture, in-class activity, debriefing, and discussion of homework. Six of 7 schools used the Warholak and Nau companion textbook because it complemented the EPIQ lecture material.16 In-class exercises were most often used as formative assessments (5 of 7 colleges and schools), while summative assessments were more varied, with 4 colleges and schools using examinations, 2 using attitudinal assessments, and 1 using a team project.

The EPIQ program was implemented differently at each institution. Coverage of the EPIQ program ranged from 2 lectures to a full 3-credit hour course, and spanned from 1 to 32 weeks. Most participating faculty members either added to or integrated their previous quality improvement materials into the EPIQ curriculum (n=6) and/or omitted topics because of time and other constraints (n=6). Content added included additional medication error identification and reduction techniques, assessment techniques from the Institute for Safe Medication Practices, postmarketing surveillance and the Science of Safety (as defined by the Food and Drug Administration), lessons from the Institute for Healthcare Improvement, medication reconciliation, drug-drug interactions, and state-specific quality improvement laws.

Table 2 describes faculty respondents' opinions of EPIQ. Faculty members responded that the EPIQ content was useful in achieving their intended curricular outcomes pertaining to patient medication safety and quality improvement. All 7 colleges and schools indicated that the student-centered activities were the most helpful types of educational materials contained in the EPIQ program. Suggestions for improving EPIQ content included: adding more application opportunities (ie, using more cases), decreasing redundancy, adding materials similar to what the faculty members added (mentioned earlier), keeping it updated, changing the evaluation questions to assess higher-level objectives, and adding more real-world examples.

Table 2.

Opinions of Instructors at Colleges and Schools of Pharmacy That Implemented the Educating Pharmacy Students to Improve Quality Curriculum (EPIQ), N = 7

When asked “What was your major challenge in teaching the EPIQ material?,” 2 respondents indicated there was too much material for the time allotted in their curriculum, and 3 responded that teaching the concepts covered in EPIQ was a challenge because many of their students and some of their colleagues did not acknowledge the importance of quality improvement in pharmacy practice. Six faculty members indicated that learning the EPIQ material would help students become better pharmacists. All respondents agreed that the EPIQ program provided information that students will use and that decreasing medication errors is a major issue.


Discussion

Overall, the EPIQ program was well received by faculty members. The majority reported that the quality of the EPIQ program was good or excellent and agreed or strongly agreed that the EPIQ program helped to meet their course goals. The EPIQ program facilitated implementation of a quality improvement curriculum at each faculty member’s college or school. Given the differences among colleges and schools of pharmacy, the flexibility in the program design allowed each faculty member to tailor the program to meet their needs, including supplementing the program with additional content. In addition, the variety of lecture materials and student-centered activities was appealing to instructors.

The implementation of the EPIQ program varied among the 7 colleges and schools, particularly with regard to the type and extent of sessions incorporated into each quality improvement course. Faculty members tended to use more of the sessions in modules 1 (status of quality improvement and reporting in US health care system), 2 (quality improvement concepts), and 3 (quality measurement), and less of the material in modules 4 (quality-based interventions and incentives) and 5 (application of quality improvement). Faculty members seemed to focus more on basic quality improvement principles (modules 1, 2 and 3) rather than application-based principles (modules 4 and 5). One explanation for this is that some of the sessions in modules 4 and 5 were covered in other courses in the curriculum; however, this cannot be determined as this information was not collected.

Although only 2 faculty members used the Implementing Your Own Pharmacy QI Program session, which included the completion of a class project, this was positively and significantly associated with students’ change scores in knowledge, skill, and ability. These results are consistent with a previous study assessing preceptors’ opinions of the impact of quality assurance projects. Preceptors felt that these quality improvement projects were beneficial to patient care, the practice site, and the preceptors themselves.17 As quality improvement evolves in pharmacy curricula, consideration should be given to integrating application-based projects into quality improvement content, as it is common for quality improvement curricula in other disciplines such as medicine to include both lecture and experiential content.18 In addition, research suggests that quality improvement projects have broad applications and can be added to a medication safety class or the IPPE sequence.10

In general, the EPIQ program positively impacted students’ confidence in their ability, knowledge, motivation, and awareness of quality improvement and medication error reduction. Although improvement was reported for all questions, items such as “awareness of the impact of medication errors on patient health” were easier to endorse positively than items such as “ability to implement methods to reduce medication errors.” Similarly, for the second portion of the survey instrument, which assessed perceptions of the importance of learning quality improvement and medication error reduction, students found it easy to endorse the importance of quality improvement in pharmacy practice but more difficult to agree that course content was applicable in pharmacy practice. This aligns with the tendency of faculty members to use sessions in modules 1, 2, and 3 rather than modules 4 and 5. These results also support the importance of providing application-based quality improvement projects so that students feel confident in their ability to use and apply quality improvement strategies in pharmacy practice. In addition to improving students’ ability to implement quality improvement measures, the completion of a class project or other application-based experience can also highlight the relevance and importance of quality improvement in medication error reduction.

The mean age of respondents in this study ranged from 26 to 29 years, which is slightly older than that of most pharmacy students. While older age might be associated with more exposure to EPIQ or quality improvement programs, only 14% (n = 45) of the students indicated they had previous quality improvement experience.

Prior studies of the EPIQ program focused on respondents’ intent to implement the EPIQ program at their institution or on faculty perceptions of program quality.4 This evaluation was unique in that it explored student and faculty perceptions after EPIQ implementation. It provides insights into the different ways colleges and schools implement the EPIQ program in their PharmD curricula. Assessing implementation through the evaluation of faculty members’ experiences and measuring student perspectives allowed us to identify factors that may have a bearing on student-perceived learning and attitude changes (ie, the inclusion of a quality improvement project). This will allow faculty members to adapt the EPIQ program to optimize student learning for future classes.

This study did not include a comparator group (ie, a university that did not implement the EPIQ program) in the evaluation. However, this program evaluation was designed specifically to assess implementation of the EPIQ program and its impact on student self-reported knowledge and attitudes across several universities. Research comparing colleges and schools that have implemented EPIQ to those that teach quality improvement and patient safety by other means (not using any portion of the EPIQ program) is planned. Results from this program evaluation should be interpreted cautiously as a convenience sample was used and the number of colleges and schools that participated is small. Because this investigation was designed as a program evaluation, participating colleges and schools were not intended to be representative of all US colleges and schools of pharmacy. Also, only 66% of the students from 5 universities responded to the EPIQ attitudinal questionnaire; thus, response bias may be present. Because of IRB restrictions, student grades could not be included in this program evaluation so it is not known whether the students who responded were the students who had higher grades. Finally, student data could not be collected at 2 of the 7 colleges and schools because of IRB restrictions and timing issues.


Conclusion

Evidence suggests that the EPIQ program is a viable, turnkey course that can be used to help pharmacy students build their knowledge of key quality improvement and patient safety concepts. Institutions should incorporate student quality improvement projects as part of the EPIQ program as this has been shown to increase student learning.


Acknowledgements

The project described was supported in part by an award to Dr. Moczygemba from the National Center for Research Resources. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Research Resources, National Cancer Institute, or the National Institutes of Health.


References

1. Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. https://www.acpe-accredit.org/pdf/S2007Guidelines2.0_ChangesIdentifiedInRed.pdf. Accessed June 6, 2012.

2. Holdford DA, Warholak TL, West-Strum D, Bentley JP, Malone DC, Murphy JE. Teaching the science of safety in US colleges and schools of pharmacy. Am J Pharm Educ. 2011;75(4):Article 77.

3. Warholak TL, West D, Holdford DA. The educating pharmacy students and pharmacists to improve quality program: tool for pharmacy practice. J Am Pharm Assoc. 2010;50(4):534–538.

4. Warholak TL, Noureldin M, West D, Holdford D. Faculty perceptions of the Educating Pharmacy Students to Improve Quality (EPIQ) program. Am J Pharm Educ. 2011;75(8):Article 163.

5. Aiken LS, West SG. Invalidity of true experiments: self-report pretest biases. Eval Rev. 1991;14(4):374–390.

6. Sprangers M, Hoogstraten J. Pretesting effects in retrospective pretest-posttest designs. J Appl Psychol. 1989;74(2):265–272.

7. Howard GS. Response-shift bias: a problem in evaluating interventions with pre/post self-reports. Eval Rev. 1980;4(1):93–106.

8. Skeff KM, Stratos GA, Bergen MR. Evaluation of a medical faculty development program: a comparison of traditional pre/post and retrospective pre/post self-assessment ratings. Eval Health Professions. 1992;15(3):350–366.

9. Bray JH, Howard GS. Methodological considerations in the evaluation of a teacher-training program. J Educ Psychol. 1980;72(1):62–70.

10. Jackson TL. Application of quality assurance principles: teaching medication error reduction skills in a “real world” environment. Am J Pharm Educ. 2004;68(1):Article 17.

11. Wright BD, Masters GN. Rating Scale Analysis. Chicago: MESA Press; 1982.

12. Wright BD, Stone MH. Best Test Design. Chicago: MESA Press; 1979.

13. Wolfe EW, Chiu CWT. Measuring pretest-posttest change with a Rasch rating scale model. J Outcome Meas. 1999;3:134–161.

14. Richards L. Handling Qualitative Data: A Practical Guide. 1st ed. London: Sage Publications; 2005.

15. Linacre JM. Investigating rating scale category utility. J Outcome Meas. 1999;3(2):103–122.

16. Warholak TL, Nau DP. Quality & Safety in Pharmacy Practice. 1st ed. McGraw-Hill Professional; 2010.

17. Warholak TL. Preceptor perceptions of pharmacy student team quality assurance projects. Am J Pharm Educ. 2009;73(3):Article 47.

18. Windish DM, Reed D, Boonyasai RT, Chakraborti C, Bass EB. Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med. 2009;84(12):1677–1692.

Articles from American Journal of Pharmaceutical Education are provided here courtesy of American Association of Colleges of Pharmacy



Background

Depression is a highly prevalent and costly disorder. Effective treatments are available but are not always delivered to the right person at the right time, with both under- and over-treatment posing problems. Up to half the patients presenting to general practice report symptoms of depression, but general practitioners have no systematic way of efficiently identifying level of need and allocating treatment accordingly. Therefore, our team developed a new clinical prediction tool (CPT) to assist with this task. The CPT predicts depressive symptom severity in three months’ time, classifies individuals into three groups (minimal/mild, moderate, severe) based on these scores, and then provides a matched treatment recommendation. This study aims to test whether using the CPT reduces depressive symptoms at three months compared with usual care.
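The triage logic described (predicted score, then severity group, then matched recommendation) can be sketched as a simple mapping. The cut-points and recommendation wording below are illustrative placeholders, not the Target-D CPT's actual thresholds or advice, which are not specified here.

```python
def triage(predicted_score: float) -> tuple[str, str]:
    """Map a predicted 3-month depressive symptom score to a severity
    group and a matched treatment recommendation. The cut-points (10, 20)
    and recommendation text are hypothetical, chosen only to illustrate
    the three-group structure described in the protocol."""
    if predicted_score < 10:
        return ("minimal/mild", "low-intensity self-help resources")
    elif predicted_score < 20:
        return ("moderate", "guided low-intensity intervention")
    else:
        return ("severe", "practitioner-led care, eg, GP review")

group, recommendation = triage(14.0)
```

The key design point is that allocation is prognosis-based: the recommendation follows from the predicted future severity, not from the presenting symptoms alone.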


Methods

The Target-D study is an individually randomized controlled trial. Participants will be 1320 general practice patients with depressive symptoms who will be approached in the practice waiting room by a research assistant and invited to complete eligibility screening on an iPad. Eligible patients will provide informed consent and complete the CPT on a purpose-built website. A computer-generated allocation sequence, stratified by practice and depressive symptom severity group, will randomly assign participants to intervention (treatment recommendation matched to predicted depressive symptom severity group) or comparison (usual care plus Target-D attention control) arms. Follow-up assessments will be completed online at three and 12 months. The primary outcome is depressive symptom severity at three months. Secondary outcomes include anxiety, mental health self-efficacy, quality of life, and cost-effectiveness. Intention-to-treat analyses will test for differences in outcome means between study arms overall and by depressive symptom severity group.
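Computer-generated allocation sequences stratified in this way are commonly produced with randomly permuted blocks within each stratum. The sketch below illustrates that general technique under stated assumptions; the stratum labels, block size, and seed are hypothetical, and the trial protocol does not describe its sequence generation at this level of detail.

```python
import random

def stratified_sequence(strata, n_per_stratum, block_size=4, seed=42):
    """Generate a 1:1 allocation sequence for each stratum (eg, a
    (practice, severity group) pair) using randomly permuted blocks,
    so arms stay balanced within every stratum."""
    rng = random.Random(seed)  # fixed seed makes the sequence reproducible
    sequences = {}
    for stratum in strata:
        seq = []
        while len(seq) < n_per_stratum:
            # Each block contains equal numbers of both arms, shuffled.
            block = (["intervention"] * (block_size // 2)
                     + ["comparison"] * (block_size // 2))
            rng.shuffle(block)
            seq.extend(block)
        sequences[stratum] = seq[:n_per_stratum]
    return sequences

# Hypothetical strata: one practice crossed with two severity groups.
alloc = stratified_sequence(
    strata=[("practice_1", "moderate"), ("practice_1", "severe")],
    n_per_stratum=8,
)
```

Stratifying by practice and severity group ensures that any effect-modifying differences between practices or severity levels are balanced across arms rather than left to chance.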


To our knowledge, this is the first depressive symptom stratification tool designed for primary care that takes a prognosis-based approach to providing a tailored treatment recommendation. If shown to be effective, this tool could be used to assist general practitioners to implement stepped mental-healthcare models and contribute to a more efficient and effective mental health system.

Trial registration

Australian New Zealand Clinical Trials Registry (ANZCTR 12616000537459). Retrospectively registered on 27 April 2016. See Additional file 1 for trial registration data.

Electronic supplementary material

The online version of this article (doi:10.1186/s13063-017-2089-y) contains supplementary material, which is available to authorized users.

Keywords: Depression, Clinical prediction tool, Prognosis, Stepped care, General practice, Randomized controlled trial


Background and rationale

Depression affects at least 350 million people worldwide [1] and is a leading cause of non-fatal burden of disease [2]. It is costly to individuals in terms of relationships and functioning and to society in terms of direct medical costs and costs due to loss of individual productivity [3]. Despite significant investments in mental health globally, there is no evidence of a reduction in the burden of disease associated with depression [4]. One of the biggest challenges facing mental healthcare systems is the need to develop efficient methods of allocating clinically effective treatment, in a cost-effective way, to the people who need it most [5].

The majority of depression cases are identified, treated, and followed up in primary care [6]. However, general practitioners (GPs) have been criticized for both under- and over-diagnosing and treating depression [7–10]. For example, only 16% of Australians with case-level depression or anxiety receive an adequate “dose” of an evidence-based intervention consistent with treatment guidelines [9]. On the other hand, antidepressant prescriptions far outnumber the patients for whom such medication is indicated [11].

Multi-country studies report that 24–55% of patients in primary care waiting rooms meet screening criteria for being “probably depressed” [12]. However, among this population of “probably depressed,” a range of illness trajectories exist which contribute to the difficulty experienced by practitioners in making a diagnosis and treatment recommendation [13–18]. It may be that the heterogeneity of clinical presentation which characterizes depression in the primary care setting is leading to the simultaneous problems of both over- and under-diagnosis and treatment.

Currently, there is a mismatch in primary care between patient need and the depression care received, possibly as a result of poor treatment allocation. For example, delivery of intensive interventions to people with minimal or mild symptoms is unnecessarily costly and risks medicalizing normal fluctuations in mood [19]. Conversely, without a targeted intensive intervention, people likely to experience severe and chronic symptoms are likely to experience significant disability, which could have been avoided [20, 21].

Stepped care models, in which patients are, in the first instance, provided with the least time- and resource-intensive intervention that will be effective [22], have been promoted as a potential solution to the problem of poor treatment allocation. Although limited empirical evidence exists as to their effectiveness [23], these models make intuitive sense and feature in both clinical guidelines and policy directives in Australia [24] and the UK [22]. Currently, a key obstacle to the implementation of stepped care models is the lack of effective treatment allocation tools to guide GPs in matching the intensity of treatment to a patient’s needs. In particular, current recommendations focus on matching treatment to the patient’s current symptom severity rather than the patient’s likely course of illness. This is out of step with the management of other conditions (e.g. cardiovascular disease or cancer), where prognostic factors are routinely taken into account when deciding upon treatment recommendations. Further, it contrasts with calls for research, policy, and practice to make prognosis-based intervention a priority [25]. To date, there has been no quick and systematic way for GPs to identify the depression outcomes that a particular person is likely to experience in the future and recommend treatment accordingly.

One option for systematizing treatment recommendations is to use a clinical prediction tool (CPT). CPTs are based on a prognostic model that uses clinical and non-clinical information to estimate an individual’s risk of a specific outcome [26]. The prognostic model is applied in clinical practice using the CPT, which stratifies patients into different treatments according to their estimated risk [27]. While CPTs are common in many fields of medicine, they are not readily available for use in mental-healthcare settings [28].

To address this gap, we aimed to develop a simple, easy-to-use CPT to assist primary care clinicians to triage patients presenting with depressive symptoms and allocate them to appropriate treatment. First, we investigated whether an existing prognostic model for depression could be used to build the CPT. We identified several prognostic models that have been developed to predict current [29, 30] or future major depression [31–34] or treatment response [35–37]. However, none of these prognostic models were found to be suitable for incorporation into a CPT that could be easily administered in routine care [38].

Therefore, we developed a novel prognostic model using data from the diamond cohort study [39] to predict depressive symptom severity at three months [38]. It comprises 17 items: depressive symptom severity at baseline as measured by the Patient Health Questionnaire-9 (PHQ-9) [40]; sex; current anxiety; history of depression; presence of a chronic illness affecting daily functioning; self-rated health; living alone; and perceived ability to manage on available income. Based on his or her score, each individual is stratified into one of three groups according to predicted depressive symptom severity at three months; namely, minimal/mild (those predicted to have a PHQ-9 score of ≤ 10 at three months), moderate (PHQ-9 > 10 and < 13), and severe (PHQ-9 ≥ 13). Cutoffs for the three groups were established during the development of the diamond CPT and are explained in full elsewhere [38]. In the intervention being tested in the current study, individuals are then:

  1. Presented with feedback reflecting their responses to the CPT;

  2. Provided an opportunity to set priorities and reflect on motivation to change; and

  3. Presented with an evidence-based treatment recommendation matched to group classification.

The presentation of feedback and treatment recommendation was informed by the principles of motivational interviewing [41] and an iterative development process employing user-centered design principles to ensure the information is presented in a way that is meaningful and engaging for participants [42].
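
The stratification rule described above can be sketched in a few lines. The function name below is illustrative and not part of the Target-D software, but the cutoffs are those given for the diamond CPT [38]:

```python
def classify_severity(predicted_phq9):
    """Stratify a participant by predicted three-month PHQ-9 score.

    Cutoffs follow the diamond CPT: minimal/mild <= 10,
    moderate > 10 and < 13, severe >= 13.
    """
    if predicted_phq9 <= 10:
        return "minimal/mild"
    if predicted_phq9 < 13:
        return "moderate"
    return "severe"
```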


The Target-D randomized controlled trial (RCT) aims to test whether using the diamond CPT to tailor treatment recommendations to an individual’s predicted depressive symptom severity is a clinically effective and economically efficient way of reducing depressive symptoms, relative to usual care. This paper presents the study protocol for the Target-D RCT, adhering to the SPIRIT guidelines for intervention trial designs ([43]; see Additional file 2 for SPIRIT checklist).

The primary objective of the Target-D trial is to determine if using the diamond CPT to triage individuals with depressive symptoms into symptom severity-appropriate treatment reduces depressive symptoms at three months compared with usual care.

Secondary objectives are to: (1) test whether individuals in the intervention and comparison arms differ in depressive symptom severity at 12 months, quality of life, anxiety symptoms, self-efficacy, and health service use at three and 12 months; (2) determine whether the outcomes differ between the two study arms within each of the three depressive symptom severity groups; and (3) evaluate the cost-effectiveness of the new model of care compared to usual care.

Trial design

Target-D is a stratified, individually randomized controlled trial with two parallel arms, modelled on the trial undertaken by Hill et al., who tested the stratified management of low back pain [44]. Participants will be randomized to the intervention or usual care arm with 1:1 allocation, stratified by general practice and predicted depressive symptom severity group. Participants in the intervention arm will be categorized into one of three treatment groups according to their diamond CPT results; participants in the usual care arm will complete the diamond CPT but will not receive feedback, an opportunity for reflection, or a treatment recommendation specific to their predicted depressive symptom severity. An intention-to-treat (ITT) approach will be used in the analysis (explained further below).


Participants, interventions, and outcomes

Study setting

The study will be conducted in at least ten general practices in Victoria, Australia (see Additional file 3 for the location of study sites).

Eligibility criteria

General practices will be eligible if they: see more than 50 adults aged 18–65 years per day; agree to waiting room screening; have a private space available for the Target-D intervention; and have the majority of their GPs willing to collaborate with the Target-D team.

Patients attending participating general practices will be assessed for eligibility using a self-report survey delivered via an iPad. Patients will be eligible if they score 2 or more on the two-item version of the Patient Health Questionnaire (PHQ-2) [45] (indicating depressive symptoms but not necessarily a diagnosis of major depressive disorder), are aged 18–65 years, have access to the Internet for the duration of follow-up, have sufficient written English to follow an Internet-based cognitive behavioral therapy (iCBT) program, have not changed depression medication in the past month (if they take such medication), and agree to randomization to either the usual care or intervention arm. They will be ineligible if they are currently taking antipsychotic medication, are regularly seeing or planning to see a psychologist in the next three months, or are currently using an iCBT program. Eligible patients will be asked to provide informed consent online (see Additional file 4) and complete baseline measures prior to completing the diamond CPT.
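
The eligibility rules above can be summarized as an illustrative screening function; the parameter names are ours, and the actual screening logic runs within the study's iPad survey:

```python
def is_eligible(phq2_score, age, has_internet, sufficient_english,
                changed_medication_past_month, on_antipsychotics,
                seeing_psychologist, using_icbt, agrees_randomization):
    """Illustrative sketch of the Target-D patient eligibility screen."""
    inclusion = (
        phq2_score >= 2                      # PHQ-2 indicates depressive symptoms
        and 18 <= age <= 65
        and has_internet                     # needed for follow-up and iCBT
        and sufficient_english
        and not changed_medication_past_month
        and agrees_randomization
    )
    exclusion = on_antipsychotics or seeing_psychologist or using_icbt
    return inclusion and not exclusion
```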


Participants will complete the diamond CPT online on a purpose-built study website (henceforth referred to as the Target-D website). They will then be contacted by phone by a trained research assistant (RA) to provide encouragement and support and answer questions as necessary. This phone call will occur within one week of diamond CPT completion.

Intervention arm

As described above, the intervention being tested comprises feedback on CPT responses, an opportunity to set priorities, and a treatment recommendation (based on predicted depressive symptom severity). Immediately after completing the diamond CPT, participants in the intervention arm will see these components displayed on sequential pages of the Target-D website.

The follow-up phone call from an RA will involve a discussion about the treatment recommendation they received, using the results of the diamond CPT to tailor the discussion to the individual’s classification. To encourage treatment engagement, this discussion will use motivational interviewing, an approach to conversations that promotes collaboration and aims to strengthen the person’s motivation and commitment to making a change [41].

The recommended treatments for each of the three groups were selected based on a stepped care approach [22], with treatment intensity lowest in the minimal/mild group and highest in the severe group. To select the specific treatments offered at each level of intensity, we examined existing primary care data from the diamond cohort study to describe the characteristics, treatment, and service use of individuals stratified to each group. We also reviewed systematic reviews of the evidence relevant to each group and presented these findings to our investigator team to inform the treatments offered. A comprehensive description of the interventions delivered, using the TIDieR checklist, will be included in the primary outcome paper as per Hoffmann et al. [46] and CONSORT guidelines.

Minimal/mild depressive symptoms at three months

Participants who are likely to have minimal or mild depressive symptoms at three months will be offered self-help and automated follow-up using the myCompass iCBT program which has been shown in randomized trials to be effective in improving outcomes for patients with mild depression [47, 48]. myCompass is an interactive, self-help Internet resource consisting of information, accounts of others’ experiences, treatment modules with home tasks, and mood tracking functions. myCompass uses an internal algorithm to recommend components tailored to participant symptoms and needs. Participants can choose to follow the recommendation or not and may undertake components of the program in any order.

Participants in the minimal/mild group will receive two automated emails from the research team to encourage uptake of, and adherence to, the treatment recommendation. These emails will be sent immediately after the participant receives the recommendation to use myCompass and one week later, and are in addition to any correspondence the participant receives from the myCompass program. Emails will provide participants with the link to myCompass, encouragement to get started, and reminders of some of the benefits of the program. This will mimic what would be feasible in the routine clinical setting.

Moderate depressive symptoms at three months

The moderate group will be offered clinician-guided iCBT via the Worry and Sadness course in the This Way Up program, which has randomized trial evidence of effectiveness in reducing moderate symptoms of depression [49]. This Way Up comprises six structured online lessons based on CBT principles, delivered in the form of an illustrated story about someone with depression, with printable summaries, homework assignments, and symptom monitoring at the beginning of each session [49]. Lessons are completed in a linear order and each becomes available five days after the previous lesson is completed.

Target-D will follow the standard This Way Up protocol, with participants provided with weekly individualized support via phone/email until they have completed Lesson Two [49]. Support will include positive encouragement to commence or continue treatment, reiteration of the importance of homework completion, and responses to general questions that refer back to program materials. This role will be filled by RAs, in line with evidence supporting the effectiveness of non-clinician-provided support to This Way Up users [50]. In keeping with the published protocol [49], after the completion of Lesson Two, phone contact will be made in response to a patient request or a deterioration in condition (defined as an increase of ≥ 5 on the PHQ-9 [51]).

Severe depressive symptoms at three months

The severe group will be offered collaborative care, an enhanced form of patient care shown to be effective for the treatment of moderate to severe depression in primary care [21, 52–54]. Collaborative care is defined by four key ingredients: a multi-professional approach to patient care; a structured management plan; scheduled follow-ups; and coordinated communication between the health professionals involved in management [21, 55]. Target-D participants receiving collaborative care will have eight appointments with a trained case manager (CM). The CM role will be filled by a non-mental-health specialist, such as a registered nurse. This decision is in keeping with the role practice nurses fill in managing other chronic conditions, such as diabetes, and thus should enhance the scalability of the Target-D model of care should it prove effective.

The Target-D approach to collaborative care is underpinned by the principles of motivational interviewing, to enhance patient engagement and action. The Target-D CM will receive training in the intervention approach by a qualified psychologist and will receive regular supervision and support from the psychologist and project manager (SF) throughout the trial.

Patients in this group will be reminded of upcoming appointments with the Target-D CM via SMS. After each appointment, patients will receive an email from their CM summarizing their discussion and outlining the actions the patient intends to take to manage his/her mental health. With the patient’s consent, the CM will also send a copy of this email to the treating GP and other professionals involved in the patient’s mental healthcare.

Comparison arm

Participants randomized to the comparison arm will access health services as usual. The choice of “usual care” as a comparator was made as the study aims to determine the extent to which the intervention improves (or worsens) patient outcomes relative to standard practice [56]. Participants in this arm will also receive some non-therapeutic attention from Target-D to control for any effect of contact with the study team following completion of the diamond CPT; thus, this study arm is referred to as “Usual care plus Target-D” or UC+. UC+ participants will be blinded to their depressive symptom severity group allocation and will not receive a tailored treatment recommendation. Instead, they will see a screen on the Target-D website advising them that they will be asked to provide feedback on: (1) their opinions on research in primary care; and (2) how they normally manage their emotional health and wellbeing. Similar to the procedure for intervention participants, those in the usual care arm will be contacted by phone by an RA within one week of diamond CPT completion. The RA will reiterate the importance of the participant’s involvement in the study, ask a series of structured questions about the participant’s views on research and the involvement of their general practice in research, and advise the participant that he or she will be contacted via email in 12 weeks.


The nature of the study interventions is such that no substantive modifications are anticipated. Patients in the minimal/mild and moderate groups may discontinue using the online program at any time and treatment for those in the severe group may be discontinued at patient request.

If any participant indicates high levels of suicidal ideation during contact with a member of the study team (as indicated by a response of “nearly every day” to question 9 on the PHQ-9: “thoughts that you would be better off dead or of hurting yourself in some way”), regardless of study arm allocation, a standardized suicidal ideation assessment used previously by the study team will be administered and the patient’s GP alerted. This will be reported as an adverse event but is unlikely to result in treatment discontinuation or modification. The assessment will determine if the adverse event was related directly to the intervention or other circumstances not intervention related.

Concomitant care

In both the usual care and intervention arms, participants will be permitted to continue any treatment they were engaged with at entry to the trial. Concomitant care will be assessed via self-report questionnaire and routinely collected Government data (see below).

Treatment adherence

In the intervention arm, adherence to treatment in the minimal/mild and moderate depressive severity groups will be assessed using website analytics within myCompass and This Way Up (i.e. tracking individual log-ins, access of components, completion of modules and lessons). In the severe group, adherence to the treatment plan will be assessed by the Target-D CM as part of the planned follow-up schedule.

In the control arm, in order to compare “usual care” before and during participation in the trial, information about health service use will be collected at each study assessment (see below).


Outcome measures will be collected at baseline and three and 12 months post randomization. These time points were selected to balance the benefits of multiple assessments against the risk of unduly burdening participants. They allow us to examine both the immediate and longer-term effect of the intervention and, because they are commonly used in trials of mental-health interventions, will permit comparisons to be drawn with other relevant studies.

Primary outcome

The primary outcome is the difference between the two treatment arms in mean depressive symptom severity at three months, controlling for baseline depressive symptom severity. Depressive symptom severity was selected as the outcome measure rather than a clinical diagnosis of major depressive disorder as it is more relevant to the design and delivery of stepped mental healthcare.

Secondary outcomes

Secondary outcomes include difference between study arms in mean depressive symptom severity at 12 months and mean mental-health self-efficacy and anxiety at three and 12 months. The cost-effectiveness of the intervention over the study period will comprise an additional secondary outcome.

Sample size

The sample size calculation was based upon our trial experience, a systematic review of depression trials [55], and current data from the diamond study [39]. The primary objective is to test for a standardized effect size of 0.2 SD in mean depressive symptoms at three months between the intervention and comparison arms. However, we based our calculations on the planned subgroup analyses, because testing for differences between study arms within each of the three depressive symptom severity groups requires a larger sample than testing for a difference overall. For the minimal/mild group, we powered the study to detect a standardized mean difference of 0.2 between arms (given the potential floor effect, we anticipate a smaller intervention effect). We hypothesized a standardized effect size of 0.5 in the moderate and severe depressive symptom severity groups, as they have room for greater improvement and will receive more intensive treatments.

Based on the CPT development work, we anticipated that 70% of participants will be classified as being likely to have minimal/mild depressive symptoms, 15% as moderate, and 15% as severe depressive symptoms at three months. We used these estimates to extrapolate the total sample size needed to ensure that we had sufficient power for the sub-group analyses. Based on these assumptions, we required 158 (78 in each arm) participants in each of the moderate and severe groups to detect a standardized effect size of 0.5 and 740 (370 in each arm) in the mild/minimal group to detect a smaller standardized effect size of 0.2, with 80% power and 5% significance level for a two-sided test.

This leads to an anticipated sample size of 1056 participants (528 in each arm), which is also sufficient for the primary objective to detect a standardized effect size of 0.2 in the mean PHQ-9 between study arms, with 90% power and 5% significance level. A standardized effect size of 0.2 is equivalent to a change of 1.35 points in the mean depressive symptoms, assuming a standard deviation of 6.75 (based on diamond data). This effect size is in keeping with those found in systematic reviews of interventions to decrease depressive symptom severity in primary care [57].

We inflated the required sample size to 1320 to allow for 20% attrition at 12 months. Based upon documented response rates and depressive symptom prevalence gathered from our experience of recruiting participants with depressive symptoms in the primary care setting [39], achieving this sample size at baseline requires that we invite 22,000 adults to complete the screening tool (Fig. 1).
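
The arithmetic behind these figures can be checked directly. The sketch below reproduces the stated group sizes, the attrition inflation, and the points equivalent of a 0.2 SD effect; it is a check of the stated numbers, not the underlying power calculation:

```python
import math

# Anticipated split of the 1056 analysable participants (roughly 70/15/15)
n_minimal_mild = 740   # powered for a standardized effect of 0.2
n_moderate = 158       # powered for a standardized effect of 0.5
n_severe = 158
total = n_minimal_mild + n_moderate + n_severe   # 1056

# Inflate for 20% attrition at 12 months
recruitment_target = math.ceil(total / (1 - 0.20))   # 1320

# A 0.2 SD effect on the PHQ-9, with SD = 6.75 taken from diamond data
points_equivalent = 0.2 * 6.75   # 1.35 points
```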

Fig. 1

Expected progression of participants through the study


Study sites

We will follow principles of good recruitment by engaging with all stakeholders, branding the Target-D trial, and using a well-developed engagement strategy [58]. We will recruit general practices via our Victorian Primary Care Practice-Based Research Network (VicReN), which has around 200 GP members located in Victoria, Australia. Practices will be contacted by phone and/or email to introduce the study and establish interest. One of the Target-D researchers will then visit interested practices to determine eligibility, provide detailed information about the study, and gain consent to participate. This process will continue until sufficient practices are recruited to obtain the required sample size.

To enhance GP and practice staff engagement in the trial and the activities necessary to make it function, we will be guided by the principles of Normalization Process Theory (NPT) [59]; namely, coherence (meaning of the trial to GPs and staff), cognitive participation (commitment and engagement), collective action (the work GPs and staff do to make the trial function), and reflexive monitoring (GP and staff appraisal of the trial). In each participating practice, GPs and staff will be given a training session clarifying the goals and activities of the trial, in order to instill a sense that the trial is a good idea and worth committing to. In addition, we will clearly outline the trial procedures and how they are likely to affect the work of the practice, with emphasis on how the trial fits with the overall goals of the general practice.

Minimizing contamination

Randomizing individuals recruited from the same general practice carries a greater risk of contamination between intervention and comparison arms than randomizing individuals from different practices, particularly where the clinician is not blinded to the participant’s study arm or the intervention is implemented at the practice level [60]. However, the risk of contamination in this trial is expected to be minimal because of several factors. First, recruitment of participants will be conducted in the waiting room by an RA who is not involved in delivering the study intervention and has no access to the allocation schedule. While patients in the waiting room or from the same family or friendship groups may share information, it is unlikely that this will impact the intervention effect, as the intervention can only be accessed by permission of the study team. Second, GPs will only be informed of participants allocated to collaborative care treatment. Even if other patients inform their GP that they are participating in Target-D, GPs will not be informed of their treatment allocation nor be able to access study interventions for UC+ patients. Third, the intervention for minimal/mild groups will be via Internet-based programs delivered outside the practice, reducing the potential for practice-based contamination. We will assess the number of UC+ participants registering for these programs to measure the degree of potential contamination. Fourth, the risk that GPs may deliver elements of the intervention to patients predicted to have severe symptoms and allocated to the UC+ group is small. We anticipate that fewer than ten such participants will be recruited per practice and this small number of patients will be seen by different GPs.
We have successfully used a similar approach in a previous RCT in general practice; data from this trial showed very low levels of interaction between comparison participants and the GP during the study time-frame [61].

Primary care patients

Potential participants will be alerted to the study via posters and information pamphlets displayed in practice waiting rooms, an awareness-raising strategy the research team has used previously. All study materials and procedures were developed and tested with focus groups and individual feedback to ensure they are engaging and user-friendly. Upon completion of the screening survey, eligible participants will enter their name, telephone number, and email address into an online form. They will then be presented with an electronic copy of the plain language statement and provide online consent to participate.

To achieve our required sample size, trained RAs will approach an average of 50 adults per practice per day, taking approximately 440 working days to approach our required 22,000 adults. These numbers are typical of RCTs in primary care and we have achieved similar numbers in previous successful studies [39, 62, 63]. While this number may seem ambitious, several factors contribute to it being achievable. First, we have designed the trial so that participants are recruited by RAs rather than expecting already time-pressured GPs and practice staff to take responsibility for recruitment. Second, the 22,000 patients RAs will approach include all adult patients in the waiting room; RAs and practice staff are not required to identify only those patients presenting to the GP for mental health reasons. Finally, we have piloted this approach in a clinic and shown that on weekdays alone an RA can approach at least 50 patients per day and invite 1100 patients per month to the study. Our pilot work has shown that an RA spends only 1 or 2 min with each patient and can comfortably approach 150 patients in a working day. Based on this experience, and after accounting for weekend recruitment in some practices, we anticipate participant recruitment to take place over approximately 18 months. Recruitment of participants will continue until the numbers within each depressive symptom severity group have been met.
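
As a quick check on the recruitment arithmetic (the figure of 22 weekdays per month is our assumption, used only to reproduce the stated monthly rate):

```python
adults_to_screen = 22_000
approached_per_day = 50                                  # adults per practice per day
working_days = adults_to_screen // approached_per_day    # 440 working days

weekdays_per_month = 22                                  # assumed ~22 weekdays/month
invited_per_month = approached_per_day * weekdays_per_month   # 1100 per month
```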

Assignment of interventions


Consent and baseline measures will be collected prior to randomization to minimize reporting and selection bias. When the individual has completed the diamond CPT, he or she will be randomly assigned in a 1:1 ratio to the intervention or comparison arm. Randomization will be stratified by general practice and depressive symptom severity group. The allocation sequence will be computer-generated sequentially within each stratum using a biased-coin algorithm [64] embedded within the Target-D website, which is housed on the secure National eResearch Collaboration Tools and Resources (Nectar) cloud providing computing infrastructure to Australian researchers. Using restricted randomization within each stratum ensures that the number of individuals is balanced between study arms within the stratum and that the stratification factors are balanced in each study arm. Randomization will be triggered automatically within the Target-D website, after a participant has completed the baseline measurements and the diamond CPT, thus ensuring allocation concealment.
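
A biased-coin allocation within a single stratum (practice × severity group) can be sketched as follows. This is an Efron-style coin with p = 2/3 chosen for illustration; the trial's actual algorithm and parameters are those of reference [64]:

```python
import random

def biased_coin_assign(counts, p=2/3, rng=random):
    """Assign one participant within a stratum using a biased coin.

    counts maps arm label to current arm size, e.g. {"A": 10, "B": 12}.
    When the arms are balanced, a fair coin is used; otherwise the
    under-represented arm is favored with probability p.
    """
    a, b = counts["A"], counts["B"]
    if a == b:
        arm = "A" if rng.random() < 0.5 else "B"
    else:
        under = "A" if a < b else "B"
        over = "B" if under == "A" else "A"
        arm = under if rng.random() < p else over
    counts[arm] += 1   # update the stratum's running totals
    return arm
```

Each stratum keeps its own running counts, so balance is maintained between study arms within every practice-by-severity combination.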


Due to the nature of the intervention, participants cannot be blinded to their treatment allocation. However, GPs will not be notified of their patients’ allocation to either intervention or comparison arm. No emergency unblinding of GPs is anticipated, including in the case of the research team alerting the GP to patient suicidality. As outcome assessment is conducted online, no blinding of outcome assessors is required. All study analyses will be conducted by a statistician blind to participants’ allocation; study arm allocation will be coded as A or B, with the code for the study arm revealed only after data are analyzed.

Data collection

Participant data will be collected from the intervention and comparison arms using validated questionnaires on the Target-D website at screening, baseline, and three and 12 months (Fig. 2). diamond CPT data will also be collected on the Target-D website. Participants will receive an automated email from the website at 80 and 358 days after diamond CPT completion, with a unique link to the three-month and 12-month surveys, respectively. Participants will be informed that if they decide to withdraw from the study, the data already provided will be retained and used in the analyses unless they request otherwise.

Fig. 2

Schedule of enrolment, interventions and assessments. PHQ-9 Patient Health Questionnaire – 9; GAD-7 Generalized Anxiety Disorder scale; MHSES Mental Health Self-Efficacy Scale; AQoL-8D Assessment of Quality of Life scale; MBS Medicare Benefits...


Demographic characteristics, including age, gender, highest level of education, and employment status, will be assessed at baseline.

Primary outcome

Depressive symptom severity will be assessed at each timepoint using the PHQ-9. The PHQ-9 assesses the nine DSM symptoms of depression over the last two weeks using a 4-point Likert scale (where 0 = “not at all” and 3 = “nearly every day”). Total scores are in the range of 0–27, with suggested cut points of 5, 10, and 15 indicating mild, moderate, and severe depression, respectively [40]. The PHQ-9 is a validated diagnostic measure in primary care [65], with demonstrated efficacy and sensitivity as an outcome measure for treatment trials with a recommended Reliable Change Index [51].
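Scoring of the PHQ-9 as described — nine items on a 0–3 scale, with cut points of 5, 10, and 15 — can be illustrated as follows. The function name and the "minimal" label for totals below 5 are assumptions, not part of the protocol:

```python
def phq9_score(items):
    """Sum the nine PHQ-9 items (each scored 0-3) and classify severity
    using the cut points of 5, 10, and 15 given in the text; the
    'minimal' label for totals below 5 is an assumption."""
    if len(items) != 9 or any(not 0 <= s <= 3 for s in items):
        raise ValueError("PHQ-9 requires nine items each scored 0-3")
    total = sum(items)
    if total >= 15:
        severity = "severe"
    elif total >= 10:
        severity = "moderate"
    elif total >= 5:
        severity = "mild"
    else:
        severity = "minimal"
    return total, severity
```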

Secondary outcomes

Self-efficacy will be measured using the Mental Health Self-Efficacy Scale (MHSES) [66]. The MHSES comprises six items that require respondents to rate on a 10-point Likert scale how confident they are in performing behaviors related to mental health self-care (from 1 = “not at all confident” to 10 = “totally confident”). Total scores are in the range of 6–60 and provide a unidimensional measure of self-efficacy; higher scores indicate greater levels of self-efficacy. The MHSES displays high internal consistency (Cronbach’s alpha = 0.91) and good construct validity, correlating well with measures of depression, anxiety, and functional impairment.

The seven-item Generalized Anxiety Disorder scale (GAD-7) will be used to assess anxiety [67]. The GAD-7 assesses the presence of anxiety symptoms over the past two weeks using a 4-point Likert scale. Scoring is similar to the PHQ-9; each item is scored from 0 to 3 (for a total possible score of 0–21), with cut points of 5, 10, and 15 corresponding to mild, moderate, and severe anxiety symptoms. The GAD-7 has excellent internal consistency (Cronbach’s alpha = 0.92) and test–retest reliability. Its construct, convergent, and discriminant validity are high; it correlates well with measures of depression and functioning (while assessing a distinct construct), as well as with other measures of anxiety.

Quality of life will be assessed at each time point using the Assessment of Quality of Life (AQoL-8D) [68]. This is a validated, reliable measure [69] that comprises eight dimensions (independent living, senses, pain, mental health, happiness, self-worth, coping, and relationships) that can be used to calculate quality-adjusted life years (QALYs) via a utility algorithm. The AQoL-8D has been shown to be sensitive to depressive symptom severity levels [69].

Cost-effectiveness of the intervention will be measured through assessment of health service use, effects on productivity, and calculation of QALYs. Health service use will be tracked using data extracted from the Australian Government Department of Health: the Medicare Benefits Schedule (MBS) that maintains information about visits to healthcare providers and diagnostic tests; and the Pharmaceutical Benefits Scheme (PBS) database of medications supplied on prescription. Participants will provide additional consent to access their MBS and PBS data. Other resource use not captured by these national databases, including the use of broader health and welfare services and effects on productivity (i.e. education and workforce participation), will be assessed via self-report using an adapted questionnaire developed by members of the research team and used in numerous other Australian mental-health intervention trials [70–72].

Process data

To complement the outcome data collected as part of the RCT, a parallel process evaluation will be conducted to understand the context in which the outcomes were achieved. The evaluation will identify challenges of implementation and provide important guidance for future translation of trial findings, using the framework set out by the Medical Research Council [73]. The process evaluation will draw on data collected through a variety of sources, including but not limited to recruitment logbooks, interviews and surveys of GPs and practice staff, intervention uptake and adherence data (as described above), and interviews with randomly selected participants (across the two study arms and the three depressive symptom severity groups). A comprehensive protocol for this evaluation will be published separately.


To encourage retention at each study time point, non-responders will receive up to five reminders in total via phone, text, and email. These reminders will also provide the option of completing the baseline, three-month, or 12-month survey over the phone with an RA or being mailed a hard copy of the questionnaire to complete and return via reply paid envelope. At three and 12 months, participants who still do not complete the survey will be offered the option of completing the primary outcome measure (PHQ-9) alone. Outcome assessments may be completed in multiple sittings, with participants provided the option of saving their responses and returning later via a link emailed to them upon exiting the survey.

To acknowledge the time spent by participants and to further promote retention at three and 12 months, random draws for a $100 gift card will be conducted monthly for each follow-up survey, with all participants who completed the survey in the previous month eligible to receive a gift card. The selected participant will be contacted via phone and email. Participants will be advised of the draw in the initial email with their unique link to the relevant survey and in subsequent reminders.

Data management

Participants will enter data directly into the Target-D website, which will store responses coded according to standard practice for each validated questionnaire. The website presents each item on a separate page to minimize the chance of items inadvertently being missed. Data integrity will be enforced through the use of forced-choice or multiple-choice items wherever possible; valid-value and range checks will also be built into the website for free-text fields where appropriate.
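The forced-choice and range checks described above amount to simple per-item validation rules. A minimal sketch; the function name and example rules are illustrative, not the Target-D website's actual implementation:

```python
def validate_item(value, choices=None, min_val=None, max_val=None):
    """Apply the kinds of checks described above: forced-choice or
    multiple-choice items must come from an allowed set, and free-text
    numeric fields must fall within a valid range.
    Returns True if the value is acceptable."""
    if choices is not None and value not in choices:
        return False
    if min_val is not None and value < min_val:
        return False
    if max_val is not None and value > max_val:
        return False
    return True
```

For example, a Likert item would pass `choices={0, 1, 2, 3}` while a free-text age field would pass a plausible numeric range.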

The coded study data will be downloaded weekly from the Target-D website, stored securely, and backed up regularly on a central password-protected University system. A data manager will check all data to identify and, where possible, resolve errors prior to analyses being conducted. Data will be kept for 15 years after study completion, after which they will be destroyed in accordance with University protocol [74].

The Research Electronic Data Capture (REDCap) secure software application [75] will be used to manage contact with participants and track progress through the study, with participant information transferred manually into REDCap from the Target-D study website. Both REDCap and the Target-D website are password-protected and housed on secure University servers; only the study team will have access to the identified data.

Statistical methods

Descriptive statistics will be used to compare participant characteristics between the study arms, in total and stratified by depressive symptom severity group. A linear mixed-effects model estimated with restricted maximum likelihood, with random intercepts for individuals, will be used to estimate the difference in mean outcome between study arms at three and 12 months. All regression models will adjust for the baseline outcome measure (where appropriate), the stratification factors (practice, depressive symptom severity group), and time (baseline, three, and 12 months), with a two-way interaction between study arm and time except at baseline, where the means in the two study arms will be constrained to be equal. Baseline variables strongly associated with the outcome that are found to be imbalanced between the study arms will also be considered for adjustment in the regression analyses. Estimated intervention effects will be reported as the difference in outcome means between study arms (intervention minus comparison), with 95% confidence intervals and p values. Similar regression analyses will be used to compare outcomes between the intervention and comparison arms separately for each of the three depressive symptom severity groups. In a secondary analysis, we will investigate the intervention effect among individuals who would comply with their assigned treatment using a complier average causal effect (CACE) analysis [76]. A detailed analysis plan will be developed for the secondary and sensitivity analyses. All analyses will be performed using Stata 13.0 [77].

Missing data

Analyses will follow an intention-to-treat (ITT) approach, in which participants are analyzed in the study arm to which they were allocated [78]. In the first instance, we will implement strategies to minimize missing outcome data, including the participant retention strategies outlined above. Reasons participants are lost to follow-up will be recorded. Sensitivity analyses will be used to assess the robustness of results to assumptions about the missing data.

Cost-effectiveness and cost-utility analysis

Incremental cost-effectiveness ratios (ICERs) will be determined as (cost of intervention − cost of comparison) / (outcome of intervention − outcome of comparison), using the AQoL-8D to derive QALYs. ICERs based on other important study outcomes (such as cost per remitted case) will also be determined. Uncertainty will be characterized by bootstrap and regression analyses, with results presented as cost-effectiveness planes and acceptability curves. Sensitivity analyses will be used to determine the impact of important study parameters (such as variation in unit cost prices). Depending on trial results, modeling may also be used to extrapolate beyond the trial time horizon.
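The ICER calculation and its bootstrap can be sketched as follows. This is an illustrative outline only, not the planned analysis; the function names and sample data are hypothetical, and the bootstrap resamples participants within each arm while preserving each person's cost–QALY pairing:

```python
import random
import statistics

def icer(int_data, comp_data):
    """(mean cost_int - mean cost_comp) / (mean QALY_int - mean QALY_comp).
    Each argument is a list of (cost, qaly) pairs, one per participant."""
    d_cost = (statistics.mean(c for c, _ in int_data)
              - statistics.mean(c for c, _ in comp_data))
    d_qaly = (statistics.mean(q for _, q in int_data)
              - statistics.mean(q for _, q in comp_data))
    return d_cost / d_qaly

def bootstrap_plane(int_data, comp_data, n_boot=1000, rng=random):
    """Resample participants with replacement within each arm to produce
    (incremental cost, incremental QALY) points on the cost-effectiveness
    plane, from which acceptability curves can be derived."""
    points = []
    for _ in range(n_boot):
        bi = rng.choices(int_data, k=len(int_data))
        bc = rng.choices(comp_data, k=len(comp_data))
        d_cost = (statistics.mean(c for c, _ in bi)
                  - statistics.mean(c for c, _ in bc))
        d_qaly = (statistics.mean(q for _, q in bi)
                  - statistics.mean(q for _, q in bc))
        points.append((d_cost, d_qaly))
    return points
```

Plotting the bootstrap points on the cost-effectiveness plane, and computing the proportion falling below a given willingness-to-pay threshold, yields the acceptability curve.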


The Target-D study will be monitored by the Steering Committee (SC) and a Data Monitoring Committee (DMC). The SC will comprise all named investigators and the project manager and will be led by the Chief Investigator. The SC will have biannual meetings to monitor recruitment progress, troubleshoot any areas of concern, ensure that the project is being conducted according to protocol, and identify additional training or support required by the research staff to facilitate the smooth running of the trial.

The DMC will comprise at least three members and be led by Professor Jon Emery, an experienced researcher independent of the research team. Collectively, DMC members will have clinical, research, and statistical expertise across primary care and mental health. Members of the DMC will be provided with a Charter outlining their scope of responsibilities (Additional file 5). The DMC will meet biannually to monitor trial processes and progress and to review complaints, harms, and adverse events. Adverse events may be serious or otherwise; the former are defined as those which “might be significant enough to lead to important changes in the way the [intervention] is developed” [79]. Because the interventions used in the study are evidence-based and all participants are linked in with health services, adverse events will be assessed through routine data collection; no interim analyses or audits are planned. All adverse events will be recorded (including their relation to the study, severity, potential for the event to have been anticipated, and action taken) and reported to the DMC. Serious adverse events will also be reported to the University ethics committee.

Ethics and dissemination

The University of Melbourne Human Research Ethics Committee (HREC) has approved this study protocol (ID number 1543648). Collection of MBS and PBS data has been approved by the Australian Government Department of Human Services Information Services Branch (ID: MI3794). Approval from these two ethics committees applies to all study sites. Any substantive modifications to this protocol that affect the conduct or nature of the study will be submitted to the responsible HREC for approval prior to implementation.

Eligible patients will receive a plain language statement outlining the potential risks and benefits of participating in Target-D and give informed consent to participate in the study through the Target-D website. A copy of the plain language statement will also be provided via email. Consent will apply only to the current research study. Participants will be advised at the time of study consent that they will be asked for separate consent to collect their MBS/PBS data. Participants will subsequently receive a plain language statement regarding MBS and PBS data collection (Additional file 4) and a link to provide informed consent online. Participants will be advised that consenting to provide access to their MBS/PBS information is optional and will not affect their participation in Target-D. All information provided to participants regarding the collection of this data adheres to Australian Government requirements.

Confidentiality of participants will be protected by assignment of an identification number to each participant. Participants’ study information will not be released outside of the study without permission, except where maintaining confidentiality endangers the health or safety of the participant or someone else. Only investigators included in the original ethics applications or subsequent amendments will have access to the identified dataset.

Declaration of interests

GA heads the Clinical Research Unit for Anxiety and Depression, which is home to This Way Up. As GA will not be involved in data analysis or interpretation, this interest will have no undue influence on the study findings. No other authors have competing interests to declare. During the trial, all authors will comply with their respective institution’s policies on conflicts of interest.

Dissemination policy

Regardless of the magnitude or direction of effect, the results of this trial will be presented at relevant research conferences and published in peer-reviewed journals. The study will be reported following the CONSORT and TIDieR guidelines. Authorship eligibility guidelines at the respective institutions will be followed. The results of the trial will be communicated to participants via a trial newsletter and to the involved GP clinics via a personal visit and community reports. The findings from this trial have the potential to affect healthcare policy and will be reported to relevant government bodies. There are no plans to allow public access to the dataset or statistical code.


The burden of disease associated with depression is large and shows no sign of decreasing, despite significant investment in an array of effective treatments. One suggested reason is poor allocation of treatment, with both over-treatment of mild symptoms and under-treatment of severe symptoms common. Stepped care models, in which people receive the least time- and resource-intensive intervention that will be effective, are posited as a solution to this mismatch; however, there is currently no systematic way of identifying which “step” of treatment an individual should be allocated to. In addition, mental health lags behind other fields of medicine, where treatment is increasingly allocated according to a person’s likely future course of illness rather than their current symptoms alone.

We have therefore developed a new CPT which predicts depressive symptom severity at three months and provides an evidence-based treatment recommendation accordingly. In the Target-D trial, we will test whether using this tool to match individuals to treatment is a clinically effective and cost-efficient way of reducing depressive symptom severity, relative to usual care. If the Target-D model for depression management is efficacious and cost-effective, implementation into practice could reduce unnecessary treatment burden and improve allocation of treatment resources.

Trial status

At the time of submission, patient recruitment to the Target-D trial is ongoing. The anticipated study completion date is July 2018.

Additional files

Additional file 1: Trial registration data. Table presenting the World Health Organization Trial Registration Data Set. (PDF 89 kb)

Additional file 2: SPIRIT checklist. Table identifying where each SPIRIT checklist item is addressed in the manuscript. (PDF 59 kb)

Additional file 3: Target-D study sites. List of confirmed study locations at the time of submission. (PDF 28 kb)

Additional file 4: Informed consent materials. Plain language statements and consent forms. (PDF 1027 kb)

Additional file 5: Target-D Data Monitoring Committee Charter. Charter outlining the aims and terms of reference of the trial Data Monitoring Committee. (PDF 212 kb)


The data used to develop the diamond CPT were collected as a part of the diamond project which is funded by the National Health and Medical Research Council (ID: 299869, 454463, 566511, 1002908). Refinement and testing of the diamond CPT was supported by NHMRC project ID 1059863. We acknowledge the 30 dedicated GPs, their patients, and practice staff for making the diamond study possible. We also acknowledge Eman Alatawi for early work that informed the presentation of the diamond CPT as well as the 24 focus group participants that provided feedback on early versions of the Target-D materials. Finally, we thank Adam Lodders and Anchalee Laiprasert of the Melbourne Networked Society Institute (MNSI) for their assistance in building the Target-D website.


This study is funded by a grant from the National Health and Medical Research Council (NHMRC) (ID: 1059863). The funding source had no role in the design of this study and will not have any role during its execution, analyses, interpretation of the data, or decision to submit results.

Availability of data and materials

Not applicable.


Abbreviations

AQoL-8D: Assessment of Quality of Life (8-dimension version)
CACE: Complier average causal effect
CM: Case manager
CPT: Clinical prediction tool
DMC: Data Monitoring Committee
DSM: Diagnostic and Statistical Manual
GAD-7: Generalized Anxiety Disorder scale
GP: General practitioner
iCBT: Internet-based cognitive behavioral therapy
ICER: Incremental cost-effectiveness ratio
MBS: Medicare Benefits Schedule
MHSES: Mental Health Self-Efficacy Scale
Nectar: National eResearch Collaboration Tools and Resources
NPT: Normalization Process Theory
PBS: Pharmaceutical Benefits Scheme
PHQ-2: Patient Health Questionnaire (2-item version)
PHQ-9: Patient Health Questionnaire (9-item version)
QALY: Quality-adjusted life year
RA: Research assistant
RCT: Randomized controlled trial
REDCap: Research Electronic Data Capture
SC: Steering Committee
UC+: Usual care plus Target-D
VicReN: Victorian Primary Care Practice-Based Research Network

Authors’ contributions

JG conceived of the study, initiated the study design, wrote the grant application, and gained funding for the study. JG, CW, SF, AC, and SD contributed to development and drafting of the protocol. JG, CM, KH, PC, SD, GA, EM, VP, and CD all provided input to study design and are grant holders. All authors contributed to refinement of the study protocol and approved the final manuscript. Chief Investigators: Prof Jane M Gunn, A/Prof Cathrine Mihalopoulos, Prof Kelsey Hegarty, Dr Alishia Williams, Prof Leon Sterling, Dr Patty Chondros, Dr Sandra Davidson. Associate Investigators: Prof Gavin Andrews, Dr Victoria Palmer, Prof Elizabeth Murray, Prof Christopher Dowrick, Dr Giles Ambresin, Dr Antonette Mendoza, Prof Frances Griffiths.


Ethics approval and consent to participate

This study protocol has been approved by the Human Research Ethics Committee at the University of Melbourne (ID number 1543648). The Australian Government Department of Human Services Information Services Branch has approved the collection of MBS and PBS data (ID: MI3794). All participants will provide informed consent to participate in the study and separate informed consent to allow the researchers to collect MBS and PBS data.

Consent for publication

Not applicable.

Competing interests

GA heads the Clinical Research Unit for Anxiety and Depression at St Vincent’s Hospital in Sydney, which is home to This Way Up. The rest of the authors have no competing interests to declare.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


1. myCompass users can elect to receive helpful tips, facts, and motivational messages via SMS or email. The program also offers SMS/email reminders to facilitate symptom tracking and completion of homework activities; users can opt in and out of these services as they wish and can choose the frequency and timing of messages.

Electronic supplementary material

The online version of this article (doi:10.1186/s13063-017-2089-y) contains supplementary material, which is available to authorized users.

Contributor Information

Jane Gunn, Email: j.gunn@unimelb.edu.au.

Caroline Wachtler, Email: caroline.wachtler@ki.se.

Susan Fletcher, Email: susanlf@unimelb.edu.au.

Sandra Davidson, Email: sdav@unimelb.edu.au.

Cathrine Mihalopoulos, Email: cathy.mihalopoulos@deakin.edu.au.

Victoria Palmer, Email: v.palmer@unimelb.edu.au.

Kelsey Hegarty, Email: k.hegarty@unimelb.edu.au.

Amy Coe, Email: amy.coe@unimelb.edu.au.

Elizabeth Murray, Email: elizabeth.murray@ucl.ac.uk.

Christopher Dowrick, Email: cfd@liverpool.ac.uk.

Gavin Andrews, Email: gavina@unsw.edu.au.

Patty Chondros, Email: p.chondros@unimelb.edu.au.


