ABSTRACT
This article examines the public reporting of impact, defined as progress towards a charity's mission and long-term objectives, by Canadian charities through their annual reports. The public reporting behaviour of those accredited under Imagine Canada's Standards Program is compared with a matched sample of charities that have not sought accreditation. The objective is to explore whether trust-building activities like public disclosures of impact and third-party accreditation are convergent. The study finds that accreditation status correlates with impact measurement and reporting; both trends are linked to organizational size, and accreditation does not appear to be causing charities to increase their disclosures of impact, which suggests that there may be underlying factors driving both behaviours. These findings generally affirm earlier research that correlates organizational size with impact measurement, adding that the effect is weak.
RÉSUMÉ
Cet article examine comment les associations caritatives canadiennes, dans leurs rapports annuels, rendent compte de leur impact, c'est-à-dire de leur progrès par rapport à leur mission et à leurs objectifs à long terme. Cette étude compare les comptes rendus d'associations accréditées par le Programme de normes d'Imagine Canada avec un échantillon apparié d'associations qui n'ont pas été accréditées. L'objectif est de déterminer s'il y a convergence parmi les démarches entreprises pour gagner la confiance du public telles que l'accréditation par un tiers et la divulgation d'impact. Cette étude observe que les associations accréditées sont plus enclines à mesurer et à divulguer leur impact; que ces deux pratiques sont plus communes dans les grandes associations; et que l'accréditation à elle seule n'entraîne pas forcément les associations caritatives à divulguer leur impact, ce qui suggère que des facteurs sous-jacents sont peut-être responsables des deux pratiques. En général, ces conclusions confirment des recherches antérieures trouvant une corrélation entre la grandeur d'un organisme et le désir de mesurer son impact, bien que ce lien semble être faible.
Keywords / Mots clés Nonprofit self-regulation; Accreditation; Transparency; Impact reporting; Imagine Canada Standards Program / Autorégulation des organismes à but non lucratif; Accréditation; Transparence; Communication d'impact; Programme de normes d'Imagine Canada
INTRODUCTION
A rising chorus of voices in the charitable sector is calling on the media and watchdogs to stop measuring and reporting on the effectiveness of charities using the easy-to-calculate "overhead" ratio of administrative-to-program costs. This financial ratio is misleading and does not correlate well with actual measures of effectiveness or performance (Wetherington & Daniels, 2013). Instead, these watchdogs are being called upon to measure and report on charitable impact, which should result in charities making a bigger, better difference in their communities, and make it easier for them to connect with donors and the public (MacLaughlin, 2016; Maloney, 2012; Morino, 2011; Pallotta, 2013). The chorus reflects public opinion, with 74 percent of Canadians saying they want more information on the impact that charities have (Lasby & Barr, 2013). Some charities have started listening to this chorus and are picking up the tune (GuideStar, 2013).
An important step toward institutionalizing better governance and organizational practices that should, in theory, enhance the performance and impact of charities was the sector's creation of an accreditation program in 2012. The Imagine Canada Standards Program is a rigorous system of accreditation involving self-study and peer assessment on 73 standards, including nine standards requiring boards to review or evaluate parts of their organization's operation, and another nine requiring organizations to publicly disclose specific information on their public website.
The objective of this article is to explore the extent to which these trends are convergent and to understand whether accreditation might be contributing to increased impact reporting. Public reporting of impact by Canadian charities is examined through their annual reports, comparing the reporting behaviour of those accredited under Imagine Canada's Standards Program with a matched sample of charities that have not sought accreditation. The purpose in exploring this question is to understand whether these two trust-building activities are occurring together (or if one leads to the other) and to identify if organizational capacity, measured using both annual revenue and accreditation status as proxies, plays a significant role in charities building trust with the public. The expectation is that participation in an accreditation program would increase public disclosures of impact, in part because the emphasis in the standards on improved governance practices and the regular evaluation of information by organizational boards builds capacities that contribute to effective and integrated evaluative activities in nonprofit organizations (Carman & Fredericks, 2010). Before turning to the empirical analysis, the first step is to establish whether the literature supports this view and whether creating greater transparency has a positive effect on public trust, which is the rationale underpinning accreditation systems.
LITERATURE REVIEW
Trust and transparency
This article begins by examining the notion that public reporting on impact by charities contributes to stronger connections among charities, donors, and the public. Making this link is important because trust building (referred to as "public confidence") is a core goal of the Imagine Canada (2012) Standards Program. For many people, the overall level of trust in a charity correlates with familiarity with charities and their work (Lasby & Barr, 2013); in other words, Canadians who know more, trust more. Unfortunately, only 25 percent of Canadians are "highly familiar with charities and their activities" (Imagine Canada, 2016, n.p.), which means that charities that are concerned with public trust have work to do.
Broadly, trust, transparency, and accountability are closely related, mutually reinforcing concepts for organizations, and perceived transparency correlates with higher levels of trust in an organization (Auger, 2014; Schnackenberg & Tomlinson, 2016). The trust that people have in an organization is determined mostly by competence (the ability to accomplish what it says it will do and a record of successfully doing so) and is affected by the organization's openness to criticism and admission of mistakes (Auger, 2014). Transparency, through an obligation to inform and to explain and justify conduct, is an element of accountability (Bovens, Schillermans, & Goodin, 2014) and is built on dimensions of disclosure, clarity, and accuracy (Schnackenberg & Tomlinson, 2016). The charitable sector in Canada has talked about accountability measures in these same terms, saying "the ultimate goal of accountability is to demonstrate that an organization does good in a good way" (Panel on Accountability and Governance in the Voluntary Sector, 1999, p. 36). Trust in organizations can also be supported, but not driven, by the disclosure of program outcome information on agency websites (Grimmelikhuijsen, 2012).
While general levels of trust in charities in Canada are high, and have been stable for some time (Lasby & Barr, 2013; McKeown & McKechnie, 2000), Canadians would like more information from charities about two things: 1) the programs they provide and their impact, and 2) the cost of fundraising and the use of donations (Lasby & Barr, 2013). Higher self-rated subjective knowledge of the sector correlates with feelings of trust and accountability (Lasby & Barr, 2013) and influences donating and volunteering behaviour more than demographics alone would predict (Bourassa & Stang, 2016).
However, those who give more may also expect more information and more transparency from the charities they support (Bourassa & Stang, 2016), and the returns on meeting this expectation may not be high enough to justify the costs: in some cases, increased charitable accountability does not correlate with increased donations (Berman & Davidson, 2003), including cases where accountability is externally imposed by third-party rating systems (Szper & Prakash, 2011). So, recognizing the costs, charities should avoid pursuing transparency for its own sake and instead use it with intention and purpose as a tool for pursuing other objectives (Tyler, 2013).
While it is clear that the public wants to know more about what charities do for their communities, it is unclear how or if their behaviour will change if they do know more (Szper & Prakash, 2011). Based on the literature, it can be reasonably argued that a likely result, and reasonable objective, of addressing the program/impact information gap should be increased public knowledge leading to greater trust in charities, which, in turn, should contribute to increased donations and volunteerism. Presumably, posting program and impact information on charity websites would be an appropriate way of sharing this information with donors and the public.
Relationship between standards and trust
Imagine Canada (2016), which developed and administers a voluntary standards program for Canadian charities, found that "transparency and sound management are the top considerations when deciding whether to support a charity for 86% of Canadians ... Nearly three quarters (72%), for example, said they were more likely to trust and have confidence in charities that have achieved third-party accreditation. Half indicated that they would be more likely to give to a charity that had achieved rigorous accreditation standards" (n.p.). This mirrors findings from the Netherlands that donors who know about voluntary charitable accreditation systems trust charities more and donate more, especially among people with moderate and higher levels of general social trust (Bekkers, 2003).
Imagine Canada's (2014) standards deal with governance, financial oversight, fundraising, staff management, and volunteer management. They do not address program evaluation, impact, or the public reporting of impact (except by requiring organizations to not lie or make misleading statements). This means that any effect from accreditation on impact reporting behaviour will be indirect and likely connected to the demonstrated willingness of accredited organizations to comply with standards requiring them to evaluate their processes and post information about their organization on their public websites. The public reporting of impact also appears consistent with what principal-agent theory suggests about accountability clubs such as the Imagine Canada standards: they are a means to signal to donors and funders (as principals) that they can trust an organization is using its resources in the manner intended by the principal. They also provide branding benefits to the participating organizations that outweigh the costs of participating in the signalling activity (Prakash & Gugerty, 2010).
Impact measurement
Impact measurement refers to measurable progress that a charity makes towards its mission and long-term objectives, and this matters because charities ought to know whether they are providing a net benefit to their beneficiaries (Sawhill & Williamson, 2001) or, at the very least, not doing more harm than good (Dubner, 2017). It includes a range of measurements, with different levels of rigour, validity, and difficulty, that have emerged and developed over the last 20 years.
The distinct measurement of impact appeared when the United Way of America started using and promoting outcome measurement with the release of its outcome measurement kit in 1996 (Legowski & Albert, 1999; Plantz, Greenway, & Hendricks, 1997). Even though it is important, there is no broad, common understanding of what impact is or how to measure it (Rockefeller Foundation & Goldman Sachs Foundation, 2003).
In some paradigms, impact measurement is not directed at a single decision-maker but at "all major stakeholders who may play a role in maintaining, modifying or eliminating the program [and] these stakeholders in turn should be appropriately informed of the results of the evaluation" (Thompson, 1992, p. S68). This fuzziness is complicated by calls to add co-determination measures to impact measures, where the goals of an organization's beneficiaries are negotiated into the charity's measurements of success (Benjamin & Campbell, 2015; Gripper, Kazimirski, Kenley, McLeod, & Weston, 2017; Legowski & Albert, 1999). There is also some suggestion that the choice of impact measurement methodology can or should vary with project, program, or organizational life-cycle stage (Clark, Rosenzweig, Long, & Olsen, 2004; Preskill, Parkhurst, & Splansky Juster, 2014).
In order to make sense of the broad, inconsistent, and overlapping typologies and models for impact measurement found in the academic and grey literature, Table 1 consolidates the models and definitions into something resembling a cohesive, defined set of measurement categories. To capture as many of the broad and emerging impact measurements as possible, the study looked for the full range of measures in Table 1. Of particular importance: outcome measurement is classified as a sub-type of impact measurement, and output measurement is classified as a sub-type of activity measurement.
Organizational barriers to impact measurement and reporting
Despite what a funder or member of the public may want to know about the impact of a program, the capacity of a charity to meaningfully measure impact is a concern (Benjamin, 2010; Hall, Phillips, Meillat, & Pickering, 2003; Legowski & Albert, 1999; Panel on Accountability and Governance in the Voluntary Sector, 1999; Schmitz, Raggo, & Bruno-van Vijfeijken, 2012). Measurement is an activity that costs money and time, and charities of any size typically do not have a lot of either, as spending on administration is proportionately consistent regardless of size (van der Heijden, 2013).
A number of hazards face charities that report on impact measures, and these may be creating barriers to measurement: funding shifting to easier-to-measure programs or programs with more appealing outcomes; compromised privacy for beneficiaries and staff; adjusting, cherry-picking, or fudging reported data to make the impact seem better than it is; and additional costs that competing organizations may not have to incur (Legowski & Albert, 1999). Possibly due in part to these barriers, most measurement and reporting seems to be happening in the realm of inputs, processes, and outputs (Legowski & Albert, 1999), which is easier to do but less indicative of impacts. Complicating these barriers is the finding that charities are unclear on the definitions of outputs and outcomes, which may lead them to incorrectly believe they are measuring impact when they are actually measuring activities (Hall, Phillips, Meillat, & Pickering, 2003).
Additionally, for religious charities that operate as houses of worship, the organizations may believe that they do not provide tangible services and do not have measurable objectives, making impact measurement and reporting by these charities not meaningful (Hackett, 2016).
Organizations that are measuring impact
Generally, about two-thirds of charities in the United States and Canada believe they are measuring some form of impact. A 2003 self-reported survey of 1,965 randomly selected Canadian charities (Hall et al., 2003) found that 76 percent measured activities/outputs, 66 percent measured outcomes/impact, 65 percent measured satisfaction, and 54 percent measured financial costs. In interviews with leaders of United States-based transnational charities, 42.8 percent of leaders reported that their organizations were evaluating projects and programs as part of their accountability programs (Schmitz, Raggo, & Bruno-van Vijfeijken, 2012). A survey of organizations in the United States found 95 percent measuring outputs, 68 percent measuring outcomes, and 87 percent measuring satisfaction (Salamon, Geller, & Mengel, 2010). In a 2017 survey, 65 percent of responding Canadian foundations used some form of evaluation for internal learning (Philanthropic Foundations Canada, 2017). In all of these cases, results were self-reported, likely leading to overstated positive results. In many cases, data were collected from a sub-sector or a set of organizations with greater capacity, which may limit their generalizability to the charitable sector as a whole.
Public disclosure of impact is much less prevalent than measurement. Only 28.3 percent of United States-based transnational charities were publicly disclosing information as an accountability practice (Schmitz et al., 2012), only 17 percent of Canadian foundations share the results of their evaluations externally (Philanthropic Foundations Canada, 2017), and for American foundations, "[only a] small minority of organizations ... seemed to make a concerted effort to build trust and allay donor concerns through extensive efforts at transparency and voluntary disclosure" (Saxton & Guo, 2011, p. 287).
For both measurement and the disclosure of impact, size matters. In Canada, organizations with larger revenue were more likely to conduct evaluations (Hall et al., 2003). A review of 117 community foundation websites in the United States found that online disclosure and dialogue accountability activities correlated with foundation size, as measured by the value of assets (Saxton & Guo, 2011). Interestingly, a study of Taiwanese medical organizations found that the voluntary public disclosure of financial results correlates with smaller institutional size (Saxton, Kuo, & Ho, 2012). Organizational size having the opposite effect in Taiwan from what other studies have found in predominantly Anglo contexts may indicate that there is some interaction between cultural context, size, and transparency practices.
Recently, shared measurement, where organizations with similar programs are sharing metrics, using common tools, and sometimes pooling their findings, seems to have become an emerging theme (Gripper et al., 2017). This practice may lead to an increase in the number of organizations doing impact measurement by reducing the administrative burden for single organizations and increasing disclosure and the sharing of results within networks.
Impact reporting in annual reports
The next thing to consider is: where might a member of the public or a potential donor expect to find information on a charity's impact? First, organizations accredited by Imagine Canada (2014) are required to produce and post annual reports on their website under standard B10. Second, many non-accredited organizations produce annual reports as a way to communicate results to their donors, partners, and other stakeholders. Third, a Canadian guide to annual reports has been produced that encourages organizations to use part of their annual report to "explain how the activities of the past year relate to the organization's strategy. Performance measures can be used to define and measure an organization's progress towards achieving its goals" (Chartered Professional Accountants Canada, 2011, p. 13), which is an idea that is gaining traction with regulators in similar contexts (Australian Accounting Standards Board, 2015; Spencer, 2015; Tyler, 2013). Fourth, 74 percent of Canadian charities use evaluation information to increase awareness of their cause to at least a moderate extent, and 52 percent use it for fundraising (Hall et al., 2003). Canadian charities have indicated that an annual report is the easiest place to share their impact measurement results (Panel on Accountability and Governance in the Voluntary Sector, 1999). Therefore, a reasonably likely place where a member of the public or a donor could find information about an organization's measured impact is in its annual report.
HYPOTHESES
In order to explore whether trust-building activities like public disclosures of impact and third-party accreditation are convergent, three hypotheses (described below) are proposed. These are intended to test convergence in three distinct, but closely related ways: whether the accreditation status of charities matters; if accreditation matters, how the rate of disclosures compares to what previous studies have found; and the effect of organizational size on disclosure behaviour. Charities come in a wide range of sizes and capacities. Those that are accredited by Imagine Canada have shown a capacity and interest in improving their governance and management practices, regardless of their size (the smallest accredited charity had an annual revenue of less than $45,000 in 2015). Based on demonstrated capacity, it is possible that these accredited charities are more likely to measure and publicly disclose their impact.
Hypothesis 1 Imagine Canada accreditation will correlate with a greater public disclosure of program impact through annual reports than occurs among Canadian non-religious charities in general.
Other findings of rates of impact measurement (Hall et al., 2003; Philanthropic Foundations Canada, 2017; Salamon et al., 2010; Schmitz et al., 2012) were based on self-reported impact measurement activity, which would tend to be overstated. Additionally, organizations are more likely to report positive results, which may lead to public under-reporting of impact measurement activities that are neutral or negative.
Hypothesis 2 Imagine Canada accreditation will correlate with rates of public disclosure of program impact that are less than the rates of impact measurement found in earlier self-reported studies.
Despite having the demonstrated capacity to gain accreditation, smaller organizations still have fewer resources and capacity than larger organizations.
Hypothesis 3 Within accredited organizations, larger size, as measured by annual revenue, will correlate with more rigorously measuring and publicly disclosing program impact.
METHODOLOGY
Quantitative methods using nominal-level variables were used to examine how charities' reporting behaviour varies with their accreditation status and size by revenue. Data were collected from the Canada Revenue Agency (CRA) website (including the publicly available portions of T3010 filings), from charity websites, and by reviewing charity annual reports.
Two lists were created: the population of Imagine Canada-accredited organizations as of August 15, 2017, and a control sample of 381 random non-religious Canadian registered charities from a population of 53,313 for a confidence interval of +/- five percent, 19 times out of 20.
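The article does not state how the control sample size of 381 was derived; a plausible reconstruction, assuming the standard Cochran formula for a proportion with a finite-population correction (95 percent confidence, ±5 percent margin of error, and a conservative assumed proportion of 0.5), is sketched below.

```python
def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Cochran's sample-size formula with finite-population correction.

    z = 1.96 corresponds to 95% confidence ("19 times out of 20");
    p = 0.5 is the most conservative assumed proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size (~384)
    n = n0 / (1 + (n0 - 1) / population)       # finite-population correction
    return round(n)

print(sample_size(53_313))  # prints 381
```

Under these assumptions, a population of 53,313 yields the 381-charity sample used in the study.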
To create the Imagine Canada accredited list:
1. The list of accredited organizations was pulled from the Imagine Canada website.
2. The website of each organization was visited to pull its business number.
3. The business number was compared to the CRA list to determine the registered charitable category.
4. The annual revenue (line 4700) and expenses (line 5100) were copied from their 2015 T3010 data on the CRA website.
5. Each organization's posted annual report was reviewed following the process outlined below.
Four accredited organizations are nonprofits that are not registered as charities, and they were excluded from data collection and analysis. Twenty-one accredited charities (including fifteen YMCAs and two YWCAs) fall under religious categories, and they were excluded from analysis.
A listing of all active registered non-religious Canadian charities in 2015, including their websites and categories, was obtained from the CRA. Religious charities were excluded because they make up a large portion of charities in Canada (mostly local places of worship) but only a small number of Imagine Canada-accredited organizations, and because they are unlikely to report impact (Hackett, 2016).
To create the control sample of 381 random non-religious Canadian registered charities:
6 A random number function was used to assign a number to each of the 53,313 active registered non-religious Canadian charities in 2015.
7 The random numbers were sorted from largest to smallest, and the largest 381 numbers were selected for the sample.
8 Each organization's annual revenue (line 4700) and expenses (line 5100) were copied from their 2015 T3010 data on the CRA website.
9 Posted annual reports were reviewed following the process outlined below.
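The random-number-and-sort procedure in steps 6 and 7 is equivalent to drawing a simple random sample without replacement. A minimal sketch (the charity records here are placeholders, not the CRA data):

```python
import random

charities = [f"charity_{i}" for i in range(53_313)]  # placeholder records

# Steps 6-7: assign each charity a random number, sort the numbers
# from largest to smallest, and keep the 381 charities with the
# largest numbers.
keyed = [(random.random(), c) for c in charities]
keyed.sort(reverse=True)
sample = [c for _, c in keyed[:381]]

# Equivalent one-liner: sample = random.sample(charities, 381)
```

Because each charity receives an independent uniform random key, every subset of 381 charities is equally likely to be selected.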
The comparison year was selected to be 2015 because it was the most recent year for which complete information was likely to be available. Charities have up to six months from their year-end to file their T3010s with the Canada Revenue Agency, and it can take some time after filing for their information to become publicly available. The data for 2016 were not yet available for all charities at the time of writing.
In order to simulate the experience of a typical casual donor or member of the public, only publicly available information was used, and no private, confidential, or by-request-only information was requested, accessed, or referenced. Only the website listed on either the Imagine Canada site or in the CRA data table was accessed; no internet searches were performed to find unlisted or changed websites. At no time were charities informed of data collection.
Locating annual reports on websites was done by browsing the navigation menu; by using a site's internal search function; and through Google-powered site searches using the terms "annual" and "report" (English sites) or "annuel" and "rapport" (French sites). For both accredited and control charities, an annual report labelled 2015 was used whenever possible. In situations where an organization labelled annual reports using the fiscal year (e.g., 2014-2015 or 2015-2016) the 2014-2015 annual report was used. If a report labelled "annual report" could not be found, reports with the same content but different titles were used (impact report, accountability report, report to community, report to donors, victory report, etc.). In a few cases, only one annual report was available from an adjacent year (2014 or 2016) on the website and that report was used. In instances where no annual report from any period at least partially covering 2014, 2015, or 2016 was readily accessible, the charity was rated as not publicly disclosing any measurements.
Annual reports were reviewed against the Table 1 categories of impact measurement. Because information in annual reports was highly variable, an "at least one" threshold was used for each rating; if at least one measurement of a type was observed, then the organization was rated as reporting on that measure. This was necessary because complete information on the number, scope, and intended impact of charities' programs was unavailable, so it was not possible to assess the extent to which published information captured the organization's activities or reflected its performance against intended results.
Reporting on activities that generated inputs (most frequently fundraising and volunteer programs that supported the operations of the charity) was not rated as an impact measurement. This is a rejection of Barbara Legowski and Terry Albert's (1999) classification of donor-base diversity, sustainability, and growth as outcomes in fundraising organizations, in light of more recent writing that would classify these as measures of organizational capacity (Sawhill & Williamson, 2001) and that states, for example, that "the effectiveness of [foundations] depends substantially on the performance of grant recipients" (Tyler, 2013, p. 77).
Some utilization-rate measures, such as program wait times, the length of wait lists, average treatment times, and vacancy rates, could not be classified using the Table 1 categories and were generally excluded from this study. In the case of grant-making organizations, including hospital foundations, reports on granting activity and the purchases of assets for donees were rated as output measurement. Quantitative reports on how a grant was used to support an activity by a recipient were rated as outcome measurement.
The use of testimonials was very common in annual reports, but it was not included in the measurement typology, as it did not include a quantifiable measurement. The exception was where significant research, advocacy, or litigation activities were reported together with the resulting change in legislation, regulation, policy, or practice, in which case this was considered to be both output and outcome measurement (Organizational Research Services, 2007); this is an affirmation of Legowski and Albert's (1999) classification of policy changes and issues as outcomes in advocacy organizations and their classification of changes in knowledge, attitudes, and behaviour as outcomes in prevention/promotion/education organizations.
Because this was a review of publicly reported impact measurement, journalistic work appearing in annual reports-reporting on trends, statistics, progress, or activities within a sub-sector of the organization that was not clearly linked to its own activities-was not counted as reporting impact. Unless evidence of measurement was given, general statements of intended, expected, assumed, or future change or benefit were not rated as reporting impact. Where a consolidated annual report for two related charities (usually an operating charity and a foundation) was the only report available, and where clear reporting on impact measures for the accredited or sample charity was not included, the accredited or sample charity was not rated for reporting impact.
The presentation of financial information was highly varied between annual reports. If only percentages were provided, then the organization was rated as providing proportional financial measures. If, however, enough absolute information was provided to allow a reader to calculate a dollar value from a percentage, then the organization was rated as providing financial accountability metrics. A frequent presentation was a donut diagram with percentages by category and a total revenue or expense number appearing adjacent; given that this presentation appears to be an informal standard, it was rated as financial accountability metrics as opposed to a key performance indicator or dashboard. Where a percentage was clearly labelled as a specific ratio (such as overhead ratio or cost to raise a dollar) the organization was rated as providing key performance indicators.
Information in the notes of financial statements, on other website pages, or in marketing and solicitation materials was not reviewed. Where financial statements were available in a separate document or file from the annual report, the financial statements were not reviewed or rated.
Data coding and analysis were done in Microsoft Excel 2010 using the Real Statistics Resource Pack release 5.2 (Zaiontz, 2017). Chi-squared tests were conducted because of the categorical nature of the coded data. Cramer's V was calculated to determine effect size because more than two categories were needed to accommodate the wide variety of observed reporting behaviour.
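The study's statistics were computed in Excel, but the two measures it relies on are easy to reproduce. As an illustration only (the contingency table below is invented, not the study's data), Pearson's chi-squared statistic and Cramer's V for a contingency table can be computed as follows:

```python
import math

def chi_squared(table: list[list[float]]) -> float:
    """Pearson's chi-squared statistic for an observed contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2

def cramers_v(table: list[list[float]]) -> float:
    """Effect size for a chi-squared test; 0 = no association, 1 = perfect."""
    n = sum(sum(row) for row in table)
    k = min(len(table), len(table[0])) - 1  # degrees of freedom factor
    return math.sqrt(chi_squared(table) / (n * k))

# Hypothetical 2x2 table: rows = reports impact / does not;
# columns = two revenue bands.
table = [[10, 20], [20, 10]]
print(round(cramers_v(table), 4))  # prints 0.3333
```

By the conventions used in the article, a V near 0.27 (Tables 3 and 4) is a weak effect and a V of 0.73 (Table 5) is a strong one.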
FINDINGS
Hypothesis 1
Imagine Canada accreditation will correlate with a greater public disclosure of program impact through annual reports than occurs among Canadian non-religious charities in general.
Table 2 provides descriptive statistics for both the control and accredited samples and their reporting behaviours. With 47.72 percent of accredited charities reporting impact compared to 4.20 percent of charities in general, support for this hypothesis might appear strong on the surface. However, accredited charities tend to be much larger than charities in general, with the median size of an accredited charity being well beyond the extreme outlier range for non-religious charities in general. To account for size, the samples were both trimmed to the control sample's inter-quartile range for size by revenue (both to remove very small and large charities from the control sample and to remove very large charities from the accredited population). The trimmed samples produce a difference of 5.35 percent in impact reporting behaviour, which is just outside the calculated margin of error for the control sample as a whole, and is likely within the margin of error for the trimmed sample.
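The trimming step described above amounts to filtering both samples to the control sample's interquartile range of revenue. A minimal sketch, using invented revenue figures rather than the study's data:

```python
import statistics

# Hypothetical annual revenues (dollars) for illustration only
control_revenue = [40_000, 75_000, 120_000, 260_000, 480_000, 900_000, 2_500_000]
accredited_revenue = [150_000, 600_000, 3_000_000, 45_000_000]

# Quartile boundaries of the control sample's revenue distribution
q1, _, q3 = statistics.quantiles(control_revenue, n=4)

def trim(revenues, lo, hi):
    """Keep only organizations whose revenue falls within [lo, hi]."""
    return [r for r in revenues if lo <= r <= hi]

# Both samples are trimmed to the control sample's inter-quartile range
trimmed_control = trim(control_revenue, q1, q3)
trimmed_accredited = trim(accredited_revenue, q1, q3)
```

Trimming both samples to the same revenue band makes the subsequent comparison of reporting rates less confounded by the accredited charities' much larger median size.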
In order to explore the role of organization size for both the control sample and accredited charities, a chi-squared test with a Cramer's V coefficient was done. This shows a significant correlation (control p value < 0.01; accredited p value < 0.01) with a weak effect size (control Cramer's V = 0.2695; accredited Cramer's V = 0.2664) between an organization's public reporting of impact and its size by revenue (see Tables 3 and 4). Interestingly, the effect size is quite close between samples, which suggests that the proportionate effect of revenue on public reporting might remain stable as organizations grow.
In order to control for organizational size, an additional chi-squared test with a Cramer's V coefficient was done for just those organizations in the fourth quartile of the control sample in Table 2, as this is where the gap in behaviour between the control charities and the accredited charities appears to widen. Table 5 demonstrates that accreditation has a significant (p value < 0.01) and strong (Cramer's V = 0.7311) effect on reporting behaviour for the largest charities.
To take a rough look at whether it is accreditation itself or some other factor that might be causing the difference in reporting behaviour between large control charities and accredited charities, Table 6 examines the correlation between accreditation status and public reporting. A chi-squared test was done after splitting the accredited charities into two samples by accreditation date: those accredited prior to December 31, 2015, representing charities that were either accredited or substantially through the accreditation process at the time their annual report was published, and those accredited from January 1, 2016, onward, whose annual reports were unlikely to have been directly influenced by accreditation requirements. There is no significant difference (p = 0.33) between those charities that were accredited (or substantially through the accreditation process) and those accredited after the period under study. This suggests that while accreditation and public reporting correlate, accreditation itself may not be a factor in an organization's public reporting, and there may be some other factor underlying both accreditation and reporting behaviours.
Given all of this, Hypothesis 1 is supported for charities with annual revenues of greater than $512,159, but not for smaller organizations. The determinants of the correlation are not clear from this study.
Hypothesis 2
Imagine Canada accreditation will correlate with rates of public disclosure of program impact that are less than the rates of impact measurement found in earlier self-reported studies.
As shown by the observed rates of impact reporting by sample and organization size (see Table 7), program outcome measurement is greater for mid-sized (second and third quartile) accredited charities than in two of four earlier studies, and greater for the largest (fourth quartile) accredited charities than in any earlier study. Program output reporting is greater for all but the smallest (second quartile and larger) accredited charities than in any earlier study. Financial accountability measures are greater for second-quartile charities than in two of three past studies and greater for larger-than-median charities than in any past study. These three areas of measurement (outcomes, outputs, and financial accountability measures) are also the most frequently reported measures in the annual reports of accredited charities. As a result, Hypothesis 2 is unsupported, as it appears that accredited charities are engaging in some forms of reporting at rates higher than in self-reported studies. However, similar to Hypothesis 1, this is more the case for larger charities than for smaller ones.
Hypothesis 3
Within accredited organizations, size as measured by annual revenue will correlate with more rigorously measuring and publicly disclosing program impact.
Referring to Table 7, the public reporting of almost all measurements increases with the size of the organization (12 of 14 measures increase from quartile one to quartile two, eight of 14 from quartile two to quartile three, and 10 of 14 from quartile three to quartile four). Decreases in rates occur largely among third-quartile accredited charities in measurements of capacity and activity, while their rates of financial accountability continue to increase and their impact measures increase moderately. Possibly, this is around the size where organizations have the capacity to shift their organizational narrative from one based on activity measures to one based on impact measures, or where they are able to become more sophisticated in their financial management. This connects back to the results in Table 4, discussed in connection with Hypothesis 1, which show a significant correlation between size and public disclosure with a weak effect size.
Oddly, even though client satisfaction surveys are among the easiest methods of evaluating impact and are quite common (Hall et al., 2003; Salamon et al., 2010), Canadian organizations that use them rarely report the results publicly. Overall, there appears to be support for Hypothesis 3, with the caveat that the differences between quartile-two and quartile-three accredited organizations might be worth a further look in the future.
DISCUSSION
Returning to the question of the extent to which the trends of accreditation and impact reporting are convergent and whether accreditation contributes to increased impact reporting, the answer seems to be that accreditation and impact reporting converge as organizational size (measured by revenue) increases, but that it does so because of an unknown factor, not because of accreditation itself. From these findings, a general conclusion can be drawn that capacity matters, with organizational revenue being one element of capacity. Other, undetermined, elements of capacity for measurement and disclosure are likely more present in Imagine Canada-accredited charities than in non-religious charities in general, with accreditation being a consequence of those elements.
The most common publicly reported impact measures are output measurements, which are generally insufficient for describing impact. This may be an ongoing consequence of the confusion between outcomes and outputs. The next most common are outcome measures, with just 46.19 percent of Canadian charities with a demonstrated capacity and interest in improving their management and governance practices reporting on outcomes. By comparison, only 4.20 percent of the control sample did any impact reporting.
These findings generally affirm earlier research that correlates organizational size (measured by revenue) with impact measurement, adding that the effect is weak. The observed differences between accredited charities, not-yet-accredited charities, and non-accredited charities also suggest that there may be additional factors underlying the public reporting of impact. The common factor between the public reporting of impact and Imagine Canada accreditation may be organizational paradigms that see evaluation as an activity with a broad audience that influences the opinions of widely defined stakeholder groups. Intuitively, it seems logical that both are products of organizational cultures, philosophies, and systems that are attentive to and seek to manage public reputation.
These findings are an outside review of behaviour using large samples, including a simple random control sample, instead of the self-reported information collected from a smaller sample frame. This study also applied consistent definitions to ratings of whether organizations were reporting impact, instead of relying on the subjective, internal understandings of research subjects.
CONCLUSIONS
This study did not measure trust directly; rather, it looked at accreditation and information-sharing behaviours that others have shown are likely to increase trust in charities. In general, Canadian charities are not sharing the impact information that the public says it wants to see and could use to evaluate them. This may be because of the costs and barriers associated with measurement, or it may be because measurement is happening but the results are being withheld. Either way, it is a missed opportunity to build trust with the public. Counter to this, accredited organizations, especially the larger ones, are taking advantage of this opportunity and are publicly reporting on their impacts at levels much higher than Canadian charities in general and at rates higher than previous self-reported studies have found. Accreditation does not, however, appear to be causing this behaviour. While accreditation may be achieving its other goals of improving the efficiency, effectiveness, and transparency of management and governance in Canadian charities, it is not appropriate to say that it leads to greater public reporting of impact. There is undoubtedly a correlation between the public disclosure of impact and accreditation, but the relationship between the two is unlikely to be causal, and its determinants still need to be found. To further explore the correlation between accreditation and public reporting, longitudinal data should be gathered to determine if one generally precedes the other; the results of this study suggest that public reporting of impact likely precedes accreditation. Future research could also examine accreditation and public reporting in light of paradigmatic or cultural models.
Other questions raised for future researchers include: What specific capacities help organizations do impact reporting and seek accreditation, and do these vary by size or sub-sector? Are charities doing measurement that they are choosing not to report on, and if so, what is keeping them from sharing their results? And is impact reporting really all that the sector expects it to be? Do organizations that engage in more reporting generate more support because of it?
Limitations
Ratings were subjective and done by a single person; while attempts were made to be consistent, it is possible that ratings were consistently biased for or against certain types of measurements or presentations of information. Because an "at least one" threshold was used, for some charities a rating reflected just one measure of one program among many, so the relatively high percentage of accredited charities reporting outcome measures (46.19 percent), for example, should not be taken as an indication that that many organizations are making wide use of outcome measurement. There were no checks on the quality or validity of reported impact; organizations were taken at their word that they had achieved the results they reported. This study assumed a positivist, qualitative approach to impact measurement and reporting. Many organizations used feedback statements, testimonials, and profiles in their annual reports to illustrate impact; given the wide variety of formats, content, and context, these could not be consistently rated as reports of impact and were ignored.
Sampling and data collection were affected by the CRA charities listings in a number of ways: the list supplied by the CRA for 2015 included ten charities registered sometime after that tax year; data supplied by charities to the CRA was incomplete; and registered charity categories appear to relate more to purpose at time of registration than to current activities, so segmenting based on CRA category was not as useful as initially thought, and a planned analysis segmented by registered category was abandoned.
While Imagine Canada accredited charities were required to have at least the three most recent years of annual reports on their websites at the time of accreditation to be compliant with "Standard B10" (Imagine Canada, 2014, p. 7), sixteen did not have them available at the time of this study, which is likely the result of compliance slippage.
ACKNOWLEDGEMENTS
I gratefully acknowledge: the guidance and support of Susan Phillips of Carleton University in conducting this research and preparing the manuscript; feedback and questions from ANSER 2018 conference attendees where a draft version of this paper was presented; and the thoughtful feedback of two anonymous peer reviewers.
WEBSITE
Canada Revenue Agency, https://www.canada.ca/en/revenue-agency/services/charities-giving/charities-listings.html
ABOUT THE AUTHOR / L'AUTEUR
Christopher Nicholas Dougherty is a Master of Philanthropy and Nonprofit Leadership graduate student from Carleton University, an Analyst at Shock Trauma Air Rescue Service (STARS) Foundation, and a volunteer peer reviewer for the Imagine Canada Standards Program. Email: [email protected]
REFERENCES
Australian Accounting Standards Board. (2015). Exposure draft (ED) 270: Reporting service performance information. Melbourne, AU: Commonwealth of Australia.
Auger, G.A. (2014). Trust me, trust me not: An experimental analysis of the effect of transparency on organizations. Journal of Public Relations Research, 26(4), 325-343.
Bekkers, R. (2003, December). Trust, accreditation, and philanthropy in the Netherlands. Nonprofit and Voluntary Sector Quarterly, 32(4), 596-615.
Benjamin, L.M. (2010). Funders as principals: Performance measurement in philanthropic relationships. Nonprofit Management & Leadership, 20(4), 383-403.
Benjamin, L.M. (2012). Nonprofit organizations and outcome measurement: From tracking program activities to focusing on frontline work. American Journal of Evaluation, 33(3), 431-447.
Benjamin, L.M., & Campbell, D.C. (2015). Nonprofit performance: Accounting for the agency of clients. Nonprofit and Voluntary Sector Quarterly, 44(5), 988-1006.
Berman, G., & Davidson, S. (2003, December). Do donors care? Some Australian evidence. Voluntas: International Journal of Voluntary and Nonprofit Organizations, 14(4), 421-429.
Bourassa, M.A., & Stang, A.C. (2016). Knowledge is power: Why public knowledge matters to charities. International Journal of Nonprofit and Voluntary Sector Marketing, 21(1), 13-30.
Bovens, M., Schillemans, T., & Goodin, R. (2014). Public accountability. In M. Bovens, T. Schillemans, & R. Goodin (Eds.), The Oxford handbook of public accountability (pp. 1-22). Oxford, UK: Oxford University Press.
Brinkerhoff, D.W. (2001). Taking account of accountability: A conceptual overview and strategic options. Washington, DC: U.S. Agency for International Development.
Candler, G., & Dumont, G. (2010, June). A non-profit accountability framework. Canadian Public Administration, 53(2), 259-279.
Carman, J.G., & Fredericks, K.A. (2010). Evaluation capacity and nonprofit organizations: Is the glass half-empty or half-full? American Journal of Evaluation, 31(1), 84-104.
Chartered Professional Accountants Canada. (2011). Improved annual reporting by not-for-profit organizations. Toronto, ON: Chartered Professional Accountants Canada.
Clark, C., Rosenzweig, W., Long, D., & Olsen, S. (2004). Double bottom line project report: Assessing social impact in double bottom line ventures: Methods catalog. New York, NY: The Rockefeller Foundation.
Dubner, S.J. (2017, July 12). When helping hurts. Freakonomics. URL: http://freakonomics.com/podcast/when-helping-hurts/ [December 17, 2017].
Fenton, J.J., Jerant, A., Bertakis, K.D., & Franks, P. (2012, February). The cost of satisfaction: A national study of patient satisfaction, health care utilization, expenditures, and mortality. Archives of Internal Medicine, 172(5), 405-411.
Grimmelikhuijsen, S. (2012). Linking transparency, knowledge and citizen trust in government: An experiment. International Review of Administrative Sciences, 78(1), 50-73.
Gripper, R., Kazimirski, A., Kenley, A., McLeod, R., & Weston, A. (2017). Global innovations in measurement and evaluation. London, UK: New Philanthropy Capital.
GuideStar. (2013, June 17). BBB Wise Giving Alliance, Charity Navigator, and GuideStar join forces to dispel the charity "overhead myth." URL: https://learn.guidestar.org/news/news-releases/2013/2013-06-17-overhead-myth
Hackett, S.P. (2016, April 29). Reporting service performance information exposure draft -AASB ED270. Canberra, AU: Australian Catholic Bishops Conference General Secretariat.
Hall, M.H., Phillips, S.D., Meillat, C., & Pickering, D. (2003). Assessing performance: Evaluation practices & perspectives in Canada's voluntary sector. Toronto, ON: Canadian Centre for Philanthropy.
Imagine Canada. (2012). Strengthening public confidence in Canada's charitable sector: Overview of Imagine Canada's new standards program. URL: http://www.imaginecanada.ca/sites/default/files/www/en/standards/standardsprogram_overview_benefits_april2012.pdf [December 17, 2017].
Imagine Canada. (2014). Standards program for Canada's charities & nonprofits. Toronto, ON: Imagine Canada.
Imagine Canada. (2016, November 15). Greater transparency would spur charitable donations: Survey. URL: http://www.imaginecanada.ca/who-we-are/whats-new/news/greater-transparency-would-spur-charitable-donations-survey [December 17, 2017].
Lasby, D., & Barr, C. (2013). Talking about charities 2013: Canadians' opinions on charities and issues affecting charities. Edmonton, AB: The Muttart Foundation.
Legowski, B., & Albert, T. (1999, October). A discussion paper on outcomes and measurement in the voluntary health sector in Canada. Voluntary Health Sector Project. URL: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.552.9525&rep=rep1&type=pdf [April 8, 2019].
MacLaughlin, S. (2016). Data driven nonprofits. Glasgow, UK: Saltire Press.
Maloney, D. (2012). The mission myth. San Diego, CA: Business Solutions Press.
McKeown, L., & McKechnie, A.-J. (2000, Autumn). Trust, accountability and support for charities: The views of Canadians. Canadian Centre for Philanthropy Research Bulletin, 7(4), 1-9.
Morino, M. (2011). Leap of reason: Managing to outcomes in an era of scarcity. Washington, DC: Venture Philanthropy Partners.
Muir, K., & Bennet, S. (2014). The compass: Your guide to social impact measurement. Sydney, AU: The Centre for Social Impact.
Organizational Research Services. (2007). A guide to measuring advocacy and policy. Baltimore, MD: Annie E. Casey Foundation.
Pallotta, D. (2013, March 11). The way we think about charity is dead wrong. TED. URL: https://www.ted.com/talks/dan_pallotta_the_way_we_think_about_charity_is_dead_wrong/details [December 17, 2017].
Panel on Accountability and Governance in the Voluntary Sector. (1999, February). Building on strength: Improving governance and accountability in Canada's voluntary sector. URL: http://sectorsource.ca/sites/default/files/resources/files/2458_Book.pdf
Philanthropic Foundations Canada. (2017). A portrait of Canadian foundation philanthropy. Montréal, QC: Philanthropic Foundations Canada.
Plantz, M.C., Greenway, M.T., & Hendricks, M. (1997, Fall). Outcome measurement: Showing results in the nonprofit sector. New Directions for Evaluation, (75), 15-30.
Prakash, A., & Gugerty, M.K. (2010). Trust but verify? Voluntary regulation programs in the nonprofit sector. Regulation & Governance, 4, 22-47.
Preskill, H., Parkhurst, M., & Splansky Juster, J. (2014). Guide to evaluating collective impact 02: Assessing progress and impact. San Francisco, CA: Collective Impact Forum.
Rockefeller Foundation, & Goldman Sachs Foundation. (2003). Social impact assessment: A discussion among grantmakers. New York, NY: The Goldman Sachs Foundation and The Rockefeller Foundation.
Salamon, L.M., Geller, S.L., & Mengel, K.L. (2010). Communiqué No. 17: Nonprofits, innovation, and performance measurement: Separating fact from fiction. Baltimore, MD: Johns Hopkins University.
Sawhill, J., & Williamson, D. (2001, May). Measuring what matters in nonprofits. McKinsey Quarterly, 1-11.
Saxton, G.D., & Guo, C. (2011). Accountability online: Understanding the web-based accountability practices of nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 40(2), 270-295.
Saxton, G. D., Kuo, J.-S., & Ho, Y.-C. (2012). The determinants of voluntary financial disclosure by nonprofit organizations. Nonprofit and Voluntary Sector Quarterly, 41(6), 1051-1071.
Schmitz, H.P., Raggo, P., & Bruno-van Vijfeijken, T. (2012). Accountability of transnational NGOs: Aspirations vs. practice. Nonprofit and Voluntary Sector Quarterly, 41(6), 1175-1194.
Schnackenberg, A.K., & Tomlinson, E.C. (2016, November). Organizational transparency: A new perspective on managing trust in organization-stakeholder relationships. Journal of Management, 42(7), 1784-1810.
Spencer, J. (2015). Service performance reporting - Closing the performance reporting gap for NFP entities. Sydney, AU: Chartered Accountants Australia New Zealand.
Szper, R., & Prakash, A. (2011). Charity watchdogs and the limits of information-based regulation. Voluntas, 22, 112-141.
Thompson, J.C. (1992, March/April). Program evaluation within a health promotion framework. Canadian Journal of Public Health, 83 (Supplement 1: Health Promotion Research Methods: Expanding the Repertoire), S67-S71.
Tuan, M.T. (2008). Measuring and/or estimating social value creation: Insights into eight integrated cost approaches. Seattle, WA: Bill & Melinda Gates Foundation.
Tyler, J. (2013). Transparency in philanthropy: An analysis of accountability, fallacy, and volunteerism. Washington, DC: The Philanthropy Roundtable.
van der Heijden, H. (2013). Small is beautiful? Financial efficiency of small fundraising charities. The British Accounting Review, 45(1), 50-57.
Wetherington, J.M., & Daniels, M.K. (2013). The relationship between learning organization dimensions and performance in the nonprofit sector. Journal for Nonprofit Management, 90-107.
Zaiontz, C. (2017). Real statistics resource pack release 5.2 [Computer software]. URL: http://www.real-statistics.com/free-download/real-statistics-resource-pack/ [August 10, 2017].
© 2019. This work is published under https://creativecommons.org/licenses/by-nc-nd/2.5/ca/ (the "License").