Purpose
Fierce competition in the crowdfunding market has resulted in high failure rates. Given the dedication and effort already invested, many founders relaunch failed campaigns in a second attempt. Despite its growing prevalence, the success of campaign relaunches remains under-researched. To fill this gap, this study first theorizes how founders’ learning may enhance their competencies and influence investors’ attributions of entrepreneurial failure. It then empirically documents the extent to which, and the conditions under which, such learning efforts affect campaign relaunch performance.
Design/methodology/approach
This study examines 5,798 relaunched Kickstarter campaigns. Founders’ learning efforts are empirically captured by key changes in campaign design that deviate from past business practice. The word mover’s distance and a perceptual hashing algorithm (pHash) are used to measure differences in campaign textual descriptions and pictorial designs, respectively.
Findings
Differences in textual descriptions and pictorial designs during campaign failure–relaunch are positively associated with campaign relaunch success. The impacts are further amplified when the previous failures are more severe.
Originality/value
This study is one of the first to examine the success of a campaign relaunch after an initial failure. This study contributes to a better understanding of founders’ learning in crowdfunding contexts and provides insights into the strategies founders can adopt to reap performance benefits.
1. Introduction
Crowdfunding has become one of the most promising approaches for new ventures to incubate creative ideas and serves as a useful complement to conventional financial channels (Strausz, 2017). The most recognized crowdfunding model, and the one of interest in this study, is reward-based crowdfunding (Belleflamme et al., 2014; Mollick, 2014). In this transaction model, investors receive a reward (usually a product or service) for backing a campaign, rather than a monetary return. Kickstarter, the leading reward-based crowdfunding platform in the world, raised over 7.3 billion dollars from over 22 million backers as of May 2023, and this number continues to increase (Kickstarter, 2023).
Intense competition among entrepreneurs entering crowdfunding markets has led to high campaign failure rates (de Larrea et al., 2019; Mollick, 2014). For example, Kickstarter reported a success rate of less than 40%, indicating that more than 60% of its campaigns failed to reach their fundraising goals. Having already devoted considerable time and effort, many founders who experience initial failure remain motivated to bring their creative ideas back to the crowdfunding market for a second attempt. Consequently, crowdfunding campaign relaunches have become an increasingly common and visible phenomenon. The crowdfunding literature has extensively investigated how campaign design features explain and predict fundraising success (Belleflamme et al., 2014; Böckel et al., 2021; Kunz et al., 2017). These discussions generally treat crowdfunding campaigns as static, independent releases and elucidate the determinants of single-campaign success. Nevertheless, the failure–relaunch of a campaign is an interconnected, dynamic process in which entrepreneurs may accumulate and interpret prior experience to strategically adapt subsequent financing activities (Greenberg and Gerber, 2014; Leone and Schiavone, 2019; Piening et al., 2021). New theoretical perspectives are required to better understand the performance of campaign relaunches.
This study aims to bridge this gap by examining the role of founders’ learning efforts in crowdfunding markets. Drawing on organizational learning theory, we conceptualize founders’ learning as the process by which they receive feedback from prior failures, convert it into knowledge and leverage it in behavioral changes (Argote et al., 2021). Following relevant studies, we empirically capture founders’ learning efforts as the extent to which campaign design deviates from previous practice, measuring this deviation through differences in textual descriptions and pictorial designs (Katila and Ahuja, 2002; Piening et al., 2021). First, founders’ participation in the learning process may improve their knowledge and competence (Argote et al., 2021; Argote and Miron-Spektor, 2011). Furthermore, positive signals of founders’ learning may be picked up by potential investors and influence their attributions after observing a notable event, i.e. the prior failure (Kibler et al., 2017; Roccapriore et al., 2021). Through both mechanisms, the performance of campaign relaunches may be affected. This raises the first research question: whether, and to what extent, do founders’ learning efforts influence crowdfunding relaunch performance?
Additionally, not all relaunches are based on the same circumstances, because the severity of the previous failure may differ. Some crowdfunding campaigns experience serious failures and barely gain acceptance from crowds. Other campaigns may not achieve their funding targets but amass a percentage of funding progress that indicates their legitimacy in the crowdfunding market. The severity of failure may reflect how inadequate previous practice is and how it affects the crowd’s perceptions and judgments, thus impacting the learning–performance relationship. To obtain more contextualized evidence on entrepreneurial learning, this study investigates the moderating effects of failure type (severe failure or less severe failure) on the relationship between entrepreneurial learning and crowdfunding relaunch success.
To empirically examine the proposed research questions, we collected data from Kickstarter and identified 5,798 valid campaigns that suffered an initial failure but bounced back to relaunch. The word2vec algorithm, in combination with the word mover’s distance (WMD), was used to capture semantic differences in textual descriptions between campaigns’ initial and second releases (Kusner et al., 2015; Mikolov et al., 2013), whereas the perceptual hashing algorithm (pHash) was employed to measure differences in pictorial design based on pixel distance (Beskow et al., 2020; Horsman et al., 2014). The results of the logistic regression analysis indicate a positive relationship between such differences and the success rate of campaign relaunches, even when controlling for other known factors. Moreover, the empirical evidence on the moderating effects indicates that these relationships are strengthened when the previous failure is more severe. We found consistent results after performing a series of robustness tests, including propensity score matching (PSM) and alternative measures of the independent and dependent variables. Finally, the empirical evidence indicates that the impacts of revising textual descriptions and enhancing visual designs vary across crowdfunding categories.
This study contributes to the existing literature in several ways. First, it investigates campaign relaunch success, thereby contributing to the crowdfunding literature, which has primarily focused on campaign performance from a static and independent perspective. Second, this study leverages organizational learning theory to explain crowdfunding relaunch success and theorizes two mechanisms through which founders’ learning efforts impact their performance. In doing so, it provides theoretical insights into crowdfunding and the organizational literature. Third, this study extends this theoretical contribution by identifying the boundary conditions for more effective entrepreneurial learning. Moreover, this study provides additional theoretical insights by revealing the tension between founders’ learning efforts and failure severity. It is also of great importance in practice to help founders reverse previous failures and bring creative ideas into reality at relaunch.
The remainder of this study is organized as follows. Section 2 reviews the relevant studies to identify critical research gaps in the literature. Section 3 introduces the theoretical background and proposes research hypotheses. Section 4 presents the methodology. Section 5 summarizes the results of the empirical analyses. Finally, Section 6 discusses the theoretical and practical implications.
2. Literature review
2.1 Determinants of crowdfunding success
Crowdfunding draws inspiration from microfinance and crowdsourcing, enabling entrepreneurs to access small amounts of capital from investors worldwide (Mollick, 2014). In this novel financing market, online crowdfunding platforms have replaced intermediaries in traditional financing channels. Founders present various design features to communicate directly with crowds via platforms (Kunz et al., 2017). Given its economic significance, the majority of crowdfunding literature has focused on how available information signals affect investors’ decision-making and campaign performance.
Several studies have demonstrated that a campaign’s design features may significantly affect financing performance. For instance, appropriate financing targets have repeatedly been shown to determine campaign performance (Kunz et al., 2017; Mollick, 2014). Higher funding targets communicate the founder’s confidence and send positive quality signals, but they can simultaneously impede campaign success and increase the potential opportunity cost to investors. Reward design also significantly influences investor decisions (Chen et al., 2016; Du et al., 2019). The literature notes that as the number of reward options increases, investors with heterogeneous requirements are better able to find satisfactory investment options and are thus more likely to participate in crowdfunding activities (Hu et al., 2015; Kunz et al., 2017). Compared with the flexible model (keep-it-all), a fixed model (all-or-nothing) has been found to significantly reduce the perceived risk of online investors and, therefore, to contribute substantially to the campaign’s success rate, especially for campaigns pursuing higher financing goals (Burtch et al., 2018; Strausz, 2017). Several studies have demonstrated the impact of campaign novelty and innovation on financing performance (Chan and Parhankangas, 2017; Davis et al., 2017). Campaign presentation is also a central determinant of fundraising success (Burtch et al., 2018; Mollick, 2014). For instance, a detailed description is recommended to boost the likelihood of project success (Bi et al., 2017; Lagazio and Querci, 2018), and clear, precise language enhances readability and facilitates investors’ decision-making (Parhankangas and Renko, 2017). The founders’ efforts in visualization are likewise critical for achieving the desired financing outcomes (Bi et al., 2017; Kunz et al., 2017).
Additionally, as founders in the crowdfunding market are often in the early stages of their entrepreneurship, their business practices and market reputations are not readily accessible. Investors’ decisions rely heavily on evaluating the founders’ characteristics (Greenberg and Mollick, 2017; Liu et al., 2018; Younkin and Kuppuswamy, 2018). The evidence showed that founders’ demographic characteristics, including gender and ethnicity, have a significant impact on campaign performance (Greenberg and Mollick, 2017; Younkin and Kuppuswamy, 2018). From the perspective of capital theory, many studies have discussed the positive impacts of founders’ human and social capital on their financing performance (Buttice et al., 2017; Colombo et al., 2015; Courtney et al., 2017). Moreover, previous research demonstrated that founders’ financing records are an important consideration (Buttice et al., 2017; Courtney et al., 2017; Soublière and Gehman, 2020). Specifically, the performance of founders’ previous financing activities may influence investors’ recognition of a campaign’s legitimacy in terms of its achievements, thereby affecting their decision-making and evaluation (Soublière and Gehman, 2020).
In summary, the crowdfunding literature has accumulated a wealth of research on how information signals affect the funding performance of a single crowdfunding campaign. Despite this wealth of theoretical knowledge, founders still face high failure rates. Information technology has significantly lowered the threshold for entering the online crowdfunding market, leading to fierce competition, and a lack of experience often results in imperfect design at a campaign’s initial launch. For example, the world-leading platform Kickstarter reported a success rate of approximately 40% (Kickstarter, 2023). Given the considerable effort and cost invested, founders who suffer initial failures still have an incentive to return to crowdfunding platforms and relaunch. Recent crowdfunding research has begun to explore this emerging phenomenon of campaign relaunch.
2.2 Research on crowdfunding campaign relaunch
In the event of a crowdfunding campaign failure, crowdfunding platforms typically offer founders the chance to receive feedback, tweak campaign design and relaunch business ideas (Clauss et al., 2018). Several recent studies have opened the door to a discussion on crowdfunding relaunch from the following two perspectives (summarized in Table 1).
First, several studies have examined the factors affecting founders’ subsequent launches after initial failures. Greenberg and Gerber (2014) identified several key factors that discouraged or motivated subsequent launches. Crowdfunding typically requires emotional and financial support from friends, and founders may worry that relaunching would strain friendships and embarrass them on social networks. Furthermore, they may lose confidence after a previous failure and view a second attempt as a waste of effort, money and time. For others, however, the initial failure may provide an opportunity to collect feedback and form connections with supporters, thus motivating them to relaunch. Relaunch decisions are also influenced by how a campaign performed in its first release. Fan-Osuala (2021) suggested failure severity as an antecedent of founders’ behavior, noting that founders who fail marginally are more likely to launch a subsequent crowdfunding campaign. Additionally, Stevenson et al. (2022) found that positive market validation increases founders’ persistence in entrepreneurial behavior after crowdfunding failure.
Second, several studies explored how founders learn from initial failures and present dynamic changes in campaign designs. Creative business ideas are the most important factors attracting investors to crowdfunding markets. Consequently, founders who fail early often invest considerable attention in redesigning prototypes and improving communication skills (Greenberg and Gerber, 2014; Leone and Schiavone, 2019; Piening et al., 2021). A text analysis report by Piening et al. (2021) captures entrepreneurs’ revisions of product content and technical details after failure. One of the founders interviewed shared his own experiences after failure, emphasizing the importance of structuring an appealing project description (Greenberg and Gerber, 2014). In a case study, Leone and Schiavone (2019) observed that the founder redesigned the product and changed the communication language. Additionally, because of the importance of visual clues in the crowdfunding marketplace, founders’ self-reflection and efforts are also evident in improving their visual presentation quality, such as changing background colors and revising visual content (Chandler et al., 2022; Greenberg and Gerber, 2014; Leone and Schiavone, 2019). Other strategies include lowering fundraising targets and switching transaction modes (Greenberg and Gerber, 2014; Lee and Chiravuri, 2019).
Despite some academic understanding of founders’ behavioral responses to crowdfunding failure, few studies have related these vital revisions to campaign relaunch success. This gap is consequential because campaign relaunches are an increasingly common practice in online crowdfunding markets: providing founders with guidance on optimizing relaunch design helps them avoid successive failures and supports market sustainability. Strategies generated in the failure–relaunch process differ clearly, in how they affect campaign performance, from theories developed for single campaign launches, allowing for a further extension of the crowdfunding literature. In response to these practical requirements and theoretical gaps, this study investigates how, and under what circumstances, founders’ learning efforts affect crowdfunding success. We introduce organizational learning theory to explain the relationship between entrepreneurial learning and its performance consequences in the crowdfunding context.
3. Conceptual framework and hypothesis development
3.1 Founder learning in the crowdfunding market
Founder learning in the crowdfunding market refers to how founders accumulate experience and create knowledge for subsequent campaign launches. This process is consistent with organizational learning theory, in which feedback from previous performance is received, converted into knowledge and then leveraged for behavioral change (Argote et al., 2021). Specifically, the learning process occurs in a context that includes both the organization and the environment in which it is embedded (Argote and Miron-Spektor, 2011). It is triggered by salient task-performance outcomes, such as the failure of a campaign’s initial release, which is the focus of this study (Argote and Miron-Spektor, 2011). Observing, analyzing and reflecting on task performance generate varied knowledge reservoirs that serve as the basis for interacting effectively with that context (Argote et al., 2021; Argote and Miron-Spektor, 2011).
The learning process manifests itself in behavioral change (Argote and Miron-Spektor, 2011), where organizations challenge prior practice and alter available behavioral options in subsequent tasks and events (Thomas et al., 2001). The literature conceptualizes organizational learning efforts as the extent to which firms depart from their business practices from the past (Katila and Ahuja, 2002; Piening et al., 2021). With the development of digital businesses, data on commercial activities have become readily available to the public. Learning efforts can be empirically captured by comparing differences in a firm’s business content. Specifically, recent studies have reflected organizational learning as a semantic difference in project textual descriptions (Angus, 2019; Piening et al., 2021). Following the existing literature, we included enhancements made to the pictorial design in this study. Information technology has made visual presentations a vital part of crowdfunding campaign design. Visual signals are automatic, swift, parallel, effortless and superior in recall and recognition (Kim and Krishnan, 2015; Pieters and Wedel, 2004). They are critical in providing comprehensive and accurate commercial information, enhancing prototype understanding and influencing investor impressions (Bi et al., 2017; Kunz et al., 2017). Entrepreneurs are conscientious about adapting pictorial design when trying to resolve previous crowdfunding failures (Chandler et al., 2022; Leone and Schiavone, 2019). Combining the changes in textual descriptions and pictorial designs allowed us to reflect on the founders’ learning efforts in detail.
3.2 Impacts of entrepreneurial learning on campaign relaunch success
Following the two mechanisms described below, we propose that changes in the textual description and image design between a campaign’s initial release and relaunch are positively associated with relaunch success.
First, learning is widely recognized as enhancing firms’ ability to acquire knowledge and skills, enabling them to detect events, seize opportunities and gain sustainable advantages (Cope, 2005; Johannessen and Olsen, 2003; Morgan and Hunt, 1999). Learning from past failures, founders can infer and interpret unsatisfactory fundraising performance through direct experience (Argote and Miron-Spektor, 2011). Crowdfunding also allows for timely feedback from the crowd or other founders through comments or messages (Chemla and Tinn, 2020), thus providing the founders with additional references to reflect on their business practices. Through critical self-reflection, founders may become more aware of the factors that have resulted in financing outcomes that are different from their aspirations (Argote et al., 2021). Consequently, information is amassed, and new knowledge is developed that can assist in the implementation of corrective measures in subsequent business activities (Sarasvathy et al., 2013). The founders’ tailoring of prototype designs and presentations for the campaign relaunch reflects their improved abilities and keeps the campaign on track. These revisions are likely to comply with investor preferences and market expectations, resulting in a more effective campaign design and higher likelihood of success.
Second, changes in product prototypes and presentations as signals of founders’ learning are likely to be captured by the crowd, affecting their perception of entrepreneurial failure and ultimately determining their investment decisions (Kibler et al., 2017; Roccapriore et al., 2021). As crowdfunding platforms trace founders’ financing records, potential investors typically draw meaningful inferences from founders’ prior experiences (Colombo et al., 2015). By observing a significant event (i.e. entrepreneurial failure), investors are likely to engage in a cognitive process to determine a causal explanation, known as attribution in social psychology (Weiner, 1985). As a consequence of such a process, investors are able to determine who was involved and responsible for the event and the likelihood of it occurring again in the future, thus enabling them to make more effective investment decisions (Weiner, 2000). The founders’ learning efforts presented during the failure–relaunch are likely to affect their relaunch success when interacting with potential backers’ attribution processes (Kibler et al., 2017; Roccapriore et al., 2021).
For one thing, the founders’ learning efforts demonstrate their self-reflection and establish an image of responsibility-taking (Raju et al., 2021). Generally, when individuals attribute others’ failures, they tend to overemphasize internal responsibility while minimizing the influence of situational factors (Zuckerman, 2010). People also believe that others have a moral responsibility to make maximal use of the resources they control (Morales, 2005). In the online crowdfunding market in particular, founders are expected to demonstrate full responsibility toward investors, as their ventures cannot exist without funding (Nielsen, 2018). Founders who relaunch campaigns with no apparent learning effort may appear to be denying responsibility and refusing to initiate a solution to the failure (Raju et al., 2021), inviting negative judgments from investors. By contrast, when investors observe positive differences, they may recognize and reward founders for taking responsibility (Morales, 2005; Roccapriore et al., 2021). Additionally, learning efforts create a contrast between previous and current campaigns, signaling that the venture has learned lessons from failure and taken relevant actions to improve (Mantere et al., 2013; Roccapriore et al., 2021). Investors are then more likely to be persuaded that the factors leading to failure will not recur, retaining a more positive attitude toward the campaign’s prospects.
Overall, the changes presented in campaign relaunches are positively related to relaunch success in the sense that they reflect founders’ increased knowledge and capabilities as well as sending signals about responsibility and the reversal of prior failures. Therefore, the following hypotheses are proposed:
3.3 Moderating effect of prior failure severity
Failures are not all equal; they differ in severity. In the context of crowdfunding failure–relaunch patterns, failure severity is the most relevant contextual factor and aligns closely with the theoretical arguments presented above. Consistent with recent influential crowdfunding research (Soublière and Gehman, 2020), prior failures can be classified as severe or less severe based on the fundraising percentage achieved in the campaign’s initial release. Specifically, severe failures are those that attracted virtually no investor involvement (Soublière and Gehman, 2020). Such failures clearly indicate that something in the business practice went wrong (Piening et al., 2021) and signal that the project falls well short of market expectations (Soublière and Gehman, 2020). By contrast, less severe failures have gained some market recognition, taking important first steps toward their targets (Soublière and Gehman, 2020).
We argue that failure type moderates the effects of entrepreneurial learning on crowdfunding relaunch success. The severity of failure reflects the extent of founders’ knowledge and competence deficiencies in their previous practice (Piening et al., 2021). From a resource-based perspective, a firm’s performance depends significantly on its strategic assets, such as knowledge and capabilities (Bharadwaj et al., 2009). In this sense, the failure of a crowdfunding campaign indicates a deficiency in the underlying knowledge and capabilities, and the importance and necessity of founders’ self-enhancement are magnified when the failure is serious. Specifically, entrepreneurs who experience severe failures can bridge the gap between their capabilities and ambitions through a learning process in which the causes of the prior problem are identified and the frustrating experience is transformed into new knowledge. Such a process increases an entrepreneur’s ability to capture investor preferences and to present an appropriate prototype design and appeal in subsequent practice, playing a decisive role in relaunch success. A less severe failure, by contrast, implies that the founders’ previous business practices have already captured the interest and participation of the crowd (Soublière and Gehman, 2020); their knowledge and competence are relatively close to what is needed to achieve fundraising objectives. While still contributing positively to relaunch success, the learning efforts of these entrepreneurs are likely to be less effective than those of entrepreneurs who have suffered severe failures.
Furthermore, the severity of the failure may also affect the importance of founders sending learning signals to potential investors. First, investors are more likely to view founders as having underutilized internal, controllable resources when the funds raised by the original campaign fall far short of the target (Chang et al., 2015); after all, the founders could presumably have taken appropriate measures to prevent such an adverse outcome. It is therefore particularly important to present information about founders’ efforts when relaunching a campaign, as founders must demonstrate that they take responsibility and are motivated to rebound in order to reduce investors’ negative reactions. Second, as the severity of failure increases, investors may judge founders’ capabilities or strategies to be far from what is required for success and perceive a higher probability that the campaign will fail again. To reverse such a judgment, founders who suffered serious failures must demonstrate that they have learned from previous failures and are therefore able to reverse the previously failed outcome (Roccapriore et al., 2021). Overall, entrepreneurial learning is expected to have a greater impact for campaigns that suffered severe failures. This leads us to propose the following hypotheses:
4. Research methodology
4.1 Data collection and sampling
Data for this study were collected from Kickstarter.com, a world-leading reward-based crowdfunding platform. We developed a Python web crawler to obtain data on all observable crowdfunding campaigns launched between April 2014 and December 2021. This larger dataset was narrowed down to founders engaged in a failure–relaunch process. First, only founders who launched a new campaign after an initial failure were selected from the full sample. Second, each campaign pair’s titles and blurbs (brief descriptions of the campaign’s core value) were analyzed to identify founders who relaunched failed campaigns rather than abandoning their original ideas. Specifically, we manually checked each campaign pair whose titles shared a common substring longer than half the text length and/or whose blurbs’ semantic similarity exceeded the sample median, and included valid observations in the final sample. Furthermore, we sampled the remaining campaign pairs with less similar titles and blurbs and confirmed that they did not qualify as relaunches. The final sample consisted of 5,798 relaunched campaigns.
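The title-based screening step can be sketched as a longest-common-substring test. The snippet below is a minimal illustration, assuming hypothetical titles; it flags a pair for manual review when the shared substring exceeds half the length of the shorter title, which is one plausible reading of the rule described above:

```python
from difflib import SequenceMatcher


def longest_common_substring_len(a: str, b: str) -> int:
    """Length of the longest common substring of two (lowercased) titles."""
    a, b = a.lower(), b.lower()
    match = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return match.size


def is_candidate_relaunch(title_a: str, title_b: str) -> bool:
    """Flag a campaign pair for manual checking when the shared substring
    covers more than half of the shorter title's length."""
    threshold = min(len(title_a), len(title_b)) / 2
    return longest_common_substring_len(title_a, title_b) > threshold
```

Pairs passing this screen (or the blurb-similarity screen) would still be verified manually, as in the sampling procedure described above.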
4.2 Measurement
4.2.1 Dependent variable
The key dependent variable is the success of the relaunched campaign (Success), since a start-up can only receive funding and turn its creative ideas into reality if the campaign succeeds, especially under the all-or-nothing scheme adopted by most mainstream crowdfunding platforms. Success is a binary indicator that takes the value of 1 if a crowdfunding campaign reaches its preset target and 0 otherwise.
4.2.2 Independent variables
The independent variables of interest capture changes in prototype design and presentation. TextDiff was calculated as the semantic distance between the campaign’s relaunch and its original release (Angus, 2019). In keeping with previous studies (Angus, 2019; Taeuscher et al., 2021), we first preprocessed the campaign description by removing non-English words, punctuation, numbers and stop words, and lemmatizing words to their primitive forms. To develop an effective corpus, we generated part-of-speech tags using the Natural Language Toolkit and removed words without specific semantic content, such as articles, pronouns and exclamations. Using word2vec, we converted the preprocessed documents into vector representations. word2vec, proposed by Google in 2013, has become one of the most widely adopted natural language processing models (Mikolov et al., 2013). It builds lightweight neural networks based on the continuous bag-of-words and skip-gram models to embed word semantics in an N-dimensional vector space. Specifically, this study leveraged Google’s pre-trained model to obtain a 300-dimensional vector for each word in our corpus. Next, the word mover’s distance (WMD) was used to calculate document similarity. This method, proposed by Kusner et al. (2015), builds on word2vec embeddings and has been shown to outperform word-based measures such as Euclidean and cosine distance because it accounts for the alignment of words (Kusner et al., 2015). Specifically, WMD represents a document as a normalized bag-of-words vector. The weight of word i in document d is given by Equation (1), where c_i is the number of occurrences of the word:

$$d_i = \frac{c_i}{\sum_{j=1}^{n} c_j} \quad (1)$$

In the semantic space, cost(i, j) represents the cost of transferring word i to word j, measured as the distance between their embeddings x_i and x_j:

$$\mathrm{cost}(i,j) = \lVert x_i - x_j \rVert_2 \quad (2)$$

The word mover’s distance is then the minimum cumulative cost of moving document d to document d′, as shown in Equation (3), which can be computed as a solution to the well-known transportation problem:

$$\mathrm{WMD}(d, d') = \min_{T \geq 0} \sum_{i,j} T_{ij}\,\mathrm{cost}(i,j), \quad \text{s.t. } \sum_{j} T_{ij} = d_i, \;\; \sum_{i} T_{ij} = d'_j \quad (3)$$

This study used the Gensim package in Python to calculate the WMD between the textual descriptions of the campaign’s first and second releases.
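To make the computation in Equation (3) concrete, the transportation problem can be solved directly as a linear program. The sketch below is illustrative rather than the study’s actual pipeline: it uses toy two-dimensional embeddings instead of the 300-dimensional word2vec vectors, and SciPy’s LP solver instead of Gensim’s `wmdistance`, but the objective and constraints are exactly those of Equation (3):

```python
import numpy as np
from scipy.optimize import linprog


def wmd(d1, d2, cost):
    """Word mover's distance between two nBOW weight vectors d1 (length n)
    and d2 (length m), given an n x m word-to-word cost matrix.
    Solves: min sum_ij T_ij * cost_ij  s.t. row sums = d1, column sums = d2."""
    n, m = cost.shape
    A_eq, b_eq = [], []
    for i in range(n):                      # outgoing mass of word i equals d1[i]
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(d1[i])
    for j in range(m):                      # incoming mass at word j equals d2[j]
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
        b_eq.append(d2[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun


# Toy example: documents {a, b} and {a, c}, each word with weight 0.5.
emb = {"a": np.array([0.0, 0.0]),
       "b": np.array([1.0, 0.0]),
       "c": np.array([0.0, 1.0])}
cost = np.array([[np.linalg.norm(emb[w1] - emb[w2]) for w2 in ("a", "c")]
                 for w1 in ("a", "b")])
dist = wmd(np.array([0.5, 0.5]), np.array([0.5, 0.5]), cost)  # 0.5 * sqrt(2)
```

In the toy example, the optimal plan keeps word a in place (zero cost) and moves all of b’s mass to c, giving 0.5·√2; moving mass the other way would cost 1.0, which the LP correctly avoids.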
Another independent variable of interest captures the differences in the campaign’s visualization design, complementing the semantic distance. ImageDiff was measured as the pixel difference between the cover photos of the campaign’s initial release and relaunch. A cover photo is the requisite and most representative visual design element of a campaign release (as shown in Figure 1). It appears repeatedly in the most conspicuous positions, including search pages, founder profile pages and campaign homepages, and plays a significant role in impression formation and decision-making (Luo et al., 2021; Xia et al., 2020). In this study, we applied pHash to measure the pixel distance, an approach proven effective for calculating image similarity (Beskow et al., 2020; Horsman et al., 2014). The algorithm first converts the color image to grayscale and shrinks its resolution to 32 × 32 to remove high frequencies and unnecessary detail. The image is then transformed from the pixel domain to the frequency domain using the discrete cosine transform in Equation (4), where (u, v) indexes the frequency components, f(i, j) denotes the input pixel at location (i, j), c(u) and c(v) are normalization coefficients, and X and Y are the total numbers of pixel rows and columns, respectively:

$$F(u,v) = c(u)\,c(v) \sum_{i=0}^{X-1} \sum_{j=0}^{Y-1} f(i,j) \cos\!\left[\frac{(2i+1)u\pi}{2X}\right] \cos\!\left[\frac{(2j+1)v\pi}{2Y}\right] \quad (4)$$

The block of low-frequency components, which best represents the overall image information, is then thresholded against its median value to obtain a 64-bit image fingerprint sequence. The Hamming distance between two fingerprint sequences reflects the pixel difference.
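A rough sketch of this pipeline follows. It implements one common pHash variant (implementations differ, e.g. in thresholding on the mean versus the median of the low-frequency block) and assumes the image has already been grayscaled and resized to 32 × 32:

```python
import numpy as np
from scipy.fft import dctn


def phash(gray):
    """64-bit perceptual hash of a 32 x 32 grayscale image (values 0-255)."""
    assert gray.shape == (32, 32)
    coeffs = dctn(gray.astype(float), norm="ortho")  # 2-D discrete cosine transform
    low = coeffs[:8, :8].copy()      # keep the 8 x 8 low-frequency block
    low[0, 0] = 0.0                  # drop the DC term (overall brightness only)
    return (low > np.median(low)).astype(np.uint8).ravel()


def image_diff(hash1, hash2):
    """Hamming distance between two 64-bit fingerprints (0 = identical)."""
    return int(np.sum(hash1 != hash2))
```

Because the hash keeps only low-frequency structure, small changes in compression or brightness leave the Hamming distance near zero, while substantive redesigns of the cover photo produce large distances.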
The research context of online crowdfunding allows us to empirically classify the severity of failure (SevereFailure). In line with relevant research, the inflection point of the fundraising-progress distribution was adopted as the cut-off point (Soublière and Gehman, 2020). On Kickstarter, the first inflection point falls at approximately 20% progress, as shown in Figure 2; empirically, roughly 80% of projects that reach 20% progress eventually succeed. Following Soublière and Gehman (2020), we coded failed campaigns that raised between 0 and 20% of their goal as severe failures, and those that raised more than 20% but ultimately failed to reach the preset fundraising goal as less severe failures.
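The coding rule can be stated compactly. In this sketch, the function name is ours and the 20% threshold follows Soublière and Gehman (2020); it applies only to campaigns that failed.

```python
def failure_type(pledged, goal):
    """Classify a failed campaign by its fundraising progress.
    'severe' if it raised at most 20% of its goal,
    'less_severe' if it raised more than 20% but still failed."""
    progress = pledged / goal
    assert progress < 1.0, "only failed campaigns are classified"
    return "severe" if progress <= 0.20 else "less_severe"
```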
To ensure the validity of the data analysis, we controlled for other known factors that have been suggested to affect a single campaign’s success (Frydrych et al., 2014; Kunz et al., 2017; Mollick, 2014). These include the fundraising goal of the campaign (Goal), the duration of the campaign (Duration), the total number of images (Images) and words (TextLens) used to describe the campaign, and whether the campaign adopts video for presentation (Video). Other factors are the number of available reward options (Options), the average reward level for the campaign (RewardPrice) and fixed effects for the category associated with the campaign and the year in which it was launched.
We also controlled for a set of variables reflecting other possible changes between the two releases, including the ratio of the fundraising target at relaunch relative to the initial release (GoalChange), the ratio of ongoing campaigns in the same category at relaunch relative to the initial release (CmpChange) and the time interval between the two launches (Interval). Log transformations were applied to the count variables, and money-related variables were converted to US dollars at the prevailing exchange rates. Tables 2 and 3 present the descriptive statistics and correlation matrix of the key variables, respectively.
5. Empirical analysis and results
5.1 Model
A logistic regression model is used to estimate the effect of textual and image differences on campaign relaunch success according to Equation (5), where Success_i indicates whether campaign i reaches its fundraising goal in the second release; TextDiff_i and ImageDiff_i reflect entrepreneurs' learning efforts; β1 and β2 capture the impacts of primary interest; and Controls_i represents the set of campaign characteristics, category fixed effects and time fixed effects:

$$\Pr(\mathit{Success}_i = 1) = \Lambda\left(\beta_0 + \beta_1\,\mathit{TextDiff}_i + \beta_2\,\mathit{ImageDiff}_i + \gamma'\,\mathit{Controls}_i\right) \qquad (5)$$

where Λ(·) is the logistic function. A cross-sectional data set was constructed with one observation for each relaunched campaign.
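As an illustration of the estimation behind Equation (5), the sketch below fits a toy logit by gradient ascent on the log-likelihood. The data are synthetic, and the actual analysis uses logistic regression with fixed effects in a statistical package rather than this hand-rolled optimizer.

```python
import math

def fit_logit(xs, ys, lr=0.1, iters=2000):
    """Maximum-likelihood fit of Pr(y=1|x) = 1 / (1 + exp(-(b0 + b1*x)))
    via batch gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # score w.r.t. intercept
            g1 += (y - p) * x    # score w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Synthetic data: higher "TextDiff" raises the success probability
xs = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
ys = [0,   0,   0,   1,   0,   1,   1,   1]
```

On data where the outcome rises with the regressor, the fitted slope is positive, mirroring the sign of β1 and β2 reported below.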
5.2 Results
The regression model was estimated stepwise. Column (1) of Table 4 reports a baseline estimation with the control variables only. The coefficient estimates and significance tests for the control variables are largely consistent with previous studies (Frydrych et al., 2014; Kunz et al., 2017; Mollick, 2014). Notably, SevereFailure has a noticeable impact: severe failure had an adverse effect on campaign relaunch success (β = −2.590, p < 0.01), supporting the reasoning in our hypothesis development. We then estimated the logit model of success as a function of TextDiff and ImageDiff in Columns (2) and (3) of Table 4. The results show positive coefficients for textual differences (β = 0.784, p < 0.01) and cover image differences (β = 0.737, p < 0.01), implying that founders' efforts to learn from prior failures significantly increase the likelihood of campaign relaunch success. These effects remain significant in the full model (Column (4)), supporting hypotheses H1a and H1b.
To better estimate the differential effects of founders' learning efforts across contexts (Venkatraman, 1989), we adopted a subgroup analysis to examine the moderating effect of failure type (Cirillo, 2019; Kwon et al., 2016). Specifically, we divided the sample into two subsamples according to SevereFailure, estimated the regression coefficients of the independent variables separately, and compared the coefficient differences statistically. The results show that text and image differences have strong positive effects (β = 0.916, p < 0.01; β = 1.020, p < 0.01) on campaign relaunch success for the subsample with severe prior failures. For the subsample with modest prior failures, the effect of text differences was significant but weaker (β = 0.567, p < 0.05), whereas the effect of image differences was not significant. Fisher's exact test was used to verify the moderating effects (Herm-Stapelberg and Rothlauf, 2020; Zheng et al., 2014). The coefficient differences were significant for both TextDiff (|Diff| = 0.349, p < 0.01) and ImageDiff (|Diff| = 1.27, p < 0.01). In summary, the severe failure of an initial release suggests that the campaign design deviates considerably from market expectations, sending negative signals to subsequent audiences. Under such circumstances, entrepreneurs' learning efforts have a significant impact on relaunch success. In contrast, a less severe failure can signal to potential backers that the campaign design is already close to market expectations, so the positive effects of entrepreneurial learning behavior are weakened. These results support hypotheses H2a and H2b.
5.3 Robustness checks and additional analysis
5.3.1 Propensity score matching
First, propensity score matching (PSM) was performed to mitigate the risk of selection bias, which arises when the independent variables are correlated with other covariates. Specifically, whether entrepreneurs attempt to learn and present noticeable changes at relaunch may not be an independent event: specific factors (such as failure type and fundraising targets) may influence entrepreneurs' willingness to learn, and changes in the prototype's design and presentation could co-vary with other design features (e.g. images and videos). In such cases, the unbiasedness and accuracy of the regression estimates would be compromised.
PSM is a widely adopted method for correcting selection bias and supporting causal inference (Caliendo and Kopeinig, 2008). Following previous studies (Kim et al., 2016; Li et al., 2021), we used the median values of the independent variables as cut-off points to divide the samples into a group that made significant changes (treated group) and a group that did not (control group). The variable Z equals 1 if a sample belongs to the treated group and 0 if it belongs to the control group. When PSM is performed on one independent variable, the other independent variable and all control variables are used as covariates to predict the propensity score p that a sample belongs to the treated group (Z = 1). Samples from the treated group were matched with control samples that had similar propensity scores. Consequently, systematic differences in covariates between the matched samples were eliminated, ensuring that the dependent variable varies only with the key independent variable. For each independent variable, caliper matching with a width of 0.01 was performed in STATA using the psmatch2 command (Becker and Caliendo, 2007). The results of the bias correction are presented in Tables 5 and 6, where U-rows show the mean differences of the unmatched samples and M-rows show the mean differences of the matched samples. Figures 3 and 4 illustrate that PSM reduces the standardized bias to below 5%, a level that does not threaten the validity of the regression.
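The matching step can be illustrated with a greedy 1:1 nearest-neighbour sketch. psmatch2 implements a richer procedure, and the unit IDs and propensity scores below are hypothetical.

```python
def caliper_match(treated, control, caliper=0.01):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    treated/control: lists of (unit_id, propensity) pairs.
    Returns matched (treated_id, control_id) pairs; treated units with
    no unused control within the caliper remain unmatched."""
    pairs, used = [], set()
    for t_id, t_p in treated:
        best_id, best_gap = None, caliper
        for c_id, c_p in control:
            gap = abs(t_p - c_p)
            if c_id not in used and gap <= best_gap:
                best_id, best_gap = c_id, gap
        if best_id is not None:
            pairs.append((t_id, best_id))
            used.add(best_id)
    return pairs
```

A treated unit with propensity 0.50 matches a control at 0.505 (gap 0.005 ≤ 0.01), while one at 0.90 with no control nearby is dropped, mirroring how unmatched samples fall out of the re-estimation.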
The research model was re-estimated on the matched sample by applying the matching weights ([pw = _weight]) to the logistic regression used in the main analysis; samples without any pair within the specified caliper were thereby excluded. According to Columns (1) to (3) of Table 7, text differences still positively influence campaign relaunch success in the matched sample, the between-group coefficient difference is significant at the 0.01 level, and the positive effect is accentuated for campaigns whose initial failures were more severe. Similarly, the positive impact of image differences and the between-group difference test yielded significant results at the 0.01 and 0.05 levels, respectively, indicating the robustness of our findings.
5.3.2 Alternative measures of failure severity
In the main analysis, the severity of failure was discretized into a binary variable (SevereFailure). Although this approach is mathematically meaningful (corresponding to the first inflection point) and theoretically grounded, alternative measures are needed to ensure the validity of the results. We first treated prior failure severity as a continuous variable (Severity), measured as 1 minus the fundraising percentage achieved (Piening et al., 2021). For instance, if a campaign raised 10% of its target at its prior release, its failure severity is 90%. Furthermore, while funding progress provides an intuitive picture of failure severity, potential backers may react to other performance indicators. Research has demonstrated that backers commonly observe their peers' engagement behavior during decision-making (Xiao et al., 2021). The number of investors attracted at the campaign's initial release may represent the crowd's acceptance of the focal business concept and could serve as a critical reference in backers' investment decisions. Therefore, we created the variable LastBackers, the log-transformed number of backers attracted during the campaign's initial release.
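Both alternative moderators are simple functions of the initial release's outcomes. In this sketch the variable names mirror the text, and the log(1 + n) transform for backer counts is our assumption for handling zero-backer campaigns.

```python
import math

def severity(amount_raised, goal):
    """Continuous failure severity: 1 minus the fundraising percentage
    achieved at the initial release (Piening et al., 2021)."""
    return 1.0 - amount_raised / goal

def last_backers(n_backers):
    """Log-transformed count of backers attracted at the initial release;
    log(1 + n) keeps zero-backer campaigns defined (our convention)."""
    return math.log(1 + n_backers)
```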
Table 8 summarizes the regression results for the alternative moderators. According to Column (1), replacing the discrete measure with the continuous one yields similar results: changes in textual description and visual design benefit the success of campaign relaunches, and as failure severity increases, these learning efforts have a greater impact. The results also remain consistent when the moderator is replaced with the number of investors attracted at the campaign's first release. Column (2) reports significant negative coefficients for the interaction terms between LastBackers and the two independent variables, indicating that previously achieved support substitutes for learning efforts.
5.3.3 Alternative indicators of campaign performance
The dependent variable in the main analysis was whether the campaign achieved success at its second launch. We focus on this variable because most mainstream crowdfunding platforms adopt an all-or-nothing scheme in which founders can access the capital only if the campaign succeeds, making it particularly important for initially failed campaigns to reverse their outcomes. However, other performance indicators are also meaningful. We used ordinary least squares (OLS) regression to estimate the research model on the log-transformed number of investors (Backers) and the log-transformed amount of funds raised (Pledges). The results in Table 9 indicate that differences in the campaign's textual description and pictorial design significantly increase campaign performance in terms of both backers and pledges. In line with the main analysis, campaigns with severe prior failures were more likely to benefit from learning efforts, whereas for campaigns with less severe failures such effects were weaker and less significant.
5.3.4 Additional analysis: role of campaign type
The study combines textual descriptions and pictorial designs to describe entrepreneurs' learning behavior during the failure–relaunch process. In developing the hypotheses, we did not discuss these two variables separately because their direct effects on campaign relaunch success and their interaction effects with failure severity share similar theoretical mechanisms. Nevertheless, considering the variety of crowdfunding activities, there may be other important conditional factors, such as campaign category, that differentiate the outcomes of entrepreneurs' efforts in textual communication and visual presentation. Evidence of such heterogeneous impacts can provide more nuanced guidance for entrepreneurial practice. Therefore, in this additional analysis, we examined the possible moderating effect of campaign category.
For two reasons, we classified the samples into commercial and cultural campaigns. First, the crowdfunding market both supports entrepreneurs in their quest for commercial value (technology, design, food, crafts, etc.) and welcomes unique value in the cultural sphere (art, music, film, etc.). This classification paradigm is an established approach for differentiating business activities in the crowdfunding context (Bürger and Kleinert, 2021; Josefy et al., 2017). Additionally, cultural campaigns differ from purely economic activities in that their outcomes are more cultural, symbolic, aesthetic and experiential than utilitarian and commercial (Throsby, 1994). The fundamental differences between these two campaign categories correspond to the characteristics of verbal and visual cues in message delivery.
As shown in Table 10, improvements in textual and pictorial content carry significantly different effect strengths across campaign categories. Specifically, because visual signals effectively convey abstract, conceptual, imaginative and experiential information (Chang, 2012; Novak and Hoffman, 2009), improving graphic design quality is one of the most important aspects of cultural entrepreneurs' self-enhancement and significantly impacts campaign relaunch success (β = 0.646, p < 0.05). In contrast, cognitive thinking and reasoning are closely tied to textual comprehension (Elder and Krishna, 2010); accordingly, optimizing the textual description during the relaunch had a greater impact on commercial campaigns (β = 1.440, p < 0.01) than on cultural campaigns (β = 0.361, p < 0.05). Based on Fisher's test, the coefficient differences between campaign categories are significant at the 0.1 and 0.01 levels for the two independent variables, respectively.
6. Discussion and implications
6.1 Summary of findings
Intense market competition means crowdfunding campaigns face a high probability of failure. Having invested considerable time and effort, founders are motivated after an initial failure to bring their creative ideas back to the crowdfunding market for a relaunch. Despite the growing prevalence of relaunches, the keys to their success remain unclear. In this study, we conceptualized entrepreneurs' learning efforts as the differences they introduce in textual description and pictorial design and explored the extent to which such efforts contribute to crowdfunding success.
We theorized the positive impact of entrepreneurial learning through two mechanisms. Organizational learning theory suggests that entrepreneurs acquire knowledge and abilities through accumulated experience and self-reflection; consequently, campaign designs can be improved to better match market and crowd expectations. Additionally, signaling theory and an analysis of investors' attribution processes suggest that these changes signal responsibility and self-improvement in entrepreneurship, and investors are more likely to be confident when shown that a prior failure will be reversed. Our empirical analysis showed that campaigns presenting differences in textual description and pictorial design have a higher likelihood of success, and that the impact of entrepreneurial learning on relaunch success is more salient when previous failures are more severe. These findings are consistent across several robustness tests. Further investigation indicated that textual descriptions are more crucial for commercial campaigns, whereas image design quality is more critical for cultural campaigns.
6.2 Academic implications
This study contributes to theoretical development in several ways. First, it contributes to the crowdfunding literature by examining campaign relaunch success. Extensive research has examined how campaign designs and founders' signaling strategies influence investors' decisions and contribute to campaign success (Buttice et al., 2017; Kunz et al., 2017; Zhang et al., 2020). However, these discussions treat crowdfunding campaigns as independent releases, and little attention has been paid to campaign relaunches after initial failures. This study examines how founders' learning efforts during the failure–relaunch process affect their relative success rates. By focusing on this emerging research topic, the study reveals new variables for explaining and predicting campaign performance.
Second, this study leveraged organizational learning theory to explain crowdfunding relaunch success and theorized the impact of founders' learning efforts through two mechanisms, providing theoretical insights for both the crowdfunding and organizational literatures. Unlike a single crowdfunding release, a relaunch involves dynamic changes over time: founders can gain experience, conduct self-reflection, learn from their failures and change their behavior in subsequent business practices. Organizational learning therefore provides a suitable and coherent theoretical framework and a new lens for understanding crowdfunding market phenomena. Additionally, as technology makes online business activities more accessible to the public, potential investors can observe and respond to founders' dynamic changes. Traditional learning theories emphasize the enhancement of knowledge and competence as a result of learning (Argote et al., 2021). In this study, we drew further on attribution theory to explain how founders' learning signals shape positive images of responsibility and the reversal of prior failure outcomes, thereby influencing potential investors' perceptions, assessments and behaviors. The study thus extends learning mechanisms to the unique context of crowdfunding.
Third, we extended this theoretical contribution by identifying the boundary conditions for more effective entrepreneurial learning. In this study, we theorized how serious failures indicate entrepreneurs’ deficiencies in knowledge and capabilities and determine the necessity of learning in their subsequent business practices, and how severity affects the importance of sending learning signals to potential investors. By discussing the theoretical tensions between founders’ learning and the severity of failure, this study provides additional theoretical insights.
6.3 Practical implications
These findings provide practical recommendations for founders and platforms. First, crowdfunding campaigns are prone to failure because of intense competition, so it is crucial to identify strategies that help founders reverse previous failures and bring creative ideas to life at relaunch. Based on our findings, founders should engage in learning efforts during the failure–relaunch process to improve their chances of a successful relaunch. Specifically, we revealed the impact of differences in textual descriptions and visual designs; founders should be aware that such efforts are effective in improving campaign design quality and influencing investor perceptions and behavior. Second, this study revealed the moderating role of failure type in the relationship between founders' learning efforts and relaunch success. Founders should therefore consider the severity of prior failures when designing the relaunch campaign, recognize the necessity of experiential learning and invest corresponding effort. Third, the payoff from founders' improvements differs across campaign types: commercial campaigns depend more on improved textual descriptions, whereas cultural campaigns depend more on improved image design quality. We therefore advise entrepreneurs to direct their efforts accordingly. Fourth, platforms should provide relevant tips for relaunched campaigns or develop functions that highlight the changes made. Such measures can improve the effectiveness of financing campaigns, and founders may be more willing to make a second attempt rather than exit the platform, contributing to the platform's long-term growth and competitiveness.
6.4 Future research
This study has several limitations. First, we examined the proposed hypotheses using data collected from a single crowdfunding platform. As Kickstarter is the most representative crowdfunding platform, our findings should be generalizable; nevertheless, testing them across a wider range of platforms may provide deeper insight into the impact of cultural differences. Second, we focused on founders whose first two releases form a failure–relaunch sequence. We restricted the sample in this way to establish a precise research scenario and enhance the credibility of the proposed influence mechanisms. Nevertheless, founders' previous financing experience (whether successful or unsuccessful) may influence the effectiveness of self-enhancement or complicate investors' attributions. Future work should investigate experience-related factors as moderators in an unrestricted sample. Third, because of data limitations, we could not detect the dynamic impacts of these strategies. With panel data, further research could explore whether the influence strength varies as fundraising progresses.
The authors gratefully acknowledge the insightful comments of the anonymous reviewers and the Associate Editor, as well as the support of the Senior Editor and the Editor-in-Chief. The authors would like to thank the Talent Fund of Beijing Jiaotong University [2023XKRCW014] and the National Natural Science Foundation of China (Nos. 72071038, 72121001, 91846301) for their support.
Figure 1
Cover photograph in crowdfunding platform
[Figure omitted. See PDF]
Figure 2
Distribution of the overall achieved percentage on Kickstarter
[Figure omitted. See PDF]
Figure 3
Bias correction for text difference
[Figure omitted. See PDF]
Figure 4
Bias correction for image difference
[Figure omitted. See PDF]
Table 1
Summary of research on crowdfunding campaign relaunch
| Research question | Paper | Methodology | Key factors |
|---|---|---|---|
| Antecedents of founders’ relaunch behavior | Greenberg and Gerber (2014) | Interview/Empirical | Detractors: consumption of friendship; lost confidence; waste of time and money |
| | Fan-Osuala (2021) | Empirical | Failure severity |
| | Stevenson et al. (2022) | Experiment | Positive market validation |
| Dynamic changes in campaign relaunch | Greenberg and Gerber (2014) | Interview/Empirical | Communication strategy; lower goal setting |
| | Leone and Schiavone (2019) | Case study | Visual presentation; project redesign |
| | Lee and Chiravuri (2019) | Empirical | Lower goal setting; flexible transaction model |
| | Piening et al. (2021) | Empirical | Improvement of textual description |
| | Chandler et al. (2022) | Case study | Visual presentation; experience learning |
Source(s): Author’s own creation
Table 2
Summary statistics
| Variable | N | Mean | SD | Definition |
|---|---|---|---|---|
| Success | 5,798 | – | – | Binary indicator for the campaign’s success, set to 1 if successful |
| Pledges | 5,798 | 5.532 | 3.047 | Log-transformed value of the total amount of funds raised by the campaign (converted to USD) |
| Backers | 5,798 | 2.532 | 1.721 | Log-transformed value of the total number of backers who fund the campaign |
| TextDiff | 5,798 | 0.474 | 0.351 | The word mover’s distance between the campaign’s textual description and the description of its first release |
| ImageDiff | 5,798 | 0.260 | 0.191 | The perceptual hash distance between the campaign’s cover image and the image used at its first release |
| SevereFailure | 5,798 | – | – | Binary indicator representing the failure type of the campaign’s first release, set to 1 if the failure is severe |
| Goal | 5,798 | 8.144 | 1.689 | Log-transformed value of the campaign’s fundraising target (converted to USD) |
| Images | 5,798 | 1.328 | 1.264 | Log-transformed value of the number of images presented on the homepage of the campaign |
| Video | 5,798 | 0.526 | 0.408 | Log-transformed value of the number of videos presented on the homepage of the campaign |
| TextLens | 5,798 | 5.860 | 0.932 | Log-transformed value of the word count for the campaign narrative |
| Options | 5,798 | 7.437 | 5.459 | Log-transformed value of the number of available rewards in the campaign |
| RewardPrice | 5,798 | 4.577 | 1.438 | Log-transformed value of the mean value of all the price options of the campaign (converted to USD) |
| Duration | 5,798 | 3.444 | 0.473 | Log-transformed value of campaign duration (days) |
| GoalChange | 5,798 | 0.839 | 1.208 | Log-transformed value of the ratio of fundraising target at relaunch relative to the initial release |
| CmpChange | 5,798 | −0.024 | 0.459 | Log-transformed value of the ratio of ongoing campaigns in the same category at relaunch relative to the initial release |
| Interval | 5,798 | 2.647 | 1.178 | Log-transformed value of the time interval between the campaign's initial release and relaunch (weeks) |
Source(s): Author’s own creation
Table 3
Correlations
| Variable | VIF | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.Success | | 1.00 |
| 2.Pledges | | 0.64*** | 1.00 |
| 3.Backers | | 0.69*** | 0.91*** | 1.00 |
| 4.TextDiff | 1.31 | 0.02* | −0.01 | 0.02* | 1.00 | ||||||||||||
| 5.ImageDiff | 1.19 | 0.07*** | 0.09*** | 0.11*** | 0.30*** | 1.00 | |||||||||||
| 6.SevereFailure | 1.25 | −0.55*** | −0.51*** | −0.55*** | 0.08*** | −0.02 | 1.00 | ||||||||||
| 7.Goal | 1.99 | −0.26*** | 0.07*** | 0.07*** | −0.01 | 0.02* | 0.15*** | 1.00 | |||||||||
| 8.Images | 2.06 | 0.26*** | 0.46*** | 0.52*** | 0.01 | 0.10*** | −0.28*** | 0.13*** | 1.00 | ||||||||
| 9.Video | 1.23 | 0.13*** | 0.27*** | 0.27*** | 0.00 | 0.08*** | −0.10*** | 0.19*** | 0.31*** | 1.00 | |||||||
| 10.TextLens | 1.53 | 0.19*** | 0.36*** | 0.37*** | −0.23*** | 0.03* | −0.18*** | 0.19*** | 0.44*** | 0.24*** | 1.00 | ||||||
| 11.Options | 1.45 | 0.23*** | 0.40*** | 0.43*** | −0.01 | 0.03** | −0.23*** | 0.17*** | 0.38*** | 0.21*** | 0.32*** | 1.00 | |||||
| 12.RewardPrice | 1.62 | 0.04*** | 0.20*** | 0.13*** | −0.07*** | 0.01 | −0.01 | 0.44*** | 0.08*** | 0.19*** | 0.24*** | 0.36*** | 1.00 | ||||
| 13.Duration | 1.14 | −0.18*** | −0.06*** | −0.05*** | 0.02 | 0.01 | 0.16*** | 0.23*** | −0.01 | 0.02 | 0.01 | 0.02 | 0.06*** | 1.00 | |||
| 14.GoalChange | 1.41 | 0.26*** | 0.10*** | 0.09*** | −0.01 | 0.02 | 0.01 | −0.46*** | 0.04*** | 0.04*** | 0.01 | −0.02 | −0.10*** | −0.18*** | 1.00 | ||
| 15.CmpChange | 1.19 | −0.01 | −0.03*** | −0.04*** | −0.03** | 0.02* | 0.02 | 0.02 | −0.03* | 0.01 | −0.03** | −0.04*** | −0.02* | −0.02 | 0.03** | 1.00 | |
| 16.Interval | 1.29 | 0.03** | 0.11*** | 0.13*** | 0.31*** | 0.28*** | 0.01 | 0.10*** | 0.14*** | 0.11*** | 0.08*** | 0.05*** | 0.04*** | 0.16*** | −0.03** | −0.05*** | 1.00 |
Note(s): VIF is short for variance inflation factor; *P < 0.1, **P < 0.05, ***P < 0.01
Source(s): Author’s own creation
Table 4
Main analysis
| Baseline | Impacts of IVs | Moderating effects | ||||
|---|---|---|---|---|---|---|
| (1) | (2) | (3) | (4) | (5) Less severe | (6) Severe | |
| TextDiff | 0.784*** | 0.723*** | 0.567** | 0.916*** | ||
| (0.122) | (0.125) | (0.229) | (0.161) | |||
| ImageDiff | 0.737*** | 0.466** | −0.250 | 1.020*** | ||
| (0.207) | (0.213) | (0.348) | (0.289) | |||
| SevereFailure | −2.590*** | −2.654*** | −2.598*** | −2.654*** | ||
| (0.085) | (0.085) | (0.085) | (0.086) | |||
| Goal | −0.461*** | −0.468*** | −0.462*** | 0.552*** | 0.033 | −0.734*** |
| (0.036) | (0.037) | (0.036) | (0.044) | (0.069) | (0.051) | |
| Images | 0.436*** | 0.413*** | 0.426*** | 0.037 | 0.241*** | 0.459*** |
| (0.043) | (0.043) | (0.043) | (0.038) | (0.072) | (0.058) | |
| Video | 0.370*** | 0.363*** | 0.364*** | 0.063 | 0.126 | 0.479*** |
| (0.100) | (0.101) | (0.101) | (0.087) | (0.175) | (0.132) | |
| TextLens | 0.192*** | 0.283*** | 0.196*** | −0.467*** | 0.157* | 0.354*** |
| (0.050) | (0.052) | (0.050) | (0.037) | (0.082) | (0.072) | |
| Options | 0.052*** | 0.049*** | 0.052*** | 0.409*** | 0.023 | 0.0657*** |
| (0.010) | (0.010) | (0.010) | (0.043) | (0.017) | (0.014) | |
| RewardPrice | 0.173*** | 0.186*** | 0.174*** | 0.359*** | 0.127* | 0.211*** |
| (0.035) | (0.035) | (0.035) | (0.101) | (0.070) | (0.045) | |
| Duration | −0.291*** | −0.272*** | −0.282*** | 0.279*** | −0.401*** | −0.0672 |
| (0.085) | (0.086) | (0.086) | (0.052) | (0.146) | (0.119) | |
| GoalChange | 0.550*** | 0.552*** | 0.549*** | 0.049*** | 0.943*** | 0.416*** |
| (0.044) | (0.044) | (0.044) | (0.010) | (0.114) | (0.049) | |
| CmpChange | 0.0622 | 0.073 | 0.049 | 0.186*** | 0.200 | −0.0624 |
| (0.086) | (0.086) | (0.087) | (0.035) | (0.147) | (0.102) | |
| Interval | 0.131*** | 0.052 | 0.097*** | −0.267*** | −0.240*** | 0.169*** |
| (0.036) | (0.037) | (0.037) | (0.086) | (0.069) | (0.049) | |
| Year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Month fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Category fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Cons | 2.052*** | 1.433*** | 1.951*** | 1.415*** | 0.560 | −1.029 |
| (0.490) | (0.502) | (0.491) | (0.502) | (0.829) | (0.678) | |
| Observations | 5,798 | 5,798 | 5,798 | 5,798 | 1,912 | 3,886 |
| Pseudo R2 | 0.400 | 0.402 | 0.398 | 0.402 | 0.154 | 0.325 |
Note(s): IVs is short for independent variables. Robust standard errors are reported in parentheses; *P < 0.1, **P < 0.05, ***P < 0.01
Source(s): Author’s own creation
Table 5
Bias correction for textual difference
| U/M | Variable | Treated mean | Control mean | %Bias | Variable | Treated mean | Control mean | %Bias |
|---|---|---|---|---|---|---|---|---|
| U | ImageDiff | 0.3148 | 0.2054 | 59.9 | Options | 1.3666 | 1.2897 | 6.1 |
| M | 0.3146 | 0.3213 | −3.7 | 1.3651 | 1.3381 | 2.1 | ||
| U | SevereFailure | 0.8492 | 0.8278 | 1.8 | RewardPrice | 0.5304 | 0.5213 | 2.2 |
| M | 0.8484 | 0.8448 | 0.3 | 0.5302 | 0.5244 | 1.4 | ||
| U | Goal | 0.7054 | 0.6351 | 15.0 | Duration | 5.7144 | 6.0048 | −31.5 |
| M | 0.7049 | 0.7132 | −1.8 | 5.7210 | 5.6695 | 5.6 | ||
| U | Images | −0.0419 | −0.0062 | −7.8 | GoalChange | 7.4753 | 7.3991 | 1.4 |
| M | −0.0418 | −0.0314 | −2.3 | 7.4827 | 7.2414 | 4.4 | ||
| U | Video | 3.0051 | 2.2890 | 63.8 | CmpChange | 4.4809 | 4.6729 | −13.4 |
| M | 3.0048 | 2.9273 | 6.9 | 4.4840 | 4.4341 | 3.5 | ||
| U | TextLens | 8.1336 | 8.1545 | −1.2 | Interval | 3.4537 | 3.4333 | 4.3 |
| M | 8.1327 | 8.1816 | −2.9 | 3.4537 | 3.4633 | −2.0 | ||
Source(s): Author’s own creation
Table 6
Bias correction for image difference
| U/M | Variable | Treated mean | Control mean | %Bias | Variable | Treated mean | Control mean | %Bias |
|---|---|---|---|---|---|---|---|---|
| U | TextDiff | 0.5630 | 0.3797 | 54.0 | Options | 7.5579 | 7.3081 | 4.6 |
| M | 0.5611 | 0.5638 | −0.8 | 7.5551 | 7.4611 | 1.7 | ||
| U | SevereFailure | 0.6657 | 0.6751 | −2.0 | RewardPrice | 4.6072 | 4.5444 | 4.4 |
| M | 0.6657 | 0.6711 | −1.2 | 4.6042 | 4.6022 | 0.1 | ||
| U | Goal | 8.1982 | 8.0861 | 6.6 | Duration | 3.4518 | 3.4347 | 3.6 |
| M | 8.1944 | 8.1887 | 0.3 | 3.4514 | 3.4499 | 0.3 | ||
| U | Images | 1.4149 | 1.2352 | 14.3 | GoalChange | 0.8468 | 0.8296 | 1.4 |
| M | 1.4128 | 1.4027 | 0.8 | 0.8435 | 0.8369 | 0.5 | ||
| U | Video | 0.5541 | 0.4955 | 14.4 | CmpChange | −0.0218 | −0.0265 | 1.0 |
| M | 0.5524 | 0.5443 | 2.0 | −0.0247 | −0.0155 | −2.0 | ||
| U | TextLens | 5.8746 | 5.8436 | 3.3 | Interval | 2.9317 | 2.3425 | 51.6 |
| M | 5.8745 | 5.844 | 3.3 | 2.9216 | 2.9184 | 0.3 | ||
Source(s): Author’s own creation
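Tables 5 and 6 report, for each covariate, the standardized percentage bias between treated and control groups before and after matching; values close to zero in the matched rows indicate good covariate balance. As a minimal illustration only (the tables do not spell out the formula, so this assumes the conventional Rosenbaum–Rubin definition), the statistic can be computed as:

```python
from statistics import mean, variance


def standardized_bias(treated, control):
    """Standardized percentage bias of one covariate:
    100 * (mean_T - mean_C) / sqrt((var_T + var_C) / 2),
    where var_T and var_C are the sample variances of the
    treated and control groups."""
    pooled_sd = ((variance(treated) + variance(control)) / 2) ** 0.5
    return 100 * (mean(treated) - mean(control)) / pooled_sd


# Toy example: equal spreads, means one pooled SD apart -> bias of 100%.
print(standardized_bias([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]))
```

A covariate whose post-matching bias stays below roughly 5% in absolute value, as in the matched rows above, is conventionally regarded as well balanced.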
Table 7
Regression results on matched samples
| | Matched samples on textual difference | | | Matched samples on image difference | | |
|---|---|---|---|---|---|---|
| | Baseline | Moderating effects | | Baseline | Moderating effects | |
| | (1) | (2) Non-severe | (3) Severe | (4) | (5) Non-severe | (6) Severe |
| TextDiff | 0.728*** | 0.295 | 1.079*** | 0.781*** | 0.496* | 1.038*** |
| | (0.148) | (0.256) | (0.194) | (0.147) | (0.258) | (0.194) |
| ImageDiff | 0.611** | 0.080 | 1.194*** | 0.502** | −0.175 | 1.027*** |
| | (0.307) | (0.464) | (0.413) | (0.225) | (0.364) | (0.303) |
| SevereFailure | −2.936*** | | | −2.683*** | | |
| | (0.117) | | | (0.105) | | |
| Goal | −0.499*** | 0.069 | −0.808*** | −0.479*** | −0.001 | −0.736*** |
| | (0.055) | (0.097) | (0.074) | (0.043) | (0.076) | (0.060) |
| Images | 0.442*** | 0.339*** | 0.433*** | 0.380*** | 0.252*** | 0.388*** |
| | (0.064) | (0.108) | (0.083) | (0.053) | (0.084) | (0.071) |
| Video | 0.143 | −0.362 | 0.520*** | 0.351*** | 0.209 | 0.435*** |
| | (0.146) | (0.225) | (0.201) | (0.124) | (0.208) | (0.167) |
| TextLens | 0.279*** | 0.029 | 0.425*** | 0.305*** | 0.0934 | 0.439*** |
| | (0.077) | (0.108) | (0.107) | (0.064) | (0.099) | (0.087) |
| Options | 0.036*** | 0.011 | 0.047*** | 0.060*** | 0.0476** | 0.065*** |
| | (0.012) | (0.023) | (0.016) | (0.012) | (0.019) | (0.016) |
| RewardPrice | 0.198*** | 0.204** | 0.229*** | 0.188*** | 0.107 | 0.230*** |
| | (0.048) | (0.089) | (0.060) | (0.044) | (0.082) | (0.057) |
| Duration | 0.019 | −0.134 | 0.170 | −0.130 | −0.202 | 0.034 |
| | (0.121) | (0.197) | (0.164) | (0.100) | (0.168) | (0.137) |
| GoalChange | 0.631*** | 1.168*** | 0.468*** | 0.580*** | 1.009*** | 0.451*** |
| | (0.063) | (0.146) | (0.068) | (0.053) | (0.133) | (0.060) |
| CmpChange | 0.066 | 0.102 | 0.116 | 0.080 | 0.225 | −0.004 |
| | (0.120) | (0.193) | (0.142) | (0.110) | (0.159) | (0.147) |
| Interval | −0.060 | −0.368*** | 0.088 | −0.022 | −0.237*** | 0.098 |
| | (0.057) | (0.092) | (0.068) | (0.050) | (0.082) | (0.066) |
| Year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Month fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Category fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Cons | 0.907 | 0.188 | −1.662* | 1.117* | 0.233 | −1.534* |
| | (0.691) | (0.989) | (0.955) | (0.616) | (0.994) | (0.826) |
| Observations | 5,789 | 1,912 | 3,877 | 5,767 | 1,908 | 3,859 |
| Pseudo R2 | 0.423 | 0.402 | 0.332 | 0.403 | 0.158 | 0.317 |
Note(s): Robust standard errors are reported in parentheses; *P < 0.1, **P < 0.05, ***P < 0.01
Source(s): Author’s own creation
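Table 7 re-estimates the models on propensity-score-matched samples. This excerpt does not state the exact matching procedure, so the sketch below assumes a simple greedy one-nearest-neighbour match (with replacement) on estimated propensity scores, one common way such matched control groups are constructed:

```python
def nn_match(ps_treated, ps_control):
    """For each treated unit's propensity score, return the index of the
    control unit with the closest score (1-nearest-neighbour matching
    with replacement). Inputs are plain lists of estimated scores."""
    return [
        min(range(len(ps_control)), key=lambda j: abs(ps_control[j] - p))
        for p in ps_treated
    ]


# Toy scores: the two treated units are matched to controls 0 and 1.
print(nn_match([0.20, 0.80], [0.15, 0.78, 0.95]))
```

After matching, the bias diagnostics in Tables 5 and 6 check that the matched controls resemble the treated campaigns on observables, so the regressions in Table 7 compare more similar campaigns.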
Table 8
Robustness checks: alternative measures of failure severity
| | (1) Severity | | (2) LastBackers |
|---|---|---|---|
| TextDiff | 0.799*** (0.129) | TextDiff | 0.925*** (0.142) |
| TextDiff*Severity | 0.914 (0.893) | TextDiff*LastBackers | −0.269** (0.111) |
| ImageDiff | 0.421** (0.217) | ImageDiff | 0.669*** (0.245) |
| ImageDiff*Severity | 6.336*** (1.592) | ImageDiff*LastBackers | −0.517*** (0.197) |
| Severity | −7.840*** (0.320) | LastBackers | 1.460*** (0.050) |
| Goal | −0.435*** (0.038) | Goal | −1.124*** (0.050) |
| Images | 0.364*** (0.044) | Images | 0.274*** (0.046) |
| Video | 0.376*** (0.102) | Video | 0.280** (0.110) |
| TextLens | 0.297*** (0.053) | TextLens | 0.263*** (0.054) |
| Options | 0.044*** (0.010) | Options | 0.016 (0.010) |
| RewardPrice | 0.169*** (0.036) | RewardPrice | 0.309*** (0.041) |
| Duration | −0.137 (0.088) | Duration | −0.243*** (0.091) |
| GoalChange | 0.623*** (0.045) | GoalChange | −0.078* (0.046) |
| CmpChange | 0.088 (0.087) | CmpChange | 0.078 (0.093) |
| Interval | 0.038 (0.039) | Interval | −0.043 (0.044) |
| Year fixed effects | Yes | Year fixed effects | Yes |
| Month fixed effects | Yes | Month fixed effects | Yes |
| Category fixed effects | Yes | Category fixed effects | Yes |
| Cons | −0.499 (0.491) | Cons | 6.148*** (0.531) |
| Observations | 5,798 | Observations | 5,798 |
| R2 | 0.434 | R2 | 0.467 |
Note(s): Robust standard errors are reported in parentheses; *P < 0.1, **P < 0.05, ***P < 0.01
Source(s): Author’s own creation
Table 9
Robustness checks: alternative dependent variables
| | Backers | | | Pledges | | |
|---|---|---|---|---|---|---|
| | Baseline | Moderating effects | | Baseline | Moderating effects | |
| | (1) | (2) Non-severe failure | (3) Severe failure | (4) | (5) Non-severe failure | (6) Severe failure |
| TextDiff | 0.255*** | 0.157* | 0.264*** | 0.291*** | 0.120 | 0.343*** |
| | (0.049) | (0.091) | (0.056) | (0.101) | (0.117) | (0.125) |
| ImageDiff | 0.276*** | 0.123 | 0.299*** | 0.372** | −0.001 | 0.501** |
| | (0.087) | (0.131) | (0.106) | (0.170) | (0.177) | (0.228) |
| SevereFailure | −1.552*** | | | −2.521*** | | |
| | (0.038) | | | (0.062) | | |
| Goal | 0.108*** | 0.630*** | −0.024* | 0.133*** | 0.912*** | −0.040 |
| | (0.014) | (0.029) | (0.015) | (0.027) | (0.041) | (0.032) |
| Images | 0.304*** | 0.168*** | 0.291*** | 0.509*** | 0.175*** | 0.555*** |
| | (0.019) | (0.029) | (0.023) | (0.036) | (0.040) | (0.047) |
| Video | 0.245*** | −0.002 | 0.289*** | 0.577*** | 0.111 | 0.627*** |
| | (0.043) | (0.074) | (0.050) | (0.084) | (0.103) | (0.110) |
| TextLens | 0.176*** | 0.115*** | 0.176*** | 0.325*** | 0.167*** | 0.336*** |
| | (0.021) | (0.035) | (0.025) | (0.041) | (0.044) | (0.053) |
| Options | 0.057*** | 0.037*** | 0.062*** | 0.082*** | 0.026*** | 0.107*** |
| | (0.006) | (0.005) | (0.008) | (0.009) | (0.006) | (0.015) |
| RewardPrice | −0.031** | −0.171*** | −0.014 | 0.116*** | 0.004 | 0.099*** |
| | (0.014) | (0.029) | (0.016) | (0.029) | (0.038) | (0.035) |
| Duration | 0.028 | 0.009 | 0.047 | −0.012 | −0.087 | 0.076 |
| | (0.034) | (0.047) | (0.043) | (0.069) | (0.069) | (0.096) |
| GoalChange | 0.181*** | 0.404*** | 0.121*** | 0.308*** | 0.579*** | 0.236*** |
| | (0.016) | (0.040) | (0.017) | (0.032) | (0.058) | (0.036) |
| CmpChange | −0.005 | 0.113** | −0.058 | −0.040 | 0.058 | −0.066 |
| | (0.037) | (0.054) | (0.045) | (0.072) | (0.072) | (0.095) |
| Interval | 0.047*** | −0.029 | 0.063*** | 0.072** | −0.076** | 0.101*** |
| | (0.015) | (0.026) | (0.018) | (0.030) | (0.037) | (0.037) |
| Year fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Month fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Category fixed effects | Yes | Yes | Yes | Yes | Yes | Yes |
| Cons | −0.171 | −1.758*** | −1.468*** | 1.449*** | −1.127** | −0.978 |
| | (0.336) | (0.353) | (0.296) | (0.468) | (0.491) | (0.641) |
| Observations | 5,798 | 1,912 | 3,886 | 5,798 | 1,912 | 3,886 |
| R2 | 0.549 | 0.579 | 0.335 | 0.469 | 0.600 | 0.279 |
Note(s): Robust standard errors are reported in parentheses; *P < 0.1, **P < 0.05, ***P < 0.01
Source(s): Author’s own creation
Table 10
Moderating effects of project type
| | (1) | (2) |
|---|---|---|
| TextDiff | 1.440*** (0.215) | 0.361** (0.158) |
| ImageDiff | 0.017 (0.373) | 0.646** (0.267) |
| SevereFailure | −3.182*** (0.140) | −2.342*** (0.111) |
| Goal | −0.405*** (0.058) | −0.527*** (0.050) |
| Images | 0.533*** (0.071) | 0.293*** (0.057) |
| Video | 0.197 (0.153) | 0.536*** (0.145) |
| TextLens | 0.204** (0.081) | 0.341*** (0.069) |
| Options | 0.036** (0.015) | 0.058*** (0.015) |
| RewardPrice | 0.177*** (0.055) | 0.212*** (0.047) |
| Duration | −0.206 (0.150) | −0.279*** (0.105) |
| GoalChange | 0.623*** (0.076) | 0.540*** (0.056) |
| CmpChange | −0.028 (0.160) | 0.101 (0.110) |
| Interval | 0.048 (0.067) | 0.027 (0.047) |
| Year fixed effects | Yes | Yes |
| Month fixed effects | Yes | Yes |
| Category fixed effects | Yes | Yes |
| Cons | 0.872 (0.811) | 1.682*** (0.638) |
| Observations | 2,547 | 3,251 |
| R2 | 0.467 | 0.369 |
Note(s): Robust standard errors are reported in parentheses; *P < 0.1, **P < 0.05, ***P < 0.01
Source(s): Author’s own creation
© Emerald Publishing Limited.
