By Craig Fowlie, Global Editorial Director, Social Science Books, Taylor & Francis & Alan Jarvis, Global Publishing Director, Taylor & Francis Books
I spent the first half of my publishing career in almost complete ignorance of approval systems - what they were, how they worked, and how important they were. I was in the editorial department so perhaps this wasn't that surprising.
Even when I did talk to colleagues in sales and marketing, we might discuss our relationship with high street bookshops - then an important driver of business - or how we might persuade an academic to adopt our textbooks for their courses.
At that time, what drove our library sales was, for me and for all but a small number of specialists in the company, something of a mystery.
Gradually, more through a process of knowledge osmosis than active engagement, I began to understand how the approval programmes worked and the role they played in our company. But I was still one of the select few who noticed such things and thought about their significance to our business model.
If you’ll forgive this exercise in personal recollection, what I think it demonstrates is that while ignorance can be bliss, it can also be extremely dangerous, especially in a sector like academic publishing which has arguably changed more in the previous decade than in the previous century.
When the print sales of some of our library products, especially our research monographs, began to decline markedly as approval plans began to be superseded, people across the company noticed, not just the small band of specialists who had previously concentrated on such things.
The lesson we drew from this was that it was essential for more people inside the company to understand the nascent changes to purchasing patterns in the library sector, to ensure that we weren’t caught out by developments in the marketplace again and that we were in a position to explain these changes to our key stakeholders – our staff and our authors.
As the largest publisher of humanities and social science research monographs, we needed to engage with a significant shift from scholarly libraries buying ‘just in case’ to ‘just in time’. It was imperative, therefore, for our staff to understand the implications of libraries buying our products via demand-driven (patron-driven) acquisition, evidence-based selection and digital short-term loans, for example.
This rapid change in the buying and consumption procedures at libraries was a textbook example of how digital and technological developments can disrupt longstanding business models and familiar commercial practices – the so-called ‘disruptive innovation’ much discussed by business analysts and tech gurus.
But while such disruption creates uncertainty by challenging the established supply chain, it also offers openings for those companies willing to embrace technological innovation and stay true to their core purpose.
We believe that the primary function of a publisher is to connect authors with readers, and the specific role of academic publishers is to connect more knowledge and expertise to more students, professionals, scholars, and researchers. People should buy books because they value the content, not just because they tick the right boxes on an approval plan.
Technology offers us exciting opportunities to broaden and deepen our involvement in the world of scholarly communication. New ways to connect our authors with more readers, new ways to enrich their content, new ways to improve access to their work and new ways to provide richer information on how people respond to it.
For academic publishers, this means not only understanding the emerging library acquisition models, but also developing skills, experience, knowledge of, and engagement with, such key issues as discoverability, usage, accessibility, networks, metrics, and analytics.
Until relatively recently, academic publishers existed in a state of blissful ignorance about what happened to their books after a sale. Usage (or the lack of it) was someone else’s problem. But in the new library world, where usage drives purchase rather than being a by-product of it, it is obviously essential that we make our content as easy to find - as discoverable - as possible.
In this regard, academic book publishing is playing catch-up with scholarly journals, where enhancing discoverability through basic metadata such as keywords and abstracts, and through the creation of DOIs to facilitate linking to individual articles, is a well-established part of the publishing process.
Catching up with our journals colleagues in generating chapter-level metadata has necessitated changes to our production workflows, our editorial practices and even our relationship with our authors, as we explain to them why such enhancements are essential if their book content is to achieve the exposure and sales it deserves.
We need our own author base on board with the discoverability agenda, especially as they are better placed than anyone else to create metadata such as abstracts and keywords for their own books.
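As a purely illustrative sketch of the chapter-level metadata described above - the record structure, field names and DOI here are hypothetical, not an actual publisher schema - such a record, and a minimal test of its discoverability, might look like this:

```python
# Hypothetical chapter-level metadata record. Field names, values, and the
# DOI are illustrative placeholders only, not a real publisher's schema.
chapter_record = {
    "doi": "10.0000/example-chapter-1",  # placeholder DOI for illustration
    "title": "Approval Plans in Transition",
    "book_title": "Scholarly Publishing in a Digital Age",
    "abstract": "Examines the shift from 'just in case' to 'just in time' "
                "library acquisition and its impact on monograph sales.",
    "keywords": ["approval plans", "patron-driven acquisition", "monographs"],
}

def is_discoverable(record):
    """A record is minimally discoverable if it carries a DOI,
    an abstract, and at least one keyword."""
    return (bool(record.get("doi"))
            and bool(record.get("abstract"))
            and len(record.get("keywords", [])) > 0)

print(is_discoverable(chapter_record))  # True for the record above
```

The point of the sketch is simply that discoverability can be checked mechanically: a chapter lacking any one of these fields is effectively invisible to linking and search services.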
But increasing discoverability is also essential for our backlist titles, and this necessitates partnerships with third-party technology specialists who can provide the software, technical expertise and experience to facilitate it.
We’re currently working with a number of digital partners on plans to create abstracts and keywords for one million backlist chapters, on semantic enrichment and tagging of our content on a granular level, on creating more relationships between our content and on other ways of improving searching and discoverability.
At the same time, we are also experimenting with a number of digital products and platforms from Routledge Handbooks Online and the Routledge Encyclopaedia of Modernism to a new unified platform for all our books and journal content. Similarly, our partnerships with our key library suppliers, vendors and information providers such as ProQuest, EBSCO, and Bertrams are an absolutely integral part of this process.
The smoother the information workflow and the richer the data we can supply them, the more likely our content is to be accessed, read, purchased, interacted with and to have impact.
Discoverability is, therefore, a multi-faceted and constantly evolving process. At the metadata level, having abstracts, keywords and some searchable chapter content is a prerequisite for having books listed on the most popular scholarly search engines like Google Scholar. It is also crucial that our metadata aids readers in higher education institutions to find our content through library discovery services, and that our own digital platforms create a seamless end-user experience for scholars when they are looking for the research that matters to them.
Increased discoverability also offers considerable opportunities for active curation of our content which we can then package on behalf of our customers. It would be relatively straightforward to generate a collection of material on, say, the Zika virus or the pros and cons of the UK remaining in the EU from our books and journals and this could have considerable utility for academic research or teaching. We are also developing new software and state-of-the-art search facilities which will allow our customers to curate our content themselves.
Other technological advances mean that we are now much better placed to measure the usage and impact of our material. This is again an area in which books are playing catch-up with journals, where basic metrics such as the number of downloads, the volume and frequency of citation, and impact factors are ingrained parts of the scholarly architecture.
As many higher education institutions and other funding bodies now use the social impact of research as one of their key criteria for awarding grants, engagement with this is vital for scholarly publishers.
Once, the only measures of engagement and impact that a book publisher could report back to an author were reviews in scholarly journals (often several years after publication) or in specialist publications like Choice, and the number of print copies sold.
Nowadays, as well as these crude but important measures (print sales still matter to most publishers and authors still like reviews!), we are able to draw on a far more sophisticated series of metrics such as most downloaded chapters, number of citations of the work and the presence of the book on social media – Twitter mentions and retweets, Facebook likes, YouTube hits, blog and Wikipedia references, Goodreads and Amazon reviews, discussions on scholarly social platforms such as SSRN, academia.edu etc.
Developing this capacity has also necessitated structural changes within the company – the creation of digital product teams, a research and analytics department and much ongoing education of staff – as well as networking and partnering with third parties such as Web of Science for their Book Citation Index, Scopus for scholarly book citations, and Altmetric for social media metrics.
These metrics provide both the publisher and the author with a much more sophisticated and deeper understanding of how their content is being used and engaged with, but they can also increasingly be employed by publishers as a source of information for their acquisition editors to source content that is in demand.
In other words, editors now have demonstrable quantitative evidence of which authors and which topics are generating the most downloads, citations and social media mentions, all of which should allow them to make better informed decisions about which areas to sign up books in and whom they should approach to write them.
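By way of a toy illustration only - the topics, counts and weights below are invented for the sketch and do not reflect any real publisher's data or formula - an editor's dashboard might combine such signals into a simple per-topic ranking:

```python
# Invented sample data: per-topic engagement signals (all counts illustrative).
topics = {
    "urban studies":    {"downloads": 12000, "citations": 340, "mentions": 95},
    "security studies": {"downloads": 8000,  "citations": 610, "mentions": 40},
    "sports science":   {"downloads": 15000, "citations": 120, "mentions": 220},
}

# Arbitrary weights chosen for the sketch: citations weighted most heavily.
WEIGHTS = {"downloads": 0.001, "citations": 1.0, "mentions": 0.1}

def score(signals):
    """Weighted sum of a topic's engagement signals."""
    return sum(WEIGHTS[key] * value for key, value in signals.items())

# Rank topics from strongest to weakest combined engagement.
ranked = sorted(topics, key=lambda topic: score(topics[topic]), reverse=True)
print(ranked)  # ['security studies', 'urban studies', 'sports science']
```

The interesting design question such a sketch raises is the weighting itself: whether citations, downloads or social mentions should dominate depends on whether the editor is chasing scholarly impact or classroom reach.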
The developments outlined above are only part of the rapidly evolving landscape for scholarly publishers – one area where more data would be incredibly useful is which parts of a book are actually read, as Joe Esposito discusses here – but these trends all point to the fact that we have moved from an environment where ignorance is bliss to one where information is boss!