STM & PROFESSIONAL PUBLISHING
The Royal Society of Chemistry and MIT have inked what may be the first “read and publish” agreement in the United States. As part of the two-year arrangement, MIT will continue to subscribe to RSC’s journals, but all articles written by MIT authors will immediately be made Open Access upon publication. The news prompts many questions. Will other publishers join RSC in agreeing to such terms? If the top 100 (or even 50) largest universities were to sign similar arrangements with every publisher, would the smaller universities continue to subscribe to journals published under this model when the bulk of the articles are now OA? This, of course, is both the hope and the Achilles heel of such agreements: they (much like parades, cotton candy, and green OA) are best in moderation.
As to the question of whether the read and publish model will reach a critical mass in the US, our friend and colleague Roger Schonfeld is skeptical. He has analyzed the situation and concludes that the environment that gave rise to the read and publish model is peculiar to the nature of funding in Europe and has no precise analogue in the US. In particular, Roger points to the diverse and distributed nature of decision-making in US higher education, where nationwide negotiations are hard to envision.
Source: MIT Libraries, Scholarly Kitchen
The Chronicle of Higher Education admirably fact-checks a claim it made in its own pages that “the average academic article is read by about 10 people, and half of these articles are never read at all.” Reporter Arthur G. Jago follows the thread of claims back to the original source, which turns out to be a 2007 Physics World article that has been formally cited over 300 times. When Jago contacted the author of the paper, the author confessed that the statement was added not by him but by the editor of the journal. Jago contacted the editor in question, who cited a course he had taken years before. Not to be deterred, Jago tracked down the course instructor, who could not provide a source for the statement (but insists there must be one). Oddly, no one bothered to check with a publisher who (with access to usage statistics) could clear up the misinformation in a moment. While we are heartened to see the extent to which the Chronicle will go to debunk a specious claim, we were subsequently dispirited by the number of people we saw tweeting the headline for this story as evidence that half of academic articles are not read (the very claim the article debunks).
Jago’s article reminded us of a piece in Nature from December in which Richard Van Noorden deconstructs the myth of “uncited” articles and concludes that the uncited constitute a very small number. Van Noorden provides a careful analysis of many “uncited” pieces and meta-articles about them. Both the Jago and Van Noorden articles call to mind the quip by programmer Alberto Brandolini that “The amount of energy necessary to refute bullshit is an order of magnitude bigger than to produce it.” Indeed.
Source: Chronicle of Higher Education, Nature
Edwards Brothers Malloy (formed from the merger of Ann Arbor firms Edwards Brothers and Malloy Inc. in 2012) closed its doors on June 15, after 125 years in the printing business. The company had initially planned a restructuring to consolidate all offset printing into its Michigan location (closing operations in North Carolina) by the end of 2018, but subsequently decided to close. We remember fondly our trips to Ann Arbor for the EB Book Manufacturing Seminar, where we got to “play printer” for 2 days of hands-on experience with how books are built.
Source: Publishers Weekly
Yet more metrics
Source: Physics Today
China gets serious about policing research misconduct. New initiatives include a database of misconduct cases and a blacklist of journals deemed to be of poor quality. While such blacklists abound, we are not aware of one developed and maintained by a state agency. (MEDLINE, a service of the National Library of Medicine, is, of course, a state-managed whitelist.) The implications give one pause. There is likely to be scant recourse for publishers who feel they have been inappropriately listed. While maligned publishers could threaten Jeffrey Beall with lawsuits, good luck suing Beijing. On the other hand, the proliferation of predatory and fraudulent journals continues unfettered, with no real industry or academic initiatives to combat them. It was inevitable that someone would attempt to step in to clean up this mess. We are cautiously optimistic some good may come of this development, but the devil, as always, will lie in the details.
In STM publishing we have our own version of the “culture wars,” and that is the ongoing debate, which at times becomes vitriolic, over just what scholarly publications should look like and why we continue to use the PDF despite all the swell new technologies that have come onstream. In a thoughtful piece by Sarah Andrus of OUP, we hear the perspective of enlightened conservatism. Without dismissing the merits of new technical capabilities, Andrus argues that the current form of scientific articles satisfies the requirements of the researchers themselves and is thus not likely to make way for a new form, unless that new form displays properties that are unequivocally superior to what we rely on today.
Source: Scholarly Kitchen
Wiley reports its fiscal year 2018 results. The company cites a “strong performance in Research, growth in Solutions, and broader operating efficiencies” as the cause for a slight uptick in revenues and more substantial growth in operating income. Also of note, Wiley reports that 73% of its revenues now come from digital products, up from 68% in the prior year.
After 22 years at the helm of Nature, Philip Campbell, the 7th Editor-in-Chief, bids farewell; he moves into the newly created role of Editor-in-Chief of Springer Nature, in which he will be responsible for editorial policies across the company. Magdalena Skipper begins her tenure as Nature’s 8th Editor-in-Chief, the first woman to hold this role in the publication’s nearly 150-year history.
The Andrew W. Mellon Foundation has made a grant to the University of North Carolina Press for a pilot project for the publication of Open Access monographs. The challenges here are very large, but the plan put together by Press Director John Sherer is the most thoughtful and business-like piece we have yet seen on this difficult topic. Sherer’s perspective is that university presses publish all books the same way, whether they are titles with significant market potential (by university press standards) or monographs that will sell only a few hundred copies to academic libraries and a handful of specialists. Thus UNC is experimenting with “breaking apart” the workflow usually attached to monograph publishing and bringing it closer to original Web publishing. The UNC (pilot) model proposes that publication be broken into three distinct stages: 1) editorial—much as we see it today, 2) direct and immediate Web publishing in OA form, and 3) formal publishing, to follow many months later after market opportunities have been tested through the OA edition. Each of these steps has the potential to be separately funded. The first two stages, for example, may be supported by APCs of different kinds (author-pays, provost-pays, funder-pays, and so forth), and the third stage, which is optional, would be supported by the marketplace.
Source: Longleaf Services
A planned $160 million renovation of the University of Virginia’s beloved Alderman Library has caused a kerfuffle. Some of the faculty and students are concerned that the renovation plans will reduce the number of print volumes in the library and will even (gasp) employ compact shelving systems. Even for a university known for caring deeply about architecture, it would be easy to dismiss the whole affair as a tempest in a teapot. However, we are glad to see both the ongoing investment in world-class academic library facilities and impassioned debate about the continued value of print archives. (Also, compact shelving really is terrible.) A relatively recent piece in the historic university’s alumni magazine places the renovation in context, situating the role of Alderman among the 14 UVA libraries and their central role in the professional lives of faculty and students (the cozy photos have us reaching for a hardcover and a cup of tea).
Source: Chronicle of Higher Education, Cavalier Daily, Virginia Magazine
Meanwhile, across campus, the University of Virginia Press becomes a service provider, striking a deal with the University of Delaware to provide copy editing, design, production, and distribution. Distribution is actually via Longleaf Services, itself a venture of the University of North Carolina Press. We have long been advocates of such interpress partnership arrangements as they provide a means to leverage economies of scale while maintaining editorial independence.
Source: University of Virginia Press
We follow Lorcan Dempsey’s work carefully and are particularly impressed with his four-part series of blog posts on library consortia. Whether you are a librarian or a publisher, these pieces will illuminate how consortia work and the implications of these networks of libraries and librarians. Central to this analysis is the place of scale in consortial operations. What should be done at the highest and largest level? What should be scoped more locally and with greater granularity? While consolidation on the publishing side, with companies acquiring other companies, is well understood at this time, the pace of consolidation in the library world has been rapid. In negotiations today, giants are increasingly facing off against giants.
The California Digital Library has thrown down the gauntlet in its negotiations for materials, especially journals. In a manifesto that is being passed around the Internet, CDL outlines its goals for its dealings with publishers, including a bulleted list of priorities in negotiations. Most of this is obvious (lower prices, more services), but some of the items will require careful assessment by publishers, especially the requirement that all papers by University of California faculty immediately be made Open Access upon publication. As CDL notes, the UC faculty is responsible for 8% of the US total of published papers; thus granting this concession is likely to ripple through all parts of the scholarly communications ecosystem. (It’s worth remembering that the UC system has 10 campuses and includes the powerhouses of Berkeley, UCLA, and UCSD.) The language of the manifesto is high-minded and abstract, much like that of a political movement, which we hope will not set the tone for inflexible negotiations. It is not clear why these demands were made public when CDL knows exactly who its vendors are and is in regular communication with them; this only strengthens the publishers’ hands. As a rule, success in negotiations comes not from where you stand but where you sit.
Source: University of California
Preliminary findings (summarized in an interview) have been released from a study by the ScholCommLab of review, promotion, and tenure policies. While we wait for the full study to be completed and its results published, it is worth noting that the researchers have found little uptake of OA publishing in the policies of research-oriented institutions: “Only about 5% of the institutions studied made explicit mention of open access in their guidelines.” Further, references to OA are typically offered only to warn authors against predatory and other poor quality OA journals. Based on our own experience interviewing academics, we are not surprised by these findings. Quality (and sometimes, unfortunately, quantity) of work, and not the business model under which it was published, remains (thankfully) the primary criterion for promotion and tenure.
Hot on the heels of GDPR, the EU contemplates another set of regulations aimed at reining in the (largely American) tech industry. One proposed change (Article 13) to the European online copyright rules would require technology platform providers to proactively screen content uploaded to their platforms for copyright violations, as opposed to the current state, which requires copyright holders to inform said platforms of violations. A second change (Article 11) would compensate (primarily news) publishers by allowing the publisher to charge for use of snippets in search results and social feeds. The tech industry has waged an extensive lobbying and information campaign against these changes, a campaign that appears to be working. Opponents of the bill in the European Parliament have succeeded in employing a parliamentary maneuver that reopens the bill for amendment before it can be voted on. We anticipate the public relations war over online copyright changes to intensify. The publishing industry has been slow to counter the tech industry’s narrative (“it will kill memes,” “it will kill the Internet”) and the parade of tech glitterati opposing the measure. This is starting to change (though may be too little too late) with the likes of Paul McCartney throwing his not inconsiderable weight behind the proposed rule changes.
Source: TorrentFreak, Financial Times, Politico
Among many other ignominious hallmarks, 2018 is likely to be remembered as the year that “AI” moved from the realm of science fiction to that of marketing. It is scarcely possible to order a taco without hearing about the AI process used to either make the taco, source its ingredients, or deliver it to your table. (And we pity anyone so antiquated and out of touch as to pay for said taco with anything other than the latest cryptocurrency.) It is into this moment of AI over-exuberance that Michael Jordan’s (not that one) recent piece on Medium shines like a laser beam, cutting through the encrustation of bluster and hype. Jordan is a Professor in the Department of Electrical Engineering and Computer Sciences and the Department of Statistics at UC Berkeley. His article provides a history as to how we got here and unpacks the term “AI” into its various flavors and what they really mean from a computational perspective.
Meanwhile, the Brookings Institution published this piece on the uses of AI by not-for-profits. The authors of this article, while not as careful as Jordan (see above), do at least parse the distinctions between “AI,” “data analytics,” “machine learning,” and “automation.” There are some good pointers here for tools to help with fraud detection, online troll patrol, emergency responses, and other activities.
Source: Brookings Institution
In this captivating history of mathematical type, Eddie Smith tells the story not only of the development of printed math symbols but, in doing so, of printing since Gutenberg. It has taken over 500 years for typeset math to reach the point where it can adequately reproduce the work of a human hand. This piece is essential reading for anyone interested in the history of academic publishing.
Source: Practically Efficient
David Papineau postulates in the TLS that much of the so-called reproducibility crisis in science can be traced back to a basic misunderstanding of significance tests: “One of the great scandals of modern intellectual life is the way generations of statistics students have been indoctrinated into the farrago of significance testing.” This is not an essay debating the proper threshold for P-values but rather one about whether objective significance tests are even sensible in many cases. It turns out that Thomas Bayes (he of Bayesian probability theory) had given this matter some thought and argued for applying, in many cases, initial probabilities to hypotheses based on prior experience. A good explanation of how we got here, statistically speaking.
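Bayes’s point can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions of our own, not figures from Papineau’s essay: if only a modest fraction of tested hypotheses are true to begin with, even a result that clears the P < 0.05 bar leaves a surprisingly unimpressive probability that the hypothesis is correct.

```python
# Illustrative sketch (assumed numbers, not from the essay): how a
# Bayesian prior changes what a "significant" result actually means.
prior = 0.10   # assumed share of tested hypotheses that are actually true
alpha = 0.05   # significance threshold (false-positive rate)
power = 0.80   # assumed probability of detecting a true effect

# Total probability of obtaining a significant result, true or not
p_significant = power * prior + alpha * (1 - prior)

# Bayes' rule: probability the hypothesis is true given significance
posterior = (power * prior) / p_significant

print(f"P(hypothesis true | significant result) = {posterior:.2f}")
```

With these assumptions the posterior comes out to about 0.64: roughly one in three “significant” findings would be false, despite every test being run correctly at the 5% level.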
Source: Times Literary Supplement
Ken Auletta’s entertaining and informative piece in the New Yorker describes the arc from the era of the creative element in advertising (whom Auletta calls “Mad Men,” a reference to Madison Avenue advertising and, of course, the eponymous TV show) to our current environment, represented by “Math Men”: the world of Big Data and massive automation. Auletta shrewdly notes that both the Mad and the Math Men have the same objective, which is to provide more information to advertisers so that they can target their ads with greater efficiency. So who has the moral high ground here? Professional and academic publishers that have been investing in the capture and analysis of end-user information are likely to find their activities regulated in order to protect the business models of media companies entirely outside the world of science and the academy. Auletta’s piece is taken from his newly published book.
Source: New Yorker, Penguin Random House
A long read from the Economist rarely disappoints, and that was our feeling about this fascinating piece on what data analytics can and cannot do. The essay surveys any number of instances where data analytics has surfaced important signals, including the popular account of the use of data analytics as captured in the book and movie Moneyball. But the real question the author asks is when a hunch is still a good thing, something that goes beyond all of our computation, however well thought out it may be. The example used to make the case for hunches is that of John Hammond, the music producer—usually called “the legendary John Hammond,” which is not the most quantitative description one can come up with—whose career is a long series of astonishing discoveries of creative talent, from Billie Holiday to Bob Dylan. This article is worth reading in its own right, but it is particularly apt in our professional world as we contemplate where data analysis ends and human intuition comes in.
Source: Economist 1843 Magazine
FROM OUR OWN PENS
Joe has authored a piece in the Scholarly Kitchen on the MIT-RSC read-and-publish business model. Joe notes that the model has an inherent structural problem—namely, that the smaller institutions, which produce only a small number of research articles every year, may find themselves in the unexpected situation whereby they can cancel their subscriptions without significantly diminishing access to the many papers written at the largest institutions. When Harvard pays for its authors to publish all their papers as Gold OA, the beneficiaries are the smaller institutions, which now get OA at no cost. Put another way, a system whereby the costs are widely distributed would see those costs consolidated onto the largest research-producing institutions. The consolidated costs would require Harvard to pay far more than it pays now in the distributed system. Will research-intensive institutions be willing to subsidize everyone else?
Source: Scholarly Kitchen
JOIN CLARKE & ESPOSITO
Clarke & Esposito is seeking an Associate. This is an opportunity for a rising star looking for an intellectual challenge as well as work-life balance and flexibility. The individual selected for this role will have the opportunity to work on business strategy projects with the top executives at not-for-profit societies, publishers, universities, software companies, and other organizations working in professional publishing and communication. It is a great position for interacting with all segments of the industry and learning about how many different organizations operate. For inquiries or applications, please contact John Hartnett (firstname.lastname@example.org) at Jack Farrell and Associates, which is conducting the search.
If you do not resist the apparently inevitable, you will never know how inevitable the inevitable was. —Terry Eagleton