I'd prefer not to have these prominently displayed on the site. They're useful internally (though Google Analytics probably covers that already), but I don't think we want prominent popularity markers on the site.
Thanks @ianmilligan1, is that because you don't want to promote what might look like internal competition? I figured we'd do quite well compared to your typical history publication, so I thought it might be a good way to show that PH tutorials were widely accessed and talked about, rather than suggest tutorial X was more popular than Y.
Yeah, basically. I think we can convey the wide access and discussion better than putting altmetrics badges everywhere. I also think they're kinda tacky. 😄
I second @ianmilligan1 👍 A blog post on our annual metrics is a great way to promote and question our engagement both internally and externally with context and interpretation, without signing on to a system that, at its core, promotes a kind of commodification of scholarly labor.
Thanks for the feedback. Does anyone have a different view before we move on?
Even though I like altmetrics in principle as an alternative measure of scholarly engagement and value, I second @ianmilligan1 and @mdlincoln: used on their own, they seem to promote a "commodification of scholarly labor" and a purely quantitative approach to our pedagogical endeavour. A reflective blog post with some metrics will give a more holistic idea of our achievements.
Thanks. I'm not sure I understand what you mean by 'commodification of scholarly labor'.
Showing metrics of various kinds is now fairly standard in academic publishing, and we're quite behind the curve in this regard. For example, this paper of mine has 5 different types of metrics shown on the home page:
Is the sentiment I'm getting that we are anti-metric? If so, we're really missing a trick in promoting the work of our authors. My paper in that screenshot has 325 views in 4 years. Ian's 'Topic Modelling' lesson has more than 100,000 views. That's a good-news story that we should be proud to tell. And it is in line with #686 and plans for demonstrating the impact of what we're doing. Blog posts are part of our solution of course, and will continue to be.
So can I just clarify what the resistance is against so I can go looking for ideas that the group will be happier to endorse?
Uhmmm I understand some concerns about the "commodification of scholarly labor" but, on the other hand, it is good to share some of our traffic/visits, not only to demonstrate impact but also because we love being open. Maybe, as an alternative to those badges, we could implement a filter that organizes lessons by "most visits"? (Not sure if that is possible on a technical level.)
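Something like this is what I have in mind — just a rough sketch, assuming we could export per-lesson pageview totals from Google Analytics into a file the site can read (the file path and field layout here are invented for illustration):

```python
# Rough sketch: order lessons from most to least viewed, assuming we export
# per-lesson pageview totals from Google Analytics into a JSON file.
# The file path and its layout ({"slug": count}) are hypothetical.
import json

def lessons_by_popularity(path="analytics/pageviews.json"):
    """Return lesson slugs sorted by view count, highest first."""
    with open(path, encoding="utf-8") as f:
        counts = json.load(f)  # e.g. {"topic-modeling-and-mallet": 103000, ...}
    return sorted(counts, key=counts.get, reverse=True)

if __name__ == "__main__":
    for slug in lessons_by_popularity():
        print(slug)
```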
On @arojascastro's filter point, that kinda view comes as standard with Ubiquity Press (e.g. https://olh.openlibhums.org/ and https://www.liberquarterly.eu/). Not sure how I feel about it tbh.
Ian's 'Topic Modelling' lesson has more than 100,000 views. That's a good-news story that we should be proud to tell.
+1000
As a side note to this, I am wondering when and how we should make a decision.
I mean, do we have to reach a quorum in order to make a change or a decision? I am wondering this not only because of this discussion but because of others we have had. We are about 15 members, but only a few have the time or want to discuss or express their views. I do not think participation has to be compulsory, but I do think it is desirable to reach a quorum most of the time. So if we are 15 people, at least 8 should be enough. I am saying this because when I collaborated with the EADH, the president called for a vote every time a big decision had to be made. There was usually a week for voting... of course, some people did not vote because they did not have the time or interest, which is fine. That would make everything slower but... what are your views on this? I can open a new issue for this if you want.
Sounds like a good discussion @arojascastro. I agree. A new issue.
Heh, comparing our topic modelling lesson to your Social History article is kind of an apples and oranges type thing.
I guess I like the blog post to demonstrate impact – and that can be part of our "metrics," as is the fact that lessons integrate quite nicely into Google Scholar, ORCID, and other platforms – but I think highlighting these prominently and continuously in the user-facing parts of the site valorizes hits. And with Altmetric more generally, I suspect that for standard pieces one of the major variables is the number of followers the authors have on their personal Twitter accounts (great for me, but largely for reasons unrelated to the piece).
I think we should talk about this on the blog, and in the 'about' page, and in presentations, I just don't like baking it into the infrastructure itself.
My hesitation about the Altmetric badge per se is that it showcases influence without putting much context on it, and that it can sometimes be used as a proxy measure for quality. I'm not saying that Ian's 100,000 views aren't a great number to celebrate, but I'm thinking that such approaches, available, as @ianmilligan1 mentioned, in the user-facing parts of the site, might give a false impression of our overall pedagogical goal. I repeat: I'm not negative about Altmetric, I am just a bit hesitant about how it'll align with the overall PH profile and how it'll be received by the community.
Ok. I hear the resistance to Altmetric. That's fine.
I'd still like to move forward with giving authors (and us) ways to highlight the value of this work in ways that make sense to employers. This was something recently discussed in #730 Discussion on institutional credit. So I'd welcome ideas (with your head of department hat on, not your DH blog-reading hat on).
I am going to act as a mediator, if I am allowed :)
So if the quorum is 6 and the default position under lazy consensus is yes, we have 3 noes, 1 yes, and 2 don't-know/no-answers, right?
But, because it is related to #730, we should think about alternatives. So please submit your ideas or proposals in the following days so we can move on or close this issue.
Golden rule: alternative proposals and amendments should come especially (but not only, of course) from the people saying no.
Publishing a blog post is my alternative proposal, and I like the idea of volume numbers.
North America is different, I guess, and so is each department. To my mind the onus is on the author and the venue to convey impact. Hits are one dimension of impact, but they're not universally accepted nor would I just pass along hits to my department chair. Was the lesson cited? Was it used in teaching venues and events? Has it been built upon? We can signal that it's important with volume numbers, but I don't think there are easy computational solutions here.
I'm not sure if we can link to it easily/dynamically, but Google Scholar does have citation counts for some of our lessons: https://scholar.google.co.uk/scholar?cites=2008642895384331595&as_sdt=2005&sciodt=0,5&hl=en
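If we ever wanted to generate those links ourselves, it would be roughly this — a sketch only, assuming we recorded each lesson's Scholar cluster ID (the long number in the `cites=` parameter of the URL above) in its metadata, which we don't currently do:

```python
# Rough sketch: build a Google Scholar "cited by" link for a lesson, assuming
# we store the Scholar cluster ID (the number in the cites= parameter) in each
# lesson's metadata. The cluster ID below is the one from the URL above.
from urllib.parse import urlencode

def scholar_cited_by_url(cluster_id, lang="en"):
    """Return the Scholar results page listing works that cite this lesson."""
    params = {"cites": cluster_id, "hl": lang}
    return "https://scholar.google.com/scholar?" + urlencode(params)

print(scholar_cited_by_url("2008642895384331595"))
```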
This conversation is moving away from altmetric obviously, which is fine. I think @ianmilligan1's comment that "North America is different" is quite an important one, particularly in light of recent conversations about internationalisation by @arojascastro. We've got to make sure we do our best to help our international authors benefit from their efforts with us. So perhaps we should be thinking about what would be harmful to expose to readers/authors, rather than what we find personally useful in our own scholarly context.
There are a number of pieces of information that could in theory be exposed about lessons, their use, etc, which may or may not be useful to certain people:
There's also the option of metrics in context, which could be presented fairly discreetly on a lesson for those interested in seeing more. Altmetric does it like this:
These are all just ways of promoting openness, as @arojascastro said, and giving our authors tools to say: this work matters and here is evidence of it.
I appreciate that this doesn't matter in all scholarly contexts, but I doubt it hurts in any of them. So maybe we need to shift the conversation in that direction. We're that slightly strange, doesn't-quite-fit-the-mold publication, so giving people evidence to use as they see fit strikes me as a good idea. The question of which evidence and how it's presented can come further down the line.
On another note, @acrymble, what's the pricing policy on Altmetric badges? It's a bit confusing on their website.
They charge by the journal. I suspect it's negotiable, but we're talking considerably more than £1k (I did get a quote, but I won't share it openly online. Email me if you want to know).
I propose that we can close this particular issue, because there clearly isn't support for altmetric badges, which was the original question. I'm going to keep brainstorming about ways of highlighting the impact of individual lessons (with context), but I'll do that in another ticket.
If anyone wants to reopen this or add further comments, please do.
Just wondering if there's interest in Altmetric badges for the lessons?
If you don't know, they collate the attention an article gets in non-traditional venues (i.e. not citations): tweets, Facebook posts, news coverage, Wikipedia mentions, etc. I thought they might be an interesting way to encourage our authors to promote their articles more widely, and they could also highlight to us where we are and are not getting noticed.
Here's a link to an example from one of my articles so you can get a feel for it. https://www.altmetric.com/details/2982944
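For what it's worth, embedding looks like it would just be a script plus a small placeholder element on each lesson page. Here's a very rough sketch of generating that fragment, based on my reading of their public embed documentation — the script URL and attribute names would need verifying before we relied on them; the ID is the one from my example link above:

```python
# Very rough sketch of what dropping an Altmetric badge into a lesson page
# might involve; treat the script URL and attribute names as assumptions to
# check against Altmetric's embed docs.
def altmetric_badge_html(altmetric_id):
    """Return an HTML fragment for a donut-style Altmetric badge."""
    return (
        '<script src="https://d1bxh8uas1mnw7.cloudfront.net/assets/embed.js"></script>\n'
        '<div class="altmetric-embed" data-badge-type="donut" '
        'data-altmetric-id="{}"></div>'.format(altmetric_id)
    )

print(altmetric_badge_html("2982944"))  # ID taken from the example link above
```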
It is expensive, however. So we'd need to be committed to its value.