This week we're going to take the work we did in Week 4 - figuring out what we do and who cares about it - and figure out how to translate that into things that can be measured, using the Metrics Toolkit.
The Metrics Toolkit was created by a group of academics who specialize in scientometrics - the measurement of scholarly literature. Traditionally that has meant counting how many times a work has been cited, but the field has been expanding rapidly in the past decade or so. Now there are other signals we can use to measure a work's impact, such as where and how it's being shared (social media, Wikipedia, government policy documents, etc.).
The Metrics Toolkit gathers these many and varied ways of measuring the impact of scholarly work and provides guidance on how to track each one for your own work.
This week's challenge is to identify one or two metrics you can start using now to help you measure the impact of your own work. You can try searching for a specific metric, but we suggest clicking the "Browse Metrics" button at the top. You might be surprised by what you find.
When considering which metric(s) to use, keep last week's worksheet in mind. Try to focus on the scholarly outputs most important to you and consider the best ways to measure them. Maybe that's citations - they're still the dominant measurement in many disciplines - but more and more we're seeing scholarship have impact outside of traditional research channels.
No advanced challenge this week.
Sometimes (OK, a lot of the time), academia gets too focused on scholarly metrics and places too much weight on them. Like most other tools, scholarly metrics are not perfect. They can be gamed by journals wanting to boost their prestige, they might not actually measure what we think they're measuring, and they often reflect our biases.
So it's important to always keep whatever metric you're using in context. First of all, citation and sharing practices vary greatly between disciplines, so always consider a metric in light of the norms for your specific research area. Take the Journal Impact Factor (one of the most popular, if overused, metrics out there: roughly, the average number of citations per article that a journal's articles from the previous two years received). A JIF of 2 might not sound great, but for many fields, that's actually really good.
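To make that arithmetic concrete, here's a minimal sketch of the two-year calculation - the journal and all the numbers are made up purely for illustration:

```python
# Hypothetical counts for an imaginary journal, just to show the arithmetic:
# citations received this year by articles the journal published in the
# previous two years, divided by the number of citable items from those years.
citations_this_year_to_recent_articles = 200
citable_items_two_years_ago = 60
citable_items_last_year = 40

jif = citations_this_year_to_recent_articles / (
    citable_items_two_years_ago + citable_items_last_year
)
print(jif)  # 2.0 -- the "JIF of 2" mentioned above
```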
In other cases, a metric might be biased against newer researchers. The h-index is one example: it's the largest number h such that you've published h articles with at least h citations each, so it's capped by how many articles you've published. You might have an article that's received 50 citations, but if you've only published 2 articles, your h-index will never be more than 2.
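Here's a minimal sketch of that calculation (again, the citation counts are invented):

```python
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

# The scenario from above: one well-cited article plus one other paper.
# The h-index tops out at 2, no matter how many citations the first gets.
print(h_index([50, 3]))  # 2
```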
Metrics can be a helpful tool, but they become problematic when we rely on them too much, so remember that there are many, many ways scholarship can have impact that might not be easily quantified.