Quantifying research impact is essential for organizations aiming to streamline operations and demonstrate the value of their research efforts. Traditional methods often focus on citations or publication counts, but these fail to capture operational efficiency and the ease of conducting research.
As a ResearchOps Lead, I created the Researcher Effort Score (RES) to help address this gap. Inspired by the Customer Effort Score (CES), the RES measures how easy or difficult it is for researchers to complete tasks like recruiting participants or accessing reports.
I believe this metric can transform how organizations evaluate research processes, making it easier to identify bottlenecks and improve workflows.
Here are six actionable ways to measure research impact using the Researcher Effort Score.
Surveys are a direct method for collecting data on how researchers perceive the ease of performing specific tasks. The RES uses simple questions such as: "How easy or difficult was it to recruit participants for your study?"
Researchers can embed these surveys at critical moments in the research workflow — immediately after completing a task or accessing a tool — to ensure feedback is timely and relevant.
Implementing surveys requires careful design to avoid bias and ensure clarity. Questions should focus on specific interactions rather than general impressions. For example, asking about recruitment processes immediately after researchers complete participant selection provides actionable insights into barriers they may face.
You can use a scale from 1-5 or 1-7; I prefer the former to keep it short. There are also different ways of structuring the metric by polarity: whether 1 means "very difficult" and the top score "very easy," or the reverse.
In my experience, it’s best to stick with what people are used to seeing in your company.
The RES calculation is simply an average: the sum of responses divided by the total number of responses. So if you have 5 responses with scores of 4, 5, 3, 3, and 4, your RES is 3.8.
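For those who like to see it in code, here's a minimal Python sketch of that calculation (the function name and the sample scores are just illustrative):

```python
def research_effort_score(responses: list[int]) -> float:
    """Average the survey responses to get the RES."""
    return sum(responses) / len(responses)

# The five example responses from above: (4 + 5 + 3 + 3 + 4) / 5 = 3.8
print(research_effort_score([4, 5, 3, 3, 4]))  # 3.8
```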
With this metric, organizations can analyze survey results to identify patterns in feedback. Low scores in certain areas, such as recruitment or tool access, highlight operational inefficiencies that need addressing. Regularly administering these surveys keeps RES data current, enabling teams to make continuous improvements.
Embedding RES metrics directly into research tools simplifies data collection and ensures feedback is tied to specific actions. For example, adding a short survey within a participant recruitment platform or research repository allows researchers to rate their experience immediately after completing a task. This minimizes recall bias and yields more accurate data on effort levels.
I’ve found it helpful to embed RES questions in existing workflows. Tools that automate feedback collection save time and reduce manual effort for both researchers and administrators. For instance, after downloading a report from a repository, researchers could be prompted with a question like, "How easy/difficult was it to find the report you needed?" Responses can then be aggregated to calculate an overall RES score for that interaction.
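As a rough sketch of how that aggregation might work, assuming each in-tool response is logged alongside the interaction it came from (the interaction labels and scores below are hypothetical):

```python
from collections import defaultdict

# Hypothetical feedback log: (interaction, score) pairs collected in-tool
feedback = [
    ("find_report", 4), ("find_report", 2), ("find_report", 3),
    ("recruit_participants", 5), ("recruit_participants", 4),
]

# Group scores by interaction, then average each group to get a
# per-touchpoint RES
scores_by_interaction = defaultdict(list)
for interaction, score in feedback:
    scores_by_interaction[interaction].append(score)

for interaction, scores in scores_by_interaction.items():
    print(f"{interaction}: RES = {sum(scores) / len(scores):.2f}")
```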
Organizations without automated tools can still implement RES by sending periodic surveys via email or internal communication channels. While less immediate, this method ensures all researchers and People Who Do Research (PWDR) have an opportunity to provide input on their experiences. Embedding metrics into tools remains the ideal approach for real-time feedback collection.
Measuring RES over time provides valuable insights into how operational changes impact researcher experiences. Conducting RES surveys at regular intervals, such as quarterly or semi-annually, allows organizations to track trends and identify whether initiatives are improving ease of use. For example, if scores for accessing reports improve after implementing a new repository tool, this indicates positive progress.
In my team, we use longitudinal tracking as part of our RES implementation strategy, conducting surveys twice a year and comparing results across periods. In doing so, we’ve observed a 30% improvement in overall ease of conducting research within six months after launching initiatives like repository redesigns and researcher forums.
(While this frequency is lower than we’d like, we’re still early in our evolution and have some challenges — like over 400 PWDR to support.)
Longitudinal tracking helps identify persistent challenges that require further attention. To maintain comparability, organizations should ensure consistency in survey questions across periods. Additionally, sharing results with stakeholders demonstrates accountability and highlights the impact of ResearchOps efforts on improving workflows.
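Comparing waves is simple arithmetic; here's a small Python sketch with made-up numbers showing the relative change between two survey periods:

```python
# Hypothetical RES results from two semi-annual survey waves
waves = {"H1": 3.1, "H2": 4.0}

# Relative change between the two periods, as a percentage
change = (waves["H2"] - waves["H1"]) / waves["H1"] * 100
print(f"RES moved from {waves['H1']} to {waves['H2']} "
      f"({change:+.0f}% over the period)")
```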
While numerical scores provide a quantitative measure of effort levels, comments from researchers offer qualitative insights into specific pain points and potential solutions. Open-ended questions such as "What challenges did you face during recruitment?" encourage detailed feedback that adds more color to RES scores.
Comments can reveal underlying issues not captured by numerical data alone. For instance, one of our teams noted difficulties with accessibility features in reports stored in our repository. This qualitative feedback led to targeted training sessions on creating accessible reports and improved repository guidelines. Comments also help prioritize initiatives by identifying areas with the greatest impact on researcher satisfaction.
Organizations should analyze comments systematically by categorizing them into themes such as tool usability, process efficiency, or training needs. Sharing qualitative insights alongside RES scores provides a holistic view of researcher experiences and informs strategic decision-making.
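One naive way to start that categorization is keyword tagging, sketched below in Python. The themes come from above, but the keyword lists are my own illustrative assumptions, and real comment analysis deserves proper human coding:

```python
# Naive keyword tagging of open-ended comments into themes.
# Keyword lists are illustrative; refine them against real comments.
THEMES = {
    "tool usability": ["repository", "tool", "interface"],
    "process efficiency": ["recruitment", "approval", "waiting"],
    "training needs": ["training", "how to", "guidance"],
}

def tag_comment(comment: str) -> list[str]:
    text = comment.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(kw in text for kw in keywords)] or ["uncategorized"]

print(tag_comment("Recruitment took weeks and the tool kept crashing"))
# ['tool usability', 'process efficiency']
```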
Connecting RES data with broader organizational metrics like innovation cycles or revenue growth demonstrates the strategic value of research efforts. Reducing friction in research processes accelerates decision-making and product development, ultimately driving business outcomes. For example, faster participant recruitment enabled by streamlined processes can shorten project timelines and enhance agility.
Tracking RES alongside metrics like time-to-recruit or report usage rates provides actionable insights into how operational efficiencies translate into tangible benefits. Organizations can also tie improvements in RES scores to cost savings from reduced administrative burdens or increased productivity among researchers.
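As a sketch of what that pairing could look like, here's a small Python example (requires Python 3.10+ for statistics.correlation) relating hypothetical quarterly RES scores to time-to-recruit; the numbers are invented, and correlation alone doesn't prove causation:

```python
from statistics import correlation

# Hypothetical quarterly data: recruitment RES vs. days to recruit
res_scores = [3.2, 3.5, 3.9, 4.3]
days_to_recruit = [21, 18, 15, 10]

# A strong negative correlation would suggest that as effort scores
# improve, recruitment gets faster
print(f"r = {correlation(res_scores, days_to_recruit):.2f}")
```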
At my company, we plan to expand our RES implementation by linking effort scores with project-level impact metrics in 2025. This will allow us to quantify how improved research processes contribute to organizational goals like innovation and customer satisfaction, strengthening the case for investing in ResearchOps.
The Researcher Effort Score isn’t just a tool for operational improvement. It also serves as a compelling way to engage stakeholders and secure their support for research initiatives. Using RES provides leadership teams with clear, actionable data that demonstrates the value of research operations in quantifiable terms.
Stakeholders often demand proof of progress before approving investments in tools, training, or other resources. RES bridges this gap by offering a straightforward metric that reflects the ease of conducting research.
For example, we used RES data to highlight specific pain points, such as challenges in participant recruitment processes and knowledge sharing. This has helped our team advocate for changes that directly addressed these issues, including repository improvements and cross-team collaboration initiatives.
Using RES not only builds credibility but also fosters alignment between research and cross-functional stakeholders and organizational leadership.
The Researcher Effort Score provides a practical way to evaluate and improve the operational aspects of research. By identifying opportunities, offering actionable insights, and connecting with other Research or ResearchOps metrics, RES empowers teams to streamline workflows and focus on impactful work.
Whether addressing bottlenecks in participant recruitment or simplifying access to tools, RES shifts the focus from abstract measures of research success to actionable improvements that enhance productivity.
Organizations that adopt RES can create environments where researchers thrive and ensure that time and resources are spent on generating meaningful insights rather than navigating administrative hurdles.
—
Thanks to Jack Wolstenholm from the Great Question team for his work in reviewing and adapting the content of this article.
Pedro is ResearchOps Lead at Banco do Brasil, Founder of ResearchPro, and Professor of the MBA in UX Research, Research Operations and Design Leadership at UNIFATEC, in partnership with the Toronto School of Management. Previously, he was the UX Research Lead at Warren Investimentos and a Senior User Researcher at Nuvemshop / Tiendanube for 4 years, participating in the creation and launch of innovative products for thousands of entrepreneurs in Latin America. Pedro is also a UX Research content creator and Cofounder of Observe, the first UX Research conference in Brazil. You can connect with him on LinkedIn or visit his website to get in touch.