
Image – Getty iStockphoto: Gorodenkoff
By Porter Anderson, Editor-in-Chief | @Porter_Anderson
More research from Elsevier:
The Netherlands’ Elsevier: Gender Diversity in World Research Publishing
Amsterdam’s Elsevier: Research and Real-World Impact
Also today on AI:
Spain’s Publishers Study AI Opportunities for the Book Sector
‘AI’s Immense Potential as Well as Its Challenges’
Based in Amsterdam, Elsevier has released a study titled Insights 2024: Attitudes Toward AI. This is another of its periodic reports on the international community of writing researchers—whose work fills the company’s more than 2,900 scientific journals and reference books, including one of the best-known in the lay world, The Lancet.
This newly reported survey, according to the publishing house, draws on the input of 2,999 researchers and clinicians from 123 countries, and identifies what it classifies as “clear differences in attitude between the United States, China, and India,” the top markets in producing research.
In producing this study, Elsevier’s executive vice-president for strategy Kieran West says, “Our goal is to provide decision makers with evidence-based insights into how researchers and clinicians feel about AI’s immense potential as well as its challenges. Working together with the communities we serve, we strive to shape the future in which AI tools serve all—ethically, faster, and better.”
And the intent here is described as “examining the attitudes of researchers and clinicians toward artificial intelligence and genAI, covering its attractiveness; perceived impact; the benefits to them [the respondents] and wider society; the degree of transparency [required] to be comfortable using tools that capitalize on the technology; and the challenges they see with AI.”
Awareness Outstrips Actual Usage

Image: Elsevier, ‘Insights 2024: Attitudes Toward AI’
High-level trends in response suggest that:
- “Awareness of AI is high, but regular usage is low, generally, with expectations that this will grow.
- “Institutions have not yet clearly conveyed to researchers and clinicians their AI usage restrictions, or their preparations for increased use of AI.
- “Attitudes are mixed, but sentiment is more positive than negative about AI among researchers and clinicians.
- “Specific actions can help increase trust, and by taking and communicating them, providers of AI tools can increase users’ comfort.”
If a thumbs-up and a thumbs-down can be discerned in responses here, the line probably falls between questions about AI’s use in workaday processes and AI’s impact on a sphere of publishing that’s deeply reliant on being seen as accurate, trustworthy, authentic, and unbiased.
Respondents See Worklife Assists in AI

Image: Elsevier, ‘Insights 2024: Attitudes Toward AI’
To take the more positive outlook first, in the realm of clinicians’ and researchers’ regard for AI as being something that can help them and their organizations in their worklife:
- 94 percent of researchers and 96 percent of clinicians surveyed say they think AI will help accelerate knowledge discovery
- 92 percent of researchers and 96 percent of clinicians asked say they think AI will help rapidly increase the volume of scholarly and medical research
- 92 percent of researchers and clinicians surveyed say they foresee cost savings for institutions and businesses
- 87 percent of the respondents say they think it will help increase work quality overall
- 85 percent of both groups, researchers and clinicians, say they believe AI will free up time to focus on higher-value projects
Respondents Flag Fears About Misinformation

Image: Elsevier, ‘Insights 2024: Attitudes Toward AI’
Looking at respondents’ signals that they “fear further rise in misinformation could impact critical decisions,” some survey results are:
- 95 percent of researchers and 93 percent of clinicians responding say they believe that AI will be used for misinformation
- 86 percent of researchers and 85 percent of clinicians surveyed say they believe that AI could cause critical errors
- 81 percent of researchers asked say they worry that AI will erode critical thinking
- 82 percent of doctors surveyed say they’re concerned that physicians will become overly reliant on AI to make clinical decisions
- 79 percent of clinicians and 80 percent of researchers polled say they believe that AI will cause “disruption to society”
The Transparency Criterion

Image: Elsevier, ‘Insights 2024: Attitudes Toward AI’
Despite those high percentages of respondents who say they believe that “disruption to society is ahead” because of AI, many of them say they’re willing to use AI tools, with the proviso that transparency—among the most difficult factors to verify—is fundamental to how they feel about it.
“If AI tools are backed by trusted content, quality controls, and responsible AI principles,” the company puts it in its media messaging for today’s (July 14) report:
- 89 percent of researchers surveyed who express an opinion that AI can benefit their work say they would use it to generate a synthesis of articles, while 94 percent of clinicians who say they believe AI can benefit their work respond by saying that they’d use AI to assess symptoms and identify conditions or diseases
- In terms of the importance of transparency, 81 percent of researchers and clinicians asked say they expect to be told whether the tools they’re using depend on generative AI
- 71 percent of those asked say they expect genAI-dependent tools’ results to be based on high-quality trusted sources only
- Clearly of high significance to many in academic publishing, 78 percent of researchers and 80 percent of clinicians asked say they expect to be informed if the peer-review recommendations they receive about manuscripts utilize generative AI
India, China, and the United States
Elsevier’s team has flagged distinctions in the responses they see from what they classify as the world’s top three research-producing nations, the United States, China, and India. In some ways, we begin to see in these numbers that it may not be such a “small, small world, after all,” as the Disney anthem would have it. In questions put to respondents from those three countries, we read:

Image: Elsevier, ‘Insights 2024: Attitudes Toward AI’
- Of those surveyed who say they’re familiar with AI, more than half of them (54 percent) say they’ve “actively used AI” with just under a third (31 percent) saying they’ve used it for a specific work-related purpose. This is higher in China (39 percent) and lower in India (22 percent)
- Only 11 percent of respondents say they consider themselves to be very familiar with AI or use it often. Sixty-seven percent of those who say they have not used AI also say they expect to use it within two to five years, with China (83 percent) and India (79 percent) outpacing the States significantly (53 percent)
- And USA respondents say they’re less likely to feel positive about the future impact of AI on their area of work, the breakdown being 28 percent with a positive outlook in the States vs. 46 percent in China, and 41 percent in India
When it comes to the expectations of responding researchers and clinicians in India, China, and the United States in terms of how AI will assist them, the main areas are (a) for reviewing prior studies; (b) for identifying “gaps in knowledge”; and (c) for generating new research hypotheses. Even in this group of tasks, the States’ respondents seem to be less persuaded (84 percent) than those in China (94 percent), and India (100 percent).
Observations: Critical Thinking at Risk?
One point found in the 45-page report is that women respondents are more likely than men (46 percent to 38 percent) to “see AI’s inability to replace human creativity, judgment, and/or empathy as the main disadvantage.”

Image: Elsevier, ‘Insights 2024: Attitudes Toward AI’
And for those interested in looking more deeply at the responses, a section on perceived drawbacks begins on Page 23. There you’ll find that 24 percent of respondents rank the concern that “outputs can be discriminatory or biased” among their top three qualms, while 40 percent cite “the lack of regulation and governance” as one of their top three disadvantages. The pattern of concerns registered here suggests that respondents see accuracy as even more important than transparency.
Some 18 percent say they see the well-known phenomenon of AI “hallucinations” as a major disadvantage.
Of special concern to many in the wider book publishing industry–which includes the trade sector–there are at least five mentions in the report of concerns that critical thinking could be eroded in various uses of AI–a parallel to threats already registered by many who are watching censorship efforts expand in various markets. Here’s an instance of this from the report’s Page 32:
“Several other concerns relate to the impact that genAI could have on people and the way they think and behave. In the current study, 81 percent of respondents say they think AI will erode human critical thinking skills. Indeed, there’s a suggestion of a risk that AI will affect the way students think, which any changes in curriculum should consider.”
Ultimately, as West’s commentary points out, the company sees “high-quality verified information, responsible development, and transparency as paramount to building trust in AI tools, alleviating concerns over misinformation and inaccuracy.”

Kieran West
Others who peruse the data in the report will be looking, as Elsevier itself seems to be looking, for what researchers are saying they need in order to feel more comfortable with the evolving presence and deployment of AI, particularly in academic and scholarly publishing, in which accuracy and reliability are the tokens of the realm.
Elsevier’s material indicates that it’s taking onboard a need to state its own “responsible AI principles,” including “We create accountability through human oversight”—a commitment that may require deepening resources as AI usage expands.
And it’s interesting to note that the company makes a disclaimer about what it describes as its own extensive use of AI. We edit it here only to remove some promotional elements: “For more than a decade, Elsevier has been using AI and machine-learning technologies in combination with our … peer-reviewed content, data sets, and human oversight to create products that help the research, life sciences, and healthcare communities be more effective. We do so in line with Elsevier’s ‘Responsible AI Principles’ and Privacy Principles and in collaboration with our communities, to ensure our solutions help them achieve their goals.”
The full report, with convenient breakouts of parts of its results, can be found here.
More from Publishing Perspectives on academic and scholarly publishing and its issues is here; more on artificial intelligence and publishing is here; and more on industry statistics is here.