PRIMUS Editorial Board Statement of Principle on AI

The purpose of this post is to share the current thinking of the PRIMUS Editorial Board on the use and involvement of Large Language Models (henceforth “LLMs”) in submissions to the journal. We acknowledge that opinions and perspectives among Board members vary and that not all Board members agree fully with every aspect of what follows. 

PRIMUS is a place for the critical examination and responsible dissemination of ideas and their impact. New technologies can offer powerful new possibilities for our work, but those possibilities may also raise ethical and pragmatic concerns that our community must consider. The body of this document focuses on the nascent technology of LLMs. For any technology – graphing calculators, online calculators such as Desmos, computational software such as MATLAB, and now LLMs – PRIMUS is interested in papers that thoughtfully address the author’s choice(s) in using the technology, acknowledge its impact and any potential negative effects, and make a case regarding the relative merits of using it. The emerging technology of LLMs seems to have the potential for wide-ranging impacts, not only on student work but also on the work of instructors and authors. Some important considerations for authors of potential PRIMUS papers are:

  • First, all authors should read and follow Taylor & Francis’s AI policy, which sets forth important guidelines for how to use (and not use) LLMs. Of particular note, “Authors are accountable for the originality, validity, and integrity of the content of their submissions.” The T&F policy also lists several risks of using LLMs, including “inaccuracy and bias”, “lack of attribution”, and “confidentiality and intellectual property risks”. Indeed, during the submission process authors are prompted to affirm that they have read and adhered to the policy, and to disclose any use of AI in the manuscript.
  • To expand on the issue of “lack of attribution”: while T&F “supports the responsible use of Generative AI tools that respect high standards of data security, confidentiality, and copyright protection” for such tasks as “idea generation and idea exploration”, we are concerned that idea generation can easily lead to a lack of attribution. Many LLMs are trained on copyrighted data (which they did not purchase) and normally do not cite the source(s) of the text they generate. Authors should proceed with caution and be particularly diligent in ensuring that LLM-generated content is attributed, both to the AI and to any relevant original human sources.
  • Electronic tools such as Grammarly and QuillBot, which are designed to help authors improve their exposition, now rely on LLMs. In line with the T&F policy’s treatment of such tools for “language improvement”, they are acceptable to use; however, we ask that authors acknowledge that these forms of AI were used in copyediting, and in what ways.

At this time, the Editorial Board sees two general categories for the use of AI: (1) in the production of the paper itself, and (2) in the work that inspired the paper, including, but not limited to, faculty leveraging AI to develop classroom activities or assessments and students using AI to complete them. Issues that arise in each of these categories are explored in more detail below.

Issues related to writing/authorship:

  • As noted in the T&F AI policy, authors should clearly state the way(s) in which generative LLMs were used, which LLM(s) were employed, and what portions of the paper’s content resulted from LLM queries. This information must be accessible both to reviewers of the anonymized paper and to readers of the published paper. It may appear in the body or “Acknowledgements” section of a paper, and authors may want to explain the issue further in their cover letter. Furthermore, papers should include a statement affirming that the authors have checked that no LLM-generated content is taken from other sources without attribution.
  • It is inappropriate to use LLMs to generate literature reviews, given the many documented instances in which they have fabricated citations or produced summaries of articles that do not reflect their actual content.

Issues related to activities described in papers:

  • Papers should describe student use of any AI tools explicitly, naming the specific technologies and user prompts involved. There may come a time when LLM tools converge into a sufficiently uniform format and when the PRIMUS readership and our students share an understanding of how to use them (much as has happened with calculators), but until then, PRIMUS papers must describe the pedagogical use of these tools explicitly in order to support the sharing of practical advice.
  • As is expected for articles on the use of any technology, papers should make an argument delineating any benefits (interpreted broadly) of using the LLM in contrast to having one or more humans do the same work, or discuss any drawbacks resulting from its use. Accordingly, authors should attend to the ethical issues surrounding the strategies for which they advocate.
  • LLMs are evolving rapidly, and any paper that uses them for educational purposes should reflect on how the authors expect their proposed use to change over the coming months and years.

We urge all authors to remember: LLMs are built on probabilistic models and trained on the content of the internet, which makes them exceptionally unreliable. LLMs generate text by “next-token prediction”, probabilistically choosing the most likely continuation of a sentence or paragraph; they have no mechanism for verifying factual accuracy and produce only statistically likely output. Furthermore, an LLM is constructed to optimize a language metric based on local averaging and interpolation of existing language; novel scholarly work, however, frequently involves ideas that do not align with previous metrics, that fit together globally in unusual ways, or that will not appear as interpolations between existing strategies.
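
To make the mechanism concrete, here is a deliberately tiny sketch of next-token prediction in Python. Everything in it is invented for illustration – the hand-coded probability table stands in for what a real LLM learns from training data over a vocabulary of tens of thousands of tokens – but the selection principle is the same: emit the statistically likeliest continuation, with no check on whether the result is true.

    # Toy sketch of greedy next-token prediction. The probability table is
    # invented for illustration; a real LLM learns P(next token | context)
    # from its training data rather than from a hand-coded dictionary.
    next_token_probs = {
        ("the", "derivative"): {"of": 0.85, "is": 0.10, "exists": 0.05},
        ("derivative", "of"): {"the": 0.45, "a": 0.35, "sin": 0.20},
    }

    def predict_next(context):
        """Return the most probable continuation of a two-word context."""
        probs = next_token_probs[context]
        # Greedy decoding: pick the likeliest token. Nothing here checks
        # whether the completed sentence is factual – only that it is likely.
        return max(probs, key=probs.get)

    print(predict_next(("the", "derivative")))  # -> "of"

Production systems sample from the distribution rather than always taking the maximum, and they condition on far longer contexts, but the underlying objective is likelihood, not truth.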

Every PRIMUS paper should involve critical reflection by the author(s). The journal has a proud tradition of sharing authors’ high-quality ideas for technology-supported activities that advance mathematical learning. We take the editorial stance that novel technologies often create novel opportunities for mathematical learning (possibly more powerful or inclusive than previous ones), especially by allowing students to participate in mathematical exploration that would otherwise be too slow or infeasible.

In summary, we are open to the “responsible use” of LLMs, provided such papers meet the usual expectations of PRIMUS articles by exhibiting most or all of the following traits: (1) a novel idea for teaching and learning, (2) discussion of how that idea impacts student learning, (3) practical, transferable advice facilitating implementation by practitioners, and (4) exploration of at least one interesting problem in undergraduate mathematics pedagogy or curriculum.


Following is the prompt that authors will encounter in the T&F online submission system:

Generative Artificial Intelligence (AI)

□ I have read and agree to comply with the Taylor & Francis AI Policy. I confirm that in accordance with the Taylor & Francis AI Policy, if I have used Generative AI tools (e.g., ChatGPT) or any large language models at any point in the preparation of my manuscript, I have clearly disclosed the use in the manuscript, including the full name of the tool used (with version number), how it was used, and the reason for use.
