George Veletsianos, PhD

Professor, researcher, consultant, speaker, educational technology, learning design, online education, emerging technologies


CFP: Online Learning Journal Special Issue on State of the Science: Evidence Synthesis in Open, Distance, and Digital Education

Below is a call for proposals from the Online Learning Journal (OLJ), drawn from the full announcement:

State of the Science: Evidence Synthesis in Open, Distance, and Digital Education (ODDE)

Systematic Reviews, Meta-Analyses, and Meta-Syntheses on Key Questions in ODDE

Target publication: Online Learning Journal (OLJ), Vol. 30, No. 3 (2026)

Guest Editors:

Proposal Deadline: November 30, 2025
Full Manuscript Deadline: February 16, 2026
Expected Publication: November 1, 2026

Rationale for Special Issue

In recent years, the number of evidence syntheses published in the field of education has increased significantly. By consolidating diverse bodies of research, systematic reviews and meta-analyses provide scholars and practitioners with a more comprehensive and reliable understanding of what is known in a rapidly evolving field, as well as informing policy. However, recent umbrella reviews and large-scale meta-analyses indicate that the overall quality of such reviews in the areas of Open, Distance, and Digital Education (ODDE) and educational technology remains uneven (Bond et al., 2024; Buntins et al., 2024; Zawacki-Richter et al., 2025). Common concerns include insufficient transparency in reporting methodological steps, lack of adherence to established review protocols, and weaknesses in the synthesis of empirical results. These issues reduce the trustworthiness, rigor, and replicability of review findings, which are essential for building cumulative knowledge in online learning research.

Aims and Scope of Special Issue

The Online Learning Journal invites submissions for a special issue dedicated to systematic, meta-analytic, and meta-synthetic reviews, umbrella reviews, and scoping reviews that consolidate what is known—and what remains uncertain—about critical topics in open, distance, and digital education (ODDE).

This issue seeks to:

  • Map the evidence base across key domains in digital education.
  • Quantitatively or qualitatively synthesize findings to inform research, design, methods, and policy.
  • Highlight gaps and set future research agendas.
  • Address the methodological weaknesses outlined above.
  • Strengthen OLJ’s position as a leading venue for integrative scholarship.

New paper: Is educational research available to the broader public?

We have published a new paper examining the extent to which educational research is available to the broader public, and I am shamelessly copying and pasting Josh Rosenberg’s announcement of it below:

This paper came from a curiosity (maybe even a frustration) — what is returned when you search for an educational research article on Google Scholar? And, relatedly, how widely accessible is research in our field?

We looked at over 2,500 articles published between 2010 and 2022 across six AERA journals. Using what we described as a “Public Internet data mining” approach, we asked a simple set of questions: Is the article available? In what form? And where? The work was just published in Teachers College Record. This is the first study of its kind to empirically document the accessibility of educational research, and our hope is that it could inform efforts to make our work more accessible to teachers, leaders, and the public.

Here’s what we found:

  • About 65% of articles were accessible in some form—a much higher rate than the roughly 28% reported for scholarly articles in general in other, prior work.
  • Most of those accessible versions were the published PDFs, often posted on sites like ResearchGate.
  • Only about 6% were openly licensed, meaning they can be freely reused.
  • The rest were a mix of preprints, temporary “free” versions, or other file types.

On the one hand, this is encouraging: many more articles are available than we might expect. On the other hand, the picture is messy. Access depends on whether an author uploaded a copy to a site, whether you know where to look, and whether reuse is even allowed.

Perhaps the bigger question is what kind of field we want educational research to be. If our work is meant to inform teaching, policy, and public understanding, shouldn’t the default be that anyone—teachers, school board members, families—can actually read it?

Shout out to my fantastic colleagues George Veletsianos, Enilda Romero-Hall, and Emilie Allen for the collaboration on this. You can access the article on TCR’s homepage here.

And (of course!) there is an open-access version — that’s on OSF here.

I loved working with Josh, Enilda, and Emilie on this. Writing some of the code for the data mining work that went into this paper gave me an idea for the use of AI in education research, which is a topic I’ve been working on for about a year now. More on this soon.
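For readers curious about what this kind of public internet checking can look like in practice, here is a minimal, hypothetical sketch (not the pipeline actually used in the paper): given a list of candidate links for an article, it checks whether each one is reachable and whether it serves a PDF, as a rough proxy for “available in some form.” The URLs are placeholders.

```python
# A minimal, hypothetical sketch – not the actual pipeline used in the paper.
# Given candidate URLs for an article (e.g., links surfaced by a public web search),
# check whether each one is reachable and whether it serves a PDF.
import requests


def classify_availability(urls, timeout=10):
    """Return (url, status) pairs, where status is 'pdf', 'html', or 'unavailable'."""
    results = []
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout, allow_redirects=True)
            content_type = resp.headers.get("Content-Type", "").lower()
            if resp.status_code == 200 and "pdf" in content_type:
                results.append((url, "pdf"))
            elif resp.status_code == 200:
                results.append((url, "html"))
            else:
                results.append((url, "unavailable"))
        except requests.RequestException:
            results.append((url, "unavailable"))
    return results


# Example with placeholder URLs:
print(classify_availability(["https://example.org/article.pdf", "https://example.org/abstract"]))
```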

Simple Checklists to Verify the Accuracy of AI-Generated Research Summaries

Do you share AI-generated audio/video summaries of your research with students? Or with the broader public on social media? Below is a short article I wrote encouraging researchers to share a checklist alongside those summaries verifying their accuracy and noting their limits (the final version is at Veletsianos, G. (2025). Simple Checklists to Verify the Accuracy of AI-Generated Research Summaries. TechTrends, XX(X), Xx-xx, but here’s a public pre-print too).

Simple Checklists to Verify the Accuracy of AI-Generated Research Summaries

Picture this: An educational technology researcher shares a seven-minute AI-generated audio or video of their latest paper on social media. It sounds engaging and professional. But buried in that smooth narration, the AI has quietly transformed “may suggest” into “proves,” dropped crucial limitations, and expanded the study’s claims beyond what the data supports. The listeners, including students, policymakers, and journalists, have no way of knowing.

The proliferation of AI-generated audio and video summaries of research papers—through tools like Google’s NotebookLM and others—represents both an opportunity and a challenge for scholarly communication. These summaries are promising as they can expand the reach, accessibility, and consumption of our research for diverse audiences (cf. Veletsianos, 2016). They also allow us to efficiently engage with literature outside of our expertise. A seven-minute podcast consumed during a commute may reach audiences who would never read a 30-page paper.

Yet this convenience comes with risks. Peters and Chin-Yee (2025), for example, found that summaries generated by Large Language Models omitted study details and made overgeneralizations. Such risks can propagate misunderstandings, particularly when summaries circulate without clear indicators of their accuracy or limitations.

While some technical solutions to address this problem exist (e.g., fine-tuning models and implementing algorithmic constraints), these approaches remain inaccessible to most researchers. We need a low-barrier intervention that empowers authors to assess and communicate the quality of AI-generated summaries to listeners.

I propose that researchers who share AI-generated summaries complete and publish a brief verification checklist alongside their summary. This practice serves two purposes: it encourages authors to critically review AI output before dissemination, and it provides audiences with transparency about the summary’s accuracy and limitations. Just as we expect ethics approval for research, we should normalize quality assurance for AI-generated scholarly content.

To facilitate this practice, below are two verification checklists, one for academic audiences and another for the general public, even though the latter could serve both audiences. Both are deliberately concise to enable sharing across digital platforms where these summaries circulate, from social media to publishers’ websites to course management systems.

Checklist 1: For Academic Audiences

Author verification: This summary of [paper title] was AI-generated using [tool name] on [date] and reviewed by the author(s). It accurately represents our work. For full details, nuance, and context, please refer to the original work at [URL].

The following items were verified:
✓ Research purpose or questions stated correctly
✓ Study design described correctly
✓ Summary matches study results (no fabricated data)
✓ Conclusions are explicitly limited to the study’s scope and context
✓ Key terminology used properly
✓ Theoretical, conceptual, and/or methodological frameworks are framed appropriately and are neither omitted nor misrepresented
✓ Major limitations are included
✓ Context and scope are clear
✓ The summary does not omit anything of significance
✓ The tone is consistent with the original work

Issues noted: [Note any issues]

Checklist 2: For the General Public

Author verification: This summary of [paper title] was AI-generated using [tool name] on [date] and reviewed by the author(s). It accurately represents our work. For full details, nuance, and context, please refer to the original work at [URL].

What we checked:
✓ Main findings are correct – nothing made up
✓ Doesn’t overstate what we found
✓ Includes what we studied and who participated
✓ Mentions important limitations
✓ Uses language appropriately
✓ Matches our original tone and message

Issues noted: [Note any issues, using plain language]


These checklists are a starting point, not a comprehensive solution. I have attempted to make them flexible enough to accommodate different research paradigms, but if you do use them, you should refine them to fit your needs and orientation. The point is not to develop the perfect checklist, but to provide a flexible tool that can be adapted and improved to minimize the risks of AI-generated research summaries. As AI tools become increasingly integrated into research dissemination, we must develop community standards for responsible use. Normalizing transparency practices now contributes toward maintaining the integrity that underpins scholarly communication.

In an academic landscape saturated with contested claims, particularly in education and educational technology where myths and zombie theories persist (e.g., Sinatra & Jacobson, 2019; Suárez-Guerrero, Rivera-Vargas, & Raffaghelli, 2023), our commitment to accuracy and transparency must remain constant. Verifying AI-generated summaries constitutes a form of reputational stewardship. This quality assurance practice encourages authors to critically review AI output before it circulates, signaling to colleagues, institutions, and the public that they take seriously their role as knowledge custodians. By proactively verifying summaries, researchers can protect the integrity of their findings and build a reputation for reliability that enhances the trustworthiness of their entire body of work. At the end of the day, the few minutes invested in verifying AI-generated summaries of one’s work pale in comparison to the time that might be required to correct a misleading summary that gains traction on social media. Once an AI-generated misrepresentation goes viral, no amount of clarification can fully undo it. In this sense, verification checklists function as both quality control and professional insurance. They are a small investment that yields returns in credibility and peace of mind.

I encourage researchers to adopt versions of these checklists, journals to consider requiring them for AI-generated supplementary materials, and the broader academic community to refine and expand upon this framework. In an era of rapid AI developments, our commitment to scholarly accuracy and transparency must remain constant.

Author notes and transparency statement, as suggested by Bozkurt (2024): This editorial was reviewed, edited, and refined with the assistance of ChatGPT o3 and Gemini Pro 2.5 as of July 2025, complementing the human editorial process to address grammar, flow, and style. I critically assessed and validated the content and assessed potential biases inherent in AI-generated content. The final version of the paper is my sole responsibility.

References

Bozkurt, A. (2024). GenAI et al.: Cocreation, authorship, ownership, academic ethics and integrity in a time of generative AI. Open Praxis, 16(1), 1-10.

Peters, U., & Chin-Yee, B. (2025). Generalization bias in large language model summarization of scientific research. Royal Society Open Science, 12(4), 241776. https://doi.org/10.1098/rsos.241776

Sinatra, G. M., & Jacobson, N. (2019). Zombie concepts in education: Why they won’t die and why you cannot kill them. In P. Kendeou, D. H. Robinson, & M. T. McCrudden (Eds.), Misinformation and fake news in education (pp. 7–27). Information Age Publishing, Inc.

Suárez-Guerrero, C., Rivera-Vargas, P., & Raffaghelli, J. (2023). EdTech myths: Towards a critical digital educational agenda. Technology, Pedagogy and Education, 32(5), 605-620.

Veletsianos, G. (2016). Networked Scholars: Social Media in Academia. New York, NY: Routledge.

ChatGPT’s ‘Helpful’ Suggestions Are Actually a Design Problem

In a recent op-ed in The NY Times, Meghan O’Rourke highlights how AI systems might tempt learners to offload an increasing amount of their work and thinking. It’s an excellent piece that identifies many crucial problems, and she writes:

Students often turn to A.I. only for research, outlining and proofreading. The problem is that the moment you use it, the boundary between tool and collaborator, even author, begins to blur. First, students might ask it to summarize a PDF they didn’t read. Then — tentatively — to help them outline, say, an essay on Nietzsche. The bot does this, and asks: “If you’d like, I can help you fill this in with specific passages, transitions, or even draft the opening paragraphs?” At that point, students or writers have to actively resist the offer of help. You can imagine how, under deadline, they accede, perhaps “just to see.” And there the model is, always ready with more: another version, another suggestion, and often a thoughtful observation about something missing.

To counteract this, she recommends a variety of pedagogical changes, such as reconsidering the essay format and letter grades. These are fine recommendations. Another approach might be for users to add a system prompt to their LLM that limits its suggestions to the specified task. For example, such a prompt might be phrased as follows:

Only respond to my specific request without offering to do additional work, expand your role, or suggest next steps. Do not ask if I’d like help with related tasks, drafting, or improvements unless I explicitly ask. Keep your assistance limited to exactly what I’ve requested.
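For those who use an LLM through an API rather than a chat interface, the same constraint can be passed as a system message. Here is a minimal sketch assuming the OpenAI Python SDK; the model name and the user request are illustrative assumptions, not recommendations.

```python
# A minimal sketch assuming the OpenAI Python SDK; the model name is an assumption –
# substitute whatever system you actually use. The system message carries the constraint
# so that replies stay within the requested task.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Only respond to my specific request without offering to do additional work, "
    "expand your role, or suggest next steps. Do not ask if I'd like help with "
    "related tasks, drafting, or improvements unless I explicitly ask. "
    "Keep your assistance limited to exactly what I've requested."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Outline an essay on Nietzsche."},
    ],
)
print(response.choices[0].message.content)
```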

However, both O’Rourke’s pedagogical reforms and the system prompt I described share a common limitation: they place the burden of change on educators and users rather than addressing the underlying system design that creates these temptations in the first place. In other words, AI’s invitation to take over additional aspects of one’s work/writing is a particular design decision. Some design decisions – namely system defaults, the settings that are picked for you – are more powerful than others. It is simple to stick to defaults and challenging to resist or change them. For example, in the past I wrote about how YouTube’s default settings (i.e., defaulting uploaded videos to copyright rather than a Creative Commons license) have important and unanticipated impacts on open education, as well as how the defaults in Learning Management Systems structure faculty-student relationships in particular ways.

Another approach is to address the system – the design of the chatbot itself – such that the handoff of cognitive work becomes more visible and intentional rather than seamless and automatic. For example, some approaches might include:

  1. Adding friction through confirmation prompts: A few years ago, Twitter made a change to its retweeting practice. If you tried to retweet an article without having clicked it first, it asked you if you really wanted to do that. The intent was to add friction, and to address some of the challenges associated with echo chambers, where we all share things we tend to agree with, even if we don’t actually read them. Similarly, the AI system could add friction by asking: “Are you sure you want me to draft paragraphs for you?” or “This request would significantly reduce your own writing practice – continue anyway?” before taking on substantial work (a rough sketch of such a friction layer follows this list).
  2. Implementing escalation warnings: When a user’s requests progressively increase AI involvement within a session, the chatbot could display messages like “You’ve now asked me to research, outline, and draft – consider what learning opportunities you might be missing.”
  3. Defaulting to partial assistance: Instead of offering complete solutions, the system could default to giving hints, questions, or partial frameworks that require human completion. This changes the pedagogical role of the chatbot. It’s probably one of the most consequential decisions that designers of education-specific chatbots must contend with.
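To make the first two approaches concrete, here is a minimal, hypothetical sketch of a friction layer that could sit between the user and the chatbot. The keyword list and the escalation threshold are illustrative assumptions, not any product’s actual design.

```python
# A hypothetical sketch of a friction layer wrapped around a chatbot (not a real product's design).
# It flags requests that hand over substantial work and asks the user to confirm before proceeding,
# and it warns when delegation escalates over the course of a session.

DELEGATION_KEYWORDS = ("draft", "write", "summarize", "outline")  # illustrative, not exhaustive


def needs_confirmation(request: str) -> bool:
    """Return True if the request appears to hand over substantial cognitive work."""
    return any(keyword in request.lower() for keyword in DELEGATION_KEYWORDS)


def friction_layer(request: str, session_delegations: int) -> tuple[bool, str]:
    """Decide whether to pause before fulfilling a request.

    Returns (proceed_without_prompt, message_to_show).
    """
    if session_delegations >= 3:
        return False, ("You've now delegated several steps this session (research, outline, draft). "
                       "Consider what learning opportunities you might be missing. Continue anyway?")
    if needs_confirmation(request):
        return False, "This request would significantly reduce your own writing practice – continue anyway?"
    return True, ""


# Example use:
proceed, message = friction_layer("Draft the opening paragraphs of my essay", session_delegations=2)
if not proceed:
    print(message)  # the chatbot would show this and wait for explicit confirmation
```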

These solutions aren’t without downsides. First, they directly conflict with AI companies’ business incentives. More seamless and extensive AI assistance likely increases user engagement, subscription renewals, and the perceived value of their products. Voluntary adoption of these friction-inducing features is unlikely without regulatory pressure or industry-wide coordination. Second, confirmation prompts might become annoying click-through obstacles that users eventually ignore.

The question isn’t whether these solutions are perfect. The alternative – accepting AI’s current design as inevitable – essentially outsources pedagogy to the companies that build these systems.

Recent AI keynotes and workshops

A few weeks ago, I was at the University of Texas at Arlington to deliver a keynote and two workshops. I am sharing short descriptions of these here for posterity, and as examples of the kinds of events that might be of interest to others.

Keynote: GenAI, Imagination, and Education Futures (60 minutes)

This keynote explores the promises, tensions, and challenges surrounding Generative AI in education, grounded in the history, research, and tensions around the use of educational technology. The goal of this talk is to provide an open space for reflection, imagination, and strategic thinking about how we want to move forward and what we want to protect along the way.

Workshop 1: Creating Speculative Fiction to Envision Utopian AI Educational Futures (90 minutes)

This workshop engages faculty in writing short speculative fiction pieces that explore positive, utopian educational futures where AI is thoughtfully integrated to enhance human learning and connection.

Workshop 2: Navigating Possible Futures with Emerging Technologies (90 minutes)

This workshop applies structured scenario planning techniques to help U of Texas faculty critically examine what their institution might look like in 2035 given current advances in AI and emerging technologies.

3 excellent questions on solving education & human development problems

One of the concerns in our field of study is the persistent focus on things/technologies (e.g., mobile devices, virtual worlds, AI, online courses, etc.) rather than problems (e.g., poverty, achievement, engagement, etc.). The Journal of Computing in Higher Education has a special issue on “The Research We Need” in educational technology, and Spencer Greenhalgh has an article in it that asks three important questions:

  1. which problems should we solve?
  2. who should solve those problems?
  3. is solving problems always good?

It’s a thoughtful paper and well worth your time.

The recipient test: a simple test for ethical and responsible AI use in education

I’ve been receiving many, MANY, questions over the last year around what is and isn’t ethical when deciding whether and how to use AI. Two questions I’ve started asking myself are these:

  • Would I be comfortable being on the receiving end of this?
  • Would I want this for my loved ones, like my niece and nephew?

Call this the “Recipient Test.”

This simple test isn’t just about deciding whether to use AI; it also prompts us to consider how we use it responsibly and ethically.

Take the example of recommendation letters. Are you tempted to use AI to draft them? Imagine seeing a letter written for you and discovering it was generated by an algorithm that knows nothing about you. Worse, picture your child receiving such a letter at a pivotal moment. This makes us question the wholesale outsourcing of the task. However, the “Recipient Test” can also illuminate a more thoughtful approach, one shared by my brilliant friend and colleague Tonia A. Dousay. Imagine someone who writes a few of these letters a week. How might they use AI responsibly? They might provide the job description, the candidate’s key qualifications, and their own personal insights to an AI tool, then dedicate time to revising and personalizing the draft. In this way, the “Recipient Test” can lead us to use AI as an assistant rather than a replacement.

Consider grading and providing feedback – this is treacherous terrain. Sure, it saves time, but I still remember truly personalized feedback that shaped my learning. Would an AI provide that? I also recall receiving generic feedback – not from an AI – that offered little value. Perhaps the “Recipient Test” here encourages us to think about how AI can augment, rather than fully automate, feedback, allowing educators to focus on providing individualized guidance. I’m skeptical of the “saving us time” argument, but that’s a whole different post.

When designing learning activities, you might apply this test by asking: Would using AI to generate a learning activity create a memorable learning moment I’d like to participate in? Would it inspire curiosity in my niece? The “Recipient Test” might compel you to look beyond mere efficiency.

The use of AI in education – like the use of so many other technologies over the decades – is about thoughtful, measured, and critical integration. The test might lead us to conclude that AI is or isn’t appropriate in a particular case, but it also pushes us to define the parameters of its use to ensure a positive and ethical outcome for the recipient. By consistently asking this question, we are forced to pause, reflect on the impact, and make informed choices.

Aside: This kind of thoughtful consideration might be more challenging in cases where AI is embedded within tech/edtech products, where some of our agency is limited.
