
Stephen Downes

Knowledge, Learning, Community

Vision Statement

Stephen Downes works with the Digital Technologies Research Centre at the National Research Council of Canada specializing in new instructional media and personal learning technology. His degrees are in Philosophy, specializing in epistemology, philosophy of mind, and philosophy of science. He has taught for the University of Alberta, Athabasca University, Grande Prairie Regional College and Assiniboine Community College. His background includes expertise in journalism and media, both as a prominent blogger and as founder of the Moncton Free Press online news cooperative. He is one of the originators of the first Massive Open Online Course, has published frequently about online and networked learning, has authored learning management and content syndication software, and is the author of the widely read e-learning newsletter OLDaily. Downes is a member of NRC's Research Ethics Board. He is a popular keynote speaker and has spoken at conferences around the world.


The double-edged sword: Open educational resources in the era of Generative Artificial Intelligence

I contributed to this paper (9 page PDF) - not a ton, but definitely not nothing. Here's the argument that came out of our exchanges: "We analyze several emerging tensions: the ontological crisis of human authorship, which challenges traditional copyright frameworks; the risk of 'openwashing' where proprietary models appropriate the language of the open movement," and some ethical issues. "This paper argues that the binary definition of 'openness' is no longer sufficient. We conclude that ensuring equity in the AI era requires a transition from open content creation to the stewardship of 'white box' technologies and transparent digital public goods." Now there's a lot of uncharted territory in that final statement. This paper just begins to touch on it, and (in my view) concludes without really explaining what we might mean by all that.

Ahmed Tlili, Robert Farrow, Aras Bozkurt, Tel Amiel, David Wiley, Stephen Downes, Journal of Applied Learning & Teaching, 2026/02/16 [Direct Link]
From data to Viz - Find the graphic you need

Tom Woodward links to three interesting graphing resources in one post. The first is a tool for selecting the sort of graphic you want to use: a set of chart types classified according to the number of variables you're looking at. Their poster is probably the best value of the three. If you prefer a more open-ended selection, there's this complete guide to graphs and charts. This page also links to "on-demand courses show you how to go beyond the basics of PowerPoint and Excel to create bespoke, custom charts" costing about $100 each. And how do you make the charts? You could use SciChart, a 'high-performance' JavaScript chart and graph library. But the pricing is insane, starting at $116 per developer per month. I'm pretty sure ChatGPT will teach you about the types of charts (actually, I just made one for you while writing this post) and Claude Code will be able to write you a free version of SciChart.
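
To back up that last claim a little: a basic line chart really is only a few lines of browser code. Here's a minimal sketch in TypeScript - my own illustration, not SciChart's API; the function name, signature, and sample data are invented for the example:

    // Minimal line chart on a plain HTML canvas (a sketch, not SciChart's API).
    // Scales the data to fit the canvas and connects the points with a line.
    function drawLineChart(canvas: HTMLCanvasElement, data: number[]): void {
      const ctx = canvas.getContext("2d");
      if (!ctx || data.length === 0) return;
      const w = canvas.width;
      const h = canvas.height;
      const max = Math.max(...data);
      const min = Math.min(...data);
      // Map a data value to a y pixel (canvas y grows downward, so flip it).
      const scaleY = (v: number) => h - ((v - min) / (max - min || 1)) * h;
      const stepX = data.length > 1 ? w / (data.length - 1) : 0;
      ctx.clearRect(0, 0, w, h);
      ctx.beginPath();
      data.forEach((v, i) => {
        if (i === 0) {
          ctx.moveTo(i * stepX, scaleY(v));
        } else {
          ctx.lineTo(i * stepX, scaleY(v));
        }
      });
      ctx.stroke();
    }

    // Example: drawLineChart(document.querySelector("canvas")!, [3, 7, 4, 9, 6]);

Of course a commercial library earns its keep on huge datasets and fancy rendering; the point is just that for everyday charts, a free AI-written version is well within reach.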

Yan Holtz and Conor Healy, 2026/02/17 [Direct Link]
GenAI as automobile for the mind, and exercise as the antidote: A metaphor for predicting GenAI's impact

I like this analogy. "Some of you may remember the Apple ads that emphasized the computer as a 'bicycle for the mind.' GenAI is not like a bicycle for the mind. Instead, it's more like an automobile." Or, says Mark Guzdial, "As Paul Kirschner recently wrote, GenAI is not cognitive offloading. It's outsourcing. We don't think about how to do the tasks that we ask GenAI to do. As the recent Anthropic study showed, you don't learn about the libraries that your code uses when GenAI is generating the code for you (press release, full ArXiv paper)." Maybe. But it depends on how you use AI - there is a 'bicycle method' (to coin a phrase) for using AI, which is what (I think) I do - making sure I understand what's happening at each step of the way. As Guzdial says, "Generative AI is a marshmallow test. We will have to figure out that we need to exercise our minds, even if GenAI could do it easier, faster, and in some cases, better." See also: To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making.

Mark Guzdial, Computing Ed Research - Guzdial's Take, 2026/02/17 [Direct Link]
mist: Share and edit Markdown together, quickly (new tool)

This is pretty cool: it's a collaborative markdown editor with a couple of interesting features: "all docs auto-delete 99 hours after creation. This is for quick sharing + collab"; and "Roundtripping: Download then import by drag and drop on the homepage: all suggestions and comments are preserved." Built over the weekend using Claude Code. And it reminds me of a remark I heard on TWIT: coding with AI is the best video game out there right now. "You know it's very addictive using Claude Code over the weekend. Drop in and write another para as a prompt, hang out with the family, drop in and write a bit more, go do the laundry... scratch that old-school Civ itch, 'just one more turn.' Coding as entertainment."
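
For the curious, the 99-hour auto-delete is a simple pattern. Here's a hedged sketch in TypeScript of how it might work - Webb doesn't show his implementation in the post, so the names and structure here are my own assumptions:

    // Illustrative only: one way a 99-hour time-to-live could be implemented.
    interface Doc {
      id: string;
      createdAt: number; // epoch milliseconds, set when the doc is created
      body: string;
    }

    const TTL_MS = 99 * 60 * 60 * 1000; // 99 hours

    function isExpired(doc: Doc, now: number = Date.now()): boolean {
      return now - doc.createdAt > TTL_MS;
    }

    // A periodic sweep then drops expired docs from the store.
    // (Deleting from a Map while iterating it is safe in JavaScript.)
    function sweep(docs: Map<string, Doc>): void {
      for (const [id, doc] of docs) {
        if (isExpired(doc)) docs.delete(id);
      }
    }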

Matt Webb, Interconnected, 2026/02/16 [Direct Link]
The Intrinsic Value of Diversity

I've made a similar argument in my own writings on ethics: "diversity in general is intrinsically valuable, and there's no good reason to treat moral diversity as an exception." People will have a different understanding than you or I of what's right and good, and overall (within reason) that's a good thing. Now the reasoning offered here is based on aesthetic premises: "a world where everyone liked, or loved, the same things would be a desperate, desolate world." Or as Eric Schwitzgebel summarizes, "An empty void has little or no value; a rich plurality of forms of existence has immense value, no further justification required." My own reasoning is more pragmatic: a world where we all valued the same things would be static and unchanging, and therefore could never learn or adapt.

Eric Schwitzgebel, The Splintered Mind, 2026/02/16 [Direct Link]
The Shortcut That Costs Us Everything

The title is provocative, but maybe a bit overstated. Here's the argument: why not have students analyze AI-generated writing (instead of writing their own essays)? Because "this approach becomes the dominant mode, displacing rather than supplementing the generative work students need to do themselves." You can only get so far studying what others have written; you have to write for yourself to really understand it. Couros decomposes the original suggestion, identifying the assumptions it rests on (for example: students are able to analyze writing, students don't need to generate their own). But even more importantly, there's the risk that students won't develop sufficient critical thinking skills. "Critical media literacy isn't just a nice academic skill. It's a survival capacity. And we're proposing to develop it by removing the very experiences that might allow students to understand, at a visceral level, what synthetic content lacks." But... is that the skill people really need? We need better standards than "two legs good, zero legs bad." I think what we really need (and have never really been taught well) is the means to distinguish between what can be trusted and what can't (no matter who or what created it).

Alec Couros, Signals from the Human Era, 2026/02/16 [Direct Link]

Stephen Downes, Casselman, Canada
[email protected]

Copyright 2026
Last Updated: Feb 17, 2026 07:37 a.m.

Creative Commons License.