
Eventual Rust in CPython


By Daroc Alden
December 5, 2025

Emma Smith and Kirill Podoprigora, two of Python's core developers, have opened a discussion about including Rust code in CPython, the reference implementation of the Python programming language. Initially, Rust would only be used for optional extension modules, but they would like to see Rust become a required dependency over time. The initial plan was to make Rust required by 2028, but Smith and Podoprigora indefinitely postponed that goal in response to concerns raised in the discussion.

The proposal

The timeline given in their pre-PEP called for Python 3.15 (expected in October 2026) to add a warning to Python's configure script if Rust is not available when Python is built. Any use of Rust would be strictly optional at that point, so the build would not fail without it. At this stage, Rust would be used in the standard library to provide native versions of modules that are important to the performance of Python applications, such as base64; example code to accomplish this was included in the proposal. In 3.16, the configure script would fail if Rust is missing unless users explicitly provide the "--with-rust=no" flag. In 3.17 (expected in 2028), Python could begin strictly requiring Rust at build time, although it would not be required at run time for users who get their Python installation in binary form.
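
The pre-PEP's example is not reproduced here, but a rough sketch can suggest the shape such code might take: a small, dependency-free Rust routine exported over a C ABI, so that CPython's existing C glue can call it. The function name and buffer contract below are invented for illustration, not taken from the proposal.

    // Hypothetical sketch: a std-only base64 encoder behind a C ABI. The
    // pre-PEP's actual example code differs; this only illustrates the idea.
    const ALPHABET: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

    /// `out` must have room for 4 * ((len + 2) / 3) bytes; returns bytes written.
    #[no_mangle]
    pub unsafe extern "C" fn rs_b64encode(input: *const u8, len: usize, out: *mut u8) -> usize {
        let src = std::slice::from_raw_parts(input, len);
        let mut written = 0;
        for chunk in src.chunks(3) {
            // Pack up to three input bytes into a 24-bit group.
            let b = [chunk[0], *chunk.get(1).unwrap_or(&0), *chunk.get(2).unwrap_or(&0)];
            let n = (b[0] as u32) << 16 | (b[1] as u32) << 8 | b[2] as u32;
            let enc = [
                ALPHABET[(n >> 18) as usize & 63],
                ALPHABET[(n >> 12) as usize & 63],
                if chunk.len() > 1 { ALPHABET[(n >> 6) as usize & 63] } else { b'=' },
                if chunk.len() > 2 { ALPHABET[(n as usize) & 63] } else { b'=' },
            ];
            std::ptr::copy_nonoverlapping(enc.as_ptr(), out.add(written), 4);
            written += 4;
        }
        written
    }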

Besides Rust's appeal as a solution to memory-safety problems, Smith cited the increasing number of third-party Python extensions written in Rust as a reason to bring the proposal forward. Perhaps, if Rust code could be included directly in the CPython repository, the project would attract more contributors interested in bringing their extensions into the standard library, she said. The example in the pre-PEP was the base64 module, but she expressed hope that many areas of the standard library could see improvement. She also highlighted the Rust for Linux project as an example of this kind of integration going well.

Cornelius Krupp was apprehensive; he thought that the Rust-for-Linux project was more of a cautionary tale, given the public disagreements between maintainers. Those disagreements have settled down over time, and the kernel community is currently integrating Rust with reasonable tranquility, but the Rust for Linux project still reminds many people of how intense disagreements over programming languages can get in an established software project. Jacopo Abramo had the same worry, but thought that the Python community might weather that kind of disagreement better than the kernel community has. Smith agreed with Abramo, saying that she expected the experience to be "altogether different" for Python.

Steve Dower had a different reason to oppose the proposal: he wasn't against the Rust part, but he was against adding additional optional modules to Python's core code. In his view, optional extensions should really live in a separate repository. Da Woods pointed out that the proposal wouldn't bring any new features or capabilities to Python. Smith replied (in the same message linked above) that the goal was to eventually introduce Rust into the core of Python, in a controlled way; the proposal wasn't only about enabling extension modules. That didn't satisfy Dower, however. He said that his experience with Rust, mixed-language code, and teams forced him to disapprove of the entire proposal. Several other community members agreed with his disapproval for reasons of their own.

Chris Angelico expressed concern that Rust might be more susceptible to a "trusting trust" attack (where a compiler is invisibly subverted to introduce targeted backdoors) than C, since right now Rust only has one usable compiler. Sergey Davidoff linked to the mrustc project, which can be used to show that the Rust compiler (rustc) is free of such attacks by comparing the artifacts produced from rustc and mrustc. Dower agreed that Rust didn't pose any more security risk than C, but also wasn't sure how it would provide any security benefits, given that CPython is full of low-level C code that any Rust code will need to interoperate with. Aria Desires pointed to the recent Android Security post about the adoption of Rust as evidence that mixed code bases adopting Rust do end up with fewer security vulnerabilities.

Not everyone was against the proposal, however. Alex Gaynor and James Webber both spoke up in favor. Guido van Rossum also approved, calling the proposal a great development and saying that he trusted Smith and others to guide the discussion.

Stephan Sokolow pointed out that many people were treating the discussion as being about "Rust vs. C", but that in reality it might be "Rust vs. wear out and stop contributing". Paul Moore thought that was an insightful point, and that the project should be willing to put in some work now in order to make contributing to the project easier in the future.

Nathan Goldbaum is a maintainer of the PyO3 project, which provides Rust bindings to the Python interpreter to support embedding Python in Rust applications and writing Python extensions in Rust. He said that having official Rust bindings would significantly reduce the amount of work he has to do to support new Python versions. Another PyO3 maintainer, David Hewitt, agreed, going on to suggest that perhaps CPython would benefit from looking at the API that PyO3 has developed over time and picking "the bits that work best".
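
For readers unfamiliar with PyO3, an extension module written against it follows a small declarative pattern. This sketch follows the example in the PyO3 user guide; the demo and shout names are invented, and it assumes the Bound-based API of recent PyO3 releases:

    use pyo3::prelude::*;

    // A toy extension function; PyO3 converts Python str arguments to &str
    // and the returned String back to a Python str.
    #[pyfunction]
    fn shout(s: &str) -> String {
        s.to_uppercase()
    }

    // The #[pymodule] function is the module's entry point. Built as a
    // cdylib, the result can be loaded from Python with `import demo`.
    #[pymodule]
    fn demo(m: &Bound<'_, PyModule>) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(shout, m)?)?;
        Ok(())
    }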

Raphael Gaschignard thought that the example Rust code Smith had provided would be a more compelling argument for adopting Rust if it demonstrated how using the language could simplify error handling and memory management compared to C code. Smith pointed out one such example, but concurred that the current proof-of-concept code wasn't a great demonstration of Rust's benefits in this area.
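
As a rough illustration of the contrast Gaschignard had in mind (a standalone sketch, not code from the pre-PEP): with Result and the ? operator, each fallible step propagates its error, and earlier allocations are freed automatically on every return path, where equivalent C typically needs goto-based cleanup labels.

    use std::num::ParseIntError;

    // Read a whitespace-separated file of integers and sum them. Any I/O
    // or parse error propagates via `?`, and `text` is dropped (freed) on
    // every exit path without explicit cleanup code.
    fn sum_of_ints(path: &str) -> Result<i64, Box<dyn std::error::Error>> {
        let text = std::fs::read_to_string(path)?;
        let total = text
            .split_whitespace()
            .map(str::parse::<i64>)
            .sum::<Result<i64, ParseIntError>>()?;
        Ok(total)
    }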

Gentoo developer Michał Górny said that the inclusion of Rust in CPython would be unfortunate for Gentoo, which supports many niche architectures that other distributions don't:

I do realize that these platforms are not "supported" by CPython right now. Nevertheless, even though there historically were efforts to block building on them, they currently work and require comparatively little maintenance effort to keep them working. Admittedly, even though the wider Python ecosystem with its Rust adoption puts quite a strain on us and the user experience worsens every few months, we still manage to provide a working setup.

[...]

That said, I do realize that we're basically obsolete and it's just a matter of time until some project pulls the switch and forces us to tell our users "sorry, we are no longer able to provide a working system for you".

Hewitt offered assistance with Górny's integration problems. "I build PyO3 [...] to empower more people to write software, not to alienate." Górny appreciated the thought, but reiterated that the problem here was Rust itself and its platform support.

Scaling back

In response to the concerns raised in the discussion, Smith and Podoprigora scaled back the goals of the proposal, saying that it should be limited to using Rust for optional extension modules (i.e. speeding up parts of the standard library) for the foreseeable future. They still want to see Rust adopted in CPython's core eventually, but a more gradual approach should help address problems raised by bootstrapping, language portability, and related concerns that people raised in the thread, Smith said.

That struck some people as too conservative. Jelle Zijlstra said that if the proposal were limited to optional extension modules, it would bring complexity to the implementation of the standard library for a marginal benefit. Many people are excited about bringing Rust to CPython, Zijlstra said, but restricting Rust code to optional modules means putting Rust in the place that it will do the least good for the CPython code. Several other commenters agreed.

Smith pushed back, saying that moving to Rust was a long-term investment in the quality of the code, and that having a slow, conservative early period of the transition would help build out the knowledge and experience necessary to make the transition succeed. She later clarified that a lot of the benefit she saw from this overly careful proposal was doing the groundwork to make using Rust possible at all: sorting out the build-system integration, starting to gather feedback from users and maintainers, and prototyping what a native Rust API for Python could look like. All of that has to happen before it makes sense to consider Rust in the core code — so even though she eventually wants to reach that state, it makes sense to start here.

At the time of writing, the discussion is still ongoing. The Python community has not reached a firm conclusion about the adoption of Rust — but it has definitely ruled out a fast adoption. If Smith and Podoprigora's proposal moves forward, it still seems like it will be several years before Rust is adopted in CPython's core code, if it ever is. Still, the discussion also revealed a lot of enthusiasm for Rust — and that many people would rather contribute code written in Rust than attempt to wrestle with CPython's existing C code.





Better idea: rewrite the whole thing in Rust

Posted Dec 5, 2025 19:38 UTC (Fri) by cyperpunks (subscriber, #39406) [Link] (6 responses)

Convert CPython into RustPython by converting pieces of CPython to Rust.

Better idea: rewrite the whole thing in Rust

Posted Dec 5, 2025 20:03 UTC (Fri) by ssokolow (guest, #94568) [Link] (4 responses)

That's what librsvg did. While maintaining the same external API, it was incrementally converted to Rust.

Here's the Rust tag on the maintainer's blog: https://viruta.org/tag/rust.html

Google has a more typical approach, since it's paid employees maintaining things like Android, not volunteers. They recognize that most memory-safety bugs are in young code, and that there's a risk of re-introducing logic bugs during a rewrite (differential fuzzing is your friend), so their approach with Android has been to move new work to Rust but not to rewrite existing C code for its own sake.

They blogged about that here: https://security.googleblog.com/2024/09/eliminating-memor...

Better idea: rewrite the whole thing in Rust

Posted Dec 5, 2025 20:41 UTC (Fri) by alx.manpages (subscriber, #145117) [Link]

Thanks! That article from Google is very interesting!

Better idea: rewrite the whole thing in Rust

Posted Dec 6, 2025 6:54 UTC (Sat) by mirabilos (subscriber, #84359) [Link] (1 responses)

It is also entirely unnecessary as there already *is* a RustPython, so kindly stick to C for CPython.

Better idea: rewrite the whole thing in Rust

Posted Dec 7, 2025 19:14 UTC (Sun) by ballombe (subscriber, #9523) [Link]

At least avoid a situation where a C implementation of the Python interpreter cannot be called CPython because that name is used by the Rust version. That would not do anybody a favor.

Better idea: rewrite the whole thing in Rust

Posted Dec 6, 2025 9:59 UTC (Sat) by kunitz (subscriber, #3965) [Link]

For a rewrite of a software component, you need to assess whether the cost of the rewrite plus the future maintenance cost is lower than the maintenance cost of the existing component.

Google made such an assessment, but I have yet to see the "rewrite everything in X" apostles provide one.

Better idea: rewrite the whole thing in Rust

Posted Dec 6, 2025 10:22 UTC (Sat) by jengelh (subscriber, #33263) [Link]

Why stop there? Convert CPython into a "PythonPython" and enjoy all the memory-safety of Python. If only someone were to create that impl… oh wait!

The same old arguments...

Posted Dec 6, 2025 4:36 UTC (Sat) by Heretic_Blacksheep (subscriber, #169992) [Link] (41 responses)

The same old arguments, particularly "rust doesn't support some platform target so the project shouldn't use it." I don't see that as a useful argument unless there's a significant percentage of users currently using Python on a target that Rust doesn't support... that support matrix is constantly growing: https://doc.rust-lang.org/nightly/rustc/platform-support....

There shouldn't be a handful of users on niche OS/hardware combos holding back a project that's used by tens of millions. I'm making that same argument as someone who still uses an old RPi B+ which is rapidly becoming unsupported by much of the FOSS ecosystem.

I took the opposite stance when it came to the recent kerfuffle over apt and Rust, not because it would cut out niche platforms, but because Debian isn't supposed to be a dictatorship. It's supposed to operate on diplomatic consensus, not one person's take-it-or-leave-it ultimatum. That's cause to migrate away or fork apt should it come to pass.

Consensus is what the Python group is attempting to build here before they make a move, and that's a Good Thing. As a mostly binary-only Python user I don't care how fast they move: if the consensus is to move slowly, that's fine! I'm equally fine if they move relatively quickly for such an important project. Integrating Rust will be a net positive for interpreter safety, especially if the interpreter is eventually entirely rewritten.

If my RPi B+ goes the way of the dino, that's fine, too. It doesn't need the latest and greatest updates. It only needs the software necessary to do what it does and is of no danger to my LAN or anyone else's as it's entirely air gapped. That model can't even be reached by WIFI without a USB NIC dongle.

The same old arguments...

Posted Dec 7, 2025 4:22 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (40 responses)

> There shouldn't be a handful of users on niche OS/hardware combos holding back a
> project that's used by tens of millions.

But they should!

This is how we get hobbyist OSes… like Minix and then Linux.

This is how we get hobbyist architectures, too.

And this is where the great enrichment of FOSS is, not in corporate “Enterprise” Linux.

With Rust… it begins with LLVM. The project that wants $300/month so they can run CI instances for a mere fork of FreeBSD, which at that time wasn't even all that different. And then Rust cannot even use LLVM proper, only its own patched version.

In Debian, we have a Policy against that. Which is, of course, ignored for where this money is.

Then, bootstrapping, then navigating the entire ecosystem (including the cargo LPM, which is a plethora of problems in itself)…

… and for what? For a language that doesn’t even support dynamic linking?

The same old arguments...

Posted Dec 7, 2025 4:58 UTC (Sun) by josh (subscriber, #17465) [Link] (13 responses)

> But they should!

> This is how we get hobbyist OSes… like Minix and then Linux.

> This is how we get hobbyist architectures, too.

No, it's not. We get hobbyist OSes (and other hobbyist projects) because the people working on them put in the work to make them happen, not because they can press other developers into service to make changes work on a target they were not seeking to work on.

Getting others to commit to *keep your OS or architecture working for you* is a very, very big ask. The correct answer in almost all cases is "no, the developers of that OS or architecture need to do all the work to maintain it".

> And then Rust cannot even use LLVM proper, only its own patched version.

This is misinformation. Rust builds just fine with standard LLVM. The only patches it carries in its branch of LLVM are the same kinds of bugfixes and backports that other users of LLVM carry.

> For a language that doesn’t even support dynamic linking?

C++ generics don't support dynamic linking either, and the only reason C++ is vaguely considered to "support" dynamic linking is that interfaces pointedly avoid generics. Even then, it's been through a few rounds of ABI breakage. That's not something we're looking to put our users through, for a feature that most people don't require. It's a useful feature, by all means, and I expect that we'll support it eventually, using a model similar to what Swift did. But it's not a dealbreaker, and it's never likely to be the default, just something that extends the set of things supported over a library ABI boundary to beyond what C supports.

The same old arguments...

Posted Dec 7, 2025 7:36 UTC (Sun) by mb (subscriber, #50428) [Link]

> > For a language that doesn’t even support dynamic linking?

>C++ generics don't support dynamic linking either

And Rust does support dynamic linking.

https://doc.rust-lang.org/reference/linkage.html

It's only that Rust crates typically won't make the C++ trade off to make that happen.
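
As a minimal sketch of what the linkage documentation describes (the crate contents here are invented): building with crate-type = ["cdylib"] in Cargo.toml produces an ordinary shared library that exports a C ABI, which is the trade-off most Rust crates choose instead of relying on the unstable Rust ABI.

    // With [lib] crate-type = ["cdylib"] in Cargo.toml, this builds into a
    // .so/.dylib/.dll exporting a plain C symbol that any language can call.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }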

The same old arguments...

Posted Dec 7, 2025 8:09 UTC (Sun) by ssmith32 (subscriber, #72404) [Link]

Even more to the point: we got Linux because a big OS with many users _did not_ support every use case.

In fact, I would say the lack of support for niche platforms encourages hobby projects..

The same old arguments...

Posted Dec 7, 2025 9:39 UTC (Sun) by Sesse (subscriber, #53779) [Link] (5 responses)

> C++ generics don't support dynamic linking either

This is only true for a pretty narrow definition of “support”. It is true that _someone_ has to monomorphize the generic before it can be linked (since the only real alternative AFAIK is either a VM or type erasure?); but that's true whether we're talking static or dynamic linking. Lots of C++ dynamic libraries expose functions involving these specializations; e.g., std::string is a generic (std::basic_string<char>) and libstdc++.so exposes a lot of functions related to it. In practice, you can upgrade libstdc++ pretty freely without anything related to vector, string, map, etc. breaking, but you can't easily change their internals, since they become effectively part of the ABI (like so many other things in C).

The same old arguments...

Posted Dec 7, 2025 18:43 UTC (Sun) by JoeBuck (subscriber, #2330) [Link] (4 responses)

Right; I expect that at some point the most commonly used generics in the standard library will have their implementations frozen enough so that a stable ABI can be produced and dynamic linking can be supported in Rust in more cases. This was done for C++ long ago.

The same old arguments...

Posted Dec 7, 2025 18:58 UTC (Sun) by josh (subscriber, #17465) [Link] (3 responses)

I doubt we'll ever stabilize the internal layout of anything more complicated than `Result` or `Option` (and even for those, doing so means giving up on any further opportunities for niche optimization, the mechanism by which a type like `Option<&T>` is the same size as `&T`).

For something like `Vec` or `HashMap`, the most likely path to stabilization is an opaque pointer plus a vtable of methods.
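
The niche optimization described above is easy to observe on stable Rust (a self-contained check, not code from the thread):

    // A reference can never be null, so Option<&T> uses the null bit
    // pattern for None and takes no extra space; Option<u64> has no such
    // niche and must store a discriminant.
    fn main() {
        assert_eq!(std::mem::size_of::<Option<&u64>>(),
                   std::mem::size_of::<&u64>());
        assert!(std::mem::size_of::<Option<u64>>() > std::mem::size_of::<u64>());
    }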

The same old arguments...

Posted Dec 7, 2025 23:40 UTC (Sun) by JoeBuck (subscriber, #2330) [Link] (2 responses)

That would be crazily inefficient for Vec<i32> or other vectors of atomic types. If the call is not inlined, the structure could be frozen, as it is for std::vector<int> in C++ when libstdc++ is in use. Likewise for string slice arguments.

The same old arguments...

Posted Dec 7, 2025 23:43 UTC (Sun) by josh (subscriber, #17465) [Link] (1 responses)

We could nail down slices easily enough. It might potentially be reasonable to give *some* direct access to `Vec`, since the triple of pointer, length, and capacity is the obvious implementation; that would allow efficient and vectorized access to the data. However, for instance, reallocation would likely still require a vtable call.
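
The triple in question is visible through Vec's stable accessors (a standalone snippet, not from the thread); reallocation can move the pointer, which is why growth would still have to go through a call rather than direct field access:

    fn main() {
        let mut v = vec![1i32, 2, 3];
        println!("ptr={:p} len={} cap={}", v.as_ptr(), v.len(), v.capacity());
        v.extend(0..1_000); // may reallocate and invalidate the old pointer
        println!("ptr={:p} len={} cap={}", v.as_ptr(), v.len(), v.capacity());
    }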

The same old arguments...

Posted Dec 9, 2025 19:41 UTC (Tue) by NYKevin (subscriber, #129325) [Link]

After careful consideration, I don't think Vec needs vtables for reallocation, because those vtables really should attach to Allocator instead of Vec. But we do need some ABI glue that currently does not exist:

* In the case of Vec<T> a/k/a Vec<T, Global>, Global is a well-known ZST that can be statically dispatched. No need for a vtable. This is the most common case, so ideally it should not be pessimized in order to support other cases (especially seeing as the global allocator can be replaced). The foreign code will need to call into Rust's global allocation routines, but you have to do that even in the (default) case where Global delegates to System, so that's unavoidable.
* In the case of Vec<T, A> where A is a ZST or never dropped, the ABI needs glue code to coerce the whole thing into Vec<T, &'static dyn Allocator>, and then the vtable logic lives in Allocator where it belongs. I'm assuming, of course, that we can also nail down the ABI of &dyn Trait, which is a whole other kettle of fish. But at least dyn Trait is explicitly designed to support dynamic dispatch - most of the technical choices have already been made.
* In the case of Vec<T, &'a A>, it's the same story but with a lifetime parameter. Not sure how well that translates over the ABI, but at least lifetimes add no extra gunk at runtime.
* In the general case, the Vec might own an allocator, which might not be a ZST. That coercion is more complicated because now the Vec itself is of unknown size (it directly contains the allocator's fields). I would be inclined to declare that as unsupported or at least out of scope for language-level support, in the interests of not overly pessimizing Vec<T, Global> to support a niche corner case. Probably it could still be supported at the library level by decoupling the ownership of the Allocator from the Vec, and instead passing Vec<T, &'a dyn Allocator>. But that conversion is complicated and unsafe, so maybe some kind of glue code would be helpful here as well. Or maybe this version of Vec really does need a vtable.

The same old arguments...

Posted Dec 7, 2025 20:56 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (2 responses)

> We get hobbyist OSes (and other hobbyist projects) because the
> people working on them put in the work to make them happen

Yes. But that only works if the upstreams accept such work: if they don't put too many hurdles in for hobbyists and raise complexity, and don't actively throw sticks and stones into their paths and extort “protection” money (money to merge the patches).

The same old arguments...

Posted Dec 7, 2025 23:45 UTC (Sun) by josh (subscriber, #17465) [Link]

Certainly there's no call to make it intentionally harder. But resources like CI systems and maintainer time do have a cost. Framing that as "protection money" is disingenuous.

The same old arguments...

Posted Dec 8, 2025 22:56 UTC (Mon) by intelfx (subscriber, #130118) [Link]

> Yes. But that only works if the upstreams accept such work and don’t not only put too many hurdles in for hobbyists and raise complexity but also actively throw sticks and stones into their paths and extort “protection” money (money to merge the patches).

As I'm sure you are aware, in FOSS, every project has the fundamental moral right to "self-determinate": to decide what, if any, level of formal or informal support and guarantees it wishes to make.

You normally hear about this in context of projects having the moral right to provide no guarantees and no support: the proverbial "as is". However, this works both ways. If a project, such as Rust, wishes to hold themselves to a higher standard — such as requiring all code to pass CI before declaring a target is supported — **YOU CANNOT STOP THEM FROM DOING SO**.

Calling this "extorting protection money" is so disingenuous and hostile that you should honestly be ashamed of saying that.

The same old arguments...

Posted Dec 12, 2025 18:25 UTC (Fri) by anton (subscriber, #25547) [Link] (1 responses)

> Getting others to commit to *keep your OS or architecture working for you* is a very, very big ask.

As someone who wants to keep the software I maintain working on minority hardware and software, I certainly won't use Rust in its current state for it. And reading some of the opinions expressed in the present discussion makes me happy that our project does not depend on CPython.

The same old arguments...

Posted Dec 12, 2025 20:22 UTC (Fri) by mirabilos (subscriber, #84359) [Link]

Thanks, that’s a statement that’s nice to read.

----

I’m also always surprised how ⓐ people go from “please merge my patches and just try to not actively break my arch” to “forcing others to commit to keep your OS or architecture working for you”, and ⓑ why that would even be onerous.

I mean, I’m not in the habit of writing shitty code that fails on other CPUs or (unixoid, mostly, as the things I write tend to target unixoid) OSes.

The same old arguments...

Posted Dec 7, 2025 7:04 UTC (Sun) by interalia (subscriber, #26615) [Link] (6 responses)

> But they should!

> This is how we get hobbyist OSes… like Minix and then Linux.

But how was the creation of Minix and Linux caused by users on niche OS/hardware holding back a large project? What large project are we talking about? I don't think the big commercial Unixes felt constrained by x86, nor did they create Minix/Linux.

It doesn't seem to me that either of them were written in order to support other people who were users of niche hardware. They were written by those niche hardware users themselves, which would also be the proposed solution if projects like Python, apt or Linux decide to drop an architecture.

The same old arguments...

Posted Dec 7, 2025 20:57 UTC (Sun) by mirabilos (subscriber, #84359) [Link] (5 responses)

I argue that if “a large project” is “held back” by support for more architectures/systems/targets, then it’s both an unportable and a shitty project and definitely NOT something that should become a cornerstone of FOSS.

The same old arguments...

Posted Dec 8, 2025 8:45 UTC (Mon) by taladar (subscriber, #68407) [Link] (4 responses)

You have that the wrong way around. The "unportable and shitty" bit is the old hardware; that is literally why nobody builds or wants to support it anymore: it fundamentally does something in a way that we figured out was a bad idea, or at the very least different from everyone else for no good reason.

The same old arguments...

Posted Dec 8, 2025 8:52 UTC (Mon) by Wol (subscriber, #4433) [Link] (3 responses)

I think you've got it the wrong way round.

All too often the majority modern way is the WRONG way, but unfortunately it won the race to the bottom.

I can't speak for x86_64, but the 68000? the 32032? MUCH better chips, much better designed, they just couldn't make headway against the 80x86 line ...

Cheers,
Wol

The same old arguments...

Posted Dec 8, 2025 9:54 UTC (Mon) by anselm (subscriber, #2796) [Link] (2 responses)

> I can't speak for x86_64, but the 68000? the 32032? MUCH better chips, much better designed, they just couldn't make headway against the 80x86 line ...


The story goes that the reason why IBM used the 8088 for the PC rather than the 68000 (which had already been available at the time) is that they didn't want PCs to be too powerful because they might have cannibalised sales of their minicomputer lines. A similar argument later kept IBM from introducing 80386-based PCs but then Compaq came out with one and the floodgates were open.

As far as the 68000 was concerned, it was certainly not for lack of trying on the part of the industry. At the time, various 68000-based computers like the Atari ST and Commodore Amiga were quite popular with home users but never made noticeable inroads in the business PC world (which was probably less to do with the technical merit of the platform(s) and more with terrible marketing and unwise product development decisions by their manufacturers). And of course the original Macintosh was 68000-based but the platform switched over to PowerPC and eventually x86 (and ARM) – much like early SUN-type Unix workstations were built around 680x0 chips before CISC fell out of fashion and the workstation makers all came up with their own RISC CPUs (SPARC, HPPA, …).

The same old arguments...

Posted Dec 8, 2025 10:31 UTC (Mon) by farnz (subscriber, #17727) [Link] (1 responses)

There's other parts to that story; the 68k had a 16 bit external data bus, where the 8088 had a mere 8 bit bus. This meant that the PC was a cheaper design, since it could reuse long-established 8 bit parts (and, indeed, if you look at the chips used in the IBM Personal Computer 5150 and the IBM System/23 Datamaster 5322 or 5324, you see a lot of overlap).

And, of course, the 32032 was a disaster zone of a chip. On paper, it was reasonable, but once you took the errata lists into account, it was awful, and you were better off with the 68000.

The same old arguments...

Posted Dec 9, 2025 11:27 UTC (Tue) by epa (subscriber, #39769) [Link]

Ah yes, and the 68008 (which also had an 8-bit data bus and could have been used to build a cheap m68k-based machine) didn't come out until 1982, too late for the IBM PC.

The same old arguments...

Posted Dec 7, 2025 9:28 UTC (Sun) by qyliss (guest, #131684) [Link]

> And then Rust cannot even use LLVM proper, only its own patched version.

This has not been true for years.

The same old arguments...

Posted Dec 7, 2025 10:10 UTC (Sun) by MortenSickel (subscriber, #3238) [Link] (1 responses)

>> There shouldn't be a handful of users on niche OS/hardware combos holding back a
>> project that's used by tens of millions.

>But they should!
>This is how we get hobbyist architectures, too.

No. Linux was initially written for the 80386, which was pretty far from hobbyist hardware in the early '90s. As a student back then, I had an 8086 myself, dreaming of being able to buy an 80286. I had access to 80386 PCs at the university (as well as a few other professional systems). Many of the hobbyist architectures of today are former professional architectures. (When I could take over and take home the HP-UX workstation I had used at work around 2000, that felt pretty cool and I was looking into installing Linux on it, but it turned out that I had other things to do with my life, so it ended up as electronic waste.)

The same old arguments...

Posted Dec 7, 2025 21:02 UTC (Sun) by mirabilos (subscriber, #84359) [Link]

It was hobbyist back then compared to the other Unix workstations. It was “the cheap PC”.

That it was still expensive in Europe doesn’t detract from the relative cheapness and hobbyist-ness.

I also only had an 8088 back in 1991. But the 80386 and systems with it had already been on the (American, I guess) market for years (1985 and 1986, respectively). And you kinda need one for Unix; the 80286 and below don't have the infrastructure to support it easily. The m68k series, also a favourite of hobbyists at that time, did, so it was pure chance that Torvalds did Intel first.

The same old arguments...

Posted Dec 8, 2025 1:29 UTC (Mon) by dvdeug (subscriber, #10998) [Link] (15 responses)

> This is how we get hobbyist OSes… like Minix and then Linux.

Like OSes built for the most mass-market CPU at the time? There are a lot of hobbyist OSes out there; Rosco is a new OS/system for the M68K. Its audience is fans of retrocomputing and the M68K, and it's not going to hit like Linux or Minix.

Linux on most of these architectures, besides x86 or ARM, has always been a rare usage. Even the high-end versions are now weaker than any computer on the market, with the exception of S390.

> And this is where the great enrichment of FOSS is

In the handful of people who still have archaic hardware and are installing new operating systems on them? I'd rather bet on the hundreds of millions of new people who are playing around with their first computer and might be convinced to become FOSS programmers, who could lead FOSS for the next forty years, rather than the thousands who want to run Linux on their ancient computers instead of doing anything forward-looking.

Yes, we should work with people who want to do what they want on Linux. But hurting the mainstream to support a small minority is not a win, whether you consider popular support or the enrichment of FOSS.

The same old arguments...

Posted Dec 8, 2025 1:46 UTC (Mon) by mirabilos (subscriber, #84359) [Link] (14 responses)

A society is not measured in how it treats the masses; rather, it is measured in how it treats its minorities.

And lots of great things do eventually come from minorities.

I am also thinking of self-hosted systems, not those cross-compiled from a majority system.

The same old arguments...

Posted Dec 8, 2025 8:49 UTC (Mon) by taladar (subscriber, #68407) [Link] (12 responses)

You are not a persecuted minority because other people don't want to invest any more effort into supporting your niche hobby hardware in their mainstream software code bases.

The same old arguments...

Posted Dec 8, 2025 11:19 UTC (Mon) by moltonel (subscriber, #45207) [Link] (11 responses)

Don't build a strawman; I didn't see anybody talking about persecution. The "how a society treats its minorities" insight is not just about treating minorities equally, but about how much community help they receive. For example, how many places are made wheelchair-accessible.

Likewise, mainstream community projects are generally willing to do a bit of extra work for niche archs, but the cut-off point for "you're on your own beyond that point" is fuzzy, subjective, and worth debating. Michał Górny's comment in the original thread is pretty clear-thinking: asking for understanding/flexibility/help, but acknowledging that mainstream can't wait forever.

It does look like Rust support work for/by some niche archs got invigorated, partly thanks to this Python discussion. That's a good thing for everybody.

The same old arguments...

Posted Dec 8, 2025 13:37 UTC (Mon) by pizza (subscriber, #46) [Link] (10 responses)

> I didn't see anybody talking about persecution. The "how a society treats its minorities" insight is not just about treating minorities equally, but about how much community help they receive. For example how many places are made wheelchair-accessible.

First, "community help" is funded by taxes, and "wheelchair-accessible" places are made so because they are forced to do so as a condition of running a public business, not out of the goodness of their hearts. I might add that that accessibility directly results in higher prices for everyone else.

> Likewise, mainstream community projects are generally willing to do a bit of extra work for niche archs,

Generally, that "extra work" is "patches welcome" and increasingly, "supply testing/CI resources that can be relied upon".

The same old arguments...

Posted Dec 8, 2025 16:30 UTC (Mon) by moltonel (subscriber, #45207) [Link] (9 responses)

> First, "community help" is funded by taxes, and "wheelchair-accessible" places are made so because they are forced to do so as a condition of running a public business, not out of the goodness of their hearts. I might add that that accessibility directly results in higher prices for everyone else.

These facts sound like a rebuttal, but I'm not sure of what? Yes, some (not all) community help is tax-funded and/or legally mandated, and has a price for the overall community. And yet we still do it: we pass those laws, spend that money, and encourage these volunteers. Why? Because we collectively decided that it was a good thing to do, whether for ethical or practical reasons. Societies keep adjusting how far community help can/should go, but there's a strong correlation between a healthy community and a helpful one.

> Generally, that "extra work" is "patches welcome" and increasingly, "supply testing/CI resources that can be relied upon".

Yes, though even "patches welcome" is not free: it costs reviewer time, ongoing maintenance, implicit commitment, etc. Every project is different: some don't accept patches, some will spend a lot of resources to help a single user.

In CPython's case, the argument is that remaining C-only has an ongoing cost, paid by the project to help minority platforms. That balance has shifted over time: 10 years ago, missing platform support was seen as Rust's problem and could legitimately prevent Rust adoption. Today more and more, it is seen as that platform's problem and dropping support has become the lesser evil.

The same old arguments...

Posted Dec 8, 2025 18:18 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> Yes, though even "patches welcome" is not free: it costs reviewer time, ongoing maintenance, implicit commitment, etc. Every project is different: some don't accept patches, some will spend a lot of resources to help a single user.

Agreed. One project I'm on is a "patches welcome" project for platforms we don't actively support (including WSL, MinGW, Cygwin, FreeBSD, etc.). We will review patches (as time affords), but we cannot guarantee that things won't break without contributed CI resources (and even then, they'll usually be "notification" as we don't have control over the machine(s) and cannot block paying customers on such open-ended things as "CI machine over there is down").

The same old arguments...

Posted Dec 8, 2025 19:51 UTC (Mon) by pizza (subscriber, #46) [Link] (7 responses)

> Societies keep adjusting how far community help can/should go, but there's a strong correlation between a healthy community and a helpful one.

Sure. But this is where calling "loose collection of software developers working on a project on an ad-hoc volunteeer basis and a vastly larger number of non-contributing users" a "community" breaks down.

In a "real" society/community, everyone has to explicitly opt in (if only by virtue of not leaving) and, once in, they have to continually pay (or otherwise contribute) on an ongoing basis (=~taxes) to receive those benefits. Non-compliance with those rules has real penalties that are ultimately enforced by, well, literal force.

...A society/community cannot function with zero or purely one-sided obligations.

The same old arguments...

Posted Dec 8, 2025 20:20 UTC (Mon) by moltonel (subscriber, #45207) [Link] (6 responses)

You're reading too much into this simile; how the rules are(n't) enforced or where resources come from is beside the point. AFAIU, mirabilos's point is just that it's generally a good thing for groups to spend some resources helping weaker members. This applies at every level of human societies. Always within reason: the group won't help beyond its means, or if there really is no expected return.

The same old arguments...

Posted Dec 8, 2025 23:02 UTC (Mon) by pizza (subscriber, #46) [Link]

> Always within reason: the group won't help beyond its means,

It's all well and good to say 'groups should spend some resources helping weaker members', but the fundamental point here remains the simple fact that there are [nearly always] [vastly] fewer available resources than demands placed upon them.

The same old arguments...

Posted Dec 9, 2025 1:23 UTC (Tue) by dvdeug (subscriber, #10998) [Link] (4 responses)

I don't see people running antique hardware as weaker members. They're people who run Arm or x86-64 for normal usage, and work on other systems because it's fun. They likely have more Arm/x86 computing power sitting around than the average user.

The same old arguments...

Posted Dec 9, 2025 7:42 UTC (Tue) by mirabilos (subscriber, #84359) [Link] (3 responses)

Really not.

I use a Thinkpad X61 as my daily driver for Linux. That's a Core2Duo from 2007.

I use a Thinkpad X40 as my daily driver for BSD. That's a Pentium M from 2004. I can do everything I need on it except Firefox and MuseScore.

I do have one Raspberry Pi 1… because I got it as a gift.

My home server is a pre-Spectre/Meltdown Pentium 233 MMX.

I use a “dumbphone” for telephoning… I also have an old smartphone, but mostly for GPS for geocaching and the likes.

You significantly overestimate what people need to run to have a good experience.

The same old arguments...

Posted Dec 9, 2025 8:03 UTC (Tue) by mjg59 (subscriber, #23239) [Link] (2 responses)

So you have a 64-bit x86 system that supports up to 8GB of RAM and is likely faster than any commercial RISC system that can be run without a ludicrous electricity bill. You don't *need* any alternative architectures - and I have enough junk under my desk that if that's the blocker on you running weird old stuff then I'll happily drag some over to Europe when I'm there next week and post them to you, and you can't even argue about it being a waste of hardware because right now I have several old laptops that are doing nothing.

I say this as someone still actively poking at Linux driver support for the Commodore CDTV, and trying to get Zorro III working under Amiga Unix. These are things I find fun to do. I would never ask anyone else to care in the slightest.

The same old arguments...

Posted Dec 9, 2025 9:05 UTC (Tue) by mirabilos (subscriber, #84359) [Link] (1 responses)

> So you have a 64-bit x86 system that supports up to 8GB of RAM and is likely

Yes, and people are calling it legacy and are wanting to remove support for it already.

It’s ridiculous, isn’t it?

The same old arguments...

Posted Dec 9, 2025 9:11 UTC (Tue) by mjg59 (subscriber, #23239) [Link]

In this context? No, Rust compiled code is going to be Just Fine on a Core 2 Duo.

The same old arguments...

Posted Dec 8, 2025 11:36 UTC (Mon) by dvdeug (subscriber, #10998) [Link]

> A society is not measured in how it treats the masses; rather, it is measured in how it treats its minorities.

Err, no, societies that have a minority in the lap of luxury on the backs of masses living in squalor don't get rated very high.

In this case, I feel like people who have Alphas and M68K and the rest of the hardware in question tend to be the technological elite, who have a modern computer to do their work on, and have already decided whether or not to be a part of the FOSS community. It's the young kids who we need to keep the community running for another 40 years.

What's in a name?

Posted Dec 6, 2025 14:05 UTC (Sat) by rrolls (subscriber, #151126) [Link] (10 responses)

I wasn't aware there was already a RustPython until I read mirabilos's comment above, but I'm not surprised it exists.

So my stance is very much: leave CPython alone, please and thank you.

Yes, we all love to hate C, but it has its place. Mostly, that place is not fixing old code that ain't broken. I've attempted to compile a few language implementations from source over the years, and I have to say that CPython was one of the easiest, largely thanks to it being plain C.

If the PSF wants to endorse a Python interpreter written in Rust, it should go the whole hog: focus development and resources on completing the implementation of RustPython, officially endorse it as "the default Python", and label CPython as the alternative.

Bringing Rust into CPython would only create a mix of legacy and new-fangled, which is surely the worst of both worlds. It gives you all the annoying requirements of the new thing, and doesn't remove any of the burdens of the old thing you wanted to get rid of.

At the very least, don't call CPython CPython if it's not going to be C anymore.

Personally, I'd like to see a ZigPython.

What's in a name?

Posted Dec 6, 2025 15:59 UTC (Sat) by ssokolow (guest, #94568) [Link] (1 responses)

> So my stance is very much: leave CPython alone, please and thank you.

Given that the people who opened the discussion were "two of Python's core developers", and you're not paying them, one could argue that the "fair and kind" way to "leave CPython alone" would be to just abandon it and mass-exodus to some kind of "NeoPython" fork, like how LibreOffice came to be when Oracle bought OpenOffice.org, leaving people like you free to take up maintainership of CPython.

It's their project and they're free to do whatever they feel is best... especially if they're not getting paid to do it. You're free to fork the last version you like. That's the FLOSS social contract.

(It's not as if Microsoft was obligated to continue supporting Windows 98 SE or Windows XP or Windows 7, and it's not as if the Linux kernel would have been obligated to keep Firewire support if not for someone stepping up to maintain it.)

What's in a name?

Posted Dec 6, 2025 19:03 UTC (Sat) by josh (subscriber, #17465) [Link]

Exactly: the project is not obligated to keep "the last C-only version" alive, but you can make such a fork yourself if you believe that people would rather use your fork than the official version of Python.

What's in a name?

Posted Dec 6, 2025 16:50 UTC (Sat) by ptime (subscriber, #168171) [Link] (2 responses)

The Zig compiler and manuals are freely available as soon as you’re ready to get to work.

What's in a name?

Posted Dec 7, 2025 1:23 UTC (Sun) by atai (subscriber, #10977) [Link] (1 responses)

Why is anyone ready to do this work? Or wants to do this work?

What's in a name?

Posted Dec 7, 2025 10:09 UTC (Sun) by intelfx (subscriber, #130118) [Link]

>>> Personally, I'd like to see a ZigPython.
>> The Zig compiler and manuals are freely available as soon as you’re ready to get to work.
> Why is anyone ready to do this work?Or wants to do this work?

Why would I care?

This is the only possible response one can get in FOSS when one says "I want to see X" (or, conversely, "I don't want to see X that someone else is doing"): get to work yourself.

What's in a name?

Posted Dec 7, 2025 5:26 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> If the PSF wants to endorse a Python interpreter written in Rust, it should go the whole hog: focus development and resources on completing the implementation of RustPython, officially endorse it as "the default Python", and label CPython as the alternative.

Yeah, like they did with Python 3.

Rewriting something from scratch while the original product is still being used is almost always a bad idea.

What's in a name?

Posted Dec 7, 2025 18:23 UTC (Sun) by alx.manpages (subscriber, #145117) [Link] (2 responses)

> Yeah, like they did with Python 3.

The problem with Python 3 was backwards compatibility of the language.

> Rewriting something from scratch while the original product is still being used is almost always a bad idea.

In this case, the problem is simpler:

Rewriting something from scratch is almost always a bad idea.

Some caveat is true:

Doing it while you use it makes it less terrible, because you'll get bug reports gradually, so you boil the frog alive. If one offered the rewritten version all at once, the number of bugs would be so high that no-one would accept it.

But it's still a bad idea.

What's in a name?

Posted Dec 7, 2025 18:55 UTC (Sun) by khim (subscriber, #9252) [Link]

> Rewriting something from scratch is almost always a bad idea.

Not true. I've seen things being rewritten from scratch by someone to become much better. In Linux and outside of Linux.

But there is one catch: if something can be understood and written by one single person (or maybe a very small group, 3-5 people with one lead and a few helpers), then things very often work beautifully.

If something couldn't be understood and written by one person in a reasonable timeframe (a couple of years tops), then the rewrite very often just becomes a mirror of the first attempt, with glaring flaws in slightly different places.

What's in a name?

Posted Dec 7, 2025 22:59 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> Doing it while you use it makes it less terrible, because you'll get bug reports gradually, so you boil the frog alive. If one offered the rewritten version at once, the amount of bugs would be so high that no-one would accept it.

It's more than that. For complicated products, it's almost always impossible to replicate all the behaviors without actively keeping two codebases in sync. In both directions. And this is doubly complicated if the original branch continues to get new features. Perl 5 vs. Perl 6 comes to mind as a good example.

What's in a name?

Posted Dec 8, 2025 9:44 UTC (Mon) by danielthompson (subscriber, #97243) [Link]

> If the PSF wants to endorse a Python interpreter written in Rust, it should go
> the whole hog: focus development and resources on completing the
> implementation of RustPython, officially endorse it as "the default Python",
> and label CPython as the alternative.

I think this misses the difference between having a Python interpreter written in Rust and having a Python interpreter written in <who-cares> that provides official Rust and C APIs for extensions.

Part of Python's culture, and value to new programmers, is the "batteries included" standard library. If a large number of extensions start to be written in Rust it might well be detrimental to the ecosystem to have to say "that would be a great addition to the standard library but you need to rewrite it in C".

I'd say that's the most valuable part of the pre-PEP: seeking to provide a road map and advice for people writing new Python libraries that might be good candidates for the standard library.

Maybe we don't need everything under the sun in the stdlib?

Posted Dec 7, 2025 4:14 UTC (Sun) by kmeyer (subscriber, #50720) [Link]

It seems fine to me to just continue to create new Rust Python libraries using PyO3 and have them available as 3rd party packages. Python is kind of a weird language in that it bundles so much ... cruft ... into the standard library. There is or was a lot of unloved random code in there for no particular reason and I'm not sure it needs more of that.

If you want to use Rust in core CPython, of course, my proposal doesn't help. But if all we're talking about is some modules, they can live out of tree.

