Discussion:
Does Python Need Virtual Threads? (Posting On Python-List Prohibited)
Lawrence D'Oliveiro
2025-06-14 04:11:55 UTC
Permalink
Short answer: no.

<https://discuss.python.org/t/add-virtual-threads-to-python/91403>

Firstly, anybody appealing to Java as an example of how to design a
programming language should immediately be sending your bullshit detector
into the yellow zone.

Secondly, the link to a critique of JavaScript that dates from 2015, from
before the language acquired its async/await constructs, should be another
warning sign.

Looking at that Java spec, a “virtual thread” is just another name for
“stackful coroutine”. Because that’s what you get when you take away
implicit thread preemption and substitute explicit preemption instead.

The continuation concept is useful in its own right. Why not concentrate
on implementing that as a new primitive instead?
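A plain Python generator already behaves like such a suspendable continuation today: each yield freezes the computation at that point, and the caller can resume it later. A minimal sketch (the names are illustrative only, not a proposed primitive):

```python
def accumulate():
    """A resumable computation: each yield suspends it; send() resumes it
    at exactly that point, i.e. the yield acts as the continuation."""
    total = 0
    while True:
        value = yield total  # suspend here; caller resumes with send(value)
        if value is None:
            break
        total += value

cont = accumulate()
next(cont)            # run up to the first suspension point
print(cont.send(5))   # resume with 5 -> prints 5
print(cont.send(7))   # resume with 7 -> prints 12
```

This is a stackless coroutine: the suspension point can only be in the generator's own frame, which is exactly the limitation a first-class continuation primitive would lift.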
Paul Rubin
2025-06-14 11:29:07 UTC
Permalink
Post by Lawrence D'Oliveiro
Looking at that Java spec, a “virtual thread” is just another name for
“stackful coroutine”. Because that’s what you get when you take away
implicit thread preemption and substitute explicit preemption instead.
Try using Erlang a little. It has preemptive lightweight processes and
it is great. Much better than async/await, imho.
Lawrence D'Oliveiro
2025-06-14 23:10:55 UTC
Permalink
Post by Paul Rubin
Try using Erlang a little. It has preemptive lightweight processes and
it is great. Much better than async/await, imho.
Those are called “threads”. Python already has those, and the ongoing
“noGIL” project will make them even more useful.

There’s a reason why the old coroutine concept was brought back (albeit in
this new “stackless” guise): because threads are not the best answer to
everything.
Paul Rubin
2025-06-15 01:25:26 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Paul Rubin
Try using Erlang a little. It has preemptive lightweight processes and
it is great. Much better than async/await, imho.
Those are called “threads”. Python already has those, and the ongoing
“noGIL” project will make them even more useful.
Erlang's lightweight processes are called "processes" rather than
"threads" since they don't give the appearance of having shared memory.
They communicate by passing data through channels. From the
application's perspective, that is always done by copying the data,
although the VM sometimes optimizes away the copying behind the scenes.

Python has OS threads, but they are way more expensive than Erlang
processes. Programming with them in an Erlang-like style can still work
pretty well.
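The Erlang-like style can be approximated in Python with threads that share no state and communicate only by passing messages through queues. A minimal sketch (the `worker` protocol here is improvised, not any standard API):

```python
import queue
import threading

def worker(inbox: queue.Queue, outbox: queue.Queue):
    """An Erlang-style 'process': owns all its state, talks only via messages."""
    while True:
        msg = inbox.get()
        if msg is None:          # sentinel message: shut down
            break
        outbox.put(msg * 2)      # reply with a message, never shared memory

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
inbox.put(21)
print(outbox.get())              # prints 42
inbox.put(None)                  # ask the 'process' to exit
t.join()
```

As the thread notes, each such "process" costs a full OS thread in CPython, so this scales far less than BEAM's lightweight processes.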
Lawrence D'Oliveiro
2025-06-15 02:13:33 UTC
Permalink
Post by Paul Rubin
Erlang's lightweight processes are called "processes" rather than
"threads" since they don't give the appearance of having shared memory.
They communicate by passing data through channels. From the
application's perspective, that is always done by copying the data,
although the VM sometimes optimizes away the copying behind the scenes.
Python has OS threads but they are way more expensive than Erlang
processes.
Sharing process context is cheaper than having to keep copying data back
and forth. Clever tricks with the paging hardware can often be more
trouble than they’re worth.

Remember, Python’s threads are OS threads. If you’re thinking “expensive”,
you must be assuming “Microsoft Windows”.
Paul Rubin
2025-06-15 20:24:56 UTC
Permalink
Post by Lawrence D'Oliveiro
Remember, Python’s threads are OS threads. If you’re thinking “expensive”,
you must be assuming “Microsoft Windows”.
Let's see how CPython holds up with a million OS threads running. Even
being able to disable the GIL and use more than one core is very recent.
Lawrence D'Oliveiro
2025-06-15 20:59:43 UTC
Permalink
Post by Paul Rubin
Post by Lawrence D'Oliveiro
Remember, Python’s threads are OS threads. If you’re thinking
“expensive”, you must be assuming “Microsoft Windows”.
Let's see how CPython holds up with a million OS threads running.
Linux can already run hundreds of thousands of processes/threads (there’s
not a lot of difference between the two on Linux). Remember why pid_t is
32 bits, not 16 bits.

See the definition of /proc/sys/kernel/threads-max
<https://manpages.debian.org/proc_sys_kernel(5)>.
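That limit is easy to inspect from Python (the path is Linux-specific, so this is guarded):

```python
import os

# Linux-specific: the kernel's upper bound on the number of threads
path = "/proc/sys/kernel/threads-max"
if os.path.exists(path):
    with open(path) as f:
        print(int(f.read()))   # typically in the hundreds of thousands
```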
Paul Rubin
2025-06-15 21:33:28 UTC
Permalink
Post by Lawrence D'Oliveiro
Post by Paul Rubin
Let's see how CPython holds up with a million OS threads running.
Linux can already run hundreds of thousands of processes/threads
To misquote Austin Powers, "one MILLLLLION threads". Here Erlang
does it in less than 1GB of memory:

https://hauleth.dev/post/beam-process-memory-usage/
Lawrence D'Oliveiro
2025-06-16 01:14:34 UTC
Permalink
Post by Paul Rubin
Post by Lawrence D'Oliveiro
Post by Paul Rubin
Let's see how CPython holds up with a million OS threads running.
Linux can already run hundreds of thousands of processes/threads
To misquote Austin Powers, "one MILLLLLION threads". Here Erlang does
it in less than 1GB of memory:
https://hauleth.dev/post/beam-process-memory-usage/
Just a note that Erlang dates from before the current state of CPU
architectures, where you have a 100:1 disparity between RAM-access speeds
and CPU register-access speeds.

In other words, you do not want to copy stuff between processes if you can
help it. With threads sharing common memory, that data can reside in
caches that multiple threads in the same process context can share without
copying.
Paul Rubin
2025-06-16 04:02:46 UTC
Permalink
Post by Lawrence D'Oliveiro
In other words, you do not want to copy stuff between processes if you can
help it.
I'd be interested in seeing some benchmarks of multi-threaded Python
beating Erlang, if you have any to show. Otherwise, you are guessing.
Stuff copied between Erlang processes tends to be pretty small, fwiw.
Lawrence D'Oliveiro
2025-06-17 02:12:03 UTC
Permalink
Post by Paul Rubin
I'd be interested in seeing some benchmarks of multi-threaded Python
beating Erlang, if you have any to show.
Since you ask, I tried running up a simple program that creates lots of
dummy threads that do nothing but sleep for a few seconds, and reports on
its RAM usage by reading /proc/self/statm.

I am currently up to a bit over 25,000 threads (the default limit is
somewhere just under 26,000). The program reports its VM usage as over
200GB, which is way more than my total RAM + swap space, but in fact the
free(1) command reports that RAM and swap usage are nowhere near that high. The
resident RAM usage while all the threads are running is reported at about
430MB.

In other words, multiply that by 40, and a million threads should get the
program’s RAM usage up to maybe 18GB.
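The experiment described above might look roughly like this (a sketch, not the original program; the thread count is scaled down here, and /proc/self/statm is Linux-specific):

```python
import os
import threading
import time

N = 1000  # scaled down from the ~25,000 threads described above

def snooze():
    time.sleep(2)  # do nothing but keep a thread alive

threads = [threading.Thread(target=snooze) for _ in range(N)]
for t in threads:
    t.start()

# /proc/self/statm reports sizes in pages: total VM first, resident set second
if os.path.exists("/proc/self/statm"):
    with open("/proc/self/statm") as f:
        vm_pages, rss_pages = map(int, f.read().split()[:2])
    page = os.sysconf("SC_PAGE_SIZE")
    print(f"VM {vm_pages * page // 2**20} MiB, RSS {rss_pages * page // 2**20} MiB")

for t in threads:
    t.join()
```

The large VM figure mostly reflects reserved (not resident) thread stacks, which is why RSS stays far below it.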
Mild Shock
2025-06-14 21:23:23 UTC
Permalink
Concerning virtual threads, the only problem I have with Java is that
JDK 17 doesn't have them, and some Linux distributions are stuck with
JDK 17.

Otherwise it's not an idea that belongs solely to Java; I think Go
pioneered them with its goroutines. I am planning to use them more
heavily when they become more widely available, and I don't see any
objection in principle to Python having them as well. It would make
async I/O based on async-waiting for a thread maybe more lightweight.
But this would only be important if you have a high number of tasks.
Mild Shock
2025-06-23 11:29:09 UTC
Permalink
Hi,

async I/O in Python is extremely disappointing
and an annoying bottleneck.

The problem is that async I/O via threads is currently extremely slow.
I use a custom async I/O file-property predicate. It doesn't need to be
async for file system access, but for historical reasons I made it
async, since the same file-property routine might also do an HTTP HEAD
request. What I was testing and comparing was a simple file system
access inside a wrapped thread that is async awaited. Such a thread is
called for a couple of directory entries, to check a directory tree for
whether updates are needed. Here are some measurements of this simple
operation involving a little async I/O:

node.js: 10 ms (usual Promises and stuff)
JDK 24: 50 ms (using Threads, not yet VirtualThreads)
pypy: 2000 ms

So currently PyPy is 200 times slower than node.js when it comes to
async I/O. No files were read or written in the test case; only "mtime"
was read, via this Python line:

stats = await asyncio.to_thread(os.stat, url)

Bye
Mild Shock
2025-06-23 22:32:26 UTC
Permalink
So what does

stats = await asyncio.to_thread(os.stat, url)

actually do? Well, it calls, in a separate new secondary thread:

os.stat(url)

It happens that url is just a file path, and the file path points to an
existing file. So the secondary thread computes the stats and
terminates, the async framework hands the stats back to the main thread
that did the await, and the main thread stops waiting and continues to
run cooperatively with the other tasks in the current event loop. The
test case measures the wall time.
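The sequence just described can be sketched as a self-contained snippet (the path here is improvised; the original test used real directory entries):

```python
import asyncio
import os

async def main():
    # os.stat runs in a secondary worker thread; this task is suspended
    # until the worker finishes and the result is handed back to the loop.
    stats = await asyncio.to_thread(os.stat, os.getcwd())
    return stats.st_mtime

print(asyncio.run(main()))   # prints the mtime of the current directory
```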
Post by Mild Shock
node.js: 10 ms (usual Promises and stuff)
JDK 24: 50 ms (using Threads, not yet VirtualThreads)
pypy: 2000 ms
I am only using one main task, doing such await calls sequentially, on
a couple of files, not more than 50.

I could compare with removing the async detour,
to quantify the async I/O detour overhead.
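One way to quantify that detour is to time a direct call against the to_thread wrapper. A sketch under stated assumptions (the path and file count are stand-ins, not the original test program):

```python
import asyncio
import os
import time

PATH = os.getcwd()   # any existing path stands in for the real entries
N = 50               # roughly the number of files in the original test

def bench_direct():
    """Plain synchronous os.stat calls."""
    t0 = time.perf_counter()
    for _ in range(N):
        os.stat(PATH)
    return (time.perf_counter() - t0) * 1000

async def bench_detour():
    """The same calls, each routed through a worker thread and awaited."""
    t0 = time.perf_counter()
    for _ in range(N):
        await asyncio.to_thread(os.stat, PATH)
    return (time.perf_counter() - t0) * 1000

print(f"direct: {bench_direct():.2f} ms")
print(f"detour: {asyncio.run(bench_detour()):.2f} ms")
```

The gap between the two numbers is the per-call cost of thread handoff plus event-loop scheduling, which is what the measurements in this thread are attributing the slowdown to.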
Mild Shock
2025-06-23 22:42:14 UTC
Permalink
Hi,

I have some data on what the async detour usually
costs. I just compared with another Java Prolog
that didn't do the thread thingy.
Post by Mild Shock
JDK 24: 50 ms (using Threads, not yet VirtualThreads)
New additional measurement with an alternative Java Prolog:

JDK 24: 30 ms (no Threads)

But even the version using Threads is quite optimized: it basically
reuses its own thread and uses a mutex somewhere, so it doesn't really
create a new secondary thread unless a new task is spawned. Creating a
2nd thread is silly if tasks have their own thread. This is the main
potential of virtual threads in upcoming Java: just run tasks inside
virtual threads.

Bye

P.S.: But I should measure with more files, since the 50 ms and 30 ms
are quite small. Also I am using a warm run, so the files and their
meta information are already cached in operating system memory. I am
trying to measure only the async overhead, but maybe Python doesn't
trust the operating system memory and calls some disk sync somewhere.
I don't know. I don't open and close the files, and don't call any disk
syncing. Only reading stats to get mtime and doing some comparisons.
Mild Shock
2025-06-23 22:48:20 UTC
Permalink
Hi,

I tested this one:

Python 3.11.11 (0253c85bf5f8, Feb 26 2025, 10:43:25)
[PyPy 7.3.19 with MSC v.1941 64 bit (AMD64)] on win32

I haven't yet tested this one, because it is usually slower:

Python 3.14.0b2 (tags/v3.14.0b2:12d3f88, May 26 2025, 13:55:44)
[MSC v.1943 64 bit (AMD64)] on win32

Bye
Mild Shock
2025-06-24 06:53:40 UTC
Permalink
Hi,

Everybody who puts me personally on CC: and posts from
python-***@python.org: please note, I cannot respond on
python-***@python.org.

Somebody blocked me on python-***@python.org.
If you want a discussion, post on comp.lang.python.
And stop spamming me with your CC:.

Bye

P.S.: BTW, I got blocked after this moron wrote this nonsense. It is
complete nonsense, now that everybody is talking about AsyncAPI, and
since Dogelog Player evolved into async, simply via its 2nd target,
JavaScript. What company was he working for? A loser company, Teledyne?

------------------- begin moron ---------------------

Opinion: Anyone who is counting on Python for truly
fast compute speed is probably using Python for the
wrong purpose. Here, we use Python to control Test
Equipment, to set up the equipment and ask for a
measurement, get it, and proceed to the next measurement;
and at the end produce a nice formatted report. If we
wrote the test script in C or Rust or whatever it
could not run substantially faster because it is
communicating with the test equipment, setting it up
and waiting for responses, and that is where the vast
majority of the time goes. Especially if the measurement
result requires averaging it can take a while. In my
opinion this is an ideal use for Python, not just
because the speed of Python is not important, but also
because we can easily find people who know Python, who
like coding in Python, and will join the company
to program in Python ... and stay with us.
--- Joseph S.

Teledyne Confidential; Commercially Sensitive Business Data

https://mail.python.org/archives/list/python-***@python.org/thread/RWEKXFW4WED7KNI67QBMDTC32EAEU3ZT/

------------------- end moron -----------------------