Crossposted from world spirit sock puppet.
“We can’t prevent progress,” say the people who are, for some reason, enthusiastically advocating that we just risk dying by AI rather than even consider contravening this law.
I have several problems with this, beyond those unsubtly hinted at above.
First, it seems to be willfully conflating ‘increasing technological understanding and/or tools’ with ‘things getting better’. The word ‘progress’ generally means ‘things getting better’, but here, in a debate about whether it is good for society to acquire and spread some specific information and tools, we are being asked to label all increases in information and tools as ‘progress’, which is quite the presumption of a particular conclusion.
(Yes, the sub-debate here is more narrowly about whether averting technology is feasible, not whether it is good, but the bid to implicitly grant that the infeasible thing is also reprehensible and backward to want (i.e. anti-“progress”) seems unfriendly.)
If we separate the conflated concepts—i.e. distinguish ‘increasing technological information and tools’ from ‘things getting better’—the statement doesn’t seem remotely true for either of them.
Take the first: preventing things from getting better is a capability humans have had at least as far back as the Sea Peoples of Bronze Age collapse fame. (If indeed we go ahead and make machines that do in fact destroy humanity, we will also have prevented ‘progress’ in the normal sense.)
But now let’s consider preventing ‘increasing technological information and tools’, which seems like the more relevant contention. I’m honestly a bit unsure what the position is here. Do people think, for instance, that the FDA doesn’t slow down the pharmaceutical industry? Do they think that the pharmaceutical industry is too small and too insulated from financial incentives for its slowing down to be evidence about AI?
Perhaps we just don’t usually think of the pharmaceutical industry as ‘slowed down’ because we are used to that as the way it operates? Or perhaps this doesn’t count because the point isn’t to slow it down; it’s just to have it proceed at the rate at which it can proceed safely for people, with the slowness as an unfortunate side effect. In which case, fine: that would also do for AI!
In case this example is for some reason wanting, here are more examples of technologies slowed down to something more like a halt, from a previous post (more detail here also):
- Huge amounts of medical research, including really important medical research: e.g. the FDA banned human trials of strep A vaccines from the 1970s to the 2000s, in spite of 500,000 global deaths from the disease every year. A lot of people also died while covid vaccines went through all the proper trials.
- Nuclear energy
- Fracking
- Various genetics things: genetic modification of foods; gene drives; early recombinant DNA research, whose practitioners famously organized a moratorium and then ongoing research guidelines, including the prohibition of certain experiments (see the Asilomar Conference)
- Nuclear, biological, and maybe chemical weapons (or maybe these just aren’t useful)
- Various human reproductive innovations: cloning of humans, genetic manipulation of humans (a notable example of an economically valuable technology that is, to my knowledge, barely pursued across different countries, without explicit coordination between those countries, even though it would make those countries more competitive. Someone used CRISPR on babies in China, but was imprisoned for it.)
- Recreational drug development
- Geoengineering
- Much of science about humans? I recently ran this survey, and was reminded how encumbering ethical rules are for even incredibly innocuous research. As far as I could tell, the EU now makes it illegal to collect data in the EU unless you promise to delete the data from anywhere it might have gotten to, should the person who gave you the data wish that at some point. In all, dealing with this and IRB-related things added maybe more than half of the effort of the project. Plausibly I misunderstand the rules, but I doubt other researchers are radically better at figuring them out than I am.
- […]
Aside from the seeming disconnect with empirical evidence, I’m confused by the theoretical model here. Do people think the rate of technological development can’t be affected by funding, or by the costs of inputs, or by regulation? Or do they think these factors would affect technology, but that this will never happen in practice because the relevant decision-makers will never have the will?
Do they also think technology cannot be sped up? If so, how is that different?
Do they just mean you can’t fully grind it to a halt, preventing all progress? That may be so, but in that case, slowing it down a lot would generally suffice!