I recently read “The race for an artificial general intelligence: implications for public policy” at work. I don’t want to pick on this paper in particular, but there’s only so many times I can read sentences such as:
“the problem with a race for an AGI is that it may result in a poor-quality AGI that does not take the welfare of humanity into consideration”
before I can’t take it any more. This is just the paper that tipped me over the edge.
AGIs are already among us.
I promise I haven’t gone crazy after discovering one data preprocessing bug too many! I’m going to lay out some simple assumptions and show that this follows from them directly. By the end of this post you may even find you agree!
What will access to human-level AI be like?
This is a good starting point, because human-level intelligence clearly isn’t enough to recursively design smarter AIs - if it were, we’d already have done so. This lets us step away from the AI singularity dogma for a moment and think about how we would use this AGI in practice.
Let’s assume an AGI runs at real-time human-level intelligence on something like a small Google TPU v3 pod, which costs $32 / hour right now.
You can spin up a lot of these, but you can’t run an infinite number of them. For around $6b you could deploy about as many human-level intelligences as the CPU design industry employs, and accomplish 3 years’ work in 1 year, assuming AI doesn’t need to sleep. Training them to the level of their human counterparts might cost 10 times that, but we’ll assume someone else has already done it and we can duplicate their checkpoint for free.
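To make that concrete, here’s the arithmetic as a quick Python sketch. Reading the $6b as one year of compute spend, and assuming an 8-hour human working day, are my own round-number choices:

```python
# Back-of-envelope: what does $6b of compute buy at human-level intelligence?
TPU_POD_USD_PER_HOUR = 32        # small TPU v3 pod, per the text
HOURS_PER_YEAR = 24 * 365
BUDGET_USD = 6e9                 # read as one year of spend (my assumption)

cost_per_agi_year = TPU_POD_USD_PER_HOUR * HOURS_PER_YEAR  # ≈ $280,320
agi_headcount = BUDGET_USD / cost_per_agi_year             # ≈ 21,400 AGIs

# An AGI that never sleeps does 24 hours of work per 8-hour human day,
# which is where "3 years' work in 1 year" comes from.
speedup = 24 / 8

print(f"Cost per AGI-year:  ${cost_per_agi_year:,.0f}")
print(f"AGIs on the budget: {agi_headcount:,.0f}")
print(f"Speedup vs humans:  {speedup:.0f}x")
```

That works out to roughly twenty thousand engineers who never sleep.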
What did we just do here, apart from put CPU verification engineers out of work?
AGI let us spend capital ($6b) to achieve imprecisely-specified goals (improved CPU design) over time (1 year). In this brave new AI-enabled future anybody with access to capital and sufficient time can get human-level intelligences to work on their goals for them!
This would be revolutionary if it weren’t already true. It has been true since societies agreed on the use of currency - you can pay someone money to work towards your goals, and they do that instead of e.g. growing crops to feed their family, because they can buy food instead. Human-level intelligence has already been commoditized - we call it the labour market.
Human-level AGI would allow companies to arbitrage compute against human labour, which would be massively disruptive to the labour force - and therefore to society as a whole - but only in the same way that outsourcing and globalization already were (i.e. massively).
Anyone with access to capital can start a company, hire someone as CEO and tell them to spend that money as necessary to achieve their goals. If the CEO is a human-level AGI then they’re cheaper, because you only have to pay for the TPU hours. On the other hand, they can’t work for stock or options! Either way, the opportunity for you as a capital owner is basically the same. Money, time and goals in, results out.
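For a sense of scale, here’s a rough comparison under the same pricing assumption; the human CEO package below is an illustrative placeholder, not a sourced figure:

```python
# Rough annual cost comparison: AGI CEO vs human CEO.
agi_ceo = 32 * 24 * 365     # TPU hours only: ≈ $280k/year
human_ceo = 10_000_000      # illustrative compensation package (my assumption)

print(f"AGI CEO:   ${agi_ceo:,}/year")
print(f"Human CEO: ${human_ceo:,}/year (assumed)")
print(f"The AGI is ~{human_ceo / agi_ceo:.0f}x cheaper in cash terms")
```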
The whole versus the sum of its parts
Perhaps you believe that hundreds or thousands of human-level AIs working together, day and night, will accomplish things that greatly outstrip what any single human intelligence could do. That the effective sum intelligence of this entity will be far beyond that of any individual?
I agree! That’s why humans work together all the time. No single human could achieve spaceflight, launch communications satellites, lay intercontinental cables across the ocean floor, design and build silicon fabs, CPUs, a mobile communications network, an iPhone and the internet - and do it all cheaply enough that they can afford to use it to send a video of Fluffy falling off the sofa to a group of strangers.
Companies - today mostly formed as corporations - are already augmented super-human intelligences that work towards the goals specified by their owners.
We might end up with a “poor-quality AGI that does not take the welfare of humanity into consideration”
Yes, well. I think I could make the argument that we have literally billions of “poor-quality” general intelligences that do not take the welfare of humanity into consideration! They are not the biggest problem, though. The problem is that the goal-solving superintelligences of our time - particularly corporations - are generally aligned to the goals of their owners rather than to the welfare of humanity.
Those owners are, in turn, only human - so this should not come as a surprise. We are already suffering the effects of the “alignment problem”. People as individuals tend to put their own desires and families ahead of those of humanity as a whole. Some of those people have access to sufficient capital to direct huge expenditures of intelligence and labour towards their own desires and families and not towards the good of humanity as a whole.
And they do.
There is ample evidence throughout history, both distant and recent, that just because the individual parts are humans does not mean that the organization as a whole will show attributes such as compassion or conscience.
They do not.
AGIs are already changing the world
The promise of AGI is that you can specify a goal, provide resources, and have those resources consumed to achieve that goal. This is already possible simply by employing another human intelligence. Corporations - which have legal status in many ways equivalent to a “person” - are a very successful way to commoditize this today. The legal person of a corporation can exhibit super-human intelligence, and is at best aligned with its owners’ goals rather than those of humanity as a whole. This is even enshrined in the principle of fiduciary responsibility to shareholders!
In every way that matters a corporation is already an artificial general intelligence. From the perspective of an owner of capital they solve the same problems - in particular, my problems and not everybody else’s.
This doesn’t let us off the hook
I wouldn’t argue that introducing a competing labour force won’t be massively disruptive. Or that, if attempted, it shouldn’t be managed by the only organizations that ostensibly represent the interests of large sections of humanity - their elected governments. I just can’t bear any more intellectual hand-wringing over the “oh but what if the AI doesn’t have humanity’s best interests at heart?” line of reasoning behind some interpretations of the “alignment problem”.
None of us have humanity’s best interests at heart. And we’re destroying ourselves over it. That’s the problem we need to solve - and time is running out.
I find it easy to agree with the many smart people and game-theoretic arguments that say it is essential for governments to regulate and tax AI as a means to ensure that it does not act against our interests.
I just feel that regulating, taxing and aligning corporations to humanity’s interests would be a better place to start.