I find academic arguments about strong AI exhausting, regardless of which direction they’re argued from.
The fearmongers (to generalize their arguments) are afraid that strong AI will go postal, exterminate or enslave humanity, and single-mindedly turn the universe into forks or something. Human-like intelligence isn’t a theoretical concept, though; it exists, and almost 100% of known human-like intelligences aren’t at all like what these experts fear. Do humans have problems and deviances? Sure, but we’re also capable of policing that behaviour. Could human-like machine intelligence be very different from bona fide human intelligence? Sure, but we’re doing science here. If you’re proposing that theoretical strong AIs will necessarily be antisocial, you’re contradicting exceptionally strong evidence that (inter-species!) social behaviour is selected for in intelligent, conscious animals. These same experts have yet to provide a satisfactory explanation for why humans aren’t, every one, a lying, Randian rape-monster, so I’m not sure why we should trust them when they speculate the same about eventual strong AI.
Same thing for the transhumanists (and others who argue or fear that strong AI will beget stronger AI, until approaching ‘infinite technology’). We already have an example of an intelligent, conscious machine that is set upon improving itself. What makes you think humans aren’t already improving their intelligence as fast as physically possible? We don’t even know what consciousness is or how it works, or what structures enable consciousness to happen. How do they know that the human brain isn’t already the optimal, most compact structure for generating consciousness? The human brain clearly isn’t perfect: computers have more working memory than we do, and are much better at computation. But the fact that computers are better than brains at computing doesn’t mean they will ever be better than brains at braining. Is it possible? Sure, but you have no evidence for it. We’re supposed to be doing science.
It’s all a bunch of baseless speculation. Everything. The only thing we can say for sure is that Google will misuse AI, but that’s because they misuse everything.
And tying it back to politics, I’m extra chuffed that most of the arguments against strong AI also apply to capitalism. An unthinking machine that enslaves humanity and single-mindedly converts all resources into a useless product? Hmmm.