I was thinking a bit more about computer intelligence, and I found a phrase that seems more fitting.
When we speak of computers being intelligent, I think most would agree that the strongest form of strong AI cannot exist. We don't really mean "intelligent" as a comparison between how humans think and how computers "think".
So what measure do we really want? I think what we're looking for is closer to "intelligible".
Now, computers can be made trivially intelligible. Have a computer display the works of Shakespeare; that output is intelligible.
So I think what we mean is not just intelligible, but intelligible and novel. We want the computer to produce intelligible and new things.
However, we can already do this. Have a computer pair words together at random, and over a long enough time it will generate new sentences that happen to be intelligible. So we don't just want intelligible and novel; we also want consistency.
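A minimal sketch of the random-pairing idea, with a made-up toy vocabulary (the word list and function name are mine, just for illustration). The point it makes: the output is always new, occasionally intelligible, but never consistent.

```python
import random

# Hypothetical toy vocabulary; any word list would do.
VOCAB = ["the", "cat", "computer", "dreams", "quietly", "chess", "eats", "novel"]

def random_sentence(length=5, seed=None):
    """Pair words together at random. Occasionally the result happens
    to be intelligible, but there is no consistency: the next draw is
    just as likely to be gibberish."""
    rng = random.Random(seed)
    return " ".join(rng.choice(VOCAB) for _ in range(length))

for i in range(3):
    print(random_sentence(seed=i))
```

Given enough draws, some sentences will read as grammatical English, which is exactly why novelty plus intelligibility alone is too weak a bar.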
When people discuss AI, I think this is what they mean: a program that produces new, consistent, and intelligible output.
Now, will we ever have that? You could say, "computers can invent new chess moves", for instance. But I don't think the way computers play chess is novel in any meaningful sense; it's something a human could do. Imagine a chess game that lasts many years: between each move, give a human being a year to enumerate all possible moves, compare each against a massive record of professional games, and pick the move with the highest probability of winning. That human has run the algorithm in the exact same way a computer does; the only real difference is that the computer does it very quickly. So the novelty isn't very novel, it's more just unexpected output. But even unexpected output isn't new. Run the same dumb statistical analysis on food, and the fact that your algorithm produced a crappy sauce in the same way it played a chess game isn't particularly novel either; a human could have run that same algorithm and gotten the same unexpected output.
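The "lookup" chess strategy described above can be sketched in a few lines. The game record here is invented placeholder data and the function name is mine; a real engine would use a vastly larger database plus actual move generation, but the mechanical character of the procedure is the same.

```python
# Hypothetical record of professional games:
# position -> {candidate_move: (wins, games_played)}
GAME_RECORD = {
    "start": {"e4": (520, 1000), "d4": (510, 1000), "a3": (40, 100)},
}

def best_move(position, record):
    """Pick the recorded move with the highest historical win rate.
    Nothing here is creative: it is a mechanical table lookup that a
    (very patient) human could perform by hand."""
    candidates = record.get(position, {})
    if not candidates:
        return None
    return max(candidates, key=lambda m: candidates[m][0] / candidates[m][1])

print(best_move("start", GAME_RECORD))  # prints "e4", the highest win rate
```

Nothing in the procedure depends on it being run by silicon; the computer's only advantage is speed.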
Neural nets are advancing computer intelligence by making outputs more intelligible. But I think the applications of neural nets that actually matter can't produce new results, or when they do produce new results, they trade away consistency. If maximizing all three at once is even possible, I feel we are still a long way out.