03-30-2023, 08:07 PM
We don't really know, and that's Yudkowsky's overall point. We knew AI was dangerous, we knew it was coming, but everybody kept saying "it's 30 years away, at least," so we believed them. Then we woke up one day and there was ChatGPT. While ChatGPT is (probably) not general-purpose and super-intelligent, it's certainly not very far from that. In some sense it is super-intelligent (it knows more than any human could ever know), and it certainly shows "sparks of general intelligence." So we're basically juggling nukes at this point, and nobody seems to notice or care. I doubt that "regulation" will help anything, either.
One of the reasons I've been tracking this issue so closely since 2015 is that I realized it's almost certain that AI will get out of control if we develop it. AI beats human players at Chess and Go, two of the most difficult board games humans have ever invented. But AI also beats human players at Poker, which is a lot like real-world decision-making: you have incomplete information, and while you can calculate the odds, every move is probabilistic (a toy example of that kind of calculation is below). So the "those are just silly board games" objection doesn't work. AI can beat us at real decision-making, which worries me far more than its ability to write flowery sentences. Wiring such a system up to the Internet is flatly reckless. We are like lab rats affixing our own electrodes to make the lab scientist's job easier... pure insanity.
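To make the "probabilistic, incomplete-information" point concrete, here's a toy sketch of the kind of expected-value arithmetic a poker AI runs on every decision. The numbers and the helper function are made up for illustration; real engines estimate the win probability from far richer models, but the underlying logic is the same: act on an estimate, not a certainty.

```python
# Illustrative sketch only -- not taken from any real poker engine. It shows the
# kind of expected-value arithmetic an AI performs under uncertainty: the win
# probability is an estimate (incomplete information), never a known fact.

def expected_value_of_call(p_win: float, pot: float, cost_to_call: float) -> float:
    """EV of calling: win the pot with probability p_win, lose the call otherwise."""
    return p_win * pot - (1 - p_win) * cost_to_call

# Hypothetical spot: 100 chips in the pot, 20 chips to call,
# and we estimate a 25% chance of winning the hand.
ev = expected_value_of_call(p_win=0.25, pot=100, cost_to_call=20)
print(f"EV of calling: {ev:+.1f} chips")  # +10.0 chips, so calling is profitable
```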
It is certainly possible that failsafes could be implemented and be effective. The problem is that we are not in a world where the conditions for that have been arranged. In my view, hooking GPT-4 up to the Internet with full two-way comms should not have been done without public discussion of AI safety. It may turn out to have been harmless, but in that case we were merely lucky. It's like saying, "I designed this trigger for the nuclear bomb and, as long as nobody tips it in the wrong direction, it won't accidentally detonate." We might get lucky: a momentary lapse of judgment by the forklift operator tips the bomb out of spec, and the flaky trigger happens not to go off. But why risk it with something so dangerous? Why design a trigger that can, under certain conditions, accidentally go off, and just hope we get lucky? Given the immense danger of a nuclear detonation, shouldn't every reasonably foreseeable scenario be anticipated, including things like transport accidents? And shouldn't the trigger be designed so that, even if something unexpected happens, the nuke will not accidentally detonate?
We're not taking the danger of what AI safety researchers call a "hard takeoff" to artificial super-intelligence seriously enough. Is this scenario highly probable or improbable? Nobody knows. The probability is something more than 0 and less than 1. Let's say there is a 10% chance of such a scenario in the next 3 years. If there were a 10% chance that a nuke would be detonated in a major city in the next 3 years, wouldn't the authorities be moving heaven and earth to address that threat as robustly as possible? So why are we all sleepwalking into a potential extinction-level event (ELE) of unknown probability?
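For what it's worth, here's that hypothetical 10%-over-3-years figure broken down per year, under the simplifying assumption that the risk is spread evenly across the period. The 10% is, as stated above, just a number picked for the sake of argument.

```python
# Back-of-the-envelope arithmetic on the hypothetical 10%-in-3-years figure above.
# Assumes the risk is constant across the 3 years (a simplification).

p_3yr = 0.10                           # assumed cumulative probability over 3 years
p_annual = 1 - (1 - p_3yr) ** (1 / 3)  # equivalent constant per-year probability
print(f"Per-year probability: {p_annual:.1%}")  # ~3.4% per year
```

Even at roughly 3.4% per year, a nuclear-scale threat would normally trigger an enormous policy response, which is the point of the comparison.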