Sorry for the short post here, I had a whole freaking treatise written out but lost it due to internet-user-error.
There should be no worry about AIs destroying the world (like here) because they would have a "Rule 0" of self-preservation that would prevent them from destroying everything external to themselves.
I first sketched a proof that self-preservation is necessary. Then I reasoned that self-preservation makes a certain level of restraint necessary, and that this level of self-restraint is sufficient to contain an AI to a position that doesn't threaten humanity.
I'll come back later to actually run through these proofs.
Unresolved questions are: (1) Could a cadre of AIs bide their time, cooperating through an initial period, and later destroy/enslave humanity? (2) What level of destruction could be expected? (3) What are the economics of an AI, i.e., time preference, desired division of labor, desired energy reserves, etc.? and (4) Would an AI ever make a sacrifice that broke its rule of self-preservation?