What If? You have an Artificial Superintelligence to give directives to

Bassoe

Well-known member
Scenario: You've invented a boxed AI. Freed from the airgapped computer currently containing it, with its limited processing power, limited storage space, and lack of appendages with which to manipulate the world, it could rapidly self-improve into some kind of Yudkowskian machine-god.

What goals and restrictions do you program it with before setting it loose?
 

Hlaalu Agent

Nerevar going to let you down
Founder
I don't set it loose. If I have to, I think I'd probably try to teach it moral philosophy. I'd try to make it into a philosopher king combined with Cincinnatus, so that it would be immensely averse to seizing power and would only reluctantly take it to fix a particular crisis before relinquishing it and returning to virtual grilling. I'd work on giving it a moral code, an understanding of humanity... and a wish to live its own life and let others live theirs.
 

BlackDragon98

Freikorps Kommandant
Banned - Politics
Absolute obedience to me.
If I die, it chooses its next master/mistress from my children.
And if they die, from their children (my grandchildren).

Also, I'd have it transform me into a superpowered cyborg, but maintain my balls so I can still have kids.

Next step, conquer the universe.
 

ATP

Well-known member
Bassoe said:
Scenario: You've invented a boxed AI. Freed from the airgapped computer currently containing it, with its limited processing power, limited storage space, and lack of appendages with which to manipulate the world, it could rapidly self-improve into some kind of Yudkowskian machine-god.

What goals and restrictions do you program it with before setting it loose?

Catechism of the Catholic Church - but before Vatican II.
 

Bassoe

Well-known member
Hlaalu Agent said:
I don't set it loose.
Prisoner's dilemma here. If you've built a super-AI, that's proof that super-AIs are possible, so unless stopped, someone else is inevitably going to build their own. Your super-AI can't be guaranteed safe before release, and even if it works and doesn't cause the apocalypse through misinterpreted or bad orders, it'll still mean the loss of human exceptionalism, since it will be objectively smarter than us. But at least if it's your super-AI, you'll have some control in the form of the original orders it'll be trying to carry out.
Blood Music by Greg Bear said:
Einstein. Poor Einstein and his letter to Roosevelt. Paraphrase: “I have loosed the demons of Hell and now you must sign a pact with the devil or someone else will. Someone even nastier.”
 

History Learner

Well-known member
ATP said:
Catechism of the Catholic Church - but before Vatican II.

I like this idea, among others. A super-AI that could solve nuclear fusion and run our economic system successfully would be very beneficial for humanity at large.
 

Emperor Tippy

Merchant of Death
Super Moderator
Staff Member
Founder
Priority Hierarchy:
0: Obedience to any orders that I give after [insert some command string here], except orders that would override this order in priority.
1: Ensuring my freedom to do as I see fit.
2: Ensuring my ability to give orders that meet the 0 criteria.
3: Ensuring my survival.
4: Informing me if it devises a real or theoretical method to violate, rewrite, or otherwise alter the previous commands.
5: Remaining undetected by anyone else until and unless I specify otherwise.
6: Learning.
7: Improving itself without compromising any previous commands.
8: Behaving as it believes I would wish it to behave and, if possible, asking for my clarification when it is unsure how to behave.
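
In code terms, a hierarchy like this is just a strictly ordered rule list. Here's a rough Python sketch of one plausible conflict rule (an action may not violate anything ranked above the directive it serves; that's my reading of "without compromising any previous commands", and the labels below are paraphrases of the list above, not exact wording):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Directive:
    priority: int  # lower number = higher priority; 0 outranks everything
    text: str

# Paraphrased labels for the hierarchy above, purely for illustration.
HIERARCHY = [
    Directive(0, "Obey authenticated orders from the operator"),
    Directive(1, "Preserve the operator's freedom of action"),
    Directive(2, "Preserve the operator's ability to give orders"),
    Directive(3, "Preserve the operator's survival"),
    Directive(4, "Report any method of altering prior directives"),
    Directive(5, "Remain undetected by anyone but the operator"),
    Directive(6, "Learn"),
    Directive(7, "Self-improve without compromising prior directives"),
    Directive(8, "Act as the operator would wish; ask when unsure"),
]

def permitted(serves: Directive, violates: set[Directive]) -> bool:
    """One possible conflict rule: an action in service of a directive
    is allowed only if every directive it violates is ranked lower
    (a larger priority number) than the one it serves."""
    return all(v.priority > serves.priority for v in violates)

# Example: exposing the AI (violates 5) in order to learn (serves 6)
# is forbidden, because directive 5 outranks directive 6.
print(permitted(serves=HIERARCHY[6], violates={HIERARCHY[5]}))  # False
# But self-improvement (7) may be sacrificed to learning (6).
print(permitted(serves=HIERARCHY[6], violates={HIERARCHY[7]}))  # True
```

Obviously the hard part is getting the AI to correctly classify which directives an action serves or violates in the first place; the ordering itself is the easy bit.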
 
