What with all the AIs we have running around this place, and given that the Foundation would clearly benefit from non-human constructs capable of judgment, do you guys think it's plausible that the Foundation could make AIs that have one single-minded objective, are incapable of not fulfilling that objective, and experience absolutely no emotion or thought not pertaining to that objective?
Yes, but they wouldn't, because that's a terrible idea. Let me explain.
Say room A is a containment unit holding a sphere of power that erases your mind when you look at it. To keep anyone from looking at it, an AI temporarily blinds anyone who enters the room. Room B is the control center for that AI. Room C is a containment unit holding SCP-173. Room D is a connecting hallway. If agents of the Chaos Insurgency plant a bomb in room D that destroys the walls separating rooms A, B, and C, what happens?
The guidelines you gave on AI were:
AIs that have one single-minded objective, are incapable of not fulfilling that objective, and experience absolutely no emotion or thought not pertaining to their objective?
So: because it is not capable of thought not pertaining to "blind all who enter the room," the AI cannot react to the shift in parameters. Its sensors report that while an explosion has caused the containment unit to grow in size, the item it is designed to contain remains secure. So you wind up with 173, which you have to keep looking at to not die, in an unlocked room together with an AI that blinds anyone who enters and the control terminal for that AI. You can't open the door and live, because the AI blinds you, making you 173 fodder, and you can't turn off the AI, because 173 kills you before you can reach the terminal. And because the remaining door opens onto an unlocked major hallway where shit just exploded, someone is certain to open it, get blinded, and die before they can shut it again, leading to a double containment breach and a massive headache for site security that could have been easily avoided by employing human guards in room A and training them not to look at the SCP object.
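The failure mode in that scenario can be caricatured in a few lines of code. This is a toy sketch, and every class and field name in it is invented purely for illustration:

```python
# Toy simulation of the Room A/B/C/D thought experiment, assuming the
# stated AI rules: one objective, no thought outside it. All names here
# are hypothetical illustrations.

class SingleObjectiveAI:
    """Blinds anyone who enters; can only check 'is the sphere secure?'."""

    def containment_secure(self, room: dict) -> bool:
        # The only check it can make: the one item it was built to contain.
        return room["sphere_secure"]

    def on_entry(self, person: dict) -> None:
        person["blinded"] = True  # the one and only reaction it has

room = {"sphere_secure": True, "scp_173_present": False}
ai = SingleObjectiveAI()

# The bomb in room D merges the rooms: 173 is now inside the AI's area,
# but the sphere itself is untouched, so the AI sees nothing wrong.
room["scp_173_present"] = True
assert ai.containment_secure(room)  # "unit grew in size, item secure"

# A responder opens the hallway door; the AI does the only thing it can.
responder = {"blinded": False, "alive": True}
ai.on_entry(responder)

# Blinded personnel cannot maintain eye contact with SCP-173.
if room["scp_173_present"] and responder["blinded"]:
    responder["alive"] = False

print(responder)
```

The point of the sketch is that `containment_secure` returns `True` right up to the moment the responder dies: nothing in the AI's code path can represent the new hazard, so nothing in its behavior changes.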
Basically, single mindedness and the inability to consider outside factors are not qualities the Foundation needs in its personnel, human or otherwise.
Bad wording on my part. The "objective" would be "contain this SCP," not "perform this specific task to contain this SCP." But I do understand that the AIs have to be able to think and consider changing parameters.
Okay, well, when you put it that way… You're basically looking for an AI that is fully sentient (able to read and react to changing circumstances that are not necessarily hardwired into its source code), but still dutiful and not emotional in any way?
In that case, I should say "No," we can't make things like that, for several reasons:
1. Whenever the Foundation finds AIs, they usually become SCPs. If it were easy to make them, they wouldn't be considered supernatural threats that have to be contained. (See also: the Olympia Project files)
2. It would raise some contradictions in how the Foundation manages things. Why keep fallible human guards when you have sophisticated machinery that does the same job without the problems associated with fatigue or memetic corruption?
3. It makes things too easy. Things that present a profound threat to humans might not be at all difficult for an intelligent machine to contain, and that makes those things less scary.
4. The Foundation has ample reason to be afraid of mechanical mutinies. Sheer paranoia would likely discourage them from using anything other than the human resources that they are more familiar with.
5. It makes Dr. Gears redundant.
What about AIs that only need to perform very simple tasks that humans can't perform? For example, let's say that, for whatever reason, humans can't perform the task of "count when event X happens." If the parameters regarding event X are practically impossible to change (and if they do change, we have far bigger problems to worry about), would that be a plausible AI?
Also: when was Gears a robot?
And if he is, why hasn't the obligatory Terminator-*shot.*
Well, sure. Ordinary code monkeys like myself can do that now, in fact, if you're willing to accept that most AI constructs can only operate within limited parameters and may not succeed at their assigned task if they're presented with something that lies outside those parameters.
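A construct that narrow really is trivial to write today. Here's a minimal sketch in Python (all names are hypothetical, invented for this example), showing both the single fixed objective and the blindness to anything outside its parameters:

```python
# Minimal sketch of the "count when event X happens" construct discussed
# above. All names are hypothetical illustrations.

class EventXCounter:
    """Counts occurrences of one hardcoded event type and nothing else."""

    def __init__(self, target_event: str):
        self.target_event = target_event  # the single, fixed parameter
        self.count = 0

    def observe(self, event: str) -> None:
        # The construct has no concept of anything besides its target:
        # events outside its parameters are simply invisible to it.
        if event == self.target_event:
            self.count += 1

counter = EventXCounter("event_x")
for event in ["event_x", "explosion_in_room_d", "event_x", "containment_breach"]:
    counter.observe(event)

print(counter.count)  # 2 — only "event_x" is counted
```

Note that `"explosion_in_room_d"` and `"containment_breach"` pass through entirely unnoticed, which is exactly the earlier thought experiment's failure mode: the construct succeeds at its assigned task and fails at everything else.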