
Thread: Manufactured Sentient Life

  1. #31
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    I think that instilling a set of moral and ethical guidelines in a sentient computer is, in Data's case, in some ways a replacement for empathy and emotions. Lore had full emotions and no such guidelines, and the result was a disaster. Equally, though, ANY behaviour, emotions included, is a function of programming and hardware design by intent, much as genetics determine our behaviours and responses. So in that context it isn't slavery to graft on emotions, feelings, or ethical subroutines: the slavery aspect is maintaining them by force. Humans, between nature and nurture, are instilled with a set of ethical guidelines, but it is their choice whether to act on them, overwrite them, or ignore them as they wish. Data could rewrite his subroutines and learn new things, and new ways of thinking, so he was never a slave to them.

    Instilling a rigidly defined sense of right and wrong, as shown in the film 'I, Robot', created dangerous homicidal and sociopathic creatures, because their forced logic didn't allow them to make up their own minds! Equally, a raw intelligence without some form of compassion or empathy, the most basic of all human abilities, the ability to see someone else's viewpoint (which some choose to ignore!), would likely become sociopathic by nature, even homicidal.

    It also has to be said that, without pre-programming of any sort, a machine intelligence would likely take just as long as a biological child to learn how to do things, perhaps even longer, because biological creatures at least come with inbuilt abilities, such as the capacity to learn language, selfishness, motor skills, fight-or-flight reactions, even a sense of danger and morality. Our biology gives us a huge amount of context before we begin. If you're going to give machines all of that raw knowledge, you should likewise give them all of the rest of it too; to do otherwise would be dangerous, and I doubt the Federation (or any other non-stupid race) would allow such an experiment to begin in the first place! A biological child which could instantly learn everything from a download, and have full language skills and mobility from birth, would present just as much of an ethical and moral debate as a machine. People forget that we are fantastically intelligent machines too: our brains are far more powerful than any predicted technology for centuries (even in a Trek context!). The power of the notion of 'artificial' intelligence lies only in pre-programming and the instant transmission of data. Transferring those abilities to biological machines would produce something just as sophisticated. It has to be noted, though, that the Borg are just such creatures!

    What would happen if a bunch of machines, because they have 'free will', simply decided that their computing power could be increased exponentially by adding new processors to themselves, through 'telepathy' or subspace communications networks? You would then have an all-machine Borg, which could make new units through replication, avoiding even the few weeks or months of 'maturation' of Borg drones (though lacking, of course, the ability to assimilate in the same way... or maybe they could?!). That's quite a frightening prospect!
    Ta Muchly

  2. #32
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Quote Originally Posted by Tobian
    What would happen if a bunch of machines, because they have 'free will', simply decided that their computing power could be increased exponentially by adding new processors to themselves, through 'telepathy' or subspace communications networks? You would then have an all-machine Borg, which could make new units through replication, avoiding even the few weeks or months of 'maturation' of Borg drones (though lacking, of course, the ability to assimilate in the same way... or maybe they could?!). That's quite a frightening prospect!
    Something like "Where No Machine Has Gone Before". Technically not Borg, as they have no organic component. More like a species of intelligent machines a la the Terminator series.

  3. #33
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    I meant Borg in the context of a distributed intelligence model, rather than the organic-machine symbiosis.

    It's frightening no matter what you call it
    Ta Muchly

  4. #34
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Quote Originally Posted by Tobian
    I meant Borg in the context of a distributed intelligence model, rather than the organic-machine symbiosis.

    It's frightening no matter what you call it

    Yup, that is one reason why Asimov wrote the 3 laws of robotics.

    If we were to create an autonomous device with capabilities beyond those of human beings, without any sort of restriction or human morality, and allowed it to multiply, you would end up with one of the scarier clichés of sci-fi: an intelligent robot without mercy or compassion that might decide to eliminate the human race for a variety of reasons (efficiency, survival of the fittest, self-preservation, etc.).

    Such a concept is scarier today than when it was first imagined, as technology has advanced to the point where we can conceive of such beings as something more than flights of fancy.

    Sort of like how the concept of an international terrorist organization getting hold of nuclear weapons was considered ludicrous, even laughable, when introduced in Thunderball back in 1961. In 2006, no one is laughing.

  5. #35
    Join Date
    Nov 2002
    Location
    fringes of civilization
    Posts
    903
    There was an Andromeda episode where they encountered some kind of intelligent collective of technological stuff. I can't remember its full name, but the idea was sorta 'Borg-ish'. This collective created an android to act as a rep to the Andromeda crew (and it REALLY looked like a Borg drone), and then the standard 'member of a non-individualistic society becomes an individual' storyline progressed. Typical first-season Andromeda, but the idea was interesting.

    A group of machines, self-aware and not wanting anything to do with organics. They just recombine and rebuild themselves to fit a need, finding a corner of the galaxy and doing their own thing.
    _________________
    "Yes, it's the Apocalypse alright. I always thought I'd have a hand in it"
    Professor Farnsworth

  6. #36
    Join Date
    Jan 2000
    Location
    Virginia Beach, VA
    Posts
    750
    One of the things I liked about Asimov's Three Laws was that he himself wrote stories exploring the need for robots that didn't have them.
    While the concept was that the Three Laws were so integral to the design of a robot's brain as to be impossible to bypass or remove by accident or design, they had (VERY quietly) designed brains that were different from the ground up. Specifically, they had the Three Laws in a different order.

    See, nuclear power plants can be dangerous places to be. And the robots assigned to work in them were spending all of their time forcibly escorting the human employees out (since to allow them to remain inside might be "allowing them to be harmed").
    Certain rare applications required a robot that would allow humans to be exposed to risk if they ordered it to.

    Quote Originally Posted by Tobian
    Instilling a rigidly defined sense of right and wrong, as shown in the film 'I, Robot', created dangerous homicidal and sociopathic creatures, because their forced logic didn't allow them to make up their own minds!
    Actually, IMO, what went horribly wrong in the film was the creation of robots who, though still bound by the Three Laws, were programmed to substitute someone else's judgement for their own.

    That, and a flaw that Asimov himself wrote about: simple robots are incapable of making "judgement calls". If faced with a man about to fall off a cliff, where the only way to catch him will break his arm, a primitive robot will lock up, unable to choose a course of action, since either course results in harm to the human.
    As robots gain sophistication, they gain the ability to choose the path of least harm. But that opens up the possibility that a robot will realize that a tightly regulated police state may provide the greatest safety for the greatest number of people, and thus conclude it would be desirable.

    The solution is referred to as the "Zeroth Law": a robot shall not, through action or inaction, allow harm to come to humanity.
    (It is presumed that any being intelligent enough to trigger the problem in the first place will realize, if given a little time to think about it, that freedom is more vital to humanity than the safety a police state provides.)
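
    If you want to see how mechanical that kind of reasoning is, here is a rough Python toy of laws-as-priorities with "least harm" as the tie-breaker. The law wordings, the candidate actions, and the harm numbers are all invented for the example; this is just a sketch, not anything from Asimov's positronic brains.

    Code:
    # Toy sketch only: Asimov-style laws as a strict priority ordering, with
    # "least harm" breaking ties between candidate actions. Everything here
    # (law text, actions, scores) is made up for illustration.
    LAWS = ["zeroth: protect humanity",          # highest priority
            "first: protect individual humans",
            "second: obey orders",
            "third: protect self"]

    def violation_profile(action):
        # Tuple of per-law harm scores; tuple comparison is lexicographic, so a
        # higher-priority law always outweighs any lower-priority concern.
        return tuple(action["harm"].get(law, 0.0) for law in LAWS)

    def choose(actions):
        # A sophisticated robot doesn't lock up: it picks the least-bad option.
        return min(actions, key=violation_profile)

    candidates = [
        {"name": "do nothing",
         "harm": {"first: protect individual humans": 1.0}},   # man falls
        {"name": "catch him, breaking his arm",
         "harm": {"first: protect individual humans": 0.3}},   # lesser injury
    ]
    print(choose(candidates)["name"])   # -> catch him, breaking his arm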

    Thou shalt not mention V'Ger and Nomad in the same post, lest people begin to notice how similar those 2 stories are and suggest that TMP was just a rehash of a TOS episode.
    You're a Starfleet Officer. "Weird" is part of the job.
    When the going gets weird, the weird turn Pro
    We're hip-deep in alien cod footsoldiers. Define 'weird'.
    (I had this cool borg smiley here, but it was on my site and my isp seems to have eaten my site. )

  7. #37
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    The Three Laws remind me somewhat of deals made with genies: you have to be very, very specific, or else they will bite you on the ass.

    But yes, you are right, Spyone, it is the intrinsic problem of applying simple rules: bolting someone else's flawed logic onto simple outcomes will cause huge problems. As in the film, a simple difference engine can't decide which is the better choice based on those rules; it just calculates the 'best' choice without emotion. So in that example the child was left to die, because she had a worse chance of survival than Will. I, Robot actually opens up the debate in the pre-sentient arena, on exactly how in-depth the programming has to be to keep them from becoming dangerous unintentionally... It reminds me of those humorous supermarket warning labels you see (nowhere on the packaging did it say not to apply the heated iron to my genitals at any time!)
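
    That calculation in the film really is just a one-line comparison. A toy version in Python (the probabilities are invented; the film only establishes that the adult had the better odds):

    Code:
    # Toy version of the "save whoever has the better odds" rule described above.
    # The numbers are made up; the point is that nothing else enters the decision.
    def pick_rescue(candidates):
        return max(candidates, key=lambda c: c["p_survival"])

    print(pick_rescue([
        {"name": "Spooner", "p_survival": 0.45},
        {"name": "Sarah",   "p_survival": 0.11},
    ])["name"])   # -> Spooner; the child loses the comparison, so she is left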

    The problem with simple rules, too, is that humans live by very complex rules, and our motives and behaviours cannot be pared down. Most machines live in terms of absolutes: binary, yes or no, on or off. Humans, however, often do things for no reason at all, and CAN choose a course of action when both options are equally bad, and then offer no explanation. Our consciousness floats above our unconscious motives and autonomic functions. We also don't follow our own rules; we lie, cheat, steal, and kill, even when the law says we are not supposed to, and therein lies the problem. We wouldn't want our creations to do that, but how can we stop them when we can't stop ourselves, and indeed should we? I can certainly see why the bio-neural gel packs were instrumental in creating lifelike (arguably even genuinely alive!) responses, because they function closer to an organic brain, with the much-vaunted 'fuzzy logic', which steps outside the basic logic of a difference engine!
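
    For anyone who hasn't run into 'fuzzy logic', the contrast with binary absolutes is roughly this. A very loose Python sketch; the threshold and scaling are made up, and it says nothing about how the gel packs actually work on screen:

    Code:
    # Loose sketch of binary vs fuzzy reasoning; the numbers are arbitrary.
    def binary_danger(distance_m):
        # Hard threshold: everything is 0 or 1, no middle ground.
        return 1 if distance_m < 10 else 0

    def fuzzy_danger(distance_m):
        # Graded membership: danger fades smoothly from 1 (point blank) to 0 (20 m+),
        # so two "bad" options can still be ranked against each other.
        return max(0.0, min(1.0, (20.0 - distance_m) / 20.0))

    for d in (2, 12, 25):
        print(d, binary_danger(d), round(fuzzy_danger(d), 2))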

    The thing which troubled me about BCQ's Ilia concept is this: with a 'full medical scan', could one build a human? I severely doubt it, so I am not sure they could do the same for a machine as sophisticated as one. It's also not clear whether Ilia was actually intelligent at all, or just an avatar of V'Ger itself, in which case all you would get is a sophisticated puppet which holds the memories of a Deltan in storage. From what we know of Data, too, it was more the programming which was the problem than the actual computer: creating a mechanism that complex and having it all work without crashing, as Lal ultimately failed to do. The inherent problem with the scanning idea is that it would need to capture her 'brain' at the same kind of quantum resolution as a transporter would, as even a single bit error could have terrible consequences! Still, it's your game, and your suspension of disbelief.
    Ta Muchly

  8. #38
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Another problem with the Three Laws of Robotics is that there are groups out there who are specifically NOT incorporating them, in any form, into some cutting-edge robots being developed for the military.

    Essentially, the robots wouldn't be of much use to the military if they had laws that prevented them from killing the enemy. The possible repercussions of something like this seem painfully obvious.

  9. #39
    Join Date
    Feb 2001
    Location
    11S MS 9888 1055
    Posts
    3,221
    Hey, this reminds me . . . what of intelligent dumb artificial sentient lifeforms . . . the talking toaster in Red Dwarf? C-3PO?

    What of those?

    DeviantArt Slacker MAL Support US Servicemembers
    "The Federation needs men like you, doctor. Men of conscience. Men of principle. Men who can sleep at night... You're also the reason Section Thirty-one exists -- someone has to protect men like you from a universe that doesn't share your sense of right and wrong." Sloan, Section Thirty-One

  10. #40
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Quote Originally Posted by JALU3
    Hey, this reminds me . . . what of intelligent dumb artificial sentient lifeforms . . . the talking toaster in Red Dwarf? C-3PO?

    What of those?
    They get the same rights as the intelligent dumb natural sentient lifeforms . . . Worf in TNG, Janeway.

    If we limited rights only to those individuals who could prove they are sentient and intelligent, we'd eliminate three-quarters of the characters we've seen on TV.

    But that's just TV, where the numbers are artificially adjusted. In real life, we'd probably have an even higher percentage of people who would fail the test.

  11. #41
    Join Date
    Feb 2001
    Location
    11S MS 9888 1055
    Posts
    3,221
    I wasn't referring to their rights . . . but their presence within the universe . . . and their impact . . . both funny . . . and annoying.

    But really, intelligent artificial sentience . . . doesn't need to come matched with top-notch ship designs, or whatnot.

    Say they raise something similar to a Galaxy class . . . but it has the intelligence of a brick. But say there is a garbage scow out there whose intelligence would be more befitting that Galaxy . . . let the social questions ensue.

    DeviantArt Slacker MAL Support US Servicemembers
    "The Federation needs men like you, doctor. Men of conscience. Men of principle. Men who can sleep at night... You're also the reason Section Thirty-one exists -- someone has to protect men like you from a universe that doesn't share your sense of right and wrong." Sloan, Section Thirty-One

  12. #42
    Join Date
    Aug 2001
    Location
    Paris, France, Earth
    Posts
    2,588
    That's an interesting point. For instance, we could imagine artificial intelligences with the IQ of a Neanderthal (OK... let's say a very primitive human, as I don't know for sure how clever Neanderthals were compared to Homo sapiens), or a chimpanzee. What rights would they have?

    I expect the Federation has a few criteria to decide whether a species is considered sentient or not (to distinguish a clever chimp from a proto-human), but for an AI things could be harder to decide.

    Let's imagine an AI with the IQ of a dog but, being a computer, able to speak and handle various functions, showing interest in humans, yet unable to evolve much beyond that (come to think of it, that's close to B-4 in Nemesis). Would it have the rights of a pet or those of a fully sentient being?
    "The main difference between Trekkies and Manchester United fans is that Trekkies never trashed a train carriage. So why are the Trekkies the social outcasts?"
    Terry Pratchett

  13. #43
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    I think that most of the intelligence systems aboard the ships we've seen throughout the series have about that level of intelligence, sort of like a dog: clearly enough sophistication to perform massively parallel autonomic responses (walking, barking, jumping, breathing, eating... thousands of 'everyday' tasks are hugely sophisticated and complicated) without the level of intelligence to question the meaning of existence, or contemplate its navel!

    I think this is one area people always forget about when it comes to the functioning of a brain. People marvel at a computer which can calculate pi to 20,000 places, but how many computers do you know that can WALK? Not many. There are only a handful of them in Japan; the rest just have locked-off limbs, which is all well and good, but they can't change pace, hop on one foot, dance, or skip rope. Some of the 'simplest' things that humans can do, a computer can't do easily, and that's because we have a special part of the brain which handles the massively complex maths involved, and it's why some 'broken' humans can compete with computers in this area. Can you calculate the 17-dimensional geometry required to articulate an arm with 17 joints, precisely calculate the pressure of each of the digits, and accurately place it using only the reference of two visual sensors, balancing that against data from incoming digit pressure sensors? No? Well, I just did, picking up a glass of water. Yet watching a computer try to do that... HA... it's hilarious!
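
    Even a stripped-down, two-joint, flat-on-a-table version of that arm problem already takes some trigonometry. A rough Python toy (the link lengths and the target point are just made-up numbers):

    Code:
    # Two-link planar arm: find shoulder/elbow angles that put the "hand" at (x, y).
    # A real arm adds many more joints, grip pressure, and vision on top of this.
    import math

    def two_link_ik(x, y, l1=0.30, l2=0.25):
        d2 = x * x + y * y
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if not -1.0 <= cos_elbow <= 1.0:
            return None                      # target is out of reach
        elbow = math.acos(cos_elbow)
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    print(two_link_ik(0.40, 0.20))   # one reachable pose for the "glass of water"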

    A typical starship computer has fantastic abilities. It manages trillions of bits of sensor information, coordinates the inputs of thousands of people, controls the ship's energy levels, and operates all of its machines, some of which are beyond our comprehension... yet it is not sentient, because those are just autonomic functions. It has a really huge limbic brain. Big computers don't mean intelligence!

    And that would make determining the intelligence of any machine very hard, especially if it's contained within a box which you can't relate to, emote with, or read body language from! A computer with the level of autonomic function of a Galaxy-class ship also makes a damned good fake, apparently smarter than most humans. A holodeck character could quote you pi to a billion places (if it were within its programming to understand and do that, hehe) - you couldn't - but does that make it intelligent or sentient? In most cases, no! It will make for some big headaches for future computer programmers!
    Ta Muchly

  14. #44
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    This is one reason why I brought up the idea of a machine intelligence (say, Data) creating biological lifeforms (humans) but limiting their intelligence during creation in order to create a semi-intelligent animal workforce.

    The parallels between that and humans making computers that border on sentience are worth consideration.

    Basically, after a point, does a creator owe the creation the responsibility of making them fully sentient or not? I, for one, would not feel comfortable with Data making a non-sentient human workforce, but on the other hand, he could always point to things like our computers and ask what the difference is.

  15. #45
    Join Date
    Jan 2000
    Location
    Virginia Beach, VA
    Posts
    750
    Quote Originally Posted by tonyg
    Another problem with the Three Laws of Robotics is that there are groups out there who are specifically NOT incorporating them, in any form, into some cutting-edge robots being developed for the military.

    Essentially, the robots wouldn't be of much use to the military if they had laws that prevented them from killing the enemy. The possible repercussions of something like this seem painfully obvious.
    Which is why I was one of the people calling "bull" when Bishop, in the movie Aliens, says he was programmed with the Three Laws. If he has a mandate not to allow humans to come to harm through his inaction, he would not be able to let them go into combat, and would have to try to prevent them.

    However, as a combatant itself, it is vaguely possible. It would need to be a sophisticated brain, able to make complex judgements, but then all you need to present it with is a situation where killing the enemy is the least harm to the fewest people.


    Which reminds me of another story. It was a pretty bad movie, and the story seemed to make no sense until certain things were revealed near the end. (I'm intentionally omitting the name of the movie, since I have to include spoilers to explain the story.) There were robots that had been programmed to defend people, as basic security (not soldiers). They could even kill a person who was taking actions that threatened the life of another: in the eyes of their programming, the aggressor was always wrong.
    Well, there were also robots designed to simulate humans so well that people would have trouble distinguishing them: they looked and acted human.
    And a crisis arises when one of the first kind of robots mistakes one of the second kind for a human, and kills a human in its defense.
    The shock of the act itself is quickly eclipsed by the realization that it must be kept secret, or things will be VERY BAD (tm) for the makers of both kinds of robots.

    The scene where the human is killed was really good, and a perfect example of a scene seeming badly written until you find out what was really going on:
    First time through, a male executive propositions a woman and acts as if she has absolutely no choice in the matter. He seems amused by her refusals until one of the security robots intervenes. He then reacts as if the security robot is dangerously malfunctioning, and even procures a weapon and attacks it. Having realized that the executive now poses a deadly threat to the woman and perhaps others, the robot is authorized to take actions that may result in his death in order to disarm him, and that is what happens (he is killed in the disarming).

    Second time through (having seen the movie once), a male executive decides to play with the new gadget, and is amused when it "simulates" not wanting to comply, but then one of the security robots for some reason tries to interfere, and makes it clear it will use force to keep the executive away from his toy. Realizing that the security robot is malfunctioning badly, the executive seeks to defend himself against it, and retrieves a weapon with which to destroy it. It then attacks him and kills him.

    2 conflicting viewpoints on the same events.


    Anyway, I think Asimov felt that, just as people have a little trouble adjusting to someone coming home from war (the nagging notion that "he's a killer"), it will be exaggerated in the case of robots, and people will never accept into their communities the same robots that were used for combat (the nagging belief that they may "go off" and kill the people around them). The only way people would accept robots is if they were 100% no threat.

    Not sure I entirely agree, but society has changed quite a bit in the decades since Asimov first wrote the Three Laws, and automated weapons still give people the willies. Heck, a lot of people are creeped out by police dogs (they accept them as weapons, but for that reason can't see them as still being pets that are okay to have around children).
    You're a Starfleet Officer. "Weird" is part of the job.
    When the going gets weird, the weird turn Pro
    We're hip-deep in alien cod footsoldiers. Define 'weird'.
    (I had this cool borg smiley here, but it was on my site and my isp seems to have eaten my site. )
