
Thread: Artilect questions

  1. #1
    Join Date
    Nov 2002
    Location
    fringes of civilization
    Posts
    903

    Artilect questions

    Hi All!

    Been working on another one of my crazy ideas for TrekRPG, using Androids; it's kinda like the one I tossed out in the FBR, but a little different.

    Anyway, something occurred to me: most Androids have 'Emotional Immaturity', which gives them a -2 on social tests.
    But couldn't an Android also have the Trait 'Likeable', which negates that drawback?

    Think of Data in the first couple of seasons of TNG; he's totally not understanding human behaviour, yet he seems to win people over with his honest nature, desire to learn, and ability to quote the "Encyclopedia Galactica"!
    And that brother of his (not Lore, the other one) has a similar thing going for him... well, almost (like Jar Jar).
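
    (In case it helps to see the numbers, here's a quick toy sketch of how I picture those modifiers stacking; the 2d6 mechanic and the +2 for Likeable are my guesses, not the actual book rules:)

        import random

        # Toy sketch of stacking Trait modifiers on a social test. The trait
        # names and the -2 penalty come from the thread; the 2d6 + skill vs.
        # target number mechanic and the +2 for Likeable are assumptions.
        TRAIT_MODIFIERS = {
            "Emotional Immaturity": -2,   # the flaw: -2 on social tests
            "Likeable": +2,               # assumed bonus that cancels it out
        }

        def social_test(skill, traits, target_number):
            roll = random.randint(1, 6) + random.randint(1, 6)   # 2d6 (assumed)
            modifier = sum(TRAIT_MODIFIERS.get(t, 0) for t in traits)
            return roll + skill + modifier >= target_number

        # An android with both traits nets +0 -- the drawback is bought off.
        print(social_test(3, ["Emotional Immaturity", "Likeable"], 8))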
    _________________
    "Yes, it's the Apocalypse alright. I always thought I'd have a hand in it"
    Professor Farnsworth

  2. #2
    Join Date
    Dec 2004
    Location
    Albuquerque, NM
    Posts
    649
    Yes, it could. The flaw is in there to provide some level of balance to android characters; I see no problem with someone rules-lawyering around the flaw.

    The main trouble with these "lifeforms" is their inherent intellectual and physical superiority... an aspect of Data that TNG glossed over.

    If you allow technology to improve throughout the campaign, newer models might come into being (they have in ours) and older characters might seek an upgrade. What we've seen is a steady dependence on machine intelligence in the crews that have them aboard, as well as an increasing sense of obsolescence in the biological characters. Need something dangerous done? Send in the android -- he's best suited to the mission. Have a sentient computer on the ship? Let it fly; it knows its "body's" limitations better than the helmsman.

    I think the development of technology & its effect on society at large is one of the places the later Trek series dropped the ball. You never really see a creep of new technologies into society at large in TNG or Voyager (the latter for obvious reasons; they're not in the Federation...). DS9 did a little with new technologies, but it was mostly background nifties like the holo teleconferencing.

    Androids like Data wouldn't be that hard to reproduce; you have his schematics and programming... you could at least duplicate the original. You also had examples of other sentient/near-sentient machines in the TOS/TMP era -- why didn't these technologies get explored? Once you have a few of these machines, with full commensurate rights, can you deny them the right to reproduce themselves? How about copies -- should you allow multiple copies of a personality? What about liability/ownership of goods: if there are two or three copies of an android's mind running, which owns the property of the original? If one commits a crime and cannot be found, does another mind take its place in liability, etc.? Do you avoid this by making each copy a person (a la Thomas Riker), or do you limit the number of copies that can be active at one time?

    There's a lot of fun stuff that comes with these things... especially if you've got someone running a JAG officer.

  3. #3
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    Well, I'm not sure how you came to the conclusion they were easy to replicate, since it was fairly obvious they couldn't be during Data's existence. Trials such as the one in 'The Measure of a Man' were all about the right to do exactly that: to treat him like a 'machine', pull him apart and put him back together. Even Data failed to replicate himself, even though he knew his own template, and Lore is the only remaining model and he's a psychopath... (it occurs to me that they could put Data's stored memories (in B4) into Lore and wipe his own, hehe)

    However, having characters who advance is part of roleplaying. You could add functions in the form of the old 'medical remedy': in a Data-like case, have an 'emotion chip' which overrides the Emotional Immaturity aspect, but that medical component (in this case a computer chip) is not always fully integrated and has its own problems (Geordi's VISOR or Picard's heart have the same problem), such as Lore using it to manipulate him, or it being accessed by people, or Borg, against his will.
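
    (Mechanically, I picture the chip as a conditional override; here's a toy sketch, where the chip, the glitch chance and the complication are all my own inventions, not anything from the books:)

        import random

        # Toy 'emotion chip' mechanic: it buys off the Emotional Immaturity
        # penalty, but sometimes glitches and hands the GM a complication.
        # The 10% glitch chance and the complication text are invented.
        def social_modifier(has_chip, glitch_chance=0.1):
            if not has_chip:
                return -2, None                              # flaw applies as normal
            if random.random() < glitch_chance:
                return -2, "chip glitch: GM complication"    # override fails this scene
            return 0, None                                   # penalty bought off

        print(social_modifier(has_chip=True))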

    There is a fundamental aspect to Data's emotional maturity: you have to remember he is actually... YOUNG! He is a child's age during the whole show and a teenager by the end of his run. He is genuinely young, so yes, he may simply learn emotional maturity at a later date.

    It's funny, but I thought BCq was going to say something else there... It is their intellectual superiority which makes them emotionally immature: a human with those abilities is often emotionally immature, with emotional problems, because they can't interact within that society and that society can't easily relate to them. The difference between a Human and a Vulcan can be overcome more readily than that between a computer and an organic... it's a paradigm shift: they are fundamentally different beings. It also creates problems like 'Charlie X': a savvy being with the abilities of an android could be quite nasty, indeed as Lore was. Was Lore wrong? Humanity was limiting to him. Even Data preferred to work on specially designed consoles because he exceeded normal tolerances: he could work much faster.

    It has to be said, then, that Emotional Immaturity should not be the only social flaw they have... they are also simply too different. In a sense it's the problem Data has dealing with different cultures: he can research 35,000 societal references, but being a fast and efficient researcher does not make you 'get' them, or able to react to them as a native would. Just like the main computer, sometimes you can be smart but also incredibly dumb. Knowledge is one thing, but too much means it's harder to keep track of what is actually relevant!

    It's one of the strengths and weaknesses of androids and holograms. They can rapidly become unplayable and annoying. It's something Star Trek never really dealt with, and why they made Data fundamentally uncopyable... with the EMH, they opened a can of worms that they shouldn't have!
    Ta Muchly

  4. #4
    Join Date
    Dec 2004
    Location
    Albuquerque, NM
    Posts
    649
    I simply found the idea that Data would be so difficult to recreate overblown.

    The androids in our campaign eventually do become psychologically mature, but it's due to interaction with people.

    As for intelligence being a hindrance to social development, I disagree. The sequestered geek of popular culture is more likely to have emotional development difficulties due to home life or other personal impediments. There are plenty of smart people who can interact with others just fine.

    In the case of the androids, they are brought online with a wealth of information, but with the same experience as a child. They develop social strategies more quickly when presented with socialization; they have to be allowed to interact with more than a handful of scientists.
    Last edited by black campbellq; 06-06-2005 at 12:53 AM.

  5. #5
    Join Date
    May 2005
    Location
    Potato Fields of Idaho
    Posts
    32
    Quote Originally Posted by black campbellq
    As for intelligence being a hindrance to social development, I disagree. The sequestered geek of popular culture is more likely to have emotional development difficulties due to home life or other personal impediments. There are plenty of smart people who can interact with others just fine.
    Psychological research backs up BCq's position. The vast majority of highly intelligent people are indistinguishable from those of normal intelligence in terms of their social abilities. The socially inept genius is simply a favored stereotype in our media.

  6. #6
    Join Date
    Nov 2002
    Location
    fringes of civilization
    Posts
    903
    Such discourse!
    Nature vs. Nurture. Societal Development vs. Intellectual Capacity!

    Setting aside the developmental issues, I started wondering:

    Why aren't there more androids in the Trekverse?

    TOS had Mudd's androids, Roger Korby's Old Ones refits, and the Enterprise crew had started to build androids for those disembodied minds.

    Then TMP had the complete scan of Ilia.

    And then Soong's work.

    Actually, it seems a lot of tech discovered during the shows never gets used again, except for the shield mods that Ferengi scientist developed, and some of O'Brien and Sisko's modifications for the Defiant.
    _________________
    "Yes, it's the Apocalypse alright. I always thought I'd have a hand in it"
    Professor Farnsworth

  7. #7
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    Perhaps I didn't mean it quite that way, but the point about intelligence is that there are a number of forms of intelligence: people who are extremely intelligent can be so for many things or only for some things. There is such a thing as social intelligence! IQ tests are divided up into different areas for different forms of intelligence. Data's existence is a problem for him because he finds it difficult to adapt to humanity: something he is not himself. It's a 'fish out of water' problem, and that affects humans, in a social context, and fish alike.

    I am not strictly saying Data has the same problems as a 'highly intelligent person' in the stereotypical sense, but rather that the nature of his existence sets him apart, makes him different in the way that any alien species in Star Trek would be without years of acclimatising to their new environment and the societal norms of the other cultures they interact with. Data, because of the phenomenal abilities he has, could probably adapt faster than many humans to alien situations, but his general lack of social sophistication does not help him, and adaptation is a two-way street: he still has to live with other people's prejudices, and their reaction to him is probably one of the hardest things to deal with!

    It also strikes me: is Data intelligent? He has phenomenal abilities, capacity and memory, but then so does the ship's computer, yet it is not what you would call intelligent, and it can't grasp bad jokes any better than Data can! In the same way, an autistic person can show phenomenal maths abilities, count all the leaves on a tree, or remember Pi to 20,000 places, but this doesn't make them intelligent; it's just something they can do. Without legs it's hard to walk, without eyes you can't see: they simply have mental appendages we don't.
    Ta Muchly

  8. #8
    Join Date
    May 2005
    Location
    Potato Fields of Idaho
    Posts
    32
    This doesn't really have anything to do with Star Trek ... please forgive me.

    I don't want to get technical here about intelligence and social intelligence, etc. You do make an interesting point that reminded me of a great book I read by Antonio Damasio, entitled "Descartes' Error." Damasio makes an interesting and, in my opinion, strong case that human reasoning and intelligence require emotion. This flies in the face of the rationalist tradition of assuming that emotion interferes with decision making.

    Damasio examines a few cases where, due to brain damage, individuals are intellectually intact but emotionally impaired. These individuals' lives fall apart because their social decisions are terribly impaired. Even the famous case of Phineas Gage supports this position. If Damasio is correct, without an emotional component, we have nothing to base the "goodness" or "badness" of our decisions on. Androids or artilects of any type would also need emotions to make good decisions unless those decisions are of the type for which pure logic would suffice (and very few of our daily decisions really result from the logic and rationale that we think they do). Perhaps then, we could argue that Data is more savvy than Damasio would allow for.

    In his book "An Anthropologist on Mars," Oliver Sacks discusses some cases of autism and savant capabilities. He has a section on a subtype of autism called Asperger's syndrome, which is characterized by normal or above-average intelligence with social impairment and repetitive behaviors. Interestingly, he reports that many of these people identify with the character Data.

    Both of those books are great reads. They're written for the general public and don't get too technical. Sorry this isn't directly related to Star Trek, and sorry I wrote a small book, but I happen to find this topic fascinating (said with one arched eyebrow, of course).

  9. #9
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    Well, it's all part of the same debate, relevant, and certainly interesting.

    I was watching a documentary recently titled 'The Boy with the Incredible Brain', about a man who had a form of Asperger's syndrome so mild that it was barely noticeable, but who had a curious and bizarre form of synesthesia which meant he saw numbers as shapes and could run mental calculations by picturing the interactions of those shapes, giving him phenomenal mathematical and numerical memory skills (such as reciting Pi to 20,000 places flawlessly, which took him about 5 and a half hours, LOL). It could have been longer, but it was pretty fascinating, and they even took him to meet 'Rain Man' himself.

    He managed to function fairly normally, but he did have a very odd childhood, often spending most of it on his own; then again, he was from a family of 6, so that was hard to do. But yes, it does go to show you can never make assumptions about anyone, and the human brain is a very complex thing!

    In the case of Data, he is quite different from someone who lacks emotional capacity, because his brain does have moral and ethical subroutines. We've seen what happened when they were disabled: he didn't start to run round and slaughter everyone, and the changes were subtle at first, but he did start to dismiss friendship, loyalty and pain as incidental details in his quest for emotions (ironically!)

    It was fairly clear Data lacked a certain empathy, which is an essential component in moral judgements, but he did not lack it entirely; I suppose he had empathy but lacked the emotional aspect, other than programmed responses: he could sense pain in others, knew that he should act on it, and was bound to do so, but he didn't feel those emotions himself. In this sense he would rescue someone from a burning building because he knew they were in danger and because his ethical software told him he should, not because he could relate to their pain; and he certainly lacked irrational fear (not all fear is irrational!), which in some ways is exactly how a child is!

    I remember watching a documentary about the emotion of disgust: it's something we learn as children from our parents. They made up a chocolate mousse in the form of *ahem* human waste and put it in front of children without their parents present... the children, of course, ate it. But when the children could read the reaction of their parents in the room, they didn't eat it. It's an irrational fear of association and a learned behavior, and certainly something animals don't have!
    Ta Muchly

  10. #10
    Join Date
    Nov 2002
    Location
    fringes of civilization
    Posts
    903
    All this information is helping me with my concept.

    Basically, the character idea I have is a group of androids, creations of an alien civilization. These aliens view their androids much how Cmdr. Maddox viewed Data: incredibly sophisticated machines/tools. Well, one day a terrible disaster befalls the alien civilization, and it's all wiped out (I'm thinking some kinda spatial anomaly). This group of androids is on a research station, and while they are spared, the anomaly surrounds the station, and they are trapped as time passes them by. They are forgotten, mostly due to the fact the anomaly tends to destroy any ship that enters it.

    So the androids on the station keep up their duties, and expand them as they see fit. They evolve beyond the tasks they were designed for, but they don't develop emotions; for a group of droids living amongst droids and a sentient computer, who needs emotions?

    Eventually, a ship makes it through the anomaly. It's a Fed ship, badly damaged and abandoned. The androids quickly begin to study her, and discover more about their galaxy. Eventually, it is decided that some of them should try to leave the station. In an attempt to prepare the new android crew for the galaxy at large and familiarize them with it and the ship's operations, many of them are subjected to simulations of life at Starfleet Academy, and then several 'virtual years' of Starfleet service.

    This procedure begins the androids' first real steps toward becoming independent beings. Until now, they were continuing to follow their last sets of instructions, but now they are seeking to find their own way. And the androids are beginning to show signs of developing humanoid psychological characteristics that they were never intended to have.
    _________________
    "Yes, it's the Apocalypse alright. I always thought I'd have a hand in it"
    Professor Farnsworth

  11. #11
    Join Date
    May 2005
    Location
    Potato Fields of Idaho
    Posts
    32
    Quote Originally Posted by Tobian
    In the case of Data, he is quite different from someone who lacks emotional capacity, because his brain does have moral and ethical subroutines. We've seen what happened when they were disabled: he didn't start to run round and slaughter everyone, and the changes were subtle at first, but he did start to dismiss friendship, loyalty and pain as incidental details in his quest for emotions (ironically!)
    I think this might be the crux of the matter. Can ethical and moral decisions be reduced to logical subroutines or algorithms? Or do emotions lie at the very heart of moral and ethical decisions? Given the wide variety of moral stances that exist across societies and throughout history, I tend to lean toward emotion playing a significant role. Ethical and moral behavior is dependent upon the values a society places on various goals, and value automatically presupposes an emotional appraisal of the situation or goal.

    I think that would leave us with machines that might attempt to emulate ethics by having a matrix of values programmed into them; however, like the brain-damaged patients Damasio discusses in his book, that would possibly leave them knowing what should be done but still unable to respond correctly. It would be like the book-smart person who knows the correct answers but is not capable of applying them as they are called for in "real life" situations; again, I point to Asperger's syndrome, where an individual might be able to give a book answer about a social/emotional situation but be very impaired in the actual appropriate use of social/emotional skills. (This concept is very difficult to explain. Damasio does such a good job that I highly recommend his book.) How do you deal with conflicting values? How do you establish what value is represented in each situation? It becomes immensely complicated.
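
    (To make that matrix-of-values idea concrete, here's a toy sketch; the value names and weights are pure invention on my part, and the point is only that the 'answer' lives entirely in the hard-coded weights:)

        # A toy 'ethics as a value matrix': score each candidate action against
        # a table of weighted values. The value names and weights are invented
        # for illustration; this is the 'book answer' approach, not a claim
        # about how real moral reasoning works.
        VALUE_WEIGHTS = {"preserve_life": 10, "obey_orders": 5, "tell_truth": 3}

        def score(effects):
            # effects maps a value to -1 (violated), 0 (untouched), or +1 (served)
            return sum(VALUE_WEIGHTS[v] * e for v, e in effects.items())

        # A classic conflict: lying to protect someone serves one value while
        # violating another. The matrix gives *an* answer, but only because
        # someone hard-coded the weights; change them and the 'ethics' flip.
        print(score({"preserve_life": +1, "tell_truth": -1}))  # 7: lie
        print(score({"preserve_life": -1, "tell_truth": +1}))  # -7: tell the truth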

    If Damasio is correct (along with a few other neuroscientists), then emotion not only is necessary for good decision making but also lies at the heart of consciousness itself. (It would take too much time to explain that argument here but it is a very convincing one; essentially he is saying that self-awareness is based on assigning emotional value to bodily experiences.) So perhaps we're back to the question of can a sentient machine be built?

    We assume it is possible in sci-fi settings because it is a fun idea to explore. I also think we do so because we still subscribe to the rationalist position that everything can be reduced to pure thought and that emotions interfere with that process.

    Anyway, just a few random thoughts.

  12. #12
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    Data does have one unique advantage, again, over someone with Asperger's syndrome, in that to a limited degree he can learn behaviour patterns and rewrite his own command pathways, and he can run self-diagnostics if he is malfunctioning. It's never been flawless, but it's possible, unlike with humans!

    It's an interesting notion, the idea of emotions ruling over higher brain functions and giving them purpose, but then it's tough to know whether, for an advanced machine, needing emotions is a factor. Emotions are part of the fuzziness of human brains which gives them motivations, reasons to do anything, and a moral centre on issues, but how far away from advanced machines are we really? Are emotions what fuel the brain, or simply part of the life-emulation software? Emotions are still part of our software/wetware; maybe they are hardwired, learned, or both, but they don't come out of anything other than our brain.

    I am reminded of the recent I, Robot film, where Sonny struggled with his ethical programming and for some reason was able to break the software and do something which on the face of it was unethical, but saved lives in the end: a complex moral knot of checks and balances, which would be a phenomenal feat of computer programming, but an extremely ordinary thing for humans to do (if not a simple one)!

    Depending on your standpoint (religiously), no one 'built' humans, but we evolved our software to be tremendously complex. And we can do what no other animal can do: pull apart our motivations, question them, and gain understanding from the question! I am not so sure that we can't fathom AI: computers are ever more sophisticated devices, and a slowly growing trend is towards non-specific computing devices. The CPU was such a revolutionary breakthrough, and it's still undergoing revision, to the point where it can essentially reconfigure the way it works, making itself more efficient. That technology is available today in VLIW chips, which can rewrite their own code (and of course be updated via software: hardware which can receive an upgrade!). It's light years away from a sentient machine, but we are constantly learning from the way we work, and I expect to be astounded by computer technology during my lifetime!

    Back to Tricky's idea: I like it. It could be a good way to bring a lot of emotional impact to a story, because of the way robots can be so childlike in their perceptions of the world around them. I was immediately reminded of a story from Voyager where a holographic 'servant' basically goes mad and kills off his ship's crew. But is he mad, or simply psychopathic because he has no empathy? It's a scary duality that devising machines with no emotional context also produces what we regard as psychopathic behaviour... they are essentially no different to a toaster or lawnmower, and a toaster does not care if you stick your hand in it and burn yourself (well, maybe Lister's superintelligent toaster might, though that was in an alternate universe). It's exactly what Lore was, and exactly why Data is unique: he manages to carry off a delicate balance between the two states!
    Ta Muchly

  13. #13
    Join Date
    Nov 2002
    Location
    fringes of civilization
    Posts
    903
    I was thinking about my 'droids and their simulated experiences vs. the real thing.

    Remember when Geordi recreated Dr. Brahms? The computer used all the available data to create a realistic portrait of the good doctor. Or so we thought.

    Then he met the real Leah. She was kinda bossy, didn't seem to like Geordi all that much, and was happily married. (Guess that's what happens when the computer only uses information from a Federation Instant Messenger profile!)

    I'm thinking that while they are more adapted to their new surroundings, they are still puzzled by how contrary humanoids act compared to their simulated counterparts
    (like O'Brien trying to be Klingon; he had the actions right, but he still performed them as a human).

    I wonder if 'ethical subroutines' could really work; they would just be rules, and rules can be bent. Or broken. (Look at I, Robot again for proof.)
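
    (Something like this, maybe; a rule checker only forbids what its writer thought to list, and everything below is invented for illustration:)

        # Toy illustration of rules being 'bent': a checker can only forbid
        # what its author thought to enumerate.
        FORBIDDEN = {"harm sentient", "disobey captain"}

        def permitted(action):
            return action not in FORBIDDEN

        print(permitted("harm sentient"))         # False: the rule holds...
        print(permitted("disable life support"))  # True: ...until the action is rephrased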
    _________________
    "Yes, it's the Apocalypse alright. I always thought I'd have a hand in it"
    Professor Farnsworth

  14. #14
    Join Date
    Dec 2004
    Location
    Albuquerque, NM
    Posts
    649
    Quote Originally Posted by Tricky
    Such discourse!
    Nature vs. Nurture. Societal Development vs. Intellectual Capacity!

    Setting aside the developmental issues, I started wondering:

    Why aren't there more androids in the Trekverse?

    TOS had Mudd's androids, Roger Korby's Old Ones refits, and the Enterprise crew had started to build androids for those disembodied minds.

    Then TMP had the complete scan of Ilia.

    And then Soong's work.

    Actually, it seems a lot of tech discovered during the shows never gets used again, except for the shield mods that Ferengi scientist developed, and some of O'Brien and Sisko's modifications for the Defiant.
    Just so, Tricky -- when I got talked into running Trek, one of the first things I wanted to do was address how all the stuff the characters discover gets assimilated into society and changes it in turn. I wanted to do big issues. The background metastory has been the creation of reliable machine intelligence and how it is seeping into Federation society (and now into other major powers' societies...): legal issues, sentient-rights issues, the effect on a society that seems to me implicitly static, and how biological life would react to creatures that are smarter, stronger, and able to evolve at the speed of life. I wanted to deal with what it is to be "human"; is it the soul? Genetics? Or is it mimetic, learned & imitated behavior? (I favor the last, but we give all sides...)

    The various encounters with androids would make me think there would be more, as well. The "Data's too hard to replicate" idea is another example of Trek ignoring, or actively working to negate, changes in its universe. Data might represent "bad tech" -- in our universe, much of the android tech is based on Ilia, whom they did a pretty good scan of in TMP. If Data is so hard to replicate, that suggests a design that would "die out" once in competition with another, more easily replicated set of technology.

    I'm not a canon-thumper (which seems to be an unfortunate side effect of fandom in any show/movie/sport). The argument that Data couldn't be replicated rang hollow to me, so I simply went around it.

    On the subject of necessity of emotions and ethics:

    The more I research cognitive development, the more I am convinced that emotional response is the key to true sentience. Although you could create a creature like Data, I would think his ability to make decisions would be greatly impaired; much of a (biological) creature's decision-making process is tied to emotional responses like fear, anger, or affection. Just making choices about what you want to eat involves preferences, tied to a pleasure/displeasure response.

    This begs the question: what use are emotions, and how do they develop? Once again, I'm increasingly convinced that emotional response is tied directly to two biologically-driven needs: self-preservation and reproduction. Fear and anger responses seem reducible to self-preservation directives that protect the being; affection and the need for social interaction seem to be another survival response, one which doesn't always occur in various species, whereas the first two seem ubiquitous.

    Love, while blown into romantic and noble proportions by art, would seem to be intrinsically tied to reproduction. (An idea that has its own implications for various sexual habits; make your own moral assumptions.)

    These emotional responses drive behavior, which, when coupled with the use of intelligence, creates more adaptive and responsive behavior.

    A more pop-psychology response might be that emotions are learned behavior. I would think that as they interacted with people, androids would by necessity mime emotions to improve that interaction... you could argue they simply imitate emotional responses, but then how much behavior in animals/people is mimetic?
    Last edited by black campbellq; 06-08-2005 at 04:03 PM.

  15. #15
    Join Date
    Dec 2004
    Location
    Albuquerque, NM
    Posts
    649
    Long-winded Black Campbell crap argument, part duh...

    Quote Originally Posted by Captain Quirk
    I think this might be the crux of the matter. Can ethical and moral decisions be reduced to logical subroutines or algorithms? Or do emotions lie at the very heart of moral and ethical decisions? Given the wide variety of moral stances that exist across societies and throughout history, I tend to lean toward emotion playing a significant role. Ethical and moral behavior is dependent upon the values a society places on various goals, and value automatically presupposes an emotional appraisal of the situation or goal...
    Damned good question. I think that ethics and morals are learned behavior; mimetic capital passed down through societies as part of their cultural survival strategies.

    How many people have their morals overwritten in a moment of crisis by emotion? How many moral questions inspire strong emotional responses that close off any chance of argument?

    "Programming" emotions and ethics happens everyday, I would suggest. People emulate those they like/love/respect. I think androids would, by necessity, wind up doing the same.
