
Thread: Manufactured Sentient Life

  1. #1
    Join Date
    Jan 2000
    Location
    Virginia Beach, VA
    Posts
    750

    Manufactured Sentient Life

    Moving this part to a more appropriate forum
    Quote Originally Posted by spyone
Especially if you include Voyager (what with its comments about the EMH Mark-1s being reassigned to maintenance/cleaning duties), it seems obvious that, ready or not, the Federation is on the verge of manufactured sentient life.
    If the ability of holodeck creations to evolve into "photonic lifeforms" isn't enough, remember the Exocomps were not the result of an unduplicatable experiment, and showed not only self-awareness but awareness that the existence of others would continue after their own death.

    While this means Starfleet is hurtling towards some heady social issues, on a purely technical level it means ship design is likely to be revolutionized: while Starfleet tends to shun automation and prefers to let sentient beings do most of the work, it could now have sentient beings suited to work in the most hazardous environments. Workmen in spacesuits would be a thing of the past, since any work in vacuum could be done by crewmen who don't breathe.
    I suspect the state-of-the-art starship of 2400 would have holo-emitters everywhere, especially outside. That is, unless reverse-engineering Voyager's portable emitter from the 29th century produced a model that was man-sized or smaller.
    It would also have a small fleet of Exocomps down in Main Engineering helping out.
    (snip)
    While Soong-type sentient androids remain tantalizingly out of reach, the Federation has a number of other avenues to pursue: Moriarty was created as a sentient program by accident. Is that accident unduplicatable? What happened to Voyager's Doctor suggests it isn't.
    What about the nanites in TNG's "Evolution"? Wesley Crusher's High School Science project achieved sentience!

    But the shining example has to be the Exocomps: reproducible, sentient, and useful.
    In an episode where the entire TNG crew had amnesia, Data theorises that there may be one of his kind on each starship. And why not? Isn't he handy? If you could make more of him, wouldn't you?

    While the bulk of these questions do not bear directly on what future Starfleet ships will be like, here's a few that do:

    The Federation has made a sentient holoprogram on at least two occasions, and one of them not through some unique accident after a computer was heavily modified by Bynars but by using the standard computers on a ship and some programming by an Academy drop-out.

    At least one prototype ship has been fitted with holoemitters throughout to allow holographic personnel access to the entire ship.
    Holographic personnel are currently employed doing unpleasant and/or menial tasks.
    Holographic personnel can often operate in hazardous environments that most sentients cannot.


    It seems likely that a Starfleet vessel of the future will have holoemitters throughout (and outside) the ship, specifically so that holograms can do menial and/or hazardous tasks like hull repairs.
    Quote Originally Posted by JALU3
    Since Exocomps are recognized as sentient beings, and EMHs are recognized as sentient beings . . . or at least past a certain point where they function beyond their base programming. Do these individuals then hold rank? Are these individuals then to conduct themselves under the same laws, regulations, and guidelines as others? Do they have to attend the same training that other sentient beings do?

    If so . . . there is a series all to itself!
    (snip)

    Yet, to what extent would Starfleet, and the Federation as a whole . . . want to purposefully create a holographic species, as it were?

    Furthermore, Exocomps . . . being sentient, are they allowed to reproduce themselves at a rate that they determine, or are they force-produced by an organization . . . and upon achieving a level of sentience . . . given the rights of a sentient being? For, are we assuming that since the beginning of their creation Exocomps were sentient (but did not reveal it, or realize it) . . . or is it a gained state of being that is an effect of a certain amount of experiences?

    That would make for an episode . . . a non-sentient Exocomp, at a critical moment, becomes sentient, and being sentient . . . and having access to the federation network . . . learns of different occupations . . . and decides that it/he/she wants to be a musician . . . and then replicates a small thumb sized musical instrument.
    The floor is open for the discussion of those "heady social issues" I discussed.
    Last edited by spyone; 06-26-2006 at 09:27 AM.
    You're a Starfleet Officer. "Weird" is part of the job.
    When the going gets weird, the weird turn Pro
    We're hip-deep in alien cod footsoldiers. Define 'weird'.
    (I had this cool borg smiley here, but it was on my site and my isp seems to have eaten my site. )

  2. #2
    Join Date
    Jan 2000
    Location
    Virginia Beach, VA
    Posts
    750
    Let's see if I can answer some questions:

    First, I'm not sure the Federation has officially recognized the Doctor as a sentient being. They are going to, but I don't recall that they did.

    The Exocomps, however, were ruled sentient. In fact, they were more sentient than even Data suspected: he expected them to refuse a suicide mission, but they considered the loss of one Exocomp to rescue two humans to be a good trade. Volunteering for a suicide mission indicates a knowledge that the universe will continue to exist after you cease to, which is pretty advanced.

    Given Data's example, Exocomps that choose to join Starfleet would have to go through the Academy, or through the same Enlisted training as everyone else. But we have also seen Starfleet using civilian specialists on occasion, so they might not want to sign up and get rank.

    I would assume that Federation law applies evenly to all sentient beings, and Starfleet regs would be similar. Unless there were specific new rules that said "Exocomps are exempt from Rule # (blank), ..." they would have to comply.
    For example, should an Exocomp join Starfleet, it might be issued a "uniform" that was just a colored strip of fabric to put around its hull. If a Horta can join Starfleet, Starfleet already has rules for "what if the uniform won't fit his form?" (In a non-canon novel, there was a Horta in Starfleet; McCoy kept calling it "Ensign Rock".)
    However, as the song goes, "The law don't change an old man's mind ..." Another of those social issues I mentioned: some people in the Federation will have a problem with "artificial" sentients. Some people, like Janeway and Dr. Pulaski, will have a real problem accepting that a machine can also be a person.


    Starfleet seems to be trying really hard to buy some time to wrestle with the social implications of manufactured sentient life, and so I do not think they would "want" to produce a race of, say, sentient holograms. But it looks like the tide of history is moving in that direction whether they like it or not.

    So who's making the new sentient life forms?
    First off, I think they are allowed to reproduce themselves. On the one hand this is disturbing (remember how shocked Picard was to learn that Data had built an android without consulting him? Remember also that Data seemed mystified by this, and pointed out that none of the rest of the crew consulted the Captain when they wished to reproduce.) They will face some interesting issues, too: while we all have to purchase the raw materials our children are made from, rarely do we get to see the process so explicitly.

    However, there will also be examples that were made by organic sentients, either as experiments or as ... well, slaves is the best word really. Even Dr. Farallon, who created the Exocomps, wanted specialist slave labor. She just thought the slaves she had built were too dumb to mind being slaves. (Much like your PC.)
    And therein lies the problem: Dr. Soong was trying to build a sentient machine, and so he kept his work secret to prevent it falling into the wrong hands. Dr. Farallon was building a useful tool to help with the real work she had, so she was far less circumspect about what she had built. She probably patented it (or whatever the Federation equivalent is), so the designs were probably available to the public if anybody was interested.
    Even if the Federation later classified those plans due to their sensitive nature, .... well, you've seen how hard it can be to stamp out information posted to the internet: take down the site it's on and three people will post copies on other sites. Somewhere out there is a Ferengi with a set of blueprints he knows will make him a fortune.

    And really, would anybody object to an assembly line churning out people? I mean, if overcrowding and famine were not an issue, is there anything wrong with making them? Generally, the objections are with how they are used once they are made, not the making itself. So the number of new baby machine-sentients each year would be equal to the number of machine-sentients that want to become parents plus some. How many depends on how many you want, and how much they cost. (How much money are you going to spend to create a machine that may or may not do what you want it to and is completely beyond your legal control once it is made?)

    Exocomps are not sentient from the date of manufacture, that much seems pretty clear from the episode. They have the ability to alter their own programming, and the ability to make new circuitry for themselves, and are designed to seek innovative solutions to problems. After a certain (variable) amount of experience, they start making new circuitry at an accelerated rate and stop responding to some commands: it is around this point that they become sentient. (Here's another philosophical question: is the accelerated circuitry creation the origin of their sentience or a result? Does it at some point realize that it would be doing its job better if it were smarter and then focus on that?)
    What they were before that is a largely philosophical argument: in many ways, babies are not "sentient", but they are able to make new neural pathways and alter their programming and will eventually become sentient. Is a newly manufactured Exocomp an innovative tool or an infant?
    Last edited by spyone; 06-26-2006 at 09:31 AM.

  3. #3
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Sometimes, Star Trek overcomplicates things. Probably the best way to determine if a being counts as a sentient life form deserving of rights would be if it wants them, and expresses a wish for them.

    It just seems that the demands on technology in the 24th century require that engineers must design devices that "walk the fence" on sentience. Occasionally, something falls off the fence and "presto! a new life form".

  4. #4
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Quote Originally Posted by spyone
    And really, would anybody object to an assembly line churning out people? I mean, if overcrowding and famine were not an issue, is there anything wrong with making them? Generally, the objections are with how they are used once they are made, not the making itself. So the number of new baby machine-sentients each year would be equal to the number of machine-sentients that want to become parents plus some. How many depends on how many you want, and how much they cost. (How much money are you going to spend to create a machine that may or may not do what you want it to and is completely beyond your legal control once it is made?)
    I OBJECT! I OBJECT!

    One question that hasn't been addressed is what "return" the creator is due for the materials and effort of creation. Biological reproduction is a natural process that occurs as a part of life. To make an artificial being requires a deliberate decision to create such a being for a purpose.

    I'd accept Data (or another synthetic being) having the right to procreate, but do biological life forms have the right to produce mechanical life forms, and if we do, should we be held responsible for their use?

    If a human creates an android, who is responsible for its actions? What if the android doesn't want to do what it was created for; can its materials be "repossessed"? At what point do the creator's rights end and the construct's rights begin?

    On the flip side, does a mechanical life form have the right to create a biological life form (and if not, why not)? For example, what if Data decided to create and raise a human child? Is turnabout fair play?

    What if he made "dumb" humans, unintelligent but "pre-programmed" to be able to perform certain menial tasks for him? Is that okay, or does it give you the willies?

  5. #5
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    The basic guideline of 'because it asks' is a somewhat grey area in and of itself. Holography is one area where that becomes highly apparent. A hologram is designed to be a realistic simulacrum of life, and with the aid of the computer, can be frighteningly realistic, without actually being sentient. Especially in the case of the EMH: is he sentient, or is his simulation so real as to blur the lines? Or is this indeed one of those horrible debates on exactly what sentience is?

    With regards to creator rights, I would say that once a being has demonstrated sentience it should be walled off by law, to protect those rights... aka it can't just be melted down and scratched off as a failed experiment! However, equally, 'children' are also walled off by law in the reverse sense, because they are also protected from their own mistakes as well as from external threats: they need to be protected from themselves! Sentience also doesn't guarantee 'nice', either: Lore being a perfect example. Data was heavily tweaked and perfected above the Lore prototype, so machines which are highly sophisticated, self-aware, with built-in knowledge that no biological (sentient) child has are a terrifying concept. Data is just an ethical and moral subroutine away from being a psychopath!

    We also have the example of what the holograms that the Hirogen made did; though they were abused and horribly treated, they were still monsters in their own right. And herein lies another problem. Humans often become abusers when they are abused, sad but true. If creations on the verge of sentience 'awaken' and realise their 'slave' status, how will they react, especially if they don't have ethical and moral subroutines, because they were never meant to be sophisticated enough to do anything psychotic? Equally, without those ethical and moral guidelines, a sentient life form may well be created to perform as a soldier, torturer, psychopath. This is almost a debate in itself, but ultimately, too: is it guilty of a crime if it was created that way (which in turn leads on to the debate on punishing the creators too!)?

    I guess that artificial lifeforms sadly need to also be confined within the same laws as the society they are from. And if that society is responsible for creating them, it also has to 'clean up the mess' if they act outside it (aka a sentient life form can't just leave the Federation to avoid its moral laws!), in exactly the same way a criminal hides outside the Federation to avoid its laws. The Federation is still beholden to solve the problem, as it (he, she?) was their citizen!
    Ta Muchly

  6. #6
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Quote Originally Posted by Tobian
    The basic guideline of 'because it asks' is a somewhat grey area in and of itself. Holography is one area where that becomes highly apparent. A hologram is designed to be a realistic simulacrum of life, and with the aid of the computer, can be frighteningly realistic, without actually being sentient. Especially in the case of the EMH: is he sentient, or is his simulation so real as to blur the lines? Or is this indeed one of those horrible debates on exactly what sentience is?
    That's why I wrote "because it wants them" into my definition. A standard hologram would be excluded because it is just following the program.
    One thing that is true in Trek (even if denied by characters) is that all "beings" have wants and desires. Even Data and the EMH from Voyager.

    That does leave the problem where a creator programs something to express a desire to be treated as a sentient being, but I'm thinking to err on the side of civil rights. In other words, if it asks for/wants rights, it gets them. I'm thinking that it is better to give a high-tech toaster rights it doesn't deserve than to enslave another being.

    I am thinking that the Federation should have some restrictions against creating fully sentient AIs/life forms to help avoid this.

  7. #7
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Quote Originally Posted by Tobian
    (Aka a sentient life form can't just leave the Federation to avoid its moral laws!) in exactly the same way a criminal hides outside the Federation to avoid its laws. The Federation is still beholden to solve the problem, as it (he, she?) was their citizen!
    Woah! So you are saying that people who go outside the Federation are still held to the Federation's laws? That's not right. If someone who has not committed a crime in the Federation goes to live elsewhere, he/she/it should not be held accountable to Federation law. I think people should be held to the laws of the society in which they reside. If a human decides to live in the Klingon Empire, they should be held to Klingon law, not Federation law.

    It also has major Prime Directive issues, as you are eventually going to run into a culture that does things differently than UFP law. Probably the whole thing would be very complicated, with laws varying from world to world, much like cloning and genetic engineering are treated (okay on some planets, banned on Earth).

    Star Trek has shown signs of some sort of interstellar law (like international law), with certain weapons and fields of research being banned among the major Trek cultures. I could see something like that.

    Using real-world examples, hiding outside of the US is an excellent way to avoid US law. That's what "no extradition" means. Another reason why such rules exist today is to prevent country A from controlling country B, plus the very pragmatic reason that someone who is outside of your reach can't be punished anyway. Of course, someone who is "wanted" in a place is probably not going to go back there.


    BTW, Heady stuff, ain't it?

    It is hard to make a "fair" law without causing more problems.

    P.S. What about my idea of Data making non-sentient/pre-programmed humans to do slave labor? It leads to a major rights nightmare.

  8. #8
    Join Date
    Dec 2004
    Location
    Albuquerque, NM
    Posts
    649
    We've been playing with this for about three years of Star Trek gaming. At this point, the Federation has a small, but steadily growing population of androids (in our universe, based on the Ilia tech, not the Soong stuff) and sentient starships. Massive planetary networks are intelligent and active.

    Social issues -- the already lazy, spoiled leisure class of the Federation is now even more disconnected from the production of goods and services; the androids and computers do it better. However, they want to be considered more "real", life-wise, than the machines. The machine intelligences are willing to work with their biological brethren; most recognize they are the "children" of their creators (and there are plenty of humans, Vulcans, etc. that feel the same) and get their social capital from copying their "parents." There's a lot of jealousy here toward the machines for their skills, and from the machines toward their biological counterparts for their seeming arrogance and lack of empathy for the machines.

    The androids and ships reproduce -- androids by creating new bodies, combining their mind-states, and allowing the new sentient to "learn" from their experiences. The starships do the same, combining mind-states and creating new ships that are thinking beings. They are contracted to work in Starfleet for a certain length of time, with commensurate rank and privileges, as if they were officers. Some have reached the ranks where they command themselves. Some have retired and are wandering the galaxy on their own.

    The political issues have ranged from what rights they have to, at the current time in the game, their actively getting into politics and the Starfleet brass. There's a fall coming, partly from the biological-machine antipathy being created by bigots, and partly due to the success of the Federation... the population simply doesn't have to DO anything. Most don't (Starfleet is the elite). Sooner or later, the social structures are going to collapse.

    This is just the tip of things in our campaign...

  9. #9
    Join Date
    Jul 2003
    Location
    Newcastle, England
    Posts
    3,462
    Hmm, OK, I could have put that better, but yes, basically what I meant was they would be bound by the same laws, interstellar or otherwise, as other Federation citizens. Also, if that sentient went, say, on a mass murder spree on a pre-warp world in 'international waters', they would have to deal with it, because of the Prime Directive, and because it was their horrible mess. Of course, yes, extradition treaties and suchlike would still be just as relevant.. but imagine the nightmare of, for example, a Soong-type android falling into Romulan space (a functional one, not B4 or something) and falling under their laws... There the nightmare is because it is highly advanced technology falling into enemy hands!

    BCQ, I'm curious where your campaign is going exactly. It sounds like it's about to self-destruct, or go down a Cylon / nu-BSG route, or something rather like I, Robot (the movie) where an AI decides to take over humans to make them more efficient? It sounds fascinating, but it isn't going to remain Star Trek themed very long, just set in the universe.

  10. #10
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    Quote Originally Posted by black campbellq
    Social issues -- the already lazy, spoiled leisure class of the Federation is now even more disconnected from the production of goods and services; the androids and computers do it better. However, they want to be considered more "real", life-wise, than the machines. The machine intelligences are willing to work with their biological brethren; most recognize they are the "children" of their creators (and there are plenty of humans, Vulcans, etc. that feel the same) and get their social capital from copying their "parents." There's a lot of jealousy here toward the machines for their skills, and from the machines toward their biological counterparts for their seeming arrogance and lack of empathy for the machines.
    This is just the tip of things in our campaign...

    The Old Ones. I take it the parallels to "What Are Little Girls Made Of?" are deliberate?

    Hardly seems like the utopian Federation, does it?


    And before I forget..... Share and enjoy

  11. #11
    Join Date
    Nov 2002
    Location
    fringes of civillization
    Posts
    903
    As the old Ferengi sea shanty goes: "What do you do with a sentient android?"

    This is an issue that is always lurking in sci-fi/comics/fantasy. When does a created lifeform become a being with all the rights, privileges, and responsibilities of its creator? One would hope that a Utopian ideal like the Fed. would 'get it right the first time', but then where does the line stop? The Enterprise-D became, on a simple level, sentient. Does that mean that it now has the same rights as Picard or Data?
    And if Data has rights, what of Lore? Data just turned him off. That would be the android equivalent of murder, right? Shouldn't he get a trial first, at the very least?

    One thing I always found interesting was that AFTER they declared Data a sentient, independent being, they still tried to create more of him. Isn't that kinda like saying "Kirk was one of our best captains, and a fine example of a Fed. citizen, so we're going to clone him"?

    Speaking of clones, did Riker and Pulaski commit murder when they killed their clones in "Up the Long Ladder"? Because in DS9, Odo told a guy who tried to frame him by murdering his own clone and planting evidence that Odo did it, that killing someone, even your OWN clone, was murder. (Maybe it's a Bajoran law...)
    _________________
    "Yes, it's the Apocalypse alright. I always thought I'd have a hand in it"
    Professor Farnsworth

  12. #12
    Join Date
    Aug 1999
    Location
    Worcester, MA USA
    Posts
    1,820
    I think that the cloning episode in TNG mentioned something about an individual having the rights to his own genetic material/DNA code. Since the DNA was stolen, they could do what they wanted. Had the clones been fully developed and walking around, it might have been a bit more difficult, on a moral issue anyway.


    As for Lore, I don't think it counts as murder, since they can reactivate him, something you cannot do with a biological life form. So technically Lore isn't dead. That might make Data legally responsible for Lore's wellbeing.

    I liked the episode about Data having to fight to prove that he was a sentient being deserving of rights. It seems to me, though, that it should have been predecided in Data's favor. I would think that the entry requirements into Starfleet and Starfleet Academy would probably include something that would logically prove, or at least logically imply, that Data was considered a life form.

  13. #13
    Join Date
    Jan 2000
    Location
    Virginia Beach, VA
    Posts
    750
    Quote Originally Posted by tonyg
    As for Lore, I don't think it counts as murder, since they can reactivate him, something you cannot do with a biological life form. So technically Lore isn't dead. That might make Data legally responsible for Lore's wellbeing.
    Further, even if Data permanently deactivated Lore, there is little doubt it would be ruled a "justifiable homicide".

    "Your honor, I tried to just leave him alone, but he kept hurting people. I tried to persuade him to stop, but he kept hurting people. Eventually it became clear to me that the only way to get him to stop hurting people was to permanently destroy him. So, I switched him off and removed his brain."

    Only that last part would be weird in a 20th-century courtroom.

    Remember that our society, and that expressed in Trek, does allow "murder" in the defense of others or self. If any sentient steps too far out of line, it may get destroyed.

  14. #14
    Join Date
    Aug 2001
    Location
    Paris, France, Earth
    Posts
    2,588
    The very tricky thing here is that defining sentience becomes quite hard when you can have thinking rocks, self-aware holograms, or fully functional androids...

    There's also the lingering question in Trek about the fact that technology doesn't seem to be used as much as it should for improving an individual's abilities...
    For instance, why couldn't artificial eyes like Geordi's be implanted into every human, thus giving them superior sight? (Or conversely, why do these eyes work so much better than real ones? Picard's heart and Nog's leg were just doing what their biological counterparts were doing, nothing more.)
    Of course, the Borg make for an easy explanation of why people could become wary of cyborgs (as the Eugenics Wars made them wary of gene tampering), but in this case, why not give only biological replacements every time people need a limb or organ?

    This might be off-topic, but for me it is tied to the matter of ships fully crewed with holographic or android crewmembers (or fully automatic ships, for that matter). Apparently, the Federation ideal is to put the individual before the machine, therefore refraining from relying too much on them to make decisions ("The Ultimate Computer" notwithstanding -- and I don't think the M-5 failure was enough of a reason to stall all computer research for the centuries to come).
    I guess the other galactic powers have their own reasons to avoid the use of machines -- Klingons must find it dishonourable, Romulans and Cardassians probably don't trust machines enough (and cyber- or genetically-augmented soldiers could become a threat), and the Ferengi certainly don't need a computer as a competitor. The Dominion is, on the contrary, a perfect example of a power having no such qualms (not to mention the Borg, of course).

    For all of these reasons, I don't think the Federation would be eager to create more artificial lifeforms (no matter how clever Soong was, I don't think the Federation would need much time to reproduce his creation should it put all of its effort into it - think how fast it could adapt to the Borg and Dominion attacks, for instance). Therefore, the case of an artificial intelligence's rights would be analysed only on a case-by-case basis.
    Of course, this could change in the future, but then again, we can pretty much imagine anything in Trek's future (I haven't watched all of Enterprise, but from what I've seen, the 29th century doesn't seem to rely too much on androids either).

    On the holograms question, it was stated clearly in TNG that holograms weren't sentient -- they were just very advanced NPCs in a video game. The only sentient hologram was Moriarty, and he was an anomaly. But the question remains: if the ship's computer can run a program emulating sentience (while doing minor tasks like running the whole ship), then achieving sentience by itself should be a piece of cake, unless a whole lot of safeguards are set to prevent such a thing from happening (and then... wouldn't the ship's computer be considered a lobotomised slave?).

    Things became more muddled in DS9 with Vic and in Voyager with the Doctor. Both were self-aware holograms anyway. Considering Vic, we could suppose that he was only a very advanced program, making him much more lifelike but remaining limited to his own programming (after all, we never saw him falling in love with a real character, asking to go out into space to watch the stars, or suddenly slamming the door in a character's face saying he's fed up with them always complaining to him).

    As for the EMH... I haven't seen all of VOY either, but it seems to me he was able to reach far beyond his programming -- feeling love, fear, pride, and so on... (BTW, the Ent-E holodoc was apparently also afraid of the Borg.) Maybe this has to do with the bio-neural gel packs... making them a ticking bomb on all ships using them, as their computers or holograms could be able to reach sentience far more easily.

    Whew... gonna take a break. Hope all of this is understandable.
    "The main difference between Trekkies and Manchester United fans is that Trekkies never trashed a train carriage. So why are the Trekkies the social outcasts?"
    Terry Pratchett

  15. #15
    This is why I favor games like:

    Klingons off the port bow!

    Raise shields!

    ***BOOOM*** The doc goes flying over the bridge rails.


    None of this complicated "what is sentience" definition stuff, nor any other such ideas, which inhabit and are poster children for the "Pandora's Box of Scenario Generation."

    I don't need players thinking. I need them thinking on their feet.

    Complicated philosophical scenarios have their place, to be sure, but I personally lost interest in them 20 years ago when they consisted of

    "Are the offspring of the orcs inherently evil, thus requiring the party paladin to slay them out of hand?"

    "The evil wizard's tower had a gem that he stole from the elves worth 25,000 Gold. Since we cut off his head, but were hired by dwarves, but the tower is under the charter of the local king of the human lands...who now rightfully owns the gem?"

    "The Faerie Drake is a chaotic good creature. His hide will bring many thousands of gold in resale. What a dilemma this is for a party that needs 4,000 gold to buy the ingredients for the sword that will save the village, thus attracting the Church of Good, whom we can't stand."


    I'm all for challenges, and plot development, but not the follow-on arguments that get started with setups like that. I've seen them destroy groups, once positions get entrenched.
