Thread: Computers

  1. #1

    Computers

    OK, this has recently begun to work in my mind, and I thought, why not mention it here?

    For years, Trek blueprints have included a large computer room from which, reasonably, all of the ship's computer functions are expected to be run. Usually this room has been placed deep inside the vessel. But is this an ideal place for a ship's computer? Even today, most large computers are decentralized, with the whole forming the complete package: the entire memory is not in one location but is spread throughout several smaller locations, and thus is not liable to massive damage. That, to me, would make sense on a starship. So what says the body?
    It's me, Eric R.

  2. #2
    Star Trek is supposed to make sense?

    Command and control center located on top of, and centered on, a very large target; all power systems running through a single location; two (or more) large propulsion units exposed dozens of meters away from their power core; lots of thin, easy-to-target main structural members; oh, and let's not forget sending the most senior officers into the most dangerous situations, leaving the junior personnel with the keys to the multi-gigaton weaponry pointed at them...

    Make sense? Pfft, it's sci-fi.

    But other than all of that, yeah a decentralized system would be logical.
    Phoenix...

    "I'm not saying there should be capital punishment for stupidity,
    but maybe we should just remove all the safety lables and let nature take it's course"

    "A Place For Everything & Nothing In It's Place"

  3. #3
    Join Date: Aug 2009 | Location: Kwinana, Western Australia | Posts: 128
    You need to think of the "computer core" as a mass storage device. You couldn't possibly decentralise the massive data library found on a starship.
    "Star Trekkin' across the universe, On the Starship Enterprise, Under Captain Kirk.
    Star Trekkin' across the universe, Boldly going forward 'cause we can't find reverse."
    Star Trekkin' by The Firm.

  4. #4
    Join Date: Nov 2000 | Location: Geelong, Vic; Australia | Posts: 1,131
    I tend to think of the big cylindrical thing as the housing for the massively-parallel processors. When it comes to fast processing, you don't want the processors decentralised - you want them as close together as possible to minimise data transit times. With a Star Trek computer you're looking at extremely fast heuristic neural networks that have to communicate with each other as fast as possible.

    Think of the old Cray supercomputers - they were circular so that no wire was more than (IIRC) about 4 feet long.
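
    To put rough numbers on that, here's a quick back-of-the-envelope in Python. The ~2/3 c propagation speed and the 20 m comparison run are my own assumptions for illustration, not figures from any Cray documentation:

    Code:
    # Rough one-way signal delay over a wire, assuming propagation at
    # about 2/3 the speed of light, which is the right ballpark for copper.
    C = 299_792_458            # speed of light in a vacuum, m/s
    PROPAGATION = 0.66 * C     # assumed signal speed in a wire, m/s

    def wire_delay_ns(length_m: float) -> float:
        """One-way signal delay over a wire of the given length, in ns."""
        return length_m / PROPAGATION * 1e9

    # The Cray's ~4-foot (1.2 m) maximum wire length, versus an imagined
    # room-sized machine with 20 m runs between units.
    for length_m in (1.2, 20.0):
        print(f"{length_m:5.1f} m -> {wire_delay_ns(length_m):6.2f} ns one way")

    Even the 4-foot wire costs about 6 ns one way, which is already half of the Cray-1's 12.5 ns clock cycle; stretch the runs to room scale and you've blown several cycles just moving the signal.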
    When you are dead, you don't know that you are dead. It is difficult only for others.

    It's the same when you are stupid...

  5. #5
    Keep in mind that many Trek computers have FTL data transmission speeds; they probably have all sorts of subspace field generators around them, which would be easier to handle if the processors were close together.

    There is a part in the TNG tech manual about networking smaller processors together off the ship, though; I recall something about all the tricorders on an away mission pooling their processing power...
    Portfolio | Blog | Currently Running: Call of Cthulhu, Star Trek GUMSHOE | Currently Playing: DramaSystem, Swords & Wizardry

  6. #6
    I also think there's an element of the decentralised computer in Voyager's bio-neural gel packs...
    DanG/Darth Gurden
    The Voice of Reason and Sith Lord

    “Putting the FUNK! back into Dysfunctional!”

    Coming soon. The USS Ganymede NCC-80107
    "Ad astrae per scientia" (To the stars through knowledge)

  7. #7
    Join Date: Nov 2000 | Location: Geelong, Vic; Australia | Posts: 1,131
    Quote Originally Posted by Dan Gurden:
    I also think there's an element of the decentralised computer in Voyager's bio-neural gel packs...
    My understanding was always that Voyager's BNGPs were to speed up local processing, whereas the main processing still took place in the main (presumably isolinear) core(s).

    Sort of like the reflex system in humans: certain things bypass the brain entirely and are acted upon much faster than if they had to be processed in the cerebrum. When you pull your hand away from a hot stove, the signal doesn't get to your brain until after you have already pulled away, because the reflex system attached to the thermal-pain receptors goes straight through your spinal cord into your motor nerves, ignoring the brain until after the muscles have pulled your hand to safety.

    I thought that the BNGPs acted in a similar fashion: if there's a hull breach, for example, emergency force fields are set up immediately, processed locally by the BNGPs, before the message is sent to the main computer core (which then processes the event and, hopefully, works out why there was a hull breach).
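
    If it helps, here's a minimal sketch of that reflex-arc pattern in Python. Every name in it (raise_forcefield, core_inbox and so on) is invented for illustration; none of this comes from the tech manual:

    Code:
    # Reflex-arc pattern: act locally first, report to the core afterwards.
    import queue
    import threading

    core_inbox = queue.Queue()   # stand-in for the main computer core's event feed

    def raise_forcefield(section: str) -> None:
        print(f"[local] force field up in section {section}")

    def on_hull_breach(section: str) -> None:
        raise_forcefield(section)        # reflex: no round trip to the core
        core_inbox.put(f"hull breach in section {section}")   # then tell the core

    def core_loop() -> None:
        while True:
            event = core_inbox.get()
            print(f"[core] analysing: {event}")
            core_inbox.task_done()

    threading.Thread(target=core_loop, daemon=True).start()
    on_hull_breach("14-Baker")
    core_inbox.join()   # wait for the core to catch up

    The force field goes up before the core ever hears about the breach, exactly like the hand leaving the stove before the brain gets the signal.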

    Of course, I could have all this occurring in my head...it wouldn't be the first time!
    When you are dead, you don't know that you are dead. It is difficult only for others.

    It's the same when you are stupid...

  8. #8
    Join Date: Jul 2003 | Location: Newcastle, England | Posts: 3,462
    Yeah, Alderon, Voyager had a central computer core of the regular type; we know because it was stolen in one episode!

    Depending on your flavour of ship, there are usually multiple computer cores and thousands of smaller computers distributed around the ship. If the main core went offline, you could go about your usual business, though several systems would be down (anything processing-heavy, and certainly, pre-Voyager, the holodecks, haha).

    However, no, that's not the rule for computers... Modern supercomputers are all designed with the shortest physically possible routes between processing units. Look at the old Cray supercomputer: it was built in a pie-wedge format, with all the wires in the middle, to make the routes as short as possible. In terms of processing, the longer your connections, the more latency you suffer. A computer the size of the ones in Star Trek would have HUGE problems with latency as it is (and they mention it in the tech manuals), without decentralising it further!

    So no, I don't agree, Bluecoat; it makes no sense to do so. Because of the raw amount of storage, and the processing of all that stored information, the 'core' concept is as valid then as it would be now... But they do have local sub-processing units, and most of the consoles are computers in their own right (they just don't have access to the huge library of information if the core is offline). As I recall, many bridge systems also have hard connections to important systems (as do the other redundant bridges, such as engineering and any battle bridge), so they can operate with the core offline.

    Modern cloud computing is simply a 'cheap' way to do parallel storage and backups, but in terms of processing power it would be PWNed by a modern supercomputer, because the latency of sending terabytes of information around a network spanning maybe thousands of miles would wipe out much of the advantage of its raw CPU numbers!
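
    To put some numbers on that, here's a sketch assuming a 10 Gbit/s wide-area link versus a 25 GB/s local memory bus; both figures are illustrative, not measurements:

    Code:
    # Time to move 1 TB of data: assumed WAN link vs. assumed local memory bus.
    TERABYTE = 1e12               # bytes

    wan_bytes_per_s = 10e9 / 8    # assumed 10 Gbit/s wide-area link
    local_bytes_per_s = 25e9      # assumed 25 GB/s local memory bandwidth

    print(f"WAN:   {TERABYTE / wan_bytes_per_s / 60:.1f} minutes")
    print(f"Local: {TERABYTE / local_bytes_per_s:.1f} seconds")

    Roughly 13 minutes over the wide-area link versus 40 seconds locally; raw CPU count can't buy that time back.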
    Ta Muchly

  9. #9
    Join Date: Aug 2001 | Location: Paris, France, Earth | Posts: 2,589
    Given what the tricorders have been shown to be able to do, I expect that a single workstation has enough processing power to work on its own for small tasks.

    While I remember many cases where the computer being offline meant the whole ship was basically unusable (First Contact being one prime example), I don't remember episodes where the computer is offline but a separate workstation manages to perform part of its duties. Mind you, the holodeck usually worked fine whatever else happened to the central core (First Contact again).
    "The main difference between Trekkies and Manchester United fans is that Trekkies never trashed a train carriage. So why are the Trekkies the social outcasts?"
    Terry Pratchett

  10. #10
    Join Date: Jan 2000 | Location: Virginia Beach, VA | Posts: 750
    As others have said, when processing speed matters, distance matters. We have more or less hit a plateau in processor speed for personal computers, and the reason is that our current processors are limited by the physical distance between the processor and the memory: the processor cannot add the next two numbers until it has saved the result of the last calculation to memory, and even travelling at near the speed of light, that information has to cover a few millimeters between the processor and the RAM.
    Starfleet computers actually create a warp bubble around the computer core so the data can travel FTL.
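
    For a sense of scale on the slower-than-light problem, here's the arithmetic (the clock rates are just sample values):

    Code:
    # Maximum distance a signal can cover in one clock cycle at light speed.
    C = 299_792_458   # speed of light, m/s

    for ghz in (1, 3, 10, 100):
        cycle_s = 1 / (ghz * 1e9)
        print(f"{ghz:4d} GHz -> at most {C * cycle_s * 100:6.2f} cm per cycle")

    At 3 GHz a signal can cover about 10 cm per tick at best, and at 100 GHz barely 3 mm, so the whole machine has to shrink as the clock climbs, unless (as above) the data gets to cheat with a warp bubble.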

    The network is decentralized, though. Although every computer or workstation on the ship is designed to connect to the Computer Core, they can connect to each other even if the Core goes down, much like the internet was designed to keep working just fine even if a few major cities cease to exist. However, it is possible for a major disaster to take the network down, or at least to cause large nodes of it to be unable to connect. I seem to recall a TNG episode where the ship had a major disaster, and while folks in Engineering could fire up the drives and make the ship move, they couldn't access the sensors so they couldn't see where they were going. Meanwhile, the Bridge had access to the sensors, but couldn't get control of Helm or Navigation.
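
    A toy model of that internet-style resilience, with a four-node topology I've invented for the example:

    Code:
    # Workstations are wired to each other as well as to the core, so losing
    # the core node doesn't have to partition the network. Topology invented.
    from collections import deque

    links = {
        "core":        {"bridge", "engineering", "sickbay"},
        "bridge":      {"core", "engineering"},
        "engineering": {"core", "bridge", "sickbay"},
        "sickbay":     {"core", "engineering"},
    }

    def reachable(start, down):
        """All nodes reachable from `start`, skipping any nodes in `down`."""
        seen, todo = {start}, deque([start])
        while todo:
            for peer in links[todo.popleft()] - down - seen:
                seen.add(peer)
                todo.append(peer)
        return seen

    print(reachable("bridge", down=set()))      # everything, core included
    print(reachable("bridge", down={"core"}))   # peers still reach each other

    The flip side, as in that episode, is that a big enough disaster can still sever enough links to partition the network, leaving each surviving node with only the systems on its side of the break.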

    The Computer Core also works a bit like a server farm: it handles the storage of data that is often needed by different parts of the ship, or rarely used by a specific part. For instance, the repair manuals for the ship's major systems are probably stored near Main Engineering, because that's where they are needed most. Also, in the event of a network outage, Main Engineering probably needs those manuals, but sickbay probably doesn't. Sickbay probably keeps medical files and anatomy texts and such for the same reason. But Bajoran History isn't a subject that gets called for a lot, so it gets stored centrally and sent to a workstation when called for. And current sensor readings are stored there because they are used for Navigation, Cartography, Astronomy, and various forms of statistical analysis. Engineering can probably improve warp efficiency by making minute changes in response to subspace conditions immediately in front of the ship, too.
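
    The placement policy could be as simple as this sketch (every entry invented for illustration):

    Code:
    # Keep each dataset on the node that uses it most; anything without a
    # clear primary consumer lives in the central core. Entries invented.
    PRIMARY_USER = {
        "warp core repair manual": "engineering",
        "crew medical records":    "sickbay",
        "bajoran history archive": None,   # no single heavy user
    }

    def storage_node(dataset):
        """Local node if the dataset has one primary consumer, else the core."""
        return PRIMARY_USER.get(dataset) or "computer core"

    for name in PRIMARY_USER:
        print(f"{name:28} -> {storage_node(name)}")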

    Think of it this way: 40 years ago, a business with a computer had a big room somewhere with a massive computer in it and a handful of terminals, and if you needed something from the computer you submitted a request to somebody who had a terminal.
    30 years ago, computers needed less training to use and terminals became cheaper, so while you still had the big room filled with computer, now everybody got a terminal on their desk.
    20 years ago, the dumb terminals were being replaced with desktop PCs, but these still connected to the big roomful of computer in the basement somewhere.
    10 years ago, the desktop PCs were each better than the room-filling mainframe of 20 years before, yet somewhere in the business there was still an air-conditioned room filled with computer equipment; now, instead of a mainframe, it held "servers".
    The nature of the business's computer network has changed dramatically, but it still looks superficially similar, and you find yourself wondering, "If every desk has a powerful mainframe on it, why is there still a room filled with computers? Wouldn't it make more sense to decentralize that and put maybe a closet in every department or something?"

    But actually, I think the main reason there is still a Computer Core on starships is up there at the top of my post: in the first decade of the 21st century we pretty much hit the wall on processor speed if data moves at slower-than-light speeds. Sure, making processors and memory smaller will allow them to be moved closer together, but the limit is still pretty harsh: getting really awesome processor speeds requires putting the processor(s) inside a warp bubble so data can move at FTL speeds.
    Warp fields tend to be kind of fussy: they collapse, they become unstable, and it appears to be hard to make them very small. I imagine it could be a real problem if some of the processors in a big parallel-processing array suddenly slowed down (say, when their warp field dropped), so you need all of the processors in the array to be inside the same warp field. And that means they all need to be in one place.
    You're a Starfleet Officer. "Weird" is part of the job.
    When the going gets weird, the weird turn Pro
    We're hip-deep in alien cod footsoldiers. Define 'weird'.
    (I had this cool Borg smiley here, but it was on my site and my ISP seems to have eaten my site.)

  11. #11
    Join Date: Jul 2003 | Location: Newcastle, England | Posts: 3,462
    Actually, using the same model, a lot of companies are coming full circle, because 'basic' computers can now handle all possible business needs. To cope with the ever-updating nature of operating systems and software, some use a centralised software and data store, taking advantage of super-high-capacity data links... So in that sense it moves ever closer to Trek, not further away from it.

    The model of 'cloud computing' is also taking off in a big way on the internet: using a data centre which regulates itself and keeps itself maintained without you having to do anything (OK, someone does it; the IT technicians just don't have to go far).

    Take a look at the new OnLive idea (http://www.onlive.com/). All you need is a decent TV, a broadband connection, and something capable of decoding streaming video, and your games never become out of date. That's one of the profound advantages of centralised computing: you only need to update the core, not all the consoles.
    Ta Muchly

  12. #12
    Join Date: Mar 2010 | Location: South Wales, UK | Posts: 157
    "Welcome back, Commander.
    "You have 2,467 subspace emails pending.
    "You can also upgrade to LCARS Explorer Version 8 now without having to consult your commanding officer or IT support team. Click 'yes' to continue.."
