This is topic Yay, it's declassified! in forum Starships & Technology at Flare Sci-Fi Forums.


To visit this topic, use this URL:
https://flare.solareclipse.net/ultimatebb.php/topic/6/1908.html

Posted by Shik (Member # 343) on :
 
Some of you may remember that a month ago I was looking for a particular scientific term but was unable to say why I needed it. Now I can, because Bernd's posted the article.

[Linda Richman] Discuss. [/Linda Richman]
 
Posted by Sol System (Member # 30) on :
 
Asimov's laws make for good short stories in the 1950s, but applying them to real sentient beings is nothing but slavery.
 
Posted by MinutiaeMan (Member # 444) on :
 
I thought the exact same thing. In fact, that's basically what I was trying to argue (in a weird, tangential way) a week or two ago in another thread.

Other than that, though, it's an excellent article. Although a sentient starship isn't really a new idea at all -- we've had HAL 9000, and Andromeda, and Moya, to name just three. Even Trek had Tin Man. But none of them (aside from Farscape) have really approached the idea of a starship that might not want to serve from the very beginning. And in a military-type organization where control is everything, is it a good idea to have a tool that can think for itself?

I also found the part about the development of Cassious to be quite entertaining -- and frighteningly possible in the fictitious world of Trek.

Very interesting! [Smile]
 
Posted by Colorful Cartman (Member # 256) on :
 
"Dr. Pratheep Vijayaraghavensatyanaryanamurthy"

Please tell me you made that name up.

It's a splendid read, though at the rate science is progressing, true artificial intelligence will probably be a reality before the turn of the century, assuming the numerous ethical barriers can be cleared.
 
Posted by Sol System (Member # 30) on :
 
I agree, incidentally, that it is a fine article. My only real beef, other than the one I mentioned, is that Federation science should have addressed these issues long before the time in which the article is set.
 
Posted by PsyLiam (Member # 73) on :
 
Does Red Dwarf count as a sentient starship? How much is Holly just Red Dwarf's face?
 
Posted by Shik (Member # 343) on :
 
Thank you, thank you, aaaaand thank you.

Simon: Regardless of whether or not they might be viable, Data once mentioned that all of his programming conformed to the Laws. So.

Cartman: "Please tell me you made that name up." Yes with an if, no with a but. The name "Pratheep" came from a kid I went to school with named Pratheep Balendra; he was from Sri Lanka & it's pronounced "pra-DEEP." As for the "Vijayaraghavensatyanaryanamurthy"...well, that's the last name of one of the coroners on the show "Crossing Jordan," Dr. Mahesh Vijayaraghavensatyanaryanamurthy, but airbody calls him "Bug" due to his specialty of forensic entymology. It IS a real Indian last name.

Reading through I noticed a few spelling errors--oops. Also, I want to note that I mentioned that there was only 1 other Soong-type android, that being Lal. I figured that Lore's existence would be heavily classified due to his inherently malicious nature & that at both Soong & Data's insistence, Dr. Tainer's androidocity would not be revealed. And until "Nemesis" is released...well...yeah.

I'm also surprised no one mentioned the stupid joke allusions.
 
Posted by The359 (Member # 37) on :
 
I should point out that Dr. Chandra, the creator of the HAL 9000 in 2010: Odyssey Two, is actually named Dr. Sivasubramanian Chandrasegarampillai in the novel; he too is Indian (regardless of what the movie showed).
 
Posted by Timo (Member # 245) on :
 
I'm interested in hearing where and when Data said he was subject to Asimov's laws. I can't find the reference in "Datalore" or "Brothers", and I don't have "Measure of a Man" or "Offspring" at hand.

Clearly, Data has shown the capacity to kill, and, except perhaps in "Descent", has experienced no emotional distress as a result of taking a life ("emotional distress" being what Asimov's robots went into when they killed somebody - Asimov's machines never had much of a problem with possessing emotions).

Also, Data repeatedly refuses legitimate orders from human beings, and often also places his survival over that of hostile humanoids.

And in the end, Asimov's laws always assumed a single biological sentient lifeform, namely Homo sapiens. Trek has thousands, presenting such a wide range that Data himself must count as "humanoid/human" according to the inclusive criteria. Unless Asimov never lived in the Trek timeline, and never wrote "That Thou Art Mindful of Him", it's inconceivable that UFP experts would not have thought of the problems in defining "human" for the purpose of these laws.

Timo Saloniemi
 
Posted by Sol System (Member # 30) on :
 
Nothing is said about the famous Three Laws in "Datalore" or "The Measure of a Man," though Asimov does get a quick mention in the former. I cannot speak for the others.
 
Posted by Daryus Aden (Member # 12) on :
 
I think I can safely say that "nobody can deny the power of curry!" after reading this thread. [Smile]
 
Posted by The Mike Who Would Be Captain (Member # 709) on :
 
A major step in Data's evolution to NOT being a simple robot was made, however, when he made the conscious decision to kill Kivas Fajo. Of course, his programming had to churn over this for quite a while, and he probably wasn't breaking the Laws so much as finding an interpretation that allowed him to do so (i.e. the protection of Fajo's crew and other potential victims).

But I think that to attain sentience, the Laws must be disregardable, or at least have exceptions made. This avoids the 'slavery' issue raised here, and makes the subject able to take a place in a society of sentient moral beings (for we have no preset concepts except for instincts... the decision to kill or not kill must be made by our morality).

Sorry I couldn't offer this as a critique when you first sent me the article, Shik, but, well, I just thought of it now.
 


© 1999-2024 Charles Capps

Powered by UBB.classic™ 6.7.3