Comments open; continually revised
January 3, 2005 4:45 PM

The Ethics of Deep Self-Modification. What will happen when machines gain the ability to modify their own psychology? Do we have a responsibility to step in? What happens when we have the ability to modify ourselves? Philosopher Peter Suber has dedicated himself to issues of self-modification... not just in psychology, but also in constitutional law. Small wonder that this is the guy who invented Nomic. His site is littered with great stuff; he is now primarily involved with the open access movement. Check out his open access primer and blog.
posted by painquale (12 comments total) 2 users marked this as a favorite
 
This guy was my prof at Earlham. Really bright guy and (more importantly) a deeply decent human being. Glad to see him up on the blue.
posted by leotrotsky at 4:59 PM on January 3, 2005


Oh, and to actually add something to the discussion, take some time to navigate his course notes, which are extensive.

Löwenheim-Skolem Theorem, anyone?

or how about a map of The Critique of Pure Reason to the Prolegomena to Any Future Metaphysics?
posted by leotrotsky at 5:07 PM on January 3, 2005


My geek chakra glows with white-hot healing light every time Nomic comes up.
posted by cortex at 5:53 PM on January 3, 2005


Also, having had some time to dig into the linked article -- fantastic piece of work. Thanks, painquale!
posted by cortex at 6:10 PM on January 3, 2005


This is fantastic.

My Dad and I used to play a sort of Nomic cut-throat pool, in which you could change or add rules with each turn. Dad did a lot more rule changing than I did, for some reason...

Beyond that, fascinating reading, thanks a bunch for the post, painquale.

And on preview, this game was played a decade and a half before Suber invented Nomic. Even neater. [My Dad was so smart, making us actually think as we played]
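
Something like this, if you picture that turn-by-turn rule changing in code (the names and numbered rules here are totally made up for illustration, not Suber's actual Initial Set):

```python
# A toy sketch of a Nomic-style game: the rules are ordinary data,
# and a legal move on your turn is to add, change, or repeal a rule.

class Rule:
    def __init__(self, number, text, mutable=True):
        self.number = number      # rules are referred to by number
        self.text = text          # plain-English statement of the rule
        self.mutable = mutable    # immutable rules can't be amended directly

class Game:
    def __init__(self):
        self.rules = {
            101: Rule(101, "Players alternate turns.", mutable=False),
            201: Rule(201, "A turn consists of proposing one rule change.", mutable=True),
        }

    def enact(self, number, text):
        """Add a brand-new rule."""
        self.rules[number] = Rule(number, text)

    def amend(self, number, new_text):
        """Change an existing mutable rule -- the heart of self-modification."""
        rule = self.rules[number]
        if not rule.mutable:
            raise ValueError(f"Rule {number} is immutable and can't be amended.")
        rule.text = new_text

    def repeal(self, number):
        """Remove a mutable rule entirely."""
        if not self.rules[number].mutable:
            raise ValueError(f"Rule {number} is immutable and can't be repealed.")
        del self.rules[number]

# Example turn: amend the rule that defines what a turn is, so the game's
# own procedure has just rewritten itself mid-game.
game = Game()
game.amend(201, "A turn consists of proposing one rule change and sinking one ball.")
```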
posted by kamylyon at 8:28 PM on January 3, 2005


Peter was revered at Earlham, not only because he's a freaking genius and a good guy, but because--rumor had it--he juggled on the Johnny Carson show. Huzzah for Peter!
posted by gsh at 9:00 PM on January 3, 2005


Yes, it was plungers. I once asked him about it in my Kant seminar, and he responded, after a disturbingly long pause, "I don't do that anymore."
posted by leotrotsky at 9:28 PM on January 3, 2005


The references to intelligent machines and evolution make me doubt the depth of this guy's insight. Machines don't have psychology. While many people make the mistake of confusing fictional artificial intelligence with something real or realizable, a practicing philosopher should know better.
posted by Osmanthus at 10:03 PM on January 3, 2005


While many people make the mistake of confusing fictional artificial intelligence with something real or realizable, a practicing philosopher should know better.

It's a sarcastic article. EVERYONE knows that psychology comes from the soul.
posted by iamck at 10:11 PM on January 3, 2005


Wow, Peter Suber is one of my favorite philosophers on the web and I had no idea that he did this self-modification stuff. His pages taught me everything I know (which isn't much) about course web page design. I've also heard that he was once a stand-up comedian.

Thanks, painquale. I'm beginning to think that we run in similar circles.

(On preview) Osmanthus, many contemporary philosophers believe psychological states are nothing more than functional states. In that case, not only could very sophisticated machines have psychologies, but a brain is just a wet machine running something like a very sophisticated computer program.
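
If it helps, here's a deliberately crude way to picture a "functional state" -- the state is defined only by what causes it and what it causes, so anything realizing the same input/output profile counts as being in it, neurons or silicon. (The names are illustrative, not anyone's actual theory of mind.)

```python
# A toy illustration of "psychological states as functional states":
# the state is individuated by its causal role, not by what it's made of.

class Agent:
    def __init__(self):
        self.state = "content"

    def perceive(self, stimulus):
        # Transitions: which inputs move the agent between states.
        if stimulus == "tissue damage":
            self.state = "pain"
        elif stimulus == "relief":
            self.state = "content"

    def behave(self):
        # Outputs: what each state disposes the agent to do.
        if self.state == "pain":
            return "withdraw and complain"
        return "carry on"

agent = Agent()
agent.perceive("tissue damage")
print(agent.behave())   # -> "withdraw and complain"
```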
posted by ontic at 10:26 PM on January 3, 2005


While many people make the mistake of confusing fictional artificial intelligence with something real or realizable, a practicing philosopher should know better.

Osmanthus, I think you'll find that the view that artificial intelligence is realizable is quickly becoming an orthodoxy among "practicing philosophers", if it isn't one already. The choice of the word "psychology" was mine, but I doubt Suber would object. (on preview: what ontic said)

Leotrotsky, I hadn't seen that Löwenheim-Skolem Theorem stuff, and it's great! My favorite theorem! And the plunger story is fantastic.

I really wish I could have taken a seminar with this guy. He sounds like an incredible genius. His writings on self-modification have heavily influenced the way I think about epistemology and philosophy of science.
posted by painquale at 10:30 PM on January 3, 2005


Osmanthus: The references to intelligent machines and evolution make me doubt the depth of this guy's insight. Machines don't have psychology. While many people make the mistake of confusing fictional artificial intelligence with something real or realizable, a practicing philosopher should know better.

Perhaps you should actually read the article before questioning "the depth of this guy's insight" based on "references to intelligent machines."

From the article: "No machine is a person today, but let's imagine a day when a suitably programmed machine is a person by any test (except question-begging tests like biological human ancestry). You needn't believe that this day will ever arrive in order to see the point of working out the ethics of deep self-modification for intelligent beings."


For anyone who liked this article, I recommend just about anything by Daniel Dennett, my personal favorite modern philosopher. In particular, check out "Where Am I?", which (IMO) sheds light on what it means to be "me" (while being very entertaining).
posted by Bort at 3:47 PM on January 4, 2005




This thread has been archived and is closed to new comments