[Sigia-l] IA and the MS Office Paperclip

James Melzer jamesmelzer at gmail.com
Fri Dec 3 09:51:46 EST 2004


Other folks have mentioned games a lot recently on this list, and
there is a game relevant to this discussion - Peter Molyneux's Black
and White.

The premise is that you are a god whose actions on earth are
manifested through an avatar (a huge animal) that you train to do
your bidding.
The avatar is an agent. It acts like a pet. You train it to feed
itself, to poo in appropriate places, and so forth. Or not. You can
train it to do pretty much anything you want. Picking up villagers and
throwing them far out into the ocean, for instance, is a teachable
behavior (if you are the vengeful sort of god). So is dancing in the
village square (the villagers love it). The point is that the avatar
has relatively few innate behaviors - most of what it does, it does
because it was trained to. And if it does stuff you don't want, you
smack it on the nose with a newspaper and it stops doing it. Learned
behaviors.
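
In interaction design terms, that training loop is just explicit
feedback nudging how likely each behavior is to be chosen again. Here
is a minimal Python sketch of the pattern, using a made-up Avatar
class with per-behavior weights - an illustration of the idea, not a
description of how Black and White is actually implemented:

    import random

    class Avatar:
        """Toy agent that learns which behaviors to favor from
        explicit feedback. Purely hypothetical."""

        def __init__(self, behaviors):
            # Every behavior starts out equally likely.
            self.weights = {b: 1.0 for b in behaviors}

        def act(self):
            # Pick a behavior with probability proportional to its
            # learned weight.
            names = list(self.weights)
            return random.choices(
                names, weights=[self.weights[n] for n in names])[0]

        def reward(self, behavior):
            # Praise strengthens the behavior.
            self.weights[behavior] *= 1.5

        def punish(self, behavior):
            # The rolled-up newspaper weakens it (but never to zero).
            self.weights[behavior] = max(
                0.05, self.weights[behavior] * 0.5)

    avatar = Avatar(["feed self", "dance in square", "throw villager"])
    for _ in range(50):
        b = avatar.act()
        if b == "throw villager":
            avatar.punish(b)  # a vengeful god would reward this instead
        else:
            avatar.reward(b)
    print(avatar.weights)

Because feedback only shifts probabilities, the avatar can still
surprise you now and then - which is roughly what makes it feel like
a pet rather than a macro.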

Why doesn't @#$%& Clippy do this? Heck, why doesn't my whole OS do
this? I don't have a good answer. But it would be cool. If I connect
to two different networks repeatedly, why not learn those settings? If
I close a window immediately every time it is opened, why not stop
trying to open it? If I never register my software, why keep asking?
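
All three of those could fall out of the same dirt-simple rule: count
how often the user ignores or immediately undoes something, and stop
doing it once the count passes a threshold. A toy sketch, with a
hypothetical PromptLearner class - nothing like this exists in Office
or in any OS API, it is just the shape of the idea:

    class PromptLearner:
        """Stops showing prompts the user keeps dismissing right away.
        Hypothetical sketch, not a real assistant API."""

        def __init__(self, threshold=3, quick_close_seconds=2.0):
            self.threshold = threshold
            self.quick_close_seconds = quick_close_seconds
            # prompt name -> consecutive immediate dismissals
            self.quick_closes = {}

        def should_show(self, prompt):
            # Keep showing a prompt only until it has been swatted
            # away `threshold` times in a row.
            return self.quick_closes.get(prompt, 0) < self.threshold

        def record_close(self, prompt, seconds_open):
            if seconds_open < self.quick_close_seconds:
                self.quick_closes[prompt] = (
                    self.quick_closes.get(prompt, 0) + 1)
            else:
                # The user actually engaged with it; start over.
                self.quick_closes[prompt] = 0

    learner = PromptLearner()
    for _ in range(3):
        if learner.should_show("register reminder"):
            learner.record_close("register reminder", seconds_open=0.5)
    print(learner.should_show("register reminder"))  # False - it quits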

http://www2.bwgame.com/

~ James Melzer

On Fri, 3 Dec 2004 08:20:30 -0500, Dan Saffer <dan at odannyboy.com> wrote:
> The moral of the Clippy story is that information poorly presented can
> be worse than no information at all.
> 
> There is a need for help content that is presented at the time help is
> needed, but the tone and character (ethos) of that help, especially
> when delivered by some sort of "smart" agent is crucial. Clippy at
> least shows us some things not to do.
> 
> Long aside: I once heard a talk by Caroline Miller, a professor at
> North Carolina State, on Ethos in HCI. She talked about agents being
> rhetorically different than "expert" systems in that users have a
> relationship with them. They need to focus on the establishment of
> trust and so explain their decisions and make those explanations
> credible. They have to be social and adaptable, communicating through
> elaborate interfaces, and they must have an ethos that offers empathy.
> Intelligent agents should be alive with pathos, not logos, winning
> favor and always looking for a response. They need to be friendly,
> familiar, and sympathetic. And, oddly enough, they should seek sympathy
> as well. Professor Miller called this "cyborg discourse" and it
> requires technique and strategy to design.
> 
> Dan
> 
> 
> Dan Saffer
> M. Design Candidate, Interaction Design
> Carnegie Mellon University
> http://www.odannyboy.com


