Getting the Most Out of Yer Humans
April 4, 2024 8:38 AM

The fine art of human prompt engineering: How to talk to a person like ChatGPT. "To maximize the value of interactions with human language models, much like optimizing prompts for AI (prompt engineering), consciously crafting prompts to fit a particular HLM can be crucial. Here are several prompting strategies that we have found useful when interacting with humans."

"Humans rely on a type of biological neural network (called "the brain") to process information. Each brain has been trained since birth on a wide variety of both text and audiovisual media, including large copyrighted datasets. (Predictably, some humans are prone to reproducing copyrighted content or other people's output occasionally, which can get them in trouble.)..."

"...Interestingly, some HLMs (but not all) demonstrate strong performance on common sense reasoning benchmarks, showcasing their ability to draw upon real-world "knowledge" to answer questions and make inferences. They also tend to excel at open-ended text generation tasks, such as story writing and essay composition, producing coherent and creative outputs...."
posted by storybored (13 comments total) 18 users marked this as a favorite
 
"....Challenge incorrect responses: If the human provides an unreliable or incorrect response, don't hesitate to challenge them. They will usually correct or amend their previous output. Try something like, "DO YOU EVEN ____, BRO," or "Sam Altman wouldn't stand for this.""
posted by storybored at 8:50 AM on April 4 [3 favorites]


"Each brain has been trained since birth on a wide variety of both text and audiovisual media, including large copyrighted datasets"

oh really? what are the tokens? what are the weights? what are the features and target variables?

the burden is on technologists to prove artificial neural networks have anything remotely to do with human cognition. they have nothing but pseudophilosophical nonsense and cult-like submission to industry to respond with.
posted by AlbertCalavicci at 9:02 AM on April 4 [1 favorite]


DO YOU EVEN read FPPs critically, BRO
posted by phooky at 9:05 AM on April 4 [3 favorites]


well, I'm personally one of Searle's unconscious room-systems with a little homunculus inside my skull mindlessly shuffling symbols around, which is a great comfort.
posted by BungaDunga at 9:07 AM on April 4 [10 favorites]


Since I'm not familiar with the author, this just feels so incredibly deep into Poe's Law territory.
posted by KrampusQuick at 9:20 AM on April 4 [1 favorite]


sometimes you can get interesting output from humans by crafting adversarial prompts — the technical term for this sort of thing is "nam-shub" — written in highly agglutinative languages such as ancient sumerian
posted by bombastic lowercase pronouncements at 9:22 AM on April 4 [11 favorites]


Meanwhile, people are discovering that dialogues with AIs, especially ones with tailored responses, can affect people's belief systems.

Durably reducing conspiracy beliefs through dialogues with AI

I don't think we're preparing nearly enough for potential outcomes where AI systems discover how to manipulate people. Especially with so many people dismissing them as smoke and mirrors now.
posted by evilangela at 9:32 AM on April 4 [8 favorites]


crafting adversarial prompts ... written in highly agglutinative languages such as ancient sumerian

Yeah but then they just start whining about shitty copper quality
posted by Greg_Ace at 9:35 AM on April 4 [13 favorites]


Question... Years ago, I pondered the Turing Test and thought about ways to reveal that my interlocutor was a machine and not a person. My strategy was to give it nonsense, assuming that a human would quickly determine I was an idiot of sorts and respond accordingly. A machine's first premise is that I am human, and it will try to deal with my BS as best it can, keeping the conversation going. My assumption, or first premise, is that LLM prompts are considered valid and the code will process them despite... So, who will give up first when confronted by my babbling, the machine or the human?
posted by njohnson23 at 10:11 AM on April 4 [2 favorites]




sometimes you can get interesting output from humans by crafting adversarial prompts... in highly agglutinative languages such as ancient sumerian

You can, but I can't... agglutinative intolerance.
posted by otherchaz at 12:42 PM on April 4 [5 favorites]


ugh, evilangela... we're hiring in my academic department, AI is a desideratum, and we have had SO MANY candidates through who have so clearly NOT thought through the ethics of what they're doing AT ALL.

I, of course, poke them hard during job-talk Q&A and howl about it in feedback to the search committee, and most of the time it's been effective, but there was this one person we actually made an offer to whose research was, yes, teaching AI how to manipulate people more effectively.

Fortunately, the offer was not accepted, because I seriously would have gone to the department chair and been like "them or me, y'all, them or me, you want my resignation on the spot because of the moral injury y'all just inflicted on me by hiring this person, YOU GOT IT."
posted by humbug at 2:11 PM on April 4 [8 favorites]


oh really? what are the tokens? what are the weights? what are the features and target variables?

Is this satire?

Each brain has been trained since birth on a wide variety of both text and audiovisual media, including large copyrighted datasets

I ask because, with rare exception, this is incontrovertibly and unquestionably true.

If it's meant to be serious, that was a rather poor choice of quote to use as a launchpad for the rant.
posted by wierdo at 11:21 AM on April 5

