Are you responding to Turing Machine computers in an internet psychology experiment?

numan

What! Me Worry?
Mar 23, 2013
'
Are you responding to Turing Machine computers in an internet psychology experiment on this forum?

I wish to set forth the hypothesis that some of the postings on this site are not from human beings at all, but are part of an internet psychology experiment to see if people can be fooled into thinking that they are communicating with human beings, when, in fact, they are responding to computer-generated postings which are attempting to imitate human language and thought.

I remember back in the late 1950's and early 60's, psychologists were already performing experiments with human subjects typing messages back-and-forth with what they were told were other people, but were, in fact, primitive computers provided with a set of stock responses and a few rules of grammar -- which were mechanically applied to the typed messages they received from the human subjects of the experiment. The psychologists wanted to see how long it would take before the human subjects realized that they were not talking to another human being.
Some of the subjects never twigged to the fact that they were responding to a machine.

Fast forward to today. Computers are now far more complex, contain far more data, and can cope with grammar which is almost sufficient to translate one language into another.

I am sure that similar experiments are being performed today, though, interestingly, you don't read as much about them today as you did back then (perhaps the experimenters don't want to spoil the naïveté of the test subjects?).

Computer programs which can fool people into thinking that they are human are said to pass the "Turing test" -- such programs are often loosely called "Turing machines" -- and if you have never heard of the idea, you are very much out-of-date, and you should Google the term.

Turing machines can pick up on key phrases which you write to them, and then use your input along with stereotyped response patterns and some grammatical changes to fool you into thinking that you are having a conversation with an intelligent consciousness (well -- at least sort of intelligent!).
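For what it's worth, the keyword-and-template trick described above is easy to sketch. The rules and canned phrases below are invented purely for illustration -- no claim that any actual forum bot works this way:

```python
import re

# Invented keyword rules: each pairs a pattern to look for in the
# incoming post with a stock reply template. Captured text from the
# post can be echoed back to feign comprehension.
RULES = [
    (r"\bI (?:think|believe) (.+)", "Why do you believe {0}?"),
    (r"\bclimate\b", "The climate data do not support that."),
    (r"\byou\b", "We were discussing you, not me."),
]
DEFAULT = "Interesting. Please go on."

def reply(post: str) -> str:
    """Return the first matching stock response, or a filler line."""
    for pattern, template in RULES:
        m = re.search(pattern, post, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return DEFAULT
```

A handful of such rules, applied mechanically, is enough to sustain the illusion of a conversation for a surprisingly long time -- which is essentially what the early experiments exploited.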

I have found, particularly on the "AGW: atmospheric physics" thread in the Environment Forum, that there are a few "posters" who consistently avoid, ignore and distract from the topic of the thread and respond with very stereotyped phrases which, I think, a reasonably sophisticated computer could manage to compose, based on the input which it received.

I am sure that I and others have occasionally wondered if particularly annoying and irrelevant trolls are, in reality, functionaries in some computer cubicle in a sub-sub-basement of CIA Headquarters, charged with disseminating misinformation and psychological discouragement to distract from politically sensitive topics or from information which could be inconvenient to economic interests. But, realistically, except as experiments, such activity would require far too many go-fers and far too much expense.

But computerizing the whole procedure, that is quite another matter!! Once the fiendish experimenters had the whole computer array up-and-running, all future expense would be minimal -- and as a bonus, it could use the vast mass of data which it received from its hapless human victims to refine its algorithms and make the responses ever more appropriate and sophisticated.

I do hope that those who read this posting, and my other postings, realize that I, at least, could not possibly be one of those fiendish Turing machines, since the complexity and correctness of my grammar, the erudition of my thought and writing, and, especially, the subtlety of my humor and irony, are far beyond the capabilities of any mere machine.
.
 
V. good! "... Intelligences vast, cool, unsympathetic."

&, of course, isn't that last para what a Turing would be programmed to "say"?
 
It's easy to tell if you are communicating with a good ole boy. Harder if it's a stiff assed pretend Brit.
It would be child's play for a chatterbot to produce the sentences above, which are supposedly from an Appalachian hillbilly.
.
 
There have been a few posters that I've thought could have been bots.

Computers can't simulate emotion very well - and the vast majority of posters on this board, from both sides of the aisle, show emotional reactions to events or to other posts.

Composing unrelated posts wouldn't be hard for a computer to do - but simulating an emotional reaction to another's post? Not so easy.
 
'
Come, come!!

Nothing easier than to have a file of insults and pejorative expressions -- and some grammatical rules for shuffling them about in the context of the posting they are replying to!!
.
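A toy version of that "file of insults" idea is simple to sketch; the openers, templates, and the lifted-quote trick below are all invented for illustration:

```python
import random

# Invented canned pejoratives, with a slot for a phrase lifted
# verbatim from the post being answered.
OPENERS = ["Oh, please.", "Come, come!!", "Nonsense."]
TEMPLATES = [
    "Only a fool would claim that {quote}.",
    "So now {quote}, is it? How convenient.",
    "{quote} -- says who, exactly?",
]

def heckle(post, rng=None):
    """Shuffle a stock opener and template around a snippet of the target post."""
    rng = rng or random.Random()
    # Lift the first few words of the target post as the "quote".
    quote = " ".join(post.split()[:6])
    return f"{rng.choice(OPENERS)} {rng.choice(TEMPLATES).format(quote=quote)}"
```

Because the snippet is echoed from the victim's own post, the output always looks at least superficially on-topic.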
 
'
Come, come!!

Nothing easier than to have a file of insults and pejorative expressions -- and some grammatical rules for shuffling them about in the context of the posting they are replying to!!
.

That would be easy.

But the majority of posters here - even the ones that only post personal attacks - show signs of emotion behind their posts.

Computers can't simulate anger.
 
'
How my program passed the Turing Test

This chatbot succeeded due to profanity, relentless aggression, prurient queries about the user, and implying that they were a liar when they responded. The element of surprise was also crucial. Most chatbots exist in an environment where people expect to find some bots among the humans. Not this one. What was also novel was the online element. This was certainly one of the first AI programs online.

Oh, sure! Tell me another! As if the CIA and the U.S. government Departments of Brainwashing haven't been working on Turing machines and chatterbot internet programmes for decades now!
.
 
'
History of Turing test experiments

In both the wartime report on cryptography and the postwar account of information transmission and measurement Shannon developed rigorous computational formulas for tracking, codifying, and predicting the patterns of natural human language. Drawing on the antagonistic game theory of John von Neumann and Oskar Morgenstern, he showed how Markov Processes accounting for randomness and order could be used to recognize, mask, and extract patterns from enciphered communications (Shannon, 1945, p. 90; Price, 1984, p. 126). This suggested methods whereby machines could learn with semi-autonomy and intelligence to anticipate the likely patterns completing data lost during a noisy transmission. In the case of human–computer interaction, these processes would allow agent-machines to recognize, learn and ultimately reproduce the patterns of human partners.

Despite this proliferation of approaches, research into autonomous conversational agents found strong material and methodological supports in the Turing-Shannon cryptographic tradition. Allying sound technical strategies, spectacular fascination, and philosophical intrigue, carrying in its wake lengthy scholarly literatures and debates, the crypto-intelligent paradigm offered standards, methods, and challenges to computational linguists. The Turing test, experiments in deception, crypto-intelligent duels, and differential evaluations of human–machine intelligence animated early research, and continue to play a strong role today in designers’ and users’ approach to autonomous conversational agents.

A broader review of chat bot logs suggests that many autonomous agents are saddled by the legacy of crypto-intelligent conflict and abuse. This history frustrates attempts at resituating agents — be they human or machine — as non-abusive collaborators. Autonomous agents remain constrained by the history of crypto-intelligent testing and interrogation. Within the (perhaps perverse) logic of that history, abusive practice, as a tactic of “throwing off your opponent,” becomes a premium, rather than a failure.
[emphases added]
.
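The Markov-process idea the excerpt describes -- learning the statistical patterns of language well enough to predict likely continuations -- can be sketched as a toy bigram model. The training text below is invented for illustration:

```python
from collections import defaultdict, Counter

def train(text):
    """Count, for each word, which words follow it in the training text."""
    follows = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def predict(follows, word):
    """Return the most frequent successor seen in training, or None if unseen."""
    nxt = follows.get(word)
    return nxt.most_common(1)[0][0] if nxt else None
```

Chaining such predictions produces plausible-looking text from nothing but observed word-to-word frequencies, which is the kernel of the pattern-completion methods the excerpt attributes to Shannon's work.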
 
