I Respectfully Request that the Mods and Admins Strongly Consider Rules Regarding AI in Posts

I just post that which comes from my own brain.


It makes everything so much easier.
Which is from information you gathered from articles, videos, and other people's opinions over the years, stored in your brain and spit out into the forum.
 
First rule I'd recommend: No direct cutting and pasting of AI results.

I'll give my reasons:

1) AI is quickly becoming a substitute for thinking. This is a forum for thinking people to exchange ideas, with the rules against shouting others down or interrupting them enforced automatically by the turn-taking, asymmetrical format. I doubt that people are using bots to find AI responses and post them, but is it much better if they go to Google AI or Copilot (which should be named "Autopilot") and copy-and-paste?

2) Lengthy cut-and-pastes of AI answers are very boring. They slow down debate, or stop it entirely when the other person refuses to debate an AI. That defeats the purpose of this forum.

3) AI cut-and-pastes are presented as if self-evident. They aren't; they come from articles on the internet. Google AI provides links to that material. So why not ask the AI the question, then click through and cite the underlying link as the source? We can then accept or refute the source and debate its validity.

4) AI companies are under fire for using copyrighted material to "train AI," which many believe amounts to reselling the work of others without compensation or credit. I doubt that a forum like this would ever be implicated for allowing AI cut-and-pastes, but the ethical consideration is really no different. The simple act of clicking the link in the AI answer and quoting its source takes that out of the equation.

I've been guilty of using AI quotes as a shorthand way of making a point. But no more, because it reduces the quality of my posts, IMHO. I believe the forum would be better off without it.

I basically agree. AI is being used as a source or a cite, and it's just not one. Like you, if I'm lazy, I'll type something into OpenAI or whatever to see what it says, but I usually like to confirm it in Wikipedia or some other source before actually posting, and the thoughts are usually my own.

AI isn't a source; it's using extremely large volumes of data to predict what it's supposed to say based on your query, and AI is often wrong. Maybe not grossly so, but inaccurate enough to matter.
 
You read articles, don't you?
Watch news videos?
Read other opinions?
AI is a compilation of these things.

It is, but AI often generates inaccurate information. It's not just searching for you; it's analyzing and then generating content based on what data it finds, based on whatever it is you write. It could generate different output if you change your query even slightly.
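To illustrate the point being made here — that this kind of system statistically predicts "what it's supposed to say" from its training data, with no notion of truth, and that a slightly different query can produce a different answer — here's a deliberately tiny toy sketch. It's a word-pair (bigram) table, nothing like a real large language model, and the corpus sentences are made up for the example:

```python
import random

# Toy illustration only (NOT how a real LLM is built): a bigram model
# that continues a prompt purely from word-following statistics in a
# made-up corpus, with no idea whether the output is true.
corpus = ("the revolver was made in 1955 . "
          "the revolver was discontinued in 1957 . "
          "the model was made in 1955 .").split()

# Count which word tends to follow which.
table = {}
for a, b in zip(corpus, corpus[1:]):
    table.setdefault(a, []).append(b)

def generate(prompt, n=8, seed=0):
    """Extend the prompt by sampling likely next words from the table."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(n):
        followers = table.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

# A slightly different query yields different (and possibly wrong) output.
print(generate("the revolver"))
print(generate("the model"))
```

The output always *looks* fluent, because every word pair occurred somewhere in the data, but whether the dates and model names it strings together are correct is pure chance — which is the complaint about the revolver search above, scaled down.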
 
I don't think that most folks even realize that they use AI now.

Type in a question and the first thing that comes up is AI recommendation links.

Now you have to parse through that crap and find what suits you.

I bought a fairly short-lived (production-wise) revolver the other day. Typed in the manufacturer/name, and AI kicked in and gave me a bunch of BS on it. Not even the correct model.

Sigh, I wish I was savvy enough to just turn it off altogether.
 
Like CNN? Or in your case, Fox News?

It's completely different, but I'm not going to explain it to you because you're clearly convinced that AI is 'intelligent'. It's not. It basically works like Google Translate, which is also mostly accurate but consistently inaccurate in places, and in some instances, depending on what information you're trying to find, it can be very, very inaccurate. It's a trained algorithm. It's not a source. It's not intelligent. It can't think.

Your posts are consistently low-quality, and it's now obvious to me why that is. You don't write them yourself or even think about whether they're accurate. What's worrisome, though, is that there are probably millions of people out there who believe what you believe about AI - and they vote and serve on juries. That's a sobering thought.
 
First rule I'd recommend: No direct cutting and pasting of AI results.
The AI summary is on by default in Chrome and appears above the resulting links. It just summarises what it finds on the net.
 
So I asked ChatGPT what's the best way to deal with old farts on a message board who don't want to use this newfangled technology we're calling A.I.
Okay, that's funny.

As to the rest, would you really expect AI to give you an unbiased evaluation of the value of AI?
If someone in a discussion forum refuses to accept or engage with ChatGPT-generated responses, here are some effective and respectful ways to respond, depending on the context and tone of the conversation:

✅ Why it works: It shifts the conversation to substance rather than origin, which is the point of most discussions.

✅ Why it works: It shows you're not blindly copying, but using AI as a tool—just like using a calculator, search engine, or grammar checker.

✅ Why it works: Some critics assume AI = unreliable. This opens the door to collaborative scrutiny rather than dismissal.

✅ Why it works: It calls out possible double standards in tool usage without sounding defensive.

If the forum has specific rules against AI-generated content, then it's best to respect that.

🚫 What to Avoid:

  • Don't argue that ChatGPT is "always right"—it's not.
  • Avoid copy-pasting long AI responses without adding your own voice.
  • Don't mock or dismiss people who are skeptical—many have valid concerns about reliability or authenticity.
 
It's completely different, but I'm not going to explain it to you because you're clearly convinced that AI is 'intelligent'. It's not
Your ASSumptions are WRONG. Imputing motive is never a good idea, especially from someone who bases his whole argument on "AI is evil because it can be inaccurate / I disagree with what it spits out because I view everything from a left-wing perspective." 🤣
 
No, Carl.

AI is programmed by humans who are motivated by money.
Do you still have your buggy whip?
Is CNN motivated by money?
Fox News?
Your local news station?

You have a fear of new tools/technology. Learn to use the technology. You're never going back. Adapt and change, or join an Amish community, Seymore.
 
You have a fear of new tools/technology.
I asked Hal about ChatGPT.

Here's why you shouldn't blindly trust ChatGPT as a definitive source:

Potential for inaccuracy and fabrication:

ChatGPT, while adept at generating human-like text, doesn't inherently understand the information it presents as a human would. It can sometimes confidently generate factually incorrect information, fabricate citations, or misrepresent information, a phenomenon referred to as "hallucination".

Training data limitations and biases:

ChatGPT's responses are based on the vast amount of data it was trained on, which is not always completely current (typically up to 2021 for many models). This means it may not be accurate regarding recent events or developments. Additionally, if the training data contains biases or inaccuracies, the AI may reproduce them, leading to potentially skewed or unfair representations.

Lack of transparency and difficulty in verification:

ChatGPT does not reveal the sources of its information, making it challenging to verify the accuracy of its responses. Unlike search engines that present diverse sources, ChatGPT acts as a solitary source, limiting the user's ability to cross-reference or fact-check.

Using ChatGPT responsibly
To leverage ChatGPT's benefits while mitigating its limitations:

Fact-check everything:

Always verify information generated by ChatGPT using credible and reliable sources, especially for critical topics like medical, legal, or financial advice.
 
Is CNN motivated by money?
Fox News?
Your local news station?
Yes, of course they are. All (including AI) are run by humans, not some omniscient beings with no selfish motivations.
Do you still have your buggy whip?
I still have a human wife. God forbid something happens to her, I will seek another human woman.
You have a fear of new tools/technology. Learn to use the technology. You're never going back. Adapt and change, or join an Amish community, Seymore.

You enjoy your "AI companion" though, and feel smug that you use the latest technology, while I use the prehistoric method of human interaction.
 
