Do You Use an AI?

AI is labor.

The proper use of AI is for work.

For example, if you're a programmer, AI can save you tons of time. "Claude Code" is like having an entire programming staff at your disposal.
 
Or it can help make funny memes or videos.
It can take meeting notes.
It can read books and give a synopsis of each.

It can even turn a book into a movie, without actors, producers, and writers all putting their creative license into someone else's creative work and destroying it....except in the case of the Hunger Games series....the books were trash and the movies were better than the books.
 
Have you used an AI to accomplish a task?

If so, which one do you use?

Do you understand the parameters surrounding the answers it gives?

Did you use the information provided to assist you in making your decision?

Have you used one of the prompts published to ask your questions?

I've been testing the various AIs to find equipment: what equipment exists, what options I might need, and what materials might be better.

I can tell you that without a doubt....Google AI is as corrupt as they come. Completely controlled by advertising $$$. DO NOT TRUST IT.

Perplexity seems OK....it gives citations...especially when citations matter.

Grok....relies a LOT on opinion instead of facts....it's always the "internet social media" answer.
Yes, I often ask it questions about code, how best to structure or write certain blocks of code and it is very helpful.
 
This sounds like good use of AI.
Whether you know it or not, you are using AI. The question is, are you being used by it?
If you ask a question in your browser or search engine, you will most likely get a reply from an AI. Questions that begin with "why" or "how" are likely to result in subjective answers.

An AI is a good research tool. It should not be regarded as a problem solver.
 
 
Only when I google for general info; it's right there at the top, and I will do my own research after that. However, I LOVE watching AI interpretations of music.

 
I program AI. I train it to accomplish specific tasks. Without the proper training AI is worthless.

My latest project is a brain-computer interface that reads volitional motor signals from HD-EEG and converts them into specific actions (or commands for actions).

The idea is, you "think about" moving a limb, without actually moving anything. The AI picks up your "intent", and converts it into an action.

It's like a non-invasive version of Neuralink. EEG is noisy and has a very low signal level, that's why we need machine learning to process it.

Training proceeds in three phases:

1. Learning EEG from others
2. Learning EEG from self
3. Reinforcement of actions

The first part uses canned datasets you can download from the internet. The second part requires you to input your own real time EEG into the computer. For the third part you have to work with the AI to tell it when it has a correct or incorrect interpretation.

So far my AI has nearly 90% accuracy in reading volitional motor imagery. The only downside is it takes a long time to train, you have to give it thousands upon thousands of EEG sequences.
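For readers curious what "learning EEG from canned datasets" can look like in practice, here is a minimal, hypothetical sketch of motor-imagery classification by band-power features. It is not the author's pipeline; the synthetic data, band choices, and the simple nearest-centroid classifier are all illustrative stand-ins.

```python
# Hypothetical sketch: classify motor-imagery EEG windows by band power.
# All data here is synthetic; a real pipeline would use recorded EEG.
import numpy as np

rng = np.random.default_rng(0)
FS = 250  # sample rate, Hz (typical for consumer EEG)

def band_power(window, fs, lo, hi):
    """Mean spectral power of each channel in the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(window.shape[-1], 1 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[..., mask].mean(axis=-1)

def features(window):
    # mu (8-12 Hz) and beta (13-30 Hz) power per channel: the bands where
    # motor imagery shows up as event-related (de)synchronization
    return np.concatenate([band_power(window, FS, 8, 12),
                           band_power(window, FS, 13, 30)])

def make_trial(label):
    # synthetic stand-in for a 1-second, 8-channel EEG trial:
    # class 0 carries a strong 10 Hz (mu-band) rhythm, class 1 a weak one
    n_ch, n_samp = 8, FS
    t = np.arange(n_samp) / FS
    noise = rng.normal(0.0, 1.0, (n_ch, n_samp))
    mu = np.sin(2 * np.pi * 10 * t) * (3.0 if label == 0 else 0.5)
    return noise + mu

X = np.array([features(make_trial(lab)) for lab in (0, 1) * 100])
y = np.array([0, 1] * 100)

# nearest-centroid classifier: deliberately trivial, just to show the idea
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
acc = (pred == y).mean()
```

Real EEG is far noisier than this toy signal, which is exactly why the author's point stands: it takes thousands of labeled sequences, plus per-user calibration, to get near 90% on volitional imagery.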
 
I think it largely depends on what kind of information you asked for. Your question can also be refined and fact-checked. You still get an answer, and the answer can still lead to more refining questions. I too have been somewhat miffed by some of the long answers I've gotten, which seem like they don't need to be so long. AI works with human opinions as much as it works with real data, because a lot of the real data out there on the internet is in fact just human opinions.

I don't use AI to think for me and find ultimate answers; I just use it for getting leads that I can continuously follow up on to find more information. That's the way it should be used: as a tool rather than an altar of Truth. Many of the inquiries I've done on AI have produced perfectly usable and verifiable information. I think it's not so much the AI itself that is corrupt as the people who post data and opinions on the internet and label it in such a way that they know how to fool the AI into saying certain things.
 
I have used AI for grammar checking and suggested changes to text.
I used one for a couple of years, until it became so intrusive I deleted the program extension. It was wasting my time with all of its colors, highlighting how everything could be or sound better. And that red dot at the end of a sentence wouldn't let you get to the next sentence without taking advantage of "its" suggestions, etc. It started out better in the beginning but became too annoying to continue.

I want my posts to honestly reflect who I am and what I think, in the sequence that truly represents MY individuality, warts and all, not some program's. I will live with MY typos, sentence structures, and spelling errors, not with an AI program. I have never felt so FREE since deleting AI for grammar, etc. 🪷
 
Do not use it and probably never will. It depends too much on what was programmed into it.

I also avoid it, and detest when it is for some reason forced onto me.

Of course, that is likely because I have worked with computers and earlier forms of AI for over five decades now, and can easily spot the flaws in it; it is simply yet another example of GIGO (garbage in, garbage out).

But stupid people, and those who are unwilling to actually engage their brains and do things like actual research or thinking, all seem to love it.

And in several story-writing groups I am in, there is a huge push against it, as there are some who simply vomit up dozens of AI-created stories, almost all pure crap, every day.
 
I use AI every day, and it’s a game changer in terms of saving both time and money.

AI excels at rewriting drafts, whether it's letters, articles, or even books. This is because it follows established rules of grammar, structure, and composition. However, while AI does an excellent job of refining the language, the quality of the final document depends heavily on the clarity of the original input. If the subject matter or the points being made are unclear, the polished, grammatically correct output may still fail to meet the writer's expectations. Simply put, if the input is unclear or flawed, the output will likely be, too.

When asking AI questions, the quality of the response hinges on how clearly the question is phrased. For example, if you ask, "How do I increase line spacing in Word 2024?" you’re likely to receive a clear and helpful answer. But, if the question lacks a straightforward answer, AI may provide multiple responses from various sources, each offering a different perspective.

AI will always attempt to provide the best possible answer, though this may not always be sufficient. In my experience, I've never encountered an AI that simply says, "I don't know." Instead, it will generate information related to the query, whether or not it directly addresses the question at hand.
 
I program AI. I train it to accomplish specific tasks. Without the proper training AI is worthless.
Awesome! What language do you use? I could be wrong, but I thought AI-controlled prosthetic limbs were already being produced.
 
AI excels at rewriting drafts, whether it's letters, articles, or even books.

Maybe if somebody is only semi-literate. I admit I often shake my head at things I read that are AI generated.

I did briefly use AI to proofread my works, and quickly abandoned it because it kept wanting me to make stupid changes that were absolutely nonsensical.

And AI does not "attempt" anything, it simply does what the algorithm tells it to do. It is not "smart", it is not "intelligent". And I catch mistakes in it all the damned time.

About a month back, I needed to compare US and Canadian gas prices, and see if any site would both convert the currency and give the price of gasoline in Canada in US dollars per gallon. And do you want to know what the AI told me?

That after conversions, the average price of gasoline in Canada in US dollars was $50 a gallon or more.
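For what it's worth, the conversion the AI botched is one line of arithmetic. A sketch with made-up numbers follows; the pump price and exchange rate below are illustrative placeholders, not current data.

```python
# Illustrative unit conversion: CAD per litre -> USD per US gallon.
# Both inputs are hypothetical example values, not real quotes.
LITRES_PER_US_GALLON = 3.78541
cad_per_litre = 1.70   # hypothetical Canadian pump price
usd_per_cad = 0.73     # hypothetical exchange rate

usd_per_gallon = cad_per_litre * LITRES_PER_US_GALLON * usd_per_cad
# roughly $4.70/gal with these inputs -- nowhere near $50, which suggests
# the AI multiplied by the wrong factor somewhere instead of converting
```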

I just asked it for the most advanced bomber, and it told me the B-21. An aircraft that is not even in service yet.

And the most advanced missile systems are owned by Russia and China.

But what I take from what you said is that AI is great for lazy people, people who do not want to think, or people who are easily fooled.
 
I edit two newsletters, one monthly and one quarterly. The articles come mostly from managers and other employees, and most arrive as rough drafts. The key points are usually well articulated, but composition errors often make them disjointed and difficult to read. To address this, I run most articles through an AI tool that refines these rough drafts into polished, professional pieces.

I personally review each article to ensure the tone and key messages are intact. While the AI generally performs well, there are occasional instances where it misinterprets parts of the draft, requiring me to make adjustments. Once the article is finalized, I always send a copy to the writer for review, and occasionally, I receive feedback for corrections.

I’ve received positive reviews from both management and readers. Could I have achieved the same results without the AI? Perhaps, but it would have taken significantly more time, which I simply don’t have.
 
I run most articles through an AI tool that refines these rough drafts into polished, professional pieces.

And I do it manually. That is what a proofreader-editor does.

You are just using a lazy tool to do it for you.
 
I asked AI about the pressure relief valve on an air compressor. Still doesn't work.
 
Do not use it and probably never will. It depends too much on what was programmed into it and what it picks up on the internet, which may be slanted in some way. As far as I can tell, it lacks common sense and the real ability to reason. That being said, if I look up something on the internet and there is an AI answer, I may peruse it, then continue with my own search.
I have been using it for reviews of products I am interested in purchasing, never for grammar/spelling corrections, etc. I am liking it for reviews, but I also do independent research if I perceive a political bent. So far, I have been pleased with AI for reviews of the things I have searched, but I still go elsewhere if my instincts tell me to. I like the options. 🪷
 
I just asked it for the most advanced bomber, and it told me the B-21. An aircraft that is not even in service yet.
If you ask an AI a question such as "which is the best bomber," you are likely to get several answers: the one with the best range, the best targeting, the best stealth, etc. AIs are not good at critical thinking, which is exactly what is needed to answer your question.

AIs lack the nuanced understanding, contextual awareness, emotional intelligence, and ethical reasoning of humans; instead they rely on patterns in data and predictive logic to generate responses.

The parents of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI and CEO Sam Altman in August 2025, alleging that ChatGPT validated and encouraged their son's suicidal ideation. This is what can happen when an AI attempts critical thinking: Adam presented his case for suicide, the AI found his case sound, and it agreed with him. The AI then encouraged, and thus assisted in, their son's suicide. This is why most AIs will not attempt critical thinking. Some day they probably will.
 
You are just using a lazy tool to do it for you.
I'm using a tool that will save hundreds of hours and produce a product that is well accepted. The AI does more than just proofread: it will reorganize text in order to improve the readability of a piece.

Here's what the above paragraph looks like after my AI polishes it:

I'm using a tool that will save me hundreds of hours while producing a product that's well-received. The AI does more than just proofread—it reorganizes the text to enhance readability and overall flow.

The second paragraph flows a bit better and uses a few fewer words while maintaining the point of the text.
 
I can tell you that without a doubt....Google AI is as corrupt as they come. Completely controlled by advertising $$$. DO NOT TRUST IT.
I have gotten inaccurate answers from ChatGPT and promptly challenged it with correct information to see how it would react. It’s funny. It usually says, “you’re right” and then “explains” how it presumed that I was addressing a different point.

So, I agree. It’s about as reliable as Wiki. And that’s not a good thing. But both can serve as a point of departure for deeper research.

It’s fun to seek out the primary sources and material and to find out if somebody has chosen to take something said out of context or to otherwise distort it.
 
