Flopper
Diamond Member
In the attached link the author attempts to describe some of the most common dangers the public faces in the use of AI's. Is legislation needed to protect the public?
The Misrepresentation:
You have a problem with your computer or with some software you purchased, as I did. So you go to the customer or technical support tab on the company's website. There you will probably find some frequently asked questions and answers, along with links to videos showing how to install and use the product.
Then you will see a button for customer or technical support. When you click the button, you are transferred to a website run by an online support organization that the company has contracted to provide technical support. This organization is not really going to give you any real support. What it is going to do is collect information about you and your problem under the guise that it is needed to route you to the right place. The AI you are chatting with is programmed to sense when you are getting impatient and likely to break the connection. As long as you keep supplying information, the AI will keep asking for more and more. And when you start screaming at the AI, demanding to speak to a real support person, the AI says something like, "While transferring you, please read and agree to our policy." A few minutes later the AI comes back and tells you there will be a $1 fully refundable charge and that you need to supply a credit card. It asks whether you have read the financial and privacy policy. When you say yes and type in your credit card information, three things happen: your credit card is billed $45, as described in the documents you just agreed to; the entire transcript of the chat is sold to a data miner, who will resell your personal information and the other data you supplied; and you are either transferred to the company's technical support line or told to call back later.
The above scenario happened to me about six weeks ago. I contacted my state office of consumer affairs and was told that what the organization did was perfectly legal and there was nothing they could do.
www.ibm.com
10 AI dangers and risks and how to manage them | IBM
A closer look at 10 dangers of artificial intelligence and actionable risk management strategies to consider today.