Lord Long Rod
Diamond Member
- Jan 17, 2023
- 7,706
- 8,160
- 2,138
- Banned
- #1
Well, I guess it is time to do all those things on my bucket list, because this is certainly not going to end well. Essentially, this act of stupidity is the Biden handlers' way of saying that they are not going to do anything to address the ethics of our evolving AI. They are content to just let happen whatever is going to happen.
I had a conversation with my teenage son recently about this. Of course, he is tech savvy and thinks I only understand stone-age principles of technology, like levers and wheels and such. He assured me that the AI will only work within the parameters we establish for it, and therefore the idea of the machines gaining intelligence and declaring war on the human race is complete hogwash. I think he even laughed at me under his breath. I countered that (1) there is a certain proportion of rotten human beings on the face of the earth, (2) there is nothing exempting highly intelligent people from being rotten, and (3) if a truly rotten person composes the enabling code for AI, then HE defines the parameters. There is nothing stopping some fat incel pig who is mad at the world from writing some malevolent code, hacking some key infrastructure (e.g., utilities, defense, etc.), and inserting said malicious AI code into the existing AI code. My son seemed to reluctantly agree that the possibility may be genuine.
Moreover, technology generally is aimed at making things work better and more efficiently. Integration is key to this. But will there be effective firewalls put into place to compartmentalize otherwise integrated systems for security purposes? One would think so. But we know that absolutely NOTHING is immune from hacking/cracking operations. Therefore, there will always be exposure to risk. As the automated apparatus grows, the consequence of a breach in security mounts. The only sure way to protect a system is to unplug it from the integration. But this is the absolute opposite of the direction big tech has been headed for the past 50 years.
My point is that the risks posed by AI, an admittedly intriguing concept, shall be subject to the individual ethics and moralities of the people developing, controlling, and maintaining it. Personally, I have found highly intelligent tech people to generally not be to my liking. 9 out of 10 of them are creepy and socially awkward. Many of them don't bathe regularly. They read and watch a lot of fantasy-based bullshit and play video games in which they are effectively a god (i.e., they want to be a god). They don't get their dicks wet in appropriate ways, and so on. They possess great knowledge and skill, but they need to be kept under tight control if we are going to grant them the ability to code our futures. These are not the humans we want establishing what our futures look like (unless you want your future to look like World of Warcraft, and I don't).
hotair.com

God help us. Veep to tackle AI challenge
