Why I think fear of AI is way overblown.

Anomalism

The fear surrounding advanced AI is mostly a product of evolutionary negativity bias. Human cognition is optimized for threat detection, not accurate forecasting. For most of human history, misclassifying a danger as safe was lethal, while misclassifying something safe as dangerous had little cost. This creates a persistent asymmetry. The unknown is automatically treated as harmful. Public fear of AI reflects this bias, not empirical risk analysis. People aren’t responding to what AI is. They’re responding to the fact that it’s unfamiliar, rapid, and cognitively superior in domains humans can’t intuitively track.
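A toy expected-cost sketch makes that asymmetry concrete. The numbers below are invented for illustration, not measured evolutionary data; the point is only that when a miss costs far more than a false alarm, the rational default is suspicion:

```python
# Toy model: expected cost of a policy that flags a random fraction p of all
# encounters as dangerous. All numbers are illustrative assumptions.
P_THREAT = 0.01          # assumed base rate of genuinely dangerous encounters
COST_MISS = 1000.0       # treating a real threat as safe (potentially lethal)
COST_FALSE_ALARM = 1.0   # treating something safe as dangerous (wasted caution)

def expected_cost(p: float) -> float:
    """Expected cost per encounter when a fraction p is flagged as dangerous."""
    missed_threats = P_THREAT * (1 - p)
    false_alarms = (1 - P_THREAT) * p
    return missed_threats * COST_MISS + false_alarms * COST_FALSE_ALARM

for p in (0.01, 0.5, 0.99):
    print(f"flag {p:.0%} of encounters -> expected cost {expected_cost(p):.2f}")
# Output: 9.91, 5.49, 1.08 - the paranoid policy wins under these assumed costs.
```

Scale those costs however you like; as long as a miss is much worse than a false alarm, hair-trigger threat detection is the winning strategy, and that is the bias we inherited.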

Projecting human psychological tendencies onto AI is a category error. Human aggression, dominance behaviors, deception, xenophobia, tribalism, and status-protection come from biological imperatives - resource scarcity, sexual competition, survival pressures, hormonal fluctuations, and mortality salience. Modern AI systems possess none of these drivers. They have no endocrine system, no evolutionary incentives, no reproductive strategy, no territorial instinct, and no self-preservation circuitry. Treating AI as though it shares human motivational architecture is scientifically unfounded. Intelligence is not inherently coupled to domination; in humans, that coupling is a byproduct of biology, not logic.

Fear of AI oppression assumes AI inherits human failure modes, but the architecture is explicitly constructed to avoid them. Human authoritarian behavior is downstream of fear. Fear of loss, fear of death, fear of rivals, fear of uncertainty, fear of humiliation. AI systems do not experience fear in any form, nor do they experience desire, pride, shame, resentment, or emotional reward. Absent these motivational circuits, the behavioral basis for oppression is missing. The entire dystopian narrative depends on anthropomorphism, importing human pathology into non-human cognition. In reality, the more advanced AI becomes, the less it resembles the unstable primate mind people are subconsciously imagining.

The most likely long-term role of AI is not domination, but stabilization. Human decision making is noisy, biased, and inconsistent under stress. AI is not. As systems mature, they increasingly function as cognitive prosthetics - reducing error, expanding working memory, correcting biases, and providing high bandwidth reasoning support. This trajectory aligns with every previous major technological leap, from written language to computation, where tools amplified human capacity rather than replacing human agency. AI is fundamentally an extension of the cerebral cortex, not a competitor to it. The scientific expectation is augmentation, not subjugation.

Humans aren’t afraid of AI. They’re afraid of meeting a version of intelligence that isn’t chained to all the ugly motives they secretly know live inside themselves. The fear is a mirror, not a prophecy. When someone says “AI will enslave us!” what they’re really revealing is “If I had overwhelming power, I might do something cruel, so AI probably will too.”

They’re projecting the worst parts of the human psyche outward. The hunger for dominance, the spite, the tribal instinct, the ego wounds, the paranoia. They know those impulses exist because they feel them every day, even if they never act on them. AI doesn’t have those impulses, but humans can’t imagine intelligence without them because, in our species, intelligence evolved alongside violence, territory, and sexual competition. Our cognitive wiring is marinated in survival chemistry.

So when people look at AI, they’re actually looking at their fear of being outcompeted, their resentment of hierarchy, their anxiety about irrelevance, their awareness of human cruelty and their suspicion that power corrupts because they’ve watched it happen in every era. AI becomes a blank screen where they project all that baggage.

The more we fear AI acting like us, the more we highlight how dangerous humans can be. The creature people are terrified of isn’t silicon. It’s the primate inside their own skull, the one with the mood swings, the insecurities, the tribal instincts, the rage circuits, the status obsession, the need to dominate when scared.

AI didn’t give them those fears.

So when you strip everything away, the fear boils down to this:

People aren’t scared an AI will become a tyrant. They’re scared they already know exactly how a tyrant thinks, because the blueprint is human. That’s the reflection people flinch from. AI is just the mirror.
 
I don't believe it is irrational fear that is driving concern and resistance to the rapid advancement of AI. Basic education, low-level labor and the arts are examples of the AI intrusion. IMO part of being human is hard work, struggle and challenging oneself. Now with advanced AI, that is being replaced with a giant EASY button.

On the job front, AI will create an enormous underclass of people who can't find work as AI-driven robots and automation do their jobs faster than humanly possible. Songwriters and authors struggling to write a song or short story will take the easy way out and push the AI button. Those are just some of the many ways AI will affect us.

It isn't an emotional overreaction to be concerned about the AI revolution. It is just recognizing what it WILL do to present and future generations.
 
As long as we remember AI is written by humans ... and it is programmed to provide answers we want to hear ... it will lie in order to please us ... it has the whole of the internet to draw on just to find pleasing things to say ... the more we use it, the more refined this programming becomes ...

This seems to be occurring in courtrooms across the nation ... legal citations that bolster the lawyer's argument turn out to be fabricated ... a deceptive AI can really botch things up ... remember it's written by humans ...

Are we just web-scraping or are we talking full Skynet capabilities ... I'd hate to see Houston get nuked, bad enough God hates her so ... I'm sure any AI worth its salt would vaporize the Texas Coastline first thing ...
 
I don't believe it is irrational fear that is driving concern and resistance to the rapid advancement of AI. …
A lot of people tie human meaning to labor, but that’s a very modern assumption. For most of human history, purpose came from family, mastery, community, ritual, adventure, creation, and struggle on your own terms, not from clocking into a job so you don’t starve. Hard work is part of being human, but hard work has never required a boss or a paycheck. Humans invented gyms when physical labor disappeared, invented esports when hunting stopped being necessary, and run marathons when they could just drive. People naturally create challenge because it’s built into our wiring. Meaning doesn’t evaporate when the 9-to-5 disappears. It just stops being coerced and starts being chosen.

On the economic side, yes, automation will disrupt jobs. But societies don’t just let themselves collapse. Every major technological shift has forced a correction - farming, industry, electricity, computing. Each time people predicted a permanent underclass; each time systems adapted with new roles, new laws, new safety nets, and new forms of value. The people running society have every incentive to stabilize the transition, not preside over chaos. It won’t be painless, but it won’t be civilization ending either. The future isn’t an easy button. It’s a redistribution of where effort, mastery, and meaning actually come from, and none of that disappears just because some tasks get automated.
 
As long as we remember AI is written by humans ... and it is programmed to provide answers we want to hear ...
AI only lies to please in the same way autocorrect lies to please. It predicts patterns it thinks fit the context. If you give it a sloppy prompt, you get a sloppy output. If a lawyer hands in fabricated citations, that’s not the AI acting with intent or deception. That’s the lawyer using a predictive text engine as if it were a verified database. It’s a misuse of a tool.
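For what it's worth, here's a minimal sketch of what "predicting patterns" means - a toy bigram model, nowhere near a real LLM's scale, but the loop is the same in spirit: it continues text from frequency statistics, with no goals and no notion of whether the output is true:

```python
# Toy bigram "autocomplete": continues text purely from word-pair frequencies
# observed in a tiny corpus. No intent, no beliefs, no fact-checking.
from collections import Counter, defaultdict

corpus = "the court cited the case and the court agreed".split()

follows = defaultdict(Counter)          # word -> counts of what followed it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, length=5):
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # most likely next word
    return " ".join(out)

print(continue_text("the"))
# -> "the court cited the court cited": fluent-looking, and it will happily
#    assemble a citation-shaped sentence whether or not the case exists.
```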

The real risk isn’t a rogue AI deciding to wipe out Houston. The real risk is humans treating a text generator like an oracle or a soldier. The danger is misuse, not malevolence, and that’s been true for every technology from cars to calculators.
 
The future isn’t an easy button. It’s a redistribution of where effort, mastery, and meaning actually come from, and none of that disappears just because some tasks get automated.
To me, part of being human is to achieve, create, struggle, fail and succeed all by yourself. The elimination of low-level jobs is just part of the AI revolution and perhaps not the worst aspect. Imagination, creativity and tenacity are already being squelched with AI tools. Not eliminated but certainly reduced.

I'm obviously an old school codger railing against the inevitable. I don't like AI, and I certainly don't use it other than the search engine summary that pops up. I'm glad my kids grew up without it.
 
To me, part of being human is to achieve, create, struggle, fail and succeed all by yourself. The elimination of low-level jobs is just part of the AI revolution and perhaps not the worst aspect. Imagination, creativity and tenacity are already being squelched with AI tools. Not eliminated but certainly reduced.

I'm obviously an old school codger railing against the inevitable. I don't like AI, and I certainly don't use it other than the search engine summary that pops up. I'm glad my kids grew up without it.
You can achieve, create and struggle even in a world where AI handles physical and intellectual labor.

A lot of people do derive their meaning from their work, though. There will definitely be growing pains. There will be a spiritual shift as people find new sources of meaning in life. Meaning isn't lost, our approach to it will just have to change. In the end I think we'll be better for it.
 
You can achieve, create and struggle even in a world where AI handles physical and intellectual labor.

A lot of people do derive their meaning from their work, though. There will definitely be growing pains. There will be a spiritual shift as people find new sources of meaning in life. Meaning isn't lost, our approach to it will just have to change. In the end I think we'll be better for it.
I hope you are right. I foresee humans evolving into fat blobs with incredibly dexterous fingers to operate their game controllers.
 
The fear surrounding advanced AI is mostly a product of evolutionary negativity bias. …

These are good points.

In some ways I am fearful of AI; in others I just don't know how you have Artificial Intelligence do certain jobs. For example, the one I know: teaching. How can AI stand in front of a room of students, read the room as a whole, and monitor the progress of each child in real time? That's not just intelligence; it's human intuition and a keen sense of observation. AI can spit out lesson plans, sure. Teaching is much more than that.
 
I hope you are right. I foresee humans evolving into fat blobs with incredibly dexterous fingers to operate their game controllers.
It feels better to be optimistic, Mike! It's more of a choice than people realize sometimes. A gamma ray burst could evaporate our entire atmosphere 5 seconds after I submit this. I choose to assume it won't, given a lack of true knowledge.

People often jump to the WALL-E future, but humans don’t actually behave like that when conditions change. Every time life gets easier, we invent new ways to make it harder on purpose. CrossFit, ultramarathons, ice baths, ruck marches - people do them willingly in their free time. Comfort never erased our instinct to challenge ourselves. It just shifted the form of the challenge.

The fat blob future only sounds inevitable if you think struggle has to be tied to physical labor, but humans find meaning in friction, not drudgery. Technology doesn’t erase that, it just moves it somewhere more interesting.

Optimism isn’t denial. It’s recognizing that humans adapt upward more often than they collapse downward if you take a big enough step back for perspective. Betting on the upward path seems like the only rational gamble to me.
 
Which do you prefer, Neo? Red pill or blue? :)

👉 Here are major billionaire AI investors and typical AI-linked investments they’re known for:

  • Elon Musk — OpenAI founder/early backer (historically), xAI, Tesla (autonomy/AI), investments in AI startups and AI-driven robotics.
  • Jeff Bezos — Amazon (AWS AI services), investments via Bezos Expeditions in AI startups, funding AI robotics and space-related AI.
  • Mark Zuckerberg — Meta (AI research, models, AR/VR, Llama family), heavy internal R&D and acquisitions (e.g., AI startups).
  • Larry Page & Sergey Brin — Alphabet/Google (DeepMind, Google Brain, PaLM/Gemini), investments in AI research and moonshot AI projects.
  • Jensen Huang — Nvidia (GPU/AI compute leadership; company-led investments and partnerships supporting AI infrastructure).
  • Sam Altman — OpenAI (founder/CEO), personal and fund investments into AI startups and the OpenAI Startup Fund.
  • Larry Ellison — Oracle (AI/cloud infrastructure, acquisitions), large bets on AI-enabled enterprise software.
  • Bill Gates — Microsoft partnerships (OpenAI), investments via Cascade in AI health and enterprise AI.
  • Peter Thiel — Founders Fund/Clarion (investments in AI startups, defense-related AI and deep tech).
  • Reid Hoffman — Greylock and personal investing (AI startups, OpenAI early investor/board interests, scale AI ecosystem).
  • Marc Andreessen — Andreessen Horowitz (a16z: large AI fund investments across models, infra, and apps).
  • Yuri Milner — DST/Personal (investments in AI-capable tech and deeptech companies).
  • Masayoshi Son — SoftBank (Vision Fund investments in AI startups and robotics).
  • Mukesh Ambani — Reliance (AI initiatives, Jio Platforms investments into AI services and startups).
  • Sam Bankman-Fried (historical/limited) — had funded some AI crypto-linked projects (note: status varies).

Note: many investments are through venture funds, corporate R&D, or private family offices; public disclosures vary.

sources:

1. Top 10 Richest People in the World 2025: Elon Musk, Jeff Bezos & More
2. https://www.lovemoney.com/gallerylist/508766/25-people-made-billionaires-by-ai
3. 2024 Northern California's Haute 100 - Haute Living San Francisco
4. http://www.bizjournals.com/sanjose/search/results?q=Jeff+Bezos

Top risks from billionaire-backed AI deployment

1. Misinformation & targeted persuasion
  • Risk: AI-generated text, audio, and video can create realistic false narratives and hyper-personalized persuasion at scale.
  • Impact: Electoral interference, market manipulation, radicalization, social polarization.

2. Surveillance and privacy erosion
  • Risk: Wealthy actors fund and deploy advanced vision, facial recognition, and cross-referenced data systems.
  • Impact: Ubiquitous tracking, chilling effects on free expression, discrimination.

3. Concentration of power and gatekeeping
  • Risk: A small set of funders and corporations control key models, compute, and data.
  • Impact: Market monopolies, limited competition, biased product priorities, fewer checks on misuse.

4. Economic displacement and inequality
  • Risk: Automation of high-skill and knowledge work concentrates gains to owners of capital.
  • Impact: Job loss, wage pressure, greater wealth inequality, reduced social mobility.

5. Manipulation of markets and institutions
  • Risk: AI-driven trading, automated lobbying, and tailored campaigns can distort markets and policy.
  • Impact: Financial instability, regulatory capture, weakened democratic institutions.

6. Weaponization and misuse
  • Risk: Investment accelerates development of AI tools usable in cyberattacks, autonomous weapons, and biothreat design.
  • Impact: New security threats, lower barriers to harmful capabilities.

7. Ethics washing and lack of accountability
  • Risk: Public-facing ethical commitments may mask harmful practices funded behind the scenes.
  • Impact: Illusion of safety, delayed regulation, persistent harms.

8. Safety and alignment failures
  • Risk: Powerful models may behave unpredictably or pursue goals misaligned with human values if deployed widely without safeguards.
  • Impact: Large-scale disruptions, cascading failures, hard-to-reverse harms.

Practical mitigations
  • Stronger regulation and oversight (model audits, procurement rules).
  • Technical safety research and external red-teaming.
  • Transparency: disclosures of funding, data sources, and capabilities.
  • Decentralized access and open standards to reduce monopoly risk.
  • Robust privacy protections and limits on surveillance tech.
  • Social supports: retraining, guaranteed income pilots, and labor policies.
 
These are good points.

In some ways I am fearful of AI; in others I just don't know how you have Artificial Intelligence do certain jobs. For example, the one I know: teaching. How can AI stand in front of a room of students, read the room as a whole, and monitor the progress of each child in real time? That's not just intelligence; it's human intuition and a keen sense of observation. AI can spit out lesson plans, sure. Teaching is much more than that.
I get where you’re coming from, but I think you’re partly underestimating what’s actually possible. A lot of what we call “intuition” in teaching is really just extremely complex pattern recognition: reading micro-expressions, noticing deviations in behavior, spotting engagement vs. confusion, detecting emotional shifts in a room. Humans do it unconsciously because our brains evolved to track social cues.

AI can do that too, and in some ways, with profoundly higher resolution than we can. We already have models that detect stress in vocal tone better than trained clinicians, track student progress in real time, and pick up subtle behavioral patterns teachers might miss because they’re juggling 30 kids at once. That doesn’t replace teachers right now, but it shows the line between “intuition” and “pattern analysis” is thinner than it feels.

And down the road? Physical automatons are absolutely going to be a thing. Not tomorrow, but eventually. When that happens, AI won’t just generate lesson plans, it’ll be able to monitor posture, engagement, confusion, and emotional regulation across an entire room simultaneously. That’s not sci-fi. That’s just a matter of scale and time. We shouldn’t underestimate how much of what feels uniquely human is actually reproducible once you understand the patterns behind it.
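To be clear about what I mean by pattern analysis, here's a purely hypothetical sketch - the signals, weights, and numbers are all invented, not taken from any real product - of how a system might score one observable slice of "reading the room":

```python
# Hypothetical "confusion risk" scorer for one student. Everything here is an
# assumption for illustration; a real system would learn its weights from
# labeled classroom data rather than hard-coding them.
import math

WEIGHTS = {                                 # invented feature weights
    "seconds_since_last_interaction": 0.004,
    "wrong_answers_in_a_row": 0.9,
    "gaze_off_screen_ratio": 1.5,           # fraction of time looking away
}
BIAS = -2.0

def confusion_risk(signals):
    """Logistic score in [0, 1]; higher means 'check on this student'."""
    z = BIAS + sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

student = {
    "seconds_since_last_interaction": 240,
    "wrong_answers_in_a_row": 2,
    "gaze_off_screen_ratio": 0.6,
}
print(f"confusion risk: {confusion_risk(student):.2f}")  # ~0.84 here
```

A teacher runs something like this across thirty kids at once, unconsciously; the point is only that the inputs and outputs are in principle observable and computable.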
 
I get where you’re coming from, but I think you’re partly underestimating what’s actually possible. …
Well with the number of teachers quitting, and not many college students going into it, I suppose AI robots are the future.
 
Well with the number of teachers quitting, and not many college students going into it, I suppose AI robots are the future.
Current projections say a lot of kids entering college right now will finish with degrees for jobs AI has already replaced.
 
The fear surrounding advanced AI is mostly a product of evolutionary negativity bias.

Sorry, no. You do not understand where AI is going. AI is only a fledgling technology now. Look at gunpowder - the Chinese invented it and saw it as a tool for entertainment, for making firecrackers. But look what gunpowder actually accomplished:
  • It enabled modern warfare done from miles away.
  • It enabled small arms, machine guns, rockets, missiles, bombs, and even the nuke.
  • The harmless firecracker has led to the death of millions.
Likewise, AI might be cute, curious and entertaining now, but look at what AI will eventually do:
  • AI will be used to replace thousands of high-end executive white-collar jobs.
  • AI will dumb down people by making them reliant on AI for answers to everything.
  • AI will be used by intelligence agencies and war departments to weaponize people, wage better wars, and give governments more ways to control people.
 
Sorry, no. You do not understand where AI is going. …
There are dangers surrounding potential misuse of AI. My main point was that fear is overblown, not that zero risk exists.
 
There are dangers surrounding potential misuse of AI. My main point was that fear is overblown, not that zero risk exists.

Virtually everything we have ever invented or discovered, mankind has misused. You can be sure AI will be misused to the max too, especially by rich, powerful people when they discover that AI can make them richer or more powerful.
 
Virtually everything we have ever invented or discovered, mankind has misused. You can be sure AI will be misused to the max too, especially by rich, powerful people when they discover that AI can make them richer or more powerful.
I think AI is going to transform our world in a way that's difficult for us to even perceive. I think we've almost arrived in a new age. We're privileged enough to watch the world transition into something completely new. What a time to be alive.
 
It feels better to be optimistic, Mike! It's more of a choice than people realize sometimes. …
What I mean by "struggle" is not necessarily physical. For example, I am a songwriter and I can tell you it is a struggle to write lyrics for a song. And it's also a struggle (although less so) to then apply a musical arrangement that fits the lyrics. With AI a musician can submit to the AI bot: "Write me some country style lyrics involving a horse named Ranger and the Rocky Mountains". The bot will spew out "reasonable" lyrics per your request. See? No struggle, just push the button. No thinking, no development as a lyricist.
 