Fueri
Platinum Member
- Nov 16, 2015
Technology is not evolution.
Evolution would occur if it benefited the species in some major way, enough to influence natural selection over the course of time.
Hammerhead sharks, bats, electric eels and countless other animals that have, through evolution, developed some seriously crazy shit would seem to indicate that it is theoretically possible, via evolution, for a species to develop abilities that we might consider 'super-powers'.
Sticking a chip in my head so I can get a live internet feed is not that. Not sure I'd want a tech interface into my brain and, now that I'm thinking about it, I definitely would not.
I mean evolve as in continue to improve technologically. When we talk about chips we talk about the "next generation" or the "evolution of GPU power" etc. Obviously I didn't think the robots of the future would grow new body parts or extra receptors; however, they will learn and store what they learn. It's already here in its infancy and it will be light years ahead 100 years from now. There is also progress in how they learn.
There are real threats with manipulation of video and audio, which can emulate leaders and is getting increasingly difficult to distinguish from the real thing. A hacker or a machine could create real havoc. I predict that at some point in the distant future machines could harm our civilization if they are given a focused task. The pursuit of that goal may not be in our interest, and quantum computing is here already.
For instance, right now this learning is being done by machines in board games such as chess, where the machine is given one goal which is rewarded: to win. It's a very simplistic and controlled model, but what if there are far more advanced human-helping robots in the future and they are given a goal to "collect as much gold (or anything) as possible"? They will be like a Terminator and will go to all lengths to accomplish that sole task, regardless of anyone else's health or well-being.
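The concern above is easy to show in miniature. Here's a toy sketch (the function name and the grid world are hypothetical, made up for illustration, not taken from any real system) of an agent whose only objective is gold: it greedily chases reward and stops only when no reward is left in reach, with nothing in its objective accounting for anything else.

```python
def collect_gold(grid, start):
    """Greedy agent: repeatedly moves to whichever adjacent cell holds the
    most gold, stopping when no neighbor offers any reward at all.
    Returns the total gold collected. Nothing else enters the objective."""
    pos = start
    total = grid[pos]
    grid[pos] = 0  # pick up the gold at the starting cell
    while True:
        # Look at the left/right neighbors that exist on the grid.
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(grid)]
        best = max(neighbors, key=lambda p: grid[p])
        if grid[best] == 0:
            break  # no reward adjacent -> the sole goal is exhausted
        pos = best
        total += grid[pos]
        grid[pos] = 0  # collect and move on
    return total

world = [0, 3, 5, 0, 2]       # gold in each cell of a 1-D world
print(collect_gold(world, start=2))  # picks up 5, then 3, then stops -> 8
```

Note that the agent walks away from the gold at the far end because its greedy one-step rule sees only zeros next door; a single narrow objective produces behavior nobody intended, which is the whole worry scaled down.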
Then such machines may start to look at humans as an obstacle to reaching their goal. An enemy to their objective, even. If there are only a few, we may feel comfortable knowing they couldn't rally together and cause harm, but we can't imagine how different the world will be, from the way it is connected globally (the internet 100 years in the future), or what have you.
I was responding to the OP. Sorry I didn't make that clear. My post just happened to follow yours....
Ah, ok np. Interesting subject though.
I think as I've aged I've become a little more open minded about just how far humans (and machines) will go. Remember the old "4 minute mile barrier"? I don't think we can ever set such limits on any human endeavours again.
It is. I've been fascinated by animals' 'superpowers' for years. How they evolved, how long it must have taken, how incredible their abilities are.
We also have to consider that animals are still evolving. What happens if, for instance, apes start to speak, or something similar emerges that points to a more advanced intelligence than we currently understand?
On the human/cybernetic side I think we'll see some of those things fairly soon, likely in our lifetimes in terms of minor interfaces. Neural implants and things of that nature may be a bit further off....