JimBowie1958
Old Fogey
- Sep 25, 2011
- Thread starter
- #41
A Strong AI program does what it is programmed to do; it does not leave those paths and decisions.
I think you must use the term "strong AI" (SAI) differently than I do. I use the term to refer to the notion that a "machine's intellectual capability is functionally equal to a human's."
Aspiring to that definitional goal of SAI, how could SAI not make "conscious" choices to abandon one course of action and pursue another? Indeed, SAI, like humans, would surely have to learn to try things, then scrap or modify them if they aren't yielding the desired outcomes.
- Strong AI
- 2005 -- Human-Level Artificial Intelligence? Be Serious!
- Subjective Reality and Strong Artificial Intelligence
Until complete omniscience is achieved, even SAI will eventually reach a point where it's in "uncharted waters," and it will have to go with the most likely best choice, which still may not be the actual best choice. I know it's hard for us to conceive of that point "where no SAI has gone before," but it's there somewhere. I suspect where that point lies has to do with humanity, as humans are quite unpredictable and can't be relied upon to behave in a given way.
If I program an app to register the temperature from a thermometer and then declare that whatever the probe got stuck in is good food, it is not tasting the food or really evaluating it, and we all understand that.
That is all Strong AI is: a complex SIMULATION of human behavior. Given its intrinsic limitations on shifting focus, it does not have Free Will, no matter how well it can emulate human learning and behavior otherwise.
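The thermometer analogy above can be sketched in a few lines of code. Everything here (the `judge_food` function and its "acceptable serving temperature" rule) is hypothetical and purely illustrative; the point is that the program maps a number to a label without anything resembling tasting or evaluating:

```python
def judge_food(temperature_c: float) -> str:
    """Label whatever the probe is stuck in, based only on a number.

    The 60-75 C "good" range is an arbitrary rule assumed for
    illustration, not a real food-safety standard.
    """
    if 60.0 <= temperature_c <= 75.0:
        return "good food"
    return "bad food"

print(judge_food(68.0))  # prints "good food"
print(judge_food(20.0))  # prints "bad food"
```

The program happily calls a cup of lukewarm dishwater "good food" if the reading lands in range, because it never evaluates anything beyond the one number it was programmed to check.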
That is just my opinion, though, and of course you are equally entitled to your own.
I don't mean to belittle you or what you say here. The question of Free Will, I think, is central to what makes us human beings and sentient. We have moral responsibility; Strong AI does not.
Do you think that Strong AI robots should be punished for breaking the law?