If I program an app to read the temperature from a thermometer and then declare that whatever the thermometer got stuck in is good food, the app is not tasting the food or really evaluating it, and we all understand that. That is all Strong AI is: a complex SIMULATION of human behavior. Given its intrinsic limitations on shifting focus, it does not have Free Will, no matter how well it can emulate human learning and behavior otherwise. That is just my opinion, though, and of course you are equally entitled to your own. I don't mean to belittle you or what you say here. The question of Free Will, I think, is central to what makes us human beings and sentient. We have moral responsibility and Strong AI does not. Do you think Strong AI robots should be punished for breaking the law?