When I refer to coercion I'm referring to what we might call "artificial coercion" as opposed to "natural coercion." In a natural scenario I would be permitted to work without the threat of being imprisoned if I don't give a portion of my income to the state. The state, however, adds the artificial coercion to the equation.
It sounds like you would consider any social constraints (as opposed to more basic constraining factors, like the need for food or shelter) to be "artificial." I don't know if you mean that word to be derogatory or to indicate a lesser degree of legitimacy, but I'm not sure why you think the constraints imposed by the rules of social interaction are less valid than those imposed by nature. The rules/constraints imposed by the Ten Commandments are all "artificial coercion" in the way you seem to be using the phrase; indeed, the state has stepped up to reiterate some of those rules (e.g. those against killing and stealing). Are those unacceptable constraints?
Again, the message is simple: choosing the items on this list results in sanction. People still kill and steal; there are just extra factors to weigh now when making that decision (the probability of the law catching you, your personal belief in cosmic justice, etc.). Constraints. It sounds like you're arguing that since these constraints exist, anyone who makes a choice like not murdering someone is, on some level, doing so involuntarily.
I would define "to choose" as "to make a decision regarding something," but the question is whether that choice is voluntary or not.
But that gets to the heart of it. What's an involuntary choice? Suppose I argue that decision-making (choosing) is the result of a calculation, an algorithm in a person's head based on his many preferences. And let's imagine I'm able to program a machine with
all of my preferences (assuming completeness and transitivity of my own preferences). So now I can feed this machine any conceivable situation and set of circumstances/constraints and it spits out a choice either identical to the one I make or equivalent to it (meaning when comparing the machine's output and my own, I'm indifferent between them).
In this little thought experiment, is the machine making choices? Given its (or, rather, my) preferences, it's choosing the best option available in a given circumstance. But generally we associate the act of choosing with free will. Presumably just programming in some preferences doesn't confer free will to the machine. But maybe you object that the machine is proceeding from
my preferences--I put them into it and thus it's a slave to my preferences, it hasn't chosen anything. The outcomes of its "choice" calculations are all deterministic and ultimately I'm the one pulling its strings.
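The thought experiment can be made concrete with a small sketch. Everything here is hypothetical illustration, not a claim about how minds actually work: a ranked list stands in for a complete, transitive preference ordering, and "choosing" is just selecting the highest-ranked option that the constraints leave feasible.

```python
# Hypothetical sketch of the "preference machine" thought experiment.
# A ranked list encodes preferences that are complete and transitive;
# a "choice" is the most-preferred option surviving the constraints.

class PreferenceMachine:
    def __init__(self, ranking):
        # ranking: options listed from most to least preferred.
        self._rank = {option: i for i, option in enumerate(ranking)}

    def choose(self, feasible):
        """Return the most-preferred option among those the constraints allow."""
        candidates = [o for o in feasible if o in self._rank]
        if not candidates:
            raise ValueError("no ranked option is feasible")
        return min(candidates, key=lambda o: self._rank[o])

# Program in "my" preferences once (steak over chicken over pork)...
machine = PreferenceMachine(["steak", "chicken", "pork"])

# ...then any set of circumstances/constraints yields a deterministic choice.
print(machine.choose({"chicken", "pork"}))  # -> chicken
print(machine.choose({"steak", "pork"}))    # -> steak
```

The point the sketch makes vivid is that the output is fully determined once the ranking and the feasible set are fixed; nothing in the machine looks like it is exercising free will, which is exactly the question the post then turns back on the person whose preferences were programmed in.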
But then you have to ask what that says about me. For again, when I make choices I, too, am weighing my preferences against the incentives and constraints facing me. And it doesn't seem to me I've chosen my preferences any more than the machine chose them. I don't recall sitting down at any point and consciously deciding that in general steak tastes better than chicken, which tastes better than pork. I don't recall choosing to find most shades of green more aesthetically pleasing than most shades of yellow. So when a decision needs to be made and I feed the relevant constraints (natural, artificial, whatever) and circumstances into my mental preference algorithm, have I exercised free will in coming up with my final choice? And is the existence of constraints (including artificial ones) to be weighed any more eyebrow-raising than the fact that the set of preferences ultimately determining my choice wasn't itself consciously chosen by me? But what would it even mean to choose one's preferences (on what basis could a preference-less person choose which preferences to select)? Or, put more simply, who's in the driver's seat during the decision-making process? If the machine in my thought experiment isn't displaying free will and doing its own bidding when it spits out a choice, should I be any more convinced that
I'm doing my own bidding when I make the same calculation using the same preference-algorithm (of mysterious origin)?
What's the point here? Choices arise when constraints (social-artificial, necessary-natural) collide with preferences, resulting in multiple possible outcomes. What I argued in the earlier post is essentially that since there are multiple possible outcomes for you (compliance, emigration, destitution, sanction), the basic definition of choice as "multiple options" is satisfied. Your response is that choice isn't
really present because 1) several of these possible outcomes are unreasonable and you can't be expected to actually choose them, and 2) your ideal outcome is absent from the list of possible outcomes due to constraints (in other words, the constraints have eliminated your top preference from the realm of possibility).
And so this emphasis on "voluntary" seems to really be a daydream about a perfect world where your personal preferences and the external constraints are orthogonal; the constraints turn out not to be constraints on your preferences at all. That seems implicit in the notion of
involuntary choice that you're proposing. And I'd say not only is that a very limited definition of what is or isn't voluntary, it doesn't answer why preferences should be considered so much more legitimate than constraints in this equation, given the haziness of their origins--if, again, you're trying to make a more philosophical point about free will and choice.