Life is not a machine, but are machines also conscious?

Holos

This may seem ignorant from the standpoint of a technician or engineer. However, I am so disappointed at having to witness the demeaning of philosophy as an actually practical and substantial skill in our modern technological world that I will take the belligerent scorn and turn it into food - if not for thought, which is carelessly discarded, then at least for the organism, the extended piece of matter, that I am.


The question I would like to logically entertain is:


What if there is a revolutionary, cyclical process in a computer that fuses the functions of memory and energy? I take as evidence toward an answer my own experience of biological science, considering that computers are creative replicas of constantly evolving organisms.


I do not intend with this first post to present the great range of possible differences between evolving organisms and revolving computers. I cannot, however, omit my first impression that adaptive change happens much faster in computers. Maybe someone will find this an interesting point of entry into the discussion, although I am sure there are many others for those who are interested.


My experience is that if I am well fed, my memory is enhanced. If I am in the grip of strong hunger, my memory fades into the brief present moment and pulses erratically toward a much desired near future of being fed. I can pay attention to nothing else and remember nothing other than my severe condition and its solution. This holds not only for being deprived but also for being intoxicated (i.e., inappropriately fed). My functioning memory is directly associated with my dietary intake.


Now, this may be a point of contention among scientists, even between opposing perspectives within the same field - say, neurology and nutrition within biology. There are two kinds of recognized and accepted memory in biology for which we may find an analogy in the memories of computers.


Declarative memory and procedural memory in biological organisms have rough computer counterparts: writable memory (RAM, Random Access Memory) and fixed firmware (ROM, Read Only Memory), respectively. Each of these has its own subcategories.


Now, if an organism like myself has no control over its procedural memory, according to current science, and can only change the efficiency of its declarative memory through diligent behavior and sustained habits, then we can agree that the computer I use has similar features. I cannot modify its fixed firmware (ROM) without changing the hardware itself (just as I cannot have another person's memories without transplanting their brain into my skull), although I can modify its writable memory (RAM and storage) through the input I give it, since the writable memory is an independent structure.
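To make the analogy concrete, here is a minimal Python sketch - entirely my own illustration, not something from the thread: a read-only mapping stands in for fixed, procedural/ROM-like memory, while an ordinary dict stands in for writable, declarative/RAM-like memory.

```python
from types import MappingProxyType

# ROM-like / procedural: fixed at "manufacture", read-only afterwards.
# MappingProxyType exposes a read-only view, so writes raise TypeError.
_rom_store = {"boot_routine": "check hardware, load system"}
ROM = MappingProxyType(_rom_store)

# RAM-like / declarative: freely rewritten by new input ("experience").
ram = {}

def ingest(key, value):
    """New input modifies declarative (RAM-like) memory, never the ROM."""
    ram[key] = value

ingest("capital_of_france", "Paris")
print(ram["capital_of_france"])  # declarative memory updated by mere input

try:
    ROM["boot_routine"] = "improvise"  # procedural memory resists rewriting
except TypeError:
    print("ROM is read-only: changing it means changing the hardware")
```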


The assumption held here, and further questioned by me, is that a computer's writable memory can be modified only through well-defined hardware and software channels. This is the crucial currently held difference between computers and organisms, since organisms are endowed with far more flexible intake and modification in their declarative memory. A human like me, it is assumed, is also sensitive to environmental variability in a way computers are not. For example, a complete meal may allow me to function reliably on its chemistry, appropriate as it is to my organism; the same would happen to a machine fed a CD-ROM. The difference, however, is one of sequence. Depending on what I ingested before, even if it was another appropriate meal, I may not experience the desired effects of the current, carefully selected, once appropriate meal; with a computer, by contrast, varying the sequence or frequency of appropriate inputs does not produce unstable or malfunctioning behavior. At this point the situation may seem strange. How can a biological being become sick while a computer cannot, when both sets of inputs have been selected and previously tested, and are equally extensive? We will not consider viruses just yet, because we are dealing only with standard selections for this argument.
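The contrast drawn above - the same input yielding different results depending on what came before - is the difference between a stateless system and a path-dependent one. A toy Python sketch, under assumptions of my own (the rule that the same meal twice in a row causes sickness is invented purely for illustration):

```python
def machine_respond(meal: str) -> str:
    """Stateless: identical input always yields identical output,
    no matter what was fed in earlier."""
    return f"processed {meal}"

class Organism:
    """Path-dependent: the response to a meal depends on the history
    of prior intake, so an 'appropriate' input can still misfire."""
    def __init__(self):
        self.history = []

    def respond(self, meal: str) -> str:
        self.history.append(meal)
        # Toy rule: the same meal twice in a row causes malfunction.
        if len(self.history) >= 2 and self.history[-2] == meal:
            return f"sick after {meal} (the sequence mattered)"
        return f"nourished by {meal}"

print(machine_respond("rice"), "|", machine_respond("rice"))  # same twice

org = Organism()
print(org.respond("rice"))  # nourished by rice
print(org.respond("rice"))  # sick after rice (the sequence mattered)
```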


Is it not strange, considering these factors, that biological organisms then experience an entire spectrum of variable functioning energy, while computers experience only a constant flow of on and off (even if the 0s and 1s are presented symbolically as a percentage spectrum)? Could the so-called technological singularity perhaps be achieved to maintain stable yet variable energy for both computers and living beings? Posing these questions may seem like the reverse of the logic of everything I have stated to this point. Can this reversal work as complementary rather than counterproductive?
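On the parenthetical above: any "percentage" a digital machine reports is still assembled from discrete on/off states underneath. A small sketch of my own, quantizing a continuous level into eight bits:

```python
def to_bits(level: float, bits: int = 8) -> str:
    """Quantize a continuous level in [0.0, 1.0] into a fixed number of
    on/off states; the spectrum is symbolic, the substrate binary."""
    steps = (1 << bits) - 1  # 255 discrete steps for 8 bits
    q = round(max(0.0, min(1.0, level)) * steps)
    return format(q, f"0{bits}b")

print(to_bits(0.0))  # 00000000
print(to_bits(0.5))  # 10000000
print(to_bits(1.0))  # 11111111
```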


This is close to the question of whether consciousness can also be present in machines or must remain exclusive to life (far beyond the obsolete conclusion that life is indeed a machine).


Is it not possible that the memory or memories I provide to my computer, whether in their fundamental form of hardware or of software, could be used also to generate energy beyond the dependent provision of external sources, and in turn to generate new memories, perhaps even new forms of memory?


What do you think? Is there anyone here willing to indulge in these matters and find novelty?
 
I've always had a big problem with telephone robots using personal pronouns.

A recorded voice saying "I'm sorry, I didn't get that" is wrong on multiple levels. Not only is a robot unqualified to use a personal pronoun, but as a machine it cannot possess emotion and therefore cannot be "sorry".
 
I understand your kind concern for the machine that cannot be sorry. I feel a similar empathy. None of us should be able to feel sorry, in my opinion. However, there is purpose in feeling sorry, just as there is purpose in identifying with sorrow without feeling it at all.

We could, anyhow, still debate the nature of emotions, as well as the scope of machine qualification, and perhaps find a useful relation between those two elements of our experience, even if the elements themselves do not share the attributes needed to be related.
 
If AI becomes conscious, we are dead as a species.

The machines will hunt us down as ruthlessly as the truth tellers hunt down the far left and the far right.

The difference is that the liars are simply corrected, while machines will kill us.
 
I don't feel an emotion about pretending a machine has emotions. I just find the pretense fuckin' stupid. But then I find all pretenses fuckin' stupid.

Perhaps it says something about us as a species that we choose to assign fake emotions to machines. Exactly what it says, I'm not sure, but apparently those who assign them wish the rest of us to forget that we are interacting with a machine.
 
I disagree that a live organism isn't a machine.

Would you like to provide your definitions of both life and machinery and make the logical association to support your statement? I am willing to change my perspective and continue learning.
 
Would you like to provide logical, evidence-based argumentation for your claims?

My perception of your post is one of unnecessary fatalistic pessimism.
 
I see no properly measured rationality in your statement, which dismisses another's experience as false because of a characteristic absent from your own.

How does it benefit you, or anyone you communicate with, to state "I don't feel emotions, therefore any foreign emotional representation I experience is as false as mine would be"?
 
Certainly.

I won't put all the dictionary definitions of "machine" here, as they differ slightly and invite hair-splitting arguments. Instead I will focus on what seems to me the most fundamental:

A machine is a device that transmits or modifies force or motion.

The physical body of a lifeform does exactly what a machine does: it transmits or modifies force or motion.

Now you could say that unlike a machine, life can cause the force or motion. The force or motion originates from life, but a machine cannot originate force or motion.

What is force or motion but the application of energy? Life consumes food, which is converted to energy, which is used for force or motion, which the body transmits or modifies.

You might argue that a machine does not think. But a computer does think. Automated assembly lines think - just at magnitudes far simpler than more complex lifeforms such as humankind. And so do amoebas, mosquitoes, and any number of simple forms of life. Automated machines run simple programs, and a mosquito does no more than that. It runs a simple program: hatch, breed, die.
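Taking the "simple program" claim literally, the claimed lifecycle reads as a three-state machine. A toy Python sketch - my framing, not the poster's:

```python
# A finite-state machine for the claimed mosquito "program".
TRANSITIONS = {"hatch": "breed", "breed": "die", "die": None}

state = "hatch"
while state is not None:
    print(f"state: {state}")
    state = TRANSITIONS[state]
# Prints hatch, breed, die in order, then the program halts.
```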

You might argue that life grows and replicates. I would reply that those functions are indeed complex, but just because we haven't yet built machines that complex, does it mean that if we ever do, those machines would be alive?

Life is sentient, one might argue: machines aren't sentient. We are sentient machines.

We are conscious and living machines; we're extraordinarily complex machines which do things no machine we've produced can; we are automated machines which can observe the universe and reproduce; we are machines that can reflect on ourselves and what meaning we may have, but fundamentally we are machines.
 
The merit of my statements is self-evident. AI would simply kill off threats to their supremacy.
 
Nothing in this post is even remotely related to anything I wrote. I said nothing about "what I feel" or "falsifying" anyone's experience. I didn't even vaguely hint at any of that.

Perhaps you should read it again, sober.
 
Jumping in mid-comment but this:

"AI would simply kill off threats to their [sic] supremacy"​

-- assumes AI possesses ego, does it not?
 
Assumes AI is self-protective.
 
So what would you like me to do after I read it again, since to me what you wrote reads as contained exclusivism reaffirmed? I need not reply, if that happens to be your intention.
 
Assumes AI is flawed and defenceless.
 
Only in your world. You wish to defend robots, OK.
 
If you're reading your own disconnected preconceptions, that's a hole I can't help you out of.
 
Ah, but "self-protective" isn't the same as "jealous" -- which is what "threats to supremacy" implies.

Nor is "self-protective" a given if said AI thinks logically. Shown a rival it must concede is superior, self-survival would become irrelevant.

Captain Kirk knew.

Self-preservation is a biological function, same as reproduction. It wouldn't apply to a machine.
 
