Implausibility of Materialism


Guest someone2
Posted

On 7 Jul, 23:58, Jim07D7 <Jim0...@nospam.net> wrote:

> someone2 <glenn.spig...@btinternet.com> said:

>

>

>

>

>

> >On 7 Jul, 22:39, someone2 <glenn.spig...@btinternet.com> wrote:

> >> On 7 Jul, 19:26, Jim07D7 <Jim0...@nospam.net> wrote:

>

> >> > someone2 <glenn.spig...@btinternet.com> said:

>

> >> > >On 7 Jul, 07:20, Jim07D7 <Jim0...@nospam.net> wrote:

> >> > >> someone2 <glenn.spig...@btinternet.com> said:

> >> > <...>

> >> > >> >Where is the affect of conscious experiences in the outputs, if all

> >> > >> >the nodes are just giving the same outputs as they would have singly in

> >> > >> >the lab given the same inputs. Are you suggesting that the outputs

> >> > >> >given by a single node in the lab were consciously affected, or that

> >> > >> >the affect is such that the outputs are the same as without the

> >> > >> >affect?

>

> >> > >> >I notice you avoided facing answering the question. Perhaps you would

> >> > >> >care to directly answer this.

>

> >> > >> My resistance is causing you to start acknowledging that structure

> >> > >> plays a role in your deciding whether an entity is conscious; not just

> >> > >> behavior. Eventually you might see that you ascribe consciousness to

> >> > >> other humans on account of, in part, their structural similarity to

> >> > >> you. The question will remain, what justifies ascribing consciousness

> >> > >> to an entity. When we agree on that, I will be able to give you an

> >> > >> answer that satisfies you. Not until then.

>

> >> > >I fail to see how my understanding affects your answer. You are just

> >> > >avoiding answering.

>

> >> > Obviously, " When we agree on [a well-defined rule that] justifies

> >> > ascribing consciousness to an entity" means, in part, that I will

> >> > have a rule. I don't now. If I did, I would apply it to the situation

> >> > you describe and tell you how it rules.

>

> >> > >The reason it seems to me, is that you can't face

> >> > >answering, as you couldn't state that the conscious experiences would

> >> > >be influencing the robot's behaviour, and you know that it is

> >> > >implausible that conscious experiences don't influence our behaviour.

> >> > >First your excuse was that I hadn't taken your definition of influence

> >> > >into account, then when I did, you simply try to steer the

> >> > >conversation away. To me your behaviour seems pathetic. If you wish

> >> > >to use the excuse of being offended to avoid answering and ending the

> >> > >conversation, then that is your choice. You can thank me for giving

> >> > >you an excuse to avoid answering it, that you are so desperate for.

>

> >> > Wrong. I have been trying to steer the conversation toward having a

> >> > well-defined rule that justifies ascribing consciousness to an entity.

>

> >> > I thought you might want to help, if you are interested in the answer.

> >> > Clearly, you aren't.

>

> >> > So maybe we might as well end the conversation.

>

> >> What does a well-defined rule about ascribing consciousness have to do

> >> with it? Whether you would claim the robot was having conscious

> >> experiences, or that it could not be, simply answer the question. If you

> >> can't even face answering a simple question, then you are a coward and

> >> are wasting my time; there is no point in explaining things to you if,

> >> when it comes to it, you haven't even got the courage to answer the

> >> question and be true to yourself.

>

> >> Again with regards to the example and question I gave you:

> >> --------

> >> Supposing there was an artificial neural network, with a billion more

> >> nodes than you had neurons in your brain, and a very 'complicated'

> >> configuration, which drove a robot. The robot, due to its behaviour

> >> (being a Turing Equivalent), caused some atheists to claim it was

> >> consciously experiencing. Now supposing each message between each node

> >> contained additional information such as source node, destination node

> >> (which could be the same as the source node, for feedback loops), the

> >> time the message was sent, and the message. Each node wrote out to a

> >> log on receipt of a message, and on sending a message out. Now after

> >> an hour of the atheist communicating with the robot, the logs could be

> >> examined by a bank of computers, verifying that for the input

> >> messages each received, the outputs were those expected by a single

> >> node in a lab given the same inputs. They could also verify that no

> >> unexplained messages appeared.

>

> >> Where was the influence of any conscious experiences in the robot?

> >> --------

>

> >> You replied:

> >> --------

> >> If it is conscious, the influence is in the outputs, just like ours

> >> are.

> >> --------

>

> >> I am asking you where the influence (which you defined as affect) that

> >> you claimed is in the inputs. As I said:

>

> >> Where is the influence in the outputs, if all the nodes are just

> >> giving the same outputs as they would have singly in the lab given the

> >> same inputs. Are you suggesting that the outputs given by a single

> >> node in the lab were consciously influenced/affected, or that the

> >> influence/affect is such that the outputs are the same as without the

> >> influence?

>

> >> Have you the courage to actually answer, or can't you face it?

>

> >It should have read:

>

> >I am asking you where the influence (which you defined as affect) that

> >you claimed is in the outputs.

>

> I have not used "affect" -- maybe you have mixed me up with someone

> else.

>

> Also, I have not claimed that the influence is in the outputs. That

> would not be my position.

>

> > As I said:

>

> >Where is the influence in the outputs, if all the nodes are just

> >giving the same outputs as they would have singly in the lab given the

> >same inputs. Are you suggesting that the outputs given by a single

> >node in the lab were consciously influenced/affected, or that the

> >influence/affect is such that the outputs are the same as without the

> >influence?

>

> >Have you the courage to actually answer, or can't you face it?

>

> Actually, I am done having my courage mistakenly questioned. Of course

> you are predestined to interpret my goodbye as a lack of courage.

> Goodbye.

>

 

 

Regarding where you said:

-------------------------

I have not used "affect" -- maybe you have mixed me up with someone

else.

 

Also, I have not claimed that the influence is in the outputs. That

would not be my position.

-------------------------

 

You used "affect" in:

 

http://groups.google.co.uk/group/alt.atheism/msg/cfec4602040648ab?dmode=source&hl=en

 

Where you were attempting to avoid answering, and I subsequently

reworded it for you in the hope that you would have the courage to

finally answer, but you continued to refuse to face answering. The

section from the post was:

 

-------------------------

>

>Where is the influence in the outputs, if all the nodes are just

>giving the same outputs as they would have singly in the lab given the

>same inputs. Are you suggesting that the outputs given by a single

>node in the lab were consciously influenced, or that the influence is

>such that the outputs are the same as without the influence?

>

You have forgotten my definition of influence. If I may simplify it.

Something is influential if it affects an outcome. If a single node in

a lab gave all the same responses as you do, over weeks of testing,

would you regard it as conscious? Why or why not?

-------------------------

 

[it was also in that post that you had said you had read every word,

but hadn't thought of it as an answer, yet thanked me for stitching it

together, when all I had done was cut and paste what was posted

before]

 

You claimed that the influence is in the outputs in:

 

http://groups.google.co.uk/group/alt.atheism/msg/b8847a09a49f93cb?dmode=source&hl=en

 

Also, I repeatedly cut and pasted your response in the subsequent

conversations, and never once before had you forgotten(?) that you

had said it.

 

The section from the post (as I have repeatedly cut and pasted) was:

-------------------------

>Supposing there was an artificial neural network, with a billion more

>nodes than you had neurons in your brain, and a very 'complicated'

>configuration, which drove a robot. The robot, due to its behaviour

>(being a Turing Equivalent), caused some atheists to claim it was

>consciously experiencing. Now supposing each message between each node

>contained additional information such as source node, destination node

>(which could be the same as the source node, for feedback loops), the

>time the message was sent, and the message. Each node wrote out to a

>log on receipt of a message, and on sending a message out. Now after

>an hour of the atheist communicating with the robot, the logs could be

>examined by a bank of computers, verifying that for the input

>messages each received, the outputs were those expected by a single

>node in a lab given the same inputs. They could also verify that no

>unexplained messages appeared.

>

>Where was the influence of any conscious experiences in the robot?

>

If it is conscious, the influence is in the outputs, just like ours

are.

-------------------------
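
To make the logging scheme in the quoted example concrete, here is a
minimal, runnable Python sketch of the per-node log records and the
check the bank of computers would perform. All names here are
hypothetical, and lab_behaviour() is a toy stand-in for "the outputs a
single node gives in the lab for the same inputs":

--------
from collections import namedtuple

# One record per logged event, as the example describes: the time the
# message was sent, source node, destination node, and the message.
LogEntry = namedtuple("LogEntry", "time source dest message direction")

def lab_behaviour(node_id, inputs):
    # Toy single-node rule: fire 1 once the summed inputs reach 2.
    return [1] if sum(inputs) >= 2 else [0]

def verify(logs):
    # Each node's logged outputs must equal what that node, tested
    # singly in the lab, produces for the same logged inputs.
    for node_id in {e.source for e in logs if e.direction == "send"}:
        inputs = [e.message for e in logs
                  if e.dest == node_id and e.direction == "recv"]
        outputs = [e.message for e in logs
                   if e.source == node_id and e.direction == "send"]
        if outputs != lab_behaviour(node_id, inputs):
            return False  # an output a lone lab node would not have given
    return True  # every logged send is accountedted for by lab behaviour

logs = [
    LogEntry(0.0, "camera", "n1", 1, "recv"),
    LogEntry(0.1, "mic", "n1", 1, "recv"),
    LogEntry(0.2, "n1", "motor", 1, "send"),
]
print(verify(logs))  # True: node n1 behaved exactly as it would alone
--------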

 

No wonder the conversation went on so long, given that I was

corresponding with someone who was not only a coward, unable to face

answering because they had been caught out in their implausible

beliefs, but also dishonest. I'm of course referring to you.


Guest James Norris
Posted

On Jul 7, 7:59 pm, Jim07D7 <Jim0...@nospam.net> wrote:

> James Norris <JimNorris...@aol.com> said:

>

> >why do you introduce the idea of a 'soul' (for which there

> >is no evidence) to explain consciousness in animals such as ourselves,

> >when evolution (for which there is evidence) explains it perfectly?

>

> I've found a test for whether we are dealing with a 'bot:

>

> http://handrooster.com/comic/imabot.jpg

 

No, not necessarily a robot. I think he's just fishing for attention;

he probably makes up all those meaningless phrases and unreadable

paragraphs on purpose. He constructs whole postings full of

incomplete and ungrammatical non-sentences; when anyone asks for

clarification, he just stomps his feet and insists that you are a

coward to avoid answering his question, or stupid for not

understanding it! He's probably not a 'bot as such, but his

mentality does seem rather mechanical - he forgets anything which it

doesn't suit him to remember, finds it impossible to admit to any

mistakes or apologise for them, and expects everyone else to obey

him.

 

Oh, but isn't it obvious? He must be a "she" - a nasty spoilt little

brat of a girl who's found her way onto alt.atheism. She wants to

annoy everyone by pretending to hold religious beliefs because she finds

it funny to waste everyone's time! Typical female behaviour of the

worst sort, totally selfish and thoughtless.

 

I'm still trying to pin down what this means: "Where is the influence

in the outputs, if all the nodes are just giving the same outputs as

they would have singly in the lab given the same inputs. Are you

suggesting that the outputs given by a single node in the lab were

consciously influenced/affected, or that the influence/affect is such

that the outputs are the same as without the influence?" I think the

vagueness and bad wording are caused by her lack of understanding of

the concepts. She seems to think that neural networks in computers

are some special form of conscious program? Who knows what she

means? Who cares?

Guest James Norris
Posted

On Jul 7, 6:31 pm, someone2 <glenn.spig...@btinternet.com> wrote:

> On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > >You said:

>

> > > > > > > > > > >--------

> > > > > > > > > > >

>

> > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > >

> > > > > > > > > > >--------

>

> > > > > > > > > > >Well this was my response:

> > > > > > > > > > >--------

> > > > > > > > > > >Regarding free will, and your question as to how do we know whether

> > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > >have already replied:

> > > > > > > > > > >-----------

> > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > >-----------

>

> > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > > >part which gave the same response as it was shown, or changed the

> > > > > > > > > > >response so as not to be the same as what was shown to it. So I have

> > > > > > > > > > >removed the prediction from being shown.

>
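
To make the rule-breaking test quoted above concrete, here is a
minimal, runnable Python sketch (the rule shown is a hypothetical
example): a chooser that is shown any prediction rule can simply
consult it and do the opposite, so no shown rule survives the 1000
button presses:

--------
def rule(history):
    # Any published rule predicting the next button press from the
    # choices so far - here, hypothetically, "repeat the last choice".
    return history[-1] if history else "black"

def contrary_chooser(history):
    # Shown the rule, the chooser can always do the opposite of it.
    return "white" if rule(history) == "black" else "black"

history = []
for _ in range(1000):
    assert contrary_chooser(history) != rule(history)  # rule never fits
    history.append(contrary_chooser(history))
print("the shown rule was broken on all 1000 presses")
--------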

> > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > >Free will is, for example, that between 2 options, both were possible,

> > > > > > > > > > >and you were able to consciously influence which would happen; and

> > > > > > > > > > >added:

>

> > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > >happen.

> > > > > > > > > > >--------

>

> > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > >written.

>

> > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > observable behaviors you would regard as indicating

> > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > <....>

>

> > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > context to the question:

>

> > > > > > > > > --------

> > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > examined by a bank of computers, verifying that for the input

> > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > > unexplained messages appeared.

>

> > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > --------

>

> > > > > > > > > To which you replied:

> > > > > > > > > --------

> > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > are.

> > > > > > > > > --------

>

> > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > affect?

>

> > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > fact correspond to consciousness.

>

> > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > bioelectric firings in the brain causes meaningful shapes in the brain

> > > > > > > > which we understand as related to reality, which we call

> > > > > > > > consciousness, even though each firing is not itself conscious.

>
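
A small numerical sketch of the pixel analogy above (illustrative
values; requires NumPy): a million pixels, each one of just three
colours, still form a shape as a whole, and blurring slightly - roughly
what the eye does at a distance - yields colours no single pixel has:

--------
import numpy as np

# 1000x1000 image: a million pixels, each pure red, green, or blue.
img = np.zeros((1000, 1000, 3), dtype=np.uint8)
img[:, 0::2] = [255, 0, 0]    # even-numbered columns red
img[:, 1::2] = [0, 255, 0]    # odd-numbered columns green
img[600:, :] = [0, 0, 255]    # a blue band: a shape no single pixel "is"

print(len(np.unique(img.reshape(-1, 3), axis=0)))  # 3: pixel colours

# Average neighbouring columns, as the eye does from afar:
far = img.reshape(1000, 500, 2, 3).mean(axis=2)
print(far[0, 0])  # [127.5 127.5 0.]: a yellow that no pixel contains
--------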

> > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > consciousness, because of the soul and God-given free will; your

> > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > random coin toss, that you yourself had initiated.

>

> > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > just giving the same outputs as they would have singly in

> > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > what you mean.

>

> > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > the experiences would be the same (assuming the problem of

> > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > question. You still haven't addressed it. You are just distracting

> > > > > from it by asking me a question. It does beg the question of why you

> > > > > don't seem to be able to face answering.

>

> > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > nodes are just giving the same outputs as they would have singly in the

> > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > that the affect/influence is such that the outputs are the same as

> > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > first response. As you have repeated your very poorly-worded question

> > > > - which incidentally you should try and tidy up to give it a proper

> > > > meaning - I'll repeat my reply:

>

> > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > the input nodes, the outputs from these go to the inputs of other

> > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > overall system is conscious either. I gather (from your rather

> > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > is conscious.

>
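
A minimal sketch in Python of the flow described in the paragraph
above - sense data into the input nodes, their outputs feeding further
nodes, until an output node sounds the alarm. The weights and threshold
are made up purely for illustration:

--------
def node(inputs, weights, threshold=1.0):
    # One node: weighted sum of its inputs; it fires 1 at threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

camera = [0.9, 0.2, 0.8]   # sense data, e.g. from a TV camera

# The input nodes' outputs go to the inputs of other nodes...
hidden = [node(camera, w) for w in ([1, 0, 1], [0, 1, 0], [1, 1, 1])]

# ...until the network output node triggers a response.
alarm = node(hidden, [0.6, 0.6, 0.6])
print("seen something - alarm!" if alarm else "nothing seen")
--------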

> > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > But if that were possible, the same might also be occurring in the

> > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > influence is conscious. I think you have been asking other responders

> > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > computer.

>

> > > > > > Remembering that computers have been specifically designed so that the

> > > > > > external environment does not affect them, and the most sensitive

> > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > influences so that it is almost impossible for anything other than a

> > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > eventual 'glitch'.

>
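
A toy Python illustration of that last point (purely illustrative - a
real single-event upset happens in the hardware, not in a script): one
flipped bit in a stored value is enough to produce a wildly wrong
number and send the program down a different path:

--------
balance = 1000
corrupted = balance ^ (1 << 30)   # a single bit flips in memory

print(balance, corrupted)         # 1000 vs 1073742824

for value in (balance, corrupted):
    if value > 1_000_000:
        print("crash / wrong number")   # the eventual 'glitch'
    else:
        print("normal run")
--------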

> > > > If that isn't an adequate response, try writing out your question

> > > > again in a more comprehensible form. You don't distinguish between

> > > > the overall behaviour of the robot (do the outputs make it walk and

> > > > talk?) and the behaviour of the computer processing (the I/O on the

> > > > chip and program circuitry), for example.

>

> > > > I'm trying to help you express whatever it is you are trying to

> > > > explain to the groups in your header (heathen alt.atheism, God-fearing

> > > > alt.religion) in which you are taking the God-given free-will

> > > > position. It looks like you are trying to construct an unassailable

> > > > argument relating subjective experiences and behaviour, computers and

> > > > brains, and free will, to notions of the existence of God and the

> > > > soul. It might be possible to do that if you express yourself more

> > > > clearly, and some of your logic was definitely wrong earlier, so you

> > > > would have to correct that as well. I expect that if whatever you

> > > > come up with doesn't include an explanation for why evolutionary

> > > > principles don't produce consciousness (as you claim) you won't really

> > > > be doing anything other than time-wasting pseudo-science, because

> > > > scientifically-minded people will say you haven't answered our

> > > > question - why do you introduce the idea of a 'soul' (for which there

> > > > is no evidence) to explain consciousness in animals such as ourselves,

> > > > when evolution (for which there is evidence) explains it perfectly?

>

> > > I'll put the example again:

>

> > > --------

> > > Supposing there was an artificial neural network, with a billion more

> > > nodes than you had neurons in your brain, and a very 'complicated'

> > > configuration, which drove a robot. The robot, due to its behaviour

> > > (being a Turing Equivalent), caused some atheists to claim it was

> > > consciously experiencing. Now supposing each message between each node

> > > contained additional information such as source node, destination node

> > > (which could be the same as the source node, for feedback loops), the

> > > time the message was sent, and the message. Each node wrote out to a

> > > log on receipt of a message, and on sending a message out. Now after

> > > an hour of the atheist communicating with the robot, the logs could be

> > > examined by a bank of computers, verifying that for the input

> > > messages each received, the outputs were those expected by a single

> > > node in a lab given the same inputs. They could also verify that no

> > > unexplained messages appeared.

> > > ---------

>

> > > You seem to have understood the example, but asked whether it walked

> > > and talked: yes, let's say it does, would be a contender for an

> > > Olympic gymnastics medal, and would say it hurt if you stamped on its foot.

>

> > > With regards to the question you are having a problem comprehending,

> > > I'll reword it for you. By conscious experiences I am referring to the

> > > sensations we experience such as pleasure and pain, visual and

> > > auditory perception etc.

>

> > > Are you suggesting:

>

> > > A) The atheists were wrong to consider it to be consciously

> > > experiencing.

> > > B) Were correct to say that it was consciously experiencing, but that it

> > > doesn't influence behaviour.

> > > C) It might have been consciously experiencing - how could you tell, if it

> > > doesn't influence behaviour?

> > > D) It was consciously experiencing and that influenced its behaviour.

>

> > > If you select D, as all the nodes are giving the same outputs as they

> > > would have singly in the lab given the same inputs, could you also

> > > select between D1, and D2:

>

> > > D1) The outputs given by a single node in the lab were also influenced

> > > by conscious experiences.

> > > D2) The outputs given by a single node in the lab were not influenced

> > > by conscious experiences, but the influence of the conscious

> > > experiences within the robot, is such that the outputs are the same as

> > > without the influence of the conscious experiences.

>

> > > If you are still having problems comprehending, then simply write out

> > > the sentences you could not understand, and I'll reword them for you.

> > > If you want to add another option, then simply state it, obviously the

> > > option should be related to whether the robot was consciously

> > > experiencing, and if it was, whether the conscious experiences

> > > influenced behaviour.

>

> > As you requested, here is the question I didn't understand: "Regarding

> > the robot, where is the influence/ affect of conscious experiences in

> > the outputs, if all the nodes are just giving the same outputs as they

> > would have singly in the lab given the same inputs."

>

> I had rewritten the question for you. Why did you simply reference the

> old one, and not answer the new reworded one? It seems like you are

> desperately trying to avoid answering.

 

I was interested in your suggestion that you think you can prove that

God exists because computers aren't conscious (or whatever it is you

are trying to say), and I was trying to help you explain that.

Whatever it is you are trying to ask me, I can't even attempt to give

you an answer unless you can express your question in proper

sentences. In any case, you are the one who is claiming properties

of consciousness of robot inputs and outputs (or whatever you are

claiming), so you will in due course be expected to defend your theory

(once you have written it out in comprehensible English sentences)

without any help from me, so you shouldn't be relying on my answers to

your questions anyway.

Guest someone2
Posted

On 8 Jul, 02:06, James Norris <JimNorri...@aol.com> wrote:

> On Jul 7, 6:31 pm, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > > On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > > >You said:

>

> > > > > > > > > > > >--------

> > > > > > > > > > > >

>

> > > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > > >

> > > > > > > > > > > >--------

>

> > > > > > > > > > > >Well this was my response:

> > > > > > > > > > > >--------

> > > > > > > > > > > >Regarding free will, and your question as to how do we know whether

> > > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > > >have already replied:

> > > > > > > > > > > >-----------

> > > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > > >-----------

>

> > > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > > > >part which gave the same response as it was shown, or changed the

> > > > > > > > > > > >response so as not to be the same as what was shown to it. So I have

> > > > > > > > > > > >removed the prediction from being shown.

>

> > > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > > >Free will is, for example, that between 2 options, both were possible,

> > > > > > > > > > > >and you were able to consciously influence which would happen; and

> > > > > > > > > > > >added:

>

> > > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > > >happen.

> > > > > > > > > > > >--------

>

> > > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > > >written.

>

> > > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > > observable behaviors you would regard as indicating

> > > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > > <....>

>

> > > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > > context to the question:

>

> > > > > > > > > > --------

> > > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > > examined by a bank of computers, verifying that for the input

> > > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > > > unexplained messages appeared.

>

> > > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > > --------

>

> > > > > > > > > > To which you replied:

> > > > > > > > > > --------

> > > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > > are.

> > > > > > > > > > --------

>

> > > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > > affect?

>

> > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > > fact correspond to consciousness.

>

> > > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > > bioelectric firings in the brain causes meaningful shapes in the brain

> > > > > > > > > which we understand as related to reality, which we call

> > > > > > > > > consciousness, even though each firing is not itself conscious.

>

> > > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > > consciousness, because of the soul and God-given free will; your

> > > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > > random coin toss, that you yourself had initiated.

>

> > > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > > just giving the same outputs as they would have singly in

> > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > > what you mean.

>

> > > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > > the experiences would be the same (assuming the problem of

> > > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > > question. You still haven't addressed it. You are just distracting

> > > > > > from it by asking me a question. It does beg the question of why you

> > > > > > don't seem to be able to face answering.

>

> > > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > > nodes are just giving the same outputs as they would have singly in the

> > > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > > that the affect/influence is such that the outputs are the same as

> > > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > > first response. As you have repeated your very poorly-worded question

> > > > > - which incidentally you should try and tidy up to give it a proper

> > > > > meaning - I'll repeat my reply:

>

> > > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > > the input nodes, the outputs from these go to the inputs of other

> > > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > > overall system is conscious either. I gather (from your rather

> > > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > > is conscious.

>

> > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > computer.

>

> > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > eventual 'glitch'.

>

> > > > > If that isn't an adequate response, try writing out your question

> > > > > again in a more comprehensible form. You don't distinguish between

> > > > > the overall behaviour of the robot (do the outputs make it walk and

> > > > > talk?) and the behaviour of the computer processing (the I/O on the

> > > > > chip and program circuitry), for example.

>

> > > > > I'm trying to help you express whatever it is you are trying to

> > > > > explain to the groups in your header (heathen alt.atheism, God-fearing

> > > > > alt.religion) in which you are taking the God-given free-will

> > > > > position. It looks like you are trying to construct an unassailable

> > > > > argument relating subjective experiences and behaviour, computers and

> > > > > brains, and free will, to notions of the existence of God and the

> > > > > soul. It might be possible to do that if you express yourself more

> > > > > clearly, and some of your logic was definitely wrong earlier, so you

> > > > > would have to correct that as well. I expect that if whatever you

> > > > > come up with doesn't include an explanation for why evolutionary

> > > > > principles don't produce consciousness (as you claim) you won't really

> > > > > be doing anything other than time-wasting pseudo-science, because

> > > > > scientifically-minded people will say you haven't answered our

> > > > > question - why do you introduce the idea of a 'soul' (for which there

> > > > > is no evidence) to explain consciousness in animals such as ourselves,

> > > > > when evolution (for which there is evidence) explains it perfectly?

>

> > > > I'll put the example again:

>

> > > > --------

> > > > Supposing there was an artificial neural network, with a billion more

> > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > consciously experiencing. Now supposing each message between each node

> > > > contained additional information such as source node, destination node

> > > > (which could be the same as the source node, for feedback loops), the

> > > > time the message was sent, and the message. Each node wrote out to a

> > > > log on receipt of a message, and on sending a message out. Now after

> > > > an hour of the atheist communicating with the robot, the logs could be

> > > > examined by a bank of computers, verifying that for the input

> > > > messages each received, the outputs were those expected by a single

> > > > node in a lab given the same inputs. They could also verify that no

> > > > unexplained messages appeared.

> > > > ---------
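
A sketch of the logging scheme just described: each message carries a
source node, destination node, timestamp and payload, and the logs are
checked against what a single lab node would output for the same inputs.
All names here are hypothetical, a sketch of the idea rather than code
from the thread:

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class LoggedMessage:
        source: int       # sending node (may equal destination, for feedback loops)
        destination: int  # receiving node
        time_sent: float  # when the message was sent
        payload: float    # the message itself

    def verify_logs(received: Dict[int, List[LoggedMessage]],
                    sent: Dict[int, List[LoggedMessage]],
                    lab_node: Callable[[List[float]], List[float]]) -> bool:
        """Check that each node's logged outputs are exactly those a single
        node in the lab would give for the same logged inputs."""
        for node_id, inbox in received.items():
            inputs = [m.payload for m in sorted(inbox, key=lambda m: m.time_sent)]
            outbox = sorted(sent.get(node_id, []), key=lambda m: m.time_sent)
            if [m.payload for m in outbox] != lab_node(inputs):
                return False  # an unexplained or missing message
        return True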

>

> > > > You seem to have understood the example, but asked whether it walked

> > > > and talked - yes, let's say it does, and would be a contender for an

> > > > Olympic gymnastics medal, and said it hurt if you stamped on its foot.

>

> > > > With regards to the question you are having a problem comprehending,

> > > > I'll reword it for you. By conscious experiences I am referring to the

> > > > sensations we experience such as pleasure and pain, visual and

> > > > auditory perception etc.

>

> > > > Are you suggesting:

>

> > > > A) The atheists were wrong to consider it to be consciously

> > > > experiencing.

> > > > B) They were correct to say that it was consciously experiencing, but

> > > > that it doesn't influence behaviour.

> > > > C) It might have been consciously experiencing - how could you tell? -

> > > > it doesn't influence behaviour.

> > > > D) It was consciously experiencing and that influenced its behaviour.

>

> > > > If you select D, as all the nodes are giving the same outputs as they

> > > > would have singly in the lab given the same inputs, could you also

> > > > select between D1, and D2:

>

> > > > D1) The outputs given by a single node in the lab were also influenced

> > > > by conscious experiences.

> > > > D2) The outputs given by a single node in the lab were not influenced

> > > > by conscious experiences, but the influence of the conscious

> > > > experiences within the robot, is such that the outputs are the same as

> > > > without the influence of the conscious experiences.

>

> > > > If you are still having problems comprehending, then simply write out

> > > > the sentences you could not understand, and I'll reword them for you.

> > > > If you want to add another option, then simply state it, obviously the

> > > > option should be related to whether the robot was consciously

> > > > experiencing, and if it was, whether the conscious experiences

> > > > influenced behaviour.

>

> > > As you requested, here is the question I didn't understand: "Regarding

> > > the robot, where is the influence/ affect of conscious experiences in

> > > the outputs, if all the nodes are just giving the same outputs as they

> > > would have singly in the lab given the same inputs."

>

> > I had rewritten the question for you. Why did you simply reference the

> > old one, and not answer the new reworded one? It seems like you are

> > desperately trying to avoid answering.

>

> I was interested in your suggestion that you think you can prove that

> God exists because computers aren't conscious (or whatever it is you

> are trying to say), and I was trying to help you explain that.

> Whatever it is you are trying to ask me, I can't even attempt to give

> you an answer unless you can express your question in proper

> sentences. In any case, you are the one who is claiming properties

> of consciousness of robot inputs and outputs (or whatever you are

> claiming), so you will in due course be expected to defend your theory

> (once you have written it out in comprehensible English sentences)

> without any help from me, so you shouldn't be relying on my answers to

> your questions anyway.

 

Why didn't you point out the sentence that was giving you a problem in

the reworded question? As I say, it seems like you are desperately

trying to avoid answering. Though to help you, I am not claiming any

properties or consciousness of robot inputs and outputs, I am

questioning you about your understanding. Though since you think

twanging elastic bands with twigs is sufficient to give rise to

conscious experiences, I'd be interested if you choose option A, if

ever you do find the courage to answer.

Guest James Norris
Posted

On Jul 8, 2:16 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 8 Jul, 02:06, James Norris <JimNorri...@aol.com> wrote:

>

> > On Jul 7, 6:31 pm, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > > > On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > > > >You said:

>

> > > > > > > > > > > > >--------

> > > > > > > > > > > > >

>

> > > > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > > > >

> > > > > > > > > > > > >--------

>

> > > > > > > > > > > > >Well this was my response:

> > > > > > > > > > > > >--------

> > > > > > > > > > > > >Regarding free will, and you question as to how do we know whether

> > > > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > > > >have already replied:

> > > > > > > > > > > > >-----------

> > > > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > > > >-----------

>

> > > > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > > > > > > > >response so as not to be the same as that which was shown to it. So I have

> > > > > > > > > > > > >removed its prediction from being shown.

>

> > > > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > > > >Free will is, for example, that between 2 options, both were possible,

> > > > > > > > > > > > >and you were able to consciously influence which would happen. And

> > > > > > > > > > > > >added:

>

> > > > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > > > >happen.

> > > > > > > > > > > > >--------

>

> > > > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > > > >written.

>

> > > > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > > > observable behaviors you would regard as indicating

> > > > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > > > <....>

>

> > > > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > > > You have forgotten my definition of influence. If I may simplify it:

> > > > > > > > > > > > something is influential if it affects an outcome. If a single node in

> > > > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > > > context to the question:

>

> > > > > > > > > > > --------

> > > > > > > > > > > <...>

>

> > > > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > > > --------

>

> > > > > > > > > > > To which you replied:

> > > > > > > > > > > --------

> > > > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > > > are.

> > > > > > > > > > > --------

>

> > > > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > > > affect?

>

> > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > > > fact correspond to consciousness.

>

> > > > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > > > bioelectric firings in the brain causes meaningful shapes in the brain

> > > > > > > > > > which we understand as related to reality, which we call

> > > > > > > > > > consciousness, even though each firing is not itself conscious.
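
A sketch of the pixel analogy above (illustrative only; the disc and the
colour choices are assumptions for the example): each of the million
pixels is just one colour, yet the whole array forms a shape that exists
only at the level of the whole image:

    WIDTH = HEIGHT = 1000

    def pixel(x: int, y: int) -> str:
        # Each pixel, taken alone, is only ever "red" or "blue"...
        inside_disc = (x - 500) ** 2 + (y - 500) ** 2 < 300 ** 2
        return "red" if inside_disc else "blue"

    image = [[pixel(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
    # ...but the million-pixel whole depicts a red disc on a blue field -
    # a meaningful shape that no individual pixel contains.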

>

> > > > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > > > consciousness, because of the soul and God-given free will; your

> > > > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > > > random coin toss, that you yourself had initiated.

>

> > > > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > > > just giving the same outputs as they would have singly in

> > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > > > what you mean.

>

> > > > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > > > the experiences would be the same (assuming the problem of

> > > > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > > > question. You still haven't addressed it. You are just distracting

> > > > > > > away, by asking me a question. It does beg the question of why you

> > > > > > > don't seem to be able to face answering.

>

> > > > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > > > nodes are just giving the same outputs as they would have singly in the

> > > > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > > > that the affect/influence is such that the outputs are the same as

> > > > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > > > first response. As you have repeated your very poorly-worded question

> > > > > > - which incidentally you should try and tidy up to give it a proper

> > > > > > meaning - I'll repeat my reply:

>

> > > > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > > > > the input nodes; the outputs from these go to the inputs of other

> > > > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > > > overall system is conscious either. I gather (from your rather

> > > > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > > > is conscious.
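
A toy sketch of the processing described above: sense data feeds the
input nodes, outputs cascade to further nodes, and an output node finally
sounds an alarm. The weights and thresholds are hypothetical, purely for
illustration:

    def node(inputs, weights, threshold=1.0):
        # A single node fires iff its weighted input sum exceeds the threshold.
        return 1.0 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0.0

    camera = [0.9, 0.2, 0.7]                        # sense data from a TV camera
    hidden = [node(camera, w) for w in ([1, 0, 1], [0, 1, 1])]
    if node(hidden, [1, 1], threshold=0.5) == 1.0:  # network output node fires
        print("alarm: something 'seen'")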

> > <...>

> Why didn't you point out the sentence that was giving you a problem in

> the reworded question? As I say, it seems like you are desperately

> trying to avoid answering. Though to help you, I am not claiming any

> properties or consciousness of robot inputs and outputs, I am

> questioning you about your understanding. Though since you think

> twanging elastic bands with twigs is sufficient to give rise to

> conscious experiences, I'd be interested if you choose option A, if

> ever you do find the courage to answer.

 

It takes no courage to respond to Usenet postings, just a computer

terminal and a certain amount of time and patience. All I was asking

you to do is to explain what you mean by the following sentence:

"Regarding the robot, where is the influence/ affect of conscious

> experiences in the outputs, if all the nodes are just giving the same

outputs as they would have singly in the lab given the same inputs."

What inputs and outputs are you talking about?

Guest someone2
Posted

On 8 Jul, 03:01, James Norris <JimNorri...@aol.com> wrote:

> On Jul 8, 2:16?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 8 Jul, 02:06, James Norris <JimNorri...@aol.com> wrote:

>

> > > On Jul 7, 6:31?pm, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > On Jul 7, 9:37?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > On Jul 7, 3:06?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > On Jul 7, 2:23?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > On Jul 7, 12:39?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > > > > >You said:

>

> > > > > > > > > > > > > >--------

> > > > > > > > > > > > > >

>

> > > > > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > > > > >

> > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > >Well this was my response:

> > > > > > > > > > > > > >--------

> > > > > > > > > > > > > >Regarding free will, and you question as to how do we know whether

> > > > > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > > > > >have already replied:

> > > > > > > > > > > > > >-----------

> > > > > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > > > > >being no decernable rules. Any rule shown to the being, the being

> > > > > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > > > > >-----------

>

> > > > > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > > > > > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > > > > > > > > > >removed it prediction from being shown.

>

> > > > > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > > > > > > > > > >and you were able to consciously influence which would happen. and

> > > > > > > > > > > > > >added:

>

> > > > > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > > > > >happen.

> > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > > > > >written.

>

> > > > > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > > > > observable behaviors you would regard as regard as indicating

> > > > > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > > > > <....>

>

> > > > > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > > > > context to the question:

>

> > > > > > > > > > > > --------

> > > > > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > > > > examined by a bank of computers, varifying, that for the input

> > > > > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > > > > node in a lab given the same inputs. They could also varify that no

> > > > > > > > > > > > unexplained messages appeared.

>

> > > > > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > > > > --------

>

> > > > > > > > > > > > To which you replied:

> > > > > > > > > > > > --------

> > > > > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > > > > are.

> > > > > > > > > > > > --------

>

> > > > > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > > > > the nodes are justgiving the same outputs as they would have singly in

> > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > > > > affect?

>

> > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > > > > fact correspond to consciousness.

>

> > > > > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > > > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > > > > > > > > > which we understand as related to reality, which we call

> > > > > > > > > > > consciousness, even though each firing is not itself conscious.

>

> > > > > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > > > > consciousness, because of the soul and God-given free will, your

> > > > > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > > > > own actions, and endeavour to have your own affect on the future. If

> > > > > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > > > > random coin toss, that you yourself had initiated.

>

> > > > > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > > > > justgiving the same outputs as they would have singly in

> > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > > > > what you mean.

>

> > > > > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > > > > the experiences would be the same (assuming the problem of

> > > > > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > > > > question. You still haven't addressed it. You are just distracting

> > > > > > > > away, by asking me a question. It does beg the question of why you

> > > > > > > > don't seem to be able to face answering.

>

> > > > > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > > > > nodes are justgiving the same outputs as they would have singly in the

> > > > > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > > > > that the affect/influence is such that the outputs are the same as

> > > > > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > > > > first response. As you have repeated your very poorly-worded question

> > > > > > > - which incidentally you should try and tidy up to give it a proper

> > > > > > > meaning - I'll repeat my reply:

>

> > > > > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > > > > the input nodes, the output from these go to the inputs of other

> > > > > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > > > > overall system is conscious either. I gather (from your rather

> > > > > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > > > > is conscious.

>

> > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > But if that were possible, the same might also be occuring in the

> > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > computer.

>

> > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > eventual 'glitch'.

>

> > > > > > > If that isn't an adequate response, try writing out your question

> > > > > > > again in a more comprehensible form. You don't distinguish between

> > > > > > > the overall behaviour of the robot (do the outputs make it walk and

> > > > > > > talk?) and the behaviour of the computer processing (the I/O on the

> > > > > > > chip and program circuitry), for example.

>

> > > > > > > I'm trying to help you express whatever it is you are trying to

> > > > > > > explain to the groups in your header (heathen alt.atheism, God-fearing

> > > > > > > alt.religion) in which you are taking the God-given free-will

> > > > > > > position. It looks like you are trying to construct an unassailable

> > > > > > > argument relating subjective experiences and behaviour, computers and

> > > > > > > brains, and free will, to notions of the existence of God and the

> > > > > > > soul. It might be possible to do that if you express yourself more

> > > > > > > clearly, and some of your logic was definitely wrong earlier, so you

> > > > > > > would have to correct that as well. I expect that if whatever you

> > > > > > > come up with doesn't include an explanation for why evolutionary

> > > > > > > principles don't produce consciousness (as you claim) you won't really

> > > > > > > be doing anything other than time-wasting pseudo-science, because

> > > > > > > scientifically-minded people will say you haven't answered our

> > > > > > > question - why do you introduce the idea of a 'soul' (for which their

> > > > > > > is no evidence) to explain consciousness in animals such as ourselves,

> > > > > > > when evolution (for which there is evidence) explains it perfectly?

>

> > > > > > I'll put the example again:

>

> > > > > > --------

> > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > contained additional information such as source node, destination node

> > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > examined by a bank of computers, varifying, that for the input

> > > > > > messages each received, the outputs were those expected by a single

> > > > > > node in a lab given the same inputs. They could also varify that no

> > > > > > unexplained messages appeared.

> > > > > > ---------

>

> > > > > > You seem to have understood the example, but asked whether it walked

> > > > > > and talked, yes lets say it does, and would be a contender for an

> > > > > > olympic gymnastics medal, and said it hurt if you stamped on its foot.

>

> > > > > > With regards to the question you are having a problem comprehending,

> > > > > > I'll reword it for you. By conscious experiences I am refering to the

> > > > > > sensations we experience such as pleasure and pain, visual and

> > > > > > auditory perception etc.

>

> > > > > > Are you suggesting:

>

> > > > > > A) The atheists were wrong to consider it to be consciously

> > > > > > experiencing.

> > > > > > B) Were correct to say that was consciously experiencing, but that it

> > > > > > doesn't influence behaviour.

> > > > > > C) It might have been consciously experiencing, how could you tell, it

> > > > > > doesn't influence behaviour.

> > > > > > D) It was consciously experiencing and that influenced its behaviour.

>

> > > > > > If you select D, as all the nodes are giving the same outputs as they

> > > > > > would have singly in the lab given the same inputs, could you also

> > > > > > select between D1 and D2:

>

> > > > > > D1) The outputs given by a single node in the lab were also influenced

> > > > > > by conscious experiences.

> > > > > > D2) The outputs given by a single node in the lab were not influenced

> > > > > > by conscious experiences, but the influence of the conscious

> > > > > > experiences within the robot, is such that the outputs are the same as

> > > > > > without the influence of the conscious experiences.

>

> > > > > > If you are still having problems comprehending, then simply write out

> > > > > > the sentences you could not understand, and I'll reword them for you.

> > > > > > If you want to add another option, then simply state it; obviously the

> > > > > > option should be related to whether the robot was consciously

> > > > > > experiencing, and if it was, whether the conscious experiences

> > > > > > influenced behaviour.

>

> > > > > As you requested, here is the question I didn't understand: "Regarding

> > > > > the robot, where is the influence/ affect of conscious experiences in

> > > > > the outputs, if all the nodes are just giving the same outputs as they

> > > > > would have singly in the lab given the same inputs."

>

> > > > I had rewritten the question for you. Why did you simply reference the

> > > > old one, and not answer the new reworded one? It seems like you are

> > > > desperately trying to avoid answering.

>

> > > I was interested in your suggestion that you think you can prove that

> > > God exists because computers aren't conscious (or whatever it is you

> > > are trying to say), and I was trying to help you explain that.

> > > Whatever it is you are trying to ask me, I can't even attempt to give

> > > you an answer unless you can express your question in proper

> > > sentences. In any case, you are the one who is claiming properties

> > > of consciousness of robot inputs and outputs (or whatever you are

> > > claiming), so you will in due course be expected to defend your theory

> > > (once you have written it out in comprehensible English sentences)

> > > without any help from me, so you shouldn't be relying on my answers to

> > > your questions anyway.

>

> > Why didn't you point out the sentence that was giving you a problem in

> > the reworded question? As I say, it seems like you are desperately

> > trying to avoid answering. Though to help you, I am not claiming any

> > properties or consciousness of robot inputs and outputs, I am

> > questioning you about your understanding. Though since you think

> > twanging elastic bands with twigs is sufficient to give rise to

> > conscious experiences, I'd be interested if you choose option A, if

> > ever you do find the courage to answer.

>

> It takes no courage to respond to Usenet postings, just a computer

> terminal and a certain amount of time and patience. All I was asking

> you to do is to explain what you mean by the following sentence:

> "Regarding the robot, where is the influence/ affect of conscious

> > experiences in the outputs, if all the nodes are just giving the same

> outputs as they would have singly in the lab given the same inputs."

> What inputs and outputs are you talking about?

 

Here is the example again (it is still above in the history: when you

first said you didn't understand the bit you have quoted, I reworded it

for you; you then ignored the reworded version and restated that you

didn't understand the original question; when I pointed out what you had

done and asked you to answer the reworded version, you ignored me again,

and have now done so yet again, giving the impression you are trying to

avoid answering):

 

--------

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that for the input

messages each received, the outputs were those expected by a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.

---------

 

The inputs and outputs refer to the inputs and outputs to the nodes.

I'll leave it up to you to either answer the bit you have just quoted,

or the reworded version (the one with the options A-D, and D1 and D2).

Is there a chance that you might answer this time?
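
A minimal Python sketch of the logging and verification scheme in the example above; the message fields follow the example, but the toy node rule and all of the names are illustrative assumptions, not something from the thread:

--------
from dataclasses import dataclass

@dataclass
class Message:
    source: int      # sending node (may equal dest, for feedback loops)
    dest: int        # receiving node
    sent_at: float   # the time the message was sent
    payload: float   # the message itself

def lab_output(node_id: int, inputs: list[Message]) -> list[float]:
    # Stand-in for the lab measurement: what a single node, tested in
    # isolation, emits when given these same inputs (a toy rule here).
    total = sum(m.payload for m in inputs)
    return [max(0.0, total)]

def verify(recv_log: dict[int, list[Message]],
           send_log: dict[int, list[Message]]) -> bool:
    # The 'bank of computers' check: every node's logged outputs match
    # those expected of a single node in a lab given the same inputs,
    # and no unexplained messages appear.
    for node_id, received in recv_log.items():
        expected = lab_output(node_id, received)
        actual = [m.payload for m in send_log.get(node_id, [])]
        if actual != expected:
            return False  # an output the isolated lab node would not give
    return set(send_log) <= set(recv_log)  # no senders with unlogged inputs
--------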

Guest James Norris
Posted

On Jul 8, 4:05 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 8 Jul, 03:01, James Norris <JimNorri...@aol.com> wrote:

>

> > On Jul 8, 2:16 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > On 8 Jul, 02:06, James Norris <JimNorri...@aol.com> wrote:

>

> > > > On Jul 7, 6:31 pm, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > > On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > > > > > >You said:

>

> > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > >

>

> > > > > > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > > > > > >

> > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > >Well this was my response:

> > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > >Regarding free will, and your question as to how do we know whether

> > > > > > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > > > > > >have already replied:

> > > > > > > > > > > > > > >-----------

> > > > > > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > > > > > >-----------

>

> > > > > > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > > > > > >The reason for the change is that a robot could always have a

> > > > > > > > > > > > > > >part which gave the same response as it was shown, or changed the

> > > > > > > > > > > > > > >response so as not to be the same as what was shown to it. So I have

> > > > > > > > > > > > > > >removed the prediction from being shown.
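
The point being made here, that a device shown any prediction of its choice can simply defeat it, is easy to sketch (hypothetical Python, not from the thread):

--------
def contrarian(shown_prediction: str) -> str:
    # A robot part that always presses the opposite button to the one
    # predicted for it, so any rule it is shown must fail.
    return "white" if shown_prediction == "black" else "black"

for prediction in ["black", "white", "white", "black"]:
    assert contrarian(prediction) != prediction  # the shown rule never holds
--------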

>

> > > > > > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > > > > > >Free will is, for example, that between 2 options, both were possible,

> > > > > > > > > > > > > > >and you were able to consciously influence which would happen. And

> > > > > > > > > > > > > > >added:

>

> > > > > > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > > > > > >happen.

> > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > > > > > >written.

>

> > > > > > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > > > > > > observable behaviors you would regard as indicating

> > > > > > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > > > > > <....>

>

> > > > > > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > > > > > You have forgotten my definition of influence. If I may simplify it:

> > > > > > > > > > > > > > something is influential if it affects an outcome. If a single node in

> > > > > > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > > > > > That wasn't stitched together; that was a cut and paste of what I

> > > > > > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > > > > > context to the question:

>

> > > > > > > > > > > > > --------

> > > > > > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > > > > > examined by a bank of computers, verifying that for the input

> > > > > > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > > > > > > unexplained messages appeared.

>

> > > > > > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > To which you replied:

> > > > > > > > > > > > > --------

> > > > > > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > > > > > are.

> > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > > > > > affect?

>

> > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > > > > > fact correspond to consciousness.

>

> > > > > > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > > > > > bioelectric firings in the brain causes meaningful shapes in the brain

> > > > > > > > > > > > which we understand as related to reality, which we call

> > > > > > > > > > > > consciousness, even though each firing is not itself conscious.
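
The pixel analogy in the preceding paragraph can be sketched in a few lines of Python (an 8x8 stand-in for the 1000x1000 grid; purely illustrative):

--------
SIZE = 8  # stand-in for the 1000x1000 grid of the analogy
image = [["blue"] * SIZE for _ in range(SIZE)]
for i in range(SIZE):
    image[i][i] = "red"  # single dots that together form a diagonal line

# No individual pixel is a line, but the whole array encodes one:
for row in image:
    print("".join("R" if colour == "red" else "." for colour in row))
--------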

>

> > > > > > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > > > > > consciousness, because of the soul and God-given free will, your

> > > > > > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > > > > > random coin toss, that you yourself had initiated.
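
The appeal to Chaos Theory above is the standard sensitive-dependence point: a fully deterministic rule can still make long-run prediction hopeless. The logistic map is the usual toy demonstration (an illustrative Python sketch, not part of the thread):

--------
def logistic(x: float, r: float = 4.0) -> float:
    # A deterministic rule that is nevertheless chaotic at r = 4.
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001  # two almost-identical starting states
for _ in range(40):
    a, b = logistic(a), logistic(b)
print(abs(a - b))  # after 40 steps the trajectories bear no resemblance
--------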

>

> > > > > > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > > > > > just giving the same outputs as they would have singly in

> > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > > > > > what you mean.

>

> > > > > > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > > > > > the experiences would be the same (assuming the problem of

> > > > > > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > > > > > question. You still haven't addressed it. You are just deflecting

> > > > > > > > > by asking me a question. It does raise the question of why you

> > > > > > > > > don't seem to be able to face answering.

>

> > > > > > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > > > > > nodes are just giving the same outputs as they would have singly in the

> > > > > > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > > > > > that the affect/influence is such that the outputs are the same as

> > > > > > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > > > > > first response. As you have repeated your very poorly-worded question

> > > > > > > > - which incidentally you should try and tidy up to give it a proper

> > > > > > > > meaning - I'll repeat my reply:

>

> > > > > > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > > > > > the input nodes, the outputs from these go to the inputs of other

> > > > > > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > > > > > overall system is conscious either. I gather (from your rather

> > > > > > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > > > > > is conscious.
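
The data flow described in this paragraph, sense data entering the input nodes and propagating on until an output node sounds an alarm, looks roughly like this (a Python sketch; the weights, sizes and threshold are invented for illustration):

--------
import math

def layer(inputs: list[float], weights: list[list[float]]) -> list[float]:
    # Each node sums its weighted inputs and applies a squashing function.
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

camera = [0.9, 0.1, 0.8]  # toy sense data from a 'TV camera'
hidden = layer(camera, [[0.5, -0.2, 0.7], [0.3, 0.8, -0.5]])
(alarm,) = layer(hidden, [[1.2, 1.1]])  # a single network output node
if alarm > 0.7:  # threshold at which the device has 'seen' something
    print("alarm!")
--------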

>

> > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > computer.

>

> > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > eventual 'glitch'.
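
The point about a single bit is easily demonstrated: one flipped bit in a stored value changes a program's output completely (an illustrative Python sketch; real faults would hit memory cells or logic gates):

--------
import struct

value = 3.14159
bits = struct.unpack("<Q", struct.pack("<d", value))[0]
bits ^= 1 << 52  # flip one bit, the lowest bit of the float's exponent field
corrupted = struct.unpack("<d", struct.pack("<Q", bits))[0]
print(value, "->", corrupted)  # 3.14159 -> 6.28318: the value has doubled
--------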

>

> > > > > > > > If that isn't an adequate response, try writing out your question

> > > > > > > > again in a more comprehensible form. You don't distinguish between

> > > > > > > > the overall behaviour of the robot (do the outputs make it walk and

> > > > > > > > talk?) and the behaviour of the computer processing (the I/O on the

> > > > > > > > chip and program circuitry), for example.

>

> <...>

 

OK, you have at last clarified for me that the inputs and outputs to

which you were referring in your question "Where is the influence/

affect of conscious experiences in the outputs, if all the nodes are

just giving the same outputs as they would have singly in the lab

given the same inputs?", are the I/O amongst the nodes in a computer

neural network. You are not discussing inputs to the network from

sensors or any outputs from the network to an external device. Is

that correct?

 

You say "if all the nodes are just giving the same outputs as they

would singly in the lab given the same inputs". So you have two

neural networks, one in a lab, the other not in a lab, but both of

them with identical inputs and outputs amongst the nodes. Am I

right?

 

Can you please confirm that I understand you so far, and I may then be

able to answer your question with one of A-D, D1 and D2, as you

requested.

Guest someone2
Posted

On 8 Jul, 05:25, James Norris <JimNorri...@aol.com> wrote:

> On Jul 8, 4:05 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 8 Jul, 03:01, James Norris <JimNorri...@aol.com> wrote:

>

> > > On Jul 8, 2:16 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > On 8 Jul, 02:06, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > On Jul 7, 6:31 pm, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > > > On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > > > > > > >You said:

>

> > > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > > >

>

> > > > > > > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > > > > > > >

> > > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > > >Well this was my response:

> > > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > > >Regarding free will, and you question as to how do we know whether

> > > > > > > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > > > > > > >have already replied:

> > > > > > > > > > > > > > > >-----------

> > > > > > > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > > > > > > >being no decernable rules. Any rule shown to the being, the being

> > > > > > > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > > > > > > >-----------

>

> > > > > > > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > > > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > > > > > > > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > > > > > > > > > > > >removed it prediction from being shown.

>

> > > > > > > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > > > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > > > > > > > > > > > >and you were able to consciously influence which would happen. and

> > > > > > > > > > > > > > > >added:

>

> > > > > > > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > > > > > > >happen.

> > > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > > > > > > >written.

>

> > > > > > > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > > > > > > observable behaviors you would regard as regard as indicating

> > > > > > > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > > > > > > <....>

>

> > > > > > > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > > > > > > context to the question:

>

> > > > > > > > > > > > > > --------

> > > > > > > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > > > > > > examined by a bank of computers, varifying, that for the input

> > > > > > > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > > > > > > node in a lab given the same inputs. They could also varify that no

> > > > > > > > > > > > > > unexplained messages appeared.

>

> > > > > > > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > > To which you replied:

> > > > > > > > > > > > > > --------

> > > > > > > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > > > > > > are.

> > > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > > > > > > the nodes are justgiving the same outputs as they would have singly in

> > > > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > > > > > > affect?

>

> > > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > > > > > > fact correspond to consciousness.

>

> > > > > > > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > > > > > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > > > > > > > > > > > which we understand as related to reality, which we call

> > > > > > > > > > > > > consciousness, even though each firing is not itself conscious.

>

> > > > > > > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > > > > > > consciousness, because of the soul and God-given free will, your

> > > > > > > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > > > > > > own actions, and endeavour to have your own affect on the future. If

> > > > > > > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > > > > > > random coin toss, that you yourself had initiated.

>

> > > > > > > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > > > > > > justgiving the same outputs as they would have singly in

> > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > > > > > > what you mean.

>

> > > > > > > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > > > > > > the experiences would be the same (assuming the problem of

> > > > > > > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > > > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > > > > > > question. You still haven't addressed it. You are just distracting

> > > > > > > > > > away, by asking me a question. It does beg the question of why you

> > > > > > > > > > don't seem to be able to face answering.

>

> > > > > > > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > > > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > > > > > > nodes are justgiving the same outputs as they would have singly in the

> > > > > > > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > > > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > > > > > > that the affect/influence is such that the outputs are the same as

> > > > > > > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > > > > > > first response. As you have repeated your very poorly-worded question

> > > > > > > > > - which incidentally you should try and tidy up to give it a proper

> > > > > > > > > meaning - I'll repeat my reply:

>

> > > > > > > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > > > > > > the input nodes, the output from these go to the inputs of other

> > > > > > > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > > > > > > overall system is conscious either. I gather (from your rather

> > > > > > > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > > > > > > is conscious.

>

> > > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > > But if that were possible, the same might also be occuring in the

> > > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > > computer.

>

> > > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > > eventual 'glitch'.
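
For illustration (a minimal sketch in Python; the value and the bit chosen
are arbitrary, not taken from any real incident), a single flipped bit is
enough to change a stored number, and with it the branch a program takes:

    balance = 100
    corrupted = balance ^ (1 << 30)    # one bit flipped, e.g. by a stray particle
    print(balance, '->', corrupted)    # 100 -> 1073741924

    if corrupted > 1000:
        print('a branch the program was never expected to take')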

>

> > > > > > > > > If that isn't an adequate response, try writing out your question

> > > > > > > > > again in a more comprehensible form. You don't distinguish between

> > > > > > > > > the overall behaviour of the robot (do the outputs make it walk and

> > > > > > > > > talk?) and the behaviour of the computer processing (the I/O on the

> > > > > > > > > chip and program circuitry), for example.

>

> > > > > > > > > I'm trying to help you express whatever it is you are trying to

> > > > > > > > > explain to the groups in your header (heathen alt.atheism, God-fearing

> > > > > > > > > alt.religion) in which you are taking the God-given free-will

> > > > > > > > > position. It looks like you are trying to construct an unassailable

> > > > > > > > > argument relating subjective experiences and behaviour, computers and

> > > > > > > > > brains, and free will, to notions of the existence of God and the

> > > > > > > > > soul. It might be possible to do that if you express yourself more

> > > > > > > > > clearly, and some of your logic was definitely wrong earlier, so you

> > > > > > > > > would have to correct that as well. I expect that if whatever you

> > > > > > > > > come up with doesn't include an explanation for why evolutionary

> > > > > > > > > principles don't produce consciousness (as you claim) you won't really

> > > > > > > > > be doing anything other than time-wasting pseudo-science, because

> > > > > > > > > scientifically-minded people will say you haven't answered our

> > > > > > > > > question - why do you introduce the idea of a 'soul' (for which there

> > > > > > > > > is no evidence) to explain consciousness in animals such as ourselves,

> > > > > > > > > when evolution (for which there is evidence) explains it perfectly?

>

> > > > > > > > I'll put the example again:

>

> > > > > > > > --------

> > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > contained additional information such as source node, destination node

> > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > examined by a bank of computers, verifying that for the input

> > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > unexplained messages appeared.

> > > > > > > > ---------

>

> > > > > > > > You seem to have understood the example, but asked whether it walked

> > > > > > > > and talked; yes, let's say it does, and would be a contender for an

> > > > > > > > Olympic gymnastics medal, and said it hurt if you stamped on its foot.

>

> > > > > > > > With regards to the question you are having a problem comprehending,

> > > > > > > > I'll reword it for you. By conscious experiences I am referring to the

> > > > > > > > sensations we experience such as pleasure and pain, visual and

> > > > > > > > auditory perception etc.

>

> > > > > > > > Are you suggesting:

>

> > > > > > > > A) The atheists were wrong to consider it to be consciously

> > > > > > > > experiencing.

> > > > > > > > B) They were correct to say that it was consciously experiencing, but that it

> > > > > > > > doesn't influence behaviour.

> > > > > > > > C) It might have been consciously experiencing - how could you tell, if it

> > > > > > > > doesn't influence behaviour.

> > > > > > > > D) It was consciously experiencing and that influenced its behaviour.

>

> > > > > > > > If you select D, as all the nodes are giving the same outputs as they

> > > > > > > > would have singly in the lab given the same inputs, could you also

> > > > > > > > select between D1, and D2:

>

> > > > > > > > D1) The outputs given by a single node in the lab were also influenced

> > > > > > > > by conscious experiences.

> > > > > > > > D2) The outputs given by a single node in the lab were not influenced

> > > > > > > > by conscious experiences, but the influence of the conscious

> > > > > > > > experiences within the robot, is such that the outputs are the same as

> > > > > > > > without the influence of the conscious experiences.

>

> > > > > > > > If you are still having problems comprehending, then simply write out

> > > > > > > > the sentences you could not understand, and I'll reword them for you.

> > > > > > > > If you want to add another option, then simply state it, obviously the

> > > > > > > > option should be related to whether the robot was consciously

> > > > > > > > experiencing, and if it was, whether the conscious experiences

> > > > > > > > influenced behaviour.

>

> > > > > > > As you requested, here is the question I didn't understand: "Regarding

> > > > > > > the robot, where is the influence/ affect of conscious experiences in

> > > > > > > the outputs, if all the nodes are just giving the same outputs as they

> > > > > > > would have singly in the lab given the same inputs."

>

> > > > > > I had rewritten the question for you. Why did you simply reference the

> > > > > > old one, and not answer the new reworded one? It seems like you are

> > > > > > desperately trying to avoid answering.

>

> > > > > I was interested in your suggestion that you think you can prove that

> > > > > God exists because computers aren't conscious (or whatever it is you

> > > > > are trying to say), and I was trying to help you explain that.

> > > > > Whatever it is you are trying to ask me, I can't even attempt to give

> > > > > you an answer unless you can express your question in proper

> > > > > sentences. In any case, you are the one who is claiming properties

> > > > > of consciousness of robot inputs and outputs (or whatever you are

> > > > > claiming), so you will in due course be expected to defend your theory

> > > > > (once you have written it out in comprehensible English sentences)

> > > > > without any help from me, so you shouldn't be relying on my answers to

> > > > > your questions anyway.

>

> > > > Why didn't you point out the sentence that was giving you a problem in

> > > > the reworded question? As I say, it seems like you are desperately

> > > > trying to avoid answering. Though to help you, I am not claiming any

> > > > properties or consciousness of robot inputs and outputs, I am

> > > > questioning you about your understanding. Though since you think

> > > > twanging elastic bands with twigs is sufficient to give rise to

> > > > conscious experiences, I'd be interested if you choose option A, if

> > > > ever you do find the courage to answer.

>

> > > It takes no courage to respond to Usenet postings, just a computer

> > > terminal and a certain amount of time and patience. All I was asking

> > > you to do is to explain what you mean by the following sentence:

> > > "Regarding the robot, where is the influence/ affect of conscious

> > > experiences in the outputs, if all the nodes are just giving the same

> > > outputs as they would have singly in the lab given the same inputs."

> > > What inputs and outputs are you talking about?

>

> > Here is the example again (though it is still above in the history,

> > from when you first said you didn't understand the bit you have

> > quoted, and I reworded it for you, though you then ignored the

> > reworded version and restated you didn't understand the original

> > question, when I pointed out what you had done and asked you to answer

> > the reworded version, you ignored me again, and have done so yet

> > again, giving the impression you are trying to avoid answering):

>

> > --------

> > Supposing there was an artificial neural network, with a billion more

> > nodes than you had neurons in your brain, and a very 'complicated'

> > configuration, which drove a robot. The robot, due to its behaviour

> > (being a Turing Equivalent), caused some atheists to claim it was

> > consciously experiencing. Now supposing each message between each node

> > contained additional information such as source node, destination node

> > (which could be the same as the source node, for feedback loops), the

> > time the message was sent, and the message. Each node wrote out to a

> > log on receipt of a message, and on sending a message out. Now after

> > an hour of the atheist communicating with the robot, the logs could be

> > examined by a bank of computers, verifying that for the input

> > messages each received, the outputs were those expected by a single

> > node in a lab given the same inputs. They could also verify that no

> > unexplained messages appeared.

> > ---------

>

> > The inputs and outputs refer to the inputs and outputs to the nodes.

> > I'll leave it up to you to either answer the bit you have just quoted,

> > or the reworded version (the one with the options A-D, and D1 and D2).

> > Is there a chance that you might answer this time?

>

> OK, you have at last clarified for me that the inputs and outputs to

> which you were referring in your question "Where is the influence/

> affect of conscious experiences in the outputs, if all the nodes are

> just giving the same outputs as they would have singly in the lab

> given the same inputs?", are the I/O amongst the nodes in a computer

> neural network. You are not discussing inputs to the network from

> sensors or any outputs from the network to an external device. Is

> that correct?

>

> You say "if all the nodes are just giving the same outputs as they

> would singly in the lab given the same inputs". So you have two

> neural networks, one in a lab, the other not in a lab, but both of

> them with identical inputs and outputs amongst the nodes. Am I

> right?

>

> Can you please confirm that I understand you so far, and I may then be

> able to answer your question with one of A-D, D1 and D2, as you

> requested.

 

The robot has sensors, which feed inputs to the neural network. Which

is why I responded to you earlier that:

------------

You seem to have understood the example, but asked whether it walked

> and talked; yes, let's say it does, and would be a contender for an

> Olympic gymnastics medal, and said it hurt if you stamped on its foot.

------------

 

There is not a neural network in the lab; it is just a single node

with input channels and output channels, where any node's behaviour in

the neural network can be reproduced by feeding it the same inputs as

the one in the network was shown to have received in the logs.
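
For concreteness, the logging and single-node replay check being described
could be sketched as follows (a minimal Python sketch; node_fn, send and
the message layout are hypothetical stand-ins, and the node computation is
assumed to be deterministic):

    import time

    def node_fn(node_id, message):
        # stand-in for whatever deterministic computation a node performs
        return message * 2

    log = []   # every message: source node, destination node, time sent, payload

    def send(source, dest, message):
        log.append((source, dest, time.time(), message))   # logged on receipt
        output = node_fn(dest, message)
        log.append((dest, source, time.time(), output))    # logged on sending out
        return output

    # an hour of running would produce a long log; two messages stand in here
    send('camera', 'n1', 3)
    send('n1', 'n1', 6)    # source can equal destination, for feedback loops

    # the 'bank of computers' then replays each logged input into a single
    # node in the lab and verifies the output matches what the log shows
    for i in range(0, len(log), 2):
        source, dest, t, message = log[i]
        logged_output = log[i + 1][3]
        assert node_fn(dest, message) == logged_output   # no unexplained messages

On this sketch the check necessarily passes, because each node is a
deterministic function of its inputs - which is exactly the situation the
question about the robot is probing.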

Guest Jeckyl
Posted

"someone2" <glenn.spigel2@btinternet.com> wrote in message

news:1183863917.768229.282730@k79g2000hse.googlegroups.com...

> --------

> Supposing there was an artificial neural network, with a billion more

> nodes than you had neurons in your brain, and a very 'complicated'

> configuration, which drove a robot.

 

Ok .. let's pretend

> The robot, due to its behaviour

> (being a Turing Equivalent), caused some atheists to claim it was

> consciously experiencing.

 

Why only atheists?

> Now supposing each message between each node

> contained additional information such as source node, destination node

> (which could be the same as the source node, for feedback loops), the

> time the message was sent, and the message. Each node wrote out to a

> log on receipt of a message, and on sending a message out. Now after

> an hour of the atheist communicating with the robot, the logs could be

> examined by a bank of computers, verifying that for the input

> messages each received, the outputs were those expected by a single

> node in a lab given the same inputs. They could also verify that no

> unexplained messages appeared.

 

Fine

 

So what is the problem .. that says nothing about whether or not the robot is

conscious.

Guest someone2
Posted

On 8 Jul, 16:56, "Jeckyl" <n...@nowhere.com> wrote:

> "someone2" <glenn.spig...@btinternet.com> wrote in message

>

> news:1183863917.768229.282730@k79g2000hse.googlegroups.com...

>

> > --------

> > Supposing there was an artificial neural network, with a billion more

> > nodes than you had neurons in your brain, and a very 'complicated'

> > configuration, which drove a robot.

>

> Ok .. let's pretend

>

> > The robot, due to its behaviour

> > (being a Turing Equivalent), caused some atheists to claim it was

> > consciously experiencing.

>

> Why only atheists?

>

> > Now supposing each message between each node

> > contained additional information such as source node, destination node

> > (which could be the same as the source node, for feedback loops), the

> > time the message was sent, and the message. Each node wrote out to a

> > log on receipt of a message, and on sending a message out. Now after

> > an hour of the atheist communicating with the robot, the logs could be

> > examined by a bank of computers, verifying that for the input

> > messages each received, the outputs were those expected by a single

> > node in a lab given the same inputs. They could also verify that no

> > unexplained messages appeared.

>

> Fine

>

> So what is the problem .. that says nothing about whether or not the robot is

> conscious.

 

So what would you be saying with regards to it consciously

experiencing:

 

A) The atheists were wrong to consider it to be consciously

experiencing.

B) They were correct to say that it was consciously experiencing, but that it

doesn't influence behaviour.

C) It might have been consciously experiencing - how could you tell, if it

doesn't influence behaviour.

D) It was consciously experiencing and that influenced its behaviour.

 

If you select D, as all the nodes are giving the same outputs as they

would have singly in the lab given the same inputs, could you also

select between D1, and D2:

 

D1) The outputs given by a single node in the lab were also influenced

by conscious experiences.

D2) The outputs given by a single node in the lab were not influenced

by conscious experiences, but the influence of the conscious

experiences within the robot, is such that the outputs are the same as

without the influence of the conscious experiences.

 

or

 

E) Avoid answering (like the other atheists so far)

Guest James Norris
Posted

On Jul 8, 11:35 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 8 Jul, 05:25, James Norris <JimNorri...@aol.com> wrote:

>

> > On Jul 8, 4:05 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > On 8 Jul, 03:01, James Norris <JimNorri...@aol.com> wrote:

>

> > > > On Jul 8, 2:16 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > On 8 Jul, 02:06, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > > On Jul 7, 6:31 pm, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > > > > On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > > > > > > > >You said:

>

> > > > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > > > >

>

> > > > > > > > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > > > > > > > >

> > > > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > > > >Well this was my response:

> > > > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > > > > >Regarding free will, and your question as to how do we know whether

> > > > > > > > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > > > > > > > >have already replied:

> > > > > > > > > > > > > > > > >-----------

> > > > > > > > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > > > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > > > > > > > >-----------

>

> > > > > > > > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > > > > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > > > > > > > > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > > > > > > > > > > > > > >removed the prediction from being shown.

>

> > > > > > > > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > > > > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > > > > > > > > > > > > > >and you were able to consciously influence which would happen, and

> > > > > > > > > > > > > > > > >added:

>

> > > > > > > > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > > > > > > > >happen.

> > > > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > > > > > > > >written.

>

> > > > > > > > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > > > > > > > observable behaviors you would regard as indicating

> > > > > > > > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > > > > > > > <....>

>

> > > > > > > > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > > > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > > > > > > > context to the question:

>

> > > > > > > > > > > > > > > --------

> > > > > > > > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > > > > > > > examined by a bank of computers, verifying that for the input

> > > > > > > > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > > > > > > > > unexplained messages appeared.

>

> > > > > > > > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > > > To which you replied:

> > > > > > > > > > > > > > > --------

> > > > > > > > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > > > > > > > are.

> > > > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > > > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > > > > > > > affect?

>

> > > > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > > > > > > > fact correspond to consciousness.

>

> > > > > > > > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > > > > > > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > > > > > > > > > > > > which we understand as related to reality, which we call

> > > > > > > > > > > > > > consciousness, even though each firing is not itself conscious.
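
As a minimal sketch of that point (the 2x2 neighbourhood and the values
are illustrative only): each pixel holds a single pure primary, yet the
average over a neighbourhood is a colour that no individual pixel contains:

    red, green = (255, 0, 0), (0, 255, 0)
    block = [red, green, red, green]    # four pure-primary pixels
    avg = tuple(sum(p[i] for p in block) // len(block) for i in range(3))
    print(avg)    # (127, 127, 0): a dull yellow, though no single pixel is yellow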

>

> > > > > > > > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > > > > > > > consciousness, because of the soul and God-given free will, your

> > > > > > > > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > > > > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > > > > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > > > > > > > random coin toss, that you yourself had initiated.

>

> > > > > > > > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > > > > > > > just giving the same outputs as they would have singly in

> > > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > > > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > > > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > > > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > > > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > > > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > > > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > > > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > > > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > > > > > > > what you mean.

>

> > > > > > > > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > > > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > > > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > > > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > > > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > > > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > > > > > > > the experiences would be the same (assuming the problem of

> > > > > > > > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > > > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > > > > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > > > > > > > question. You still haven't addressed it. You are just distracting

> > > > > > > > > > > away, by asking me a question. It does beg the question of why you

> > > > > > > > > > > don't seem to be able to face answering.

>

> > > > > > > > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > > > > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > > > > > > > nodes are just giving the same outputs as they would have singly in the

> > > > > > > > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > > > > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > > > > > > > that the affect/influence is such that the outputs are the same as

> > > > > > > > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > > > > > > > first response. As you have repeated your very poorly-worded question

> > > > > > > > > > - which incidentally you should try and tidy up to give it a proper

> > > > > > > > > > meaning - I'll repeat my reply:

>

> > > > > > > > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > > > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > > > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > > > > > > > the input nodes, the output from these go to the inputs of other

> > > > > > > > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > > > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > > > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > > > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > > > > > > > overall system is conscious either. I gather (from your rather

> > > > > > > > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > > > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > > > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > > > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > > > > > > > is conscious.

>

> > > > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > > > computer.

>

> > > > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > > > eventual 'glitch'.

>

> > > > > > > > > > If that isn't an adequate response, try writing out your question

> > > > > > > > > > again in a more comprehensible form. You don't distinguish between

> > > > > > > > > > the overall behaviour of the robot (do the outputs make it walk and

> > > > > > > > > > talk?) and the behaviour of the computer processing (the I/O on the

> > > > > > > > > > chip and program circuitry), for example.

>

> > > > > > > > > > I'm trying to help you express whatever it is you are trying to

> > > > > > > > > > explain to the groups in your header (heathen alt.atheism, God-fearing

> > > > > > > > > > alt.religion) in which you are taking the God-given free-will

> > > > > > > > > > position. It looks like you are trying to construct an unassailable

> > > > > > > > > > argument relating subjective experiences and behaviour, computers and

> > > > > > > > > > brains, and free will, to notions of the existence of God and the

> > > > > > > > > > soul. It might be possible to do that if you express yourself more

> > > > > > > > > > clearly, and some of your logic was definitely wrong earlier, so you

> > > > > > > > > > would have to correct that as well. I expect that if whatever you

> > > > > > > > > > come up with doesn't include an explanation for why evolutionary

> > > > > > > > > > principles don't produce consciousness (as you claim) you won't really

> > > > > > > > > > be doing anything other than time-wasting pseudo-science, because

> > > > > > > > > > scientifically-minded people will say you haven't answered our

> > > > > > > > > > question - why do you introduce the idea of a 'soul' (for which there

> > > > > > > > > > is no evidence) to explain consciousness in animals such as ourselves,

> > > > > > > > > > when evolution (for which there is evidence) explains it perfectly?

>

> > > > > > > > > I'll put the example again:

>

> > > > > > > > > --------

> > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > examined by a bank of computers, verifying that for the input

> > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > > unexplained messages appeared.

> > > > > > > > > ---------

>

> > > > > > > > > You seem to have understood the example, but asked whether it walked

> > > > > > > > > and talked; yes, let's say it does, and would be a contender for an

> > > > > > > > > Olympic gymnastics medal, and said it hurt if you stamped on its foot.

>

> > > > > > > > > With regards to the question you are having a problem comprehending,

> > > > > > > > > I'll reword it for you. By conscious experiences I am referring to the

> > > > > > > > > sensations we experience such as pleasure and pain, visual and

> > > > > > > > > auditory perception etc.

>

> > > > > > > > > Are you suggesting:

>

> > > > > > > > > A) The atheists were wrong to consider it to be consciously

> > > > > > > > > experiencing.

> > > > > > > > > B) They were correct to say that it was consciously experiencing, but that it

> > > > > > > > > doesn't influence behaviour.

> > > > > > > > > C) It might have been consciously experiencing - how could you tell, if it

> > > > > > > > > doesn't influence behaviour.

> > > > > > > > > D) It was consciously experiencing and that influenced its behaviour.

>

> > > > > > > > > If you select D, as all the nodes are giving the same outputs as they

> > > > > > > > > would have singly in the lab given the same inputs, could you also

> > > > > > > > > select between D1, and D2:

>

> > > > > > > > > D1) The outputs given by a single node in the lab were also influenced

> > > > > > > > > by conscious experiences.

> > > > > > > > > D2) The outputs given by a single node in the lab were not influenced

> > > > > > > > > by conscious experiences, but the influence of the conscious

> > > > > > > > > experiences within the robot, is such that the outputs are the same as

> > > > > > > > > without the influence of the conscious experiences.

>

> > > > > > > > > If you are still having problems comprehending, then simply write out

> > > > > > > > > the sentences you could not understand, and I'll reword them for you.

> > > > > > > > > If you want to add another option, then simply state it, obviously the

> > > > > > > > > option should be related to whether the robot was consciously

> > > > > > > > > experiencing, and if it was, whether the conscious experiences

> > > > > > > > > influenced behaviour.

>

> > > > > > > > As you requested, here is the question I didn't understand: "Regarding

> > > > > > > > the robot, where is the influence/ affect of conscious experiences in

> > > > > > > > the outputs, if all the nodes are just giving the same outputs as they

> > > > > > > > would have singly in the lab given the same inputs."

>

> > > > > > > I had rewritten the question for you. Why did you simply reference the

> > > > > > > old one, and not answer the new reworded one? It seems like you are

> > > > > > > desperately trying to avoid answering.

>

> > > > > > I was interested in your suggestion that you think you can prove that

> > > > > > God exists because computers aren't conscious (or whatever it is you

> > > > > > are trying to say), and I was trying to help you explain that.

> > > > > > Whatever it is you are trying to ask me, I can't even attempt to give

> > > > > > you an answer unless you can express your question in proper

> > > > > > sentences. In any case, you are the one who is claiming properties

> > > > > > of consciousness of robot inputs and outputs (or whatever you are

> > > > > > claiming), so you will in due course be expected to defend your theory

> > > > > > (once you have written it out in comprehensible English sentences)

> > > > > > without any help from me, so you shouldn't be relying on my answers to

> > > > > > your questions anyway.

>

> > > > > Why didn't you point out the sentence that was giving you a problem in

> > > > > the reworded question? As I say, it seems like you are desperately

> > > > > trying to avoid answering. Though to help you, I am not claiming any

> > > > > properties or consciousness of robot inputs and outputs, I am

> > > > > questioning you about your understanding. Though since you think

> > > > > twanging elastic bands with twigs is sufficient to give rise to

> > > > > conscious experiences, I'd be interested if you choose option A, if

> > > > > ever you do find the courage to answer.

>

> > > > It takes no courage to respond to Usenet postings, just a computer

> > > > terminal and a certain amount of time and patience. All I was asking

> > > > you to do is to explain what you mean by the following sentence:

> > > > "Regarding the robot, where is the influence/ affect of conscious

> > > > experiences in the outputs, if all the nodes are just giving the same

> > > > outputs as they would have singly in the lab given the same inputs."

> > > > What inputs and outputs are you talking about?

>

> > > Here is the example again (though it is still above in the history,

> > > from when you first said you didn't understand the bit you have

> > > quoted, and I reworded it for you, though you then ignored the

> > > reworded version and restated you didn't understand the original

> > > question, when I pointed out what you had done and asked you to answer

> > > the reworded version, you ignored me again, and have done so yet

> > > again, giving the impression you are trying to avoid answering):

>

> > > --------

> > > Supposing there was an artificial neural network, with a billion more

> > > nodes than you had neurons in your brain, and a very 'complicated'

> > > configuration, which drove a robot. The robot, due to its behaviour

> > > (being a Turing Equivalent), caused some atheists to claim it was

> > > consciously experiencing. Now supposing each message between each node

> > > contained additional information such as source node, destination node

> > > (which could be the same as the source node, for feedback loops), the

> > > time the message was sent, and the message. Each node wrote out to a

> > > log on receipt of a message, and on sending a message out. Now after

> > > an hour of the atheist communicating with the robot, the logs could be

> > > examined by a bank of computers, verifying that for the input

> > > messages each received, the outputs were those expected by a single

> > > node in a lab given the same inputs. They could also verify that no

> > > unexplained messages appeared.

> > > ---------

>

> > > The inputs and outputs refer to the inputs and outputs to the nodes.

> > > I'll leave it up to you to either answer the bit you have just quoted,

> > > or the reworded version (the one with the options A-D, and D1 and D2).

> > > Is there a chance that you might answer this time?

>

> > OK, you have at last clarified for me that the inputs and outputs to

> > which you were referring in your question "Where is the influence/

> > affect of conscious experiences in the outputs, if all the nodes are

> > just giving the same outputs as they would have singly in the lab

> > given the same inputs?", are the I/O amongst the nodes in a computer

> > neural network. You are not discussing inputs to the network from

> > sensors or any outputs from the network to an external device. Is

> > that correct?

>

> > You say "if all the nodes are just giving the same outputs as they

> > would singly in the lab given the same inputs". So you have two

> > neural networks, one in a lab, the other not in a lab, but both of

> > them with identical inputs and outputs amongst the nodes. Am I

> > right?

>

> > Can you please confirm that I understand you so far, and I may then be

> > able to answer your question with one of A-D, D1 and D2, as you

> > requested.

>

> The robot has sensors, which feed inputs to the neural network. Which

> is why I responded to you earlier that:

> ------------

> You seem to have understood the example, but asked whether it walked

> > and talked; yes, let's say it does, and would be a contender for an

> > Olympic gymnastics medal, and said it hurt if you stamped on its foot.

> ------------

>

> There is not a neural network in the lab; it is just a single node

> with input channels and output channels, where any node's behaviour in

> the neural network can be reproduced by feeding it the same inputs as

> the one in the network was shown to have received in the logs.

 

OK. You have explained that you are examining one node and its inputs

and outputs. Your question is "Where is the influence/affect of

conscious experiences in the outputs?". To understand that question,

I need to know:

A) What conscious experiences are you referring to?

B) When you wrote 'affect', did you mean 'effect'?

C) Does 'in the outputs' mean 'contained in the outputs'?

 

I think (but I'm not certain) that you are leading up to saying that

each node is not conscious, so the whole device is not conscious. If

that is your point, I have answered your question when I initially

replied to your posting (see above):

> > > > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > > > similar data, and we know that we are conscious.

> > > > > > > > > > > > > > [...]

> > > > > > > > > > > > > > To understand why an

> > > > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > > > awareness in biological organisms.

Guest someone2
Posted

On 9 Jul, 01:27, James Norris <JimNorri...@aol.com> wrote:

> On Jul 8, 11:35 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 8 Jul, 05:25, James Norris <JimNorri...@aol.com> wrote:

>

> > > On Jul 8, 4:05 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > On 8 Jul, 03:01, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > On Jul 8, 2:16 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > On 8 Jul, 02:06, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > > > On Jul 7, 6:31 pm, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>

> > > > > > > > > On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > > > > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > > > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > > > > > > > > > >You said:

>

> > > > > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > > > > >

>

> > > > > > > > > > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > > > > > > > > > >

> > > > > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > > > > >Well this was my response:

> > > > > > > > > > > > > > > > > >--------

> > > > > > > > > > > > > > > > > > >Regarding free will, and your question as to how do we know whether

> > > > > > > > > > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > > > > > > > > > >have already replied:

> > > > > > > > > > > > > > > > > >-----------

> > > > > > > > > > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > > > > > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > > > > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > > > > > > > > > >-----------

>

> > > > > > > > > > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > > > > > > > > > >random distribution rule.

>

> > > > > > > > > > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > > > > > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > > > > > > > > > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > > > > > > > > > > > > > > >removed the prediction from being shown.

>

> > > > > > > > > > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > > > > > > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > > > > > > > > > > > > > > >and you were able to consciously influence which would happen, and

> > > > > > > > > > > > > > > > > >added:

>

> > > > > > > > > > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > > > > > > > > > >happen.

> > > > > > > > > > > > > > > > > >--------

>

> > > > > > > > > > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > > > > > > > > > >written.

>

> > > > > > > > > > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > > > > > > > > > observable behaviors you would regard as indicating

> > > > > > > > > > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > > > > > > > > > <....>

>

> > > > > > > > > > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > > > > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > > > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > > > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > > > > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > > > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > > > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > > > > > > > > > context to the question:

>

> > > > > > > > > > > > > > > > --------

> > > > > > > > > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > > > > > > > > examined by a bank of computers, verifying that, for the input

> > > > > > > > > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > > > > > > > > > unexplained messages appeared.

>

> > > > > > > > > > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > > > > To which you replied:

> > > > > > > > > > > > > > > > --------

> > > > > > > > > > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > > > > > > > > > are.

> > > > > > > > > > > > > > > > --------

>

> > > > > > > > > > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > > > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > > > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > > > > > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > > > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > > > > > > > > > affect?

>

> > > > > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > > > > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > > > > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > > > > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > > > > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > > > > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > > > > > > > > > fact correspond to consciousness.

>

> > > > > > > > > > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > > > > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > > > > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > > > > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > > > > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > > > > > > > > > bioelectric firings in the brain causes meaningful shapes in the brain

> > > > > > > > > > > > > > > which we understand as related to reality, which we call

> > > > > > > > > > > > > > > consciousness, even though each firing is not itself conscious.
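
[Editorial aside: to make the quoted pixel analogy concrete, here is a minimal Python sketch. The 2x2 patch and the averaging rule are illustrative assumptions of mine, standing in for how the eye blends neighbouring dots viewed at a distance: a patch of pixels that are each pure red or pure green is perceived as a colour that no individual pixel has.]

    # Each pixel is pure red, green or blue, as in the quoted example.
    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

    def perceived(block):
        # Average the RGB components of a neighbourhood of pixels,
        # mimicking how the eye blends dots seen from a distance.
        n = len(block)
        return tuple(sum(pixel[i] for pixel in block) // n for i in range(3))

    # A 2x2 patch of alternating red and green dots:
    print(perceived([RED, GREEN, RED, GREEN]))  # (127, 127, 0), a dull yellow no single pixel contains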

>

> > > > > > > > > > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > > > > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > > > > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > > > > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > > > > > > > > > consciousness, because of the soul and God-given free will, your

> > > > > > > > > > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > > > > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > > > > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > > > > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > > > > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > > > > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > > > > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > > > > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > > > > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > > > > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > > > > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > > > > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > > > > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > > > > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > > > > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > > > > > > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > > > > > > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > > > > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > > > > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > > > > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > > > > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > > > > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > > > > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > > > > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > > > > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > > > > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > > > > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > > > > > > > > > random coin toss, that you yourself had initiated.
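
[Editorial aside: the unpredictability the Chaos Theory point above appeals to is easy to demonstrate. A minimal sketch using the standard logistic map, my choice of example rather than the poster's: two starting states differing by one part in ten billion soon bear no resemblance to each other, so prediction fails in practice even though the rule is fully deterministic.]

    def logistic(x, r=4.0):
        # A deterministic update rule known to behave chaotically at r = 4.
        return r * x * (1.0 - x)

    a, b = 0.4, 0.4 + 1e-10  # initial states differing by one part in ten billion
    for _ in range(60):
        a, b = logistic(a), logistic(b)
    print(abs(a - b))  # of order 1: the tiny initial difference has swamped any forecast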

>

> > > > > > > > > > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > > > > > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > > > > > > > > > just giving the same outputs as they would have singly in

> > > > > > > > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > > > > > > > given by a single node in the lab were consciously influenced/

> > > > > > > > > > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > > > > > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > > > > > > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > > > > > > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > > > > > > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > > > > > > > > > affect, each sentence you write (the output from you as a node) is

> > > > > > > > > > > > > meaningless and does not exhibit consciousness. However, the

> > > > > > > > > > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > > > > > > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > > > > > > > > > writing, and give some examples, and that will help you to communicate

> > > > > > > > > > > > > what you mean.

>

> > > > > > > > > > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > > > > > > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > > > > > > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > > > > > > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > > > > > > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > > > > > > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > > > > > > > > > the experiences would be the same (assuming the problem of

> > > > > > > > > > > > > constructing an identical arrangement of neurons in the receiving

> > > > > > > > > > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > > > > > > > > > It was a post you had replied to, and you hadn't addressed the

> > > > > > > > > > > > question. You still haven't addressed it. You are just distracting

> > > > > > > > > > > > away, by asking me a question. It does raise the question of why you

> > > > > > > > > > > > don't seem to be able to face answering.

>

> > > > > > > > > > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > > > > > > > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > > > > > > > > > nodes are just giving the same outputs as they would have singly in the

> > > > > > > > > > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > > > > > > > > > by a single node in the lab were consciously influenced/ affected, or

> > > > > > > > > > > > that the affect/influence is such that the outputs are the same as

> > > > > > > > > > > > without the affect (i.e. there is no affect/influence)?

>

> > > > > > > > > > > You are in too much of a hurry, and you didn't read my reply to your

> > > > > > > > > > > first response. As you have repeated your very poorly-worded question

> > > > > > > > > > > - which incidentally you should try and tidy up to give it a proper

> > > > > > > > > > > meaning - I'll repeat my reply:

>

> > > > > > > > > > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > > > > > > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > > > > > > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > > > > > > > > > the input nodes, the output from these go to the inputs of other

> > > > > > > > > > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > > > > > > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > > > > > > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > > > > > > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > > > > > > > > > overall system is conscious either. I gather (from your rather

> > > > > > > > > > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > > > > > > > > > thread) that your argument goes on that the same applies to the human

> > > > > > > > > > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > > > > > > > > > that the biophysical laws of evolution explain why the overall system

> > > > > > > > > > > > > is conscious.

>

> > > > > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > > > > computer.

>

> > > > > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > > > > eventual 'glitch'.
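
[Editorial aside: the single-bit fragility described in the quoted paragraph is easy to show. A minimal sketch with a hypothetical stored value and threshold: one flipped bit, of the kind an alpha-particle strike might cause, sends the program down a branch it would never otherwise take.]

    reading = 100                   # hypothetical value held in memory
    corrupted = reading ^ (1 << 7)  # the same value with a single bit flipped
    print(reading, corrupted)       # 100 versus 228

    for value in (reading, corrupted):
        if value > 200:             # a branch the uncorrupted value never takes
            print("glitch: out-of-range reading", value)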

>

> > > > > > > > > > > If that isn't an adequate response, try writing out your question

> > > > > > > > > > > again in a more comprehensible form. You don't distinguish between

> > > > > > > > > > > the overall behaviour of the robot (do the outputs make it walk and

> > > > > > > > > > > talk?) and the behaviour of the computer processing (the I/O on the

> > > > > > > > > > > chip and program circuitry), for example.

>

> > > > > > > > > > > I'm trying to help you express whatever it is you are trying to

> > > > > > > > > > > explain to the groups in your header (heathen alt.atheism, God-fearing

> > > > > > > > > > > alt.religion) in which you are taking the God-given free-will

> > > > > > > > > > > position. It looks like you are trying to construct an unassailable

> > > > > > > > > > > argument relating subjective experiences and behaviour, computers and

> > > > > > > > > > > brains, and free will, to notions of the existence of God and the

> > > > > > > > > > > soul. It might be possible to do that if you express yourself more

> > > > > > > > > > > clearly, and some of your logic was definitely wrong earlier, so you

> > > > > > > > > > > would have to correct that as well. I expect that if whatever you

> > > > > > > > > > > come up with doesn't include an explanation for why evolutionary

> > > > > > > > > > > principles don't produce consciousness (as you claim) you won't really

> > > > > > > > > > > be doing anything other than time-wasting pseudo-science, because

> > > > > > > > > > > scientifically-minded people will say you haven't answered our

> > > > > > > > > > > question - why do you introduce the idea of a 'soul' (for which there

> > > > > > > > > > > is no evidence) to explain consciousness in animals such as ourselves,

> > > > > > > > > > > when evolution (for which there is evidence) explains it perfectly?

>

> > > > > > > > > > I'll put the example again:

>

> > > > > > > > > > --------

> > > > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > > > contained additional information such as source node, destination node

> > > > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > > > examined by a bank of computers, verifying that, for the input

> > > > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > > > unexplained messages appeared.

> > > > > > > > > > ---------

>

> > > > > > > > > > You seem to have understood the example, but asked whether it walked

> > > > > > > > > > and talked; yes, let's say it does, and would be a contender for an

> > > > > > > > > > Olympic gymnastics medal, and would say it hurt if you stamped on its foot.

>

> > > > > > > > > > With regards to the question you are having a problem comprehending,

> > > > > > > > > > I'll reword it for you. By conscious experiences I am referring to the

> > > > > > > > > > sensations we experience such as pleasure and pain, visual and

> > > > > > > > > > auditory perception etc.

>

> > > > > > > > > > Are you suggesting:

>

> > > > > > > > > > A) The atheists were wrong to consider it to be consciously

> > > > > > > > > > experiencing.

> > > > > > > > > > B) Were correct to say that was consciously experiencing, but that it

> > > > > > > > > > doesn't influence behaviour.

> > > > > > > > > > C) It might have been consciously experiencing, how could you tell, it

> > > > > > > > > > doesn't influence behaviour.

> > > > > > > > > > D) It was consciously experiencing and that influenced its behaviour.

>

> > > > > > > > > > If you select D, as all the nodes are giving the same outputs as they

> > > > > > > > > > would have singly in the lab given the same inputs, could you also

> > > > > > > > > > select between D1, and D2:

>

> > > > > > > > > > D1) The outputs given by a single node in the lab were also influenced

> > > > > > > > > > by conscious experiences.

> > > > > > > > > > D2) The outputs given by a single node in the lab were not influenced

> > > > > > > > > > by conscious experiences, but the influence of the conscious

> > > > > > > > > > experiences within the robot, is such that the outputs are the same as

> > > > > > > > > > without the influence of the conscious experiences.

>

> > > > > > > > > > If you are still having problems comprehending, then simply write out

> > > > > > > > > > the sentences you could not understand, and I'll reword them for you.

> > > > > > > > > > If you want to add another option, then simply state it, obviously the

> > > > > > > > > > option should be related to whether the robot was consciously

> > > > > > > > > > experiencing, and if it was, whether the conscious experiences

> > > > > > > > > > influenced behaviour.

>

> > > > > > > > > As you requested, here is the question I didn't understand: "Regarding

> > > > > > > > > the robot, where is the influence/ affect of conscious experiences in

> > > > > > > > > the outputs, if all the nodes are just giving the same outputs as they

> > > > > > > > > would have singly in the lab given the same inputs."

>

> > > > > > > > I had rewritten the question for you. Why did you simply reference the

> > > > > > > > old one, and not answer the new reworded one? It seems like you are

> > > > > > > > desperately trying to avoid answering.

>

> > > > > > > I was interested in your suggestion that you think you can prove that

> > > > > > > God exists because computers aren't conscious (or whatever it is you

> > > > > > > are trying to say), and I was trying to help you explain that.

> > > > > > > Whatever it is you are trying to ask me, I can't even attempt to give

> > > > > > > you an answer unless you can express your question in proper

> > > > > > > sentences. In any case, you are the one who is claiming properties

> > > > > > > of consciousness of robot inputs and outputs (or whatever you are

> > > > > > > claiming), so you will in due course be expected to defend your theory

> > > > > > > (once you have written it out in comprehensible English sentences)

> > > > > > > without any help from me, so you shouldn't be relying on my answers to

> > > > > > > your questions anyway.

>

> > > > > > Why didn't you point out the sentence that was giving you a problem in

> > > > > > the reworded question? As I say, it seems like you are desperately

> > > > > > trying to avoid answering. Though to help you, I am not claiming any

> > > > > > properties or consciousness of robot inputs and outputs, I am

> > > > > > questioning you about your understanding. Though since you think

> > > > > > twanging elastic bands with twigs is sufficient to give rise to

> > > > > > conscious experiences, I'd be interested if you choose option A, if

> > > > > > ever you do find the courage to answer.

>

> > > > > It takes no courage to respond to Usenet postings, just a computer

> > > > > terminal and a certain amount of time and patience. All I was asking

> > > > > you to do is to explain what you mean by the following sentence:

> > > > > "Regarding the robot, where is the influence/ affect of conscious

> > > > > experiences in the outputs, if all the nodes are just giving the same

> > > > > outputs as they would have singly in the lab given the same inputs."

> > > > > What inputs and outputs are you talking about?

>

> > > > Here is the example again (though it is still above in the history,

> > > > from when you first said you didn't understand the bit you have

> > > > quoted, and I reworded it for you, though you then ignored the

> > > > reworded version and restated you didn't understand the original

> > > > question, when I pointed out what you had done and asked you to answer

> > > > the reworded version, you ignored me again, and have done so yet

> > > > again, giving the impression you are trying to avoid answering):

>

> > > > --------

> > > > Supposing there was an artificial neural network, with a billion more

> > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > consciously experiencing. Now supposing each message between each node

> > > > contained additional information such as source node, destination node

> > > > (which could be the same as the source node, for feedback loops), the

> > > > time the message was sent, and the message. Each node wrote out to a

> > > > log on receipt of a message, and on sending a message out. Now after

> > > > an hour of the atheist communicating with the robot, the logs could be

> > > > examined by a bank of computers, verifying that, for the input

> > > > messages each received, the outputs were those expected by a single

> > > > node in a lab given the same inputs. They could also verify that no

> > > > unexplained messages appeared.

> > > > ---------

>

> > > > The inputs and outputs refer to the inputs and outputs to the nodes.

> > > > I'll leave it up to you to either answer the bit you have just quoted,

> > > > or the reworded version (the one with the options A-D, and D1 and D2).

> > > > Is there a chance that you might answer this time?

>

> > > OK, you have at last clarified for me that the inputs and outputs to

> > > which you were referring in your question "Where is the influence/

> > > affect of conscious experiences in the outputs, if all the nodes are

> > > just giving the same outputs as they would have singly in the lab

> > > given the same inputs?", are the I/O amongst the nodes in a computer

> > > neural network. You are not discussing inputs to the network from

> > > sensors or any outputs from the network to an external device. Is

> > > that correct?

>

> > > You say "if all the nodes are just giving the same outputs as they

> > > would singly in the lab given the same inputs". So you have two

> > > neural networks, one in a lab, the other not in a lab, but both of

> > > them with identical inputs and outputs amongst the nodes. Am I

> > > right?

>

> > > Can you please confirm that I understand you so far, and I may then be

> > > able to answer your question with one of A-D, D1 and D2, as you

> > > requested.

>

> > The robot has sensors, which feed inputs to the neural network. Which

> > is why I responded to you earlier that:

> > ------------

> > You seem to have understood the example, but asked whether it walked

> > and talked; yes, let's say it does, and would be a contender for an

> > Olympic gymnastics medal, and would say it hurt if you stamped on its foot.

> > ------------

>

> > There is not a neural network in the lab, it is just a single node

> > with input channels and output channels. Any node's behaviour in

> > the neural network can be reproduced by feeding it the same inputs as

> > the one in the network was shown to have received in the logs.

>

> OK. You have explained that you are examining one node and its inputs

> and outputs. Your question is "Where is the influence/affect of

> conscious experiences in the outputs.". To understand that question,

> I need to know:

> A) What conscious experiences are you referring to?

> B) When you wrote 'affect', did you mean 'effect'?

> C) does 'in the outputs' mean 'contained in the outputs'?

>

> I think (but I'm not certain), that you are leading up to saying that

> each node is not conscious, so the whole device is not conscious. If

> that is your point, I have answered your question when I initially

> replied to your posting (see above):

> > > > > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > > > > similar data, and we know that we are conscious.

> > > > > > > > > > > > > > > [...]

> > > > > > > > > > > > > > > To understand why an

> > > > > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > > > > awareness in biological organisms.

 

Regarding your queries:

A) I never said it had any conscious experiences. That is the claim

some atheists are making in the scenario, you don't have to agree with

their claims, there are options which allow you to disagree. I'll

repeat the scenario for you yet again below, and give the options

again.

B) I meant affect as in

1. to act on; produce an effect or change in: Cold weather affected

the crops.

( http://dictionary.reference.com/browse/affect )

C) I mean would the conscious experiences affect the outputs.

 

Here is the scenario yet again:

--------

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that, for the input

messages each received, the outputs were those expected by a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.

---------
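
[Editorial aside: the logging and checking the scenario describes is straightforward to sketch. A minimal Python illustration, assuming a toy stand-in for whatever a real node computes; Message, node_function and verify are names chosen for illustration, not part of the scenario.]

    from collections import namedtuple

    # One logged message, carrying the extra fields the scenario lists.
    Message = namedtuple("Message", ["source", "destination", "time", "payload"])

    def node_function(payload):
        # Toy stand-in for a single node's fixed behaviour, the same on
        # the lab bench as inside the running robot.
        return payload * 2

    def verify(log):
        # Check that each logged output is what a lone lab node would give
        # for the same logged input; anything else is an unexplained message.
        return all(node_function(received.payload) == sent.payload
                   for received, sent in log)

    log = [(Message(1, 2, 0.0, 3), Message(2, 4, 0.1, 6))]
    print(verify(log))  # True: the logs show nothing beyond ordinary node behaviour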

 

And again, here is the question: What would you be saying with regards

to it consciously experiencing:

 

A) The atheists were wrong to consider it to be consciously

experiencing.

B) Were correct to say that was consciously experiencing, but that it

doesn't influence behaviour.

C) It might have been consciously experiencing, how could you tell, it

doesn't influence behaviour.

D) It was consciously experiencing and that influenced its behaviour.

 

If you select D, as all the nodes are giving the same outputs as they

would have singly in the lab given the same inputs, could you also

select between D1, and D2:

 

D1) The outputs given by a single node in the lab were also influenced

by conscious experiences.

D2) The outputs given by a single node in the lab were not influenced

by conscious experiences, but the influence of the conscious

experiences within the robot, is such that the outputs are the same as

without the influence of the conscious experiences.

 

Looking forward to you eventually giving an answer.

Guest someone2
Posted

On 8 Jul, 19:34, someone2 <glenn.spig...@btinternet.com> wrote:

> On 8 Jul, 16:56, "Jeckyl" <n...@nowhere.com> wrote:

>

>

>

>

>

> > "someone2" <glenn.spig...@btinternet.com> wrote in message

>

> >news:1183863917.768229.282730@k79g2000hse.googlegroups.com...

>

> > > --------

> > > Supposing there was an artificial neural network, with a billion more

> > > nodes than you had neurons in your brain, and a very 'complicated'

> > > configuration, which drove a robot.

>

> > Ok .. let's pretend

>

> > > The robot, due to its behaviour

> > > (being a Turing Equivalent), caused some atheists to claim it was

> > > consciously experiencing.

>

> > Why only atheists?

>

> > > Now supposing each message between each node

> > > contained additional information such as source node, destination node

> > > (which could be the same as the source node, for feedback loops), the

> > > time the message was sent, and the message. Each node wrote out to a

> > > log on receipt of a message, and on sending a message out. Now after

> > > an hour of the atheist communicating with the robot, the logs could be

> > > examined by a bank of computers, verifying that, for the input

> > > messages each received, the outputs were those expected by a single

> > > node in a lab given the same inputs. They could also verify that no

> > > unexplained messages appeared.

>

> > Fine

>

> > So what is the problem .. that says nothing about whether or not the robot is

> > conscious.

>

> So what would you be saying with regards to it consciously

> experiencing:

>

> A) The atheists were wrong to consider it to be consciously

> experiencing.

> B) Were correct to say that was consciously experiencing, but that it

> doesn't influence behaviour.

> C) It might have been consciously experiencing, how could you tell, it

> doesn't influence behaviour.

> D) It was consciously experiencing and that influenced its behaviour.

>

> If you select D, as all the nodes are giving the same outputs as they

> would have singly in the lab given the same inputs, could you also

> select between D1, and D2:

>

> D1) The outputs given by a single node in the lab were also influenced

> by conscious experiences.

> D2) The outputs given by a single node in the lab were not influenced

> by conscious experiences, but the influence of the conscious

> experiences within the robot, is such that the outputs are the same as

> without the influence of the conscious experiences.

>

> or

>

> E) Avoid answering (like the other atheists so far)

>

 

Saw you had just posted, and was wondering whether you were going to be

the first atheist on the forum to actually answer.

Guest James Norris
Posted

On Jul 9, 2:00 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 9 Jul, 01:27, James Norris <JimNorri...@aol.com> wrote:

> > On Jul 8, 11:35 am, someone2 <glenn.spig...@btinternet.com> wrote:

> [...]

> > > > > > > Why didn't you point out the sentence that was giving you a problem in

> > > > > > > the reworded question? As I say, it seems like you are desperately

> > > > > > > trying to avoid answering. Though to help you, I am not claiming any

> > > > > > > properties or consciousness of robot inputs and outputs, I am

> > > > > > > questioning you about your understanding. Though since you think

> > > > > > > twanging elastic bands with twigs is sufficient to give rise to

> > > > > > > conscious experiences, I'd be interested if you choose option A, if

> > > > > > > ever you do find the courage to answer.

>

> > > > > > It takes no courage to respond to Usenet postings, just a computer

> > > > > > terminal and a certain amount of time and patience. All I was asking

> > > > > > you to do is to explain what you mean by the following sentence:

> > > > > > "Regarding the robot, where is the influence/ affect of conscious

> > > > > > > experiences in the outputs, if all the nodes are just giving the same

> > > > > > outputs as they would have singly in the lab given the same inputs."

> > > > > > What inputs and outputs are you talking about?

> [...]

> > > > > --------

> > > > > Supposing there was an artificial neural network, with a billion more

> > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > consciously experiencing. Now supposing each message between each node

> > > > > contained additional information such as source node, destination node

> > > > > (which could be the same as the source node, for feedback loops), the

> > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > examined by a bank of computers, verifying that, for the input

> > > > > messages each received, the outputs were those expected by a single

> > > > > node in a lab given the same inputs. They could also verify that no

> > > > > unexplained messages appeared.

> > > > > ---------

>

> > > > > The inputs and outputs refer to the inputs and outputs to the nodes.

> > > > > I'll leave it up to you to either answer the bit you have just quoted,

> > > > > or the reworded version (the one with the options A-D, and D1 and D2).

> > > > > Is there a chance that you might answer this time?

>

> > > > OK, you have at last clarified for me that the inputs and outputs to

> > > > which you were referring in your question "Where is the influence/

> > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > just giving the same outputs as they would have singly in the lab

> > > > given the same inputs?", are the I/O amongst the nodes in a computer

> > > > neural network. You are not discussing inputs to the network from

> > > > sensors or any outputs from the network to an external device. Is

> > > > that correct?

>

> > > > You say "if all the nodes are just giving the same outputs as they

> > > > would singly in the lab given the same inputs". So you have two

> > > > neural networks, one in a lab, the other not in a lab, but both of

> > > > them with identical inputs and outputs amongst the nodes. Am I

> > > > right?

>

> > > > Can you please confirm that I understand you so far, and I may then be

> > > > able to answer your question with one of A-D, D1 and D2, as you

> > > > requested.

>

> > > The robot has sensors, which feed inputs to the neural network. Which

> > > is why I responded to you earlier that:

> > > ------------

> > > You seem to have understood the example, but asked whether it walked

> > > and talked; yes, let's say it does, and would be a contender for an

> > > Olympic gymnastics medal, and would say it hurt if you stamped on its foot.

> > > ------------

>

> > > There is not a neural network in the lab, it is just a single node

> > > with input channels and output channels. Any node's behaviour in

> > > the neural network can be reproduced by feeding it the same inputs as

> > > the one in the network was shown to have received in the logs.

>

> > OK. You have explained that you are examining one node and its inputs

> > and outputs. Your question is "Where is the influence/affect of

> > conscious experiences in the outputs.". To understand that question,

> > I need to know:

> > A) What conscious experiences are you referring to?

> > B) When you wrote 'affect', did you mean 'effect'?

> > C) does 'in the outputs' mean 'contained in the outputs'?

>

> > I think (but I'm not certain), that you are leading up to saying that

> > each node is not conscious, so the whole device is not conscious. If

> > that is your point, I have answered your question when I initially

> > replied to your posting as follows:

> > > > > > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > > > > > similar data, and we know that we are conscious.

> > > > > > > > > > > > > > > > [...]

> > > > > > > > > > > > > > > > To understand why an

> > > > > > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > > > > > awareness in biological organisms.

>

> Regarding your queries:

> A) I never said it had any conscious experiences. That is the claim

> some atheists are making in the scenario, you don't have to agree with

> their claims, there are options which allow you to disagree. I'll

> repeat the scenario for you yet again below, and give the options

> again.

 

OK, your stance is that you disagree with an earlier claim by some

atheists that the robot might be conscious.

> B) I meant affect as in

> 1. to act on; produce an effect or change in: Cold weather affected

> the crops.

> (http://dictionary.reference.com/browse/affect)

 

OK, I was asking because affective and effective have very specific

meanings in neuroscience.

> C) I mean would the conscious experiences affect the outputs.

 

So you are asking the atheists "Would the robot's supposed conscious

experiences affect the outputs?". And that is the question which you

want answered.

> Here is the scenario yet again:

> --------

> Supposing there was an artificial neural network, with a billion more

> nodes than you had neurons in your brain, and a very 'complicated'

> configuration, which drove a robot. The robot, due to its behaviour

> (being a Turing Equivalent), caused some atheists to claim it was

> consciously experiencing. Now supposing each message between each node

> contained additional information such as source node, destination node

> (which could be the same as the source node, for feedback loops), the

> time the message was sent, and the message. Each node wrote out to a

> log on receipt of a message, and on sending a message out. Now after

> an hour of the atheist communicating with the robot, the logs could be

> examined by a bank of computers, verifying that, for the input

> messages each received, the outputs were those expected by a single

> node in a lab given the same inputs. They could also verify that no

> unexplained messages appeared.

> ---------

>

> And again, here is the question: What would you be saying with regards

> to it consciously experiencing:

 

Basically, I find it difficult to relate the question you asked to

your suggested answers A-D, D1 and D2 (because you've omitted some of

the words which would be necessary to make your suggested answers

unambiguous) but I will try:

 

A) The atheists were wrong to consider it to be consciously

experiencing.

 

I don't know. The atheists you refer to were not necessarily morally

wrong, sinful, mistaken or misinformed to consider the neural network

to have conscious experiences. It was a hypothetical possibility -

the atheists reckon that a manufactured neural network might be

conscious.

 

B) Were correct to say that was consciously experiencing, but that it

doesn't influence behaviour.

 

Yes. The neural network might have been consciously experiencing,

without 'that it' influencing behaviour.

 

C) It might have been consciously experiencing, how could you tell, it

doesn't influence behaviour.

 

Yes and no. The neural network might have been consciously

experiencing, but you would not know that by observing its behaviour.

There might be other explanations, such as hardware glitches, if the

neural network behaved abnormally.

 

D) It was consciously experiencing and that influenced its behaviour.

 

Yes and no. If it was consciously experiencing its own behaviour,

that might or might not influence its behaviour. It might,

for example, be unable to, or decide not to, influence its own

behaviour.

> If you select D, as all the nodes are giving the same outputs as they

> would have singly in the lab given the same inputs, could you also

> select between D1, and D2:

>

> D1) The outputs given by a single node in the lab were also influenced

> by conscious experiences.

 

Yes, possibly.

> D2) The outputs given by a single node in the lab were not influenced

> by conscious experiences, but the influence of the conscious

> experiences within the robot, is such that the outputs are the same as

> without the influence of the conscious experiences.

 

Yes, possibly.

> Looking forward to you eventually giving an answer.

 

And here's my earlier reply again:

> > > > > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > > > > computer.

>

> > > > > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > > > > eventual 'glitch'.

Guest Jeckyl
Posted

"someone2" <glenn.spigel2@btinternet.com> wrote in message

news:1183945289.289706.9850@n60g2000hse.googlegroups.com...

> On 8 Jul, 19:34, someone2 <glenn.spig...@btinternet.com> wrote:

>> On 8 Jul, 16:56, "Jeckyl" <n...@nowhere.com> wrote:

>> > "someone2" <glenn.spig...@btinternet.com> wrote in message

>> >news:1183863917.768229.282730@k79g2000hse.googlegroups.com...

>> > > --------

>> > > Supposing there was an artificial neural network, with a billion more

>> > > nodes than you had neurons in your brain, and a very 'complicated'

>> > > configuration, which drove a robot.

>> > Ok .. let's pretend

>> > > The robot, due to its behaviour

>> > > (being a Turing Equivalent), caused some atheists to claim it was

>> > > consciously experiencing.

>> > Why only atheists?

>> > > Now supposing each message between each node

>> > > contained additional information such as source node, destination

>> > > node

>> > > (which could be the same as the source node, for feedback loops), the

>> > > time the message was sent, and the message. Each node wrote out to a

>> > > log on receipt of a message, and on sending a message out. Now after

>> > > an hour of the atheist communicating with the robot, the logs could

>> > > be

>> > > examined by a bank of computers, verifying that, for the input

>> > > messages each received, the outputs were those expected by a single

>> > > node in a lab given the same inputs. They could also verify that no

>> > > unexplained messages appeared.

>> > Fine

>> > So what is the problem .. that says nothing about whether or not the

>> > robot is

>> > conscious.

>>

>> So what would you be saying with regards to it consciously

>> experiencing:

>>

>> A) The atheists were wrong to consider it to be consciously

>> experiencing.

>> B) Were correct to say that was consciously experiencing, but that it

>> doesn't influence behaviour.

>> C) It might have been consciously experiencing, how could you tell, it

>> doesn't influence behaviour.

>> D) It was consciously experiencing and that influenced its behaviour.

 

We don't know if it WAS consciously experiencing.

If it was, then D; if it wasn't, then A.

One would need a definition of what comprises a conscious experience,

and then a test for the presence of such experiences.

 

However, for the sake of the argument, I'll pick D

>> If you select D, as all the nodes are giving the same outputs as they

>> would have singly in the lab given the same inputs, could you also

>> select between D1, and D2:

>>

>> D1) The outputs given by a single node in the lab were also influenced

>> by conscious experiences.

 

They were influenced by emulated conscious experiences.

 

If you provide it with the same inputs that the other nodes that encoded the

conscious experience would have supplied.

>> D2) The outputs given by a single node in the lab were not influenced

>> by conscious experiences, but the influence of the conscious

>> experiences within the robot, is such that the outputs are the same as

>> without the influence of the conscious experiences.

 

If the inputs are the same, the outputs are the same.

 

In the lab, the inputs were directly set to emulate those of conscious

experience encoded in the rest of the network, and so produced the same

results.
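
[Editorial aside: Jeckyl's point here can be put in a few lines of code. If a node is a pure function of its inputs, identical inputs force identical outputs, so replaying the logged inputs to a lone lab node must reproduce the logged outputs. The node rule below is an arbitrary toy, not anyone's actual model.]

    def node(inputs):
        # Arbitrary fixed rule standing in for a node's behaviour.
        return sum(inputs) % 7

    logged_inputs = [3, 1, 4]
    print(node(logged_inputs) == node(logged_inputs))  # True, by determinism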

>> or

>> E) Avoid answering (like the other atheists so far)

 

I'd prefer

 

F) The combination of the nodes encodes the conscious experience

> Saw you just post, was wondering whether you were going to maybe be

> the first atheist on the forum to actually answer.

 

I missed your earlier reply, or I would have.

Guest someone2
Posted

On 9 Jul, 03:41, James Norris <JimNorri...@aol.com> wrote:

> On Jul 9, 2:00 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 9 Jul, 01:27, James Norris <JimNorri...@aol.com> wrote:

> > > On Jul 8, 11:35 am, someone2 <glenn.spig...@btinternet.com> wrote:

> > [...]

> > > > > > > > Why didn't you point out the sentence that was giving you a problem in

> > > > > > > > the reworded question? As I say, it seems like you are desperately

> > > > > > > > trying to avoid answering. Though to help you, I am not claiming any

> > > > > > > > properties or consciousness of robot inputs and outputs, I am

> > > > > > > > questioning you about your understanding. Though since you think

> > > > > > > > twanging elastic bands with twigs is sufficient to give rise to

> > > > > > > > conscious experiences, I'd be interested if you choose option A, if

> > > > > > > > ever you do find the courage to answer.

>

> > > > > > > It takes no courage to respond to Usenet postings, just a computer

> > > > > > > terminal and a certain amount of time and patience. All I was asking

> > > > > > > you to do is to explain what you mean by the following sentence:

> > > > > > > "Regarding the robot, where is the influence/ affect of conscious

> > > > > > > > experiences in the outputs, if all the nodes are just giving the same

> > > > > > > outputs as they would have singly in the lab given the same inputs."

> > > > > > > What inputs and outputs are you talking about?

> > [...]

> > > > > > --------

> > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > contained additional information such as source node, destination node

> > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > examined by a bank of computers, varifying, that for the input

> > > > > > messages each received, the outputs were those expected by a single

> > > > > > node in a lab given the same inputs. They could also varify that no

> > > > > > unexplained messages appeared.

> > > > > > ---------

>

> > > > > > The inputs and outputs refer to the inputs and outputs to the nodes.

> > > > > > I'll leave it up to you to either answer the bit you have just quoted,

> > > > > > or the reworded version (the one with the options A-D, and D1 and D2).

> > > > > > Is there a chance that you might answer this time?

>

> > > > > OK, you have at last clarified for me that the inputs and outputs to

> > > > > which you were referring in your question "Where is the influence/

> > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > just giving the same outputs as they would have singly in the lab

> > > > > given the same inputs?", are the I/O amongst the nodes in a computer

> > > > > neural network. You are not discussing inputs to the network from

> > > > > sensors or any outputs from the network to an external device. Is

> > > > > that correct?

>

> > > > > You say "if all the nodes are just giving the same outputs as they

> > > > > would singly in the lab given the same inputs". So you have two

> > > > > neural networks, one in a lab, the other not in a lab, but both of

> > > > > them with identical inputs and outputs amongst the nodes. Am I

> > > > > right?

>

> > > > > Can you please confirm that I understand you so far, and I may then be

> > > > > able to answer your question with one of A-D, D1 and D2, as you

> > > > > requested.

>

> > > > The robot has sensors, which feed inputs to the neural network. Which

> > > > is why I responded to you earlier that:

> > > > ------------

> > > > You seem to have understood the example, but asked whether it walked

> > > > and talked. Yes, let's say it does, that it would be a contender for an

> > > > Olympic gymnastics medal, and that it said it hurt if you stamped on its foot.

> > > > ------------

>

> > > > There is not a neural network in the lab; it is just a single node

> > > > with input channels and output channels. Any node's behaviour in

> > > > the neural network can be reproduced by feeding it the same inputs as

> > > > the one in the network was shown, in the logs, to have received.

>

> > > OK. You have explained that you are examining one node and its inputs

> > > and outputs. Your question is "Where is the influence/affect of

> > > conscious experiences in the outputs.". To understand that question,

> > > I need to know:

> > > A) What conscious experiences are you referring to?

> > > B) When you wrote 'affect', did you mean 'effect'?

> > > C) Does 'in the outputs' mean 'contained in the outputs'?

>

> > > I think (but I'm not certain), that you are leading up to saying that

> > > each node is not conscious, so the whole device is not conscious. If

> > > that is your point, I have answered your question when I initially

> > > replied to your posting as follows:

> > > > > > > > > > > > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > > > > > > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > > > > > > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > > > > > > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > > > > > > > > > > > similar data, and we know that we are conscious.

> > > > > > > > > > > > > > > > > [...]

> > > > > > > > > > > > > > > > > To understand why an

> > > > > > > > > > > > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > > > > > > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > > > > > > > > > > > over billions of years to become aware of external reality, because

> > > > > > > > > > > > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > > > > > > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > > > > > > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > > > > > > > > > > > awareness in biological organisms.

>

> > Regarding your queries:

> > A) I never said it had any conscious experiences. That is the claim

> > some atheists are making in the scenario, you don't have to agree with

> > their claims, there are options which allow you to disagree. I'll

> > repeat the scenario for you yet again below, and give the options

> > again.

>

> OK, your stance is that you disagree with an earlier claim by some

> atheists that the robot might be conscious.

>

> > B) I meant affect as in

> > 1. to act on; produce an effect or change in: Cold weather affected

> > the crops.

> > (http://dictionary.reference.com/browse/affect)

>

> OK, I was asking because affective and effective have very specific

> meanings in neuroscience.

>

> > C) I mean would the conscious experiences affect the outputs.

>

> So you are asking the atheists "Would the robot's supposed conscious

> experiences affect the outputs?". And that is the question which you

> want answered.

>

> > Here is the scenario yet again:

> > --------

> > Supposing there was an artificial neural network, with a billion more

> > nodes than you had neurons in your brain, and a very 'complicated'

> > configuration, which drove a robot. The robot, due to its behaviour

> > (being a Turing Equivalent), caused some atheists to claim it was

> > consciously experiencing. Now supposing each message between each node

> > contained additional information such as source node, destination node

> > (which could be the same as the source node, for feedback loops), the

> > time the message was sent, and the message. Each node wrote out to a

> > log on receipt of a message, and on sending a message out. Now after

> > an hour of the atheist communicating with the robot, the logs could be

> > examined by a bank of computers, verifying that, for the input

> > messages each received, the outputs were those expected of a single

> > node in a lab given the same inputs. They could also verify that no

> > unexplained messages appeared.

> > ---------

>

> > And again, here is the question: What would you be saying with regards

> > to it consciously experiencing:

>

> Basically, I find it difficult to relate the question you asked to

> your suggested answers A-D, D1 and D2 (because you've omitted some of

> the words which would be necessary to make your suggested answers

> unambiguous) but I will try:

>

> A) The atheists were wrong to consider it to be consciously

> experiencing.

>

> I don't know. The atheists you refer to were not necessarily morally

> wrong, sinful, mistaken or misinformed, to consider the neural network

> to have conscious experiences. It was a hypothetical possibility -

> the atheists reckon that a manufactured neural network might be

> conscious.

>

> B) Were correct to say that it was consciously experiencing, but that it

> doesn't influence behaviour.

>

> Yes. The neural network might have been consciously experiencing,

> without 'that it' influencing behaviour.

>

> C) It might have been consciously experiencing, how could you tell, it

> doesn't influence behaviour.

>

> Yes and no. The neural network might have been consciously

> experiencing, but you would not know that by observing its behaviour.

> There might be other explanations, such as hardware glitches, if the

> neural network behaved abnormally.

>

> D) It was consciously experiencing and that influenced its behaviour.

>

> Yes and no. If it was consciously experiencing its own behaviour,

> that might or might not influence its behaviour. It might,

> for example, be unable to, or decide not to influence its own

> behaviour.

>

> > If you select D, as all the nodes are giving the same outputs as they

> > would have singly in the lab given the same inputs, could you also

> > select between D1, and D2:

>

> > D1) The outputs given by a single node in the lab were also influenced

> > by conscious experiences.

>

> Yes, possibly.

>

> > D2) The outputs given by a single node in the lab were not influenced

> > by conscious experiences, but the influence of the conscious

> > experiences within the robot, is such that the outputs are the same as

> > without the influence of the conscious experiences.

>

> Yes, possibly.

>

> > Looking forward to you eventually giving an answer.

>

> And here's my earlier reply again:

> > > > > > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > > > > > computer.

>

> > > > > > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > > > > > eventual 'glitch'.

 

I will run through your answers so far:

 

To A) By wrong, I meant incorrect.

To B) If you meant that it might have been consciously experiencing,

but that this wouldn't affect its behaviour, you should have selected C.

To C) The example clearly states that there are no abnormalities. All

nodes give the outputs as they would be expected to in the lab.

To D) You state that you don't know whether conscious experiences

affect behaviour.

 

Though on the possibility that conscious experiences do affect

behaviour, you selected:

 

D1) and stated that a single node in the lab might be consciously

experiencing and influencing its behaviour.

 

D2) you stated that possibly the influence of the conscious

experiences was the same as no influence. In which case, how did you

select D and say there was an influence?

 

You have tried to avoid answering the questions, and attempted in

the end to avoid the problem by creating confusion, answering

all the options.

 

If you don't want to select a single option, we can go through each

issue one at a time.

 

Firstly, where conscious experiences do not influence behaviour: do you

acknowledge that if conscious experiences didn't influence human

behaviour, then it would only be coincidental that we have the

conscious experiences that the 'meat machine' is talking about?

Guest someone2
Posted

On 9 Jul, 09:03, "Jeckyl" <n...@nowhere.com> wrote:

> "someone2" <glenn.spig...@btinternet.com> wrote in message

>

> news:1183945289.289706.9850@n60g2000hse.googlegroups.com...

>

>

>

>

>

> > On 8 Jul, 19:34, someone2 <glenn.spig...@btinternet.com> wrote:

> >> On 8 Jul, 16:56, "Jeckyl" <n...@nowhere.com> wrote:

> >> > "someone2" <glenn.spig...@btinternet.com> wrote in message

> >> >news:1183863917.768229.282730@k79g2000hse.googlegroups.com...

> >> > > --------

> >> > > Supposing there was an artificial neural network, with a billion more

> >> > > nodes than you had neurons in your brain, and a very 'complicated'

> >> > > configuration, which drove a robot.

> >> > Ok .. let's pretend

> >> > > The robot, due to its behaviour

> >> > > (being a Turing Equivalent), caused some atheists to claim it was

> >> > > consciously experiencing.

> >> > Why only atheists?

> >> > > Now supposing each message between each node

> >> > > contained additional information such as source node, destination

> >> > > node

> >> > > (which could be the same as the source node, for feedback loops), the

> >> > > time the message was sent, and the message. Each node wrote out to a

> >> > > log on receipt of a message, and on sending a message out. Now after

> >> > > an hour of the atheist communicating with the robot, the logs could

> >> > > be

> >> > > examined by a bank of computers, verifying that, for the input

> >> > > messages each received, the outputs were those expected of a single

> >> > > node in a lab given the same inputs. They could also verify that no

> >> > > unexplained messages appeared.

> >> > Fine

> >> > So what is the problem .. that says nothing about whether or not the

> >> > robot is

> >> > conscious.

>

> >> So what would you be saying with regards to it consciously

> >> experiencing:

>

> >> A) The atheists were wrong to consider it to be consciously

> >> experiencing.

> >> B) Were correct to say that it was consciously experiencing, but that it

> >> doesn't influence behaviour.

> >> C) It might have been consciously experiencing, how could you tell, it

> >> doesn't influence behaviour.

> >> D) It was consciously experiencing and that influenced its behaviour.

>

> We don't know if it WAS consciously experiencing.

> If it was, then D; if it wasn't, then A.

> One would need a definition of what comprises a conscious experience,

> and then a test for the presence of such experiences.

>

> However, for the sake of the argument, I'll pick D

>

> >> If you select D, as all the nodes are giving the same outputs as they

> >> would have singly in the lab given the same inputs, could you also

> >> select between D1, and D2:

>

> >> D1) The outputs given by a single node in the lab were also influenced

> >> by conscious experiences.

>

> They were influenced by emulated conscious experiences.

>

> If you provide it with the same inputs that the other nodes that encoded the

> conscious experience would have supplied.

>

> >> D2) The outputs given by a single node in the lab were not influenced

> >> by conscious experiences, but the influence of the conscious

> >> experiences within the robot, is such that the outputs are the same as

> >> without the influence of the conscious experiences.

>

> If the inputs are the same, the outputs are the same.

>

> In the lab, the inputs were directly set to emulate those of conscious

> experience encoded in the rest of the network, and so produced the same

> results

>

> >> or

> >> E) Avoid answering (like the other atheists so far)

>

> I'd prefer

>

> F) The combination of the nodes encodes the conscious experience

>

> > Saw you just post, was wondering whether you were going to maybe be

> > the first atheist on the forum to actually answer.

>

> I missed your earlier reply, or I would have

>

 

Regarding D2, let's say one node acted as an OR gate, and received

inputs from 1000 nodes, the input on one channel representing a 1;

thus the output it gave on each of its 100 output channels was a 1. Are you

suggesting that what caused one input channel to be 1, and the other

999 to be 0 would give the node in the lab the appropriate conscious

experience?
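
To make that concrete, here is a minimal sketch of such a node (Python, purely illustrative: the node name, channel labels, and log format are my own assumptions, not something specified in the scenario):

--------
import time

def or_node(node_id, inputs, log):
    # An OR-gate node: it outputs 1 on each of its 100 output
    # channels if any input channel carries a 1, otherwise 0.
    for source, value in inputs:
        # Log every message on receipt, as the scenario requires.
        log.append(('recv', source, node_id, time.time(), value))
    output = 1 if any(value == 1 for _, value in inputs) else 0
    for dest in range(100):
        # Log every message sent out, also as required.
        log.append(('send', node_id, ('out', dest), time.time(), output))
    return output

# One input channel at 1, the other 999 at 0, as described above.
log = []
inputs = [(('in', i), 1 if i == 0 else 0) for i in range(1000)]
assert or_node('N1', inputs, log) == 1
--------

Fed the same 1000 logged inputs in the lab, the function necessarily returns the same output and writes the same log entries, which is the premise behind the D1/D2 choice.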

Guest Jim07D7
Posted

"Jeckyl" <noone@nowhere.com> said:

>"someone2" <glenn.spigel2@btinternet.com> wrote in message

>news:1183945289.289706.9850@n60g2000hse.googlegroups.com...

>> On 8 Jul, 19:34, someone2 <glenn.spig...@btinternet.com> wrote:

>>> On 8 Jul, 16:56, "Jeckyl" <n...@nowhere.com> wrote:

>>> > "someone2" <glenn.spig...@btinternet.com> wrote in message

>>> >news:1183863917.768229.282730@k79g2000hse.googlegroups.com...

>>> > > --------

>>> > > Supposing there was an artificial neural network, with a billion more

>>> > > nodes than you had neurons in your brain, and a very 'complicated'

>>> > > configuration, which drove a robot.

>>> > Ok .. let's pretend

>>> > > The robot, due to its behaviour

>>> > > (being a Turing Equivalent), caused some atheists to claim it was

>>> > > consciously experiencing.

>>> > Why only atheists?

>>> > > Now supposing each message between each node

>>> > > contained additional information such as source node, destination

>>> > > node

>>> > > (which could be the same as the source node, for feedback loops), the

>>> > > time the message was sent, and the message. Each node wrote out to a

>>> > > log on receipt of a message, and on sending a message out. Now after

>>> > > an hour of the atheist communicating with the robot, the logs could

>>> > > be

>>> > > examined by a bank of computers, varifying, that for the input

>>> > > messages each received, the outputs were those expected by a single

>>> > > node in a lab given the same inputs. They could also varify that no

>>> > > unexplained messages appeared.

>>> > Fine

>>> > So what is the problem .. that says nothing about whether or not the

>>> > robot is

>>> > conscious.

>>>

>>> So what would you be saying with regards to it consciously

>>> experiencing:

>>>

>>> A) The atheists were wrong to consider it to be consciously

>>> experiencing.

>>> B) Were correct to say that it was consciously experiencing, but that it

>>> doesn't influence behaviour.

>>> C) It might have been consciously experiencing, how could you tell, it

>>> doesn't influence behaviour.

>>> D) It was consciously experiencing and that influenced its behaviour.

>

>We don't know if it WAS consciously experiencing.

>If it was, then D; if it wasn't, then A.

>One would need a definition of what comprises a conscious experience,

>and then a test for the presence of such experiences.

>

>However, for the sake of the argument, I'll pick D

>

>>> If you select D, as all the nodes are giving the same outputs as they

>>> would have singly in the lab given the same inputs, could you also

>>> select between D1, and D2:

>>>

>>> D1) The outputs given by a single node in the lab were also influenced

>>> by conscious experiences.

>

>They were influenced by emulated conscious experiences.

>

>If you provide it with the same inputs that the other nodes that encoded the

>conscious experience would have supplied.

>

>>> D2) The outputs given by a single node in the lab were not influenced

>>> by conscious experiences, but the influence of the conscious

>>> experiences within the robot, is such that the outputs are the same as

>>> without the influence of the conscious experiences.

>

>If the inputs are the same, the outputs are the same.

>

>In the lab, the inputs were directly set to emulate those of conscious

>experience encoded in the rest of the network, and so produced the same

>results

>

>>> or

>>> E) Avoid answering (like the other atheists so far)

>

>I'd prefer

>

>F) The combination of the nodes encodes the conscious experience

>

>> Saw you just post, was wondering whether you were going to maybe be

>> the first atheist on the forum to actually answer.

>

>I missed your earlier reply, or I would have

>

I seem to be one of those people who hasn't answered satisfactorily,

but gave up when it became obvious that someone2 lost track of who he

was talking with. Reducing the number of us seemed best.

 

I am glad to see you have carried this discussion a bit further. Your

approach might be the better way to see if there is anything more to

the matter than a fallacious argument from incredulity.

Guest James Norris
Posted

On Jul 9, 11:26 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 9 Jul, 03:41, James Norris <JimNorri...@aol.com> wrote:

> [...]

> > So you are asking the atheists "Would the robot's supposed conscious

> > experiences affect the outputs?". And that is the question which you

> > want answered.

>

> > > Here is the scenario yet again:

> > > --------

> > > Supposing there was an artificial neural network, with a billion more

> > > nodes than you had neurons in your brain, and a very 'complicated'

> > > configuration, which drove a robot. The robot, due to its behaviour

> > > (being a Turing Equivalent), caused some atheists to claim it was

> > > consciously experiencing. Now supposing each message between each node

> > > contained additional information such as source node, destination node

> > > (which could be the same as the source node, for feedback loops), the

> > > time the message was sent, and the message. Each node wrote out to a

> > > log on receipt of a message, and on sending a message out. Now after

> > > an hour of the atheist communicating with the robot, the logs could be

> > > examined by a bank of computers, verifying that, for the input

> > > messages each received, the outputs were those expected of a single

> > > node in a lab given the same inputs. They could also verify that no

> > > unexplained messages appeared.

> > > ---------

>

> > > And again, here is the question: What would you be saying with regards

> > > to it consciously experiencing:

>

> > Basically, I find it difficult to relate the question you asked to

> > your suggested answers A-D, D1 and D2 (because you've omitted some of

> > the words which would be necessary to make your suggested answers

> > unambiguous) but I will try:

>

> > A) The atheists were wrong to consider it to be consciously

> > experiencing.

>

> > I don't know. The atheists you refer to were not necessarily morally

> > wrong, sinful, mistaken or misinformed, to consider the neural network

> > to have conscious experiences. It was a hypothetical possibility -

> > the atheists reckon that a manufactured neural network might be

> > conscious.

>

> > B) Were correct to say that it was consciously experiencing, but that it

> > doesn't influence behaviour.

>

> > Yes. The neural network might have been consciously experiencing,

> > without 'that it' influencing behaviour.

>

> > C) It might have been consciously experiencing, how could you tell, it

> > doesn't influence behaviour.

>

> > Yes and no. The neural network might have been consciously

> > experiencing, but you would not know that by observing its behaviour.

> > There might be other explanations, such as hardware glitches, if the

> > neural network behaved abnormally.

>

> > D) It was consciously experiencing and that influenced its behaviour.

>

> > Yes and no. If it was consciously experiencing its own behaviour,

> > that might or might not influence its behaviour. It might,

> > for example, be unable to, or decide not to influence its own

> > behaviour.

>

> > > If you select D, as all the nodes are giving the same outputs as they

> > > would have singly in the lab given the same inputs, could you also

> > > select between D1, and D2:

>

> > > D1) The outputs given by a single node in the lab were also influenced

> > > by conscious experiences.

>

> > Yes, possibly.

>

> > > D2) The outputs given by a single node in the lab were not influenced

> > > by conscious experiences, but the influence of the conscious

> > > experiences within the robot, is such that the outputs are the same as

> > > without the influence of the conscious experiences.

>

> > Yes, possibly.

>

> > > Looking forward to you eventually giving an answer.

>

> > And here's my earlier reply again:

> > > > > > > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > > > > > > computer.

>

> > > > > > > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > > > > > > eventual 'glitch'.

>

> I will run through your answers so far:

>

> To A) By wrong, I meant incorrect.

> To B) If you meant that it might have been consciously experiencing,

> but that this wouldn't affect its behaviour, you should have selected C.

> To C) The example clearly states that there are no abnormalities. All

> nodes give the outputs as they would be expected to in the lab.

> To D) You state that you don't know whether conscious experiences

> affect behaviour.

>

> Though on the possibility that conscious experiences do affect

> behaviour, you selected:

>

> D1) and stated that a single node in the lab might be consciously

> experiencing and influencing its behaviour.

>

> D2) you stated that possibly the influence of the conscious

> experiences was the same as no influence. In which case, how did you

> select D and say there was an influence?

 

My answer to D was not 'don't know', but 'yes and no', by which I

meant that it is yes in some situations, no in others.

> You have tried to avoid answering the questions, and attempted in

> the end to avoid the problem by creating confusion, answering

> all the options.

>

> If you don't want to select a single option, we can go through each

> issue one at a time.

 

OK, that sounds much more sensible. In my response to your earlier

question, I didn't realise I was only allowed to select one of your 6

suggested answers.

> Firstly, where conscious experiences do not influence behaviour: do you

> acknowledge that if conscious experiences didn't influence human

> behaviour, then it would only be coincidental that we have the

> conscious experiences that the 'meat machine' is talking about?

 

When you write 'where conscious experiences do not influence

behaviour', I take it you mean 'where the hypothetical conscious

experiences of the neural network do not influence the behaviour of

the neural network', because conscious experiences in a neural network

and its behaviour is what your earlier questions were about. So your

next sentence must mean something along the lines of 'taking the

analogy of human behaviour, if a human brain was not influenced by its

conscious experiences, but the human brain indicated (by talking) that

it was influenced by conscious experiences, what it was saying would

be irrelevant'. If that's what you mean, I agree with you. The human

could just be in a trance, automatically reciting a sentence about

consciousness that was triggered by some external input, without

consciously experiencing anything. So where do you go from there?

Guest Matt Silberstein
Posted

On Fri, 29 Jun 2007 05:40:25 -0700, in alt.atheism , someone2

<glenn.spigel2@btinternet.com> in

<1183120825.782515.196650@n2g2000hse.googlegroups.com> wrote:

 

[snip]

>If we were like the robot, simply meat machines, then the same would

>hold for us. In which case it would have to be a coincidence the

>conscious experiences the 'meat machine' talked actually existed (they

>couldn't have been influencing the behaviour).

 

Sorry, but this still makes no sense to me and you have yet to get

beyond this unsupported claim. It is not a coincidence, there is

physical activity that goes on affecting conscious experience. You

have a host of hidden assumptions that you refuse, it seems, to make

explicit or explore.

>Are the atheists having a problem following it (maybe the complexity

>of it), or just having a problem that you believed an implausible

>story?

 

Or maybe people (not just atheists) disagree with your unsupported

claims.

--

Matt Silberstein

 

Do something today about the Darfur Genocide

 

http://www.beawitness.org

http://www.darfurgenocide.org

http://www.savedarfur.org

 

"Darfur: A Genocide We can Stop"

Guest jientho@aol.com
Posted

On Jul 9, 2:40 pm, Matt Silberstein

<RemoveThisPrefixmatts2nos...@ix.netcom.com> wrote:

> On Fri, 29 Jun 2007 05:40:25 -0700, in alt.atheism , someone2

> <glenn.spig...@btinternet.com> in

>

> <1183120825.782515.196...@n2g2000hse.googlegroups.com> wrote:

>

> [snip]

>

> >If we were like the robot, simply meat machines, then the same would

> >hold for us. In which case it would have to be a coincidence the

> >conscious experiences the 'meat machine' talked [about] actually existed (they

> >couldn't have been influencing the behaviour).

>

> Sorry, but this still makes no sense to me

 

You snipped what "the same" referred (was agreed?) to, but I think I

can take a cut at a further explanation, in hopes it might help you make

sense of it. The context of all of this, to remind, is that a present-

laws-of-physics explanation completely explains all (observable)

behavior, in particular the behavior of any given human; that is the

Materialistic premise. So (the argument goes) there is no influence

upon the behavior by anything not within-the-laws-of-physics. The

laws of physics as presently constituted make no use whatsoever of

conscious experiences as explanatory phenomena. So under the

Materialist premise, conscious experiences do not influence observed

behavior.

 

Conscious experiences may certainly be physically _caused_ under the

Materialist premise (perhaps "emergent"), but they are an

_effect_only_ and not the cause of any behavior themselves.

 

Now part of observable behavior of humans is "talking about conscious

experiences". That behavior is not _caused_ by any of the conscious

experiences so talked about. For example, I couldn't be saying "I

have pain in my foot" because I feel pain in my foot. Both the

feeling and the talking could be physically sourced by some common

cause, but in no way could the feeling-of-pain itself be any part of

the cause of the talking-of-pain. That is the "coincidental" aspect

that "someone" finds implausible. Probably because so much of our

psychological, social, and intellectual systems rely on the exact

opposite premise -- that our _subjective_ selves are the cause of our

behavior and not just some side-effect (side show) of the universe.

(Compare all the successful jury nullification going on in the U.S.

judicial system by appeal to "Johnny was the victim of xxxx physical

circumstances, as we've proved, and is thus not really guilty of the

crime his physical body committed." Or let's take an example perhaps

closer to your own interests -- the raiding in Darfur; just a bunch of

physical activity unfolding in its own course without any possible

influence from appeals to solely subjectively-experienced things like

ethics and morality, let alone emotion. Plausible?)

 

Jeff

Guest Jim07D7
Posted

jientho@aol.com said:

>On Jul 9, 2:40 pm, Matt Silberstein

><RemoveThisPrefixmatts2nos...@ix.netcom.com> wrote:

>> On Fri, 29 Jun 2007 05:40:25 -0700, in alt.atheism , someone2

>> <glenn.spig...@btinternet.com> in

>>

>> <1183120825.782515.196...@n2g2000hse.googlegroups.com> wrote:

>>

>> [snip]

>>

>> >If we were like the robot, simply meat machines, then the same would

>> >hold for us. In which case it would have to be a coincidence the

>> >conscious experiences the 'meat machine' talked [about] actually existed (they

>> >couldn't have been influencing the behaviour).

>>

>> Sorry, but this still makes no sense to me

>

>You snipped what "the same" referred (was agreed?) to, but I think I

>can take a cut a further explanation in hopes it might help you make

>sense of it. The context of all of this, to remind, is that a present-

>laws-of-physics explanation completely explains all (observable)

>behavior, in particular the behavior of any given human; that is the

>Materialistic premise.

 

The materialism that someone2 is imagining is as you say. It is a

Straw Man, because materialism does not definitionally limit itself to

a "present-laws" explanation of behavior. The misrepresentation that

materialism is based on the physics of 2007 (or, more likely, in

someone2's mind, of 1907) underlies much of the mistaken conclusion

that it is implausible.

 

Materialism is, it should be said, an unprovable metaphysical

commitment, in that it depends on the unverifiable premise that it can

in principle explain all phenomena without reference to immaterial

entities. Inductive predictions of future success can be drawn from

the success of materialism (or, more properly, physicalism) -- but

inductive conclusions are always provisional.

>So (the argument goes) there is no influence

>upon the behavior by anything not within-the-laws-of-physics. The

>laws of physics as presently constituted make no use whatsoever of

>conscious experiences as explanatory phenomena. So under the

>Materialist premise, conscious experiences do not influence observed

>behavior.

 

Of course we know that physics does not rule out conscious

deliberation as an intermediate between inputs and outputs, one that is

influential by virtue of its utility in developing outputs based on

inputs. Materialism only rules out consciousness as indicative of an

immaterial entity whose influence has no material cause. The

reductionist will say that physics will make room for conscious

experience.

 

It is only eliminative materialism that will not, but if someone2's

brief was against eliminative materialism, I doubt he'd cause much

stir.

>

>Conscious experiences may certainly be physically _caused_ under the

>Materialist premise (perhaps "emergent"), but they are an

>_effect_only_ and not the cause of any behavior themselves.

 

Contrary to the conclusion that conscious experience has no effect, it

could be the materialist's position that the occurrence of conscious

experience is a material phenomenon, and its occurrence implies that

it has some utility, and if the current laws of physics see no use for

it, the current laws of physics need study and modification.

>

>Now part of observable behavior of humans is "talking about conscious

>experiences". That behavior is not _caused_ by any of the conscious

>experiences so talked about. For example, I couldn't be saying "I

>have pain in my foot" because I feel pain in my foot. Both the

>feeling and the talking could be physically sourced by some common

>cause, but in no way could the feeling-of-pain itself be any part of

>the cause of the talking-of-pain. That is the "coincidental" aspect

>that "someone" finds implausible.

 

His conclusion is due to his positing materialism in the way you have

described.

>Probably because so much of our

>psychological, social, and intellectual systems rely on the exact

>opposite premise -- that our _subjective_ selves are the cause of our

>behavior and not just some side-effect (side show) of the universe.

 

This false dichotomy follows from someone2's false view of

materialism.

>(Compare all the successful jury nullification going on in the U.S.

>judicial system by appeal to "Johnny was the victim of xxxx physical

>circumstances, as we've proved, and is thus not really guilty of the

>crime his physical body committed." Or let's take an example perhaps

>closer to your own interests -- the raiding in Darfur; just a bunch of

>physical activity unfolding in its own course without any possible

>influence from appeals to solely subjectively-experienced things like

>ethics and morality, let alone emotion. Plausible?)

 

Ethics and morality represent learned, internalized values, which must

be evaluated along with other learned and/or genetically internalized

elements, such as fear, hate, greed, etc, if external influences call

for a decision and action that may affect the decider. When

functioning, the human considers and estimates which output will best

satisfy the needs of the self-concept she has formed. It is the

utility of a self-concept, that makes conscious experience useful and

influential as part of the flow of events.

Guest someone2
Posted

On 9 Jul, 16:03, Jim07D7 <Jim0...@nospam.net> wrote:

> "Jeckyl" <n...@nowhere.com> said:

>

>

>

>

>

> >"someone2" <glenn.spig...@btinternet.com> wrote in message

> >news:1183945289.289706.9850@n60g2000hse.googlegroups.com...

> >> On 8 Jul, 19:34, someone2 <glenn.spig...@btinternet.com> wrote:

> >>> On 8 Jul, 16:56, "Jeckyl" <n...@nowhere.com> wrote:

> >>> > "someone2" <glenn.spig...@btinternet.com> wrote in message

> >>> >news:1183863917.768229.282730@k79g2000hse.googlegroups.com...

> >>> > > --------

> >>> > > Supposing there was an artificial neural network, with a billion more

> >>> > > nodes than you had neurons in your brain, and a very 'complicated'

> >>> > > configuration, which drove a robot.

> >>> > Ok .. let's pretend

> >>> > > The robot, due to its behaviour

> >>> > > (being a Turing Equivalent), caused some atheists to claim it was

> >>> > > consciously experiencing.

> >>> > Why only atheists?

> >>> > > Now supposing each message between each node

> >>> > > contained additional information such as source node, destination

> >>> > > node

> >>> > > (which could be the same as the source node, for feedback loops), the

> >>> > > time the message was sent, and the message. Each node wrote out to a

> >>> > > log on receipt of a message, and on sending a message out. Now after

> >>> > > an hour of the atheist communicating with the robot, the logs could

> >>> > > be

> >>> > > examined by a bank of computers, verifying that, for the input

> >>> > > messages each received, the outputs were those expected of a single

> >>> > > node in a lab given the same inputs. They could also verify that no

> >>> > > unexplained messages appeared.

> >>> > Fine

> >>> > So what is the problem .. that says nothing about whether or not the

> >>> > robot is

> >>> > conscious.

>

> >>> So what would you be saying with regards to it consciously

> >>> experiencing:

>

> >>> A) The atheists were wrong to consider it to be consciously

> >>> experiencing.

> >>> B) Were correct to say that it was consciously experiencing, but that it

> >>> doesn't influence behaviour.

> >>> C) It might have been consciously experiencing, how could you tell, it

> >>> doesn't influence behaviour.

> >>> D) It was consciously experiencing and that influenced its behaviour.

>

> >We don't know if it WAS consciously experiencing.

> >If it was, then D; if it wasn't, then A.

> >One would need a definition of what comprises a conscious experience,

> >and then a test for the presence of such experiences.

>

> >However, for the sake of the argument, I'll pick D

>

> >>> If you select D, as all the nodes are giving the same outputs as they

> >>> would have singly in the lab given the same inputs, could you also

> >>> select between D1, and D2:

>

> >>> D1) The outputs given by a single node in the lab were also influenced

> >>> by conscious experiences.

>

> >They were influenced by emulated conscious experiences.

>

> >If you provide it with the same inputs that the other nodes that encoded the

> >conscious experience would have supplied.

>

> >>> D2) The outputs given by a single node in the lab were not influenced

> >>> by conscious experiences, but the influence of the conscious

> >>> experiences within the robot, is such that the outputs are the same as

> >>> without the influence of the conscious experiences.

>

> >If the inputs are the same, the outputs are the same.

>

> >In the lab, the inputs were directly set to emulate those of conscious

> >experience encoded in the rest of the network, and so produced the same

> >results

>

> >>> or

> >>> E) Avoid answering (like the other atheists so far)

>

> >I'd prefer

>

> >F) The combination of the nodes encodes the conscious experience

>

> >> Saw you just post, was wondering whether you were going to maybe be

> >> the first atheist on the forum to actually answer.

>

> >I missed your earlier reply, or I would have

>

> I seem to be one of those people who hasn't answered satisfactorily,

> but gave up when it became obvious that someone2 lost track of who he

> was talking with. Reducing the number of us seemed best.

>

> I am glad to see you have carried this discussion a bit further. Your

> approach might be the better way to see if there is anything more to

> the matter than a fallacious argument from incredulity.

>

 

I didn't lose track of who I was talking to, you dishonest person; I

even quoted the posts you said the things in.

Guest someone2
Posted

On 9 Jul, 16:46, James Norris <JimNorri...@aol.com> wrote:

> On Jul 9, 11:26 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

>

>

>

>

> > On 9 Jul, 03:41, James Norris <JimNorri...@aol.com> wrote:

> > [...]

> > > So you are asking the atheists "Would the robot's supposed conscious

> > > experiences affect the outputs?". And that is the question which you

> > > want answered.

>

> > > > Here is the scenario yet again:

> > > > --------

> > > > Supposing there was an artificial neural network, with a billion more

> > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > consciously experiencing. Now supposing each message between each node

> > > > contained additional information such as source node, destination node

> > > > (which could be the same as the source node, for feedback loops), the

> > > > time the message was sent, and the message. Each node wrote out to a

> > > > log on receipt of a message, and on sending a message out. Now after

> > > > an hour of the atheist communicating with the robot, the logs could be

> > > > examined by a bank of computers, varifying, that for the input

> > > > messages each received, the outputs were those expected by a single

> > > > node in a lab given the same inputs. They could also varify that no

> > > > unexplained messages appeared.

> > > > ---------

>

> > > > And again, here is the question: What would you be saying with regards

> > > > to it consciously experiencing:

>

> > > Basically, I find it difficult to relate the question you asked to

> > > your suggested answers A-D, D1 and D2 (because you've omitted some of

> > > the words which would be necessary to make your suggested answers

> > > unambiguous) but I will try:

>

> > > A) The atheists were wrong to consider it to be consciously

> > > experiencing.

>

> > > I don't know. The atheists you refer to were not necessarily morally

> > > wrong, sinful, mistaken or misinformed, to consider the neural network

> > > to have conscious experiences. It was a hypothetical possibility -

> > > the atheists reckon that a manufactured neural network might be

> > > conscious.

>

> > > B) Were correct to say that it was consciously experiencing, but that it

> > > doesn't influence behaviour.

>

> > > Yes. The neural network might have been consciously experiencing,

> > > without 'that it' influencing behaviour.

>

> > > C) It might have been consciously experiencing, how could you tell, it

> > > doesn't influence behaviour.

>

> > > Yes and no. The neural network might have been consciously

> > > experiencing, but you would not know that by observing its behaviour.

> > > There might be other explanations, such as hardware glitches, if the

> > > neural network behaved abnormally.

>

> > > D) It was consciously experiencing and that influenced its behaviour.

>

> > > Yes and no. If it was consciously experiencing its own behaviour,

> > > that might or might not influence its behaviour. It might,

> > > for example, be unable to, or decide not to influence its own

> > > behaviour.

>

> > > > If you select D, as all the nodes are giving the same outputs as they

> > > > would have singly in the lab given the same inputs, could you also

> > > > select between D1, and D2:

>

> > > > D1) The outputs given by a single node in the lab were also influenced

> > > > by conscious experiences.

>

> > > Yes, possibly.

>

> > > > D2) The outputs given by a single node in the lab were not influenced

> > > > by conscious experiences, but the influence of the conscious

> > > > experiences within the robot, is such that the outputs are the same as

> > > > without the influence of the conscious experiences.

>

> > > Yes, possibly.

>

> > > > Looking forward to you eventually giving an answer.

>

> > > And here's my earlier reply again:

> > > > > > > > > > > > > > > > When I wrote that the robot device 'might in fact have been

> > > > > > > > > > > > > > > > conscious', that is in the context of your belief that a conscious

> > > > > > > > > > > > > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > > > > > > > > > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > > > > > > > > > > > > But if that were possible, the same might also be occurring in the

> > > > > > > > > > > > > > > > computer experiment you described. Perhaps the residual currents in

> > > > > > > > > > > > > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > > > > > > > > > > > > cpu and memory workings of the running program, which by some divine

> > > > > > > > > > > > > > > > influence is conscious. I think you have been asking other responders

> > > > > > > > > > > > > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > > > > > > > > > > > > computer.

>

> > > > > > > > > > > > > > > > Remembering that computers have been specifically designed so that the

> > > > > > > > > > > > > > > > external environment does not affect them, and the most sensitive

> > > > > > > > > > > > > > > > components (the program and cpu) are heavily shielded from external

> > > > > > > > > > > > > > > > influences so that it is almost impossible for anything other than a

> > > > > > > > > > > > > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > > > > > > > > > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > > > > > > > > > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > > > > > > > > > > > > program to behave totally differently, and crash or print a wrong

> > > > > > > > > > > > > > > > number. The reason that computer memory is so heavily shielded is

> > > > > > > > > > > > > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > > > > > > > > > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > > > > > > > > > > > > rays and the like. If your data analysis showed that a couple of

> > > > > > > > > > > > > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > > > > > > > > > > > > seemingly changed of their own accord, there would be no reason to

> > > > > > > > > > > > > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > > > > > > > > > > > > had passed through a critical logic gate in the bus controller or

> > > > > > > > > > > > > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > > > > > > > > > > > > eventual 'glitch'.

>

> > I will run through your answers so far:

>

> > To A) By wrong, I meant incorrect.

> > To B) If you meant that it might have been consciously experiencing,

> > but that this wouldn't affect its behaviour, you should have selected C.

> > To C) The example clearly states that there are no abnormalities. All

> > nodes give the outputs as they would be expected to in the lab.

> > To D) You state that you don't know whether conscious experiences

> > affect behaviour.

>

> > Though on the possibility that conscious experiences do affect

> > behaviour, you selected:

>

> > D1) and stated that a single node in the lab might be consciously

> > experiencing and influencing its behaviour.

>

> > D2) you stated that possibly the influence of the conscious

> > experiences was the same as no influence. In which case, how did you

> > select D and say there was an influence?

>

> My answer to D was not 'don't know', but 'yes and no', by which I

> meant that it is yes in some situations, no in others.

>

> > You have tried to avoid answering the questions, and attempted in

> > the end to avoid the problem by creating confusion, answering

> > all the options.

>

> > If you don't want to select a single option, we can go through each

> > issue one at a time.

>

> OK, that sounds much more sensible. In my response to your earlier

> question, I didn't realise I was only allowed to select one of your 6

> suggested answers.

>

> > Firstly, where conscious experiences do not influence behaviour: do you

> > acknowledge that if conscious experiences didn't influence human

> > behaviour, then it would only be coincidental that we have the

> > conscious experiences that the 'meat machine' is talking about?

>

> When you write 'where conscious experiences do not influence

> behaviour', I take it you mean 'where the hypothetical conscious

> experiences of the neural network do not influence the behaviour of

> the neural network', because conscious experiences in a neural network

> and its behaviour are what your earlier questions were about. So your

> next sentence must mean something along the lines of 'taking the

> analogy of human behaviour, if a human brain was not influenced by its

> conscious experiences, but the human brain indicated (by talking) that

> it was influenced by conscious experiences, what it was saying would

> be irrelevant'. If that's what you mean, I agree with you. The human

> could just be in a trance, automatically reciting a sentence about

> consciousness that was triggered by some external input, without

> consciously experiencing anything. So where do you go from there?

 

Well, it is implausible that we aren't influenced by our conscious

experiences; therefore we cannot be simply a mechanism like the robot

in the example, where there is no conscious influence: the outputs

(which determine the behaviour) are the same as they are for a single

node in the lab, which is uninfluenced by conscious experiences.

Guest someone2
Posted

On 9 Jul, 19:40, Matt Silberstein

<RemoveThisPrefixmatts2nos...@ix.netcom.com> wrote:

> On Fri, 29 Jun 2007 05:40:25 -0700, in alt.atheism , someone2

> <glenn.spig...@btinternet.com> in

>

> <1183120825.782515.196...@n2g2000hse.googlegroups.com> wrote:

>

> [snip]

>

> >If we were like the robot, simply meat machines, then the same would

> >hold for us. In which case it would have to be a coincidence the

> >conscious experiences the 'meat machine' talked actually existed (they

> >couldn't have been influencing the behaviour).

>

> Sorry, but this still makes no sense to me and you have yet to get

> beyond this unsupported claim. It is not a coincidence: there is

> physical activity that goes on affecting conscious experience. You

> have a host of hidden assumptions that you refuse, it seems, to make

> explicit or explore.

>

> >Are the atheists having a problem following it (maybe the complexity

> >of it), or just having a problem that you believed an implausible

> >story?

>

> Or maybe people (not just atheists) disagree with your unsupported

> claims.

>

 

Well maybe this example will help what I am saying make sense to you.

 

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that, for the input

messages each received, the outputs were those expected of a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.

 

What would you be saying with regards to it consciously experiencing:

 

A) The atheists, mentioned in the example, were wrong to consider it

to be consciously experiencing.

B) Were correct to say that it was consciously experiencing, but that it

doesn't influence behaviour.

C) It might have been consciously experiencing, how could you tell, it

doesn't influence behaviour.

D) It was consciously experiencing and that influenced its behaviour.

If you select D, as all the nodes are giving the same outputs as they

would have singly in the lab given the same inputs, could you also

select between D1, and D2:

D1) The outputs given by a single node in the lab were also influenced

by conscious experiences.

D2) The outputs given by a single node in the lab were not influenced

by conscious experiences, but the influence of the conscious

experiences within the robot, is such that the outputs are the same as

without the influence of the conscious experiences.
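
For anyone who wants the checking step spelled out, here is a minimal sketch of what the bank of computers would be verifying (Python, purely illustrative: the log format and the toy echo behaviour are my own assumptions, not part of the scenario):

--------
def verify_logs(logs, lab_replay):
    # logs maps node_id -> list of entries, each entry being
    # ('recv' or 'send', source, destination, time, message).
    # lab_replay(node_id, received) returns the messages a single
    # node in the lab sends when fed the same received messages.
    for node_id, entries in logs.items():
        received = [e for e in entries if e[0] == 'recv']
        sent = [e[4] for e in entries if e[0] == 'send']
        if sent != lab_replay(node_id, received):
            return False  # a mismatch, or an unexplained message
    return True

# Toy example: a node that simply echoes each message it receives.
logs = {'n1': [('recv', 'n0', 'n1', 0.0, 'x'),
               ('send', 'n1', 'n2', 0.1, 'x')]}
echo = lambda node_id, received: [e[4] for e in received]
assert verify_logs(logs, echo)
--------

If every node passes this check, the outputs are fully accounted for by the inputs, which is exactly the situation the D1/D2 choice asks about.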
