Implausibility of Materialism


Guest Jim07D7
Posted

someone2 <glenn.spigel2@btinternet.com> said:

>On 6 Jul, 19:48, Jim07D7 <Jim0...@nospam.net> wrote:

>> someone2 <glenn.spig...@btinternet.com> said:

>>

>> >On 6 Jul, 18:02, Jim07D7 <Jim0...@nospam.net> wrote:

>> >> someone2 <glenn.spig...@btinternet.com> said:

>>

>> >> >On 6 Jul, 16:18, Jim07D7 <Jim0...@nospam.net> wrote:

>> >> <...>

>>

>> >> >> I am done dealing with you on the robot consciousness issue. If you

>> >> >> want to reiterate that discussion, please select another person.

>>

>> >> >> Let me start more simply. What, exactly is free will free of, besides

>> >> >> coercion by other agents?

>>

>> >> >What do you mean you're done with the issue; do you mean that you accept

>> >> >that any robot following the known laws of physics couldn't be

>> >> >influenced by any conscious experiences, or do you mean that you can't

>> >> >face it?

>>

>> >> I find your second alternative mildly insulting, considering the time

>> >> I have given to this.

>>

>> >> I mean that any being (a robot or human or thermostat) that has inputs

>> >> and converts them to outputs has an influential role in what the

>> >> outputs are. If it consciously experiences and processes the inputs

>> >> and determines the outputs, then it consciously influences what the

>> >> outputs are.

>>

>> >> I do not know if any robot following the known laws of physics

>> >> couldn't be influenced by any [of its own] conscious experiences. It

>> >> seems that robots currently do OK by outsourcing consciousness to us.

>> >> After all, they are evolving amazingly fast, via us, symbiotically. ;-)

>>

>> >> >Free will, is for example, that between 2 options, both were possible,

>> >> >and you were able to consciously influence which would happen.

>>

>> >> From what is it free?

>>

>> >Right, so you are saying:

>>

>> Actually, I am asking, from what is it free?

>>

>> I will rearrange your reply to put here, what seems to be a direct

>> reply.

>>

>> >Regarding the 'free will', it is free to choose either option A or B.

>> >It is free from any rules which state which one must happen. As I said

>> >both are possible, before it was consciously willed which would

>> >happen.

>>

>> How do we know whether something is free of any rules which state

>> which one must happen?

>>

I'd like your response to this question.

>>

>> >That any mechanism that has inputs and converts them to outputs has an

>> >influential role in what the outputs are.

>>

>> >Which is true in the sense that the configuration of the mechanism

>> >will influence the outputs given the inputs.

>>

>> That is the sense in which I mean "influence". It doesn't require

>> conscious experience.

>>

>> >In what sense would the outputs be influenced whether the mechanism

>> >was consciously experiencing or not?

>> >To put it another way, there is a robot which, because of its

>> >behaviour, an atheist claims is consciously experiencing. The robot's

>> >behaviour is explained to the atheist to simply be due to the

>> >mechanism following the laws of physics. The atheist insists that the

>> >mechanism is consciously experiencing, because of the behaviour of the

>> >mechanism. You ask the atheist to point to where there is any evidence

>> >of it consciously experiencing, and where any influence of those

>> >conscious experiences are seen in the behaviour (which was the basis

>> >of them claiming it was consciously experiencing in the first place).

>> >Do you accept that there would be no evidence of any influence of it

>> >consciously experiencing, thus no evidence of it consciously

>> >experiencing?

>>

>> This boils down to whether you and the atheist can agree on an

>> influencing behavior that you both accept as evidence of conscious

>> >> experience. People have been looking for such things, for years.

>

>Well I'd expect an influencing behaviour to be a behaviour other than

>which you'd expect if the influence wasn't present.

 

Could you provide an example?

>Maybe you could

>explain how the atheist could explain the behaviour being that which

>you would expect if there weren't conscious experiences, to be one

>influenced by conscious experiences?

 

I don't have an example. I believe we make this determination on the

basis of behavior plus other similarities to ourselves. History has

shown that conscious decision making has been ascribed to other

societies of humans, and then to some animals, only slowly, and only

when they have human "sponsors".

 

It is difficult to design effective, purely behavioral tests. There

are some efforts to avoid bots, by hiding passwords in designs, such

that only humans can read them. But the successful designs exclude

some percentage of humans from passing the test. See:

 

http://www.captcha.net/

>

>

>With regards to how we would know whether something was free from

>rules which state which one must happen, you would have no evidence

>other than your own experience of being able to will which one

>happens. Though if we ever detected the will influencing behaviour,

>which I doubt we will, then you would also have the evidence of there

being no discernible rules. Any rule shown to the being, the being

>could either choose to adhere to, or break, in the choice they made.

 

We'd have to rule out that they were following a rule when they chose

to break, or adhere to the rule that was shown to them. But it seems

to me that a rule could be discerned, or randomness could be

discerned. Successful poker players would be good test subjects,

because they are good at hiding their betting rules.
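The point that either a rule or randomness might be discerned from a record of choices can be sketched in code. This is a toy illustration only; the helper names and the two bare-bones tests below are assumptions of this sketch, not anything proposed in the thread:

```python
import math
from collections import Counter

def matches_rule(choices, rule):
    """Check whether every choice agrees with a candidate deterministic rule.
    `rule` maps the history so far to a predicted next choice."""
    return all(rule(choices[:i]) == c for i, c in enumerate(choices))

def looks_random(choices, z_crit=1.96):
    """Crude two-sided frequency test: is the black/white split consistent
    with a fair coin? (A real battery would also check runs, lags, etc.)"""
    n = len(choices)
    k = Counter(choices)["black"]
    z = (k - n / 2) / (math.sqrt(n) / 2)
    return abs(z) < z_crit  # ~95% acceptance band

# A subject who strictly alternates follows a discernible rule:
alternating = ["black", "white"] * 500
rule = lambda history: "black" if len(history) % 2 == 0 else "white"
print(matches_rule(alternating, rule))  # True: the rule is discerned
print(looks_random(alternating))        # True: a balanced count alone
                                        # cannot rule out a hidden rule
```

The second result illustrates the difficulty Jim raises: passing a simple randomness check does not show the absence of a rule; every candidate rule would have to be ruled out.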


Guest someone2
Posted

On 6 Jul, 21:54, Jim07D7 <Jim0...@nospam.net> wrote:

> someone2 <glenn.spig...@btinternet.com> said:

>
> <...>

You avoided answering how the behaviour of the mechanism, simply

explained by the mechanism following the laws of physics, shows any

influence of conscious experiences. Though maybe it was for lack of an

example that you couldn't understand the point being made. If so, here

is an example:

 

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that for the input

messages each node received, the outputs were those expected of a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.
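The log check described here is mechanical enough to sketch. A minimal toy version follows; the log layout and function names are illustrative assumptions, not anything specified in the thread:

```python
def verify_logs(log, node_functions):
    """Replay each node's logged inputs through a reference copy of that
    node's transfer function and confirm the logged outputs match.

    log: list of (node_id, inputs, logged_outputs) entries.
    node_functions: node_id -> reference function run in the lab."""
    for node_id, inputs, logged_outputs in log:
        expected = node_functions[node_id](inputs)
        if expected != logged_outputs:
            return False  # an unexplained message would surface here
    return True

# Toy network: one node that sums its input messages.
adder = lambda msgs: [sum(msgs)]
log = [("n0", [1, 2], [3]), ("n0", [5, 5], [10])]
print(verify_logs(log, {"n0": adder}))  # True: every output is accounted
                                        # for by the mechanism alone
```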

 

Where was the influence of any conscious experiences in the robot?

 

 

 

 

Regarding free will, and your question as to how we know whether

something is free of any rules which state which one must happen, I

have already replied:

-----------

With regards to how we would know whether something was free from

rules which state which one must happen, you would have no evidence

other than your own experience of being able to will which one

happens. Though if we ever detected the will influencing behaviour,

which I doubt we will, then you would also have the evidence of there

being no discernible rules. Any rule shown to the being, the being

could either choose to adhere to, or break, in the choice they made.

-----------

 

Though I'd like to change this to: for any rule you had for which

choice they were going to make, say pressing the black or white button

1000 times, they could adhere to it or not; this could include any

random distribution rule.

 

The reason for the change is that a robot could always have a

part which gave the same response as the one it was shown, or changed the

response so as to not be the same as the one shown to it. So I have

removed the prediction from being shown.

 

I have defined free will for you:

 

Free will, is for example, that between 2 options, both were possible,

and you were able to consciously influence which would happen.

 

and added:

 

Regarding the 'free will', it is free to choose either option A or B.

It is free from any rules which state which one must happen. As I said

both are possible, before it was consciously willed which would

happen.

 

We can possibly never show that the will was free from any rules, as

how would we know there weren't rules that we hadn't discerned.

 

In the robot example above, it can be shown that the behaviour was not

influenced by any conscious experiences, and thus it did not have free

will. It has also been explained to you why it is implausible that we

don't consciously influence our behaviour (otherwise it would have to

be coincidental that the conscious experience the 'meat machine'

expressed existed).

Guest Jim07D7
Posted

someone2 <glenn.spigel2@btinternet.com> said:

>On 6 Jul, 21:54, Jim07D7 <Jim0...@nospam.net> wrote:

> <...>

I'd still like your response to this question.

>> >> How do we know whether something is free of any rules which state

>> >> which one must happen?

 

 

>> <...>

>You avoided answering how the behaviour of the mechanism, simply

>explained by the mechanism following the laws of physics, shows any

>influence of conscious experiences. Though maybe it was for lack of an

>example that you couldn't understand the point being made. If so, here

>is an example:

 

 

 

I answered it.

>> I don't have an example. I believe we make this determination on the

>> basis of behavior plus other similarities to ourselves. History has

>> shown that conscious decision making has been ascribed to other

>> societies of humans, and then to some animals, only slowly, and only

>> when they have human "sponsors".

>>

>> It is difficult to design effective, purely behavioral tests. There

>> are some efforts to avoid bots, by hiding passwords in designs, such

>> that only humans can read them. But the successful designs exclude

>> some percentage of humans from passing the test. See:

>>

>> http://www.captcha.net/

 

>

>Supposing there was an artificial neural network, with a billion more

>nodes than you had neurons in your brain, and a very 'complicated'

>configuration, which drove a robot. The robot, due to its behaviour

>(being a Turing Equivalent), caused some atheists to claim it was

>consciously experiencing. Now supposing each message between each node

>contained additional information such as source node, destination node

>(which could be the same as the source node, for feedback loops), the

>time the message was sent, and the message. Each node wrote out to a

>log on receipt of a message, and on sending a message out. Now after

>an hour of the atheist communicating with the robot, the logs could be

>examined by a bank of computers, verifying that for the input

>messages each node received, the outputs were those expected of a single

>node in a lab given the same inputs. They could also verify that no

>unexplained messages appeared.

>

>Where was the influence of any conscious experiences in the robot?

>

If it is conscious, the influence is in the outputs, just like ours

are.

>

>

>Regarding free will, and your question as to how we know whether

>something is free of any rules which state which one must happen, I

>have already replied:

>-----------

>With regards to how we would know whether something was free from

>rules which state which one must happen, you would have no evidence

>other than your own experience of being able to will which one

>happens. Though if we ever detected the will influencing behaviour,

>which I doubt we will, then you would also have the evidence of there

>being no discernible rules. Any rule shown to the being, the being

>could either choose to adhere to, or break, in the choice they made.

>-----------

We never have evidence of the absence of discernible rules. We have

the absence of DISCERNED rules. You equate things that should not be

equated.

 

>Though I'd like to change this to: for any rule you had for which

>choice they were going to make, say pressing the black or white button

>1000 times, they could adhere to it or not; this could include any

>random distribution rule.

 

Of course, the rule could be to consult a quantum random number

generator. So, how can we discern that there is no rule?

>

>The reason for the change is that a robot could always have a

>part which gave the same response as the one it was shown, or changed the

>response so as to not be the same as the one shown to it. So I have

>removed the prediction from being shown.

 

So the absence of a rule is completely unprovable.

>

>I have defined free will for you:

>

>Free will, is for example, that between 2 options, both were possible,

>and you were able to consciously influence which would happen.

>

>and added:

>

>Regarding the 'free will', it is free to choose either option A or B.

>It is free from any rules which state which one must happen. As I said

>both are possible, before it was consciously willed which would

>happen.

 

But as you just said, we can't show that there is no rule being

applied.

>

>We can possibly never show that the will was free from any rules, as

>how would we know there weren't rules that we hadn't discerned.

 

Indeed.

>

>In the robot example above, it can be shown that the behaviour was not

>influenced by any conscious experiences, and thus it did not have free

>will.

 

That's what you think. But how do you show it was not influenced by

conscious experiences? A consciousness might be involved in rule

selection, and might do it so consistently while you are watching,

that it looks lawful.

> It has also been explained to you why it is implausible that we

>don't consciously influence our behaviour (otherwise it would have to

>be coincidental that the conscious experience the 'meat machine'

>expressed existed).

 

No, the conscious influence could be to keep the brain applying the

rules as best it can. If the brain was saturated with alcohol, it

might have a tough time, for example.

Guest someone2
Posted

On 6 Jul, 23:02, Jim07D7 <Jim0...@nospam.net> wrote:

> someone2 <glenn.spig...@btinternet.com> said:

>

> >On 6 Jul, 21:54, Jim07D7 <Jim0...@nospam.net> wrote:

> >> someone2 <glenn.spig...@btinternet.com> said:

>

> >> >On 6 Jul, 19:48, Jim07D7 <Jim0...@nospam.net> wrote:

> >> >> someone2 <glenn.spig...@btinternet.com> said:

>

> >> >> >On 6 Jul, 18:02, Jim07D7 <Jim0...@nospam.net> wrote:

> >> >> >> someone2 <glenn.spig...@btinternet.com> said:

>

> >> >> >> >On 6 Jul, 16:18, Jim07D7 <Jim0...@nospam.net> wrote:

> >> >> >> <...>

>

> >> >> >> >> I am done dealing with you on the robot consciousness issue. If you

> >> >> >> >> want to reiterate that discussion, please select another person.

>

> >> >> >> >> Let me start more simply. What, exactly is free will free of, besides

> >> >> >> >> coercion by other agents?

>

> >> >> >> >What do you mean your done with the issue, do you mean that you accept

> >> >> >> >that any robot following the known laws of physics couldn't be

> >> >> >> >influenced by any conscious experiences, or do you mean that you can't

> >> >> >> >face it?

>

> >> >> >> I find your second alternative mildly insulting, considering the time

> >> >> >> I have given to this.

>

> >> >> >> I mean that any being (a robot or human or thermostat) that has imputs

> >> >> >> and converts them to outputs has an influential role in what the

> >> >> >> outputs are. If it consciously experiences and processes the imputs

> >> >> >> and determines the outputs, then it consciously influences what the

> >> >> >> outputs are.

>

> >> >> >> I do not know if any robot following the known laws of physics

> >> >> >> couldn't be influenced by any [of its own] conscious experiences. It

> >> >> >> seems that robots currently do OK by outsourcing consciousness to us.

> >> >> >> After all, they evolving amazingly fast, via us, symbiotically. ;-)

>

> >> >> >> >Free will, is for example, that between 2 options, both were possible,

> >> >> >> >and you were able to consciously influence which would happen.

>

> >> >> >> From what is it free?

>

> >> >> >Right, so you are saying:

>

> >> >> Actually, I am asking, from what is it free?

>

> >> >> I will rearrange your reply to put here, what seems to be a direct

> >> >> reply.

>

> >> >> >Regarding the 'free will', it is free to choose either option A or B.

> >> >> >It is free from any rules which state which one must happen. As I said

> >> >> >both are possible, before it was consciously willed which would

> >> >> >happen.

>

> >> >> How do we know whether something is free of any rules which state

> >> >> which one must happen?

>

> >> I'd like your response to this question.

>

>

>

> I'd still like your response to this question.

>

> >> >> How do we know whether something is free of any rules which state

> >> >> which one must happen?

>

>

>

>

>

> >> >> >That any mechanism that has inputs and converts them to outputs has an

> >> >> >influential role in what the outputs are.

>

> >> >> >Which is true in the sense that the configuration of the mechanism

> >> >> >will influence the outputs given the inputs.

>

> >> >> That is the sense in which I mean "influence". It doesn't require

> >> >> conscious experience.

>

> >> >> >In what sense would the outputs be influenced whether the mechanism

> >> >> >was consciously experiencing or not?

> >> >> >To put it another way, there is a robot which because of its

> >> >> >behaviour, an atheist claims is consciously experiencing. The robots

> >> >> >behaviour is explained to the atheist to simply be due to the

> >> >> >mechanism following the laws of physics. The atheist insists that the

> >> >> >mechanism is consciously experiencing, because of the behaviour of the

> >> >> >mechanism. You ask the atheist to point to where there is any evidence

> >> >> >of it consciously experiencing, and where any influence of those

> >> >> >conscious experiences are seen in the behaviour (which was the basis

> >> >> >of them claiming it was consciously experiencing in the first place).

> >> >> >Do you accept that there would be no evidence of any influence of it

> >> >> >consciously experiencing, thus no evidence of it consciously

> >> >> >experiencing.

>

> >> >> This boils down to whether you and the atheist can agree on an

> >> >> influencing behavior that you both accept as evidence of conscious

> >> >> experience. People have been looking for such things, for years.- >

>

> >> >Well I'd expect an influencing behaviour to be a behaviour other than

> >> >which you'd expect if the influence wasn't present.

>

> >> Could you provide an example?

>

> >> >Maybe you could

> >> >explain how the atheist could explain the behaviour being that which

> >> >you would expect if there wasn't conscious experiences, to be one

> >> >influenced by conscious experiences?

>

> >> I don't have an example. I believe we make this determination on the

> >> basis of behavior plus other similarities to ourselves. History has

> >> shown that conscious decision making has been ascribed to other

> >> societies of humans, and then to some animals, only slowly, and only

> >> when they have human "sponsors".

>

> >> It is difficult to design effective, purely behavioral tests. THere

> >> are some efforts to avoid bots, by hiding passwords in designs, such

> >> that only humans can read them. But the successful designs exclude

> >> some percentage of humans from passing the test. See:

>

> >>http://www.captcha.net/

>

> >> >With regards to how we would know whether something was free from

> >> >rules which state which one must happen, you would have no evidence

> >> >other than your own experience of being able to will which one

> >> >happens. Though if we ever detected the will influencing behaviour,

> >> >which I doubt we will, then you would also have the evidence of there

> >> >being no discernable rules. Any rule shown to the being, the being

> >> >could either choose to adhere to, or break, in the choice they made.

>

> >> We'd have to rule out that they were following a rule when they chose

> >> to break, or adhere to the rule that was shown to them. But it seems

> >> to me that a rule could be discerned, or randomness could be

> >> discerned. Successful poker players would be good test subjects,

> >> because they are good at hiding their betting rules.

>

> >You avoided answering how the behaviour of the mechanism, simply

> >explained by the mechanism following the laws of physics, shows any

> >influence of conscious experiences. Though maybe it was because of a

> >lack of example that you couldn't understand the point being made. If

> >so here is an example:

>

>

>

> I answered it.

>

> >> I don't have an example. I believe we make this determination on the

> >> basis of behavior plus other similarities to ourselves. History has

> >> shown that conscious decision making has been ascribed to other

> >> societies of humans, and then to some animals, only slowly, and only

> >> when they have human "sponsors".

>

> >> It is difficult to design effective, purely behavioral tests. There

> >> are some efforts to avoid bots, by hiding passwords in designs, such

> >> that only humans can read them. But the successful designs exclude

> >> some percentage of humans from passing the test. See:

>

> >>http://www.captcha.net/

>

>

>

> >Supposing there was an artificial neural network, with a billion more

> >nodes than you had neurons in your brain, and a very 'complicated'

> >configuration, which drove a robot. The robot, due to its behaviour

> >(being a Turing Equivalent), caused some atheists to claim it was

> >consciously experiencing. Now supposing each message between each node

> >contained additional information such as source node, destination node

> >(which could be the same as the source node, for feedback loops), the

> >time the message was sent, and the message. Each node wrote out to a

> >log on receipt of a message, and on sending a message out. Now after

> >an hour of the atheist communicating with the robot, the logs could be

> >examined by a bank of computers, verifying that for the input

> >messages each received, the outputs were those expected by a single

> >node in a lab given the same inputs. They could also verify that no

> >unexplained messages appeared.

>

> >Where was the influence of any conscious experiences in the robot?

>

> If it is conscious, the influence is in the outputs, just like ours

> are.

>

> >Regarding free will, and your question as to how we know whether

> >something is free of any rules which state which one must happen, I

> >have already replied:

> >-----------

> >With regards to how we would know whether something was free from

> >rules which state which one must happen, you would have no evidence

> >other than your own experience of being able to will which one

> >happens. Though if we ever detected the will influencing behaviour,

> >which I doubt we will, then you would also have the evidence of there

> >being no discernable rules. Any rule shown to the being, the being

> >could either choose to adhere to, or break, in the choice they made.

> >-----------

>

> We never have evidence of the absence of discernable rules. We have

> the absence of DISCERNED rules. You equate things that should not be

> equated.

>

> >Though I'd like to change this to: that for any rule you had for which

> >choice they were going to make, say pressing the black or white button

> >1000 times, they could adhere to it or not, this could include any

> >random distribution rule.

>

> Of course, the rule could be to consult a quantum random number

> generator. So, how can we discern that there is no rule?

>

>

>

> >The reason for the change is that on a robot it could always have a

> >part which gave the same response as it was shown, or to change the

> >response so as not to be the same as that which was shown to it. So I

> >have removed the prediction from being shown.

>

> So the absence of a rule is completely unprovable.

>

>

>

> >I have defined free will for you:

>

> >Free will, is for example, that between 2 options, both were possible,

> >and you were able to consciously influence which would happen.

>

> >and added:

>

> >Regarding the 'free will', it is free to choose either option A or B.

> >It is free from any rules which state which one must happen. As I said

> >both are possible, before it was consciously willed which would

> >happen.

>

> But as you just said, we can't show that there is no rule being

> applied.

>

>

>

> >We can possibly never show that the will was free from any rules, as

> >how would we know there weren't rules that we hadn't discerned.

>

> Indeed.

>

>

>

> >In the robot example above, it can be shown that the behaviour was not

> >influenced by any conscious experiences, and thus it did not have free

> >will.

>

> That's what you think. But how do you show it was not influenced by

> conscious experiences. A consciousness might be involved in rule

> selection, and might do it so consistently while you are watching,

> that it looks lawful.

>

> > It has also been explained to you why it is implausible that we

> >don't consciously influence our behaviour (otherwise it would have to

> >be coincidental that the conscious experience the 'meat machine'

> >expressed existed).

>

> No, the conscious influence could be to keep the brain applying the

> rules as best it can. If the brain was saturated with alcohol, it

> might have a tough time, for example.

 

You said:

 

--------

 

 

I'd still like your response to this question.

> >> >> How do we know whether something is free of any rules which state

> >> >> which one must happen?

 

 

--------

 

Well this was my response:

--------

Regarding free will, and your question as to how we know whether

something is free of any rules which state which one must happen, I

have already replied:

-----------

With regards to how we would know whether something was free from

rules which state which one must happen, you would have no evidence

other than your own experience of being able to will which one

happens. Though if we ever detected the will influencing behaviour,

which I doubt we will, then you would also have the evidence of there

being no discernable rules. Any rule shown to the being, the being

could either choose to adhere to, or break, in the choice they made.

-----------

 

Though I'd like to change this to: that for any rule you had for which

choice they were going to make, say pressing the black or white button

1000 times, they could adhere to it or not, this could include any

random distribution rule.

 

The reason for the change is that on a robot it could always have a

part which gave the same response as it was shown, or to change the

response so as not to be the same as that which was shown to it. So I

have removed the prediction from being shown.

 

I have defined free will for you:

 

Free will, is for example, that between 2 options, both were possible,

and you were able to consciously influence which would happen. and

added:

 

Regarding the 'free will', it is free to choose either option A or B.

It is free from any rules which state which one must happen. As I said

both are possible, before it was consciously willed which would

happen.

--------

 

You were just showing that you weren't bothering to read what was

written.

 

 

With regards to the example and question I gave you:

--------

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that for the input

messages each received, the outputs were those expected by a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.

 

Where was the influence of any conscious experiences in the robot?

--------

 

You replied:

--------

If it is conscious, the influence is in the outputs, just like ours

are.

--------

 

Where is the influence in the outputs, if all the nodes are just

giving the same outputs as they would have singly in the lab given the

same inputs? Are you suggesting that the outputs given by a single

node in the lab were consciously influenced, or that the influence is

such that the outputs are the same as without the influence?
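
The log-verification procedure described in the example can be sketched in code. This is a hypothetical illustration only: the node's transfer function, the log format, and the field names below are invented stand-ins, since the example does not specify them.

```python
import hashlib

def reference_node(node_id, msg):
    # Stand-in for the single node tested in the lab: a pure,
    # deterministic mapping from (node, input message) to an output.
    return hashlib.sha256(f"{node_id}:{msg}".encode()).hexdigest()[:8]

def verify_logs(entries):
    # Return any log entries whose recorded output differs from what
    # the lone lab node produces for the same input -- i.e. any
    # "unexplained messages" in the sense of the example.
    return [e for e in entries
            if e["out"] != reference_node(e["dst"], e["msg"])]

log = [
    {"src": 1, "dst": 2, "t": 0.001, "msg": "a", "out": reference_node(2, "a")},
    # a feedback loop: source node equals destination node
    {"src": 2, "dst": 2, "t": 0.002, "msg": "b", "out": reference_node(2, "b")},
]
print(verify_logs(log))  # [] -- every output accounted for by the node rules
```

An empty result is exactly the situation the question describes: nothing in the logs that is not already explained by each node's own input-output rule.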

Guest Jim07D7
Posted

someone2 <glenn.spigel2@btinternet.com> said:

 

>

>You said:

>

>--------

>

>

>I'd still like your response to this question.

>

>> >> >> How do we know whether something is free of any rules which state

>> >> >> which one must happen?

>

>

>--------

>

>Well this was my response:

>--------

>Regarding free will, and your question as to how we know whether

>something is free of any rules which state which one must happen, I

>have already replied:

>-----------

>With regards to how we would know whether something was free from

>rules which state which one must happen, you would have no evidence

>other than your own experience of being able to will which one

>happens. Though if we ever detected the will influencing behaviour,

>which I doubt we will, then you would also have the evidence of there

>being no discernable rules. Any rule shown to the being, the being

>could either choose to adhere to, or break, in the choice they made.

>-----------

>

>Though I'd like to change this to: that for any rule you had for which

>choice they were going to make, say pressing the black or white button

>1000 times, they could adhere to it or not, this could include any

>random distribution rule.

>

>The reason for the change is that on a robot it could always have a

>part which gave the same response as it was shown, or to change the

>response so as not to be the same as that which was shown to it. So I

>have removed the prediction from being shown.

>

>I have defined free will for you:

>

>Free will, is for example, that between 2 options, both were possible,

>and you were able to consciously influence which would happen. and

>added:

>

>Regarding the 'free will', it is free to choose either option A or B.

>It is free from any rules which state which one must happen. As I said

>both are possible, before it was consciously willed which would

>happen.

>--------

>

>You were just showing that you weren't bothering to read what was

>written.

 

I read every word. I did not think of that as an answer. Now, you have

stitched it together for me. Thanks. It still does not give me

observable behaviors you would regard as indicating

consciousness. Being "free" is hidden in the hardware.

<....>

>

>Where is the influence in the outputs, if all the nodes are just

>giving the same outputs as they would have singly in the lab given the

>same inputs? Are you suggesting that the outputs given by a single

>node in the lab were consciously influenced, or that the influence is

>such that the outputs are the same as without the influence?

>

You have forgotten my definition of influence. If I may simplify it.

Something is influential if it affects an outcome. If a single node in

a lab gave all the same responses as you do, over weeks of testing,

would you regard it as conscious? Why or why not?

 

I think we are closing in on your answer.

Guest someone2
Posted

On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

> someone2 <glenn.spig...@btinternet.com> said:

>

>

>

>

>

>

>

> >You said:

>

> >--------

> >

>

> >I'd still like your response to this question.

>

> >> >> >> How do we know whether something is free of any rules which state

> >> >> >> which one must happen?

>

> >

> >--------

>

> >Well this was my response:

> >--------

> >Regarding free will, and your question as to how we know whether

> >something is free of any rules which state which one must happen, I

> >have already replied:

> >-----------

> >With regards to how we would know whether something was free from

> >rules which state which one must happen, you would have no evidence

> >other than your own experience of being able to will which one

> >happens. Though if we ever detected the will influencing behaviour,

> >which I doubt we will, then you would also have the evidence of there

> >being no discernable rules. Any rule shown to the being, the being

> >could either choose to adhere to, or break, in the choice they made.

> >-----------

>

> >Though I'd like to change this to: that for any rule you had for which

> >choice they were going to make, say pressing the black or white button

> >1000 times, they could adhere to it or not, this could include any

> >random distribution rule.

>

> >The reason for the change is that on a robot it could always have a

> >part which gave the same response as it was shown, or to change the

> >response so as not to be the same as that which was shown to it. So I

> >have removed the prediction from being shown.

>

> >I have defined free will for you:

>

> >Free will, is for example, that between 2 options, both were possible,

> >and you were able to consciously influence which would happen. and

> >added:

>

> >Regarding the 'free will', it is free to choose either option A or B.

> >It is free from any rules which state which one must happen. As I said

> >both are possible, before it was consciously willed which would

> >happen.

> >--------

>

> >You were just showing that you weren't bothering to read what was

> >written.

>

> I read every word. I did not think of that as an answer. Now, you have

> stitched it together for me. Thanks. It still does not give me

> observable behaviors you would regard as indicating

> consciousness. Being "free" is hidden in the hardware.

> <....>

>

> >Where is the influence in the outputs, if all the nodes are just

> >giving the same outputs as they would have singly in the lab given the

> >same inputs. Are you suggesting that the outputs given by a single

> >node in the lab were consciously influenced, or that the influence is

> >such that the outputs are the same as without the influence?

>

> You have forgotten my definition of influence. If I may simplify it.

> Something is influential if it affects an outcome. If a single node in

> a lab gave all the same responses as you do, over weeks of testing,

> would you regard it as conscious? Why or why not?

>

> I think we are closing in on your answer.

>

 

That wasn't stitched together, that was a cut and paste of what I

wrote in my last response, before it was broken up by your responses.

If you hadn't snipped the history you would have seen that.

 

I'll just put back in the example we are talking about, so there is

context to the question:

 

--------

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that for the input

messages each received, the outputs were those expected by a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.

 

Where was the influence of any conscious experiences in the robot?

--------

 

To which you replied:

--------

If it is conscious, the influence is in the outputs, just like ours

are.

--------

 

So rewording my last question to you (seen above in the history),

taking into account what you regard as influence:

 

Where is the effect of conscious experiences in the outputs, if all

the nodes are just giving the same outputs as they would have singly in

the lab given the same inputs? Are you suggesting that the outputs

given by a single node in the lab were consciously affected, or that

the effect is such that the outputs are the same as without the

effect?

 

I notice you avoided answering the question. Perhaps you would

care to directly answer this.

Guest James Norris
Posted

On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > someone2 <glenn.spig...@btinternet.com> said:

>

> > >You said:

>

> > >--------

> > >

>

> > >I'd still like your response to this question.

>

> > >> >> >> How do we know whether something is free of any rules which state

> > >> >> >> which one must happen?

>

> > >

> > >--------

>

> > >Well this was my response:

> > >--------

> > >Regarding free will, and your question as to how we know whether

> > >something is free of any rules which state which one must happen, I

> > >have already replied:

> > >-----------

> > >With regards to how we would know whether something was free from

> > >rules which state which one must happen, you would have no evidence

> > >other than your own experience of being able to will which one

> > >happens. Though if we ever detected the will influencing behaviour,

> > >which I doubt we will, then you would also have the evidence of there

> > >being no decernable rules. Any rule shown to the being, the being

> > >could either choose to adhere to, or break, in the choice they made.

> > >-----------

>

> > >Though I'd like to change this to: that for any rule you had for which

> > >choice they were going to make, say pressing the black or white button

> > >1000 times, they could adhere to it or not, this could include any

> > >random distribution rule.

>

> > >The reason for the change is that on a robot it could always have a

> > >part which gave the same response as it was shown, or to change the

> > >response so as to not be the same as which was shown to it. So I have

> > >removed it prediction from being shown.

>

> > >I have defined free will for you:

>

> > >Free will, is for example, that between 2 options, both were possible,

> > >and you were able to consciously influence which would happen. and

> > >added:

>

> > >Regarding the 'free will', it is free to choose either option A or B.

> > >It is free from any rules which state which one must happen. As I said

> > >both are possible, before it was consciously willed which would

> > >happen.

> > >--------

>

> > >You were just showing that you weren't bothering to read what was

> > >written.

>

> > I read every word. I did not think of that as an answer. Now, you have

> > stitched it together for me. Thanks. It still does not give me

> > observable behaviors you would regard as indicating

> > consciousness. Being "free" is hidden in the hardware.

> > <....>

>

> > >Where is the influence in the outputs, if all the nodes are just

> > >giving the same outputs as they would have singly in the lab given the

> > >same inputs. Are you suggesting that the outputs given by a single

> > >node in the lab were consciously influenced, or that the influence is

> > >such that the outputs are the same as without the influence?

>

> > You have forgotten my definition of influence. If I may simplify it.

> > Something is influential if it affects an outcome. If a single node in

> > a lab gave all the same responses as you do, over weeks of testing,

> > would you regard it as conscious? Why or why not?

>

> > I think we are closing in on your answer.

>

> That wasn't stitched together, that was a cut and paste of what I

> wrote in my last response, before it was broken up by your responses.

> If you hadn't snipped the history you would have seen that.

>

> I'll just put back in the example we are talking about, so there is

> context to the question:

>

> --------

> Supposing there was an artificial neural network, with a billion more

> nodes than you had neurons in your brain, and a very 'complicated'

> configuration, which drove a robot. The robot, due to its behaviour

> (being a Turing Equivalent), caused some atheists to claim it was

> consciously experiencing. Now supposing each message between each node

> contained additional information such as source node, destination node

> (which could be the same as the source node, for feedback loops), the

> time the message was sent, and the message. Each node wrote out to a

> log on receipt of a message, and on sending a message out. Now after

> an hour of the atheist communicating with the robot, the logs could be

> > examined by a bank of computers, verifying that for the input

> messages each received, the outputs were those expected by a single

> > node in a lab given the same inputs. They could also verify that no

> unexplained messages appeared.

>

> Where was the influence of any conscious experiences in the robot?

> --------

>

> To which you replied:

> --------

> If it is conscious, the influence is in the outputs, just like ours

> are.

> --------

>

> So rewording my last question to you (seen above in the history),

> taking into account what you regard as influence:

>

> Where is the effect of conscious experiences in the outputs, if all

> the nodes are just giving the same outputs as they would have singly in

> the lab given the same inputs? Are you suggesting that the outputs

> given by a single node in the lab were consciously affected, or that

> the effect is such that the outputs are the same as without the

> effect?

 

OK, you've hypothetically monitored all the data going on in the robot

mechanism, and then you say that each bit of it is not conscious, so

the overall mechanism can't be conscious? But each neuron in a brain

is not conscious, and analysis of the brain of a human would produce

similar data, and we know that we are conscious. We are reasonably

sure that a computer controlled robot would not be conscious, so we

can deduce that in the case of human brains 'the whole is more than

the sum of the parts' for some reason. So the actual data from a

brain, if you take into account the physical arrangement and timing

and the connections to the outside world through the senses, would in

fact correspond to consciousness.

 

Consider the million pixels in a 1000x1000 pixel digitized picture -

all are either red or green or blue, and each pixel is just a dot, not

a picture. But from a human perspective of the whole, not only are

there more than just three colours, the pixels can form meaningful

shapes that are related to reality. Similarly, the arrangement of

bioelectric firings in the brain cause meaningful shapes in the brain

which we understand as related to reality, which we call

consciousness, even though each firing is not itself conscious.
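
The pixel analogy can be made concrete with a toy sketch (the image and the "shape" in it are invented for illustration): no individual pixel is a diagonal line, yet the arrangement as a whole is one.

```python
R, G = "red", "green"

# A 4x4 image: each pixel is a single colour and nothing more.
image = [
    [R, G, G, G],
    [G, R, G, G],
    [G, G, R, G],
    [G, G, G, R],
]

# The "diagonal line" exists only at the level of the whole
# arrangement, not in any one pixel.
has_diagonal = all(image[i][i] == R for i in range(len(image)))
print(has_diagonal)  # True
```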

 

In your hypothetical experiment, the manufactured computer robot might

in fact have been conscious, but the individual inputs and outputs of

each node in its neural net would not show it. To understand why an

organism might or might not be conscious, you have to consider larger

aspects of the organism. The biological brain and senses have evolved

over billions of years to become aware of external reality, because

awareness of reality is an evolutionary survival trait. The robot

brain you are considering is an attempt to duplicate that awareness,

and to do that, you have to understand how evolution achieved

awareness in biological organisms. At this point in the argument, you

usually state your personal opinion that evolution does not explain

consciousness, because of the soul and God-given free will, your

behaviour goes into a loop and you start again from earlier.

 

The 'free-will' of the organism is irrelevant to the robot/brain

scenario - it applies to conscious beings of any sort. In no way does

a human have total free will to do or be whatever they feel like

(flying, telepathy, infinite wealth etc), so at best we only have

limited freedom to choose our own future. At the microscopic level of

the neurons in a brain, the events are unpredictable (because the

complexity of the brain and the sense information it receives is

sufficient for Chaos Theory to apply), and therefore random. We are

controlled by random events, that is true, and we have no free will in

that sense. However, we have sufficient understanding of free will to

be able to form a moral code based on the notion. You can blame or

congratulate your ancestors, or the society you live in, for your own

behaviour if you want to. But if you choose to believe you have free

will, you can lead your life as though you were responsible for your

own actions, and endeavour to have your own effect on the future. If

you want to, you can abnegate responsibility, and just 'go with the

flow', by tossing coins and consulting the I Ching, as some people do,

but if you do that to excess you would obviously not be in control of

your own life, and you would eventually not be allowed to vote in

elections, or have a bank account (according to some legal definitions

of the requirements for these adult privileges). But you would

clearly have more free will if you chose to believe you had free will

than if you chose to believe you did not. You could even toss a coin

to decide whether or not to behave in future as though you had free

will (rather than just going with the flow all the time and never

making a decision), and your destiny would be controlled by a single

random coin toss, that you yourself had initiated.

Guest someone2
Posted

On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

> On Jul 7, 12:39?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

>

>

>

>

> > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > >You said:

>

> > > >--------

> > > >

>

> > > >I'd still like your response to this question.

>

> > > >> >> >> How do we know whether something is free of any rules which state

> > > >> >> >> which one must happen?

>

> > > >

> > > >--------

>

> > > >Well this was my response:

> > > >--------

> > > >Regarding free will, and your question as to how we know whether

> > > >something is free of any rules which state which one must happen, I

> > > >have already replied:

> > > >-----------

> > > >With regards to how we would know whether something was free from

> > > >rules which state which one must happen, you would have no evidence

> > > >other than your own experience of being able to will which one

> > > >happens. Though if we ever detected the will influencing behaviour,

> > > >which I doubt we will, then you would also have the evidence of there

> > > >being no discernable rules. Any rule shown to the being, the being

> > > >could either choose to adhere to, or break, in the choice they made.

> > > >-----------

>

> > > >Though I'd like to change this to: that for any rule you had for which

> > > >choice they were going to make, say pressing the black or white button

> > > >1000 times, they could adhere to it or not, this could include any

> > > >random distribution rule.

>

> > > >The reason for the change is that on a robot it could always have a

> > > >part which gave the same response as it was shown, or to change the

> > > >response so as not to be the same as that which was shown to it. So I

> > > >have removed the prediction from being shown.

>

> > > >I have defined free will for you:

>

> > > >Free will, is for example, that between 2 options, both were possible,

> > > >and you were able to consciously influence which would happen. and

> > > >added:

>

> > > >Regarding the 'free will', it is free to choose either option A or B.

> > > >It is free from any rules which state which one must happen. As I said

> > > >both are possible, before it was consciously willed which would

> > > >happen.

> > > >--------

>

> > > >You were just showing that you weren't bothering to read what was

> > > >written.

>

> > > I read every word. I did not think of that as an answer. Now, you have

> > > stitched it together for me. Thanks. It still does not give me

> > > observable behaviors you would regard as indicating

> > > consciousness. Being "free" is hidden in the hardware.

> > > <....>

>

> > > >Where is the influence in the outputs, if all the nodes are just

> > > >giving the same outputs as they would have singly in the lab given the

> > > >same inputs. Are you suggesting that the outputs given by a single

> > > >node in the lab were consciously influenced, or that the influence is

> > > >such that the outputs are the same as without the influence?

>

> > > You have forgotten my definition of influence. If I may simplify it.

> > > Something is influential if it affects an outcome. If a single node in

> > > a lab gave all the same responses as you do, over weeks of testing,

> > > would you regard it as conscious? Why or why not?

>

> > > I think we are closing in on your answer.

>

> > That wasn't stitched together, that was a cut and paste of what I

> > wrote in my last response, before it was broken up by your responses.

> > If you hadn't snipped the history you would have seen that.

>

> > I'll just put back in the example we are talking about, so there is

> > context to the question:

>

> > --------

> > Supposing there was an artificial neural network, with a billion more

> > nodes than you had neurons in your brain, and a very 'complicated'

> > configuration, which drove a robot. The robot, due to its behaviour

> > (being a Turing Equivalent), caused some atheists to claim it was

> > consciously experiencing. Now supposing each message between each node

> > contained additional information such as source node, destination node

> > (which could be the same as the source node, for feedback loops), the

> > time the message was sent, and the message. Each node wrote out to a

> > log on receipt of a message, and on sending a message out. Now after

> > an hour of the atheist communicating with the robot, the logs could be

> > examined by a bank of computers, verifying that for the input

> > messages each received, the outputs were those expected by a single

> > node in a lab given the same inputs. They could also verify that no

> > unexplained messages appeared.

>

> > Where was the influence of any conscious experiences in the robot?

> > --------

>

> > To which you replied:

> > --------

> > If it is conscious, the influence is in the outputs, just like ours

> > are.

> > --------

>

> > So rewording my last question to you (seen above in the history),

> > taking into account what you regard as influence:

>

> > Where is the affect of conscious experiences in the outputs, if all

> > the nodes are just giving the same outputs as they would have singly in

> > the lab given the same inputs. Are you suggesting that the outputs

> > given by a single node in the lab were consciously affected, or that

> > the affect is such that the outputs are the same as without the

> > affect?

>

> OK, you've hypothetically monitored all the data going on in the robot

> mechanism, and then you say that each bit of it is not conscious, so

> the overall mechanism can't be conscious? But each neuron in a brain

> is not conscious, and analysis of the brain of a human would produce

> similar data, and we know that we are conscious. We are reasonably

> sure that a computer controlled robot would not be conscious, so we

> can deduce that in the case of human brains 'the whole is more than

> the sum of the parts' for some reason. So the actual data from a

> brain, if you take into account the physical arrangement and timing

> and the connections to the outside world through the senses, would in

> fact correspond to consciousness.

>

> Consider the million pixels in a 1000x1000 pixel digitized picture -

> all are either red or green or blue, and each pixel is just a dot, not

> a picture. But from a human perspective of the whole, not only are

> there more than just three colours, the pixels can form meaningful

> shapes that are related to reality. Similarly, the arrangement of

> bioelectric firings in the brain cause meaningful shapes in the brain

> which we understand as related to reality, which we call

> consciousness, even though each firing is not itself conscious.

>

> In your hypothetical experiment, the manufactured computer robot might

> in fact have been conscious, but the individual inputs and outputs of

> each node in its neural net would not show it. To understand why an

> organism might or might not be conscious, you have to consider larger

> aspects of the organism. The biological brain and senses have evolved

> over billions of years to become aware of external reality, because

> awareness of reality is an evolutionary survival trait. The robot

> brain you are considering is an attempt to duplicate that awareness,

> and to do that, you have to understand how evolution achieved

> awareness in biological organisms. At this point in the argument, you

> usually state your personal opinion that evolution does not explain

> consciousness, because of the soul and God-given free will, your

> behaviour goes into a loop and you start again from earlier.

>

> The 'free-will' of the organism is irrelevant to the robot/brain

> scenario - it applies to conscious beings of any sort. In no way does

> a human have total free will to do or be whatever they feel like

> (flying, telepathy, infinite wealth etc), so at best we only have

> limited freedom to choose our own future. At the microscopic level of

> the neurons in a brain, the events are unpredictable (because the

> complexity of the brain and the sense information it receives is

> sufficient for Chaos Theory to apply), and therefore random. We are

> controlled by random events, that is true, and we have no free will in

> that sense. However, we have sufficient understanding of free will to

> be able to form a moral code based on the notion. You can blame or

> congratulate your ancestors, or the society you live in, for your own

> behaviour if you want to. But if you choose to believe you have free

> will, you can lead your life as though you were responsible for your

> own actions, and endeavour to have your own affect on the future. If

> you want to, you can abnegate responsibility, and just 'go with the

> flow', by tossing coins and consulting the I Ching, as some people do,

> but if you do that to excess you would obviously not be in control of

> your own life, and you would eventually not be allowed to vote in

> elections, or have a bank account (according to some legal definitions

> of the requirements for these adult privileges). But you would

> clearly have more free will if you chose to believe you had free will

> than if you chose to believe you did not. You could even toss a coin

> to decide whether or not to behave in future as though you had free

> will (rather than just going with the flow all the time and never

> making a decision), and your destiny would be controlled by a single

> random coin toss, that you yourself had initiated.

>

 

You haven't addressed the simple issue of where is the influence/

affect of conscious experiences in the outputs, if all the nodes are

just giving the same outputs as they would have singly in

the lab given the same inputs. Are you suggesting that the outputs

given by a single node in the lab were consciously influenced/

affected, or that the affect/influence is such that the outputs are

the same as without the affect (i.e. there is no affect/influence)?

 

It is a simple question, why do you avoid answering it?
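For what it's worth, the log check described in the example can be sketched in a few lines. Everything here is hypothetical illustration: the (time, source, destination, message) tuple format is taken from the example, but the node ids and the stand-in node behaviour are invented, and each node is treated as a pure function of its latest input (real nodes would be stateful, which this sketch ignores).

```python
# A minimal sketch of the log check from the example, under two assumed
# simplifications: messages are logged as (time, source, destination,
# message) tuples, and each node emits exactly one output per input.

def lab_node(node_id, message):
    # Stand-in single-node behaviour "in the lab": append the node's id.
    return message + ":" + node_id

def verify(log):
    """Each logged send must match what the same node, given the same
    input singly in the lab, would produce; anything else counts as an
    'unexplained message'."""
    pending = {}  # node -> output predicted by the lab replay
    for time, src, dst, msg in log:
        if src in pending and pending.pop(src) != msg:
            return False  # logged output differs from the lab prediction
        pending[dst] = lab_node(dst, msg)
    return True

log = [
    (0, "input", "n1", "x"),
    (1, "n1", "n2", "x:n1"),
    (2, "n2", "n1", "x:n1:n2"),
]
print(verify(log))  # True: every logged output matches the single-node replay
```

On this picture, the question being asked is whether anything in such a log could look any different in virtue of conscious experience, given that every entry is already fixed by the single-node behaviour.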

Guest someone2
Posted

On 7 Jul, 02:23, someone2 <glenn.spig...@btinternet.com> wrote:

> On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > <...>

>

> You haven't addressed the simple issue of where is the influence/

> affect of conscious experiences in the outputs, if all the nodes are

> just giving the same outputs as they would have singly in

> the lab given the same inputs. Are you suggesting that the outputs

> given by a single node in the lab were consciously influenced/

> affected, or that the affect/influence is such that the outputs are

> the same as without the affect (i.e. there is no affect/influence)?

>

> It is a simple question, why do you avoid answering it?

 

Or was your answer that consciousness would not affect behaviour, as

implied in: "...the manufactured computer robot might in fact have

been conscious, but the individual inputs and outputs of each node in

its neural net would not show it."

Guest James Norris
Posted

On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > <...>

>

> You haven't addressed the simple issue of where is the influence/

> affect of conscious experiences in the outputs, if all the nodes are

> just giving the same outputs as they would have singly in

> the lab given the same inputs. Are you suggesting that the outputs

> given by a single node in the lab were consciously influenced/

> affected, or that the affect/influence is such that the outputs are

> the same as without the affect (i.e. there is no affect/influence)?

>

> It is a simple question, why do you avoid answering it?

 

It is a question which you hadn't asked me, so you can hardly claim

that I was avoiding it! The simple answer is no, I am not suggesting

any such thing. Apart from your confusion of the terms effect and

affect, each sentence you write (the output from you as a node) is

meaningless and does not exhibit consciousness. However, the

paragraph as a whole exhibits consciousness, because it has been

posted and arrived at my computer. Try to be more specific in your

writing, and give some examples, and that will help you to communicate

what you mean.

 

Some of the outputs of the conscious organism (eg speech) are

sometimes directly affected by conscious experiences, other outputs

(eg shit) are only indirectly affected by conscious experiences, such

as previous hunger and eating. If a conscious biological brain (in a

laboratory experiment) was delivered the same inputs to identical

neurons as in the earlier recorded hour of data from another brain,

the experiences would be the same (assuming the problem of

constructing an identical arrangement of neurons in the receiving

brain to receive the input data had been solved). Do you disagree?

Guest someone2
Posted

On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

> On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > On Jul 7, 12:39?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > >You said:

>

> > > > > >--------

> > > > > >

>

> > > > > >I'd still like your response to this question.

>

> > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > >> >> >> which one must happen?

>

> > > > > >

> > > > > >--------

>

> > > > > >Well this was my response:

> > > > > >--------

> > > > > >Regarding free will, and you question as to how do we know whether

> > > > > >something is free of any rules which state which one must happen, I

> > > > > >have already replied:

> > > > > >-----------

> > > > > >With regards to how we would know whether something was free from

> > > > > >rules which state which one must happen, you would have no evidence

> > > > > >other than your own experience of being able to will which one

> > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > >being no decernable rules. Any rule shown to the being, the being

> > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > >-----------

>

> > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > >choice they were going to make, say pressing the black or white button

> > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > >random distribution rule.

>

> > > > > >The reason for the change is that on a robot it could always have a

> > > > > >part which gave the same response as it was shown, or to change the

> > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > >removed it prediction from being shown.

>

> > > > > >I have defined free will for you:

>

> > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > >and you were able to consciously influence which would happen. and

> > > > > >added:

>

> > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > >It is free from any rules which state which one must happen. As I said

> > > > > >both are possible, before it was consciously willed which would

> > > > > >happen.

> > > > > >--------

>

> > > > > >You were just showing that you weren't bothering to read what was

> > > > > >written.

>

> > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > stitched it together for me. Thanks. It still does not give me

> > > > > observable behaviors you would regard as regard as indicating

> > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > <....>

>

> > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > >such that the outputs are the same as without the influence?

>

> > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > Something is influential if it affects an outcome. If a single node in

> > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > would you regard it as conscious? Why or why not?

>

> > > > > I think we are closing in on your answer.

>

> > > > That wasn't stitched together, that was a cut and paste of what I

> > > > wrote in my last response, before it was broken up by your responses.

> > > > If you hadn't snipped the history you would have seen that.

>

> > > > I'll just put back in the example we are talking about, so there is

> > > > context to the question:

>

> > > > --------

> > > > Supposing there was an artificial neural network, with a billion more

> > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > consciously experiencing. Now supposing each message between each node

> > > > contained additional information such as source node, destination node

> > > > (which could be the same as the source node, for feedback loops), the

> > > > time the message was sent, and the message. Each node wrote out to a

> > > > log on receipt of a message, and on sending a message out. Now after

> > > > an hour of the atheist communicating with the robot, the logs could be

> > > > examined by a bank of computers, verifying that for the input

> > > > messages each received, the outputs were those expected by a single

> > > > node in a lab given the same inputs. They could also verify that no

> > > > unexplained messages appeared.

>

> > > > Where was the influence of any conscious experiences in the robot?

> > > > --------

>

> > > > To which you replied:

> > > > --------

> > > > If it is conscious, the influence is in the outputs, just like ours

> > > > are.

> > > > --------

>

> > > > So rewording my last question to you (seen above in the history),

> > > > taking into account what you regard as influence:

>

> > > > Where is the affect of conscious experiences in the outputs, if all

> > > > the nodes are just giving the same outputs as they would have singly in

> > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > given by a single node in the lab were consciously affected, or that

> > > > the affect is such that the outputs are the same as without the

> > > > affect?

>

> > > OK, you've hypothetically monitored all the data going on in the robot

> > > mechanism, and then you say that each bit of it is not conscious, so

> > > the overall mechanism can't be conscious? But each neuron in a brain

> > > is not conscious, and analysis of the brain of a human would produce

> > > similar data, and we know that we are conscious. We are reasonably

> > > sure that a computer controlled robot would not be conscious, so we

> > > can deduce that in the case of human brains 'the whole is more than

> > > the sum of the parts' for some reason. So the actual data from a

> > > brain, if you take into account the physical arrangement and timing

> > > and the connections to the outside world through the senses, would in

> > > fact correspond to consciousness.

>

> > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > all are either red or green or blue, and each pixel is just a dot, not

> > > a picture. But from a human perspective of the whole, not only are

> > > there more than just three colours, the pixels can form meaningful

> > > shapes that are related to reality. Similarly, the arrangement of

> > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > which we understand as related to reality, which we call

> > > consciousness, even though each firing is not itself conscious.

>

> > > In your hypothetical experiment, the manufactured computer robot might

> > > in fact have been conscious, but the individual inputs and outputs of

> > > each node in its neural net would not show it. To understand why an

> > > organism might or might not be conscious, you have to consider larger

> > > aspects of the organism. The biological brain and senses have evolved

> > > over billions of years to become aware of external reality, because

> > > awareness of reality is an evolutionary survival trait. The robot

> > > brain you are considering is an attempt to duplicate that awareness,

> > > and to do that, you have to understand how evolution achieved

> > > awareness in biological organisms. At this point in the argument, you

> > > usually state your personal opinion that evolution does not explain

> > > consciousness, because of the soul and God-given free will, your

> > > behaviour goes into a loop and you start again from earlier.

>

> > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > scenario - it applies to conscious beings of any sort. In no way does

> > > a human have total free will to do or be whatever they feel like

> > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > limited freedom to choose our own future. At the microscopic level of

> > > the neurons in a brain, the events are unpredictable (because the

> > > complexity of the brain and the sense information it receives is

> > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > controlled by random events, that is true, and we have no free will in

> > > that sense. However, we have sufficient understanding of free will to

> > > be able to form a moral code based on the notion. You can blame or

> > > congratulate your ancestors, or the society you live in, for your own

> > > behaviour if you want to. But if you choose to believe you have free

> > > will, you can lead your life as though you were responsible for your

> > > own actions, and endeavour to have your own affect on the future. If

> > > you want to, you can abnegate responsibility, and just 'go with the

> > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > but if you do that to excess you would obviously not be in control of

> > > your own life, and you would eventually not be allowed to vote in

> > > elections, or have a bank account (according to some legal definitions

> > > of the requirements for these adult privileges). But you would

> > > clearly have more free will if you chose to believe you had free will

> > > than if you chose to believe you did not. You could even toss a coin

> > > to decide whether or not to behave in future as though you had free

> > > will (rather than just going with the flow all the time and never

> > > making a decision), and your destiny would be controlled by a single

> > > random coin toss, that you yourself had initiated.

>

> > You haven't addressed the simple issue of where is the influence/

> > affect of conscious experiences in the outputs, if all the nodes are

> > just giving the same outputs as they would have singly in

> > the lab given the same inputs. Are you suggesting that the outputs

> > given by a single node in the lab were consciously influenced/

> > affected, or that the affect/influence is such that the outputs are

> > the same as without the affect (i.e. there is no affect/influence)?

>

> > It is a simple question, why do you avoid answering it?

>

> It is a question which you hadn't asked me, so you can hardly claim

> that I was avoiding it! The simple answer is no, I am not suggesting

> any such thing. Apart from your confusion of the terms effect and

> affect, each sentence you write (the output from you as a node) is

> meaningless and does not exhibit consciousness. However, the

> paragraph as a whole exhibits consciousness, because it has been

> posted and arrived at my computer. Try to be more specific in your

> writing, and give some examples, and that will help you to communicate

> what you mean.

>

> Some of the outputs of the conscious organism (eg speech) are

> sometimes directly affected by conscious experiences, other outputs

> (eg shit) are only indirectly affected by conscious experiences, such

> as previous hunger and eating. If a conscious biological brain (in a

> laboratory experiment) was delivered the same inputs to identical

> neurons as in the earlier recorded hour of data from another brain,

> the experiences would be the same (assuming the problem of

> constructing an identical arrangement of neurons in the receiving

> brain to receive the input data had been solved). Do you disagree?

 

It was a post you had replied to, and you hadn't addressed the

question. You still haven't addressed it. You are just deflecting

by asking me a question instead, which raises the question of why you

don't seem to be able to face answering.

 

I'll repeat the question for you. Regarding the robot, where is the

influence/ affect of conscious experiences in the outputs, if all the

nodes are just giving the same outputs as they would have singly in the

lab given the same inputs. Are you suggesting that the outputs given

by a single node in the lab were consciously influenced/ affected, or

that the affect/influence is such that the outputs are the same as

without the affect (i.e. there is no affect/influence)?
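The logging-and-verification thought experiment described above (each inter-node message logged with source, destination, timestamp and payload, then each node's logged outputs checked against what the same node gives singly in the lab) can be sketched in code. This is only a toy illustration under stated assumptions: `node_response` is a stand-in deterministic rule, not a claim about any real neural network.

```python
# Toy sketch of the robot thought experiment: log every message as
# (src, dst, t, input, output), then replay each logged input through an
# isolated node "in the lab" and verify the logged output matches.

def node_response(node_id, payload):
    """Hypothetical deterministic input/output law of a single node."""
    return (payload * 31 + node_id) % 97

def run_network(links, initial, steps):
    """Run the toy network, logging every message that passes between nodes."""
    log = []
    payload, t = initial, 0
    for _ in range(steps):
        for src, dst in links:
            out = node_response(src, payload)
            log.append((src, dst, t, payload, out))
            payload = out
            t += 1
    return log

def verify(log):
    """Check each logged output against the isolated single-node law."""
    return all(node_response(src, payload) == out
               for src, dst, t, payload, out in log)

log = run_network(links=[(0, 1), (1, 2), (2, 0)], initial=5, steps=4)
assert verify(log)  # no unexplained messages; every output is law-governed
```

The sketch makes the point of the question concrete: the verification passes precisely because each node's output is fully fixed by its inputs, which is what leaves no room in the logs for any additional influence.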

Guest James Norris
Posted

On Jul 7, 2:35 am, someone2 <glenn.spig...@btinternet.com> wrote:

> <....>
> Or was your answer that consciousness would not affect behaviour, as

> implied in: "...the manufactured computer robot might in fact have

> been conscious, but the individual inputs and outputs of each node in

> its neural net would not show it."

 

Your question was not clear enough for me to give you an opinion. You

are focusing attention on analysis of the stored I/O data transactions

within the neural net. Sense data (eg from a TV camera) is fed into

the input nodes, the output from these go to the inputs of other

nodes, and so on until eventually the network output nodes trigger a

response - the programmed device perhaps indicates that it has 'seen'

something, by sounding an alarm. It is clear that no particular part

of the device is conscious, and there is no reason to suppose that the

overall system is conscious either. I gather (from your rather

elastically-worded phrasing in the preceding 1000 messages in this

thread) that your argument goes on that the same applies to the human

brain - but I disagree with you there as you know, because I reckon

that the biophysical laws of evolution explain why the overall system

is conscious.

 

When I wrote that the robot device 'might in fact have been

conscious', that is in the context of your belief that a conscious

'soul' is superimposed on the biomechanical workings of the brain in a

way that science is unaware of, and you may be correct, I don't know.

But if that were possible, the same might also be occurring in the

computer experiment you described. Perhaps the residual currents in

the wires of the circuit board contain a homeopathic 'memory' of the

cpu and memory workings of the running program, which by some divine

influence is conscious. I think you have been asking other responders

how such a 'soul' could influence the overall behaviour of the

computer.

 

Remembering that computers have been specifically designed so that the

external environment does not affect them, and the most sensitive

components (the program and cpu) are heavily shielded from external

influences so that it is almost impossible for anything other than a

nearby nuclear explosion to affect their behaviour, it is probably not

easy for a computer 'soul' to have that effect. But it might take

only one bit of logic in the cpu to go wrong just once for the whole

program to behave totally differently, and crash or print a wrong

number. The reason that computer memory is so heavily shielded is

that the program is extremely sensitive to influences such as magnetic

and electric fields, and particularly gamma rays, cosmic rays, X-

rays and the like. If your data analysis showed that a couple of

bits in the neural net had in fact 'transcended' the program, and

seemingly changed of their own accord, there would be no reason to

suggest that the computer had a 'soul'. More likely an alpha-particle

had passed through a critical logic gate in the bus controller or

memory chip, affecting the timing or data in a way that led to an

eventual 'glitch'.
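The point above, that one bit of logic going wrong just once can make the whole program behave totally differently, is easy to demonstrate. This is a hedged toy example: the "program" is an arbitrary checksum loop, and the flipped bit stands in for something like an alpha-particle strike on one memory cell.

```python
# One flipped bit in program state changes the eventual output entirely.

def checksum(data):
    """Toy program: fold the bytes into a 16-bit shift-and-xor checksum."""
    acc = 0
    for b in data:
        acc = ((acc << 1) ^ b) & 0xFFFF
    return acc

data = bytearray(b"the quick brown fox")
clean = checksum(data)

data[3] ^= 0x01           # a single bit of 'memory' goes wrong, once
glitched = checksum(data)

assert clean != glitched  # the whole result diverges
```

This is why memory shielding and error-correcting codes matter in practice, and also why a lone anomalous bit in a log is far more plausibly a hardware glitch than anything 'transcending' the program.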

Guest James Norris
Posted

On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > On Jul 7, 2:23?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > On Jul 7, 12:39?am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > >You said:

>

> > > > > > >--------

> > > > > > >

>

> > > > > > >I'd still like your response to this question.

>

> > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > >> >> >> which one must happen?

>

> > > > > > >

> > > > > > >--------

>

> > > > > > >Well this was my response:

> > > > > > >--------

> > > > > > >Regarding free will, and you question as to how do we know whether

> > > > > > >something is free of any rules which state which one must happen, I

> > > > > > >have already replied:

> > > > > > >-----------

> > > > > > >With regards to how we would know whether something was free from

> > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > >other than your own experience of being able to will which one

> > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > >being no decernable rules. Any rule shown to the being, the being

> > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > >-----------

>

> > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > >random distribution rule.

>

> > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > > >removed it prediction from being shown.

>

> > > > > > >I have defined free will for you:

>

> > > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > > >and you were able to consciously influence which would happen. and

> > > > > > >added:

>

> > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > >both are possible, before it was consciously willed which would

> > > > > > >happen.

> > > > > > >--------

>

> > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > >written.

>

> > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > observable behaviors you would regard as regard as indicating

> > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > <....>

>

> > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > I think we are closing in on your answer.

>

> > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > wrote in my last response, before it was broken up by your responses.

> > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > I'll just put back in the example we are talking about, so there is

> > > > > context to the question:

>

> > > > > --------

> > > > > Supposing there was an artificial neural network, with a billion more

> > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > consciously experiencing. Now supposing each message between each node

> > > > > contained additional information such as source node, destination node

> > > > > (which could be the same as the source node, for feedback loops), the

> > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > examined by a bank of computers, varifying, that for the input

> > > > > messages each received, the outputs were those expected by a single

> > > > > node in a lab given the same inputs. They could also varify that no

> > > > > unexplained messages appeared.

>

> > > > > Where was the influence of any conscious experiences in the robot?

> > > > > --------

>

> > > > > To which you replied:

> > > > > --------

> > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > are.

> > > > > --------

>

> > > > > So rewording my last question to you (seen above in the history),

> > > > > taking into account what you regard as influence:

>

> > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > the nodes are justgiving the same outputs as they would have singly in

> > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > given by a single node in the lab were consciously affected, or that

> > > > > the affect is such that the outputs are the same as without the

> > > > > affect?

>

> > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > is not conscious, and analysis of the brain of a human would produce

> > > > similar data, and we know that we are conscious. We are reasonably

> > > > sure that a computer controlled robot would not be conscious, so we

> > > > can deduce that in the case of human brains 'the whole is more than

> > > > the sum of the parts' for some reason. So the actual data from a

> > > > brain, if you take into account the physical arrangement and timing

> > > > and the connections to the outside world through the senses, would in

> > > > fact correspond to consciousness.

>

> > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > a picture. But from a human perspective of the whole, not only are

> > > > there more than just three colours, the pixels can form meaningful

> > > > shapes that are related to reality. Similarly, the arrangement of

> > > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > > which we understand as related to reality, which we call

> > > > consciousness, even though each firing is not itself conscious.

>

> > > > In your hypothetical experiment, the manufactured computer robot might

> > > > in fact have been conscious, but the individual inputs and outputs of

> > > > each node in its neural net would not show it. To understand why an

> > > > organism might or might not be conscious, you have to consider larger

> > > > aspects of the organism. The biological brain and senses have evolved

> > > > over billions of years to become aware of external reality, because

> > > > awareness of reality is an evolutionary survival trait. The robot

> > > > brain you are considering is an attempt to duplicate that awareness,

> > > > and to do that, you have to understand how evolution achieved

> > > > awareness in biological organisms. At this point in the argument, you

> > > > usually state your personal opinion that evolution does not explain

> > > > consciousness, because of the soul and God-given free will, your

> > > > behaviour goes into a loop and you start again from earlier.

>

> > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > a human have total free will to do or be whatever they feel like

> > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > limited freedom to choose our own future. At the microscopic level of

> > > > the neurons in a brain, the events are unpredictable (because the

> > > > complexity of the brain and the sense information it receives is

> > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > controlled by random events, that is true, and we have no free will in

> > > > that sense. However, we have sufficient understanding of free will to

> > > > be able to form a moral code based on the notion. You can blame or

> > > > congratulate your ancestors, or the society you live in, for your own

> > > > behaviour if you want to. But if you choose to believe you have free

> > > > will, you can lead your life as though you were responsible for your

> > > > own actions, and endeavour to have your own effect on the future. If

> > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > but if you do that to excess you would obviously not be in control of

> > > > your own life, and you would eventually not be allowed to vote in

> > > > elections, or have a bank account (according to some legal definitions

> > > > of the requirements for these adult privileges). But you would

> > > > clearly have more free will if you chose to believe you had free will

> > > > than if you chose to believe you did not. You could even toss a coin

> > > > to decide whether or not to behave in future as though you had free

> > > > will (rather than just going with the flow all the time and never

> > > > making a decision), and your destiny would be controlled by a single

> > > > random coin toss, that you yourself had initiated.

>

> > > You haven't addressed the simple issue of where is the influence/

> > > affect of conscious experiences in the outputs, if all the nodes are

> > > just giving the same outputs as they would have singly in

> > > the lab given the same inputs. Are you suggesting that the outputs

> > > given by a single node in the lab were consciously influenced/

> > > affected, or that the affect/influence is such that the outputs are

> > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > It is a simple question, why do you avoid answering it?

>

> > It is a question which you hadn't asked me, so you can hardly claim

> > that I was avoiding it! The simple answer is no, I am not suggesting

> > any such thing. Apart from your confusion of the terms effect and

> > affect, each sentence you write (the output from you as a node) is

> > meaningless and does not exhibit consciousness. However, the

> > paragraph as a whole exhibits consciousness, because it has been

> > posted and arrived at my computer. Try to be more specific in your

> > writing, and give some examples, and that will help you to communicate

> > what you mean.

>

> > Some of the outputs of the conscious organism (eg speech) are

> > sometimes directly affected by conscious experiences, other outputs

> > (eg shit) are only indirectly affected by conscious experiences, such

> > as previous hunger and eating. If a conscious biological brain (in a

> > laboratory experiment) was delivered the same inputs to identical

> > neurons as in the earlier recorded hour of data from another brain,

> > the experiences would be the same (assuming the problem of

> > constructing an identical arrangement of neurons in the receiving

> > brain to receive the input data had been solved). Do you disagree?

>

> It was a post you had replied to, and you hadn't addressed the

> question. You still haven't addressed it. You are just distracting

> away, by asking me a question. It does beg the question of why you

> don't seem to be able to face answering.

>

> I'll repeat the question for you. Regarding the robot, where is the

> influence/ affect of conscious experiences in the outputs, if all the

> nodes are just giving the same outputs as they would have singly in the

> lab given the same inputs. Are you suggesting that the outputs given

> by a single node in the lab were consciously influenced/ affected, or

> that the affect/influence is such that the outputs are the same as

> without the affect (i.e. there is no affect/influence)?

 

You are in too much of a hurry, and you didn't read my reply to your

first response. As you have repeated your very poorly-worded question

- which incidentally you should try and tidy up to give it a proper

meaning - I'll repeat my reply:

> Your question was not clear enough for me to give you an opinion. You

> are focusing attention on analysis of the stored I/O data transactions

> within the neural net. Sense data (eg from a TV camera) is fed into

> the input nodes, the output from these go to the inputs of other

> nodes, and so on until eventually the network output nodes trigger a

> response - the programmed device perhaps indicates that it has 'seen'

> something, by sounding an alarm. It is clear that no particular part

> of the device is conscious, and there is no reason to suppose that the

> overall system is conscious either. I gather (from your rather

> elastically-worded phrasing in the preceding 1000 messages in this

> thread) that your argument goes on that the same applies to the human

> brain - but I disagree with you there as you know, because I reckon

> that the biophysical laws of evolution explain why the overall system

> is conscious.

>

> When I wrote that the robot device 'might in fact have been

> conscious', that is in the context of your belief that a conscious

> 'soul' is superimposed on the biomechanical workings of the brain in a

> way that science is unaware of, and you may be correct, I don't know.

> But if that were possible, the same might also be occurring in the

> computer experiment you described. Perhaps the residual currents in

> the wires of the circuit board contain a homeopathic 'memory' of the

> cpu and memory workings of the running program, which by some divine

> influence is conscious. I think you have been asking other responders

> how such a 'soul' could influence the overall behaviour of the

> computer.

>

> Remembering that computers have been specifically designed so that the

> external environment does not affect them, and the most sensitive

> components (the program and cpu) are heavily shielded from external

> influences so that it is almost impossible for anything other than a

> nearby nuclear explosion to affect their behaviour, it is probably not

> easy for a computer 'soul' to have that effect. But it might take

> only one bit of logic in the cpu to go wrong just once for the whole

> program to behave totally differently, and crash or print a wrong

> number. The reason that computer memory is so heavily shielded is

> that the program is extremely sensitive to influences such as magnetic

> and electric fields, and particularly gamma particles, cosmic rays, X-

> rays and the like. If your data analysis showed that a couple of

> bits in the neural net had in fact 'transcended' the program, and

> seemingly changed of their own accord, there would be no reason to

> suggest that the computer had a 'soul'. More likely an alpha-particle

> had passed through a critical logic gate in the bus controller or

> memory chip, affecting the timing or data in a way that led to an

> eventual 'glitch'.

 

If that isn't an adequate response, try writing out your question

again in a more comprehensible form. You don't distinguish between

the overall behaviour of the robot (do the outputs make it walk and

talk?) and the behaviour of the computer processing (the I/O on the

chip and program circuitry), for example.

 

I'm trying to help you express whatever it is you are trying to

explain to the groups in your header (heathen alt.atheism, God-fearing

alt.religion) in which you are taking the God-given free-will

position. It looks like you are trying to construct an unassailable

argument relating subjective experiences and behaviour, computers and

brains, and free will, to notions of the existence of God and the

soul. It might be possible to do that if you express yourself more

clearly, and some of your logic was definitely wrong earlier, so you

would have to correct that as well. I expect that if whatever you

come up with doesn't include an explanation for why evolutionary

principles don't produce consciousness (as you claim) you won't really

be doing anything other than time-wasting pseudo-science, because

scientifically-minded people will say you haven't answered our

> question - why do you introduce the idea of a 'soul' (for which there

is no evidence) to explain consciousness in animals such as ourselves,

when evolution (for which there is evidence) explains it perfectly?
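The point above about one bit of cpu logic going wrong just once can be made concrete. The sketch below (illustrative only; the variable names and values are assumptions, not anything from the thread) flips a single bit of a stored integer, the kind of single-event upset an alpha particle or cosmic ray can cause, and shows how drastically one bit changes the value a program works with:

```python
# Illustrative sketch: a single flipped bit (e.g. from an alpha-particle
# strike on a memory chip) can change a stored value, and hence program
# behaviour, entirely.

def flip_bit(value: int, bit: int) -> int:
    """Return `value` with one bit inverted."""
    return value ^ (1 << bit)

loop_count = 1000                     # what the program intended
corrupted = flip_bit(loop_count, 20)  # one upset, in bit 20

# 1000 ^ (1 << 20) = 1049576: the loop now runs about a thousand
# times longer than intended - the kind of 'glitch' described above.
```

This is why memory is shielded and, in practice, also protected with parity or ECC bits that detect exactly this class of single-bit error.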

Guest Jim07D7
Posted

someone2 <glenn.spigel2@btinternet.com> said:

>On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>> someone2 <glenn.spig...@btinternet.com> said:

>>

>>

>>

>>

>>

>>

>>

>> >You said:

>>

>> >--------

>> >

>>

>> >I'd still like your response to this question.

>>

>> >> >> >> How do we know whether something is free of any rules which state

>> >> >> >> which one must happen?

>>

>> >

>> >--------

>>

>> >Well this was my response:

>> >--------

>> >Regarding free will, and your question as to how do we know whether

>> >something is free of any rules which state which one must happen, I

>> >have already replied:

>> >-----------

>> >With regards to how we would know whether something was free from

>> >rules which state which one must happen, you would have no evidence

>> >other than your own experience of being able to will which one

>> >happens. Though if we ever detected the will influencing behaviour,

>> >which I doubt we will, then you would also have the evidence of there

>> >being no discernible rules. Any rule shown to the being, the being

>> >could either choose to adhere to, or break, in the choice they made.

>> >-----------

>>

>> >Though I'd like to change this to: that for any rule you had for which

>> >choice they were going to make, say pressing the black or white button

>> >1000 times, they could adhere to it or not, this could include any

>> >random distribution rule.

>>

>> >The reason for the change is that on a robot it could always have a

>> >part which gave the same response as it was shown, or to change the

>> >response so as to not be the same as which was shown to it. So I have

>> >removed its prediction from being shown.

>>

>> >I have defined free will for you:

>>

>> >Free will, is for example, that between 2 options, both were possible,

>> >and you were able to consciously influence which would happen. and

>> >added:

>>

>> >Regarding the 'free will', it is free to choose either option A or B.

>> >It is free from any rules which state which one must happen. As I said

>> >both are possible, before it was consciously willed which would

>> >happen.

>> >--------

>>

>> >You were just showing that you weren't bothering to read what was

>> >written.

>>

>> I read every word. I did not think of that as an answer. Now, you have

>> stitched it together for me. Thanks. It still does not give me

>> observable behaviors you would regard as indicating

>> consciousness. Being "free" is hidden in the hardware.

>> <....>

>>

>> >Where is the influence in the outputs, if all the nodes are just

>> >giving the same outputs as they would have singly in the lab given the

>> >same inputs. Are you suggesting that the outputs given by a single

>> >node in the lab were consciously influenced, or that the influence is

>> >such that the outputs are the same as without the influence?

>>

>> You have forgotten my definition of influence. If I may simplify it.

>> Something is influential if it affects an outcome. If a single node in

>> a lab gave all the same responses as you do, over weeks of testing,

>> would you regard it as conscious? Why or why not?

>>

>> I think we are closing in on your answer.

>>

>

>That wasn't stitched together, that was a cut and paste of what I

>wrote in my last response, before it was broken up by your responses.

>If you hadn't snipped the history you would have seen that.

>

>I'll just put back in the example we are talking about, so there is

>context to the question:

>

>--------

>Supposing there was an artificial neural network, with a billion more

>nodes than you had neurons in your brain, and a very 'complicated'

>configuration, which drove a robot. The robot, due to its behaviour

>(being a Turing Equivalent), caused some atheists to claim it was

>consciously experiencing. Now supposing each message between each node

>contained additional information such as source node, destination node

>(which could be the same as the source node, for feedback loops), the

>time the message was sent, and the message. Each node wrote out to a

>log on receipt of a message, and on sending a message out. Now after

>an hour of the atheist communicating with the robot, the logs could be

>examined by a bank of computers, verifying that for the input

>messages each received, the outputs were those expected by a single

>node in a lab given the same inputs. They could also verify that no

>unexplained messages appeared.

>

>Where was the influence of any conscious experiences in the robot?

>--------

>

>To which you replied:

>--------

>If it is conscious, the influence is in the outputs, just like ours

>are.

>--------

>

>So rewording my last question to you (seen above in the history),

>taking into account what you regard as influence:

>

>Where is the affect of conscious experiences in the outputs, if all

>the nodes are just giving the same outputs as they would have singly in

>the lab given the same inputs. Are you suggesting that the outputs

>given by a single node in the lab were consciously affected, or that

>the affect is such that the outputs are the same as without the

>affect?

>

>I notice you avoided facing answering the question. Perhaps you would

>care to directly answer this.

 

My resistance is causing you to start acknowledging that structure

plays a role in your deciding whether an entity is conscious; not just

behavior. Eventually you might see that you ascribe consciousness to

other humans on account of, in part, their structural similarity to

you. The question will remain, what justifies ascribing consciousness

to an entity. When we agree on that, I will be able to give you an

answer that satisfies you. Not until then.

Guest someone2
Posted

On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

> On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > >You said:

>

> > > > > > > >--------

> > > > > > > >

>

> > > > > > > >I'd still like your response to this question.

>

> > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > >> >> >> which one must happen?

>

> > > > > > > >

> > > > > > > >--------

>

> > > > > > > >Well this was my response:

> > > > > > > >--------

> > > > > > > >Regarding free will, and your question as to how do we know whether

> > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > >have already replied:

> > > > > > > >-----------

> > > > > > > >With regards to how we would know whether something was free from

> > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > >other than your own experience of being able to will which one

> > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > >-----------

>

> > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > >random distribution rule.

>

> > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > > > >removed its prediction from being shown.

>

> > > > > > > >I have defined free will for you:

>

> > > > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > > > >and you were able to consciously influence which would happen. and

> > > > > > > >added:

>

> > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > >happen.

> > > > > > > >--------

>

> > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > >written.

>

> > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > observable behaviors you would regard as indicating

> > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > <....>

>

> > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > I think we are closing in on your answer.

>

> > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > context to the question:

>

> > > > > > --------

> > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > contained additional information such as source node, destination node

> > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > examined by a bank of computers, verifying that for the input

> > > > > > messages each received, the outputs were those expected by a single

> > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > unexplained messages appeared.

>

> > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > --------

>

> > > > > > To which you replied:

> > > > > > --------

> > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > are.

> > > > > > --------

>

> > > > > > So rewording my last question to you (seen above in the history),

> > > > > > taking into account what you regard as influence:

>

> > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > the affect is such that the outputs are the same as without the

> > > > > > affect?

>

> > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > brain, if you take into account the physical arrangement and timing

> > > > > and the connections to the outside world through the senses, would in

> > > > > fact correspond to consciousness.

>

> > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > a picture. But from a human perspective of the whole, not only are

> > > > > there more than just three colours, the pixels can form meaningful

> > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > > > which we understand as related to reality, which we call

> > > > > consciousness, even though each firing is not itself conscious.

>

> > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > each node in its neural net would not show it. To understand why an

> > > > > organism might or might not be conscious, you have to consider larger

> > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > over billions of years to become aware of external reality, because

> > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > and to do that, you have to understand how evolution achieved

> > > > > awareness in biological organisms. At this point in the argument, you

> > > > > usually state your personal opinion that evolution does not explain

> > > > > consciousness, because of the soul and God-given free will; your

> > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > a human have total free will to do or be whatever they feel like

> > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > complexity of the brain and the sense information it receives is

> > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > controlled by random events, that is true, and we have no free will in

> > > > > that sense. However, we have sufficient understanding of free will to

> > > > > be able to form a moral code based on the notion. You can blame or

> > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > will, you can lead your life as though you were responsible for your

> > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > but if you do that to excess you would obviously not be in control of

> > > > > your own life, and you would eventually not be allowed to vote in

> > > > > elections, or have a bank account (according to some legal definitions

> > > > > of the requirements for these adult privileges). But you would

> > > > > clearly have more free will if you chose to believe you had free will

> > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > to decide whether or not to behave in future as though you had free

> > > > > will (rather than just going with the flow all the time and never

> > > > > making a decision), and your destiny would be controlled by a single

> > > > > random coin toss, that you yourself had initiated.

>

> > > > You haven't addressed the simple issue of where is the influence/

> > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > just giving the same outputs as they would have singly in

> > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > given by a single node in the lab were consciously influenced/

> > > > affected, or that the affect/influence is such that the outputs are

> > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > It is a simple question, why do you avoid answering it?

>

> > > It is a question which you hadn't asked me, so you can hardly claim

> > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > any such thing. Apart from your confusion of the terms effect and

> > > affect, each sentence you write (the output from you as a node) is

> > > meaningless and does not exhibit consciousness. However, the

> > > paragraph as a whole exhibits consciousness, because it has been

> > > posted and arrived at my computer. Try to be more specific in your

> > > writing, and give some examples, and that will help you to communicate

> > > what you mean.

>

> > > Some of the outputs of the conscious organism (eg speech) are

> > > sometimes directly affected by conscious experiences, other outputs

> > > (eg shit) are only indirectly affected by conscious experiences, such

> > > as previous hunger and eating. If a conscious biological brain (in a

> > > laboratory experiment) was delivered the same inputs to identical

> > > neurons as in the earlier recorded hour of data from another brain,

> > > the experiences would be the same (assuming the problem of

> > > constructing an identical arrangement of neurons in the receiving

> > > brain to receive the input data had been solved). Do you disagree?

>

> > It was a post you had replied to, and you hadn't addressed the

> > question. You still haven't addressed it. You are just distracting

> > away, by asking me a question. It does beg the question of why you

> > don't seem to be able to face answering.

>

> > I'll repeat the question for you. Regarding the robot, where is the

> > influence/ affect of conscious experiences in the outputs, if all the

> > nodes are just giving the same outputs as they would have singly in the

> > lab given the same inputs. Are you suggesting that the outputs given

> > by a single node in the lab were consciously influenced/ affected, or

> > that the affect/influence is such that the outputs are the same as

> > without the affect (i.e. there is no affect/influence)?

>

> You are in too much of a hurry, and you didn't read my reply to your

> first response. As you have repeated your very poorly-worded question

> - which incidentally you should try and tidy up to give it a proper

> meaning - I'll repeat my reply:

>

> > > Your question was not clear enough for me to give you an opinion. You

> > > are focusing attention on analysis of the stored I/O data transactions

> > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > the input nodes, the output from these go to the inputs of other

> > > nodes, and so on until eventually the network output nodes trigger a

> > > response - the programmed device perhaps indicates that it has 'seen'

> > > something, by sounding an alarm. It is clear that no particular part

> > > of the device is conscious, and there is no reason to suppose that the

> > > overall system is conscious either. I gather (from your rather

> > > elastically-worded phrasing in the preceding 1000 messages in this

> > > thread) that your argument goes on that the same applies to the human

> > > brain - but I disagree with you there as you know, because I reckon

> > > that the biophysical laws of evolution explain why the overall system

> > > is conscious.

>

> > > When I wrote that the robot device 'might in fact have been

> > > conscious', that is in the context of your belief that a conscious

> > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > way that science is unaware of, and you may be correct, I don't know.

> > > But if that were possible, the same might also be occurring in the

> > > computer experiment you described. Perhaps the residual currents in

> > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > cpu and memory workings of the running program, which by some divine

> > > influence is conscious. I think you have been asking other responders

> > > how such a 'soul' could influence the overall behaviour of the

> > > computer.

>

> > > Remembering that computers have been specifically designed so that the

> > > external environment does not affect them, and the most sensitive

> > > components (the program and cpu) are heavily shielded from external

> > > influences so that it is almost impossible for anything other than a

> > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > easy for a computer 'soul' to have that effect. But it might take

> > > only one bit of logic in the cpu to go wrong just once for the whole

> > > program to behave totally differently, and crash or print a wrong

> > > number. The reason that computer memory is so heavily shielded is

> > > that the program is extremely sensitive to influences such as magnetic

> > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > rays and the like. If your data analysis showed that a couple of

> > > bits in the neural net had in fact 'transcended' the program, and

> > > seemingly changed of their own accord, there would be no reason to

> > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > had passed through a critical logic gate in the bus controller or

> > > memory chip, affecting the timing or data in a way that led to an

> > > eventual 'glitch'.

>

> If that isn't an adequate response, try writing out your question

> again in a more comprehensible form. You don't distinguish between

> the overall behaviour of the robot (do the outputs make it walk and

> talk?) and the behaviour of the computer processing (the I/O on the

> chip and program circuitry), for example.

>

> I'm trying to help you express whatever it is you are trying to

> explain to the groups in your header (heathen alt.atheism, God-fearing

> alt.religion) in which you are taking the God-given free-will

> position. It looks like you are trying to construct an unassailable

> argument relating subjective experiences and behaviour, computers and

> brains, and free will, to notions of the existence of God and the

> soul. It might be possible to do that if you express yourself more

> clearly, and some of your logic was definitely wrong earlier, so you

> would have to correct that as well. I expect that if whatever you

> come up with doesn't include an explanation for why evolutionary

> principles don't produce consciousness (as you claim) you won't really

> be doing anything other than time-wasting pseudo-science, because

> scientifically-minded people will say you haven't answered our

> question - why do you introduce the idea of a 'soul' (for which their

> is no evidence) to explain consciousness in animals such as ourselves,

> when evolution (for which there is evidence) explains it perfectly?

 

I'll put the example again:

 

--------

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that for the input

messages each received, the outputs were those expected by a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.

---------
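The logging-and-checking scheme in the example can be sketched in a few lines. This is a toy illustration only: the chain topology, the `node_fn` rule, and the log-entry format are assumptions introduced for the sketch, not part of the original thought experiment.

```python
import itertools

def node_fn(node_id, value):
    """Hypothetical deterministic node rule (stands in for a trained node)."""
    return (31 * value + node_id) % 1000

def run_network(n_nodes, stimulus):
    """Pass a message along a chain of nodes, logging every send/receive.

    Each log entry records (event, source, destination, tick, message),
    mirroring the extra information the example attaches to each message.
    """
    log = []
    tick = itertools.count()
    value = stimulus
    for i in range(n_nodes):
        dest = (i + 1) % n_nodes   # last node feeds the first: a feedback loop
        log.append(("recv", i - 1, i, next(tick), value))
        out = node_fn(i, value)
        log.append(("send", i, dest, next(tick), out))
        value = out
    return log

def verify(log):
    """Replay each node's logged input through an isolated copy of the node
    'in the lab' and check the logged output matches exactly."""
    recvs = {e[2]: e[4] for e in log if e[0] == "recv"}  # node -> input seen
    sends = {e[1]: e[4] for e in log if e[0] == "send"}  # node -> output sent
    return all(node_fn(node, recvs[node]) == out for node, out in sends.items())

log = run_network(5, stimulus=42)
assert verify(log)  # every node behaved exactly as it would singly in the lab
```

The point of the thought experiment survives in the sketch: `verify` passes precisely because each node's logged output is exactly what an isolated copy of that node would produce from the same input, and any "unexplained" message or deviation would make it fail.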

 

You seem to have understood the example, but asked whether it walked

and talked. Yes, let's say it does, that it would be a contender for an

Olympic gymnastics medal, and that it said it hurt if you stamped on its foot.

 

With regards to the question you are having a problem comprehending,

I'll reword it for you. By conscious experiences I am referring to the

sensations we experience such as pleasure and pain, visual and

auditory perception etc.

 

Are you suggesting:

 

A) The atheists were wrong to consider it to be consciously

experiencing.

B) They were correct to say that it was consciously experiencing, but it

doesn't influence behaviour.

C) It might have been consciously experiencing (how could you tell?), but it

doesn't influence behaviour.

D) It was consciously experiencing and that influenced its behaviour.

 

If you select D, as all the nodes are giving the same outputs as they

would have singly in the lab given the same inputs, could you also

select between D1 and D2:

 

D1) The outputs given by a single node in the lab were also influenced

by conscious experiences.

D2) The outputs given by a single node in the lab were not influenced

by conscious experiences, but the influence of the conscious

experiences within the robot, is such that the outputs are the same as

without the influence of the conscious experiences.

 

If you are still having problems comprehending, then simply write out

the sentences you could not understand, and I'll reword them for you.

If you want to add another option, then simply state it; obviously the

option should be related to whether the robot was consciously

experiencing, and if it was, whether the conscious experiences

influenced behaviour.

Guest someone2
Posted

On 7 Jul, 07:20, Jim07D7 <Jim0...@nospam.net> wrote:

> someone2 <glenn.spig...@btinternet.com> said:

>

>

>

>

>

> >On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

> >> someone2 <glenn.spig...@btinternet.com> said:

>

> >> >You said:

>

> >> >--------

> >> >

>

> >> >I'd still like your response to this question.

>

> >> >> >> >> How do we know whether something is free of any rules which state

> >> >> >> >> which one must happen?

>

> >> >

> >> >--------

>

> >> >Well this was my response:

> >> >--------

> >> >Regarding free will, and you question as to how do we know whether

> >> >something is free of any rules which state which one must happen, I

> >> >have already replied:

> >> >-----------

> >> >With regards to how we would know whether something was free from

> >> >rules which state which one must happen, you would have no evidence

> >> >other than your own experience of being able to will which one

> >> >happens. Though if we ever detected the will influencing behaviour,

> >> >which I doubt we will, then you would also have the evidence of there

> >> >being no decernable rules. Any rule shown to the being, the being

> >> >could either choose to adhere to, or break, in the choice they made.

> >> >-----------

>

> >> >Though I'd like to change this to: that for any rule you had for which

> >> >choice they were going to make, say pressing the black or white button

> >> >1000 times, they could adhere to it or not, this could include any

> >> >random distribution rule.

>

> >> >The reason for the change is that on a robot it could always have a

> >> >part which gave the same response as it was shown, or to change the

> >> >response so as to not be the same as which was shown to it. So I have

> >> >removed it prediction from being shown.

>

> >> >I have defined free will for you:

>

> >> >Free will, is for example, that between 2 options, both were possible,

> >> >and you were able to consciously influence which would happen. and

> >> >added:

>

> >> >Regarding the 'free will', it is free to choose either option A or B.

> >> >It is free from any rules which state which one must happen. As I said

> >> >both are possible, before it was consciously willed which would

> >> >happen.

> >> >--------

>

> >> >You were just showing that you weren't bothering to read what was

> >> >written.

>

> >> I read every word. I did not think of that as an answer. Now, you have

> >> stitched it together for me. Thanks. It still does not give me

> >> observable behaviors you would regard as regard as indicating

> >> consciousness. Being "free" is hidden in the hardware.

> >> <....>

>

> >> >Where is the influence in the outputs, if all the nodes are just

> >> >giving the same outputs as they would have singly in the lab given the

> >> >same inputs. Are you suggesting that the outputs given by a single

> >> >node in the lab were consciously influenced, or that the influence is

> >> >such that the outputs are the same as without the influence?

>

> >> You have forgotten my definition of influence. If I may simplify it.

> >> Something is influential if it affects an outcome. If a single node in

> >> a lab gave all the same responses as you do, over weeks of testing,

> >> would you regard it as conscious? Why or why not?

>

> >> I think we are closing in on your answer.

>

> >That wasn't stitched together, that was a cut and paste of what I

> >wrote in my last response, before it was broken up by your responses.

> >If you hadn't snipped the history you would have seen that.

>

> >I'll just put back in the example we are talking about, so there is

> >context to the question:

>

> >--------

> >Supposing there was an artificial neural network, with a billion more

> >nodes than you had neurons in your brain, and a very 'complicated'

> >configuration, which drove a robot. The robot, due to its behaviour

> >(being a Turing Equivalent), caused some atheists to claim it was

> >consciously experiencing. Now supposing each message between each node

> >contained additional information such as source node, destination node

> >(which could be the same as the source node, for feedback loops), the

> >time the message was sent, and the message. Each node wrote out to a

> >log on receipt of a message, and on sending a message out. Now after

> >an hour of the atheist communicating with the robot, the logs could be

> >examined by a bank of computers, varifying, that for the input

> >messages each received, the outputs were those expected by a single

> >node in a lab given the same inputs. They could also varify that no

> >unexplained messages appeared.

>

> >Where was the influence of any conscious experiences in the robot?

> >--------

>

> >To which you replied:

> >--------

> >If it is conscious, the influence is in the outputs, just like ours

> >are.

> >--------

>

> >So rewording my last question to you (seen above in the history),

> >taking into account what you regard as influence:

>

> >Where is the affect of conscious experiences in the outputs, if all

> >the nodes are justgiving the same outputs as they would have singly in

> >the lab given the same inputs. Are you suggesting that the outputs

> >given by a single node in the lab were consciously affected, or that

> >the affect is such that the outputs are the same as without the

> >affect?

>

> >I notice you avoided facing answering the question. Perhaps you would

> >care to directly answer this.

>

> My resistance is causing you to start acknowledging that structure

> plays a role in your deciding whether an entity is conscious; not just

> behavior. Eventually you might see that you ascribe consciousness to

> other humans on account of, in part, their structural similarity to

> you. The question will remain, what justifies ascribing consciousness

> to an entity. WHen we agree on that, I will be able to give you an

> answer that satisfies you. Not until then.

>

 

I fail to see how my understanding affects your answer. You are just

avoiding answering. The reason, it seems to me, is that you can't face

answering, as you couldn't state that the conscious experiences would

be influencing the robot's behaviour, and you know that it is

implausible that conscious experiences don't influence our behaviour.

First your excuse was that I hadn't taken your definition of influence

into account, then when I did, you simply tried to steer the

conversation away. To me your behaviour seems pathetic. If you wish

to use the excuse of being offended to avoid answering and ending the

conversation, then that is your choice. You can thank me for giving

you the excuse to avoid answering that you are so desperate for.

Guest James Norris
Posted

On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

> On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > >You said:

>

> > > > > > > > >--------

> > > > > > > > >

>

> > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > >

> > > > > > > > >--------

>

> > > > > > > > >Well this was my response:

> > > > > > > > >--------

> > > > > > > > >Regarding free will, and you question as to how do we know whether

> > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > >have already replied:

> > > > > > > > >-----------

> > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > >being no decernable rules. Any rule shown to the being, the being

> > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > >-----------

>

> > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > >random distribution rule.

>

> > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > >part which gave the same response as it was shown, or to change the

> > > > > > > > >response so as to not be the same as which was shown to it. So I have

> > > > > > > > >removed it prediction from being shown.

>

> > > > > > > > >I have defined free will for you:

>

> > > > > > > > >Free will, is for example, that between 2 options, both were possible,

> > > > > > > > >and you were able to consciously influence which would happen. and

> > > > > > > > >added:

>

> > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > >happen.

> > > > > > > > >--------

>

> > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > >written.

>

> > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > observable behaviors you would regard as regard as indicating

> > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > <....>

>

> > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > I think we are closing in on your answer.

>

> > > > > > > That wasn't stitched together, that was a cut and paste of what I

> > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > context to the question:

>

> > > > > > > --------

> > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > contained additional information such as source node, destination node

> > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > examined by a bank of computers, varifying, that for the input

> > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > node in a lab given the same inputs. They could also varify that no

> > > > > > > unexplained messages appeared.

>

> > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > --------

>

> > > > > > > To which you replied:

> > > > > > > --------

> > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > are.

> > > > > > > --------

>

> > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > taking into account what you regard as influence:

>

> > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > the nodes are justgiving the same outputs as they would have singly in

> > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > affect?

>

> > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > and the connections to the outside world through the senses, would in

> > > > > > fact correspond to consciousness.

>

> > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > > > > which we understand as related to reality, which we call

> > > > > > consciousness, even though each firing is not itself conscious.

>

> > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > each node in its neural net would not show it. To understand why an

> > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > over billions of years to become aware of external reality, because

> > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > and to do that, you have to understand how evolution achieved

> > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > usually state your personal opinion that evolution does not explain

> > > > > > consciousness, because of the soul and God-given free will, your

> > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > a human have total free will to do or be whatever they feel like

> > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > complexity of the brain and the sense information it receives is

> > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > will, you can lead your life as though you were responsible for your

> > > > > > own actions, and endeavour to have your own affect on the future. If

> > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > of the requirements for these adult privileges). But you would

> > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > to decide whether or not to behave in future as though you had free

> > > > > > will (rather than just going with the flow all the time and never

> > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > random coin toss, that you yourself had initiated.

>

> > > > > You haven't addressed the simple issue of where is the influence/

> > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > justgiving the same outputs as they would have singly in

> > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > given by a single node in the lab were consciously influenced/

> > > > > affected, or that the affect/influence is such that the outputs are

> > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > It is a simple question, why do you avoid answering it?

>

> > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > any such thing. Apart from your confusion of the terms effect and

> > > > affect, each sentence you write (the output from you as a node) is

> > > > meaningless and does not exhibit consciousness. However, the

> > > > paragraph as a whole exhibits consciousness, because it has been

> > > > posted and arrived at my computer. Try to be more specific in your

> > > > writing, and give some examples, and that will help you to communicate

> > > > what you mean.

>

> > > > Some of the outputs of the conscious organism (eg speech) are

> > > > sometimes directly affected by conscious experiences, other outputs

> > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > laboratory experiment) was delivered the same inputs to identical

> > > > neurons as in the earlier recorded hour of data from another brain,

> > > > the experiences would be the same (assuming the problem of

> > > > constructing an identical arrangement of neurons in the receiving

> > > > brain to receive the input data had been solved). Do you disagree?

>

> > > It was a post you had replied to, and you hadn't addressed the

> > > question. You still haven't addressed it. You are just distracting

> > > away, by asking me a question. It does beg the question of why you

> > > don't seem to be able to face answering.

>

> > > I'll repeat the question for you. Regarding the robot, where is the

> > > influence/ affect of conscious experiences in the outputs, if all the

> > > nodes are justgiving the same outputs as they would have singly in the

> > > lab given the same inputs. Are you suggesting that the outputs given

> > > by a single node in the lab were consciously influenced/ affected, or

> > > that the affect/influence is such that the outputs are the same as

> > > without the affect (i.e. there is no affect/influence)?

>

> > You are in too much of a hurry, and you didn't read my reply to your

> > first response. As you have repeated your very poorly-worded question

> > - which incidentally you should try and tidy up to give it a proper

> > meaning - I'll repeat my reply:

>

> > > > Your question was not clear enough for me to give you an opinion. You

> > > > are focusing attention on analysis of the stored I/O data transactions

> > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > the input nodes, the output from these go to the inputs of other

> > > > nodes, and so on until eventually the network output nodes trigger a

> > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > something, by sounding an alarm. It is clear that no particular part

> > > > of the device is conscious, and there is no reason to suppose that the

> > > > overall system is conscious either. I gather (from your rather

> > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > thread) that your argument goes on that the same applies to the human

> > > > brain - but I disagree with you there as you know, because I reckon

> > > > that the biophysical laws of evolution explain why the overall system

> > > > is conscious.

>

> > > > When I wrote that the robot device 'might in fact have been

> > > > conscious', that is in the context of your belief that a conscious

> > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > way that science is unaware of, and you may be correct, I don't know.

> > > > But if that were possible, the same might also be occuring in the

> > > > computer experiment you described. Perhaps the residual currents in

> > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > cpu and memory workings of the running program, which by some divine

> > > > influence is conscious. I think you have been asking other responders

> > > > how such a 'soul' could influence the overall behaviour of the

> > > > computer.

>

> > > > Remembering that computers have been specifically designed so that the

> > > > external environment does not affect them, and the most sensitive

> > > > components (the program and cpu) are heavily shielded from external

> > > > influences so that it is almost impossible for anything other than a

> > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > easy for a computer 'soul' to have that effect. But it might take

> > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > program to behave totally differently, and crash or print a wrong

> > > > number. The reason that computer memory is so heavily shielded is

> > > > that the program is extremely sensitive to influences such as magnetic

> > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > rays and the like. If your data analysis showed that a couple of

> > > > bits in the neural net had in fact 'transcended' the program, and

> > > > seemingly changed of their own accord, there would be no reason to

> > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > had passed through a critical logic gate in the bus controller or

> > > > memory chip, affecting the timing or data in a way that led to an

> > > > eventual 'glitch'.

>

> > If that isn't an adequate response, try writing out your question

> > again in a more comprehensible form. You don't distinguish between

> > the overall behaviour of the robot (do the outputs make it walk and

> > talk?) and the behaviour of the computer processing (the I/O on the

> > chip and program circuitry), for example.

>

> > I'm trying to help you express whatever it is you are trying to

> > explain to the groups in your header (heathen alt.atheism, God-fearing

> > alt.religion) in which you are taking the God-given free-will

> > position. It looks like you are trying to construct an unassailable

> > argument relating subjective experiences and behaviour, computers and

> > brains, and free will, to notions of the existence of God and the

> > soul. It might be possible to do that if you express yourself more

> > clearly, and some of your logic was definitely wrong earlier, so you

> > would have to correct that as well. I expect that if whatever you

> > come up with doesn't include an explanation for why evolutionary

> > principles don't produce consciousness (as you claim) you won't really

> > be doing anything other than time-wasting pseudo-science, because

> > scientifically-minded people will say you haven't answered our

> > question - why do you introduce the idea of a 'soul' (for which their

> > is no evidence) to explain consciousness in animals such as ourselves,

> > when evolution (for which there is evidence) explains it perfectly?

>

> I'll put the example again:

>

> --------

> Supposing there was an artificial neural network, with a billion more

> nodes than you had neurons in your brain, and a very 'complicated'

> configuration, which drove a robot. The robot, due to its behaviour

> (being a Turing Equivalent), caused some atheists to claim it was

> consciously experiencing. Now supposing each message between each node

> contained additional information such as source node, destination node

> (which could be the same as the source node, for feedback loops), the

> time the message was sent, and the message. Each node wrote out to a

> log on receipt of a message, and on sending a message out. Now after

> an hour of the atheist communicating with the robot, the logs could be

> examined by a bank of computers, varifying, that for the input

> messages each received, the outputs were those expected by a single

> node in a lab given the same inputs. They could also varify that no

> unexplained messages appeared.

> ---------

>

> You seem to have understood the example, but asked whether it walked

> and talked, yes lets say it does, and would be a contender for an

> olympic gymnastics medal, and said it hurt if you stamped on its foot.

>

> With regards to the question you are having a problem comprehending,

> I'll reword it for you. By conscious experiences I am refering to the

> sensations we experience such as pleasure and pain, visual and

> auditory perception etc.

>

> Are you suggesting:

>

> A) The atheists were wrong to consider it to be consciously

> experiencing.

> B) Were correct to say that was consciously experiencing, but that it

> doesn't influence behaviour.

> C) It might have been consciously experiencing, how could you tell, it

> doesn't influence behaviour.

> D) It was consciously experiencing and that influenced its behaviour.

>

> If you select D, as all the nodes are giving the same outputs as they

> would have singly in the lab given the same inputs, could you also

> select between D1, and D2:

>

> D1) The outputs given by a single node in the lab were also influenced

> by conscious experiences.

> D2) The outputs given by a single node in the lab were not influenced

> by conscious experiences, but the influence of the conscious

> experiences within the robot, is such that the outputs are the same as

> without the influence of the conscious experiences.

>

> If you are still having problems comprehending, then simply write out

> the sentences you could not understand, and I'll reword them for you.

> If you want to add another option, then simply state it, obviously the

> option should be related to whether the robot was consciously

> experiencing, and if it was, whether the conscious experiences

> influenced behaviour.

 

As you requested, here is the question I didn't understand: "Regarding

the robot, where is the influence/ affect of conscious experiences in

the outputs, if all the nodes are just giving the same outputs as they

would have singly in the lab given the same inputs."

Guest someone2
Posted

On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

> On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> > > > > > > > > >You said:

>

> > > > > > > > > >--------

> > > > > > > > > >

>

> > > > > > > > > >I'd still like your response to this question.

>

> > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> > > > > > > > > >> >> >> which one must happen?

>

> > > > > > > > > >

> > > > > > > > > >--------

>

> > > > > > > > > >Well this was my response:

> > > > > > > > > >--------

> > > > > > > > > >Regarding free will, and your question as to how do we know whether

> > > > > > > > > >something is free of any rules which state which one must happen, I

> > > > > > > > > >have already replied:

> > > > > > > > > >-----------

> > > > > > > > > >With regards to how we would know whether something was free from

> > > > > > > > > >rules which state which one must happen, you would have no evidence

> > > > > > > > > >other than your own experience of being able to will which one

> > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> > > > > > > > > >being no discernible rules. Any rule shown to the being, the being

> > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> > > > > > > > > >-----------

>

> > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> > > > > > > > > >choice they were going to make, say pressing the black or white button

> > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> > > > > > > > > >random distribution rule.

>

> > > > > > > > > >The reason for the change is that on a robot it could always have a

> > > > > > > > > >part which gave the same response as it was shown, or changed the

> > > > > > > > > >response so as not to be the same as that which was shown to it. So I have

> > > > > > > > > >removed its prediction from being shown.

>

> > > > > > > > > >I have defined free will for you:

>

> > > > > > > > > >Free will is, for example, that between 2 options, both were possible,

> > > > > > > > > >and you were able to consciously influence which would happen; and I

> > > > > > > > > >added:

>

> > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> > > > > > > > > >It is free from any rules which state which one must happen. As I said

> > > > > > > > > >both are possible, before it was consciously willed which would

> > > > > > > > > >happen.

> > > > > > > > > >--------

>

> > > > > > > > > >You were just showing that you weren't bothering to read what was

> > > > > > > > > >written.

>

> > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> > > > > > > > > stitched it together for me. Thanks. It still does not give me

> > > > > > > > > observable behaviors you would regard as indicating

> > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> > > > > > > > > <....>

>

> > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> > > > > > > > > >such that the outputs are the same as without the influence?

>

> > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> > > > > > > > > Something is influential if it affects an outcome. If a single node in

> > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> > > > > > > > > would you regard it as conscious? Why or why not?

>

> > > > > > > > > I think we are closing in on your answer.

>

> > > > > > > > That wasn't stitched together; that was a cut and paste of what I

> > > > > > > > wrote in my last response, before it was broken up by your responses.

> > > > > > > > If you hadn't snipped the history you would have seen that.

>

> > > > > > > > I'll just put back in the example we are talking about, so there is

> > > > > > > > context to the question:

>

> > > > > > > > --------

> > > > > > > > Supposing there was an artificial neural network, with a billion more

> > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> > > > > > > > consciously experiencing. Now supposing each message between each node

> > > > > > > > contained additional information such as source node, destination node

> > > > > > > > (which could be the same as the source node, for feedback loops), the

> > > > > > > > time the message was sent, and the message. Each node wrote out to a

> > > > > > > > log on receipt of a message, and on sending a message out. Now after

> > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> > > > > > > > examined by a bank of computers, verifying that for the input

> > > > > > > > messages each received, the outputs were those expected by a single

> > > > > > > > node in a lab given the same inputs. They could also verify that no

> > > > > > > > unexplained messages appeared.

>

> > > > > > > > Where was the influence of any conscious experiences in the robot?

> > > > > > > > --------

>

> > > > > > > > To which you replied:

> > > > > > > > --------

> > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> > > > > > > > are.

> > > > > > > > --------

>

> > > > > > > > So rewording my last question to you (seen above in the history),

> > > > > > > > taking into account what you regard as influence:

>

> > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> > > > > > > > the nodes are just giving the same outputs as they would have singly in

> > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > > > given by a single node in the lab were consciously affected, or that

> > > > > > > > the affect is such that the outputs are the same as without the

> > > > > > > > affect?

>

> > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> > > > > > > is not conscious, and analysis of the brain of a human would produce

> > > > > > > similar data, and we know that we are conscious. We are reasonably

> > > > > > > sure that a computer controlled robot would not be conscious, so we

> > > > > > > can deduce that in the case of human brains 'the whole is more than

> > > > > > > the sum of the parts' for some reason. So the actual data from a

> > > > > > > brain, if you take into account the physical arrangement and timing

> > > > > > > and the connections to the outside world through the senses, would in

> > > > > > > fact correspond to consciousness.

>

> > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> > > > > > > a picture. But from a human perspective of the whole, not only are

> > > > > > > there more than just three colours, the pixels can form meaningful

> > > > > > > shapes that are related to reality. Similarly, the arrangement of

> > > > > > > bioelectric firings in the brain cause meaningful shapes in the brain

> > > > > > > which we understand as related to reality, which we call

> > > > > > > consciousness, even though each firing is not itself conscious.

>

> > > > > > > In your hypothetical experiment, the manufactured computer robot might

> > > > > > > in fact have been conscious, but the individual inputs and outputs of

> > > > > > > each node in its neural net would not show it. To understand why an

> > > > > > > organism might or might not be conscious, you have to consider larger

> > > > > > > aspects of the organism. The biological brain and senses have evolved

> > > > > > > over billions of years to become aware of external reality, because

> > > > > > > awareness of reality is an evolutionary survival trait. The robot

> > > > > > > brain you are considering is an attempt to duplicate that awareness,

> > > > > > > and to do that, you have to understand how evolution achieved

> > > > > > > awareness in biological organisms. At this point in the argument, you

> > > > > > > usually state your personal opinion that evolution does not explain

> > > > > > > consciousness, because of the soul and God-given free will, your

> > > > > > > behaviour goes into a loop and you start again from earlier.

>

> > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> > > > > > > a human have total free will to do or be whatever they feel like

> > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> > > > > > > limited freedom to choose our own future. At the microscopic level of

> > > > > > > the neurons in a brain, the events are unpredictable (because the

> > > > > > > complexity of the brain and the sense information it receives is

> > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> > > > > > > controlled by random events, that is true, and we have no free will in

> > > > > > > that sense. However, we have sufficient understanding of free will to

> > > > > > > be able to form a moral code based on the notion. You can blame or

> > > > > > > congratulate your ancestors, or the society you live in, for your own

> > > > > > > behaviour if you want to. But if you choose to believe you have free

> > > > > > > will, you can lead your life as though you were responsible for your

> > > > > > > own actions, and endeavour to have your own effect on the future. If

> > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> > > > > > > but if you do that to excess you would obviously not be in control of

> > > > > > > your own life, and you would eventually not be allowed to vote in

> > > > > > > elections, or have a bank account (according to some legal definitions

> > > > > > > of the requirements for these adult privileges). But you would

> > > > > > > clearly have more free will if you chose to believe you had free will

> > > > > > > than if you chose to believe you did not. You could even toss a coin

> > > > > > > to decide whether or not to behave in future as though you had free

> > > > > > > will (rather than just going with the flow all the time and never

> > > > > > > making a decision), and your destiny would be controlled by a single

> > > > > > > random coin toss, that you yourself had initiated.

>

> > > > > > You haven't addressed the simple issue of where is the influence/

> > > > > > affect of conscious experiences in the outputs, if all the nodes are

> > > > > > just giving the same outputs as they would have singly in

> > > > > > the lab given the same inputs. Are you suggesting that the outputs

> > > > > > given by a single node in the lab were consciously influenced/

> > > > > > affected, or that the affect/influence is such that the outputs are

> > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> > > > > > It is a simple question, why do you avoid answering it?

>

> > > > > It is a question which you hadn't asked me, so you can hardly claim

> > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> > > > > any such thing. Apart from your confusion of the terms effect and

> > > > > affect, each sentence you write (the output from you as a node) is

> > > > > meaningless and does not exhibit consciousness. However, the

> > > > > paragraph as a whole exhibits consciousness, because it has been

> > > > > posted and arrived at my computer. Try to be more specific in your

> > > > > writing, and give some examples, and that will help you to communicate

> > > > > what you mean.

>

> > > > > Some of the outputs of the conscious organism (eg speech) are

> > > > > sometimes directly affected by conscious experiences, other outputs

> > > > > (eg shit) are only indirectly affected by conscious experiences, such

> > > > > as previous hunger and eating. If a conscious biological brain (in a

> > > > > laboratory experiment) was delivered the same inputs to identical

> > > > > neurons as in the earlier recorded hour of data from another brain,

> > > > > the experiences would be the same (assuming the problem of

> > > > > constructing an identical arrangement of neurons in the receiving

> > > > > brain to receive the input data had been solved). Do you disagree?

>

> > > > It was a post you had replied to, and you hadn't addressed the

> > > > question. You still haven't addressed it. You are just distracting

> > > > away, by asking me a question. It does raise the question of why you

> > > > don't seem to be able to face answering.

>

> > > > I'll repeat the question for you. Regarding the robot, where is the

> > > > influence/ affect of conscious experiences in the outputs, if all the

> > > > nodes are just giving the same outputs as they would have singly in the

> > > > lab given the same inputs. Are you suggesting that the outputs given

> > > > by a single node in the lab were consciously influenced/ affected, or

> > > > that the affect/influence is such that the outputs are the same as

> > > > without the affect (i.e. there is no affect/influence)?

>

> > > You are in too much of a hurry, and you didn't read my reply to your

> > > first response. As you have repeated your very poorly-worded question

> > > - which incidentally you should try and tidy up to give it a proper

> > > meaning - I'll repeat my reply:

>

> > > > > Your question was not clear enough for me to give you an opinion. You

> > > > > are focusing attention on analysis of the stored I/O data transactions

> > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> > > > > the input nodes, the output from these go to the inputs of other

> > > > > nodes, and so on until eventually the network output nodes trigger a

> > > > > response - the programmed device perhaps indicates that it has 'seen'

> > > > > something, by sounding an alarm. It is clear that no particular part

> > > > > of the device is conscious, and there is no reason to suppose that the

> > > > > overall system is conscious either. I gather (from your rather

> > > > > elastically-worded phrasing in the preceding 1000 messages in this

> > > > > thread) that your argument goes on that the same applies to the human

> > > > > brain - but I disagree with you there as you know, because I reckon

> > > > > that the biophysical laws of evolution explain why the overall system

> > > > > is conscious.

>

> > > > > When I wrote that the robot device 'might in fact have been

> > > > > conscious', that is in the context of your belief that a conscious

> > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> > > > > way that science is unaware of, and you may be correct, I don't know.

> > > > > But if that were possible, the same might also be occurring in

> > > > > computer experiment you described. Perhaps the residual currents in

> > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> > > > > cpu and memory workings of the running program, which by some divine

> > > > > influence is conscious. I think you have been asking other responders

> > > > > how such a 'soul' could influence the overall behaviour of the

> > > > > computer.

>

> > > > > Remembering that computers have been specifically designed so that the

> > > > > external environment does not affect them, and the most sensitive

> > > > > components (the program and cpu) are heavily shielded from external

> > > > > influences so that it is almost impossible for anything other than a

> > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> > > > > easy for a computer 'soul' to have that effect. But it might take

> > > > > only one bit of logic in the cpu to go wrong just once for the whole

> > > > > program to behave totally differently, and crash or print a wrong

> > > > > number. The reason that computer memory is so heavily shielded is

> > > > > that the program is extremely sensitive to influences such as magnetic

> > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> > > > > rays and the like. If your data analysis showed that a couple of

> > > > > bits in the neural net had in fact 'transcended' the program, and

> > > > > seemingly changed of their own accord, there would be no reason to

> > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> > > > > had passed through a critical logic gate in the bus controller or

> > > > > memory chip, affecting the timing or data in a way that led to an

> > > > > eventual 'glitch'.

>

> > > If that isn't an adequate response, try writing out your question

> > > again in a more comprehensible form. You don't distinguish between

> > > the overall behaviour of the robot (do the outputs make it walk and

> > > talk?) and the behaviour of the computer processing (the I/O on the

> > > chip and program circuitry), for example.

>

> > > I'm trying to help you express whatever it is you are trying to

> > > explain to the groups in your header (heathen alt.atheism, God-fearing

> > > alt.religion) in which you are taking the God-given free-will

> > > position. It looks like you are trying to construct an unassailable

> > > argument relating subjective experiences and behaviour, computers and

> > > brains, and free will, to notions of the existence of God and the

> > > soul. It might be possible to do that if you express yourself more

> > > clearly, and some of your logic was definitely wrong earlier, so you

> > > would have to correct that as well. I expect that if whatever you

> > > come up with doesn't include an explanation for why evolutionary

> > > principles don't produce consciousness (as you claim) you won't really

> > > be doing anything other than time-wasting pseudo-science, because

> > > scientifically-minded people will say you haven't answered our

> > > question - why do you introduce the idea of a 'soul' (for which there

> > > is no evidence) to explain consciousness in animals such as ourselves,

> > > when evolution (for which there is evidence) explains it perfectly?

>

> > I'll put the example again:

>

> > --------

> > Supposing there was an artificial neural network, with a billion more

> > nodes than you had neurons in your brain, and a very 'complicated'

> > configuration, which drove a robot. The robot, due to its behaviour

> > (being a Turing Equivalent), caused some atheists to claim it was

> > consciously experiencing. Now supposing each message between each node

> > contained additional information such as source node, destination node

> > (which could be the same as the source node, for feedback loops), the

> > time the message was sent, and the message. Each node wrote out to a

> > log on receipt of a message, and on sending a message out. Now after

> > an hour of the atheist communicating with the robot, the logs could be

> > examined by a bank of computers, verifying that for the input

> > messages each received, the outputs were those expected by a single

> > node in a lab given the same inputs. They could also verify that no

> > unexplained messages appeared.

> > ---------

>

> > You seem to have understood the example, but asked whether it walked

> > and talked; yes, let's say it does, and would be a contender for an

> > Olympic gymnastics medal, and would say it hurt if you stamped on its foot.

>

> > With regards to the question you are having a problem comprehending,

> > I'll reword it for you. By conscious experiences I am referring to the

> > sensations we experience such as pleasure and pain, visual and

> > auditory perception etc.

>

> > Are you suggesting:

>

> > A) The atheists were wrong to consider it to be consciously

> > experiencing.

> > B) They were correct to say that it was consciously experiencing, but that it

> > doesn't influence behaviour.

> > C) It might have been consciously experiencing (how could you tell?), as it

> > doesn't influence behaviour.

> > D) It was consciously experiencing and that influenced its behaviour.

>

> > If you select D, as all the nodes are giving the same outputs as they

> > would have singly in the lab given the same inputs, could you also

> > select between D1, and D2:

>

> > D1) The outputs given by a single node in the lab were also influenced

> > by conscious experiences.

> > D2) The outputs given by a single node in the lab were not influenced

> > by conscious experiences, but the influence of the conscious

> > experiences within the robot, is such that the outputs are the same as

> > without the influence of the conscious experiences.

>

> > If you are still having problems comprehending, then simply write out

> > the sentences you could not understand, and I'll reword them for you.

> > If you want to add another option, then simply state it, obviously the

> > option should be related to whether the robot was consciously

> > experiencing, and if it was, whether the conscious experiences

> > influenced behaviour.

>

> As you requested, here is the question I didn't understand: "Regarding

> the robot, where is the influence/ affect of conscious experiences in

> the outputs, if all the nodes are just giving the same outputs as they

> would have singly in the lab given the same inputs."

 

I had rewritten the question for you. Why did you simply reference the

old one, and not answer the new reworded one? It seems like you are

desperately trying to avoid answering.
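For what it's worth, the node-logging scheme the example keeps returning to (every inter-node message carries source node, destination node, the time it was sent, and the message itself; the logs are then checked against what a single node in the lab would output for the same inputs) can be sketched in a few lines. This is a minimal illustration only: the node names, the numeric payload type, and the ReLU-like node rule are assumptions for the sketch, not anything stated in the thread.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    source: str        # sending node (may equal destination, for feedback loops)
    destination: str
    sent_at: float     # time the message was sent
    payload: float     # the message content (assumed numeric here)

def node_output(payload: float) -> float:
    # Stand-in for a node's fixed input -> output rule, assumed deterministic;
    # the ReLU-like form is purely illustrative.
    return max(0.0, payload)

def verify_log(log):
    # Check that every logged (message, output) pair matches what a single
    # node in the lab would produce for the same input, i.e. that no
    # unexplained outputs appear anywhere in the logs.
    return all(out == node_output(msg.payload) for msg, out in log)

log = [
    (Message("n1", "n2", 0.01, -0.5), 0.0),
    (Message("n2", "n2", 0.02, 0.7), 0.7),  # feedback loop: source == destination
]
print(verify_log(log))  # True
```

Under this sketch, the thread's question amounts to: if `verify_log` passes, every output is already accounted for by the per-node rule alone, so where in the logs would any further influence show up?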

Guest Jim07D7
Posted

someone2 <glenn.spigel2@btinternet.com> said:

>On 7 Jul, 07:20, Jim07D7 <Jim0...@nospam.net> wrote:

>> someone2 <glenn.spig...@btinternet.com> said:

<...>

>> >Where is the affect of conscious experiences in the outputs, if all

>> >the nodes are just giving the same outputs as they would have singly in

>> >the lab given the same inputs. Are you suggesting that the outputs

>> >given by a single node in the lab were consciously affected, or that

>> >the affect is such that the outputs are the same as without the

>> >affect?

>>

>> >I notice you avoided facing answering the question. Perhaps you would

>> >care to directly answer this.

>>

>> My resistance is causing you to start acknowledging that structure

>> plays a role in your deciding whether an entity is conscious; not just

>> behavior. Eventually you might see that you ascribe consciousness to

>> other humans on account of, in part, their structural similarity to

>> you. The question will remain, what justifies ascribing consciousness

>> to an entity. When we agree on that, I will be able to give you an

>> answer that satisfies you. Not until then.

>>

>

>I fail to see how my understanding affects your answer. You are just

>avoiding answering.

 

Obviously, "When we agree on [a well-defined rule that] justifies

ascribing consciousness to an entity" means, in part, that I will

have a rule. I don't now. If I did, I would apply it to the situation

you describe and tell you how it rules.

>The reason, it seems to me, is that you can't face

>answering, as you couldn't state that the conscious experiences would

>be influencing the robot's behaviour, and you know that it is

>implausible that conscious experiences don't influence our behaviour.

>First your excuse was that I hadn't taken your definition of influence

>into account, then when I did, you simply try to steer the

>conversation away. To me your behaviour seems pathetic. If you wish

>to use the excuse of being offended to avoid answering and ending the

>conversation, then that is your choice. You can thank me for giving

>you an excuse to avoid answering it, that you are so desperate for.

 

Wrong. I have been trying to steer the conversation toward having a

well-defined rule that justifies ascribing consciousness to an entity.

 

I thought you might want to help, if you are interested in the answer.

Clearly, you aren't.

 

So maybe we might as well end the conversation.

Guest Jim07D7
Posted

someone2 <glenn.spigel2@btinternet.com> said:

>On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

>> On Jul 7, 9:37?am, someone2 <glenn.spig...@btinternet.com> wrote:

>>

>> > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>>

>> > > On Jul 7, 3:06?am, someone2 <glenn.spig...@btinternet.com> wrote:

>>

>> > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>>

>> > > > > On Jul 7, 2:23?am, someone2 <glenn.spig...@btinternet.com> wrote:

>>

>> > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>>

>> > > > > > > On Jul 7, 12:39?am, someone2 <glenn.spig...@btinternet.com> wrote:

>>

>> > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>>

>> > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>>

>> > > > > > > > > >You said:

>>

>> > > > > > > > > >--------

>> > > > > > > > > >

>>

>> > > > > > > > > >I'd still like your response to this question.

>>

>> > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

>> > > > > > > > > >> >> >> which one must happen?

>>

>> > > > > > > > > >

>> > > > > > > > > >--------

>>

>> > > > > > > > > >Well this was my response:

>> > > > > > > > > >--------

>> > > > > > > > > >Regarding free will, and you question as to how do we know whether

>> > > > > > > > > >something is free of any rules which state which one must happen, I

>> > > > > > > > > >have already replied:

>> > > > > > > > > >-----------

>> > > > > > > > > >With regards to how we would know whether something was free from

>> > > > > > > > > >rules which state which one must happen, you would have no evidence

>> > > > > > > > > >other than your own experience of being able to will which one

>> > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

>> > > > > > > > > >which I doubt we will, then you would also have the evidence of there

>> > > > > > > > > >being no decernable rules. Any rule shown to the being, the being

>> > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

>> > > > > > > > > >-----------

>>

>> > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

>> > > > > > > > > >choice they were going to make, say pressing the black or white button

>> > > > > > > > > >1000 times, they could adhere to it or not, this could include any

>> > > > > > > > > >random distribution rule.

>>

>> > > > > > > > > >The reason for the change is that on a robot it could always have a

>> > > > > > > > > >part which gave the same response as it was shown, or to change the

>> > > > > > > > > >response so as to not be the same as which was shown to it. So I have

>> > > > > > > > > >removed it prediction from being shown.

>>

>> > > > > > > > > >I have defined free will for you:

>>

>> > > > > > > > > >Free will, is for example, that between 2 options, both were possible,

>> > > > > > > > > >and you were able to consciously influence which would happen. and

>> > > > > > > > > >added:

>>

>> > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

>> > > > > > > > > >It is free from any rules which state which one must happen. As I said

>> > > > > > > > > >both are possible, before it was consciously willed which would

>> > > > > > > > > >happen.

>> > > > > > > > > >--------

>>

>> > > > > > > > > >You were just showing that you weren't bothering to read what was

>> > > > > > > > > >written.

>>

>> > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

>> > > > > > > > > stitched it together for me. Thanks. It still does not give me

>> > > > > > > > > observable behaviors you would regard as regard as indicating

>> > > > > > > > > consciousness. Being "free" is hidden in the hardware.

>> > > > > > > > > <....>

>>

>> > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

>> > > > > > > > > >giving the same outputs as they would have singly in the lab given the

>> > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

>> > > > > > > > > >node in the lab were consciously influenced, or that the influence is

>> > > > > > > > > >such that the outputs are the same as without the influence?

>>

>> > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

>> > > > > > > > > Something is influential if it affects an outcome. If a single node in

>> > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

>> > > > > > > > > would you regard it as conscious? Why or why not?

>>

>> > > > > > > > > I think we are closing in on your answer.

>>

>> > > > > > > > That wasn't stitched together, that was a cut and paste of what I

>> > > > > > > > wrote in my last response, before it was broken up by your responses.

>> > > > > > > > If you hadn't snipped the history you would have seen that.

>>

>> > > > > > > > I'll just put back in the example we are talking about, so there is

>> > > > > > > > context to the question:

>>

>> > > > > > > > --------

>> > > > > > > > Supposing there was an artificial neural network, with a billion more

>> > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

>> > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

>> > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

>> > > > > > > > consciously experiencing. Now supposing each message between each node

>> > > > > > > > contained additional information such as source node, destination node

>> > > > > > > > (which could be the same as the source node, for feedback loops), the

>> > > > > > > > time the message was sent, and the message. Each node wrote out to a

>> > > > > > > > log on receipt of a message, and on sending a message out. Now after

>> > > > > > > > an hour of the atheist communicating with the robot, the logs could be

>> > > > > > > > examined by a bank of computers, verifying that for the input

>> > > > > > > > messages each received, the outputs were those expected by a single

>> > > > > > > > node in a lab given the same inputs. They could also verify that no

>> > > > > > > > unexplained messages appeared.

>>

>> > > > > > > > Where was the influence of any conscious experiences in the robot?

>> > > > > > > > --------

>>

>> > > > > > > > To which you replied:

>> > > > > > > > --------

>> > > > > > > > If it is conscious, the influence is in the outputs, just like ours

>> > > > > > > > are.

>> > > > > > > > --------

>>

>> > > > > > > > So rewording my last question to you (seen above in the history),

>> > > > > > > > taking into account what you regard as influence:

>>

>> > > > > > > > Where is the affect of conscious experiences in the outputs, if all

>> > > > > > > > the nodes are just giving the same outputs as they would have singly in

>> > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

>> > > > > > > > given by a single node in the lab were consciously affected, or that

>> > > > > > > > the affect is such that the outputs are the same as without the

>> > > > > > > > affect?

>>

>> > > > > > > OK, you've hypothetically monitored all the data going on in the robot

>> > > > > > > mechanism, and then you say that each bit of it is not conscious, so

>> > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

>> > > > > > > is not conscious, and analysis of the brain of a human would produce

>> > > > > > > similar data, and we know that we are conscious. We are reasonably

>> > > > > > > sure that a computer controlled robot would not be conscious, so we

>> > > > > > > can deduce that in the case of human brains 'the whole is more than

>> > > > > > > the sum of the parts' for some reason. So the actual data from a

>> > > > > > > brain, if you take into account the physical arrangement and timing

>> > > > > > > and the connections to the outside world through the senses, would in

>> > > > > > > fact correspond to consciousness.

>>

>> > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

>> > > > > > > all are either red or green or blue, and each pixel is just a dot, not

>> > > > > > > a picture. But from a human perspective of the whole, not only are

>> > > > > > > there more than just three colours, the pixels can form meaningful

>> > > > > > > shapes that are related to reality. Similarly, the arrangement of

>> > > > > > > bioelectric firings in the brain cause meaningful shapes in the brain

>> > > > > > > which we understand as related to reality, which we call

>> > > > > > > consciousness, even though each firing is not itself conscious.

>>

>> > > > > > > In your hypothetical experiment, the manufactured computer robot might

>> > > > > > > in fact have been conscious, but the individual inputs and outputs of

>> > > > > > > each node in its neural net would not show it. To understand why an

>> > > > > > > organism might or might not be conscious, you have to consider larger

>> > > > > > > aspects of the organism. The biological brain and senses have evolved

>> > > > > > > over billions of years to become aware of external reality, because

>> > > > > > > awareness of reality is an evolutionary survival trait. The robot

>> > > > > > > brain you are considering is an attempt to duplicate that awareness,

>> > > > > > > and to do that, you have to understand how evolution achieved

>> > > > > > > awareness in biological organisms. At this point in the argument, you

>> > > > > > > usually state your personal opinion that evolution does not explain

>> > > > > > > consciousness, because of the soul and God-given free will, your

>> > > > > > > behaviour goes into a loop and you start again from earlier.

>>

>> > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

>> > > > > > > scenario - it applies to conscious beings of any sort. In no way does

>> > > > > > > a human have total free will to do or be whatever they feel like

>> > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

>> > > > > > > limited freedom to choose our own future. At the microscopic level of

>> > > > > > > the neurons in a brain, the events are unpredictable (because the

>> > > > > > > complexity of the brain and the sense information it receives is

>> > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

>> > > > > > > controlled by random events, that is true, and we have no free will in

>> > > > > > > that sense. However, we have sufficient understanding of free will to

>> > > > > > > be able to form a moral code based on the notion. You can blame or

>> > > > > > > congratulate your ancestors, or the society you live in, for your own

>> > > > > > > behaviour if you want to. But if you choose to believe you have free

>> > > > > > > will, you can lead your life as though you were responsible for your

>> > > > > > > own actions, and endeavour to have your own effect on the future. If

>> > > > > > > you want to, you can abnegate responsibility, and just 'go with the

>> > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

>> > > > > > > but if you do that to excess you would obviously not be in control of

>> > > > > > > your own life, and you would eventually not be allowed to vote in

>> > > > > > > elections, or have a bank account (according to some legal definitions

>> > > > > > > of the requirements for these adult privileges). But you would

>> > > > > > > clearly have more free will if you chose to believe you had free will

>> > > > > > > than if you chose to believe you did not. You could even toss a coin

>> > > > > > > to decide whether or not to behave in future as though you had free

>> > > > > > > will (rather than just going with the flow all the time and never

>> > > > > > > making a decision), and your destiny would be controlled by a single

>> > > > > > > random coin toss, that you yourself had initiated.
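The pixel analogy in the message quoted above (a whole image having properties no single pixel possesses) can be sketched in a few lines of Python. The image size and the particular "shape" chosen here are invented purely for illustration:

```python
# Sketch of the pixel analogy: every pixel is only ever one of three
# primaries, yet the array as a whole can encode a meaningful shape.
# The image size and the "shape" tested for are arbitrary choices.

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

def make_image(n=100):
    """A blue background with a red diagonal line drawn across it."""
    return [[RED if x == y else BLUE for x in range(n)] for y in range(n)]

def describe(image):
    """A whole-image property that no single pixel possesses:
    does a complete red diagonal run through the picture?"""
    return all(image[i][i] == RED for i in range(len(image)))
```

Asking `describe` of an individual pixel makes no sense; the "diagonal" only exists at the level of the arrangement, which is the point of the analogy.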

>>

>> > > > > > You haven't addressed the simple issue of where is the influence/

>> > > > > > affect of conscious experiences in the outputs, if all the nodes are

>> > > > > > just giving the same outputs as they would have singly in

>> > > > > > the lab given the same inputs. Are you suggesting that the outputs

>> > > > > > given by a single node in the lab were consciously influenced/

>> > > > > > affected, or that the affect/influence is such that the outputs are

>> > > > > > the same as without the affect (i.e. there is no affect/influence)?

>>

>> > > > > > It is a simple question, why do you avoid answering it?

>>

>> > > > > It is a question which you hadn't asked me, so you can hardly claim

>> > > > > that I was avoiding it! The simple answer is no, I am not suggesting

>> > > > > any such thing. Apart from your confusion of the terms effect and

>> > > > > affect, each sentence you write (the output from you as a node) is

>> > > > > meaningless and does not exhibit consciousness. However, the

>> > > > > paragraph as a whole exhibits consciousness, because it has been

>> > > > > posted and arrived at my computer. Try to be more specific in your

>> > > > > writing, and give some examples, and that will help you to communicate

>> > > > > what you mean.

>>

>> > > > > Some of the outputs of the conscious organism (eg speech) are

>> > > > > sometimes directly affected by conscious experiences, other outputs

>> > > > > (eg shit) are only indirectly affected by conscious experiences, such

>> > > > > as previous hunger and eating. If a conscious biological brain (in a

>> > > > > laboratory experiment) was delivered the same inputs to identical

>> > > > > neurons as in the earlier recorded hour of data from another brain,

>> > > > > the experiences would be the same (assuming the problem of

>> > > > > constructing an identical arrangement of neurons in the receiving

>> > > > > brain to receive the input data had been solved). Do you disagree?

>>

>> > > > It was a post you had replied to, and you hadn't addressed the

>> > > > question. You still haven't addressed it. You are just distracting

>> > > > away, by asking me a question. It does beg the question of why you

>> > > > don't seem to be able to face answering.

>>

>> > > > I'll repeat the question for you. Regarding the robot, where is the

>> > > > influence/ affect of conscious experiences in the outputs, if all the

>> > > > nodes are just giving the same outputs as they would have singly in the

>> > > > lab given the same inputs. Are you suggesting that the outputs given

>> > > > by a single node in the lab were consciously influenced/ affected, or

>> > > > that the affect/influence is such that the outputs are the same as

>> > > > without the affect (i.e. there is no affect/influence)?

>>

>> > > You are in too much of a hurry, and you didn't read my reply to your

>> > > first response. As you have repeated your very poorly-worded question

>> > > - which incidentally you should try and tidy up to give it a proper

>> > > meaning - I'll repeat my reply:

>>

>> > > > > Your question was not clear enough for me to give you an opinion. You

>> > > > > are focusing attention on analysis of the stored I/O data transactions

>> > > > > within the neural net. Sense data (eg from a TV camera) is fed into

>> > > > > the input nodes, the output from these go to the inputs of other

>> > > > > nodes, and so on until eventually the network output nodes trigger a

>> > > > > response - the programmed device perhaps indicates that it has 'seen'

>> > > > > something, by sounding an alarm. It is clear that no particular part

>> > > > > of the device is conscious, and there is no reason to suppose that the

>> > > > > overall system is conscious either. I gather (from your rather

>> > > > > elastically-worded phrasing in the preceding 1000 messages in this

>> > > > > thread) that your argument goes on that the same applies to the human

>> > > > > brain - but I disagree with you there as you know, because I reckon

>> > > > > that the biophysical laws of evolution explain why the overall system

>> > > > > is conscious.

>>

>> > > > > When I wrote that the robot device 'might in fact have been

>> > > > > conscious', that is in the context of your belief that a conscious

>> > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

>> > > > > way that science is unaware of, and you may be correct, I don't know.

>> > > > > But if that were possible, the same might also be occurring in the

>> > > > > computer experiment you described. Perhaps the residual currents in

>> > > > > the wires of the circuit board contain a homeopathic 'memory' of the

>> > > > > cpu and memory workings of the running program, which by some divine

>> > > > > influence is conscious. I think you have been asking other responders

>> > > > > how such a 'soul' could influence the overall behaviour of the

>> > > > > computer.

>>

>> > > > > Remembering that computers have been specifically designed so that the

>> > > > > external environment does not affect them, and the most sensitive

>> > > > > components (the program and cpu) are heavily shielded from external

>> > > > > influences so that it is almost impossible for anything other than a

>> > > > > nearby nuclear explosion to affect their behaviour, it is probably not

>> > > > > easy for a computer 'soul' to have that effect. But it might take

>> > > > > only one bit of logic in the cpu to go wrong just once for the whole

>> > > > > program to behave totally differently, and crash or print a wrong

>> > > > > number. The reason that computer memory is so heavily shielded is

>> > > > > that the program is extremely sensitive to influences such as magnetic

>> > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

>> > > > > rays and the like. If your data analysis showed that a couple of

>> > > > > bits in the neural net had in fact 'transcended' the program, and

>> > > > > seemingly changed of their own accord, there would be no reason to

>> > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

>> > > > > had passed through a critical logic gate in the bus controller or

>> > > > > memory chip, affecting the timing or data in a way that led to an

>> > > > > eventual 'glitch'.

>>

>> > > If that isn't an adequate response, try writing out your question

>> > > again in a more comprehensible form. You don't distinguish between

>> > > the overall behaviour of the robot (do the outputs make it walk and

>> > > talk?) and the behaviour of the computer processing (the I/O on the

>> > > chip and program circuitry), for example.

>>

>> > > I'm trying to help you express whatever it is you are trying to

>> > > explain to the groups in your header (heathen alt.atheism, God-fearing

>> > > alt.religion) in which you are taking the God-given free-will

>> > > position. It looks like you are trying to construct an unassailable

>> > > argument relating subjective experiences and behaviour, computers and

>> > > brains, and free will, to notions of the existence of God and the

>> > > soul. It might be possible to do that if you express yourself more

>> > > clearly, and some of your logic was definitely wrong earlier, so you

>> > > would have to correct that as well. I expect that if whatever you

>> > > come up with doesn't include an explanation for why evolutionary

>> > > principles don't produce consciousness (as you claim) you won't really

>> > > be doing anything other than time-wasting pseudo-science, because

>> > > scientifically-minded people will say you haven't answered our

>> > > question - why do you introduce the idea of a 'soul' (for which there

>> > > is no evidence) to explain consciousness in animals such as ourselves,

>> > > when evolution (for which there is evidence) explains it perfectly?

>>

>> > I'll put the example again:

>>

>> > --------

>> > Supposing there was an artificial neural network, with a billion more

>> > nodes than you had neurons in your brain, and a very 'complicated'

>> > configuration, which drove a robot. The robot, due to its behaviour

>> > (being a Turing Equivalent), caused some atheists to claim it was

>> > consciously experiencing. Now supposing each message between each node

>> > contained additional information such as source node, destination node

>> > (which could be the same as the source node, for feedback loops), the

>> > time the message was sent, and the message. Each node wrote out to a

>> > log on receipt of a message, and on sending a message out. Now after

>> > an hour of the atheist communicating with the robot, the logs could be

>> > examined by a bank of computers, verifying that for the input

>> > messages each received, the outputs were those expected by a single

>> > node in a lab given the same inputs. They could also verify that no

>> > unexplained messages appeared.

>> > ---------

>>

>> > You seem to have understood the example, but asked whether it walked

>> > and talked. Yes, let's say it does, and would be a contender for an

>> > Olympic gymnastics medal, and would say it hurt if you stamped on its foot.

>>

>> > With regards to the question you are having a problem comprehending,

>> > I'll reword it for you. By conscious experiences I am referring to the

>> > sensations we experience such as pleasure and pain, visual and

>> > auditory perception etc.

>>

>> > Are you suggesting:

>>

>> > A) The atheists were wrong to consider it to be consciously

>> > experiencing.

>> > B) They were correct to say that it was consciously experiencing, but that it

>> > doesn't influence behaviour.

>> > C) It might have been consciously experiencing, but how could you tell, if it

>> > doesn't influence behaviour.

>> > D) It was consciously experiencing and that influenced its behaviour.

>>

>> > If you select D, as all the nodes are giving the same outputs as they

>> > would have singly in the lab given the same inputs, could you also

>> > select between D1, and D2:

>>

>> > D1) The outputs given by a single node in the lab were also influenced

>> > by conscious experiences.

>> > D2) The outputs given by a single node in the lab were not influenced

>> > by conscious experiences, but the influence of the conscious

>> > experiences within the robot, is such that the outputs are the same as

>> > without the influence of the conscious experiences.

>>

>> > If you are still having problems comprehending, then simply write out

>> > the sentences you could not understand, and I'll reword them for you.

>> > If you want to add another option, then simply state it; obviously the

>> > option should be related to whether the robot was consciously

>> > experiencing, and if it was, whether the conscious experiences

>> > influenced behaviour.

>>

>> As you requested, here is the question I didn't understand: "Regarding

>> the robot, where is the influence/ affect of conscious experiences in

>> the outputs, if all the nodes are justgiving the same outputs as they

>> would have singly in the lab given the same inputs."

>

>I had rewritten the question for you. Why did you simply reference the

>old one, and not answer the new reworded one? It seems like you are

>desperately trying to avoid answering.

 

Heh, everybody but you is doing this, eh?

Guest Jim07D7
Posted

James Norris <JimNorris016@aol.com> said:

>why do you introduce the idea of a 'soul' (for which there

>is no evidence) to explain consciousness in animals such as ourselves,

>when evolution (for which there is evidence) explains it perfectly?

 

I've found a test for whether we are dealing with a 'bot:

 

http://handrooster.com/comic/imabot.jpg

Guest someone2
Posted

On 7 Jul, 19:26, Jim07D7 <Jim0...@nospam.net> wrote:

> someone2 <glenn.spig...@btinternet.com> said:

>

>

>

>

>

>

>

> >On 7 Jul, 07:20, Jim07D7 <Jim0...@nospam.net> wrote:

> >> someone2 <glenn.spig...@btinternet.com> said:

> <...>

> >> >Where is the affect of conscious experiences in the outputs, if all

> >> >the nodes are just giving the same outputs as they would have singly in

> >> >the lab given the same inputs. Are you suggesting that the outputs

> >> >given by a single node in the lab were consciously affected, or that

> >> >the affect is such that the outputs are the same as without the

> >> >affect?

>

> >> >I notice you avoided facing answering the question. Perhaps you would

> >> >care to directly answer this.

>

> >> My resistance is causing you to start acknowledging that structure

> >> plays a role in your deciding whether an entity is conscious; not just

> >> behavior. Eventually you might see that you ascribe consciousness to

> >> other humans on account of, in part, their structural similarity to

> >> you. The question will remain, what justifies ascribing consciousness

> >> to an entity. WHen we agree on that, I will be able to give you an

> >> answer that satisfies you. Not until then.

>

> >I fail to see how my understanding affects your answer. You are just

> >avoiding answering.

>

> Obviously, "When we agree on [a well-defined rule that] justifies

> ascribing consciousness to an entity" means, in part, that I will

> have a rule. I don't now. If I did, I would apply it to the situation

> you describe and tell you how it rules.

>

> >The reason it seems to me, is that you can't face

> >answering, as you couldn't state that the conscious experiences would

> >be influencing the robot's behaviour, and you know that it is

> >implausible that conscious experiences don't influence our behaviour.

> >First your excuse was that I hadn't taken your definition of influence

> >into account, then when I did, you simply try to steer the

> >conversation away. To me your behaviour seems pathetic. If you wish

> >to use the excuse of being offended to avoid answering and ending the

> >conversation, then that is your choice. You can thank me for giving

> >you an excuse to avoid answering it, that you are so desperate for.

>

> Wrong. I have been trying to steer the conversation toward having a

> well-defined rule that justifies ascribing consciousness to an entity.

>

> I thought you might want to help, if you are interested in the answer.

> Clearly, you aren't.

>

> So maybe we might as well end the conversation.

>

 

What does a well-defined rule about ascribing consciousness

have to do with it? If you were to claim the robot was having

conscious experiences, then simply answer the question; likewise if you

were to claim that the robot could not be having conscious experiences. If you

can't even face answering a simple question, then you are a coward

and are wasting my time; there is no point in explaining things to

you when, if it comes to the point, you haven't even got the courage to

answer the question and be true to yourself.

 

Again with regards to the example and question I gave you:

--------

Supposing there was an artificial neural network, with a billion more

nodes than you had neurons in your brain, and a very 'complicated'

configuration, which drove a robot. The robot, due to its behaviour

(being a Turing Equivalent), caused some atheists to claim it was

consciously experiencing. Now supposing each message between each node

contained additional information such as source node, destination node

(which could be the same as the source node, for feedback loops), the

time the message was sent, and the message. Each node wrote out to a

log on receipt of a message, and on sending a message out. Now after

an hour of the atheist communicating with the robot, the logs could be

examined by a bank of computers, verifying that for the input

messages each received, the outputs were those expected by a single

node in a lab given the same inputs. They could also verify that no

unexplained messages appeared.

 

Where was the influence of any conscious experiences in the robot?

--------
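A minimal sketch of the logging-and-verification setup in this example, in Python. The node rule, the wiring, and the message format below are all invented for illustration; the only point being made is that when each node applies a fixed deterministic rule, every logged message can be checked after the fact against what a single node in the lab would give:

```python
# Sketch of the thought experiment: nodes pass messages, every receipt
# and send is logged with (kind, source, destination, time, payload),
# and a separate pass plays the role of the "bank of computers".

def node_function(node_id, message):
    # Stand-in for what a single node would do "in the lab":
    # a fixed, deterministic function of its identity and input.
    return (message * 31 + node_id) % 256

def run_network(links, start_node, start_msg):
    """Deliver messages through the network, logging every receipt
    and send as (kind, source, destination, time, payload) tuples."""
    log, queue, t = [], [("input", start_node, start_msg)], 0
    while queue:
        src, node, msg = queue.pop(0)
        log.append(("recv", src, node, t, msg)); t += 1
        out = node_function(node, msg)
        for nxt in links.get(node, []):
            log.append(("send", node, nxt, t, out)); t += 1
            queue.append((node, nxt, out))
    return log

def verify(log):
    """The 'bank of computers': check that every logged send is exactly
    what node_function gives for that node's last logged input, i.e.
    that no unexplained messages appeared."""
    last_in = {}
    for kind, src, dst, t, msg in log:
        if kind == "recv":
            last_in[dst] = msg
        elif node_function(src, last_in[src]) != msg:
            return False
    return True
```

On any such run, altering a single logged payload makes the check fail, which is the sense in which the logs are claimed to leave no room for unexplained messages.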

 

You replied:

--------

If it is conscious, the influence is in the outputs, just like ours

are.

--------

 

I am asking you where the influence (which you defined as affect) that

you claimed is in the outputs actually is. As I said:

 

Where is the influence in the outputs, if all the nodes are just

giving the same outputs as they would have singly in the lab given the

same inputs. Are you suggesting that the outputs given by a single

node in the lab were consciously influenced/affected, or that the

influence/affect is such that the outputs are the same as without the

influence?

 

Have you the courage to actually answer, or can't you face it?

Guest someone2
Posted

On 7 Jul, 19:27, Jim07D7 <Jim0...@nospam.net> wrote:

> someone2 <glenn.spig...@btinternet.com> said:

>

> >On 7 Jul, 11:41, James Norris <JimNorri...@aol.com> wrote:

> >> On Jul 7, 9:37 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> >> > On 7 Jul, 05:15, James Norris <JimNorris...@aol.com> wrote:

>

> >> > > > On Jul 7, 3:06 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> >> > > > On 7 Jul, 02:56, James Norris <JimNorris...@aol.com> wrote:

>

> >> > > > > On Jul 7, 2:23 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> >> > > > > > On 7 Jul, 02:08, James Norris <JimNorris...@aol.com> wrote:

>

> >> > > > > > > On Jul 7, 12:39 am, someone2 <glenn.spig...@btinternet.com> wrote:

>

> >> > > > > > > > On 7 Jul, 00:11, Jim07D7 <Jim0...@nospam.net> wrote:

>

> >> > > > > > > > > someone2 <glenn.spig...@btinternet.com> said:

>

> >> > > > > > > > > >You said:

>

> >> > > > > > > > > >--------

> >> > > > > > > > > >

>

> >> > > > > > > > > >I'd still like your response to this question.

>

> >> > > > > > > > > >> >> >> How do we know whether something is free of any rules which state

> >> > > > > > > > > >> >> >> which one must happen?

>

> >> > > > > > > > > >

> >> > > > > > > > > >--------

>

> >> > > > > > > > > >Well this was my response:

> >> > > > > > > > > >--------

> >> > > > > > > > > >Regarding free will, and you question as to how do we know whether

> >> > > > > > > > > >something is free of any rules which state which one must happen, I

> >> > > > > > > > > >have already replied:

> >> > > > > > > > > >-----------

> >> > > > > > > > > >With regards to how we would know whether something was free from

> >> > > > > > > > > >rules which state which one must happen, you would have no evidence

> >> > > > > > > > > >other than your own experience of being able to will which one

> >> > > > > > > > > >happens. Though if we ever detected the will influencing behaviour,

> >> > > > > > > > > >which I doubt we will, then you would also have the evidence of there

> >> > > > > > > > > >being no decernable rules. Any rule shown to the being, the being

> >> > > > > > > > > >could either choose to adhere to, or break, in the choice they made.

> >> > > > > > > > > >-----------

>

> >> > > > > > > > > >Though I'd like to change this to: that for any rule you had for which

> >> > > > > > > > > >choice they were going to make, say pressing the black or white button

> >> > > > > > > > > >1000 times, they could adhere to it or not, this could include any

> >> > > > > > > > > >random distribution rule.

>

> >> > > > > > > > > >The reason for the change is that on a robot it could always have a

> >> > > > > > > > > >part which gave the same response as it was shown, or to change the

> >> > > > > > > > > >response so as to not be the same as which was shown to it. So I have

> >> > > > > > > > > >removed it prediction from being shown.

>

> >> > > > > > > > > >I have defined free will for you:

>

> >> > > > > > > > > >Free will, is for example, that between 2 options, both were possible,

> >> > > > > > > > > >and you were able to consciously influence which would happen. and

> >> > > > > > > > > >added:

>

> >> > > > > > > > > >Regarding the 'free will', it is free to choose either option A or B.

> >> > > > > > > > > >It is free from any rules which state which one must happen. As I said

> >> > > > > > > > > >both are possible, before it was consciously willed which would

> >> > > > > > > > > >happen.

> >> > > > > > > > > >--------

>

> >> > > > > > > > > >You were just showing that you weren't bothering to read what was

> >> > > > > > > > > >written.

>

> >> > > > > > > > > I read every word. I did not think of that as an answer. Now, you have

> >> > > > > > > > > stitched it together for me. Thanks. It still does not give me

> >> > > > > > > > > observable behaviors you would regard as indicating

> >> > > > > > > > > consciousness. Being "free" is hidden in the hardware.

> >> > > > > > > > > <....>

>

> >> > > > > > > > > >Where is the influence in the outputs, if all the nodes are just

> >> > > > > > > > > >giving the same outputs as they would have singly in the lab given the

> >> > > > > > > > > >same inputs. Are you suggesting that the outputs given by a single

> >> > > > > > > > > >node in the lab were consciously influenced, or that the influence is

> >> > > > > > > > > >such that the outputs are the same as without the influence?

>

> >> > > > > > > > > You have forgotten my definition of influence. If I may simplify it.

> >> > > > > > > > > Something is influential if it affects an outcome. If a single node in

> >> > > > > > > > > a lab gave all the same responses as you do, over weeks of testing,

> >> > > > > > > > > would you regard it as conscious? Why or why not?

>

> >> > > > > > > > > I think we are closing in on your answer.

>

> >> > > > > > > > That wasn't stitched together, that was a cut and paste of what I

> >> > > > > > > > wrote in my last response, before it was broken up by your responses.

> >> > > > > > > > If you hadn't snipped the history you would have seen that.

>

> >> > > > > > > > I'll just put back in the example we are talking about, so there is

> >> > > > > > > > context to the question:

>

> >> > > > > > > > --------

> >> > > > > > > > Supposing there was an artificial neural network, with a billion more

> >> > > > > > > > nodes than you had neurons in your brain, and a very 'complicated'

> >> > > > > > > > configuration, which drove a robot. The robot, due to its behaviour

> >> > > > > > > > (being a Turing Equivalent), caused some atheists to claim it was

> >> > > > > > > > consciously experiencing. Now supposing each message between each node

> >> > > > > > > > contained additional information such as source node, destination node

> >> > > > > > > > (which could be the same as the source node, for feedback loops), the

> >> > > > > > > > time the message was sent, and the message. Each node wrote out to a

> >> > > > > > > > log on receipt of a message, and on sending a message out. Now after

> >> > > > > > > > an hour of the atheist communicating with the robot, the logs could be

> >> > > > > > > > examined by a bank of computers, verifying that for the input

> >> > > > > > > > messages each received, the outputs were those expected by a single

> >> > > > > > > > node in a lab given the same inputs. They could also verify that no

> >> > > > > > > > unexplained messages appeared.

>

> >> > > > > > > > Where was the influence of any conscious experiences in the robot?

> >> > > > > > > > --------

>

> >> > > > > > > > To which you replied:

> >> > > > > > > > --------

> >> > > > > > > > If it is conscious, the influence is in the outputs, just like ours

> >> > > > > > > > are.

> >> > > > > > > > --------

>

> >> > > > > > > > So rewording my last question to you (seen above in the history),

> >> > > > > > > > taking into account what you regard as influence:

>

> >> > > > > > > > Where is the affect of conscious experiences in the outputs, if all

> >> > > > > > > the nodes are just giving the same outputs as they would have singly in

> >> > > > > > > > the lab given the same inputs. Are you suggesting that the outputs

> >> > > > > > > > given by a single node in the lab were consciously affected, or that

> >> > > > > > > > the affect is such that the outputs are the same as without the

> >> > > > > > > > affect?

>

> >> > > > > > > OK, you've hypothetically monitored all the data going on in the robot

> >> > > > > > > mechanism, and then you say that each bit of it is not conscious, so

> >> > > > > > > the overall mechanism can't be conscious? But each neuron in a brain

> >> > > > > > > is not conscious, and analysis of the brain of a human would produce

> >> > > > > > > similar data, and we know that we are conscious. We are reasonably

> >> > > > > > > sure that a computer controlled robot would not be conscious, so we

> >> > > > > > > can deduce that in the case of human brains 'the whole is more than

> >> > > > > > > the sum of the parts' for some reason. So the actual data from a

> >> > > > > > > brain, if you take into account the physical arrangement and timing

> >> > > > > > > and the connections to the outside world through the senses, would in

> >> > > > > > > fact correspond to consciousness.

>

> >> > > > > > > Consider the million pixels in a 1000x1000 pixel digitized picture -

> >> > > > > > > all are either red or green or blue, and each pixel is just a dot, not

> >> > > > > > > a picture. But from a human perspective of the whole, not only are

> >> > > > > > > there more than just three colours, the pixels can form meaningful

> >> > > > > > > shapes that are related to reality. Similarly, the arrangement of

> >> > > > > > > bioelectric firings in the brain cause meaningful shapes in the brain

> >> > > > > > > which we understand as related to reality, which we call

> >> > > > > > > consciousness, even though each firing is not itself conscious.

>

> >> > > > > > > In your hypothetical experiment, the manufactured computer robot might

> >> > > > > > > in fact have been conscious, but the individual inputs and outputs of

> >> > > > > > > each node in its neural net would not show it. To understand why an

> >> > > > > > > organism might or might not be conscious, you have to consider larger

> >> > > > > > > aspects of the organism. The biological brain and senses have evolved

> >> > > > > > > over billions of years to become aware of external reality, because

> >> > > > > > > awareness of reality is an evolutionary survival trait. The robot

> >> > > > > > > brain you are considering is an attempt to duplicate that awareness,

> >> > > > > > > and to do that, you have to understand how evolution achieved

> >> > > > > > > awareness in biological organisms. At this point in the argument, you

> >> > > > > > > usually state your personal opinion that evolution does not explain

> >> > > > > > > consciousness, because of the soul and God-given free will, your

> >> > > > > > > behaviour goes into a loop and you start again from earlier.

>

> >> > > > > > > The 'free-will' of the organism is irrelevant to the robot/brain

> >> > > > > > > scenario - it applies to conscious beings of any sort. In no way does

> >> > > > > > > a human have total free will to do or be whatever they feel like

> >> > > > > > > (flying, telepathy, infinite wealth etc), so at best we only have

> >> > > > > > > limited freedom to choose our own future. At the microscopic level of

> >> > > > > > > the neurons in a brain, the events are unpredictable (because the

> >> > > > > > > complexity of the brain and the sense information it receives is

> >> > > > > > > sufficient for Chaos Theory to apply), and therefore random. We are

> >> > > > > > > controlled by random events, that is true, and we have no free will in

> >> > > > > > > that sense. However, we have sufficient understanding of free will to

> >> > > > > > > be able to form a moral code based on the notion. You can blame or

> >> > > > > > > congratulate your ancestors, or the society you live in, for your own

> >> > > > > > > behaviour if you want to. But if you choose to believe you have free

> >> > > > > > > will, you can lead your life as though you were responsible for your

> >> > > > > > > > own actions, and endeavour to have your own effect on the future. If

> >> > > > > > > you want to, you can abnegate responsibility, and just 'go with the

> >> > > > > > > flow', by tossing coins and consulting the I Ching, as some people do,

> >> > > > > > > but if you do that to excess you would obviously not be in control of

> >> > > > > > > your own life, and you would eventually not be allowed to vote in

> >> > > > > > > elections, or have a bank account (according to some legal definitions

> >> > > > > > > of the requirements for these adult privileges). But you would

> >> > > > > > > clearly have more free will if you chose to believe you had free will

> >> > > > > > > than if you chose to believe you did not. You could even toss a coin

> >> > > > > > > to decide whether or not to behave in future as though you had free

> >> > > > > > > will (rather than just going with the flow all the time and never

> >> > > > > > > making a decision), and your destiny would be controlled by a single

> >> > > > > > > random coin toss, that you yourself had initiated.

>

> >> > > > > > You haven't addressed the simple issue of where is the influence/

> >> > > > > > affect of conscious experiences in the outputs, if all the nodes are

> >> > > > > > just giving the same outputs as they would have singly in

> >> > > > > > the lab given the same inputs. Are you suggesting that the outputs

> >> > > > > > given by a single node in the lab were consciously influenced/

> >> > > > > > affected, or that the affect/influence is such that the outputs are

> >> > > > > > the same as without the affect (i.e. there is no affect/influence)?

>

> >> > > > > > It is a simple question, why do you avoid answering it?

>

> >> > > > > It is a question which you hadn't asked me, so you can hardly claim

> >> > > > > that I was avoiding it! The simple answer is no, I am not suggesting

> >> > > > > any such thing. Apart from your confusion of the terms effect and

> >> > > > > affect, each sentence you write (the output from you as a node) is

> >> > > > > meaningless and does not exhibit consciousness. However, the

> >> > > > > paragraph as a whole exhibits consciousness, because it has been

> >> > > > > posted and arrived at my computer. Try to be more specific in your

> >> > > > > writing, and give some examples, and that will help you to communicate

> >> > > > > what you mean.

>

> >> > > > > Some of the outputs of the conscious organism (eg speech) are

> >> > > > > sometimes directly affected by conscious experiences, other outputs

> >> > > > > (eg shit) are only indirectly affected by conscious experiences, such

> >> > > > > as previous hunger and eating. If a conscious biological brain (in a

> >> > > > > laboratory experiment) was delivered the same inputs to identical

> >> > > > > neurons as in the earlier recorded hour of data from another brain,

> >> > > > > the experiences would be the same (assuming the problem of

> >> > > > > constructing an identical arrangement of neurons in the receiving

> >> > > > > brain to receive the input data had been solved). Do you disagree?

>

> >> > > > It was a post you had replied to, and you hadn't addressed the

> >> > > > question. You still haven't addressed it. You are just distracting

> >> > > > away, by asking me a question. It does beg the question of why you

> >> > > > don't seem to be able to face answering.

>

> >> > > > I'll repeat the question for you. Regarding the robot, where is the

> >> > > > influence/ affect of conscious experiences in the outputs, if all the

> >> > > > nodes are just giving the same outputs as they would have singly in the

> >> > > > lab given the same inputs. Are you suggesting that the outputs given

> >> > > > by a single node in the lab were consciously influenced/ affected, or

> >> > > > that the affect/influence is such that the outputs are the same as

> >> > > > without the affect (i.e. there is no affect/influence)?

>

> >> > > You are in too much of a hurry, and you didn't read my reply to your

> >> > > first response. As you have repeated your very poorly-worded question

> >> > > - which incidentally you should try and tidy up to give it a proper

> >> > > meaning - I'll repeat my reply:

>

> >> > > > > Your question was not clear enough for me to give you an opinion. You

> >> > > > > are focusing attention on analysis of the stored I/O data transactions

> >> > > > > within the neural net. Sense data (eg from a TV camera) is fed into

> >> > > > > the input nodes, the output from these go to the inputs of other

> >> > > > > nodes, and so on until eventually the network output nodes trigger a

> >> > > > > response - the programmed device perhaps indicates that it has 'seen'

> >> > > > > something, by sounding an alarm. It is clear that no particular part

> >> > > > > of the device is conscious, and there is no reason to suppose that the

> >> > > > > overall system is conscious either. I gather (from your rather

> >> > > > > elastically-worded phrasing in the preceding 1000 messages in this

> >> > > > > thread) that your argument goes on that the same applies to the human

> >> > > > > brain - but I disagree with you there as you know, because I reckon

> >> > > > > that the biophysical laws of evolution explain why the overall system

> >> > > > > is conscious.

>

> >> > > > > When I wrote that the robot device 'might in fact have been

> >> > > > > conscious', that is in the context of your belief that a conscious

> >> > > > > 'soul' is superimposed on the biomechanical workings of the brain in a

> >> > > > > way that science is unaware of, and you may be correct, I don't know.

> >> > > > > But if that were possible, the same might also be occurring in the

> >> > > > > computer experiment you described. Perhaps the residual currents in

> >> > > > > the wires of the circuit board contain a homeopathic 'memory' of the

> >> > > > > cpu and memory workings of the running program, which by some divine

> >> > > > > influence is conscious. I think you have been asking other responders

> >> > > > > how such a 'soul' could influence the overall behaviour of the

> >> > > > > computer.

>

> >> > > > > Remembering that computers have been specifically designed so that the

> >> > > > > external environment does not affect them, and the most sensitive

> >> > > > > components (the program and cpu) are heavily shielded from external

> >> > > > > influences so that it is almost impossible for anything other than a

> >> > > > > nearby nuclear explosion to affect their behaviour, it is probably not

> >> > > > > easy for a computer 'soul' to have that effect. But it might take

> >> > > > > only one bit of logic in the cpu to go wrong just once for the whole

> >> > > > > program to behave totally differently, and crash or print a wrong

> >> > > > > number. The reason that computer memory is so heavily shielded is

> >> > > > > that the program is extremely sensitive to influences such as magnetic

> >> > > > > and electric fields, and particularly gamma particles, cosmic rays, X-

> >> > > > > rays and the like. If your data analysis showed that a couple of

> >> > > > > bits in the neural net had in fact 'transcended' the program, and

> >> > > > > seemingly changed of their own accord, there would be no reason to

> >> > > > > suggest that the computer had a 'soul'. More likely an alpha-particle

> >> > > > > had passed through a critical logic gate in the bus controller or

> >> > > > > memory chip, affecting the timing or data in a way that led to an

> >> > > > > eventual 'glitch'.

>

> >> > > If that isn't an adequate response, try writing out your question

> >> > > again in a more comprehensible form. You don't distinguish between

> >> > > the overall behaviour of the robot (do the outputs make it walk and

> >> > > talk?) and the behaviour of the computer processing (the I/O on the

> >> > > chip and program circuitry), for example.

>

> >> > > I'm trying to help you express whatever it is you are trying to

> >> > > explain to the groups in your header (heathen alt.atheism, God-fearing

> >> > > alt.religion) in which you are taking the God-given free-will

> >> > > position. It looks like you are trying to construct an unassailable

> >> > > argument relating subjective experiences and behaviour, computers and

> >> > > brains, and free will, to notions of the existence of God and the

> >> > > soul. It might be possible to do that if you express yourself more

> >> > > clearly, and some of your logic was definitely wrong earlier, so you

> >> > > would have to correct that as well. I expect that if whatever you

> >> > > come up with doesn't include an explanation for why evolutionary

> >> > > principles don't produce consciousness (as you claim) you won't really

> >> > > be doing anything other than time-wasting pseudo-science, because

> >> > > scientifically-minded people will say you haven't answered our

> >> > > question - why do you introduce the idea of a 'soul' (for which there

> >> > > is no evidence) to explain consciousness in animals such as ourselves,

> >> > > when evolution (for which there is evidence) explains it perfectly?

>

> >> > I'll put the example again:

>

> >> > --------

> >> > Supposing there was an artificial neural network, with a billion more

> >> > nodes than you had neurons in your brain, and a very 'complicated'

> >> > configuration, which drove a robot. The robot, due to its behaviour

> >> > (being a Turing Equivalent), caused some atheists to claim it was

> >> > consciously experiencing. Now supposing each message between each node

> >> > contained additional information such as source node, destination node

> >> > (which could be the same as the source node, for feedback loops), the

> >> > time the message was sent, and the message. Each node wrote out to a

> >> > log on receipt of a message, and on sending a message out. Now after

> >> > an hour of the atheist communicating with the robot, the logs could be

> >> > examined by a bank of computers, verifying that for the input

> >> > messages each received, the outputs were those expected by a single

> >> > node in a lab given the same inputs. They could also verify that no

> >> > unexplained messages appeared.

> >> > ---------
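[The logging-and-verification scheme in the example above can be made concrete. What follows is a minimal illustrative sketch, not anything from the thread: every inter-node message carries source, destination, and send time; each node logs on receipt and on send; a checker then replays each node's logged inputs through an isolated copy of that node ("a single node in the lab") and confirms the logged outputs match. All class and function names here are invented for illustration.]

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    source: int      # sending node id (may equal dest, for feedback loops)
    dest: int        # receiving node id
    time: int        # tick at which the message was sent
    payload: float   # the message content itself

class Node:
    """A trivial stand-in node: its output is a fixed function of its input."""
    def __init__(self, node_id, weight):
        self.node_id = node_id
        self.weight = weight
        self.log = []            # (event, Message) pairs, as in the example

    def receive(self, msg):
        self.log.append(("recv", msg))          # log on receipt of a message
        out = Message(self.node_id, msg.source, msg.time + 1,
                      msg.payload * self.weight)
        self.log.append(("send", out))          # log on sending a message out
        return out

def verify(node_log, isolated_node):
    """Replay each logged input through an isolated copy of the node and
    check each logged output is what the single node gives in the lab."""
    recvs = [m for ev, m in node_log if ev == "recv"]
    sends = [m for ev, m in node_log if ev == "send"]
    return all(isolated_node.receive(rx).payload == tx.payload
               for rx, tx in zip(recvs, sends))

# One node running "in the network", then checked against a lab copy.
net_node = Node(node_id=1, weight=2.0)
net_node.receive(Message(source=0, dest=1, time=0, payload=3.0))
lab_copy = Node(node_id=1, weight=2.0)   # "single node in the lab"
print(verify(net_node.log, lab_copy))    # every output matches: True
```

[On this sketch, the verification can only ever confirm that each node's outputs are the lab-predicted function of its inputs; which is exactly why the thread's question arises about where any further influence could show up.]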

>

> >> > You seem to have understood the example, but asked whether it walked

> >> > and talked, yes, let's say it does, and would be a contender for an

> >> > olympic gymnastics medal, and said it hurt if you stamped on its foot.

>

> >> > With regards to the question you are having a problem comprehending,

> >> > I'll reword it for you. By conscious experiences I am referring to the

> >> > sensations we experience such as pleasure and pain, visual and

> >> > auditory perception etc.

>

> >> > Are you suggesting:

>

> >> > A) The atheists were wrong to consider it to be consciously

> >> > experiencing.

> >> > B) Were correct to say that it was consciously experiencing, but that it

> >> > doesn't influence behaviour.

> >> > C) It might have been consciously experiencing, how could you tell, it

> >> > doesn't influence behaviour.

> >> > D) It was consciously experiencing and that influenced its behaviour.

>

> >> > If you select D, as all the nodes are giving the same outputs as they

> >> > would have singly in the lab given the same inputs, could you also

> >> > select between D1, and D2:

>

> >> > D1) The outputs given by a single node in the lab were also influenced

> >> > by conscious experiences.

> >> > D2) The outputs given by a single node in the lab were not influenced

> >> > by conscious experiences, but the influence of the conscious

> >> > experiences within the robot, is such that the outputs are the same as

> >> > without the influence of the conscious experiences.

>

> >> > If you are still having problems comprehending, then simply write out

> >> > the sentences you could not understand, and I'll reword them for you.

> >> > If you want to add another option, then simply state it, obviously the

> >> > option should be related to whether the robot was consciously

> >> > experiencing, and if it was, whether the conscious experiences

> >> > influenced behaviour.

>

> >> As you requested, here is the question I didn't understand: "Regarding

> >> the robot, where is the influence/ affect of conscious experiences in

> >> the outputs, if all the nodes are just giving the same outputs as they

> >> would have singly in the lab given the same inputs."

>

> >I had rewritten the question for you. Why did you simply reference the

> >old one, and not answer the new reworded one? It seems like you are

> >desperately trying to avoid answering.

>

> Heh, everybody but you is doing this, eh?

 

If you look, he did exactly what I said. I have no need to run like a

coward. These are simple questions, which it seems those on the

atheist channel haven't got the intellectual integrity to face.

Guest someone2
Posted

On 7 Jul, 22:39, someone2 <glenn.spig...@btinternet.com> wrote:

> On 7 Jul, 19:26, Jim07D7 <Jim0...@nospam.net> wrote:

>

>

>

>

>

> > someone2 <glenn.spig...@btinternet.com> said:

>

> > >On 7 Jul, 07:20, Jim07D7 <Jim0...@nospam.net> wrote:

> > >> someone2 <glenn.spig...@btinternet.com> said:

> > <...>

> > >> >Where is the affect of conscious experiences in the outputs, if all

> > >> >the nodes are justgiving the same outputs as they would have singly in

> > >> >the lab given the same inputs. Are you suggesting that the outputs

> > >> >given by a single node in the lab were consciously affected, or that

> > >> >the affect is such that the outputs are the same as without the

> > >> >affect?

>

> > >> >I notice you avoided facing answering the question. Perhaps you would

> > >> >care to directly answer this.

>

> > >> My resistance is causing you to start acknowledging that structure

> > >> plays a role in your deciding whether an entity is conscious; not just

> > >> behavior. Eventually you might see that you ascribe consciousness to

> > >> other humans on account of, in part, their structural similarity to

> > >> you. The question will remain, what justifies ascribing consciousness

> > >> to an entity. WHen we agree on that, I will be able to give you an

> > >> answer that satisfies you. Not until then.

>

> > >I fail to see how my understanding affects your answer. You are just

> > >avoiding answering.

>

> > Obviously, " When we agree on [a well-defined rule that] justifies

> > ascribing consciousness to an entity" means, in part, that I will

> > have a rule. I don't now. If I did, I would apply it to the situation

> > you describe and tell you how it rules.

>

> > >The reason it seems to me, is that you can't face

> > >answering, as you couldn't state that the conscious experiences would

> > >be influencing the robot's behaviour, and you know that it is

> > >implausible that conscious experiences don't influence our behaviour.

> > >First your excuse was that I hadn't taken your definition of influence

> > >into account, then when I did, you simply try to stear the

> > >converstation away. To me your behaviour seems pathetic. If you wish

> > >to use the excuse of being offended to avoid answering and ending the

> > >conversation, then that is your choice. You can thank me for giving

> > >you an excuse to avoid answering it, that you are so desperate for.

>

> > Wrong. I have been trying to steer the conversation toward having a

> > well-defined rule that justifies ascribing consciousness to an entity.

>

> > I thought you might want to help, if you are interested in the answer.

> > Clearly, you aren't.

>

> > So maybe we might as well end the conversation.

>

> What difference does a well defined rule about ascribing consciousness

> have to do with it. If you were to claim the robot was having

> conscious experiences, then simply answer the question, if you were to

> claim that the robot could not be having conscious experiences. If you

> can't even face answering a simple question, then you are a coward,

> and are wasting my time, there is no point in explaining things to

> you, when if it comes to the point you haven't even got the courage to

> answer the question, and be true to yourself.

>

> Again with regards to the example and question I gave you:

> --------

> Supposing there was an artificial neural network, with a billion more

> nodes than you had neurons in your brain, and a very 'complicated'

> configuration, which drove a robot. The robot, due to its behaviour

> (being a Turing Equivalent), caused some atheists to claim it was

> consciously experiencing. Now supposing each message between each node

> contained additional information such as source node, destination node

> (which could be the same as the source node, for feedback loops), the

> time the message was sent, and the message. Each node wrote out to a

> log on receipt of a message, and on sending a message out. Now after

> an hour of the atheist communicating with the robot, the logs could be

> examined by a bank of computers, verifying that for the input

> messages each received, the outputs were those expected by a single

> node in a lab given the same inputs. They could also verify that no

> unexplained messages appeared.

>

> Where was the influence of any conscious experiences in the robot?

> --------

>

> You replied:

> --------

> If it is conscious, the influence is in the outputs, just like ours

> are.

> --------

>

> I am asking you where the influence (which you defined as affect) that

> you claimed is in the inputs. As I said:

>

> Where is the influence in the outputs, if all the nodes are just

> giving the same outputs as they would have singly in the lab given the

> same inputs. Are you suggesting that the outputs given by a single

> node in the lab were consciously influenced/affected, or that the

> influence/affect is such that the outputs are the same as without the

> influence?

>

> Have you the courage to actually answer, or can't you face it?

>

 

It should have read:

 

I am asking you where the influence (which you defined as affect) that

you claimed is in the outputs. As I said:

 

Where is the influence in the outputs, if all the nodes are just

giving the same outputs as they would have singly in the lab given the

same inputs. Are you suggesting that the outputs given by a single

node in the lab were consciously influenced/affected, or that the

influence/affect is such that the outputs are the same as without the

influence?

 

Have you the courage to actually answer, or can't you face it?

Guest Jim07D7
Posted

someone2 <glenn.spigel2@btinternet.com> said:

>On 7 Jul, 22:39, someone2 <glenn.spig...@btinternet.com> wrote:

>> On 7 Jul, 19:26, Jim07D7 <Jim0...@nospam.net> wrote:

>>

>>

>>

>>

>>

>> > someone2 <glenn.spig...@btinternet.com> said:

>>

>> > >On 7 Jul, 07:20, Jim07D7 <Jim0...@nospam.net> wrote:

>> > >> someone2 <glenn.spig...@btinternet.com> said:

>> > <...>

>> > >> >Where is the affect of conscious experiences in the outputs, if all

>> > >> >the nodes are just giving the same outputs as they would have singly in

>> > >> >the lab given the same inputs. Are you suggesting that the outputs

>> > >> >given by a single node in the lab were consciously affected, or that

>> > >> >the affect is such that the outputs are the same as without the

>> > >> >affect?

>>

>> > >> >I notice you avoided facing answering the question. Perhaps you would

>> > >> >care to directly answer this.

>>

>> > >> My resistance is causing you to start acknowledging that structure

>> > >> plays a role in your deciding whether an entity is conscious; not just

>> > >> behavior. Eventually you might see that you ascribe consciousness to

>> > >> other humans on account of, in part, their structural similarity to

>> > >> you. The question will remain, what justifies ascribing consciousness

>> > >> to an entity. WHen we agree on that, I will be able to give you an

>> > >> answer that satisfies you. Not until then.

>>

>> > >I fail to see how my understanding affects your answer. You are just

>> > >avoiding answering.

>>

>> > Obviously, " When we agree on [a well-defined rule that] justifies

>> > ascribing consciousness to an entity" means, in part, that I will

>> > have a rule. I don't now. If I did, I would apply it to the situation

>> > you describe and tell you how it rules.

>>

>> > >The reason it seems to me, is that you can't face

>> > >answering, as you couldn't state that the conscious experiences would

>> > >be influencing the robot's behaviour, and you know that it is

>> > >implausible that conscious experiences don't influence our behaviour.

>> > >First your excuse was that I hadn't taken your definition of influence

>> > >into account, then when I did, you simply try to steer the

>> > >conversation away. To me your behaviour seems pathetic. If you wish

>> > >to use the excuse of being offended to avoid answering and ending the

>> > >conversation, then that is your choice. You can thank me for giving

>> > >you an excuse to avoid answering it, that you are so desperate for.

>>

>> > Wrong. I have been trying to steer the conversation toward having a

>> > well-defined rule that justifies ascribing consciousness to an entity.

>>

>> > I thought you might want to help, if you are interested in the answer.

>> > Clearly, you aren't.

>>

>> > So maybe we might as well end the conversation.

>>

>> What difference does a well defined rule about ascribing consciousness

>> have to do with it. If you were to claim the robot was having

>> conscious experiences, then simply answer the question, if you were to

>> claim that the robot could not be having conscious experiences. If you

>> can't even face answering a simple question, then you are a coward,

>> and are wasting my time, there is no point in explaining things to

>> you, when if it comes to the point you haven't even got the courage to

>> answer the question, and be true to yourself.

>>

>> Again with regards to the example and question I gave you:

>> --------

>> Supposing there was an artificial neural network, with a billion more

>> nodes than you had neurons in your brain, and a very 'complicated'

>> configuration, which drove a robot. The robot, due to its behaviour

>> (being a Turing Equivalent), caused some atheists to claim it was

>> consciously experiencing. Now supposing each message between each node

>> contained additional information such as source node, destination node

>> (which could be the same as the source node, for feedback loops), the

>> time the message was sent, and the message. Each node wrote out to a

>> log on receipt of a message, and on sending a message out. Now after

>> an hour of the atheist communicating with the robot, the logs could be

>> examined by a bank of computers, verifying that for the input

>> messages each received, the outputs were those expected by a single

>> node in a lab given the same inputs. They could also verify that no

>> unexplained messages appeared.

>>

>> Where was the influence of any conscious experiences in the robot?

>> --------

>>

>> You replied:

>> --------

>> If it is conscious, the influence is in the outputs, just like ours

>> are.

>> --------

>>

>> I am asking you where the influence (which you defined as affect) that

>> you claimed is in the inputs. As I said:

>>

>> Where is the influence in the outputs, if all the nodes are just

>> giving the same outputs as they would have singly in the lab given the

>> same inputs. Are you suggesting that the outputs given by a single

>> node in the lab were consciously influenced/affected, or that the

>> influence/affect is such that the outputs are the same as without the

>> influence?

>>

>> Have you the courage to actually answer, or can't you face it?

>>

>

>It should have read:

>

>I am asking you where the influence (which you defined as affect) that

>you claimed is in the outputs.

 

I have not used "affect" -- maybe you have mixed me up with someone

else.

 

Also, I have not claimed that the influence is in the outputs. That

would not be my position.

> As I said:

>

>Where is the influence in the outputs, if all the nodes are just

>giving the same outputs as they would have singly in the lab given the

>same inputs. Are you suggesting that the outputs given by a single

>node in the lab were consciously influenced/affected, or that the

>influence/affect is such that the outputs are the same as without the

>influence?

>

>Have you the courage to actually answer, or can't you face it?

>

Actually, I am done having my courage mistakenly questioned. Of course

you are predestined to interpret my goodbye as a lack of courage.

Goodbye.
