Discussion about this post

Robert Long

A couple of comments about the AI consciousness paper:

-"This seems like a claim that we are using [computational functionalism] because it can have a measurable opinion on which systems are or aren’t conscious. That does not make it true or false."

Agreed. See a bit below in the paper: "it would not be worthwhile to investigate artificial consciousness on the assumption of computational functionalism if this thesis were not sufficiently plausible. Although we have different levels of confidence in computational functionalism, we agree that it is plausible. These different levels of confidence feed into our personal assessments of the likelihood that particular AI systems are conscious, and of the likelihood that conscious AI is possible at all."

"Is it true? If true, is it huge?"

I agree that's an extremely important question. It's just not the topic of this paper - the paper investigates the prospects for AI consciousness if computational functionalism is true. Because it's an important question, towards the end we call for more work on it (and related questions).

"Determining whether consciousness is possible on conventional computer hardware is a difficult problem, but progress on it would be particularly valuable, and philosophical research

could contribute to such progress. For example, sceptics of computational functionalism

have noted that living organisms are not only self-maintaining homeostatic systems but are

made up of cells that themselves engage in active self-maintenance (e.g. Seth 2021, Aru et

al. 2023); further work could clarify why this might matter for consciousness. Research

might also examine whether there are features of standard computers which might be inconsistent with consciousness, but would not be present in unconventional (e.g. neuromorphic)

silicon hardware."

Jacob

Note that the "are you sure" strategy converted 7 answers from false to true and 7 from true to false, so in expectation it doesn't make GPT-4 any more accurate than not asking.
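
The point here is essentially arithmetic: if the "are you sure" pass is read as producing an equal number of corrections and regressions (one plausible reading of the 7-and-7 counts), the expected change in accuracy is zero. A minimal sketch of that bookkeeping; the function name and the total of 100 questions are placeholders, not taken from any actual evaluation:

```python
# Minimal sketch of the expectation argument, not code from any actual evaluation.
# The 7-and-7 flip counts are the ones cited in the comment; the total of 100
# questions is a made-up placeholder.

def net_accuracy_change(flips_to_correct: int, flips_to_incorrect: int, total: int) -> float:
    """Net change in accuracy after re-asking "are you sure?" on every question."""
    return (flips_to_correct - flips_to_incorrect) / total

print(net_accuracy_change(flips_to_correct=7, flips_to_incorrect=7, total=100))  # 0.0
```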

