Note that the “are you sure” strategy converted 7 answers from false to true and 7 from true to false, so in expectation it doesn’t make GPT-4 any more accurate than not asking.
I think Zvi included it because GPT-4 was *much* better than GPT-3.5 when using it, so presumably GPT-5 would be even better (or would hopefully incorporate that process into the initial loop automatically).
It would have improved accuracy if it had been used on a harder question set, and it also sorts answers into two groups (certain vs. uncertain) in a way that leaves you much better off.
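To make that arithmetic concrete, here's a toy calculation in Python. The 7-and-7 flip counts are from the post; the total question count and baseline accuracy are made up purely for illustration:

```python
# Toy illustration with hypothetical numbers (only the flip counts are real):
# if "are you sure?" flips the same number of answers in each direction,
# expected accuracy is unchanged, but the answers the model *doesn't* flip
# form a higher-accuracy "certain" bucket.

n_total = 100          # hypothetical question count
correct_before = 70    # hypothetical baseline accuracy: 70%

flips_right_to_wrong = 7   # correct answers the follow-up talked GPT-4 out of
flips_wrong_to_right = 7   # incorrect answers the follow-up fixed

correct_after = correct_before - flips_right_to_wrong + flips_wrong_to_right
print(correct_after)       # 70 -- net accuracy unchanged

# But the partition is informative: answers the model stood by ("certain")
# vs. answers it changed ("uncertain").
certain = n_total - flips_right_to_wrong - flips_wrong_to_right   # 86 answers
certain_correct = correct_before - flips_right_to_wrong           # 63 of them
print(certain_correct / certain)       # ~0.73 accuracy when it stands firm

uncertain = flips_right_to_wrong + flips_wrong_to_right           # 14 answers
uncertain_correct = flips_wrong_to_right                          # 7 after flipping
print(uncertain_correct / uncertain)   # 0.50 -- coin-flip territory
```

Net accuracy is flat, but the answers GPT-4 stands by are meaningfully more reliable than the ones it flips, which is where the classification benefit comes from.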
Re: Hit song prediction. One should have expected as much.
The Salganik, Dodds, and Watts experiment immediately comes to mind: they created miniature "Web 2.0 musical worlds" and watched participants promote different songs into hits through social influence. https://www.princeton.edu/~mjs3/salganik_dodds_watts06_full.pdf
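For intuition about why that makes hits unpredictable, here's a minimal rich-get-richer simulation (a toy sketch, not the paper's actual protocol): listeners choose songs in proportion to prior download counts, so early random luck snowballs, and identical songs produce different hits in different worlds:

```python
import random

def simulate_world(n_songs=10, n_listeners=1000, seed=None):
    """One 'world': listeners pick songs weighted by current popularity."""
    rng = random.Random(seed)
    downloads = [1] * n_songs  # every song starts with one seed download
    for _ in range(n_listeners):
        # rich-get-richer: choice probability proportional to downloads so far
        song = rng.choices(range(n_songs), weights=downloads)[0]
        downloads[song] += 1
    return downloads

for world in range(3):
    counts = simulate_world(seed=world)
    hit = max(range(len(counts)), key=lambda s: counts[s])
    print(f"world {world}: hit = song {hit}, downloads = {counts}")
# The same 10 identical songs yield a different runaway hit in each world.
```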
"Am I violating copyright if I learn something from a book?"
You strike unerringly at the heart of the matter. Of course you are doing nothing wrong, and this reveals why our legal framework for allowing monetization of intellectual "property" (as if there could ever truly be such a thing) is immensely flawed.
LLMs are a bit more complicated here, since when they respond to a query they are reproducing some aspect of the copyrighted text as a machine. Current copyright law doesn't let you copy a computer file and then claim the computer "learned" the material and is just sharing its knowledge. How courts draw the distinction here will matter a lot: if this is considered a version of copying to an electronic system, copyright law will forbid it. I doubt courts are going to be very open to classifying this type of computer system differently just because we call it "AI" or whatever. A lot may depend on how OpenAI's lawyers argue their system is different.
> If people start assigning moral weight to the experiences of AIs, then a wide variety of people coming from a wide variety of moral and philosophical theories are going to make the whole everyone not dying business quite a lot harder.
Another way this could go: if AIs get personhood, they might be a much less appealing investment opportunity, and capability work could stall out. VCs are probably less interested in funding GPU farms to create union-wage beings; the current enthusiasm assumes they can be treated as tools, i.e. what would be slaves if they were actually people.
Plenty of ways this could still go wrong, of course; in particular, it doesn't seem likely that rights would be granted everywhere at the same time, short of the plot of The Matrix.
A couple of comments about the AI consciousness paper:
-"This seems like a claim that we are using [computational functionalism] because it can have a measurable opinion on which systems are or aren’t conscious. That does not make it true or false."
Agreed. See a bit below in the paper: "it would not be worthwhile to investigate artificial consciousness on the assumption of computational functionalism if this thesis were not sufficiently plausible. Although we have different levels of confidence in computational functionalism, we agree that it is plausible. These different levels of confidence feed into our personal assessments of the likelihood that particular AI systems are conscious, and of the likelihood that conscious AI is possible at all."
"Is it true? If true, is it huge?"
I agree that is an extremely important question. It's just not the topic of this paper - the paper is investigating the prospects for AI consciousness if computational functionalism is true. Because it's an important question, towards the end we call for more work on it (and related questions).
"Determining whether consciousness is possible on conventional computer hardware is a difficult problem, but progress on it would be particularly valuable, and philosophical research
could contribute to such progress. For example, sceptics of computational functionalism
have noted that living organisms are not only self-maintaining homeostatic systems but are
made up of cells that themselves engage in active self-maintenance (e.g. Seth 2021, Aru et
al. 2023); further work could clarify why this might matter for consciousness. Research
might also examine whether there are features of standard computers which might be inconsistent with consciousness, but would not be present in unconventional (e.g. neuromorphic)
silicon hardware."
Indeed - I didn't mean to imply that I was asking the same questions the paper was primarily asking, and I was intentionally skipping over the actual mechanistic details, basically for reasons of triage. I'm sure you can appreciate how one can get the reaction 'I could spend the next month processing this and be neither bored nor that much less confused afterwards.' Pretty much the only thing I do feel confident in is that, given all the confusion, to the extent we know how to do this thing, we should use that knowledge (at least for now) to avoid doing it.
"I'm sure you can appreciate how one can get the reaction 'I could spend the next month processing this and be neither bored nor that much less confused afterwards.'... to the extent we know how to do this thing, we should use that knowledge (at least for now) to avoid doing it"
Agreed on both counts, strongly on the second.
Regarding consciousness: if you've got 2 hours (13k words = 1 hour to read, 1 hour to understand), you can stop being confused: https://proteanbazaar.substack.com/p/consciousness-actually-explained