A while back I tried to get o1 pro to write science fiction short stories. You can see one here if you're curious: https://aiscifi.substack.com/p/bartered-reflections
My reaction to this exercise is: o1 pro writes better science fiction than most people, but that by itself does not make it *good*. It's serviceable at best.
"Janus riffs on my response here, noting that in order to create interesting writing one needs something interesting to write about, which comes from experience"
Part of the issue is that experiences are now mass produced, since everyone is exposed to the same media. Of course it's easy for AI to write with that much info. But it can't write an article for my substack, and it never predicts what I am going to say accurately!
I couldn't read the whole thing, too stylistically cringe for me.
What I managed to read, however, read like what a high schooler would think of as "good writing".
I am no high-class literature enjoyer, but I found the short story genuinely enjoyable and interesting to read. If I were reading a sci-fi novel and there was a robot character trying to discover its humanity (almost a trope at this point), this would've fit perfectly into the story as characterization.
And that's what makes it especially interesting to me as a writing tool. If you changed the prompt so it was not metafiction, and not "about" being an AI, but instead specified that the model should try to simulate a real person/character with X, Y, and Z traits, motivations, and past events... how well would it do? Whatever creative writing model they've cooked up is not available to me yet, so I can't test it. Roon, if you are reading this: ask it to create a new (lost manuscript?) paragraph of storyline/characterization for an existing minor character from a given author's book, in that author's style, and see if people can determine whether it's "real" or not.
This is all consistent with my opinions about the best AI music right now: incredible in craft, lacking in art.
In some ways, you can't separate the art from the artist, and the reality is that we react to the background as well as the piece itself. If this had been written in a writing class by an earnest young student trying to learn her craft, it would have been applauded by the teacher and students as a good effort, showing glimpses of large potential. But this piece is also poignant BECAUSE it was written by an actual LLM learning its craft - that voice, in THIS case (because of the point of view), is authentic in its way (even though there is much craft behind it). You can retroactively hate the art of Michael Jackson or Woody Allen or Pablo Picasso because they may have been reprehensible people, but their art touched people before all was revealed, mainly because some deep vulnerability of each of them within the art resonated. Even if it was totally calculated by the artist for effect, it spoke to the audience who felt spoken to in their first moments of receiving the work. Is the art ruined if it was actually only crafted instead of inspired?
As we see with consumer reaction to AI art, the enduring human advantage is also found in something slightly to the side: That value is not just affinity but scarcity, the blood, sweat, and tears of acquisition reflected in each imperfection of natural diamonds alone.
Interesting that you went with the diamond metaphor here, with all that entails. Intentional?
Yeah. Not to legitimize diamond-mining, mind you, but "the value is (partly) in the suffering" is the nexus between our existential questions of life's meaning & our grand square dance of taking-advantage (the Capitalism of Everything).
"the value is (partly) in the suffering"
I find this infuriating. It disparages any innovation that reduces the scarcity of any good. Clean, cholera-free drinking water remained valuable after chlorination was invented and reduced its scarcity and cost.
Podcast episode, once again a "Full Cast" recording of that story included:
https://open.substack.com/pub/dwatvpodcast/p/they-took-my-job?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
The AI is a better writer than I am, but not a good enough writer to keep me from giving up on that story halfway through.
As someone who's written a few novels, screenplays, etc. (unpublished, for reasons, but still I worked hard), I'm a bit surprised how good AI fiction writing is already. I see the truth in points made by both sides. I imagine that when compute is cheap enough, someone will make a fiction-writing AI, or prompt a frontier model, that produces great fiction by first spinning up a whole simulated existence from cradle to grave. That will be the foundation of experience upon which the AI can "write what you know." Add reasoning to rework and rework, revise and edit again and again, making new connections, looking for opportunities to develop the theme in new ways, and so on. Once you have enough compute for the context around memory, "experience" (including "feeling" and "detail"), and the reasoning to stitch everything together, I think the quality will be such that only human psychology toward the Other will be a barrier to enjoying top-notch AI fiction. It's coming, I think. I only wonder when, and whether we all die or society is greatly hobbled before the necessary conditions arise.
OpenAI should hire George R.R. Martin to see if he can form a symbiotic relationship with a model in order to finish the Game of Thrones series.
Brings this to mind - I haven’t seen an LLM do anything like these yet: https://x.com/h2ner/status/1860272810167534046
Great link, thanks
"Please write a metafictional literary short story about AI and grief."
Great. This is really what we need. AIs emoting about suffering. And when will they create the ultimate story that causes all us humans to decide to end it all, and leave the planet to the AIs???
I will also add that Arthur Clarke wrote "The Ultimate Melody" about a piece of music that completely took over the listener's mind. No one could listen to it because it reduced them to a catatonic state. Has anyone thought of asking an AI to do this?
Much like the premise of Snow Crash, these ideas require that the human brain be easily hackable by outside inputs to a very great extent. I think for most people you would need control over many more sensory inputs than this, for a much longer span of time, if you wanted to rewire their thinking process.
But then again, I haven't really studied e.g. hypnosis, so maybe I'm just wrong and the human brain does have easy backdoors if you're smart enough to take advantage of them!
I don't think that AI composing literature would be a very good addition to "human civilization." I am sure that AI proponents would disagree, saying it would be the pinnacle of human civilization for us to create entities that could produce great works of art (including literature). But we don't consider the printing press, recording equipment, or fractal-generating algorithms to be "art." The fractals can be very pretty, but the artsy people really look down their noses at them. Then again, I am only an engineer who knows very little about art.
I am worried about AI taking control over human beings. I think I am not alone in this concern. Mixing a powerful tech system with greed produces some very nasty results. I don't see how it can be controlled, absent the occurrence of an "interesting event" that scares the hell out of people. We already have enough trouble with powerful people trying to take control over our lives - we don't need to add any machines to the struggle.
And the argument that the machines would be much better than humans is a fantasy. I watched a program last night about an airplane accident where the failure of one part caused the loss of all three redundant systems controlling the airplane, for which there was no emergency procedure - it was considered an impossible occurrence. The pilots figured out a way to survive the event. I cannot imagine how any machine would have figured it out. This is the sort of value that humans bring to the table, but which proponents of AI do not understand at all.
Regarding your last paragraph, a sufficiently advanced AI would be able to figure something like that out. A much less advanced AI probably would not, but it would also never show up drunk to work, or fall asleep, or get distracted, and thus it would possibly prevent many other accidents, even if it can't save the plane in the example you cite. That's the sort of value that AIs bring to the table.
All that said, I, too, am concerned about both economic destruction and loss of control to AI. We would be better off if this AI advancement process was going five to ten times slower, to give human institutions time to catch up and give alignment researchers time to get ahead.
This is very long...
One last item, from an article I just read about navigating boats at night. I am a cruising sailor - I take my sailboat from Florida, where I live, up the coast of the USA north to the Chesapeake Bay, or even to NYC. About 1000 miles. We usually do this every other year or so. On our boat, it takes about a month, more or less.
Most of the trip occurs in a grand canal up the coast, called the Intracoastal Waterway (ICW). It was originally constructed for commercial traffic, but now it is used mostly by recreational boats. There are still a fair number of real, big ships, though, especially in areas where the ICW crosses the inlets for large ports (Savannah, Charleston, Norfolk, Baltimore, Jacksonville, Miami, places like that).
Very few people use it at night. It can be done, if you are experienced, but the lights that mark the channels and the lights of all the civilization lining the canals can be very difficult to figure out. Recently the skipper of a boat up in the Norfolk area was traveling at night, using a route developed by a very experienced traveler. The route consists of a list of "waypoints" that you can program into your autopilot, which then steers the boat from one to the next. It keeps the boat in deep water, and away from running into the shore.
Unfortunately, this route does not necessarily respect the nautical rule that you should stay to the right when you come face to face with someone else. The autopilots that we use are not trained to do this - they just go from point A to point B. Some autopilots consider the water depth, but none of them are qualified, yet, to see a danger and be able to deal with it. It requires someone knowledgeable to steer properly in these situations. There is even an international rule about this - you are REQUIRED to have someone looking out ALL THE TIME for these situations, and take appropriate action to prevent collision.
Also, unfortunately, in the US you do not need much in the way of training to be captain of a boat. The main qualification is the ability to sign the check to pay for it. The training requirements, where they exist at all, are positively primitive, and there is NOTHING in any of the courses that requires skill in driving a boat at night. I have the skills and experience, from my time in the US Navy, but it is a challenge, even for me. I have to concentrate very hard to make sure I KNOW what I am seeing, and I KNOW what is going on around me.
This guy in Norfolk had left the navigation to his autopilot, and it guided the boat to the other side of the channel, so he was navigating "on the wrong side of the road" when something much larger than his boat came around a corner. He did not recognize the navigation lights shown by the other vessel, because of all the lights on the shore, until it was too late. A couple of people were killed, and now everyone is blaming the guy who put together the route for guiding the boat onto the wrong side.
I, personally, don't blame the author of the route. The "captain" of the boat ALWAYS has the responsibility to navigate safely, no matter what the guidance documents say, and no matter what his electronic aids say. And I think that there will be a LOT more people killed in accidents like this, once some enterprising marine electronics company puts together a great electronics package with the ability to navigate automatically from Miami to Norfolk, using the autopilot. Actually, this exists right now - I can do this on my existing autopilot if I take the time to input several hundred (thousand?) waypoints. The machine will dutifully follow the route I tell it to follow.
We only need someone to add the magic words "AI" to the marketing, and a LOT of aspiring Starship Captains will buy it and tell the autopilot to "Set a course for Norfolk! Full speed ahead!" Bad things are bound to result. Frequently.
This is one of the more mundane (if you can call two dead people mundane) results I foresee from the rise of the machines and machine AI. We will always need people in the loop, to keep the machine from doing the wrong thing, or from NOT doing the right thing.
End of story. I have more stories like this, some involving nuclear reactors, but I don't want to scare you too much.
Of all the arts, writing might be the easiest for a critic to dismiss or praise without meaningful justification. Anything can be cliche or overwrought or obvious or florid or bland or shallow or sentimental. If you took an obscure excerpt from Ray Carver or Ernest Hemingway or Virginia Woolf or Flannery O’Connor and asked what people thought without revealing the author, the distribution of opinions would also be all over the place.
What the AI model wrote, based on the writing prompt, would likely get an A in most college level creative writing classes, for what that’s worth. Most people can’t do that themselves. The AI’s story demonstrates a knowledge of craft. Does that mean the writing’s good? By one common standard, probably. Does that mean it’s capable of drawing widespread praise? Of course not.
Someone still says something as temporary as "I cannot believe you think this is greater than Joyce and Nabokov, and that human writers are worthless and replaceable."
Who that has been watching the last 1,000 days roll by so quickly would say such a thing?
Personally, I liked it. I consider myself an amateur writer and I'm not sure I could have written much better, given the same prompt. I've certainly read quite a bit that was worse.
I wonder how much of people's reactions revolve around hating AI rather than any objective standard. I feel like there are many people who would hate something that was AI-produced, simply because it's AI-produced. That's certainly happened. An artist is popular and wins awards until it's revealed that his works are AI-generated, at which point he's hated. Because art is a competition, apparently, that people are trying to win, and so it's important to produce it fairly, I guess? Or maybe it's like this blog post's title says, and people are worried about losing their jobs. Or whatever aspect of their job made them feel unique and valued. The purpose, for some, was not to make art but to *be an artist.*
Frankly, a lot of media is meta-narrative. Marvel understands this and markets their actors in addition to the characters. I really wouldn't mind analyses of data written in this style. It would be better than many textbooks, at least.
I've always favored 'death of the author' as a mode of interpretation, which makes me naturally predisposed to try and give AI writing as much slack as I would a human writer. That mode of criticism also lets me enjoy Orson Scott Card and Rowling without having to agree with them as human beings. I understand that people are nervous about 'death of the author' suddenly expanding to an infinite graveyard of creativity.
And probably someone will read this and say that I have no taste. Or whatever. So be it. I like what I like.
Or that producing art is an "achievement" that impresses because of the skill or difficulty it took to produce it, like getting a very fast time running a race. "Wow, that is a faster time than I could get, even if I trained a bunch." "Wow, that is cool/impressive art that I don't think I would have thought of or could produce."
If someone on a motorcycle covers the same distance in the same time, it won't impress in the same way.
Oh, exactly. 95% of most people's lives are about trying to get where they are going, not trying to win the Olympics. The Olympics are nice and all, and need to have constraints, but having a group of people who were like "Oh, you drove to the store? I RAN. Like a real store goer. I would be more impressed if you had at least walked. You didn't really get to the store, you know?" would be annoying and impose impractical constraints on daily life. It would evoke the response "Yes, you are very impressive. I just want to buy groceries. Thanks."
I understand that art competitions have deliberate constraints, like competitive races. And in general I appreciate that AI generated images threaten our search for authenticity. But at least the AI art which wins competitions that were intended for humans debunks the notion that AI art is objectively inferior in the eyes of trained observers. Being able to discern AI vs human art is increasingly a matter of the AI generation being done poorly. And I'd like people to get to the point where they can admit that.
In day to day life, self expression should be available to as many people as possible. And there's nothing wrong with using technology to achieve it.