Tagging AI art


In an earlier essay, I discussed three arguments about AI art, and why I disagree with them. First, it is argued that AI art violates the consent of the artists whose work appears in training sets; second, that it hurts the livelihoods of artists; and third, that it is bad art.

Part of what inspired me to defend AI art was that the social network Pillowfort polled its users on what ought to be done about AI art. I was surprised to hear that more than half thought that AI art should be banned from the platform, and the vast majority thought it should at least be mandatory to label it as AI-generated. I’ve said repeatedly that I am not personally interested in AI art, but it feels so wrong to single out one particular category of content just because a lot of people don’t like it. I myself produce content that plenty of people don’t like (analytical essays), and there are plenty of popular varieties of content that I dislike but no one would ever think to ban. So I defend AI art not on its own merits, but because I am opposed to efforts to homogenize social media content.

However, let’s consider a couple of things about AI art that might make it particularly annoying or corrosive to a social media platform. Even someone who creates or follows AI art might be concerned about these, and advocate measures to control them. We’re talking about deception and spamming.

Deception

One of the popularly advocated measures against AI art is that it ought to be explicitly labeled. I think this is generally a good idea. There's real potential for AI artists to gain from deceiving audiences into thinking no AI was involved. Our interpretation of a piece of art depends on the technique used to create it, and many people would interpret it more negatively if they understood it to be AI-generated. As one reader put it, many people want human connection in their art. So when someone deceives people into thinking their art was made without AI, they're falsifying a human connection for personal gain.

Explicit labeling could also reduce a second kind of deception, where an AI-generated photo is interpreted as a real photo. For example, there was that circulated image of the pope in a snazzy coat. Explicit labeling could at least prevent this from occurring by accident.

Although… I’m just not sure how common these deceptive practices really are? I think most people who use AI to generate art are open about it, and are more interested in advocating for the legitimacy of the method than in trying to reach audiences who hate it. And if you deceived a large enough audience for long enough, I think the deception would be detected. I could only find one story of it actually happening. Personally, I’m more concerned about the far more widespread practice of plagiarism, or the numerous Instagram accounts built upon screenshotting Tumblr posts.

Maybe it’s more common in writing than in visual art. There was a news story about AI-generated fiction flooding literary magazines. Although it seems to be a lesser issue than the headline led me to believe: editors say it’s easy to tell when a submission is AI-generated, and one editor said it made up about 5% of their submissions. It appears that the spammers are people from poor countries who are desperate for money, which makes sense, since literary magazines don’t pay that well.

None of this is to say that a policy of mandatory labeling for AI art is a bad idea. It sounds like a good idea.

There are some edge cases we have to think about, though. AI might be used in only a minor step, such as inspiration or reference. In writing, AI might be used for organization or grammar. And hey, video games are art; what if you use AI for coding assistance? Must the usage of AI be disclosed in every case? It’s an open question how much people actually care about AI methods when they’re only used in minor steps. The fact that these alternative ways of using AI are so often omitted from the conversation suggests to me that people do not care about them.

To make an analogy: when I talk about origami, does the average reader care whether I used grafting, splitting, the tree method, or box pleating? What if I used a computational method like Reference Finder or Tree Maker? The average reader doesn’t care, because they don’t know what any of these are. I personally love talking about methodological details, but omitting certain details does not constitute deliberate deception.

Spamming

Mandatory labeling for AI art on social media isn’t only intended to reduce deception. There’s also another goal: reducing spam. Supposedly, AI art spam is a huge problem on certain platforms like DeviantArt. And while people can filter it out if it’s tagged correctly, they can’t filter it out completely because of inconsistent tagging practices.
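
To make the filtering mechanic concrete, here’s a minimal sketch in Python, assuming a hypothetical post structure with free-form tags (none of this is any platform’s actual API). The point is that a tag filter is only as good as the tagging: an untagged AI-generated post passes straight through.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    """A hypothetical social media post with free-form tags."""
    author: str
    content: str
    tags: set[str] = field(default_factory=set)

def visible_posts(feed: list[Post], blocked_tags: set[str]) -> list[Post]:
    """Hide any post that carries a blocked tag."""
    return [post for post in feed if not (post.tags & blocked_tags)]

feed = [
    Post("alice", "hand-drawn portrait", {"art"}),
    Post("bob", "generated landscape", {"art", "ai art"}),  # correctly tagged: filterable
    Post("carol", "generated landscape", {"art"}),          # untagged: slips through
]

# Blocking the "ai art" tag hides bob's post, but carol's identical
# (yet untagged) post still reaches the reader.
for post in visible_posts(feed, blocked_tags={"ai art"}):
    print(post.author, "-", post.content)
```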

I’m sympathetic to people worried about spam. I, too, have been plagued by spam “art” that people generate by just writing a few words into an online tool. That’s right, I don’t like memes. People on the internet are very inconsistent about tagging their memes. (More precisely, people consistently do not tag memes.) It doesn’t even occur to them that this is a category of content that some people may wish to avoid.

I think tagging AI art as such is a good idea. The problem is that it isn’t the only good idea. There are a million other types of content that ought to be tagged appropriately, so that they can be discovered and filtered more easily.

Back when I used to be on Tumblr, this was a perpetual issue. Ace Tumblr would constantly complain about people posting hostility towards aces in the ace tags. Aros would complain about aces spamming the aro tags. People would frequently switch to new tags to get away from inappropriately tagged content, but the inappropriately tagged content would follow them to the new tags. (Personally, I didn’t follow any tags at all, because they were filled with memes.)

In short, there were an awful lot of people screaming at each other about tag spam, and this pattern repeated itself across many communities and generations of social media users. I’m inclined to think that the root of the problem was not individuals failing to use tags correctly, but the fact that people were relying on tags to follow content at all.

Methods to follow internet content, a tier list:

S tier – Manually choose what to follow.
A tier – Follow aggregators or moderators that hand-pick content for you.
C tier – Rely on algorithmic recommendations.
F tier – Follow unmoderated tags, where anyone on the internet can unilaterally decide that they deserve your attention.
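
To illustrate the gap between the S tier and the F tier, here’s a toy sketch (the data and function names are made up): a follow-based feed contains only sources you explicitly chose, while an unmoderated tag feed contains anything any stranger decided to tag.

```python
# A toy model of the gap between the tiers. With follows, you choose
# every source; with an unmoderated tag, any poster can opt into your
# feed just by applying the tag.

posts = [
    {"author": "artist_i_like", "content": "new painting", "tags": {"art"}},
    {"author": "total_stranger", "content": "buy my course!!", "tags": {"art"}},
]

def follow_feed(posts, followed):
    # S tier: only sources you explicitly chose appear.
    return [p for p in posts if p["author"] in followed]

def tag_feed(posts, tag):
    # F tier: anyone who applies the tag appears, spam included.
    return [p for p in posts if tag in p["tags"]]

print([p["author"] for p in follow_feed(posts, {"artist_i_like"})])  # ['artist_i_like']
print([p["author"] for p in tag_feed(posts, "art")])                 # both, spam included
```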

Why would you ever use unmoderated tags if you had any alternative? They may suffice in a small community where people have mostly homogeneous interests and perspectives. But that just pressures a community to police its own homogeneity, with people constantly yelling at each other to tag things appropriately. If you’re worried about AI art infiltrating your tags, that just tells me that you were privileged enough to have a tag that mostly worked for you, and you are upset at threats to that privilege. As a person who has never been able to stomach unmoderated tags in the first place, I am not sympathetic.

In conclusion, I think it’s reasonable to ask for standardized tagging of AI art, to reduce deception and help filter content. However, I think the supposed problems it’s addressing are overblown, and I’m hostile to attempts to limit internet content just because some people don’t like it.

Comments

  1. says

    Small addendum, from comments on Pillowfort: people say that deception is more common than I was aware, for example in the adoptable art community. This seems entirely plausible to me.

  2. says

    outside of those loops, never did adoptable art or online art communities in general. but i’ll say this has some amusing ethical parallels.

    let’s say, hypothetically, that the concerns about artificial intelligence stealing people’s art are completely unfounded specious nonsense on the level of conspiracy theories like covid denialism and flat earth. i’m not saying that those concerns are equally foolish, but I have seen outright cons, lies, and misconceptions from that side of the aisle many times now.

    if that’s true, then forcing anybody that uses AI art tools to label their creations might be comparable, ethically, to requiring vaccinated people to wear a badge so like alex jones can know to avoid them.
