On “AI” Tagging Laws

Content created by a machine learning model should be labeled. Controversial, right?

I believe that intentionally obfuscating the origin of content is malicious. However, discussions around regulating machine learning (I refuse to call it AI) often treat it as a thing in a box. If we picture an independent image or video clip or article, it is easy to imagine regulating how that content must be labeled. We think of this new content as an isolated deepfake video of a presidential candidate or a website filled with ChatGPT articles and conclude that it needs to be regulated, but what separates that ML content from Grammarly or a Google Pixel’s low-light diffusion?

A quick Google search for “arrested for photoshop” turns up cases about either police incompetence or Chris Hansen-level sex crimes. It follows that photoshopping an image counts as protected expression. If we then demand regulation of ML content, is it because this content is too good? Too convincing? Does it become a judicial effort to protect any semblance of reality as conveyed digitally? I think that is what people who advocate for legislating ‘AI’ really want - for the internet to remain at least somewhat tethered to humanity.

So, let’s deconstruct those two ideas. If the motivation for legislation is that ML content is too good at being real, then regulation should hinge on intent: a piece of content found to be created with the express purpose of causing harm should carry criminal penalties. Someone automating email responses to their coworkers or using ML facetune on dating apps may be using these tools without disclosure, but their intent is not to cause harm, and as such they do not go to jail.

They do go to jail if we regulate based on the idea that we need to preserve the tenuous connection between the digital world and the physical one. Though in that case, logically, it is the companies creating the tools that get pursued rather than the consumers*. This could be done, but it would be shockingly anti-capitalist to legislate away progress and industry, and the invisible hand will likely lead most of these companies to crumble anyway as the market continues to realize that people like being people. In this scenario, where does one draw the line? “AI” is not a well-defined term, and machine learning has been a tool since long before OpenAI lit the venture/tech world on fire. Add globalization and freedom and all that stuff, and it becomes even more impossible.

I do not think ‘AI’ should be regulated. Ethically, I believe that intentionally obfuscating the origin of work created with machine learning is evidence of malicious intent; however, regulating it solves very little. Content-sharing platforms should severely punish unlabeled work created with machine learning and continue to develop tools that can detect it - as should we as people.

There is also a whole environmental angle. AI datacenters are going to keep using monstrous amounts of energy, and at some level producing that much waste heat is a concern, but we all know that legislating based on environmental concerns is a myth.

*I heard that Zoom is trying to make a tool where an avatar of yourself goes to meetings, and that some dating company wants to have trained avatars go on first dates.
