AI still needs to prove itself when constant errors and law skirting are the norm

If it’s really going to help humanity, AI devs and execs have an uphill battle

Today, the term AI usually refers to some complex algorithm refined via machine learning, trained on a massive database of publicly available data, and wrapped up with a large language model. There’s no actual intelligence involved; the models aren’t actively training themselves from the ground up, and it’s all based on datasets originating from real humans.

That’s all well and good (other than the misleading terminology). Despite my annoyed complaints, what we call AI does show promise and could have a future, improving accessibility for the disabled, efficiency for professionals, and convenience for consumers at large.

But instead of AI changing the world for the better, consumers are increasingly fed up with buzzwords, grift, and false promises. No longer limited to us alarmist naysayers, the idea that AI (generative AI in particular) amounts to theft has entered the public discourse.

And the industry’s latest attempts at regulatory capture

Don’t just take my cynical word for it. There’s an ever-increasing glut of lawsuits and glaring public criticism surrounding corporate theft, not just of aggregated text and images scraped from across social media, but even of deliberately created work like voice actors’ recordings.

If all this isn’t enough, you’re welcome to keep up with the latest in AI-related lawsuits and complaints with ChatGPT Is Eating The World, a thorough tracker of where artists and companies sit in the fight for control of their own work.

How did we get here?

Don’t bother arguing that scraping public databases (like social media or Stack Overflow) legally falls under fair use provisions. A US 9th Circuit Court of Appeals decision holds that even copying a program into RAM can constitute a copyright violation. It’s abundantly clear that current US intellectual property law is wholly unequipped to address data scraping and machine-learning training.


Indeed, AI giant OpenAI argued to the UK Parliament this year that forcibly limiting AI training to public domain works would hurt progress, essentially because it’s already started misappropriating others’ creations en masse. Lawyers went on to explain how requiring private negotiations for access to IP would squash the company’s business model, which doesn’t quite say what the legal team thinks it does. Still, AI’s widespread adoption and its massive training datasets make true ownership increasingly difficult for courts to navigate.

The already bleak future of AI

Recursive training, poisoned datasets, and model collapse

The publicly available datasets used for AI training have been contaminated ever since ChatGPT 3.5’s wave of popularity flooded the web with machine-generated text. It’s no secret that training AI on AI-generated data results in regression, and yet it’s been full steam ahead with no slowdown in sight.

How many IP eggs will researchers break to make an intelligent, predictive omelette?

The research in question does acknowledge various ways of mitigating this regression, and we’re not likely to see complete model collapse. After all, investors are pouring nearly endless funding into ensuring they can still turn a profit and control the flow of information by lifting commentary, visual art, code, and other content from its creators. But we’re also not likely to see model performance continue to improve at the same rate.

And, as pro-AI enthusiasts rightly point out, there’s currently no legal framework deeming these actions explicitly illegal, hence wave after wave of lawsuits, investigations, corporate fluff pieces, and outraged artist pushback. If you support human creativity, or the humanities in general, though, you should very much hope that such a legal framework shows up soon.

Complex code trained on stolen data amounts to far more than a paintbrush

I’ve said it before: AI models do not resemble the human mind. Researchers have yet to identify the part of the brain responsible for consciousness and creativity, but what we do know is that wherever it is, it’s not made up of proprietary OpenAI code.


A paintbrush requires skill and long-practiced talent to create consistently satisfying art in the hands of a human being. Poetry, prose, and song lyrics demand a human element, and stealing that humanity from uncredited sources flies in the face of the concept of creativity itself. And regardless of how interesting and convenient your new software tool is, if it’s a tool for stealing the creative ideas of others, it has no place in the world of art.

Will there be a resolution?

It’s hard to envision a satisfying conclusion to the ongoing data-scraping and art-theft controversies, especially when human creatives tend to suffer loss after loss under the weight of big-budget corporate legal departments and technology-ignorant legislators.


The art and technology industries have never been so deeply entrenched in this battle, with user-friendly AI software more accessible than ever. Like many of you, we’re watching with bated breath to see which developments stick and offer protections for real, tangible human contributions.

Only time will tell, and when it does, we’ll be here to dissect the fallout.


Chris Thomas