Google all at sea over rising tide of robo-spam

Opinion It was a bold claim by the richest and most famous tech founder: bold, precise and wrong. Laughably so. Twenty years ago, Bill Gates promised to rid the world of spam by 2006. How's that worked out for you?

Gates' failure is hanging particularly heavily on Google right now. It's not so much the email version (Gmail's Report As Spam option works well) as the empty clickbait content clogging up its search system. That's been with us forever; the difference now is that AI spam is proliferating out of control. Content spam made up two percent of search hits before ChatGPT; it's ten percent now, and Google is manually delisting sites like never before.

It takes a lot for Google to remove sites from its search results. That means losing ad revenue - the main reason spam sites exist - and revenue is the crack cocaine of publicly traded tech companies. See Microsoft's continued attempts to monetize the Windows desktop. Or Meta's alleged use of harmfully addictive algorithms.

You know this already: you use big tech services, and you know the difference between what the companies say in public and what they actually deliver. You don't matter; the revenue you represent does.

The primary reason Google is spending its own money to reduce its own revenue is that AI spam content is so cheap and easy to produce that it has a much better chance of overwhelming everything else. It is so toxic, moreover, that it risks driving a mass migration of users away from Google, users already fed up with sponsor-heavy search results heavily spiced with pre-AI clickbait garbage. This is an existential threat to Google, and potentially to all the other on-ramps to web content. Which is to say, to the web as we know it: a place to create and discover outside the big, known brands.

Stopping this is hard. One answer is to out-AI the AI spammers, automating the business of finding and isolating the cheats. There are two problems. The first is that AI is very resource-intensive, and this approach risks joining cryptocurrency in the business of boiling the oceans in an exponential megawatt orgy. The other is that there is no way to win outright, as AI spam develops the equivalent of antibiotic resistance.
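
As a sketch of what that automation might look like, here's a toy supervised classifier that scores pages for spam-likeness; the corpus, features, and model below are illustrative stand-ins, not anything Google is known to run.

```python
# A minimal sketch of automated spam-hunting: a supervised classifier that
# scores pages for spam-likeness. The toy corpus, features, and model are
# illustrative stand-ins, not anything a real search engine is known to use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled corpus: 1 = spam, 0 = legitimate.
pages = [
    "top 10 best amazing products you must buy now best deals best prices",
    "in this article we will explore the topic of the topic in great depth",
    "kernel 6.9 merges the new scheduler; here is what changed and why",
    "our benchmark methodology, raw data, and error bars are published here",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(pages, labels)

# Score an unseen page. A real system would combine this with many other
# signals (link graph, publishing history, manual review) before delisting.
prob = model.predict_proba(["amazing top best products to explore in depth"])[0][1]
print(f"spam probability: {prob:.2f}")
```

The catch is visible even at toy scale: retrain as often as you like, and the spammers can rewrite their output faster, which is the antibiotic resistance problem in miniature.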

Even before either happens, the situation is dangerous for Google and the other search engines looking to make AI the preferred front end to the search experience. That AI will have to train on an increasingly contaminated dataset of web content, and even if it works, giving a succinct report of the searched-for information instead of a list of links, it's not clear what this means for ad revenue. Google earns when you click links, not when you read Google. Move the adverts to the AI report? That removes all of the revenue from ad-supported sites, good and bad, something that would actively kill off good content at the same time as the regulators of the world let loose a cry to shatter the heavens.

None of these scenarios looks good. Most look apocalyptic. Yet some of the basic assumptions are wrong. Change the context, and different outcomes appear, outcomes that are far from disastrous. Quite the opposite. 

Google has made its uncountable billions by divorcing ad revenue from content providers. Before Google, ad revenue models depended on what people chose to consume. Now, that's algorithmic. The Google algorithm started off well, with a good model of what good content looked like in terms of structure, links, and other factors. The better the quality of a site, the more traffic, and the more revenue for it and for Google? As if. It's always cheaper to game the algorithm than to play fair, so the cash goes to the gamers – and Google.

This has been very unhealthy in so many ways, giving Google and its peers the reach and resources to game the system for themselves. The algorithms are wreathed in darkness, ostensibly to prevent third parties gaming them but conveniently providing a corporate shield for shenanigans. We have a worse web for work and play as a result, to say nothing of the social impact of algorithm-derived poison that amplifies fear, anger and divisiveness. Bringing the apocalypse to all this would be no disaster at all. At least to humans. 

If AI kills the algorithm, what happens next? Imagine sites had the option to publish a quality statement online, much as they must have a privacy statement now. Machine parsable, these statements would give search and suggestion algorithms a way to honor the intent of the user. Rules like "no unflagged AI content," "transparent ownership," "marketing," and "Lizard People HQ," or whatever. Set your preferences, and get matching content prioritized. Find a site you reckon isn't sticking to its rules, and tell the system not to show it again. And, while it's at it, any site that links to it.
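
What might that look like on the wire? Here's a minimal sketch, assuming a hypothetical well-known path and invented field names; nothing like this is standardized today.

```python
# Sketch of a machine-parsable quality statement and a client that honors it.
# The /.well-known/quality.json idea, field names, and vocabulary are all
# hypothetical - no such standard exists today.
import json

# What a site might publish, by analogy with a privacy policy or robots.txt.
statement = json.loads("""
{
  "site": "example.com",
  "pledges": ["no-unflagged-ai-content", "transparent-ownership"],
  "disclosures": ["marketing"]
}
""")

# What a user might set in their search front end.
prefs = {
    "require": {"no-unflagged-ai-content"},       # pledges the user insists on
    "reject": {"marketing", "lizard-people-hq"},  # labels the user filters out
}

def acceptable(statement: dict, prefs: dict) -> bool:
    """True if a site's published pledges satisfy the user's preferences."""
    pledges = set(statement.get("pledges", []))
    disclosures = set(statement.get("disclosures", []))
    if not prefs["require"] <= pledges:
        return False  # missing a pledge the user requires
    if prefs["reject"] & disclosures:
        return False  # site discloses something the user rejects
    return True

print(acceptable(statement, prefs))  # False: the site discloses "marketing"
```

The point is the contract, not the format: a published pledge is something a site can be held to, which an opaque ranking algorithm is not.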

The effect would be to wrest control of the algorithm from its creators, and from wherever they put their thumbs on the balance between what you want and what they'll give you. The stats – anonymized, opt-in – of the graph of who chose what would be fascinating, not to say a rich source of signals for identifying cheaters. Everything's gameable, but you can give good behavior a commercially significant chance.
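
As a sketch of how the blocking rule and those opt-in stats might interact, with an invented link graph and a simple one-hop rule:

```python
# Sketch of the "hide it, and anything that links to it" rule, plus the
# anonymized, opt-in tallies that become a cheat-detection signal. The link
# graph, site names, and one-hop rule are invented for illustration.
from collections import Counter

links = {                                    # site -> sites it links out to
    "spamfarm.example": set(),
    "aggregator.example": {"spamfarm.example"},
    "blog.example": {"aggregator.example"},
}

def blocked_set(user_blocks: set) -> set:
    """Expand a user's block list by one hop: add direct linkers too."""
    blocked = set(user_blocks)
    for site, outbound in links.items():
        if outbound & user_blocks:
            blocked.add(site)
    return blocked

# Aggregated across consenting users, the block counts identify candidates
# for review: sites many independent users refuse to see.
per_user_blocks = [{"spamfarm.example"}, {"spamfarm.example"}, {"aggregator.example"}]
tally = Counter(site for blocks in per_user_blocks for site in blocked_set(blocks))
print(tally.most_common())
# [('aggregator.example', 3), ('spamfarm.example', 2), ('blog.example', 1)]
```

One hop is the rule as stated above; a real system might iterate further along the link graph, at the risk of nuking half the web for one bad link.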

This may seem to encourage bubbles, but it's also a way to burst out of them. Sites that keep their word will get more revenue and will, in time, form chains of trust; sites that reject the whole idea will be free to serve audiences that agree with them. It's misrepresentation that will become harder.

Gmail's Report As Spam button is an implicit admission that algorithms alone can't keep us safe. If AI clickbait means the same applies to our web of deceit, we may yet be thankful it happened. ®
