A prediction, always a dangerous thing to make: Within the next five years, artificial intelligence (A.I.) engineers will develop a generative algorithm that will make online anonymity impossible.

On its face, this sounds like a good thing. From the innocuous, like petty review-bombing, to the heartless, like cyberbullying, to the deadliest, like SWATting, many would cite anonymity as the fuel driving some of the most inhumane behavior the species currently faces. Therefore, being able to pull away the veils – all of them – should be a plus, right?

Not so fast. It all depends on whose hands are on the steering wheel.

Generative A.I.

Before we discuss what a disclosure A.I. would look like, we should talk about what “generative A.I.” is in the first place. This form of artificial intelligence is trained on a wealth of sources already out there on the Internet, slices and dices what it has absorbed according to prompts given by a user, and develops a pastiche from the closest examples it has learned. For instance, if you gave an art A.I. like Midjourney the prompt “A dog in a Sherlock Holmes outfit investigates a crime in a swamp,” the program would draw on what it learned from images of dogs, of Sherlock Holmes, of the character’s outfit and the poses Holmes is most often depicted in, and of swamps, and then blend these elements into a single image. You can add terms that dictate tone, color, light, shadow, hand-drawn or photographic rendering, and more, and these will be factored into what is essentially a highly rendered collage as the output.
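To make that concrete, here is a minimal sketch of how such a prompt is actually fed to a text-to-image model. It uses the open-source Hugging Face diffusers library and Stable Diffusion standing in for Midjourney (which only works through a closed interface), and it assumes a machine with a CUDA-capable GPU.

```python
# A minimal text-to-image sketch using the open-source diffusers library.
# Midjourney is closed, so Stable Diffusion stands in here to illustrate
# the same prompt-driven idea. Assumes a CUDA-capable GPU is available.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained model. Its "knowledge" of dogs, detectives, and swamps
# was baked in during training -- nothing is fetched live from the Internet.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Style terms ("hand-drawn," "moody lighting") steer tone and rendering.
prompt = ("A dog in a Sherlock Holmes outfit investigates a crime in a swamp, "
          "hand-drawn, moody lighting")

image = pipe(prompt).images[0]  # synthesize one image from the prompt
image.save("sherlock_dog.png")
```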

We saw the seeds of this all the way back in 2011, when IBM’s A.I. “Watson” won on the game show Jeopardy!, besting previous champions Brad Rutter and Ken Jennings. How did it do that? When the late host Alex Trebek revealed a clue on the screen, Watson trawled its massive onboard database – it wasn’t even connected to the Internet during play – with incredible speed, digging up the answer (in the form of a question). Was that cheating? Perhaps, but as we’d see only a few years later, when people began regularly Googling answers to questions via smartphones, what was once a bit of a cheap card trick is now commonplace behavior.

But what does this have to do with online anonymity?

Footprints

Investigators and trackers rely upon footprints when on the hunt. It’s a skill honed throughout human history, dating back to when hunting was a necessary function. You had to know your deer and wild boar tracks from your grizzly bear’s. One you can make a dinner of; the other makes a dinner of you. Such tracking is rudimentary, basically recognizing shapes and patterns.

In our modern day, a lot can be learned from a footprint, especially in a manhunt. You can tell what the person was wearing from the tread patterns left behind: a sneaker, a work boot, a dress shoe. Many times, the shoe manufacturer has a logo embossed on the sole, so an investigator can narrow things right down to the brand. They’ll know a foot’s length and width, a person’s approximate weight given the depth of an impression, and even some of the individual’s specific traits. Does the person have a limp, or were they injured beforehand? Then one footprint might press lighter or deeper than the other, and so on.

But footprints are found in behavioral science as well. Each person has specific “tells,” especially in their online lives. There are times of the day when a person is more likely to post than not, because of being asleep or awake, being at work or school, or being stuck in a commute at roughly the same time each day. Each of us has a handful of words we habitually fall back upon: the person who uses the term “amazing” all the time and hardly ever any of its variants, or the person who never gets “their” correct, typing “there” instead, and so forth. It can even come down to tone – the person who always communicates in the passive voice, or someone who favors aggressive terminology.

As a one-off, any of these tells could easily be overlooked. It’s just a statement, made in a specific way. In aggregate, however, these patterns become clearer and more specific. This is the emergence phenomenon: identity surfaces from the accumulation of small habits. It would take a human investigator weeks, if not years, to scour one person’s communications and lift out the identifying similarities. But as we saw with Watson, tearing through its database with incredible speed, or with Midjourney or ChatGPT, sampling and synthesizing something new from many things old in seconds, a machine can do it almost instantly. And because all of us have been so prolific online for so long, we’ve left footprints everywhere. It’s more than logging data, capturing cookies, and jotting down the “when-and-where.” The seeds of the “who” are there as well, and technology finally has the ability, speed, and directive to analyze them.
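To give a flavor of how mechanical that matching could be, here is a toy sketch of stylometric attribution in Python using the scikit-learn library. The authors and writing samples are invented for illustration; a real system would weigh thousands of signals, posting times and tone among them.

```python
# A toy stylometry sketch: match an anonymous post to known writers by
# comparing word-usage "footprints." The samples below are invented
# placeholders; a real system would use far more text and far more signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

known_authors = {
    # Note author_a's habitual "amazing" and the there/their slip.
    "author_a": "This is amazing. Totally amazing. There new album is amazing.",
    "author_b": "I was informed that the record had been released by the label.",
}
anonymous_post = "Honestly, the show last night was amazing, just amazing."

# Turn each writing sample into a vector of word and word-pair frequencies.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
texts = list(known_authors.values()) + [anonymous_post]
vectors = vectorizer.fit_transform(texts).toarray()

# Compare the anonymous post against each known author's "footprint."
scores = cosine_similarity(vectors[-1:], vectors[:-1])[0]
names = list(known_authors)
for name, score in zip(names, scores):
    print(f"{name}: similarity {score:.2f}")

print("Closest stylistic match:", names[scores.argmax()])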

Generative A.I. now has the library of “us” to know with a high degree of certainty who we are versus who we might claim to be.

The Reveal?

I think we can all agree that everything depends on who is in charge of the technology. We can also agree, based on innumerable movies and TV shows, that the guy who invents the x-ray glasses is more likely to perv on strangers with them than to let people know they have medical issues they should address. Given the choice to use radiation to power cities for good or to blow another nation off the map, we typically lean toward “bombs away.”

With this in mind, the utopian ideal – that such an A.I. tool, paired with Twitter, Facebook, Gmail, or whatever, would uncover identity theft, deliberate deception, and bad actors, essentially forcing secrecy out into the light – is a good thing on its face, but it has equally disastrous potential. On the one hand, there would be a chilling effect: people would stop hiding behind protective digital disguises simply because being caught would be the norm rather than the outlier. The A.I. would have pulled all of an individual’s communication footprints and I.D.’d them, no matter what they call themselves.

But there are times when anonymity is a good thing. Those dealing with domestic abuse, for example, need to communicate in a way that would not tip off their abuser. It would be one thing if this technology stayed solely in the hands of law enforcement and not the general public, but these things never remain locked in the professional arsenal. Just look at the millions caught up in using ChatGPT and calling the results their own work; that only happens when a tool is easy to obtain. I expect that even if an identification A.I. started under wraps, it wouldn’t stay there long. It might even be developed by outside parties and made “open source” from the start.

Yes, such technology would level the playing field, but this is tantamount to a “monkey’s paw” wish. It would uncover many bad actors, but it would also enable others to act badly.

Is It Inevitable?

As we move farther from A.I. as fun – watching Ken Jennings being beaten by a machine on TV – to A.I. as functional, we’re going to see more instances of abuse. We already see some of it, from audio and video mash-ups of long-dead creators to deepfakes inserting non-consenting people’s likenesses into compromising scenarios. We can only expect more of this, not less, to impact our lives. Therefore, technology that seeks to root out what is real and what is manufactured has to rise. Governments will likely demand it as a matter of national security. That’s only a hair’s breadth from technology that not only tells real from fake but tells one real person from another. One can only imagine Alphabet and Meta are already on top of this as a profit center.

Yes, I imagine it is inevitable that such technology will make secrecy far less secret, for better or worse, and it is coming very, very quickly.

Is there a solution for this, or at least a standard operating procedure we should adopt when facing such widespread disclosure? I think so, but it reeks of a Pollyanna-ish attitude adjustment and carries more than a little worry about Orwellian oversight. We’re going to need to be more honest with each other. The days of the digital masquerade are likely over – in part as a necessity against A.I. encroachment – and you’ll either need to keep your bad ideas to yourself or own them outright, attaching your real name and face to them, because eventually they will be attached by automated measures anyway.

And to those who would argue this particular future is both far-fetched and far-flung, I’d offer this one tidbit. Computer scientists made predictions in January 2023 of where A.I. would be within five years; the majority of those predictions were surpassed by March 2023. A.I. pioneer Geoffrey Hinton, known to some as the “godfather of A.I.,” recently left his position with Google to warn about the threats an unchecked technology would pose to humans. While his warnings are valid and actually quite scary, so too would be a potential solution: counter-technology to combat artificial intelligence influence, which is exactly the sort of algorithm we just illustrated.

In any case, A.I. is an evolutionary step for the human species. We will have to adapt in some form to live with and navigate around the wild animal that has now been uncaged, one we cannot lure back in. It will require the individual to be more authentic, but it will complicate issues of safety and justice, where speaking the truth can invite retribution.

The future is here.

About the Author

Dw. Dunphy

Dw. Dunphy is a writer, artist, and musician. For Popdose he has contributed many articles that can be found in the site's archives. He also writes for New Jersey Stage, Musictap.net, Ultimate Classic Rock, and Diffuser FM. His music can be found at http://dwdunphy.bandcamp.com/.
