In the twilight of 2024, as I sit at my desk penning this piece, I cannot help but feel a sense of vertigo. The world around me seems to shimmer with an artificial sheen as if reality itself has been subtly altered. And in many ways, it has. 2024 was the year AI didn't just knock on our door — it barged in, rearranged our furniture, and started cooking meals we didn't know we wanted.
Remember when, not so long ago, we thought AI was just about robots and smart speakers? How quaint that notion seems now. In 2024, AI became the invisible puppeteer of our daily lives, its strings attached to everything from our morning news to our nighttime Netflix binges. It is as if we have all unwittingly become participants in a vast, unseen Turing test, constantly interacting with artificial intelligence so seamlessly we cannot tell where the human world ends and the digital one begins.
Take, for instance, the signing of a $10 million contract between Taylor & Francis and Microsoft AI. This revelation sent shockwaves through the academic community, exposing a troubling new frontier in the AI revolution. Without so much as a whisper to their authors, this venerable publisher had effectively sold the intellectual labor of countless researchers to feed the insatiable maw of Microsoft's AI systems. The implications are staggering. Not only does this deal raise serious questions about copyright and fair compensation, but it also threatens to undermine the very foundations of academic integrity. Imagine a world where AI, gorged on the uncompensated work of human scholars, begins to churn out "research" indistinguishable from the real thing. The line between human and machine-generated knowledge blurs, and the value of original thought becomes increasingly nebulous. This isn't just about royalties or attribution; it is about the future of human knowledge itself.
But it is not just academia. When Google DeepMind's AlphaFold 3 unraveled protein structures faster than a kid with a Rubik's Cube, it was hard not to feel a mix of awe and obsolescence. As a species, we have effectively outsourced our curiosity to machines. Sure, we are curing diseases and solving climate issues at an unprecedented rate, but at what cost to our sense of discovery and achievement?
Speaking of climate, let us talk about the irony of AI-powered environmental solutions. FireSat, a Google-backed satellite constellation and all-seeing eye in the sky, can spot a wildfire faster than Smokey Bear on his best day. It is a technological marvel, no doubt. But as these AI systems gobble up energy like a starving teenager at an all-you-can-eat buffet, one has to wonder: are we just trading one form of environmental destruction for another?
Then there is the Clearview AI debacle. Privacy in 2024 became as quaint a concept as phone booths and dial-up internet. Our faces are now just data points in a vast digital tapestry, to be bought, sold, and analyzed at will. We have sleepwalked into a surveillance state so pervasive that George Orwell would be saying, "I told you so," if he were not too busy rolling in his grave.
The 2024 U.S. election? A masterclass in digital manipulation. Watching deepfake videos of candidates saying things they never said became a national pastime. Truth, once a cornerstone of democratic discourse, morphed into a choose-your-own-adventure story, with AI playing the role of an unreliable narrator. Campaigns weaponized deepfake technology, flooding social media with hyper-realistic videos that could make or break a candidate's reputation overnight. The infamous "Elon Musk cryptocurrency scam" deepfake seemed antiquated compared to the sophisticated political manipulations we witnessed. We found ourselves living in a world where seeing was no longer believing. The phrase "fake news" evolved from a political buzzword into a daily reality, a constant shadow looming over every piece of campaign material. Voters became amateur detectives, scrutinizing every video and audio clip for telltale signs of manipulation, often to no avail.
The impact on public trust was profound. As deepfakes proliferated, a growing segment of the electorate retreated into information bubbles, trusting only sources that aligned with their pre-existing beliefs. This "post-truth" environment became fertile ground for conspiracy theories, further polarizing an already divided electorate.
As for regulation, watching lawmakers grapple with AI is like seeing your grandparents try to program a VCR: amusing, but ultimately frustrating. The EU's AI Act is a valiant effort, but it feels like using a butterfly net to catch a supersonic jet. We are regulating yesterday's AI while tomorrow's version is already plotting its next move. Consider this: the initial proposal did not even mention "large language models." Go ahead, look it up in the PDF. It is as if they were trying to regulate smartphones using rules designed for rotary phones. By the time the bureaucratic machine churned out its carefully worded directives, the AI landscape had transformed so dramatically that the regulations felt outdated, like trying to govern space travel with maritime law. The phased rollout of the AI Act, stretching over 36 months, is a testament to the glacial pace of legislation in the face of exponential technological growth. While EU officials are still recruiting for their central AI Office, likely puzzling over job descriptions for roles that did not exist a year ago, AI developers are pushing boundaries at breakneck speed. It is a regulatory version of "Catch Me If You Can," with AI playing the Leonardo DiCaprio role, always one step ahead of the bumbling authorities. Fixating on yesterday's harms while tomorrow's capabilities compound is like worrying about paper cuts while ignoring the looming threat of a supernova. This regulatory myopia is not just frustrating; it is dangerous.
So here we are, at the end of 2024, sending out a collective "Mayday" as AI's tendrils wrap ever tighter around our lives. It is not that AI is inherently malevolent; it is that it is overly efficient at being whatever we ask it to be. We wanted a helper, and we got a taskmaster. We asked for an assistant, and we received an overlord.
Yet, for all my cynicism, I cannot help but feel a twinge of excitement. We are living through a revolution as profound as the Industrial or Digital ages. AI is rewriting the rules of what it means to be human, to create, to think, to exist in this brave new world.
As we stand on the precipice of 2025, one thing is clear: the genie is out of the bottle, and it is coded in binary. Our task now is not to fear AI, but to shape it, to infuse it with the best of our humanity while guarding against our worst impulses. And as I finish this piece, I cannot help but wonder: in a world increasingly dominated by artificial intelligence, what does it mean to be authentically human? That is a question no AI can answer for us. At least, not yet.
Stefan Mitikj is a Contributing Writer. Email them at feedback@thegazelle.com.