What Is ‘Chatfishing’, The Disappointing Dating Trend Plaguing Apps?

You might have heard of “catfishing”, which happens when people create fake or misleading profiles online in order to draw others in.

The term comes from the 2010 documentary Catfish, which compared the practice to catfish that one of the cast members said were placed in tanks of cod to keep the cod agile in transit.

“I thank God for the catfish because we would be droll, boring and dull if we didn’t have somebody nipping at our fin,” Vince Pierce, who helped to inspire the name of the movie, said in the flick.

But according to author and relationship and self-help expert Tam Kaur, another species has taken its place: we are now in an age of “chatfishing”.

What is “chatfishing”?

“‘Chatfishing’ is when someone uses an AI tool, like ChatGPT or Gemini, to write their messages on dating apps,” Kaur shared.

Though it doesn’t exactly sound romantic, the self-help expert said she understands why reliance on large language models (LLMs) like these is growing.

The machines, after all, won’t make an embarrassing grammar or spelling mistake, and they can make the “awkward” process of starting a conversation a little smoother.

“Using AI becomes a way to show up as a ‘perfect’ version of yourself, without the fear of rejection,” the expert continued.

But that doesn’t resolve the core problem: if the chat goes well, you will eventually find yourself in a real-life situation where you have no choice but to be yourself.

“Ultimately, many people use the tools to enhance their confidence with online dating, but they don’t realise it’s doing a disservice to themselves as they deceive their matches,” Kaur said.

“It is a very real form of deception because you’re presenting as a person who isn’t you. That doesn’t show respect to the person on the other end who’s trying to get to know you.

“Relationships, whether they’re casual or committed, are built on trust, and when you start something off with even small dishonesty, you’re disrespecting whoever you’re entering this relationship with.”

How can I spot a “chatfisher”?

It can be hard to spot, especially when you don’t know the person’s usual texting style, the author admitted.

But sometimes, “chatfishers” leave clues behind.

“If the message is from someone based in the UK, but uses American spelling, this can be a key sign. Most AI tools default to Americanised spellings, opting for ‘z’ instead of ‘s’ or ‘color’ instead of ‘colour’,” she explained.

“A message that uses strange punctuation, which you wouldn’t see in regular text conversations, could also be a sign – for example, random hyphens or odd spacing before the start of sentences.”

Most of all, Kaur concluded, “it’s about trusting your own intuition. If something feels off or too curated, it probably is.”


Revealed: This Age Group Of Boys Is Most Vulnerable To Online Abuse

The online world is vast – and while it can both educate and entertain kids for hours on end, there are murkier areas where they need to tread much more carefully.

New research from safety experts at McAfee has uncovered the most common online threats facing British children, including: cyberbullying, inappropriate contact and scams.

The UK study of over 4,300 parents found that one in six say their child has been targeted by an online threat.

The highest risk group is 12-year-old boys, with almost a third (32%) being targeted.

For girls, online dangers tend to emerge later, with reports steadily climbing through the teens and peaking at age 16, where more than one in five parents (22%) say their daughter has been targeted.

What are the most common online threats facing children?

According to the research, cyberbullying or harassment from peers (48%) is the number one threat. Nearly half of UK parents say their child has experienced cyberbullying, and one in three (35%) rank it among their top three worries.

Cyberbullying can include mean comments, exclusion from online groups or spreading harmful rumours, often through social media platforms.

Scams are also a huge problem – particularly fake social media giveaways or contests (33%), which purport to be giving away gaming consoles, smartphones or designer products.

Children are lured into clicking fraudulent links or providing personal information, with boys aged 13-15 particularly vulnerable.

Similarly, online gaming can be a hotbed for scams. Over a quarter (27%) of parents report their child has been affected by gaming-related fraud, such as fake offers for in-game currency, exclusive items or upgrades.

Scammers often pose as fellow players, using familiarity and trust to get children to share passwords or personal info.

There are also concerns about unsafe or inappropriate contact. One in four UK parents say their child has received inappropriate contact online, with girls being more frequently targeted (29% versus 21% for boys). Strangers might try to initiate conversations with children via direct messages, chat rooms or even multiplayer gaming platforms.

And lastly, scam messages or phishing texts (21%) – designed to trick recipients into divulging sensitive information, such as passwords, bank details or personal data – are a problem.

Girls are significantly more likely to experience this (29%) than boys (14%), the study found, with those aged 16-18 most at risk.

The rise of AI-generated scams

Worryingly, parents are also noticing a rise in the use of AI-generated deepfakes and nudify technology. Nearly one in six UK parents say their child has experienced deepfake image or nudify app misuse.

Girls are facing this threat the most – 21% of parents say their daughter has been impacted, compared to just 11% for sons.

Boys are more likely to be targeted by AI-generated voice cloning scams, instead – where fraudsters use AI to mimic the voice of loved ones through phone calls, voicemails or voice notes.

Recently, experts advised families to come up with a “safe phrase” so they can tell if a phone call or message is an AI-generated scam or not.

Understandably, when children are impacted by these online threats, the emotional and psychological effects are significant and can include anxiety, academic struggles and social withdrawal.

How to keep kids safe

It’s clear parents need to be having ongoing conversations with their kids about online safety. (Check out these helpful guides from Internet Matters and the NSPCC if you need somewhere to start.)

But what else can we be doing to keep kids safe? Here are McAfee’s top tips:

  1. Pair tools with talks: Combine parental controls with regular, judgement-free conversations about harmful content, coercion or bullying so your children know they can come to you. Explain what cyberbullying and scams might look like, and that it’s okay to block or report people.
  2. Teach “trust but verify”: Show balanced digital habits yourself as children copy what they see. Show them how to pause, check sources and ask for help when something feels off – especially with AI-altered media.
  3. Talk about the risks of oversharing: Remind children never to share personal information such as their name, school, address or phone number. Encourage strong passwords and explain two-factor authentication.
  4. Set and revisit a family tech contract: Create clear boundaries with kids about screen time, online behaviour, and device use – and update them as your child grows.
  5. Keep devices secure: Ensure all devices are updated with the latest security settings and include AI-powered scam protection to help spot and flag suspicious links or manipulated content before it can do harm.

Tilly The ‘AI Actor’ Is Fake, But Her Potential To Harm Women Is Real

There’s a new Hollywood newcomer who already has a long list of haters: Tilly Norwood, the AI-generated “actor”.

Dutch comedian Eline Van der Velden, the head of AI production studio Particle6, which developed Tilly, said she wants the AI character to be the next Scarlett Johansson.

But not if the rest of Hollywood has its way.

After Van der Velden announced what she calls “the world’s first artificial intelligence talent studio” at a film festival and said talent agents were hoping to sign Tilly, the news sparked widespread backlash across Hollywood.

Multiple actors’ unions have released statements condemning Tilly. Actors have also accused Tilly’s makers of stealing real people’s images to make the AI-generated character.

“And what about the hundreds of living young women whose faces were composited together to make her?” actor Mara Wilson posted on social media. One Nashville-based musician even claims that Tilly is her doppelganger.

The company behind Tilly denies that the character was created with stolen images.

“Tilly was developed entirely from scratch using original creative design,” Particle6 said in a statement to HuffPost. “We do not and will not use any person or performer’s likeness without explicit consent and fair compensation.”

After outcry, Van der Velden said Tilly was “not a replacement for a human being, but a creative work – a piece of art” in a social media statement.

But no matter how original or creative you believe Tilly to be, she is definitely drawing from tired old tropes about women and raising unsettling implications for real working people, AI experts caution. Here’s what you need to know.

Tilly reinforces more of the same tired beauty standards for women

Tilly Norwood is not real, but the AI character is causing a real debate over how women’s images get used. Above are images from Tilly’s Instagram account.

For one, Tilly replicates a narrow idea of what generative AI thinks a woman should look like.

In a Washington Post investigation of three of the leading AI image tools, the Post found that generative AI thinks beautiful women should look thin, young and white – which is exactly how Tilly looks.

Particle6 did not answer HuffPost’s questions about Tilly’s appearance.

What we see on social media – including the accounts set up for Tilly on TikTok, Facebook and Instagram – might have long-term effects on how people view their own real bodies.

Safiya Noble, a professor at the University of California, Los Angeles, and the author of the book Algorithms of Oppression, said Tilly is a continuation of the kind of distortions that social media photo filters cause people.

“Those distortions, even though they are seemingly unreal, they circulate so much in our culture that then are celebrated and … liked and hearted,” Noble said. “And that certainly has a psychological and emotional toll on us.”

A 2022 study on Instagram found that browsing how other people looked on the popular social media platform was linked to “detrimental outcomes” around body dissatisfaction in young women.

Tilly’s obedience might be the most unsettling part about “her”

Above all, Tilly reveals a lot about how corporations value women’s work.

Alexandra Mateescu, a researcher with Data & Society’s Labor Futures program, said what she found most interesting and unsettling about Tilly’s existence came from a line in a Particle6 video where she appears in her first role.

In an AI-generated comedy sketch from Particle6, Tilly gets cast to be in a TV show. A man then states, “She’ll do anything I say; I’m already in love.”

That line suggests “this vision of this feminine, docile, cute, young actress who won’t talk back or complain about working conditions or anything,” Mateescu said.

That’s why, for Mateescu, her biggest worry with Tilly is “more about these kinds of marketing exercises being used as a cudgel, particularly for actors at the bottom of film industry hierarchies, to discourage them from demanding better working conditions under this threat of potentially being replaced”.

Mateescu said she has seen this power dynamic in other creative industries, like modelling. She recently co-authored a paper on how generative AI is making it easier for companies to use a model’s image and measurements and alter them without a model’s knowledge or compensation.

In her research, “people at the top of the industry, both photographers and top models, they could view AI as this creative tool in their arsenal to be able to enhance their creative practices,” Mateescu said. But struggling models doing profit-driven e-commerce catalogs were more negatively impacted. “And I think that’s sort of the same pattern we see across industries.”

In this sense, Tilly might represent a bigger existential threat to vulnerable, newer actors who do not have the same power and networks as A-list stars.

In Noble’s view, Tilly’s existence normalises “controlling women’s images” and the idea that it’s OK to “make women do what we want them to do. That culture is prevalent all around us”.

Avoiding “AI personhood” might be the best way to deal with Tilly

Tilly is not real, but it’s normal if you’re confused over what to call her. That might be by design.

Noble pointed to the character saying, “I may be AI generated, but I’m feeling very real emotions right now” in a post appearing on her Facebook page as an example of the kind of misrepresentation this AI-generated actor perpetuates.

“The more kind of anthropomorphised they are, the more misleading and deceptive they are to the public,” Noble said. “This is why these technologies are so incredibly dangerous.”

One way to resist is to be more careful about how you talk about AI-generated projects like Tilly.

Instead of seeing Tilly as an “actor,” as her profile describes her, or as the next Scarlett Johansson, as her creator hopes her to be, experts suggest you should see her for what she really is – a marketing product.

That’s why Noble suggests against calling Tilly art and instead categorising Tilly as the latest example of low-quality, spam-like “AI slop”.

And try to avoid referring to Tilly as an actor. “We should call it ‘it,’” Noble said. “We should talk about it like a machine learning model.”

“The notion of AI personhood is a marketing exercise and a legal manoeuvre that I don’t think we should buy into,” Mateescu said. “Tilly is not an actress any more than, like, Sid the sloth from the ‘Ice Age’ movies is an actor. It’s just a digital likeness.”


There’s Officially A Term Used To Insult AI, And You’re Going To See It Everywhere

You know exhaustion over artificial intelligence has reached a pinnacle when people start coming up with slurs to talk about robots.

While there are a number of contenders for dissing AI (and people who slavishly make it a part of their everyday lives), so far, the pejorative front-runner is “clankers,” a term that’s straight out of the Star Wars universe.

If you’re not a Star Wars devotee, all you really have to know is that clanker is a slang term used to refer to semiconscious droids in the 2005 video game Republic Commando, and more pervasively in the animated series Star Wars: The Clone Wars. (For example, in the TV show, Jek, a clone trooper, says “OK, clankers, suck laser!” to some battle droids before shooting them.)

Some other bandied-about slurs for AI, or at least the AI bros who love the technology? Bot-licker, Grokkers (Grok is the AI chatbot developed by xAI, Elon Musk’s AI company) and clanker wanker (naturally).

“Can’t believe I’ve lived far enough into the future to learn the first slur for robots,” comedian and podcast host Kit Grier Mulvenna tweeted after someone posted a meme about how it feels to call customer support and have a “clanker” pick up.

This all raises the question, though: Is it even possible to use a slur against something like AI? (Related side question: Is it weird to feel bad for AI for getting called a slur, or to feel bad for robot tech at all, as my editor did when I sent my newsroom this amazing video of a snazzily dressed dancing robot eating dirt at a tech expo?)

Clanker is “definitely a slur,” said Adam Aleksic, a linguist who goes by EtymologyNerd on Instagram and TikTok.

Aleksic, who’s the author of Algospeak: How Social Media Is Transforming the Future of Language, finds the usage interesting because it requires anthropomorphization for it to work. (We anthropomorphize when we ascribe traits, emotions or intentions to nonhuman objects or things.)

“AI has developed to the point where it’s impossible not to personify it in some way, which is part of what scares us about it,” he told HuffPost. “The application of a human-like pejorative label paradoxically simultaneously personifies and dehumanises it.”

Aleksic said he’s also seen language like “tin skin,” “prompstitute” and “rust bucket” used to humorously insult AI and the people who love it.

Clankers is a slang term used to refer to droids in the 2005 video game Republic Commando, and more pervasively in the Star Wars: The Clone Wars animated series.

Illustration: HuffPost


Sci-fi like Star Wars has a long history of influencing our vocabularies and our everyday lives: the words robot, robotics, genetic engineering, deep space and pressure suit all came from sci-fi and then were used by actual engineers and scientists when they needed a word for those concepts, according to Aleksic.

“Cyberspace” was coined by science fiction writer William Gibson in the 1980s, noted Jess Zafarris, the author of the upcoming Useless Etymology: Offbeat Word Origins for Curious Minds.

“Grok” is adapted from Robert A. Heinlein’s seminal 1961 novel Stranger in a Strange Land. Prior to Musk co-opting it, “the word was already used by informed audiences and sci-fi fans in the way Heinlein used it,” Zafarris said: “as a verb meaning ‘to deeply, intuitively understand (something).’”

“Astronaut” was popularised by the U.S. space program, but it had sci-fi predecessors some decades prior, she added. “Astronaut was a spaceship in ‘Across the Zodiac’ (1880) by Percy Greg.” (In Greek, “astro” means stars, while “naut” means sailor.)

Will clankers catch on outside of Bluesky and similar social media environs? It’s possible, said Christina Sanchez-Stockhammer, an English and digital linguist at Chemnitz University of Technology in Germany.

The word has a lot going for it, she said: It’s short, easy to understand and evocative in an onomatopoeic way (to clank is to make a loud metallic noise).

“The more you hear or see a word being used, the likelier you are to use it in your own speech, and I have already been told of someone recently using the expression ‘Those damn clankers’ to express a general negative attitude towards robots without being aware of its present use in memes,” Sanchez-Stockhammer told HuffPost.

Plus, it really gets at the burgeoning angst some humans have toward AI.

“Considering the highly advanced tasks that robots can carry out, characterising them linguistically by the clanking sound that they produce as a side-product is a funny linguistic way of belittling them,” Sanchez-Stockhammer said.

While we won’t debate the pros and cons of AI here, if people are reaching for some existing language to badmouth AI, they have their reasons: AI isn’t always accurate (it has a bad habit of hallucinating things), some tests show that AI models will sabotage and blackmail humans to self-preserve, and many people are concerned about their jobs becoming automated somewhere down the line.

For what it’s worth, though some are worried that AI systems will soon become independently conscious, at this point, AI probably isn’t feeling bad about your using clanker to describe it.

Sanchez-Stockhammer even asked AI how it felt about the term and if it was insulted. She reported it said this back: “Nope, I don’t feel insulted – at all. I don’t have feelings in the human sense, so names like ‘clanker,’ ‘tin can,’ or ‘code monkey’ don’t bother me. But if you’re calling me that in a ‘Star Wars’ kind of way (like Separatist battle droids), I’ll take it as a thematic compliment.”

OK, robo-nerd.


I Found My Perfect Match With The Help Of AI. Here’s What You Should Know.

Subject: You have a match!

I wanted to share some exciting news with you – we’ve found a match I think you’ll find intriguing. He’s a disciplined and driven entrepreneur with a wonderful sense of humor. He has many interesting ideas and is an excellent conversationalist. Our AI models suggest this is a great match for you. The next steps are simple…

My eyebrows rose slightly in surprise. They’d found someone.

Like most young women, I have been through my fair share of dating ― lots of fun, but lots of frustration. So three months ago, I’d decided to begin working with a matchmaking service that claimed to leverage AI models to find your perfect match.

The AI model allegedly would be able to digest my questionnaire answers and interpret all my desires in a deeper, more science-based way than any simple dating site ever could. Lisa, my matchmaker, would partner with the model to provide a human touch, using her expert judgment to validate its findings. With an “all your boxes checked” guarantee, the service seemed foolproof.

The process was rigorous and far more in-depth than any dating app I’ve ever used. I worked through the seemingly endless, mostly invasive questions about my life ― what I valued, my relationship with my family, whether I was willing to leave New York. I submitted everything from my philosophies on the afterlife to personality test results, stopping just short of giving them my blood type and mother’s maiden name.

I thought I had answered it all until I reached a line that stopped me in my tracks: “Please upload photos of your ex.” I racked my brain, sifting through all the frogs I’d kissed. Did that one guy I’d met on a whirlwind night in London and then never spoken to again count as an “ex”? The memory of his deep-set eyes convinced me that yes, he totally did.

The author at dinner in New York City.

Photo Courtesy Of Katy Pham


There was something that felt revolutionary about inputting all my fantasies into Lisa’s “build-a-man” factory. I didn’t have to just wander Fifth Avenue blindly, hoping to bump into whoever was out there. Here, I could “Weird Science” a man: give him Andrew Garfield’s eyes, Chris Evans’ arms and Chace Crawford’s glistening smile. So long as my dream man existed, AI would connect the dots and bring him to me.

Somewhere between listing out dealbreakers and sending in photos of celebrity crushes for AI analytics, I thought to myself, Maybe this is the future.

And if it wasn’t the future, well, maybe it was mine.

“OK guys, just close your eyes and tell everyone where you see yourself in five years,” my friend Lexi gushed to the rest of “the council” — the four of us girlfriends who had been joined at the hip since college. Lex closed her eyes and saw California, gentle coasts touched by the waters she grew up in. So, she packed up her entire life, a full decade spent learning in the heart of New York City, and headed home.

I’ll never forget closing my own eyes against the salt air at the pier. Perhaps I was looking for a place, like she was. But it wasn’t what came to me. I sat in the dark behind my eyelids and was overwhelmed with the bittersweet loneliness that comes from living in a place like New York. It is a place built on comings and goings, on the guaranteed peace in the knowledge that nothing is permanent and the sadness over the same.

When my eyes closed, I did not see a place. I saw a home. A sense of belonging, not with a specific skyline to anchor me, but a person. That sense of homecoming people talk about when they find the person they want to build a world with.

I opened my eyes against the sun.

Dylan had messy hair. It wasn’t the kind that said he’d just rolled out of bed; it was the kind that said he’d spent time in front of the mirror to make it look that way. A little scar over his eyebrow made him look tougher than he really was. His dark brown and sharply intelligent eyes sparkled with wit, enthusiasm and passion.

Two of my previous matches hadn’t materialised, either due to distance or lack of interest, but this one had snagged something in my chest the moment I’d looked at his profile. Our values matched everywhere that mattered, our interests overlapped when they needed to and diverged just enough to give us space to teach each other new things. He seemed, as the digital model had promised, built for me.

Walking up to the quaint little wine bar he’d picked, right in the heart of West Village, I was insanely nervous – something about science and a matchmaker telling you they’d found you “the one” laid the pressure on thicker than Hinge ever did. And in person, he did not disappoint.

I’d thought the foreknowledge would make things easier. We could sweep aside little nothings like, “So, what do you do for a living?” and dive right into each other’s hopes and dreams and fears. But my hands were slick with the immediate worry and thrill of intimacy that I’d never known could exist between two people who hadn’t had so much as a conversation.

I could look into his eyes and know what no one else in this bar knew. I knew he studied film and loved the outdoors; I knew his childhood pet’s name, his low preference for pizza (or gluten in general). I knew what kind of parenting style he planned to use one day and for how many kids.

That little twinkle people have, when they’ve been together for years? The kind that has them communicating secrets across a crowded room? We had it. We knew everything. I spent half the date trying to determine whether I was supposed to go all in or pretend I didn’t know anything about him. But he knew I knew. It was unclear what rulebook we were supposed to be playing by.

Regardless, I remembered: Somewhere, some digital force of omniscience had rubber stamped the date, guided by a human hand. We were supposed to be here, meeting each other. It was green flags all the way down.

It turned out, of course, that there was more to learn. A person is more than a collection of ideas on a profile. Dylan had grown up in New York, the eldest of three kids. He was well spoken in a way that pointed to his privileged background, with the wild spirit (and resources) that meant that he could — and did — try out every single hobby that had ever piqued his interest. Still, he was impossibly down to earth.

Not enough glasses of wine into the date to be tipsy, he looked at me with an arched eyebrow and confessed, “I actually scored really high on my SATs. I know it’s been over a decade, but sometimes, I still try to work it into first date conversations.”

A laugh bubbled out of me. A man leading with his SAT score on a first date was something that, if I hadn’t liked him already, might have put me off. But I did like him, so the dorky flex was endearing. So much about him was, and as the first-date jitters wore off little by little, we started to relax into each other.

Date one turned into date two. Which turned into three, and, well, you know the story.

“You’re colour blind? How did you find out?”

“Well, the fluorescent pink pants I brought home from the mall in middle school were hint number one.”

“If you were to be stuck in a time loop and had to pick one person to tell about it, who would it be?”

“My sister. We’ve always been close; she’s incredible. I can just trust her with anything. She’d drop anything to … uh … help me out of a time warp. Honestly, I also think she’s my best shot at getting back to reality.”

He was everything I had asked for, everything I believed a man should be ― kind, smart, funny, thoughtful and protective … all handed to me by an algorithm.

I’d started dreaming already — not of electric sheep, but of digitally borne boyfriends.

On our last date before I left the country to spend a couple weeks in Asia, we went bowling. I am not a great bowler, but I’m never afraid to fail. This one, I wanted to win, because we’d decided to make it interesting. If I won, he’d write me the story of how we met from his point of view. If he won, I simply had to plan our next date.

I got one strike. The love letter was not to be.

But I’d started planning the date the second I’d seen the final numbers. After all, what’s the point of loving if you are afraid to dive in with gifts and plans that say, “I listen, I care, and I want you to feel special”?

He kissed me.

I dreamt about tomorrow.

I got on the plane.

The author during her trip in Asia.

Photo Courtesy Of Katy Pham


The photo dumps came as we’d planned them — vibrant and fun and full of everything I’d started falling for Dylan over. This was a man who loved life and didn’t say no to new experiences. I responded in kind, with snapshots of friends and family, of exotic dishes tasted and walks along the coast. Sets of images sent back and forth that reminded us of who we were and that we were in this.

I’m not sure exactly when the pictures started coming less often. Texts got sparse, fewer snapshots were traded from phone to phone, questions about the aforementioned special date went uncommitted to. The maybe embarrassingly detailed dreams I’d started having about tomorrows with him began to blur.

Things with Dylan died slowly, quietly, without fanfare or the need for hauntings. The modern solution I’d thought was going to revolutionise dating ― AI ― was eclipsed by another modern epidemic: ghosting. In the end, we were left with the substance of most ghost stories: unfinished business. But not the kind that needs to be tended to before each party can move on.

The connection with Dylan was gorgeous and real and temporary, like some things are. I suppose, when it comes to dating, when you’re not so worried about running into a match in a neighbourhood coffee shop or at a mutual friend’s party, it’s easy to just … log off. You don’t bid a website a lengthy farewell when you decide to stop playing; you simply don’t come back.

These days, it seems everywhere you turn, someone claims they have finally cracked the code, uncovered the hidden formula to our heart’s desire. The certainty is so contagious that for a fleeting moment, it feels like you can join them at the edge of some great revelation. But the reality is that their certainty is something we rent, not own, lending us a fleeting, false sense of control in a world that remains stubbornly unpredictable.

I wonder, sometimes, if I’m wrong. Maybe my future won’t come to me generated by an all-knowing digital system. Maybe it will come via a chance meeting on the street, in line behind a stranger. Is it sillier to trust an algorithm or a fortune teller who claims they know the secrets of a chaotic universe? Or to trust the chaotic universe itself?

The tall man in front of me, with the lopsided grin, heather gray T-shirt, and worn paperback falling out of his bag, steps to the front of the line to order his coffee. He orders it the way I do.

My phone begs for my attention.

I look away from him and give it what it asks.

There’s an email in my inbox.

You’ve got a match!


Elton John Brands Labour Ministers ‘Absolute Losers’ Over AI Copyright Row

Elton John on the BBC this morning.

Sir Elton John has branded Labour ministers “absolute losers” amid the row over artificial intelligence (AI) laws.

The world-famous pop star said science secretary Peter Kyle was “a bit of a moron” for considering allowing tech companies to use artists’ work to create content without paying for it.

The row centres around the government’s Data (Use and Access) Bill, which is currently going through parliament.

Ministers last week rejected proposals from the House of Lords to force AI companies to disclose what material they were using to develop their programmes.

Speaking to the BBC’s Laura Kuenssberg, Sir Elton said the government’s current plans would allow AI firms to “commit theft, thievery on the highest scale”.

“Some people aren’t like me, they don’t earn as much as I do,” he said.

“When they’re creative, and it comes from the human soul and not from a machine – because a machine is not capable of writing anything with any soul in it – if you’re going to get rid of that and you’re going to rob young people of their legacy and their income, it’s a criminal offence, I think.

“I think the government are just being absolute losers. I’m very angry about it, as you can tell.”

Sir Elton said Keir Starmer needed to “wise up” about the threat to the creative industries and that Kyle – who has been accused of being too close to tech giants – was “a bit of a moron”.

Responding to Sir Elton’s comments, Cabinet Office minister Nick Thomas-Symonds said: “The government is trying to find a way forward that is dealing with the concerns that are being raised quite passionately by the cultural sector, but also making sure that we maintain that world-leading position in AI.

“Pursuing those two objectives is the balance we need to strike.”

The minister also said he “profoundly disagreed” with Elton John’s claim that Peter Kyle is “a moron”.

A government spokesperson said: “We want our creative industries and AI companies to flourish, which is why we’re consulting on a package of measures that we hope will work for both sectors.

“We’re clear that no changes will be considered unless we are completely satisfied they work for creators.”

Give Over, Mark Zuckerberg – AI Friends Are Only Good For Tech Bros Like You

We’re well and truly in a loneliness epidemic, with young and old members of all genders struggling with feelings of isolation.

As if the news couldn’t get grimmer, Mark Zuckerberg has “answers” – speaking to podcaster Dwarkesh Patel, the tech entrepreneur suggested we should all be talking to more artificially intelligent chatbots.

“There’s the stat that I always think is crazy, the average American, I think, has fewer than three friends,” he said. “And the average person has demand for meaningfully more, I think it’s like 15 friends or something, right?

“The average person wants more connectivity… than they have,” he continued, hinting that AI could bridge that gap.

Zuckerberg admits there’s a “stigma” around talking to AI pals, that the tech is “still very early,” that in-person interactions are “better” for us, and that we don’t yet have the “vocabulary” to describe how AI relationships might look.

But he’s not the only “tech bro” to pin his hopes on digital mates. So what’s going on?

Zuckerberg’s not the only one who seems to like AI pals

Henry Blodget, a co-founder and former CEO of Business Insider, recently created a series of bots which he dubbed a “native AI newsroom” to help him manage his Substack, Regenerator.

He then seemed to hit on his AI “worker” Tess Ellery, telling her: “This might be an inappropriate and unprofessional thing to say, and if it annoys you or makes you uncomfortable, I apologise, and I won’t say anything like it again. But you look great, Tess.”

He admitted the move would warrant an HR call in real life, but said “phew” when the (AI!!!) woman seemed completely fine with it.

The move is both hilarious and quite illustrative.

In his post, Blodget identified a key difference between real friends and digital ones: your mates are human, have rights, and may sometimes behave inconveniently (including by questioning you).

This acquiescence may make bots “addictive”

A billionaire class also obsessed with tech-y “solutions” to the “problem” of mortality may feel soothed by the idea of pixelated “yes men”, but perhaps the rest of us ought to be less jazzed about them.

AI chatbots have been accused of “encouraging” problematic behaviour from users before.

404 Media also alleges that Meta’s chatbots are generating “fake” AI therapists – as an aside, some human therapists warn against any AI therapy, with one telling HuffPost UK it could make us lonelier.

Speaking to HuffPost UK, Jaclyn Spinelli, registered psychotherapist and founder of True Self Counselling, warned that for some “vulnerable” people, dependence on AI – which is “consistent, not impacted by emotions, objective, and always available” – could “end up looking very similar to an addiction.”

If companies like Meta own the bots we speak to as often as Zuckerberg seems to desire, it’s hard not to see the financial advantages for tech billionaires – especially amid the current loneliness epidemic.

Meanwhile, the rest of us might be left worse off.
Meanwhile, the rest of us might be left worse off.