
Meta’s new AI chatbot on WhatsApp sparks privacy fears


WhatsApp’s newly launched AI feature has sparked frustration among users after it emerged the tool cannot be removed from the app, despite the company describing it as “entirely optional.”

The Meta AI chatbot, represented by a multi-coloured blue circle on the Chats screen, provides users with AI-driven responses to queries.

However, its permanent presence in the interface has drawn comparisons to other controversial tech rollouts, such as Microsoft’s now-revised Recall feature.

A WhatsApp spokesperson said the company sees the tool as similar to other non-removable features such as Channels or Status, and added: “We think giving people these options is a good thing and we’re always listening to feedback from our users.”

The backlash echoes wider concerns about user control and digital privacy as tech firms integrate AI deeper into everyday services.

The introduction of WhatsApp’s AI assistant coincides with Meta’s announcement of a separate update aimed at teen users on Instagram.

The company disclosed it is piloting an artificial intelligence system in the United States that can identify accounts created by minors who may have provided false age information.

As for WhatsApp’s AI assistant, not all users will see the new blue circle icon yet. Meta confirmed the tool is being gradually introduced across select regions and may not appear immediately, even within countries where it is available to others.

The blue circle, which appears in the corner of the chats screen, is accompanied by a search bar prompting users to “Ask Meta AI or Search.”

The same feature is being integrated into Facebook Messenger and Instagram, both of which are also owned by Meta.

This chatbot is driven by Meta’s own large language model, Llama 4. Before interaction, users are presented with a detailed introduction explaining the tool’s purpose and noting that its use is “optional.”

According to Meta’s website, the AI can provide answers to questions, offer educational insights, or assist with creative thinking. In testing, the chatbot returned accurate weather details for Glasgow within seconds, including temperature, rainfall probability, wind and humidity.

However, one suggested link mistakenly referenced Charing Cross station in London rather than the Glasgow location.

Public reaction, particularly in Europe, has been mixed. Users on platforms such as X (formerly Twitter), Bluesky and Reddit have voiced frustration over the feature’s permanence. Among them, columnist Polly Hudson criticised the inability to disable the assistant.

AI and privacy expert Dr Kris Shrishak offered sharper criticism, alleging Meta is leveraging its vast user base to test AI products and gather data. He argued that Meta’s AI development process involved “privacy violations by design” through the use of scraped online content, including pirated books.

A report by The Atlantic suggested Meta may have accessed millions of pirated texts via Library Genesis (LibGen) to train Llama. Author groups globally are now campaigning for government intervention, while Meta faces legal action from writers over the use of their intellectual property.

Asked about the Atlantic findings, a Meta spokesperson declined to comment.

While Meta has stated that the chatbot only accesses messages users send directly to it—and that all personal chats remain end-to-end encrypted—concerns remain.

The UK’s Information Commissioner’s Office said it is monitoring how Meta AI processes personal data on WhatsApp, especially involving minors.

“AI development depends heavily on personal data,” the agency said. “Organisations must ensure they meet legal obligations, particularly where children are concerned.”

Dr Shrishak has urged users to exercise caution when interacting with Meta AI.

He explained that while end-to-end encryption protects messages exchanged between friends, communication with the chatbot operates differently.

“When you’re chatting with a friend, encryption keeps Meta out,” Shrishak said. “But when you use Meta AI, one side of the conversation is Meta itself.”

Meta has also warned users to think carefully about what they share with the AI assistant.

In guidance published on its site, the company advises against submitting any personal or sensitive details users would not want stored or referenced.

“Only share information you’re comfortable with being retained and potentially used,” the company said.
