Meta launches AI-driven responses, but experts warn of potential dangers

Meta’s latest move into artificial intelligence has sparked both interest and concern.

The company has rolled out a new AI feature across its platforms, along with a standalone app, designed to generate personalised responses based on the information users have already provided — including identity data and preferences.

On the surface, the tool offers convenience. Users can interact with the AI agent in a conversational way, receiving tailored responses without the need to search or filter through unrelated results. However, questions about privacy and safety are growing louder.

According to Kok-Leong Ong, Professor of Business Analytics in the College of Business and Law at RMIT University, these tools are attractive because of their ease of use and accuracy.

“AI agents are becoming increasingly popular because they are easy to use and provide accurate information. Users can submit a conversational request and receive relevant answers that draw from data in the ecosystem from which a user has subscribed,” Ong says.

But as their popularity rises, so do the risks — particularly for younger users. “We have already seen Mark Zuckerberg apologise to families whose children were harmed by using social media. AI agents working in a social context could heighten a user’s exposure to misinformation and inappropriate content,” Ong warns.

“This could lead to mental health issues and fewer in-person social interactions.”

The concern doesn’t stop there. Meta’s vast data collection means the new AI features draw on an already deep pool of personal information. While the personalised responses may seem helpful, users may not realise how much of their own social media data is being used — or how to control it.

He adds that users need to be cautious when managing settings and accepting new terms on Meta’s platforms.

“Meta already has a huge amount of information about its users. Its new AI app could pose security and privacy issues. Users will need to navigate potentially confusing settings and user agreements. They will need to choose between safeguarding their data versus the experience they get from using the AI agent. Conversely, imposing tight security and privacy settings on Meta may impact the effectiveness of its AI agent.”

While the technology holds promise, especially in public relations and digital communication, the key takeaway is to use it mindfully.

“That’s not to say we shouldn’t use AI agents. But we should all look at mitigating risks, including by regularly reviewing settings, understanding newly introduced terms and conditions, and being mindful about the sensitive information you share on these types of apps,” Ong concludes.
