
Australian businesses enhance resilience with advanced AI protection

Corporate communications face a significant cyber threat from deepfakes generated by artificial intelligence.

Businesses are increasingly concerned about the impact of these cyber risks as AI technology continues to advance, making deepfakes more realistic and more easily accessible. AI-manipulated media can convincingly alter audio and visual content, creating the potential to fabricate events or statements.

This capability poses a notable risk to the credibility of business communications, potentially eroding trust and consumer confidence. In Australia, the reality of the deepfake threat is undeniable: in the past year, a significant number of Australian businesses have experienced information security incidents related to deepfakes. This concerning statistic underscores how important it is for businesses to fully understand and address the risks linked to deepfakes.

Incidents surge in Australia

Security breaches tied to deepfakes are on the rise in Australia, with nearly a quarter of businesses confirming such events in the last year. These incidents range from tampered audio recordings to AI-created video impersonations, and they have caused significant financial and reputational damage. A worrying trend is the growing occurrence of non-consensual sexual deepfakes, in which explicit content features the superimposed likeness of an individual. This abuse has prompted strict laws, with those producing and distributing such content facing up to seven years in jail. Another alarming issue is the use of deepfake images by predators targeting minors.

The eSafety Commissioner of Australia estimates that a shocking 90% of deepfakes are explicit, highlighting the urgent need for protective measures for vulnerable people. The rising number of these incidents underscores the vital need for businesses and individuals to understand and mitigate the risks posed by deepfakes. As the technology continues to evolve, so does the potential for misuse, making it essential to stay informed and vigilant against this growing threat.

Legal response to breaches

In the face of the escalating deepfake cyber threat, the Australian government has acted decisively. The Attorney-General has proposed a bill that bans the distribution of deepfakes without consent, with the goal of deterring malicious actors and shielding individuals and businesses from the abuse of deepfake technology. The proposed law imposes strict criminal penalties on those who distribute sexually explicit content without consent, including content digitally produced using AI or other technologies. The penalties include up to six years in jail for distributing non-consensual, sexually explicit deepfake material and up to seven years for producing and distributing such content.

Despite these safeguards, Australia currently has no specific laws dealing with the use of fabricated material. Existing legal frameworks, such as defamation, copyright, and consumer protection laws, offer potential remedies for victims of deepfakes. As deepfake technology progresses, so must the legal and regulatory frameworks. The Australian government’s commitment to addressing this issue is clear in its legislative actions and ongoing efforts to strengthen existing laws. However, the fluid nature of this threat demands constant vigilance and flexibility.

Advanced deepfake detection technologies

Companies are actively harnessing advanced technologies to tackle the dangers of misinformation. The development and application of pioneering tools for deepfake detection and validation are at the heart of this effort. CyberCX stands out as a company that delivers specially crafted solutions to identify and counteract deepfake threats. These solutions use advanced detection technology to spot deepfakes and confirm the authenticity of content, shielding businesses from potential harm.

Along with these detection and validation tools, companies are setting up strict policies and standards to deal with harmful and illegal deepfakes. These tactics include procedures for screening and removal, as well as methods to detect and flag deepfakes within their user base. While these technology-based defences are essential, they are not the only solutions: they are components of a wider strategy that includes legal action, corporate policies, and public education campaigns. As deepfake technology keeps progressing, the countermeasures must keep pace, demanding constant innovation and alertness.

Deepfake scams unveiled

Deepfake technology has played a key role in several notable frauds, leading to significant financial damages.

  • Hong Kong Heist: A deepfake con led to a global corporation’s Hong Kong branch losing US$25.6 million. The fraudsters used deepfake technology to create a multi-person video conference where all participants, except the victim, were fabricated images of real people. Using publicly available video and audio clips, the fraudsters successfully duplicated the appearances and voices of the targeted individuals.
  • British Energy Firm Fraud: A British energy company fell victim to a deepfake scam after an urgent call from someone posing as the CEO of the company’s German parent firm. The fraudsters likely used commercial voice-generating software to execute the scam.
  • Thai Extortion Scam: Thai criminals used deepfakes to pose as police officers in extortion video calls in early 2022.
  • Indian Deepfake Scam: A 73-year-old Indian man fell victim to a deepfake scam when he received a call from someone posing as his former coworker and asking for money.

These case studies underline the urgent need for robust security measures to counter the growing threat of deepfake scams. They stress the importance of awareness, vigilance, and the use of advanced technologies to detect and counter such threats.


Challenges to corporate trust

Deepfake technology’s rapid growth poses a significant challenge to corporate trust and the integrity of business communication. AI-manipulated videos impersonating corporate executives have severely damaged trust among stakeholders and the public. These deceptive strategies exploit weaknesses in digital communication channels, threatening the reliability of corporate messaging and the authenticity of leadership.

The distribution of fake content via deepfakes has sparked fears about misinformation and its potential impact on market stability and investor confidence. For instance, videos manipulated to show corporate leaders making false statements can cause drastic stock price swings and irreversibly tarnish a brand’s reputation. To combat these threats, companies need to put robust cybersecurity measures in place and promote a culture of deepfake threat awareness.

Proactive steps such as advanced authentication technologies and media monitoring systems are vital to bolstering defences against malicious actors seeking to exploit corporate vulnerabilities. By investing in employee training and awareness programmes, businesses can prepare their workforce to identify and counter potential deepfake incidents, thereby preserving corporate trust and enhancing communication resilience in an increasingly digital world.

Deepfakes, artificial intelligence’s deceptive creations, pose an escalating threat to corporate communications. These digital frauds can convincingly impersonate key individuals, spreading false information and creating an environment of scepticism. This not only damages trust, an essential resource for any corporation, but also obstructs the open dialogue necessary for effective communication. In response to these threats, Australian businesses and the government are actively implementing measures through legislation, technology, and education.

However, the rapid progression of this threat requires unceasing alertness and adaptation. Businesses must prioritise investing in employee training and awareness campaigns to detect and counter deepfake threats. Looking forward, deepfakes are expected to become more sophisticated, making detection increasingly challenging. This emphasises the urgent need for ongoing research and the development of countermeasures. To maintain the effectiveness of their defences, businesses must keep up to date with the most recent advancements in this domain.

This post was also published on Public Spectrum.

Comms Room Staff
A new knowledge platform and website aimed at assisting the communications industry and its professionals. Contribute your op-ed, press releases, how-to articles, videos and infographics at media@commsroom.co