AI research nonprofit receives funding to study AI safety

AI Safety

Gradient Institute, an independent, nonprofit research institute that works to build safety, ethics, accountability and transparency into artificial intelligence (AI) systems, has received a donation from Cadent, an ethical technology studio, to advance research on technical AI Safety.

As AI systems evolve rapidly, the tools to ensure their safe development and deployment remain underdeveloped. This donation will support Gradient Institute’s efforts to address this critical gap.

Cadent’s donation will fund a three-month research project by a PhD student working on AI Safety under the supervision of Gradient Institute researchers. The project will investigate the potential misuse of large language models to manipulate individuals for commercial, political, or criminal purposes, and explore original technical defences against such threats.

This research is also expected to provide insights for the development of future standards and regulations to help protect citizens against subliminal, AI-powered forms of scams and political propaganda. The findings of the project will be documented in a research paper to be produced by Q2 2024.

Gradient Institute’s Chief Scientist, Dr Tiberio Caetano, highlights the importance of investment in AI Safety research.

“Today’s reality is that AI systems have become very powerful, but not as safe as they are powerful,” he said. “If we want to keep developing AI for everyone’s benefit, it’s imperative that we focus more on making these systems safer to close this gap.”

This donation is also a key part of Cadent’s mission. As a Social Traders Certified social enterprise, more than 50 per cent of Cadent’s annual profits are reinvested in charities and projects dedicated to causes such as AI safety.

Cadent’s Managing Director, James Gauci, encourages others to consider supporting Gradient Institute’s vital research.

“In an age where the latest large-scale hack or major AI model is just around the corner, ethical considerations in technology and AI have become paramount,” he said. “We believe that all technologists must rise to the occasion.”

In the past 10 years, the computing power (often referred to as “compute”) used to train top-tier AI systems has surged by a factor of 10 billion. To put this into perspective, this rate of growth matches that observed across AI’s entire 60-year history prior to this decade. Crucially, in today’s AI development landscape, greater compute translates directly into enhanced AI capabilities.
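To put that figure in concrete terms (an illustrative calculation based on the numbers quoted above, not part of the original announcement), a total increase of $10^{10}$ spread over ten years corresponds to an annual growth factor of
$$\bigl(10^{10}\bigr)^{1/10} = 10,$$
that is, roughly a tenfold increase in training compute every year, equivalent to a doubling approximately every $12\log_{10}2 \approx 3.6$ months.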

This means an AI’s skills can be amplified simply by allocating more computing resources. For example, a tenfold increase in compute could potentially enable an AI to master a new language, provide instructions for chemical synthesis, or even code like a seasoned programmer, all without new foundational research.

However, this rapid evolution comes with challenges.

While the intelligence of an AI system scales with more compute, its safety doesn’t follow suit. Some large language models (LLMs) have shown potential to aid in synthesising chemical weapons or creating pandemic-grade pathogens.

Studies suggest that as these LLMs grow smarter, they might acquire advanced persuasive abilities, posing a risk of large-scale manipulation and deception, whether for commercial, political, or malicious purposes. Furthermore, these models could lower the barriers for cyberattacks, increasing their frequency and threat to critical infrastructure.

But there’s hope. Through intensive research, it is possible to embed safety mechanisms into advanced AI systems. This encompasses creating technical assurances that AI systems don’t engage in dangerous activities, such as providing guidance on weapon creation, employing deceptive tactics, or disseminating harmful misinformation.

 
