
Gradient Institute Receives Donation From Cadent For AI Safety Research

SYDNEY, AUSTRALIA, NOVEMBER 9, 2023 - Gradient Institute, an independent, nonprofit research institute that works to build safety, ethics, accountability and transparency into AI systems, has received a donation from Cadent, an ethical technology studio, to support research on technical AI safety.

As AI systems evolve rapidly, the tools to ensure their safe development and deployment remain underdeveloped. This donation will help Gradient Institute’s efforts in addressing this crucial gap.

Cadent’s donation will fund a three-month research project by a PhD student working on AI safety under the supervision of Gradient Institute researchers. The project will investigate how large language models could be misused to manipulate individuals for commercial, political or criminal purposes, and will explore original technical defences against such threats. The research is also expected to inform the development of future standards and regulations to help protect citizens against AI-powered subliminal scams and political propaganda. The findings will be documented in a research paper to be produced by Q2 2024.

Gradient Institute’s Chief Scientist, Dr Tiberio Caetano, highlights the importance of investment in AI Safety research.

“Today's reality is that AI systems have become very powerful, but not as safe as they are powerful,” he said. “If we want to keep developing AI for everyone's benefit, it's imperative that we focus more on making these systems safer to close this gap.”


This donation is also a key part of Cadent’s mission. As a Social Traders Certified social enterprise, more than 50 per cent of Cadent’s annual profits are reinvested in charities and projects dedicated to causes such as AI safety.

Cadent’s Managing Director, James Gauci, encourages others to consider supporting Gradient Institute’s vital research.

“In an age where the latest large-scale hack or major AI model is just around the corner, ethical considerations in technology and AI have become paramount,” he said. “We believe that all technologists must rise to the occasion.”

In the past 10 years, the computing power (often referred to as "compute") used to train top-tier AI systems has surged by a factor of 10 billion. To put this into perspective, that growth matches everything accumulated over AI's entire 60-year history prior to this decade. Crucially, in today's AI development landscape, greater compute translates directly into enhanced capabilities: an AI's skills can be amplified simply by allocating more computing resources. A tenfold increase in compute, for example, could potentially enable an AI to master a new language, instruct on chemical syntheses, or code like a seasoned programmer, all without new foundational research.
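That growth rate implies a strikingly short doubling time. As a rough back-of-the-envelope check, using only the figures quoted above (a 10-billion-fold increase over 10 years; the derived doubling time is an illustration, not a sourced statistic), the arithmetic works out as follows:

```python
import math

# Figures quoted in the article: training compute for top-tier AI
# systems grew ~10-billion-fold over roughly 10 years.
growth_factor = 1e10
years = 10

# Number of doublings needed to reach a 10-billion-fold increase.
doublings = math.log2(growth_factor)            # about 33.2

# Implied average time between doublings, in months.
doubling_time_months = years * 12 / doublings   # about 3.6

print(f"{doublings:.1f} doublings, one roughly every "
      f"{doubling_time_months:.1f} months")
```

Roughly 33 doublings in a decade, or one every few months — far faster than the hardware-driven trends of earlier computing eras.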

However, this rapid evolution comes with challenges.

While the intelligence of an AI system scales with more compute, its safety doesn't follow suit. Some large language models (LLMs) have shown potential to aid in synthesising chemical weapons or creating pandemic-grade pathogens. Studies suggest that as these LLMs grow smarter, they might acquire advanced persuasive abilities, posing a risk of large-scale manipulation and deception, whether for commercial, political, or malicious purposes. Furthermore, these models could lower the barriers for cyberattacks, increasing their frequency and threat to critical infrastructure.

But there is hope. Through intensive research, it is possible to embed safety mechanisms into advanced AI systems. This includes creating technical assurances that AI systems do not engage in dangerous activities, such as providing guidance on weapon creation, deceiving users, or disseminating harmful misinformation.

Gradient Institute welcomes additional supporters for its mission of building safety, ethics, accountability and transparency into AI. As a government-approved research institute and charity, all donations to Gradient Institute are tax-deductible. If you are interested in learning more about donations, and other ways you could support Gradient Institute, please visit: https://www.gradientinstitute.org/support-us/.

ABOUT GRADIENT INSTITUTE

Gradient Institute is an independent, nonprofit research institute that works to build ethics, accountability and transparency into AI systems: developing new algorithms, training organisations operating AI systems and providing technical guidance for AI policy development. With AI systems for automated decision-making proliferating rapidly, and increasingly being used to make or influence decisions in all areas of human endeavour — including education, health, finance, media, employment and retail — it is now important to explore how to ensure such systems do not perpetuate systemic inequality or lead to significant harm to individuals, communities or society. For more, visit www.gradientinstitute.org.

© Scoop Media
