Mustafa Suleyman, a co-founder of Google’s artificial intelligence lab DeepMind, has issued a stark warning about the potential dangers posed by AI.
Suleyman is deeply concerned that unchecked advancements in AI might lead to new, more lethal biological threats.
Speaking candidly on The Diary of a CEO podcast, Suleyman stated, “The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible.”
Suleyman further added that these synthetically designed pathogens, possibly resulting from AI-driven research, “can spread faster or [be] more lethal…They cause more harm or potentially kill, like a pandemic.”
He emphasized the urgent need for stricter regulation surrounding AI software.
Raising a hypothetical yet plausible concern, Suleyman shared his “biggest fear”: that within the next five years, a “kid in Russia” could genetically engineer a new pathogen and trigger a pandemic “more lethal” than anything the world has faced thus far.
Recognizing the immediacy of this potential threat, he said, “That’s where we need containment. We have to limit access to the tools and the know-how to conduct such high-risk experimentation.”
As the tech industry converges in Washington on September 13 for an AI summit led by Senate Majority Leader Chuck Schumer, Suleyman’s voice resonates with a sense of urgency.
“We in the industry who are closest to the work can see a place in five years or 10 years where it could get out of control and we have to get on top of it now,” he said.
“We really are experimenting with dangerous materials. Anthrax is not something that can be bought over the internet and that can be freely experimented with,” he continued. “We have to restrict access to those things.”
“We have to restrict access to the software that runs the models, the cloud environments, and on the biology side it means restricting access to some of the substances,” Suleyman added.
His statements echo a broader sentiment within the tech community. In March, a host of tech magnates signed an open letter advocating a six-month moratorium on training advanced AI systems.
Elon Musk, CEO of Tesla and SpaceX, has voiced similar concerns, drawing on fictional narratives to caution that unchecked AI could turn hostile, much like the machines in the “Terminator” film series.
Suleyman ended on a contemplative note, stating, “Never before in the invention of a technology have we proactively said we need to go slowly. We need to make sure this first does no harm.”
“That is an unprecedented moment,” he added. “No other technology has done that.”