Global AI Safety Week: Powerful computing efforts launched to boost research

Two major steps towards government oversight of artificial intelligence (AI) took place this week in the United States and the United Kingdom. Behind both initiatives are moves by each country to strengthen AI research capabilities, including efforts to expand access to the powerful supercomputers needed to train AI.

On October 30, US President Joe Biden signed his nation’s first artificial intelligence (AI) executive order, which sets out a sweeping set of guidelines directing how US federal agencies should use AI and putting guardrails on the technology. And on November 1–2, the United Kingdom hosted the historic AI Safety Summit, convened by Prime Minister Rishi Sunak and attended by representatives of more than two dozen countries and of tech companies including Microsoft and Meta. The summit, held at Bletchley Park, the famed wartime code-breaking facility, produced the Bletchley Declaration, an agreement to better assess and manage the risks of advanced “frontier” AI systems that could be misused, for example to develop dangerous technologies such as biological weapons.

“We were talking about AI that doesn’t exist yet, things that are going to be released next year,” says Yoshua Bengio, an AI pioneer and scientific director of Mila, the Quebec Artificial Intelligence Institute in Canada, who attended the summit.

Both countries have committed to developing a national AI research resource that aims to give AI researchers cloud-based access to heavy computing power. The UK, in particular, is investing heavily, says Russell Wald, who leads the policy and society initiative at the Stanford Institute for Human-Centered Artificial Intelligence in California.

These efforts make sense for a field that relies heavily on expensive computing infrastructure, says Helen Toner, a policy researcher at Georgetown University’s Center for Security and Emerging Technology in Washington DC. “A major trend in the last five years of AI research is that you can get better performance from AI systems just by scaling them up. But it’s expensive,” she says.

Bengio agrees: training a frontier AI system takes months and costs tens or even hundreds of millions of dollars, which is currently out of reach for academia. Both research-resource initiatives aim to democratize these capabilities.

That’s a good thing, Bengio says. Currently, all the capability to work with these systems is in the hands of companies that want to make money from them, he says; we also need universities and government organizations that are genuinely working to protect people, so that they can better understand these systems.

All bases

Because it is not a law passed by Congress, Biden’s executive order is limited to directing the work of federal agencies. Still, Toner says, the order is broad in scope: what it shows is the Biden administration taking AI seriously as an all-purpose technology, and it is a good thing, she says, that it tries to cover a lot of bases.

One important emphasis of the order, Toner says, is the creation of much-needed standards and definitions in AI. People use words such as “unbiased”, “robust” or “explainable” to describe AI systems, she notes. They all sound good, but in AI there are almost no standards for what these things actually mean, which is a big problem. The order calls on the National Institute of Standards and Technology to develop such standards, alongside tools (such as watermarking) and “red-team” testing, in which benign actors try to exploit a system to probe its security, to ensure that AI-powered systems are safe and reliable to use.

The executive order directs agencies that fund life science research to develop standards to protect against the use of artificial intelligence to engineer hazardous biological materials.

Agencies are also directed to help skilled immigrants with AI expertise to study, stay and work in the United States. And the National Science Foundation (NSF) is to fund and launch at least one regional innovation engine that prioritizes AI-related work and, within the next 18 months, to establish at least four national AI research institutes, on top of the 25 currently funded.

Research resources

Biden’s order commits NSF to launch within 90 days the proposed National Artificial Intelligence Research Resource (NAIRR) system to provide access to powerful, AI-enabled computing power through the cloud. “There’s a lot of excitement about this,” Toner says.

“This is what we have championed for years,” Wald says. “It’s recognition at the highest level that this is needed.”

In 2021, Wald and his colleagues at Stanford published a white paper outlining what such a service might look like. In January, a NAIRR task-force report called for $2.6 billion in funding over an initial six-year period. “It should be considerably larger, in my opinion,” Wald says. He says lawmakers must pass the CREATE AI Act, a bill introduced in July 2023, to unlock funding for a full-scale NAIRR. “We need Congress to take this seriously and invest,” Wald says. If they don’t, it will be left to the companies.

Similarly, the UK is creating a national AI Research Resource (AIRR) to provide supercomputer-level computing power to a variety of researchers keen to study frontier AI.

The UK government announced plans for the AIRR in March. At the summit, it announced that it would triple the AIRR investment from £100 million (US$124 million) to £300 million, as part of an earlier £900-million investment to transform the UK’s computing capacity. Relative to population and GDP, the UK investment is far more substantial than what the United States has put forward, Wald says.

The project is supported by two new supercomputers: Dawn in Cambridge, which is due to go live in the next two months; and the Isambard-AI cluster in Bristol, which is expected to come online next summer.

Simon McIntosh-Smith, director of the Isambard National Research Facility at the University of Bristol, UK, says that Isambard-AI will be one of the world’s top five AI-capable supercomputers. Along with Dawn, these capabilities mean that UK researchers will be able to train even the largest frontier models in a reasonable amount of time, he says.

Such moves will help countries such as the UK to develop the expertise needed to channel AI towards the public good, says Bengio. But legislation is also needed, he adds, to protect against future AI systems that are both smart and hard to control.

We are on a path to building systems that are both very useful and potentially dangerous, he says. “We already ask pharmaceutical companies to spend a lot of money to prove that their drugs aren’t toxic. We should do the same.”

Image Source : www.nature.com
