
Governments, firms should spend more on AI safety, top researchers say


By Supantha Mukherjee

STOCKHOLM (Reuters) – Artificial intelligence companies and governments should allocate at least one-third of their AI research and development funding to ensuring the safe and ethical use of the systems, top AI researchers said in a paper on Tuesday.

The paper, issued a week before the international AI Safety Summit in London, lists measures that governments and companies should take to address AI risks.

“Governments should also mandate that companies are legally liable for harms from their frontier AI systems that can be reasonably foreseen and prevented,” according to the paper written by three Turing Award winners, a Nobel laureate, and more than a dozen top AI academics.

There are currently no broad-based regulations focused on AI safety, and the European Union's first set of AI legislation has yet to become law, as lawmakers have still not agreed on several issues.

“Recent state-of-the-art AI models are too powerful, and too significant, to let them develop without democratic oversight,” said Yoshua Bengio, one of the three researchers known as the “godfathers of AI”.

“It (investments in AI safety) needs to happen fast, because AI is progressing much faster than the precautions taken,” he said.

Authors include Geoffrey Hinton, Andrew Yao, Daniel Kahneman, Dawn Song and Yuval Noah Harari.

Since the launch of OpenAI’s generative AI models, top academics and prominent CEOs such as Elon Musk have warned about the risks of AI, with some calling for a six-month pause in the development of powerful AI systems.

Some companies have countered such proposals, saying they would face high compliance costs and disproportionate liability risks.

“Companies will complain that it’s too hard to satisfy regulations – that ‘regulation stifles innovation’ – that’s ridiculous,” said British computer scientist Stuart Russell.

“There are more regulations on sandwich shops than there are on AI companies.”

 

(This story has been refiled to show the researchers wrote a paper, not a letter, in paragraphs 1-3 and 7)

 

(Reporting by Supantha Mukherjee in Stockholm; editing by Miral Fahmy)