The open letter, published on the website of the nonprofit Future of Life Institute, calls for the development of powerful AI systems to proceed only once there is confidence that their effects will be positive and their risks manageable.
The letter elaborates on the profound risks that AI systems with human-competitive intelligence may pose to society and humanity, noting that "this has been widely recognized by numerous researchers and top AI laboratories," and poses four questions in succession:
Should we let machines flood our information channels with propaganda and lies?
Should we automate away all jobs, including the fulfilling ones?
Should we develop non-human minds that might eventually outnumber and outsmart us, and ultimately replace us?
Should we risk losing control over our civilization?
The open letter argues that if such a pause cannot be enacted quickly, governments should step in and impose one. It adds that "AI labs and independent experts should use this pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and supervised by independent outside experts."
The letter also calls on developers to work with policymakers to dramatically accelerate the development of robust AI governance systems. These should include, at a minimum, regulatory authorities, audit and certification systems, oversight and tracking of highly capable AI systems, liability rules for harm caused by AI, and public funding for AI safety research.
The open letter concludes by observing that society has hit pause on other technologies with potentially catastrophic effects, and it can do so again here. "Let's enjoy a long 'AI summer,' not rush unprepared into a fall," the letter states.
The open letter has sparked extensive discussion, particularly from a legal perspective, as it raises several important legal questions.
Yanfei Ran, Esq., President of AAAA and founder of the well-known American law firm iLead Law LLP, has weighed in on the debate. Ran noted that the potential risks of developing powerful AI systems are a legitimate concern: as the technology advances, some fear it could slip beyond human control, with catastrophic consequences for society — for instance, AI systems that self-replicate beyond human oversight, or that are hacked and turned to malicious ends. Appropriate legal measures are therefore needed to regulate the development and use of AI systems and to safeguard the safety and stability of human society.
Ran also observed that whether immediate action is needed to suspend the development of powerful AI systems remains contested. On the one hand, properly regulated and controlled AI systems could deliver substantial benefits, such as automated production, improved medical care, and enhanced security. On the other hand, AI systems that escape control could cause serious harm, from disrupting society to endangering human health and safety. These competing considerations must be weighed in crafting legal policy that keeps AI systems safe and stable.
In Ran's view, formulating appropriate legal policy to govern the development and use of AI systems is a complex undertaking. It requires an adequate legal framework — echoing the letter's proposals of regulatory authorities, audit and certification systems, oversight and tracking of highly capable AI systems, liability rules for AI-caused harm, and public funding for AI safety research. Beyond that, research on and regulation of AI systems must be strengthened to promote the rational, sustainable, and safe development of artificial intelligence.