Faculty of Information Networking for Innovation and Design, Toyo University

Recommendations for the G7 Hiroshima Summit

Ken Sakamura
Founder of INIAD (Toyo University’s Faculty of Information Networking for Innovation and Design)

Awareness of the Impact of Generative AI

In 2016, as chair of the G7 Ise-Shima Summit, Japan proposed draft guidelines for AI research and development at the G7 ICT Ministers’ Meeting in Takamatsu, Kagawa. International discussions followed, leading to the OECD AI Principles (May 2019) and the G20 AI Principles adopted at the G20 Osaka Summit (June 2019). In May 2023, a G7 summit will again be held in Japan, this time in Hiroshima. The situation, however, has changed dramatically with the rapid rise of generative AI based on large language models, such as ChatGPT, since the end of last year. The challenges that the G20 AI Principles anticipated for the “near future” are no longer distant prospects; they have become urgent issues that must be addressed immediately.

The impact of generative AI, like that of the internet, extends to every corner of society. The motive “power” introduced by the Industrial Revolution replaced human muscle. The “computer” introduced by the Information Revolution replaced human information-processing capabilities, but it never reached the level of “intellect”: it remained a “tool” innovation, with pens replaced by word processors and letters by email. The impact on society of the recognition-oriented AI that preceded generative AI was similarly limited. Today’s generative AI, however, has the potential to replace human intellect itself. Moreover, it is an unprecedented innovation in that it can substitute for the creative capabilities long thought to be humanity’s last bastion, affecting a wide range of knowledge workers, from writers and journalists to lawyers and scholars.

It took decades for the internet to become essential social infrastructure after its inception. Although this was faster than the adoption of earlier infrastructures such as road networks, it still gave society a grace period to adapt. Generative AI, by contrast, is an information service: in an environment where the internet is already ubiquitous and everyone carries a smartphone, there is virtually no barrier to its diffusion. Furthermore, AI technology is evolving at an accelerating pace, as AI itself is applied to its own research and development, and major changes are arriving before society has any time to adapt. This makes the present transformation more abrupt than any in history.

G7 Response to Generative AI

These issues are broad and deep. Matters such as personal data protection, education, and labor markets are best addressed individually by each country according to its own circumstances, and can remain domestic concerns. Setting those aside, I propose the following as urgent challenges for international cooperation that should be discussed at the upcoming G7 meeting.

Determining the scope of international cooperation in regulating AI services accessible to the general public.

Generative AI carries many dangers, such as the spread of hate speech and fake content produced by some users, and the facilitation of dangerous activities such as the planning of terrorist attacks. It is indisputable that AI services driven by generative AI require some form of regulation. However, because regulating generative AI inevitably imposes a degree of disadvantage in international competition, the rules must be established through international cooperation.

For this reason, at this G7 meeting each country should bring a draft and scope for such rules, examine them together, and agree to launch the rule-making process and the framework needed to carry it out. Existing rules, such as the OECD AI Principles adopted in 2019 and the Asilomar AI Principles published by leading experts in 2017, remain abstract statements of principle because they predate concrete generative AI technologies. Now that such technologies exist, it is time to create rules by which their acceptability can be judged against external criteria, and an urgent response is required.

Concrete examples of rules that should be coordinated internationally are as follows:

  • Establishing standard rules for the content of the training performed via reinforcement learning from human feedback (RLHF) on the generative AI models used in AI services.
  • Public disclosure of the “attention” structures, learned through training, that shape a generative AI’s “ethical sense,” including the “interest structures” and “intentions” embedded within it.
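
To make the RLHF item above concrete: reward models in RLHF are typically trained on human preference comparisons using a pairwise (Bradley-Terry) loss, so disclosure rules would in effect govern which comparisons are collected and how. The following is a minimal illustrative sketch in pure Python; the reward scores are hypothetical, not drawn from any real model:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise (Bradley-Terry) loss used to train RLHF reward models:
    -log(sigmoid(r_chosen - r_rejected)). The loss is small when the model
    already scores the human-preferred answer higher than the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward scores for two candidate answers to one prompt:
well_ranked = preference_loss(2.0, -1.0)   # preferred answer scored higher -> small loss
mis_ranked = preference_loss(-1.0, 2.0)    # preferred answer scored lower  -> large loss
print(well_ranked, mis_ranked)
```

Because the reward model distills the annotators’ choices, standardizing the comparison data is effectively standardizing the AI’s learned values, which is the point of the proposed rule.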

Ethical Guidelines for Research and Development

For-profit companies, which ultimately answer to the goal of earning profits from society, are in a sense less likely to disregard risk in pursuit of pure intellectual curiosity. Nonprofit research institutions, including universities, operate instead on the principle of trying whatever is possible, which is precisely why universities maintain research ethics committees. In bioengineering in particular, research is subject to strict prior review, because incidents such as an artificial virus leaking from a laboratory could create global-scale risks; research cannot begin without passing that review.

Traditional computer programs have strictly defined behavior: apart from bugs, their functions are deterministic. Generative AI, by contrast, behaves probabilistically in ways that cannot be fully specified, making it “biological” in a sense; such systems often exhibit unexpected capabilities only after they have been built. In particular, since GPT-3 it is said that the scale of training has crossed a threshold at which “emergent” phenomena appear, and, as with a living organism, its capabilities continue to be discovered after the fact.
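
The contrast between deterministic programs and probabilistic generation can be illustrated with a toy sketch; the word probabilities below are hypothetical stand-ins, and real models sample over vocabularies of tens of thousands of tokens:

```python
import random

# A traditional program: strictly defined, deterministic behavior.
def add_tax(price: float, rate: float = 0.10) -> float:
    return round(price * (1 + rate), 2)

# A toy stand-in for generative AI: output is *sampled* from a learned
# probability distribution, so behavior is probabilistic by design.
def toy_generate(rng: random.Random) -> str:
    next_word = {"the": 0.5, "a": 0.3, "an": 0.2}  # hypothetical learned probabilities
    return rng.choices(list(next_word), weights=list(next_word.values()))[0]

# Same input always yields the same output for the deterministic program:
assert add_tax(100.0) == add_tax(100.0) == 110.0

# The generator returns different outputs for identical "inputs":
rng = random.Random()
samples = {toy_generate(rng) for _ in range(1000)}
print(samples)  # typically several distinct outputs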

Given this situation, it is crucial for Japan to take the initiative in promoting international cooperation on research ethics guidelines for generative AI research. As in the previous section, it is time to create rules by which the acceptability of research can be judged against external criteria, and an urgent response is required. Likewise, each country should bring a draft and scope for such guidelines to this G7 meeting, examine them together, and agree to launch the rule-making process and the framework needed to carry it out.

Concrete research ethics guidelines might include the following, with items lower on the list warranting ever more explicit prohibition:

  • Research on granting AI unlimited self-modification abilities at the architectural level.
  • Research on copying a specific individual’s personality.
  • Research on endowing AI with curiosity.
  • Research on automatically incorporating short-term memory into the overall model.
  • Research on granting permanent self-awareness and self-consciousness.
  • Research on giving AI a sense of self through embodiment.
  • Research on providing AI with self-preservation instincts.
  • Training AI through the infliction of pain.
  • Releasing self-replicating AI on the internet.