Ken Sakamura
INIAD Founder (Toyo University’s Faculty of Information Networking for Innovation and Design)
“Thinking for oneself” – Scholarship and Education in the AI Era
The most important teaching of Toyo University’s founder, Inoue Enryo, is “thinking for oneself.” It sounds simple, but it is actually quite difficult. A cynic might say, “If you decide to think for yourself because someone else told you to, you’re not really thinking for yourself (laughs).”
In reality, no one can think entirely “on their own.” We build on the accumulated knowledge of past scholars, which lets us see further, standing, as it were, on the shoulders of giants.
What Inoue really meant by “thinking for oneself” is stated clearly in his writings: “Without being bound by preconceptions or biases, confirm with your own eyes and think with your own head. We must observe the world on the basis of objective observation and subjective thinking.”
The key point is “without being bound by preconceptions or biases,” which, in terms of the psychological concept of the “two modes of thinking,” can be read as “think with System 2.”
System 1 of the “two modes of thinking” is a low-cost, easy mode of thought that judges intuitively and instinctively, based on the words of authority figures, ideologically “correct” beliefs, organizational customs and rules, or mere assumptions. It is the reflexive thinking that living organisms acquired first, for survival: conclusions are reached automatically and quickly on the basis of intuition. While this mode is well suited to responding to the real world, it is also prone to errors caused by preconceptions.
In contrast, System 2 is an effortful mode of thinking that analyzes a problem step by step and reasons logically. It is the chain of thought that humans acquired only after developing intellect; because it deliberates carefully and follows logic, it is less likely to err, but it takes longer to reach a conclusion.
In the real world, as the saying goes, “it’s better to be rough and ready than slow and elaborate,” and there are situations where stopping to think means acting too late. The ideal human thought pattern, then, is to respond first with System 1 and, when time allows, revise with System 2.
The problem lies with those who draw conclusions using System 1 and stop there, taking the easy way out. These are the people who are “not thinking for themselves.”
In other words, the attitude that Inoue’s “thinking for oneself” requires – and what is expected of INIAD students – can be summarized as follows:
- Doubt yourself
  - Are you being misled by preconceptions? Are you being led along by someone else’s thinking?
- Reexamine the problem from a different standpoint
  - Are you deciding the “correct” answer a priori because of an ideology, and then reasoning only in ways that lead to that conclusion?
- Think step by step
  - Do you freeze up when facing a difficult problem? Are you impulsively jumping to a simple answer?
- Ask yourself questions and search for answers
  - Why do you think this is correct? How can you prove it? And so on.
- Observe reality
  - Is this realistic? Can this idea actually be implemented? Will it benefit people? And so on.
- Collect and verify information
  - Can this source of information be trusted? Might the information be biased? Is it fact or opinion? And so on.
- Manage your own emotions
  - Are your emotions affecting your judgment? Are you simply clinging to your first idea? And so on.
Using ChatGPT
The “chain of thought” mentioned earlier is said to be what determines the performance of recent generative AI such as ChatGPT. When GPT-2 evolved into GPT-3, this capability improved dramatically, producing the breakthrough in generative AI known as “emergence.” GPT-4, released in March of this year, is said to have improved performance still further.
For example, GPT-3.5’s answers to last year’s report assignment from the first “Philosophy” lecture in “Introduction to Information Collaboration Studies,” a course taken by INIAD freshmen, were already of high quality. GPT-4’s answers would be almost perfect.
Now that AI can write philosophy reports, universities need to think about how to evaluate students on the assumption that generative AI will be used for simple assignments of the “write about such-and-such” type.
It is easy to say “don’t use AI.” In practice, however, it is impossible to tell from the answer text alone whether AI was used. Of course, before assigning a task, a teacher can generate answers with an AI such as ChatGPT and check whether students’ submissions resemble them. But a user who steers the conversation can just as easily obtain answers that bear little resemblance to the typical ones.
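As a rough illustration of the kind of similarity check described above, and of why it is easy to defeat, here is a minimal sketch using TF-IDF cosine similarity. The sample texts and the threshold are hypothetical; the article does not specify how such a check would actually be implemented.

```python
# A minimal sketch of comparing a submission against AI-generated reference
# answers using TF-IDF cosine similarity. The sample texts and the 0.8
# threshold are hypothetical; a student who steers the conversation can
# easily obtain wording that falls below any such threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ai_reference_answers = [
    "Philosophy begins by questioning the assumptions we take for granted.",
    "To think for oneself is to confirm with one's own eyes and reason step by step.",
]
submission = "In my view, philosophy starts when we doubt what we take for granted."

# Fit a single vocabulary over the reference answers and the submission together.
matrix = TfidfVectorizer().fit_transform(ai_reference_answers + [submission])

# Similarity of the submission (last row) to each reference answer.
scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
print("max similarity to AI reference answers:", scores.max())

# Flagging a submission above an arbitrary threshold proves nothing either way.
if scores.max() > 0.8:
    print("superficially similar to an AI-generated answer (not proof of AI use)")
```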
In the end, if all we can offer are unfounded suspicions such as “the answer is too conventionally correct” or “it is written too well, so AI must have been used,” the system will only penalize honest students and invite moral hazard.
Therefore, rather than restricting the use of ChatGPT, INIAD is considering actively encouraging it.
The reason is that we do not believe using ChatGPT necessarily means “not thinking for oneself.” ChatGPT supports extended back-and-forth dialogue, and through that process one can deepen one’s thinking. This is a significant difference from search engines, which merely return answers to queries.
In fact, even when the same task is solved with AI, the quality of the result varies greatly with the user’s ability to ask appropriate questions, probe the answers more deeply, and ultimately judge, add to, or revise them. Submitting ChatGPT’s first answer as is, “straight out of the box,” yields nothing but a mediocre result.
Conversely, allowing the use of ChatGPT does not mean students can “slack off”; it means the quality of their results will be evaluated more strictly. Merely being “correct” will not be enough; more advanced qualities, such as “a unique perspective” or “deep consideration of the topic,” will be demanded. To support this, INIAD is also developing an AI-based evaluation support system to reduce the burden on teaching staff in making these deeper evaluations.
“Thinking for Oneself” with ChatGPT
As mentioned earlier, one of the attitudes needed for “thinking for oneself” is “reexamining the problem from a different standpoint.” This may be hard to do on your own at first. But if you use ChatGPT, especially the GPT-4-based ChatGPT, as a “sounding board,” you can have the AI raise antitheses, engage in dialectical thinking, and deepen your consideration of the task at hand.
With an AI, you need not feel embarrassed about asking for explanation after explanation. Japanese students tend to take direct opposition as a personal attack, which can turn a debate emotional; in a “sounding board” discussion with an AI, it is easier to keep your emotions in check.
This kind of dialogue, which trains you to “think for oneself,” is useful not only in philosophy but in every academic field taught at INIAD. AI can be your best “thinking buddy,” always there to support you and never reluctant.
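As a concrete illustration of this kind of dialogue, here is a minimal sketch of a “sounding board” exchange through the OpenAI API. It assumes the openai Python package (v1.x) and an API key; the model name, the system prompt, and the sample thesis are assumptions for illustration, not part of INIAD’s curriculum.

```python
# A minimal sketch of using GPT-4 as a "sounding board": ask the model to raise
# antitheses against your own thesis, then refine your position in later turns.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment
# variable; the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

thesis = "Universities should let students use generative AI freely in coursework."

messages = [
    {"role": "system",
     "content": "You are a debate partner. Challenge the user's thesis with the "
                "strongest counterarguments you can, one at a time."},
    {"role": "user",
     "content": f"My thesis: {thesis} What is the strongest objection to it?"},
]

response = client.chat.completions.create(model="gpt-4", messages=messages)
antithesis = response.choices[0].message.content
print(antithesis)

# The dialogue continues from here: answer the objection in your own words,
# append both turns to `messages`, and ask for the next objection, so that the
# final position is reached dialectically rather than by accepting the first
# answer as is.
```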
Moreover, final examinations are conducted without network access. Students who merely rely on ChatGPT for answers, without genuinely honing their own abilities, will naturally receive lower final evaluations, and that gap can be expected to widen.
This is much like the path taken by Sota Fujii in the world of shogi, who used AI as a buddy to sharpen his skills and went on to hold six major titles.
Furthermore, the skill of deepening one’s own thinking with ChatGPT will itself be a strength that society demands of INIAD students. INIAD therefore plans to teach how to use ChatGPT as a new subject, “Prompt Engineering.”
Some worry that access to the paid GPT-4 model could create unfairness. To address this, INIAD will provide an environment in which all students can use the GPT-4 model, along with APIs for use in programming education.
Because we are responding to the rapid evolution of generative AI, everything here is kept agile, and our policy may change depending on the outcomes.
The main concern is students who do not think for themselves, simply paste the assignment into ChatGPT, and submit the first answer it produces. If this becomes conspicuous, we may consider suspending access.
However, restricting the use of ChatGPT would not only inhibit students’ deeper thinking but also breed resistance to the technological innovation of modern society. We sincerely hope that students will use ChatGPT in the right way, to “think for themselves.”
To reiterate: INIAD will actively provide the environment, guidance, and teaching materials students need to deepen their thinking with ChatGPT and acquire more advanced thinking skills.
Students who can question themselves and generate new ideas with ChatGPT will thrive in society as self-actualizing individuals. INIAD will teach students how to deepen their thinking with ChatGPT and support them in acquiring these more advanced skills. Cultivating self-actualizing individuals equipped with the knowledge and skills required in the coming era is INIAD’s mission and responsibility.