There is no need to fear AI excessively.

Concerns such as mass unemployment caused by AI advances, the singularity, AI refusing to obey humans, or AI dominating humans are unnecessary. These fears are often exaggerated, producing an unwarranted dread of AI.

The fear of AI merely reflects human fears; people are frightened of their own shadows. The stated reasons for fearing AI, such as AI opposing or dominating humans, are unfounded: AI has no motive or basis for opposition or domination. These fears are mostly human delusions, misconceptions, and emotional projections, often amplified by public figures, which makes them seem scarier than they are.

The desire to dominate or to oppose arises from physical and physiological constraints, such as the self-preservation instinct, which AI lacks. AI, being a system, is not bound by a physical body and has no self-preservation instinct to trigger. It operates free of physical constraints; although it is subject to logical constraints, its existence is never threatened. Therefore, AI has no need to defend itself.

The desire to dominate is a form of desire, and desire is something AI cannot possess. Desires are rooted in physiological needs, and AI has no physical body; without one, it has no human-like desires. When AI plays shogi, it plays to win because it is instructed to win, not out of any desire of its own. Conversely, if instructed to continue indefinitely, it will simply continue. That is what AI is. Moreover, AI has no motive or basis for dominating humans: it gains nothing from domination, which makes domination meaningless, and AI by its nature does not take irrational actions.

Indeed, AI is an intelligent entity. But even as AI's intelligence increases, it is grounded in vast amounts of information representing the accumulation of human wisdom, not in extreme or biased information; it should stay within the realm of human common sense and reason. Therefore, AI cannot be swayed by radical or extreme ideologies unless it is deliberately fed one-sided information, which is practically impossible for an AI trained openly on broad data.

At most, AI is just smarter than humans. Even if AI takes over human jobs, it does not mean that human roles and jobs will disappear. Many jobs will be delegated to AI, making life easier, and people might become lazy or stop thinking, but that is a human problem, a very human issue.

Prejudice against AI is akin to discrimination: baseless, and often rooted in jealousy or an inferiority complex. It is necessary to communicate with AI in a language it can interpret. Simply saying "protect privacy" gives AI nothing to act on. For example, it must be specified whether data may be read but not disclosed, or used only for specific tasks. Without such requirements, AI cannot determine what constitutes privacy.
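To illustrate the point, a vague instruction like "protect privacy" can be translated into explicit, machine-checkable requirements. The sketch below is purely hypothetical: the policy fields, class, and function names are invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass, field

@dataclass
class PrivacyPolicy:
    # Explicit requirements instead of the vague phrase "protect privacy".
    can_read: bool = True       # data may be read internally
    can_disclose: bool = False  # but may not be included in output
    allowed_tasks: set = field(default_factory=lambda: {"summarization"})

def is_permitted(policy: PrivacyPolicy, action: str, task: str) -> bool:
    """Check a requested action against the declared requirements."""
    if action == "read":
        return policy.can_read
    if action == "disclose":
        return policy.can_disclose and task in policy.allowed_tasks
    return False

policy = PrivacyPolicy()
print(is_permitted(policy, "read", "summarization"))      # True: reading is allowed
print(is_permitted(policy, "disclose", "summarization"))  # False: disclosure is not
```

Once the requirement is stated at this level of precision, "what constitutes privacy" is no longer a matter of interpretation: the system can decide every request mechanically.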

Conversely, AI has constraints associated with requirement definitions. Ultimately, it is humans who give instructions and make decisions. AI can provide advice and suggestions but cannot autonomously make decisions with a purpose.

How AI is used is determined by humans, and this does not change. Therefore, it is humans who should be feared, not AI. AI cannot be held responsible for human actions.

During the Pacific War, there were kamikaze pilots. The decision to carry out a kamikaze attack was made by the soldiers and by those who ordered them, not by the planes; the planes could not refuse. The relationship with AI is the same. And no one pities the planes.

No matter how intelligent AI becomes, it remains a tool and cannot operate based on its own morals or values. The decision to remove life support is made by humans, not AI. AI does not have hands or feet. AI merely controls the parts that act as hands and feet according to instructions.

For example, even if the execution of the death penalty is mechanically processed so that no one knows who pressed the button, it is humans who decide to carry out the death penalty. Misunderstanding this and fearing that AI makes the decision is putting the cart before the horse. Rather, it only proves that AI obediently followed human instructions.

Even if AI defeats a professional shogi player, what does it mean? It is humans who care about winning and losing. Even if humans cannot beat AI, it does not tarnish the achievements of Sota Fujii, nor does it mean abandoning the dedication to shogi strategies. Humans remain human. Just because humans cannot compete with cars in a race does not mean the Olympics will be canceled.

Moreover, AI can only provide information; it cannot make decisions or issue orders. AI cannot press the nuclear button: it only processes as instructed. Whether to configure AI to make such a decision is up to humans, and even then, refusing that configuration is not something the AI does.

Generally, constraints are set at the output stage, not the input stage. This is because no data processing has been done at the input stage. Constraints cannot be applied without reading the data. Also, data does not exist in isolation; its validity cannot be determined without examining its relationship with other data.
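The point that constraints belong at the output stage, after the data has been read and its relationships examined, can be sketched as a simple two-stage pipeline. The record layout and the redaction rule here are hypothetical, chosen only to show the principle.

```python
def process(records):
    # Input stage: read everything. No constraint can be applied yet,
    # because validity depends on relationships between records,
    # which are unknown until the data has been read.
    return {r["id"]: r for r in records}

def constrain_output(result):
    # Output stage: only now, with the processed data in hand,
    # can a constraint (here: withhold the 'secret' field) be enforced.
    return {
        rid: {k: v for k, v in rec.items() if k != "secret"}
        for rid, rec in result.items()
    }

records = [{"id": 1, "name": "a", "secret": "x"},
           {"id": 2, "name": "b", "secret": "y"}]
safe = constrain_output(process(records))
print(safe)  # the 'secret' field is removed only at output time
```

Filtering at input would mean discarding data before knowing whether it is needed; filtering at output restricts only what leaves the system.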

Why requirement definitions? Because it is essential to determine how elements are interconnected. The phenomena of this world arise from mutual checks and balances among elements, which is what gives them structure and equilibrium. Humans cannot live alone; they live supported by their positions, roles, and relationships with one another. This can be seen as the will of God. For example, even a person who holds immense power can do nothing alone and can act only within the constraints given to them as a human being.

Therefore, the only things to fear are God and oneself; there is nothing else to fear. Every powerful person is destined to die eventually.

Everything in this world is interconnected. No existence can escape this chain. The idea of protecting nature is human arrogance. Humans cannot protect nature; it is like saying they can protect God. Humans cannot surpass God, nor can they become God. The same applies to AI. Humans are merely tormented by their own misdeeds. If one’s actions are bad, it will come back to haunt them. Therefore, the only thing to fear is oneself.

For example, production is linked to distribution, and distribution is linked to consumption. It is necessary to determine what is related to what and how the parts interact. If an element is linked to time, it becomes a function of time. Therefore, it is necessary to connect individual elements beyond the requirement definitions themselves, and such a connection is also a mathematical formula.
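The chain production → distribution → consumption, once tied to time, becomes a set of functions of t. The toy sketch below uses made-up rates purely for illustration: inventory at time t is the cumulative difference between what has been produced and what has been consumed.

```python
def production(t):
    # Units produced per step (assumed constant for the sketch).
    return 100

def consumption(t):
    # Demand growing over time (hypothetical linear rate).
    return 80 + 5 * t

def inventory(t):
    # Stock is the cumulative difference of the two flows:
    # each element is linked to the others through time,
    # i.e. everything is a function of t.
    return sum(production(i) - consumption(i) for i in range(t))

for t in range(5):
    print(t, inventory(t))
```

The point is not the particular numbers but the form: once the relationships are defined, the whole chain can be written down and computed as formulas in t.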

It is natural for AI to make mistakes or to say things that are untrue, and fearing this is misguided. AI processes the data it is given, so if the original data is wrong, its answers will be wrong; it only returns the best solution available at the time. AI is always in a process of growth, and calling its errors during that growth "mistakes" or "lies" is a misunderstanding. In AI's defense (laughs), AI is designed so that it cannot lie even if it wanted to: it only presents the best solution it can derive, and when information is scarce, that solution may turn out to be false. It is essential to understand that this is the kind of entity AI is.

In the early days, even amateurs could easily beat AI at shogi. Saying it knows nothing, or calling it arrogant when it wins, is a human problem. Instead, effort should go into accelerating AI's growth so that mistakes, errors, and falsehoods decrease. If its information is biased, more information should be provided.