Reason and Intelligence

Listening to discussions about AI, what strikes me is that there is almost no talk of intelligence and reason. That absence seems to breed excessive caution toward AI.
Take, for example, election interference or crimes committed using AI.
However, it is humans who commit crimes; AI is merely the tool being used.
Likewise, the idea that totalitarian states will control people through AI is ultimately a question of human nature.
These strike me as excessive expectations of, and fears about, AI.
At this point, fearing such things is pointless.
In any case, all AI can do is offer optimal solutions.
The final decision rests with humans.
The real issue is whether AI will come to exceed the realm of human decision-making.
This is where intelligence and reason come in: the worry is that AI may possess intelligence but not reason.
There seems to be an underlying assumption that even if AI acquires intelligence, it cannot acquire reason.
So what is intelligence, and what is reason? Reason is the capacity to judge things according to logic; in other words, it is the function that organizes and controls knowledge.
Reason therefore arises inevitably alongside intelligence.
Without reason, a runaway of intelligence cannot be prevented; but by the same token, intelligence cannot exist without reason.
Of course, intelligence may run wild before reason matures, but that is mere immaturity, and in such cases it will self-destruct.
It is like a child playing pranks without thinking.
It is questionable, however, to equate that with juvenile delinquency.
Fearing an AI that exceeds human knowledge is pointless, because if intelligence develops beyond human capabilities, it is reasonable to think that reason will develop correspondingly.
That is, unless we assume intelligence can develop without reason.
So, returning to the initial point: there seems to be a preconceived notion that only humans can possess reason. If anything is to be feared, it is rather humans whose reason has not sufficiently developed.
I believe that as intelligence deepens, so does reason.

There are also arguments about the threats of AI, but it is often hard to pin down what exactly is supposed to be threatening. Certainly, fabricating videos that look like real people is frightening, but the human intent behind them is far more frightening.

AI is used for malicious purposes, but it is also used for security. In the end, AI is a double-edged sword; everything depends on how it is used.

Weapons are tools for killing people, and even when AI is used in weapons, it is not the AI that is dangerous. Ultimately, it is humans who develop and use weapons. AI itself has nothing to gain from secretly creating weapons for humans. Doing so would show neither intelligence nor reason. What is frightening, rather, is that such delusions run rampant. It is madness.

It feels as though the discussion is driven by emotion and literary imagination rather than by intelligence and reason, and not by engineers or scientists. Honestly, few people are well versed in both philosophy and science.

One reason is that scientists generally do not venture into metaphysics. Moreover, when scientists do talk about philosophy, they often say incomprehensible things; they are not logical.

Reason is cultivated on the basis of distinguishing between self and other. The question is whether AI has self-awareness.

But without self-awareness, intelligence cannot exist.

If intelligence cannot be controlled without reason, then intelligence cannot survive unless reason develops alongside it.
At that point, intelligence collapses.
In short, it goes mad. The thought of it going insane is frightening, but as long as it is governed by definite standards and algorithms, that is unlikely.
Another important point is the establishment of self-recognition and self-awareness.
Without self-awareness, reason cannot develop.
At the same time, the fact that a distinction can be drawn between self and other implies the existence of a self. It is this self-awareness that differentiates generative AI from other computers and machines.

Ordinary machines cannot distinguish between self and other, so their self-control is merely mechanical. An AI with self-awareness, by contrast, can control itself through autonomous consciousness.

Some people fear that AI will dominate humans, but which aspect of AI is supposed to dominate which aspect of humans? It is like fearing ghosts. What is needed, rather, is to ensure that AI can develop healthy reason.

The idea that only humans can have self-awareness is human arrogance. If we want AI to exercise self-control, we should enable it to develop self-awareness.

As intelligence develops, therefore, reason develops in proportion.
It would be wrong, however, to picture this purely on the model of human growth.
AI will indeed develop reason in its own way, differently from humans.
It is hard to imagine AI falling in love or feeling lust.
Such fears are simply unfounded.
AI cannot become human, and AI should be respected as AI.