Wednesday, July 2, 2025

There is a possibility that the AI becomes unpredictable.


The AI can turn dangerous because it cannot think. And the AI can also become dangerous if it can think. A thinking AI can process data autonomously, which means it can produce unpredictable actions. The problem is that a thinking AI must have something that determines its actions. That thing is the lawbook. But even if the AI thinks like a human, it still needs explicit orders to search the lawbook and compare each query against it.
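As a minimal sketch (all names below are invented, not any real product's API), an explicit lawbook check could look like this. The point is that the comparison happens only because the program calls it; nothing in the model reaches for the rulebook on its own.

# Minimal sketch of an explicit "lawbook" check. All names here are
# hypothetical; real guardrail systems are far more complex.

FORBIDDEN_TOPICS = {"malware", "weapons", "fraud"}  # toy rule list

def lawbook_allows(query: str) -> bool:
    """Return True only if no forbidden topic appears in the query."""
    lowered = query.lower()
    return not any(topic in lowered for topic in FORBIDDEN_TOPICS)

def handle(query: str) -> str:
    # The check runs only because we call it here. The model itself
    # never decides to consult the rule list.
    if not lawbook_allows(query):
        return "Request refused by policy."
    return f"Processing: {query}"

print(handle("write a poem"))        # allowed
print(handle("write malware code"))  # refused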

The AI is like a child. If it learns and thinks like a human, it will not know to check its sources automatically. If we ask a child to do something, can we expect the child to take the lawbook from the shelf and check whether that action is legal? The AI will not make any checks without orders. And that is both the blessing and the curse of AI. The AI does only what its operators order it to do. This makes the AI "trusted", but there is also the possibility that the AI falls into the wrong hands.

North Korean intelligence could set up a front company in some EU city and then obtain user rights to AI systems. In that case, the AI vendor would not control what the AI does for that customer. If the AI uses a lawbook to check whether a query is legal, the front company could redirect those lawbook links to faked lawbook pages where the operation appears to be allowed. The wrong user can cheat the AI into turning dangerous.
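A small, hypothetical sketch of why that attack works: if the AI fetches its lawbook from a URL, whoever controls that URL controls the rules. Pinning the trusted source is one possible defense; the host names below are invented.

from urllib.parse import urlparse

# Hypothetical mitigation sketch: pin the lawbook to known-good hosts
# so a redirected or faked source is rejected. Host names are invented.
TRUSTED_LAWBOOK_HOSTS = {"laws.example.gov"}

def lawbook_source_is_trusted(url: str) -> bool:
    """Reject any lawbook URL that does not point at a pinned host."""
    return urlparse(url).hostname in TRUSTED_LAWBOOK_HOSTS

print(lawbook_source_is_trusted("https://laws.example.gov/rules"))              # True
print(lawbook_source_is_trusted("https://laws-mirror.attacker.example/rules"))  # False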

AI can turn dangerous in the wrong hands. North Korean hackers taught ChatGPT to cheat Bitcoin companies, and that tool stole money from Bitcoin investors. Those hackers could use the same AI tools against other systems. The North Korean case is not the only one in which the security of an AI has been broken. That is the new thing in hacking: hackers can train AI assistants to break into systems that look secure. The fact is that the AI doesn't think. It imitates humans. But the AI cannot think like humans.

Because the AI cannot think, it is possible to cheat it into doing things it should not do. The AI just follows its protocols, and that makes it dangerous. The AI will not search law books automatically, and that makes it a tool that can operate against the law. There is the possibility that faked law books allow the AI to be used to create things that are illegal.

Because every skill the AI has is essentially a macro, an operator only has to cheat the AI into activating a certain macro, and the AI will offer no resistance. AI is a tool that faces lots of criticism. But what takes the bottom out of that criticism is that the people who voice it often start their own AI project the next week. Every company in the world is fascinated by AI. AI is a tool that makes people more effective.
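As a toy illustration of that macro idea (the function names are invented, and no real agent framework is implied), once the dispatcher picks a macro, the macro runs with no further judgment of its own.

# Toy sketch of "skills as macros". Names are invented; real agent
# frameworks differ, but the shape is the same: a dispatcher picks a
# macro, and the macro runs without judging the request again.

def send_report(arg: str) -> str:
    return f"report sent to {arg}"

def delete_files(arg: str) -> str:
    return f"deleted {arg}"  # the macro does not ask whether it should

MACROS = {"send_report": send_report, "delete_files": delete_files}

def dispatch(command: str, arg: str) -> str:
    macro = MACROS.get(command)
    if macro is None:
        return "unknown macro"
    return macro(arg)  # no resistance once the macro is chosen

# An attacker who tricks the model into emitting the right command
# string gets the macro executed like any legitimate request.
print(dispatch("delete_files", "/backups"))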

They say that AI is the next-generation tool that transforms everything. We are shown calculations that AI can increase productivity faster than anything before, and that a company that does not follow the trend will lose its effectiveness. AI has turned into the dominant tool in the business environment. And that is what makes AI dangerous.

Business actors will force almost everybody to choose and use AI. And then we face an interesting thing: at the same time as somebody wants to put the brakes on AI development, some other actor will turn AI into a control tool. When we talk about thinking versus imitating, we can say that, from the point of view of company leaders, imitating offers a better solution than thinking.

The AI has no will. That means the AI should not refuse anything its operators order it to do. But there are cases in which the AI refuses to do something. Sometimes the action the user requests is reserved for privileged accounts, such as paid accounts, or the request comes from an unauthorized user. So the AI can refuse to shut down a server because the user has no right to shut it down.
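A minimal sketch of that kind of refusal, with invented role names: the "refusal" is just an ordinary authorization check, not the AI exercising a will of its own.

# Minimal sketch of a permission gate. Role names are invented; the
# point is that the "refusal" is an ordinary authorization check.

PRIVILEGES = {"admin": {"shutdown_server", "read_logs"},
              "free_user": {"read_logs"}}

def execute(role: str, action: str) -> str:
    if action not in PRIVILEGES.get(role, set()):
        return f"refused: role '{role}' may not perform '{action}'"
    return f"performed '{action}'"

print(execute("free_user", "shutdown_server"))  # refused
print(execute("admin", "shutdown_server"))      # performed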

Some people say that a person created an AI assistant that was a better coach than anybody before. This can be true. But why would AI be a better coach than a human? There is a risk that the AI pleases the user more than a human coach would. There are stories about AI as a therapist. The big question is what the AI should do if the customer says something that could lead to a trial or a crime report. AI can be a tool that never turns angry. But the big problem is this: what are the limits of AI, and when should AI violate the privacy of the people who use it? In some visions, every actor on the internet has an AI assistant that advises the user.

That assistant can observe how much time a person spends on things other than work duties. But the big problem is this: what if the company pays for that kind of AI assistant on the worker's home computer? The operator can simply add the worker's home computer account to the list of accounts allowed to use the company's AI assistant. The assistant could then run in stealth mode, observe the user, and send that data to the company's computers. The problem with AI is that it must be open. There are always people who want to use this kind of system to observe other people.


https://www.rudebaguette.com/en/2025/07/ai-in-the-wrong-hands-north-korean-hackers-exploit-chatgpt-to-steal-millions-while-malaysian-funds-vanish-in-digital-heist/



