Mac users targeted by fake AI conversations distributing malware online


Cybercriminals have always gone after what people trust most. First it was email, then search results. Now it's AI chat responses. Researchers are warning about a new campaign in which fake AI conversations appear in Google search results and subtly encourage Mac users to install dangerous malware. What makes this especially risky is that everything looks helpful, legitimate, and step-by-step, right up until the moment your system is compromised.

The malware being distributed is Atomic macOS Stealer, often referred to as AMOS. Investigators confirmed that both ChatGPT and Grok were misused as part of this campaign.

Subscribe to my FREE CyberGuy Report

Get my best tech tips, breaking security alerts, and exclusive offers straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

THIRD PARTY VIOLATION REVEALS CHATGPT ACCOUNT DETAILS

One copied terminal command is all it takes for malware like AMOS to install itself on your Mac undetected. (Kurt “CyberGuy” Knutsson)

How fake AI chat results lead to malware

Researchers traced one infection back to a simple Google search: "clean up disk space on macOS." Instead of a regular help article, the user was shown what looked like an AI conversation, embedded directly in the search results. The conversation contained clear, confident instructions and ended by telling the user to run a command in macOS Terminal. That command installed AMOS.

When the researchers followed the same path, they found several poisoned AI conversations appearing for similar queries. The pattern strongly suggests a deliberate operation targeting Mac users looking for help with routine maintenance.

If this sounds familiar, it should. A previous campaign used sponsored search results and SEO-poisoned links that pointed to fake macOS software hosted on GitHub. In that case, the attackers posed as legitimate applications and walked users through terminal commands that installed the same AMOS infostealer.

According to the researchers, the infection chain starts the moment the terminal command is executed. A base64 string in the command decodes into a URL hosting a malicious bash script. That script is designed to collect credentials, escalate privileges, and establish persistence, all without triggering a visible security alert.
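To make the pattern easier to recognize, here is a harmless illustration of what such a one-liner can look like. This is not the attacker's actual command; the encoded string below simply decodes to "echo hi":

echo "ZWNobyBoaQ==" | base64 --decode | bash

Everything meaningful is hidden inside the encoded text and handed straight to the shell, which is why you cannot tell what a command like this really does just by reading it.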

The danger here is how clean the process looks. There is no installer window, no obvious permission prompt, and no way to preview what will run. Because everything happens through the command line, the built-in protections that normally screen downloaded software are bypassed, and the attacker can do whatever they want.

MICROSOFT TYPOSQUATTING SCAM REPLACES LETTERS TO STEAL LOGINS


Fake AI chat results can appear flawless and trustworthy, even if they are designed to trick you into performing malicious commands. (Kurt “CyberGuy” Knutsson)

Why is this attack so effective?

This campaign combines two powerful ideas: trust in AI answers and trust in search results. Most major chat tools, including Grok on X, allow users to delete parts of conversations or share only selected portions. That means an attacker can carefully craft a short, polished dialogue that looks genuinely useful while hiding the manipulative prompts that produced it.

Through careful prompting, attackers get ChatGPT to generate a step-by-step cleanup or installation guide that actually installs the malware. ChatGPT's sharing feature then creates a public link that lives under the attacker's account. Criminals either pay for sponsored placement in search results or use SEO tactics to push that shared conversation high in the rankings.

Some ads are designed to look almost identical to legitimate links. Unless you check who the advertiser actually is, it's easy to assume it's safe. One example documented by the researchers showed a sponsored result promoting a fake Atlas browser for macOS with professional branding.

Once these links go live, the attackers don't need to do anything else. They simply wait for users to search, click, trust the AI's answer, and follow the instructions exactly as written.

REAL APPLE SUPPORT EMAILS ARE USED IN NEW PHISHING SCAM


Attackers rely on search results and AI responses, knowing that most people won't question step-by-step instructions. (Kurt “CyberGuy” Knutsson)

8 Steps You Can Take to Protect Yourself from Fake AI Chat Malware

Artificial intelligence tools are useful, but attackers are now crafting responses that lead you straight to trouble. These steps will help you stay protected without giving up search or AI entirely.

1) Never paste terminal commands from search results or AI chats.

This is the most important rule. If an AI response or web page tells you to open Terminal and paste a command, stop. Legitimate macOS fixes almost never require you to blindly run scripts copied from the internet. Once you hit Enter, you have no control over what happens next. Malware like AMOS exploits this moment of trust to bypass normal security checks.
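If you're ever tempted to find out what a suspicious one-liner would actually do, a safer habit, shown here with a harmless sample string, is to decode the hidden portion without the final pipe into the shell:

echo "ZWNobyBoaQ==" | base64 --decode

That prints the hidden text ("echo hi" in this sample) so you can read it instead of running it. If what comes out is a script or a web address you don't recognize, walk away.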

2) Treat AI instructions as suggestions.

AI chats are not authoritative sources. They can be manipulated through carefully crafted prompts into producing dangerous walkthroughs that appear clean and confident. Before committing to an AI-generated fix, check it against Apple's official documentation or a trusted developer site. If you can't easily verify it, don't run it.

3) Use a password manager to limit the damage.

A password manager creates strong and unique passwords for every account you use. If malware steals one password, it won't be able to unlock all the others. Many password managers also refuse to automatically fill in credentials on fake or unfamiliar sites, which can alert you that something is wrong before you enter anything manually. This single tool significantly reduces the impact of credential-stealing malware.

Next, check to see if your email has been compromised in past hacks. Our #1 password manager (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks to see if your email address or passwords have appeared in known breaches. If you find a match, immediately change any reused passwords and secure those accounts with new, unique credentials.

Check out the best password managers of 2025, reviewed by experts, at Cyberguy.com

4) Keep macOS and browsers updated.

AMOS and similar malware often rely on known vulnerabilities after the initial infection. Updates patch these holes. Delaying updates gives attackers more opportunities to escalate privileges or maintain persistence. Turn on automatic updates to stay protected even if you forget.

5) Use powerful antivirus software on macOS.

Modern macOS malware often operates through scripts and memory-only techniques. Strong antivirus software does more than scan files. It monitors behavior, flags suspicious scripts, and can stop malicious activity even when nothing obvious is downloaded. This is especially important when malware is delivered via Terminal commands.

The best way to protect yourself from malicious links that install malware and potentially access your personal information is to install powerful antivirus software on all your devices. This protection can also alert you to phishing emails and ransomware, keeping your personal information and digital assets safe.

Get my picks for 2025's top antivirus protection winners for your Windows, Mac, Android, and iOS devices at Cyberguy.com.

6) Be skeptical of sponsored search results.

Paid search ads can look almost identical to legitimate results. Always check who the advertiser is before you click. If a sponsored result leads to an AI conversation, a download, or instructions to run commands, close it immediately.

7) Avoid cleaning and installation guides from unknown sources.

Search results promising quick fixes, disk cleanup, or performance improvements are common entry points for malware. If the guide is not hosted by Apple or a reputable developer, assume that it may be risky, especially if it offers command line solutions.

8) Slow down when instructions seem unusually polished.

Attackers spend time making fake AI conversations look helpful and professional. Clear formatting and confident language are not signs of safety; they are often part of the deception. Slowing down and questioning the source is usually enough to break the attack chain.

Kurt's Key Takeaway

This campaign shows how attackers are shifting from hacking systems to manipulating trust. Fake AI conversations work because they sound calm, helpful, and authoritative. When those conversations are boosted by search results, they gain a credibility they don't deserve. The techniques behind AMOS are sophisticated, but the entry point is simple: someone follows instructions without questioning where they came from.

Have you ever followed an AI-generated fix without checking it first? Let us know by writing to us at Cyberguy.com.



Copyright CyberGuy.com 2025. All rights reserved.
