- OpenAI bans accounts associated with China and North Korea for malicious surveillance and phishing activity
- Chinese actors used ChatGPT to develop proposals for behavioral monitoring and profiling tools and systems
- North Korean actors tested phishing, credential theft, and macOS malware development using paraphrased prompts
OpenAI has banned Chinese, North Korean, and other accounts that reportedly used ChatGPT to launch surveillance campaigns, develop phishing techniques and malware, and engage in other malicious practices.
In a new report, OpenAI said it has observed people reportedly associated with Chinese government or government-linked entities using its large language model (LLM) to help write proposals for surveillance systems and profiling technologies.
These included tools for monitoring people and analyzing behavioral patterns.
Phishing Research
“Some of the accounts we banned attempted to use ChatGPT to develop tools for large-scale monitoring: analysis of data sets often collected from Western or Chinese social media platforms,” the report said.
“These users typically asked ChatGPT to help develop such tools or generate promotional materials about them, but not to implement monitoring.”
Prompts were worded to avoid triggering security filters and were often expressed as academic or technical queries.
While the returned content did not provide direct surveillance capabilities, the report said its outputs were used to refine the documentation and planning of such systems.
North Korean actors, on the other hand, used ChatGPT to study phishing techniques, credential theft, and macOS malware development.
OpenAI said it observed these accounts testing prompts related to social engineering, password harvesting, and debugging malicious code, particularly targeting Apple systems.
According to OpenAI, the model refused direct requests for malicious code, but the company emphasized that threat actors still tried to bypass safeguards by rephrasing prompts or asking for general technical assistance.
Like any other tool, LLMs are used by both financially motivated and state-sponsored threat actors for all types of malicious activities.
This misuse of AI is evolving, with threat actors increasingly integrating AI into existing workflows to make them more efficient.
While developers like OpenAI work hard to minimize risk and ensure their products can't be used this way, many cases fall somewhere between legitimate and malicious use. This gray-zone activity, the report suggests, requires nuanced detection strategies.
Via The Register