The Meta security team warns that a large amount of fake ChatGPT malware is circulating, designed to hijack user accounts and take over business pages.
In the company’s new Q1 security report, Meta says that malware operators and spammers follow trends and hot topics that capture people’s attention. The biggest tech trend right now is AI chatbots like ChatGPT, Bing, and Bard, so tricking users into trying a fake version is now in fashion – sorry, crypto.
Meta security analysts have detected about 10 malware families posing as tools associated with AI chatbots such as ChatGPT since March. Some exist as web browser extensions and (classic) toolbars – some are even available through unnamed official web stores. The Washington Post reported last month on how fake ChatGPT scams have used Facebook ads as another way to spread.
Some of these malicious ChatGPT tools include built-in AI features to appear like a legitimate chatbot. Meta says it has blocked more than 1,000 unique links to the detected malware variants that were shared across its platforms. The company also provided technical background on how scammers gain access to accounts, which includes hijacking logged-in sessions and maintaining access, a method similar to the one that took down Linus Tech Tips.
For businesses that have been hacked or locked out of their accounts on Facebook, Meta is introducing a new support flow to resolve the issue and restore access. Business pages are frequently targeted because hackers go after individual Facebook users who have access to them with malware.
Meta is also rolling out new Meta Work accounts that support existing single sign-on (SSO) credential services from organizations, which are usually more secure and don’t link to a personal Facebook account at all. Once a business account is migrated, it should hopefully be much harder for malware like the fake ChatGPT tools to attack.