South Korean giant Samsung has temporarily banned its employees from using popular generative AI tools such as OpenAI’s ChatGPT on company-owned devices after finding that the services were being misused.
A memo released at the end of April followed Samsung’s discovery that some staff had uploaded sensitive code to ChatGPT. The company also advised employees to exercise caution and avoid entering any private or business-related information when using ChatGPT and similar products outside the workplace.
Samsung is not alone: US investment firm JPMorgan prohibited the use of ChatGPT among its employees earlier this year, and Amazon has advised its staff not to upload any confidential information or code to ChatGPT-like services.
These tools can help engineers produce computer code, for example, speeding up their work and reducing employees’ workloads. But entering sensitive company data into such services poses a risk to businesses, potentially leading to leaks of critical information.
Meanwhile, Meta said there has been a significant increase in malware disguised as ChatGPT and similar AI tools. Since March 2023 alone, the company’s researchers have detected 10 malware families using ChatGPT and similar themes to compromise accounts across the internet, and Meta has blocked more than 1,000 such links from its platforms.
The company also said that scammers frequently distribute mobile apps or browser extensions that masquerade as ChatGPT tools. While some of these tools do offer limited ChatGPT functionality, their true goal is to steal users’ account credentials.
The criminals frequently target users’ personal accounts to gain access to a linked business page or advertising account, both of which are more likely to have a credit card attached. While Meta is making its own plans to combat this issue, untrustworthy chatbot apps have already invaded app stores.
A search for ChatGPT in Google’s Play Store returns a long list of apps from various developers. Researchers have also discovered bogus ChatGPT apps being promoted on third-party Android app stores that install malware on people’s smartphones.
That is not all. Fake apps have also been discovered on the Mac App Store, where dozens of apps claim to be OpenAI or ChatGPT products. The developers behind them are flooding the store with lookalike apps and confusing consumers with fake reviews and OpenAI’s logo.
One researcher found two such app developers, Pixelsbay and ParallelWorld, in the App Store; both apparently share the same parent company in Pakistan. It should be noted that ChatGPT has no official app.
Meanwhile, another security researcher described on social media how a website closely resembling the official OpenAI ChatGPT domain could infect a user’s device with malware that steals sensitive personal information.
While cybersecurity experts advise people to stay vigilant, steer clear of such lures, and avoid sharing sensitive information even when using credible AI chatbots, the question of international regulatory standards remains.
Fake apps and dubious websites can be taken down, but the concerns around AI run deeper. ChatGPT has not yet celebrated its first birthday, and already there is a list of similar alternatives, including Google’s Bard, Microsoft Bing, GitHub Copilot X (for coding), and many more.
Needless to say, all of this is happening while countries explore legal processes to rein in an AI sector that is largely unregulated and growing at a rapid pace. Unusual as it may sound, China, the US, the UK, the EU, Australia, and several other nations are currently seeking input on regulation.
Some believe, however, that the sector’s growth trajectory will make it hard for policymakers to set a stable legal framework, and that a flexible structure will instead be needed to keep pace with AI.