The Dangers of ChatGPT: Great Personalization Tool or Great Phishing Technology? 

OpenAI’s ChatGPT has taken the world by storm.  

Apart from the millions of people who have already tried it, email service providers (ESPs) are training it on campaign data to produce highly effective email subject lines and content.

Salesforce users are exploring how ChatGPT can create formulas and validation rules. And, Microsoft has now incorporated it into its Bing search engine—there’s already talk of this being a potential “Google killer!” 

So, how does the new technology work?  

ChatGPT (short for Generative Pre-trained Transformer) uses deep learning models, trained on terabytes of web data containing billions of words, to generate answers to users’ prompts and questions.

Interactions are like talking to a person. Many say ChatGPT is the first AI application to pass the Turing test—meaning it exhibits intelligent behavior equivalent to, or indistinguishable from, a human being.  
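
Programmatic access is just as easy as the chat interface. As a rough illustration of the subject-line use case mentioned above (a minimal sketch, assuming the original pre-1.0 openai Python package; the model name, prompt, and API key are all illustrative):

```python
# Minimal sketch: asking ChatGPT for email subject lines via the API.
# Assumes the legacy (pre-1.0) "openai" Python package; the model name,
# prompt, and key below are placeholders, not a production recipe.
import openai

openai.api_key = "YOUR_API_KEY"  # never hard-code real keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model behind ChatGPT at launch
    messages=[
        {"role": "system", "content": "You are an email marketing copywriter."},
        {"role": "user", "content": "Suggest three subject lines for a spring sale campaign."},
    ],
)

print(response.choices[0].message.content)
```

A few lines of code, and anyone can generate fluent, human-sounding text at scale. That low barrier to entry is what makes the technology so attractive to marketers, and, as we’ll see, to fraudsters.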

We’ve already seen some eye-catching use cases: 

  • The UK’s Times newspaper used ChatGPT to write its daily editorial, and then asked readers to identify which content was machine-authored. Most couldn’t! 
  • Online greeting card retailer Moonpig is considering integrating ChatGPT into its systems, so customers could ask it to generate a personalized message or poem without having to scratch their heads for the right words.

ChatGPT enables innovation and productivity that could be a game changer, rivalling the creation of the web itself. But every coin has two sides. Is there a potential dark side to this new technology?  

Let’s look at how ChatGPT might create new opportunities for both cybersecurity attackers and defenders. 

Better phishing attempts 

Researchers are already seeing AI-generated code posted to cybercrime forums. One ChatGPT use case from the “dark side” is spear phishing—email scams intentionally targeted towards specific individuals, organizations or businesses. Using this tactic, fraudsters can steal data for malicious purposes or install malware on targeted computers. 

In a typical example, a company’s staff might receive an email from their “CEO,” with an urgent request to contribute to a linked document or read an attachment. The familiar name, paired with a request they might normally expect, increases plausibility—and the likelihood they’ll respond.

In fact, it’s entirely possible that the recent Royal Mail ransomware attack began with a spear phishing email. 

Because ChatGPT can produce content in the style of a nominated person, spear phishing emails will soon become even more convincing. Fraudsters can now request copy that sounds like the target’s CEO, making the message more authentic and maximizing the chance of employees falling for it.

Antisocial engineering 

ChatGPT might also enhance other forms of impersonation fraud, especially in cases where trust needs to be established.  

Using ChatGPT, fraudsters can pose more convincingly as bank employees, police officers, tax officials (who tell victims they have an outstanding bill), or service provider reps (who claim victims’ routers have been hacked and request remote access to “fix the problem”).

An increasingly common example involves messages from friends or family members who are “overseas.” They will typically say their wallet and phone have been stolen, and someone has lent them another phone to request emergency funds.   

The death of original content 

Many educators are extremely concerned that ChatGPT means students will never need to write another essay.  

Some have already put this to the test, submitting AI-generated articles—and receiving credible pass marks (provided they’re OK with not being top of the class). Fearing mass cheating, some schools are already moving away from homework essays in response.

Privacy concerns 

ChatGPT may know which planet was first photographed by the Webb space telescope (it was HIP 65426 b). But the way this data was sourced may clash with privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

The Clearview AI case, where a facial recognition database was built using image scraping, ended in multi-million dollar fines from a pan-European cross-section of regulators.  

The way ChatGPT works is not quite the same—it’s more like a food blender for data, where the original ingredients are no longer recognizable. But the question remains whether individuals intended their data to be used in this way.

Copyright infringement is a more likely scenario. In the United States, a class-action lawsuit has been filed against Stability AI, the developer of the AI art generator Stable Diffusion. The plaintiffs allege infringement of the original image owners’ rights.

And, Getty Images, the UK-based photo and art library, says it will also sue Stability AI for using its images without a license.

GPT = Get Protected Today 

While these scenarios may sound a bit apocalyptic, there are established best practices that can be used as safeguards: 

  • Vigilance is always the first line of defense. Banks and police officials will never ask for card details, PINs, or internet banking passwords. So, be highly skeptical of any similar requests.  
  • When receiving an email, check for data only a legitimate sender would have. This is why banks often include zip codes in their messages. Use of Brand Indicators for Message Authentication (BIMI) will also play a key role here, as the underlying authentication protocol (DMARC) confirms the email comes from a verified sender (see the sketch after this list). 
  • At work, implement rules to highlight emails from outside your organization. An email claiming to be from your CEO but prefaced with “External” would be an immediate red flag. 
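
To make the DMARC point concrete, here’s a minimal sketch of how a mailbox provider (or a curious user) can check the authentication records a sending domain publishes. It assumes Python with the third-party dnspython package, and example.com is a placeholder domain:

```python
# Sketch: checking whether a sending domain publishes DMARC and BIMI records.
# Requires the third-party dnspython package (pip install dnspython);
# "example.com" is a placeholder domain.
import dns.resolver


def get_txt_record(name: str) -> str | None:
    """Return the concatenated TXT record at `name`, or None if absent."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    return " ".join(b"".join(r.strings).decode() for r in answers)


domain = "example.com"
print("DMARC:", get_txt_record(f"_dmarc.{domain}"))         # e.g. v=DMARC1; p=reject; ...
print("BIMI: ", get_txt_record(f"default._bimi.{domain}"))  # e.g. v=BIMI1; l=https://.../logo.svg
```

A strict DMARC policy (p=reject) tells receiving servers to refuse mail that fails authentication, so a spoofed “CEO” email claiming to come from that domain should never reach the inbox—and BIMI then displays the brand’s verified logo as a visual cue.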

How to identify AI-authored content

We’re also seeing new tactics emerge to help identify AI-authored content. For example, tools like GPTZero were developed to help teachers and lecturers determine if their students’ essays are machine-authored.  
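
Detectors like these typically lean on statistical signals such as perplexity: how predictable a passage looks to a language model, since machine-generated prose tends to be more predictable than human writing. Here’s a toy sketch of the idea (not GPTZero’s actual method), assuming Python with the torch and transformers packages:

```python
# Toy sketch of perplexity-based AI-text detection (illustrative only,
# not GPTZero's actual method). Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the same ids as labels yields the average next-token loss.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()


# Lower scores mean more "predictable" text, which can hint at machine
# authorship--though the signal is noisy and easy to fool.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```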

Additionally, there are predictions that OpenAI will insert a watermark into text generated by ChatGPT. 
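
OpenAI hasn’t confirmed any design, but published watermarking proposals (such as the “green list” scheme from Kirchenbauer et al.) give a flavor of how detection could work. A purely illustrative toy sketch:

```python
# Toy sketch of "green list" watermark detection (illustrative only; not
# OpenAI's unannounced design). A watermarking generator would secretly
# bias sampling toward "green" tokens; a detector then checks whether
# green tokens appear more often than chance.
import hashlib


def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandomly assign roughly half of all (prev, next) pairs to the
    # green list, seeded by the previous token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0


def green_fraction(tokens: list[str]) -> float:
    pairs = list(zip(tokens, tokens[1:]))
    return sum(is_green(a, b) for a, b in pairs) / max(len(pairs), 1)


# Ordinary human text should score near 0.5; heavily watermarked output
# scores well above it, which a simple statistical test can flag.
print(round(green_fraction("the cat sat on the mat".split()), 2))
```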

Individuals will also become more creative about how they protect data that generative AIs might use. There is a famous story about rock band Van Halen demanding bowls of M&Ms with the brown ones removed as part of their concert requirements. Critics accused them of letting fame go to their heads. Then their singer, David Lee Roth, explained it was a way to check that event promoters had fully read their contracts, which contained significant safety requirements.

It’s entirely possible people will start seeding incorrect “facts” about themselves on the web (similar to how antispam vendors deploy pristine spam traps).  

For example, I could state in a blog that I was raised in Australia. (It was actually South Africa.) Anyone who knows me will know the Australian reference is untrue, and it would be a red flag if it was subsequently used in content about me! 

AI is still a force for good 

Despite the concerns I’ve outlined in this post, the overall impact of this AI revolution should be overwhelmingly positive.

It’s comparable to other critical inventions like the automobile and the computer. Jobs will change (so will lives), but new ones will emerge—especially in fields related to AI development and deployment. Perhaps most importantly, productivity will increase. 

Want to hear our predictions on how generative AI will serve the world of marketing? Check out our recent State of Email Live webinar where we do a deep dive into these opportunities.   


And in response to your unasked question, no, I didn’t use ChatGPT to write this article!