I have been under a targeted grief scam attack for about a month now, although the early stages of it started a little over two months ago, and the origin of the whole process now dates back almost five months. My colleagues in security are finding this hilarious, of course, and have encouraged me to continue the contact, for research purposes.
In that regard, it has been somewhat useful. At the very least, it has pointed me to the use, and utility, of "frictionless" as a characteristic of conversational style that can be used, surprisingly early in the process, to identify a contact as a scam, or potential scam. In addition (and somewhat relatedly), I have been intrigued by the (mostly indirect) connections between the research into online scams and frauds, and my research into the risks of the new generative artificial intelligence systems.
I started to note an oddly consistent characteristic of the email messages I was receiving. "Debra" noted that "she" was keeping an open mind as we got to know each other, as life has taught "her" that meaningful connections often begin with simple conversations, and "she" looks forward to learning more about me. Outside of work, "she" enjoys simple pleasures. "She" likes taking walks, listening to good music, reading, and spending quiet time reflecting or enjoying nature. "She" also enjoys travelling when "she" can, trying new foods, and having relaxed conversations with good company. "She" values honesty, kindness, and a good sense of humor. (I note that this seems to be copied directly from "How to Write a Generically Attractive Dating Profile in 25 Words or Less.")
"Debra" included pictures. I'm learning more about Google Lens and the reverse image search capabilities, but the additional pictures provide little to go on. The pictures could be of the same woman, but, given the "similar" pictures that Google pulls up, they could just be "blonde woman, older but still socially active and visiting the hairdresser quite regularly."
The primary characteristic is "frictionless." The emails are as polite (and pretty much as content-free) as a conversation with a genAI chatbot. (It is not beyond the bounds of possibility that an AI tool is involved.)
This issue of "friction" in relationships, or "frictionless" conversation, originated with regard to generative AI, and conversing with chatbots. But it seems to be a useful characteristic for identifying scams. Ordinary relationships have friction: disagreements between the parties to the relationship. Chatbots are primarily built to be polite, and to seldom directly challenge the person they are conversing with, so the discussions tend to be described as frictionless. The same characteristics tend to show up in conversations involved in scams.
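To make the "frictionless" idea concrete, here is a toy sketch of how one might score a message for it: count generic, agreeable boilerplate phrases against markers of actual disagreement or pushback. The phrase lists and the scoring function are entirely my own illustrative assumptions, not a vetted lexicon or a real detection tool; a serious classifier would need far more than substring matching.

```python
# Toy "frictionless" scorer. The two phrase lists below are
# illustrative assumptions chosen for this example, not a
# validated scam-detection lexicon.

GENERIC_PHRASES = [
    "meaningful connection", "simple pleasures", "open mind",
    "good sense of humor", "honesty", "kindness",
    "looking forward to learning more",
]
FRICTION_MARKERS = [
    "disagree", "actually", "why would", "that's not",
    "but you said", "i doubt",
]

def frictionless_score(message: str) -> float:
    """Return a score in [0, 1]; higher means smoother and more generic.

    Counts substring hits from each list and returns the fraction of
    hits that are generic pleasantries. With no hits at all, returns
    0.5 (no signal either way).
    """
    text = message.lower()
    generic = sum(text.count(p) for p in GENERIC_PHRASES)
    friction = sum(text.count(m) for m in FRICTION_MARKERS)
    hits = generic + friction
    if hits == 0:
        return 0.5
    return generic / hits
```

Run against "Debra's" boilerplate versus an ordinary argumentative reply, the boilerplate scores near 1.0 and the argumentative text near 0.0, which is roughly the intuition: real correspondents push back, scam scripts glide.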
It's fairly obvious that "Debra" (and probably "Edmund" before her) isn't paying attention to what I'm writing. I'm not exactly hiding the fact that I'm a security expert, and my sigblock currently contains a reference to a series of postings on online frauds and scams (of which series this posting is a part).
As noted elsewhere, the frictionless nature of the messages that "Edmund" or "Debra" write raises the suspicion that the scammer is using some kind of genAI tool to generate their responses. The messages, as noted above, are pretty content-free. As a test, I took one of the messages that *I* sent, asked a few chatbots to create responses to it, and got results that, while not word-for-word identical, were, effectively, basically the same. I suppose I should save time by simply having a chatbot write my responses to "Debra." So I did.
Interestingly, Claude and Qwen refused, noting that "Debra's" messages showed signs of being part of an online scam, and warning that I should end the correspondence. However, ChatGPT, Meta AI, and DeepSeek were all happy to comply, with no warnings of the danger. Meta AI's was the friendliest. (ChatGPT noted that I wasn't in any position to help.) I stitched together bits of all three to compose my reply.
The genAI/LLM chatbots *really* let me down at one point. I asked them (well, the three remaining ones that didn't refuse the previous time) to respond to a later message. ChatGPT did provide a response, but it contained a pretty flat "no" as far as being involved in anything legal. That's probably safer, for the general public (although ChatGPT missed the boat on that last time), but, for my purposes of trolling the scammers, it isn't very helpful. Meta AI and DeepSeek are all in, eager to get involved with the lawyer and get on with being scammed!
But then I realized that I wasn't being fair to the chatbots. When I added a note to the effect that I *realized* that this was a scam, but wanted to continue (short of sending money), the bots were more helpful. (Well, except for Qwen. Qwen still feels that this is a really bad idea, and wants me to report the scam. Rather ironically, to the US FTC.) (Oh, and, even when informed that this is a scam, Meta AI is still all in, and wants me to hurry up and get involved with a possibly criminal power of attorney.) ChatGPT provided a reasonable and suitably cautious reply. Claude's reply was better, and more specific, and included an extra warning to be cautious. DeepSeek was complimentary, and congratulated me on my approach, as well as ending with some warnings. The reply itself was a bit weak, and it seemed to get confused about just who had had the power outage, so that wasn't terribly useful. For any future similar research, I'd probably use a combination of ChatGPT and Claude, mostly Claude.
Online scams, frauds, and other attacks (OSF series postings)
Grief table of contents and resources