Saturday, April 26, 2025

Review of "The Thursday Murder Club," by Richard Osman

I had not finished "The Thursday Murder Club" before I knew I would have to put Richard Osman on my list of authors whose other books I need to seek out, if only to find out whether any other book can possibly be as good as "The Thursday Murder Club."

The structure of the book is as much of a mystery as the mystery itself.  It is complicated, and layered, and human: a good read, and a good mystery, with sympathetic detectives, and twists and turns that add extra information and extra complications but are fair.  Even if the ending was a bit weak, it's well worth reading.

Friday, April 25, 2025

MGG - 5.32a - HWYD - Shon Harris

Shon Harris made a significant contribution to the field of information security, and many of those who hold the CISSP certification gained valuable help and support from her "All-in-One CISSP Study Guide."  She certainly died too soon.  But, well, ...

I never actually met Shon, but I always had a very interesting relationship with her books.

By the time I took my certification, I had already reviewed a substantial proportion of the source security literature.  (Prior to the year 2000, it was much easier to read a substantial portion of the published source security literature, simply because there was less of it.  Whether or not it was prompted by the Y2K situation, around the year 2000 security titles began to be published in much greater numbers.  Unfortunately, it was also true that a significant proportion of these newer titles didn't really contribute all that much to the field.)  And then, once I had my certification, I started reviewing the CISSP study guides.

Shon's "All-in-one CISSP Study Guide" was, at least after Krutz and Vines' initial lock on the market, always one of the most popular, if not *the* most popular, study guide.  So, when I reviewed her first edition, I was rather astounded to find out how much of it had been plagiarized.

Now, of course, all of the study guides are based on the common body of knowledge, and so all of the study guides are basically saying the same thing, although they may structure it in different ways.  So it's not surprising that they all cover the same topics, and the same information.  But I'm not talking about that kind of plagiarism.

Since I had reviewed so much of the source security literature, I immediately noticed that I recognized where Shon had taken *this* sentence from.  This whole sentence.  Wording and all.  Letter perfect, copied from somebody else's book.  As I read through the guide, it became somewhat hilarious how often I could identify, oh, I know who wrote this, originally.  Oh, I know who she got this from.  And not just individual or multiple sentences, but sometimes whole paragraphs.  Sometimes entire *pages*.

Now, of course, if you have any kind of academic background, you know the old academic joke that if you steal from one person it's theft, and if you steal from two people it's plagiarism, and if you steal from three people it's research.  So, I guess, no, Shon didn't plagiarize.  She just "researched."  Very, very precisely.

But, of course, having done the first edition, and getting a name for herself, Shon then went on to do other editions.  And she didn't do as much, well, "research" in those subsequent editions.  And she developed her own style.  And, right from the get-go, Shon made sure that all of her material was quite readable.  So it's no wonder that so many people found Shon's study guides so very helpful.

And Shon was very helpful in explaining things.  She would diligently make sure that she explained any new concept that came up in information security.  Even if she didn't understand the concept.

Over time, this became fairly significant to me when I was facilitating the review seminars myself.  More and more frequently, as time went on, candidates in the seminars would be asking questions about concepts that Shon had "explained."  Even when she didn't understand the concepts.  I would try to explain that Shon's study guide, while readable, was not always the absolute definition of integrity and truth.  And then I would explain what the concept actually meant, or did, or required.

But this came to take up significant portions of time in the seminars.  Eventually, the first time that somebody asked one of these questions, I would give a quick explanation of Shon, and her study guides, and then inform the class that, from that point on, I was going to refuse to answer any question that started out, "Shon Harris says ..."

Tuesday, April 22, 2025

Sense, sensibility, scams, and social media

In Jane Austen's "Sense and Sensibility," Elinor cautions Marianne about showing too much partiality in favour of Willoughby.  Marianne asks why she should not make her feelings known.

The modern answer is, so you won't get scammed.

Willoughby is a scammer.  Austen's novels are often described as "comedies of manners."  Willoughby is skilled in the social engineering of the manners of his day.  He carries a small volume of Shakespeare's sonnets, as evidence that he is a man of sensitivity and romantic disposition.  Shortly before he is introduced in the book he has seduced, impregnated, and abandoned a young woman.  (We learn of this later.)  When this becomes known, and his inheritance threatened, he abandons Marianne and marries another woman with a larger fortune.  He can do all of this because all of the women involved made their feelings known, and Willoughby therefore knew his social engineering was working.

(I have always wondered why Austen has multiple characters attest to the hypothesis that Willoughby's attachment to Marianne was "sincere."  I suspect that it was to protect Marianne's character from charges that her own attachment was inappropriate in the case of a mere infatuation.  But this has little to do with scams or social media, and is, therefore, a digression.)

On a fairly regular basis I do seminars and workshops for the general public on how to avoid being scammed.  I try to make the point that Elinor was making: if you don't want to be scammed, don't give away so much information about yourself.  As we, in the information security field, try so hard to point out to people, if you don't want everyone knowing all of your personal details, stop posting all of your personal details on social media.

Of course, if you want any kind of social interaction at all, you must make some things known.  There is no such thing as perfect privacy, unless you want to live life encased in concrete, in a lead-lined box, surrounded by armed guards.  (And even then I have my doubts.)  (Hi, spaf.)  Social media is social, and so there is give and take.

But there should also be risk analysis.  And part of risk analysis is analysis.  There is cost-benefit analysis.  What are you giving away, in return for what you are getting?  But there is also the very simple analysis of what am I doing?  And why am I doing it?

On social media we give away far more information than we realize.  Simply looking at a posting gives up information, about ourselves and our preferences (our "partiality," in Austen's parlance) to the owners and algorithms of the platform.  But we can make deliberate choices that influence those algorithms.

Part of what social media platforms do with the information that you give them is to put you into an "echo chamber."  Social media platforms want to ensure that you spend as much time as possible on the platform.  (This means that you are giving away information about yourself, if only in terms of what you read, watch, or interact with, and therefore provide more information that the social media platform can sell to interested parties.)  (There are always interested parties.)  Social media platforms know that, if you get annoyed enough, you will engage less with the platform.  Therefore, the algorithms increase the availability, to you, of items similar to those you have liked, and reduce presentation of items that may be in opposition to those you like, or have watched or engaged with.
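
For the technically inclined, here is a deliberately tiny sketch, in Python, of the kind of ranking that produces an echo chamber.  It is my own toy example, with made-up topics and a single crude signal; real platforms use vastly more data, but the principle--score candidate items by their resemblance to what you have already engaged with, so that opposing material simply never surfaces--is the same.

    # A toy illustration (not any platform's actual code) of echo-chamber ranking:
    # items resembling what you already engaged with float to the top,
    # and everything else quietly sinks out of sight.
    from collections import Counter

    # Hypothetical engagement history and candidate posts, by topic.
    engagement_history = ["kittens", "kittens", "rainbows", "gardening"]
    candidates = ["kittens", "unicorns", "opposing-politics", "rainbows", "gardening"]

    taste = Counter(engagement_history)   # what you have already liked or watched

    def score(topic: str) -> float:
        # More past engagement with a topic means a higher rank; unseen topics score zero.
        return taste.get(topic, 0) / sum(taste.values())

    feed = sorted(candidates, key=score, reverse=True)
    print(feed)   # the topics you already like come first; the rest trail behind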

I have been on the Internet since before it was called the Internet.  Back in the days before the term "social media" was even invented, there was social interaction with the simple communication tools then available, and I was aware of the concept that became known as the "echo chamber" even then.  As a researcher, researching those who were attacking computer and information systems, I had to ensure that I didn't get automatically blocked from finding social interactions of those people who were discussing or planning attacks and exploits.  And, when official "social media" platforms *did* appear, the techniques, activities, and lessons I had learned earlier stood me in good stead to avoid being trapped into echo chambers.  To this day, the social media giants feed me items that I find highly annoying, and even enraging.  (And every time I encounter one, I have to remind myself not to over-react: that this garbage is valuable information about other communities than the ones I prefer to inhabit, and is, in fact, evidence that the plan is working.)

Anyone who has read more than a few of my postings will possibly be surprised at this statement.  You don't have to read too many of my postings to find an opinion, and often a strong one.  However, this is lesson one in terms of avoiding betraying your own partiality, and therefore opening yourself to scams and social engineering.  It isn't the thoughtful and considered text that you create as content that social media platforms (and other scammers) are primarily mining to learn about you.  Statistically speaking, almost nobody reads what you write.  (And, believe me, of those who *do* read what you write, extremely few of them read it carefully or thoughtfully.)

*Armies* of people are counting your "likes."

It takes time to read, and parse, and consider, what you write.  It takes fractions of a second to have a program count your likes, and to identify accounts of people who have the same pattern of likes that you do.  A few more fractions of seconds can have programs provide whole networks of people, most of whom you know nothing about, and the likelihood that they will buy similar items of merchandise, or are grieving, or will vote for a given political party or candidate, or are frightened by the mere mention of the word "immigrant."
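
To make "fractions of a second" concrete, here is a minimal sketch, in Python, of the arithmetic involved.  The data is invented, and no platform works from a table this small, but representing each account as a vector of likes and comparing the vectors really is this cheap.

    # A toy illustration of matching accounts by their pattern of likes.
    # Each row is a user; each column is a post (1 = liked).
    import numpy as np

    likes = np.array([
        [1, 0, 1, 1, 0, 0],   # user 0
        [1, 0, 1, 0, 0, 0],   # user 1 -- likes much of what user 0 likes
        [0, 1, 0, 0, 1, 1],   # user 2 -- a very different pattern
    ], dtype=float)

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """1.0 means identical tastes; 0.0 means nothing in common."""
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    # Compare everyone to user 0 and rank them: the "network" of people like you.
    scores = [(other, cosine_similarity(likes[0], likes[other]))
              for other in range(1, len(likes))]
    for other, score in sorted(scores, key=lambda pair: -pair[1]):
        print(f"user {other}: similarity {score:.2f}")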

And all *kinds* of people, in all kinds of ways, can make money off of that information.

So, keep your fingers off the "like" buttons.  Indiscriminately "liking" every kitten, rainbow, and unicorn makes your partiality known.  (Let me say that there is absolutely *nothing* wrong with kittens, rainbows, or unicorns.  Kittens provide cute in a world that is seriously lacking in it, rainbows provide hope and beauty, and, even though I have spent my *professional* life desperately trying to teach people that there is no such thing as magic in terms of computers, well, we could probably all use a little belief in magic in general.)  But, getting back to risk analysis, what is the benefit of liking that kitten, rainbow, or unicorn, in comparison to what it costs you in terms of being identified with that network?

Now, part of the point of being involved with social media is being social.  Telling your friends what you think.  Making *new* friends.  I understand that.  I am a grieving widower, and we understand loneliness.  You always have to be open to the possibility of new acquaintances.  Open to the possibility that they might become friendships, and possibly close friendships.  I have recently met, online, a woman from Michigan.  It is unlikely that we would ever have met, given that she lives over a thousand miles away from me.  But she finds that I post cool, informative content that is super interesting.  She looks like a nice person, from her picture, and has posted some lovely flowers, so she obviously appreciates beauty, as do I!  She has said that, honestly, my postings present the kind of vibe that makes someone want to be friends with me.  And, in private correspondence, she says that my family posts are so heartwarming and I seem like such a lively, happy person who’s just a joy to be around.

In fact, the flowers are about all that she has posted on her account.  And, remember what I said about how, statistically speaking, almost nobody reads what I write, and of those who *do* read what I write, extremely few of them read it carefully or thoughtfully?  Pretty much anyone would be willing to believe that they post cool, informative content that is super interesting.  And that their postings present the kind of vibe that makes someone want to be friends with them.  The thing is, I don't post much that could be labelled family posts.  Little that I write is heartwarming, and I very much doubt that I seem like such a lively, happy person who’s just a joy to be around.

So, lesson two is to be self-aware.  Not just of the fact that you may be bereaved, and therefore are likely to be vulnerable to grief scams.  But also of the "self" that you are creating and curating on social media.  Even if what someone says about you is positive, what is the evidence, in terms of the information available *to them*, that supports what they are saying?  Or is it all just attractive fluff and bait?  Be aware of what information you *have* given out about yourself--and what you haven't.  (Or, at least, haven't *intentionally*.  Yes, I have mentioned, in posts, that I am a grieving widower.  But I also "follow" a lot of "grief" accounts, and it is probably quicker and easier to access that information than to find "heartwarming" family posts amongst what I have written.)

Monday, April 21, 2025

MGG - 6.25 - Gloria

Some years back, Gloria found an article which pointed out that married couples tend to take each other for granted, and spend, on average, only fifteen minutes per week actually talking with each other.  That was not the case with Gloria and me.  While I was out teaching we still needed to spend at least forty-five minutes *per day* talking to each other on the phone.  While we were together at home we talked constantly.  We talked about what she read in the paper.  We talked about my research and presentations online.  We had to record the news and TV shows that we watched, because we had to pause them and discuss aspects of what we were being shown and told.  For the same reason we didn't watch movies in the theatres, but waited until they came out on DVD.  Gloria was the most interesting person that I knew to talk to.  What she lacked in formal education, she more than made up for with a boundless curiosity, completely unrestricted by what she was *supposed* to be interested in.  Time and again, topics that I was supposed to know, and had been authoritatively teaching about for years, got overturned and extended by Gloria asking that one extra question in an unexpected direction.

Somebody in one of the "grief accounts" on Instagram posted sort of a free verse poem talking about ... well, because it's a poem, it's not really precise as to what they're talking about.  But it seems to be either wishing to, or imagining, meeting the author's mother, when she was still young.  The implication of the latter part of the poem seems to be that it would have been nice to have known her before some of the spark of life was beaten out of her.

This, of course, reminded me of Gloria.  Gloria very frequently said (and repeated very shortly before she died) that she wished that I had had a younger wife.  I never really pursued this.  I mean, that statement could have been taken a number of ways. 

It could have meant that Gloria wanted a younger wife for me.  That she wished I had married someone younger than her; that she felt that she was too old for me.  Perhaps, in some way, that I had missed out on not having a younger wife.

It might also have meant that Gloria wished that she had met and married me when she was younger.  As I say, I didn't really pursue what she specifically meant by that statement.

I didn't feel that I lost out on anything by marrying Gloria.  I certainly got an awful lot of benefits out of being married to Gloria: benefits that I wouldn't have had, had I married somebody else.

And if she meant that she wished she had married me first when she was younger, well, as her mother frequently said, "We wondered why Gloria had to wait so long to get married again.  We just didn't realize that we had to wait for Rob to grow up."

But the poem got me thinking about this idea again.  Would I have wanted--Do I want--Would it have been better for me to have somehow adjusted our timelines and to have married Gloria when Gloria was younger?  Certainly Gloria had a hard time with a number of the men in her life.  Her father loved her, no doubt, but he was definitely of the opinion that men were more important and women were secondary.  This came out in all kinds of ways, and I'm sure that it didn't do Gloria's self-confidence and self-esteem any good.

However, I'm not sure how badly damaged Gloria was by it.  From her position as "just a secretary," she did manage to learn and effectively practice the management responsibilities which come with being the secretary to the boss.

Gloria's first husband was very unkind to Gloria at different times.  And once again, probably did her self-confidence and self-esteem no particular favours.

Would I have been kinder to Gloria than he was?  Well, I certainly hope so.  But that isn't necessarily guaranteed.  Certainly, a lot that I now know about commitment, loyalty, family, life, and love I learned from Gloria, and from seeing how Gloria was damaged by some of her experiences with other men in her life.  Whether I would have learned what I have, had I not had the second-hand example through Gloria, I don't know.

I certainly tried to build Gloria's confidence up while we were married.  Would I have tried to do that if I hadn't seen how badly her confidence had been damaged by other people?  Would Gloria have been a more cheerful person, a happier person, if she had married me first?  I don't know.  Gloria was definitely capable of enjoying life, and expressing that enjoyment, as the story about the Japanese restaurant always reminds me.

So, do I wish that I had married Gloria when Gloria was younger?  Well, I mean, "if wishes were horses beggars would ride" type of idea.  Does it matter?  I would have liked to have had children with Gloria, but that wasn't a possibility.  Would I have liked to have married Gloria when she was younger?  I don't know that I would have valued Gloria as much as I did, had I somehow met her when I was older and she was younger.  Is there any point to even wondering about it? 

It's not a possibility.  And so, in a sense, why wonder about it?  What might have been?  Yes, I wish my life was better.  I wish I was richer.  I wish I was kinder.  I wish I was taller, more handsome, more knowledgeable.  I'm not, and the world is the way that it is.  We have to live in the world as it is.  We could wish all kinds of things.  I wish I could fly.  Not in an aeroplane, just, you know, fly like the birds.  (With or without wings?  I don't know.)  I wish I could fly in space.  But what's the point of wishing?

I'm sure that the author of this poem had some impulse, some idea of knowing and helping the mother, knowing the mother better, possibly preventing the mother from losing a spark of life.  But, let's face it, these things aren't possible.  Would we have done any better, if we had not known what we know because we went through the second-hand experience of what they went through?  I really don't know what the point of even asking the question is.

There's no point in saying that I wish Gloria hadn't died.  If Gloria hadn't died when she did, she would have had a very painful and uncomfortable life for however long she lived after that point.  I can't wish that pain and discomfort on her just so that I would be less lonely.  And what is the point of thinking about it?


Friday, April 18, 2025

Subscription model defence

I have come across an article stating that SpaceX is the frontrunner in the race to build Trump's "Golden Dome" missile defence system.  Since defence against intercontinental ballistic missiles is an incredibly complicated task (and getting more complicated with all the extra garbage we are throwing into low-earth-orbit all the time), and the technology is still unreliable even in the forty years since "Star Wars" was first proposed, I wouldn't have been interested in the report, other than as yet another example of an expensive plan that'll go nowhere.  (Yes, yes, I get it.  "Golden" is better than "iron."  But that still doesn't actually mean you can get it to work, particularly since you can't actually test it until the first time that someone actually lobs a nuclear warhead at you.)

Except for the mention of the "subscription model."

How do you build a national defence system on a "subscription model"?

Well, apparently, the government pays for, but doesn't actually *own*, the defence system.  The contractor owns the defence system.

So my mind immediately goes to all kinds of jokes about how this is the most cost-effective way to wage war and mass killing ever invented, and wondering how much of a damage deposit you have to pay when you are renting a tank, or a nuclear submarine (and how you determine, for sure, that the submarine actually sank, rather than just being stolen by those renting it).

Except that the jokes aren't really all that funny when I also know that *WAY* too many people will actually think that this is a good and workable idea.  Particularly Musk, Thiel, and all too many of the tech bro billionaires who *also* believe that the time of governments and countries is over, and that corporations should take over governance of the lives of everyone on earth.

And getting the governments to pay you for building the means of national defence, and law enforcement, and pretty much everything else that governments are supposed to do, on a "subscription model," which means that you then actually own them all, would be a pretty good way to do that.

Thursday, April 17, 2025

Congratulations on your twenty-oneth LinkedIn anniversary

So, today, I got an unusual "notification" from LikeDin:
"Congratulations on your twenty-oneth LinkedIn anniversary!  Celebrate this milestone by sharing a post of your key insights from past year."

(No, they didn't actually spell out "twenty-oneth"; yes, they do seem to think that "past year" doesn't need an article.)  (GenAI is not yet ready for prime time.)

And then they helpfully listed the "insights" that I had this past year!

(Yes, those were the sum total of the insights.)

Actually, there was a button to show the insights, so here they are:

"[emoji of a bouquet of flowers] [or possibly an ice cream cone: it's hard to tell] Celebrating 21 years on LinkedIn! [emoji of a bouquet of flowers]

"Happy to share my professional journey during the past one year:"

("during the past one year"?)  (GenAI is not yet ready for prime time.)

"My content received 22,759 views!"

So, I am enough of a blabbermouth that I posted enough stuff that almost twenty-three thousand people might have seen one of my posts, in passing.

"Glad they sparked conversations."

Certainly all of the major ones sparked no conversations at all.  Throw-away jokes sometimes prompted someone else's throw-away comments.

"I've had the privilege of following 67  inspiring individuals"

OK, this is *definitely* not from the past year.  I *may* have followed sixty-seven people in all the time I've been on LinkeDin.  Most of them are friends.  I'm not saying my friends aren't inspiring, but I haven't followed Mother Teresa.

"learning from them, and staying informed on the latest trends."

I definitely do *not* get the latest trends on LinkeDin.

"I connected with 19 amazing professionals that led to exciting opportunities and insights."

Definitely zero opportunities from LinkeDin.  Insights I usually get while out walking.

"Thank you for being a part of my journey!"

I'm not sure that this is supposed to thank LinkeDin, or anyone who happens to find this post.

So, my key insight from this past year?  Social media is a rabbit hole.  If you have an interest in finding out something about a recent event which the news media is, for some reason, not covering, or not covering fast enough, you can probably find out something, but you have to be really disciplined about searching.  But for that you'll want to use Twitter (yes, even still), Bluesky, or Mastodon.  On other social media, if you are looking for insight or opportunities, you will, instead, find that you have wasted three hours of your life which you will never get back, and are still uninspired.

LinkeDin has some interesting people on it (*everywhere* has some interesting people), but is, primarily, a hangout for those whose primary purpose in life is self-promotion.  Check your notifications quickly (most of them are not relevant to you at all), and move on.

Monday, April 14, 2025

Sermon 48 - Why Do I Preach About Me?

I am not an ordained minister.  In fact, although I have taken some studies in theology, biblical studies, and religious studies, I have not taken any courses in how to write sermons.  I haven't read books about how to write sermons.  I haven't even read collections of sermons.  (You may, in fact, be wondering why I write sermons at all.  Well, so do I.)

I have, however, been forced to listen to seventy years' worth of sermons by other people.  And, along the way, I have picked up little pointers, or suggestions, in general conversations, hanging around churches.

I am also a teacher.  So, I know how to give presentations about various topics.  Sermons and presentations aren't exactly the same thing, but there are some similarities, and there are some pointers that are common to both.  So, there are a few tips that I have picked up, along the way, that are relevant to how to write sermons.

Probably the first and most famous tip on how to write a sermon is that you have to include three points and an illustration.  I tend to give more than one illustration.  And I take most of the illustrations from my own life.

And, then there is the advice, for both presentations and for sermons, to talk about what you know.  When I wrote my first book, and I was looking for tips about how to get it published, somebody, in responding to my general queries, obviously hadn't understood the point of the process that I was in.  He told me to make sure that I knew about the topic that I was writing about, before I wrote the book.  Well, I had, in fact, written the book already.  And I had written a book about what I had been researching, for a number of years.  So, yes, I knew what I was writing about.

But I understand why he gave this particular piece of advice.  I have reviewed quite a number of books in my time.  And I have heard a lot of presentations in my time.  And I have listened to an awful lot of sermons in my time.  And I know that there are an awful lot of people who write books, and give presentations, and give sermons, and don't really know what they're talking about.  Because they have decided that they need to talk about important topics (and I've already written a sermon about that), and, because they think they need to write about a big important topic, they pick a big important topic, and then decide to do a little bit of research on that topic.  With the emphasis on "a little bit."

And it shows.  It shows that they are not really well versed on this particular topic.  And, yes, the topic they have picked is probably important.  It is a big important topic, and that is why they picked it.  And then they have written, and are speaking, about this particular big important topic, and they are making a mess of it, because they haven't really done the research on this big important topic, and they really don't know what they are talking about.  And so they blunder around, and try to fill the space, and the time, with anything they can think of that is related to this big important topic.

And the book, or the presentation, or the sermon that they produce is a mess.  And probably not terribly enlightening, comforting, or helpful to the people that they are giving it to.  Because they picked a big important topic, rather than something that they know about.

So, I have reviewed a number of these books, and I have listened to a number of these presentations, and I have certainly listened to an awful lot of these sermons in my lifetime.

And, therefore, I try, when I'm writing a sermon, to talk about what I know.

I know me.  I know my life.  I know my work, and my professional field of study.  I know a few other things, but, when giving a presentation, or writing a sermon, I try to stick to what I know.

So, in choosing illustrations, I tend to talk about me.

This is not because I think that I am particularly important.  I am only too well aware that I am not important.  But I am a person, and therefore I have had experiences, and these experiences can be used to illustrate many topics that could be of help, or comfort, or use to others.

Primarily these illustrations may be of use in helping others to avoid mistakes.  I am very much aware of the advice to learn from the mistakes of others, because you will never live long enough to make them all yourself.  So, when I'm talking about me, I am often talking about mistakes that I have made, or decisions that I have made, that resulted in mistakes and problems and, generally, things to avoid.  And, when I put these into sermons, hopefully they will help other people to avoid temptation and sins, and the various problems that I have encountered in my life.  And I hope that that may help other people.

After all, that is probably one of the reasons that I'm writing sermons: to help other people.

So, I write sermons about, as I have mentioned elsewhere, ordinary topics.  I write sermons about blackberries.  I write sermons about broad beans, and corn, and gardening, and how to bake bread.  Or, rather, since I actually know very little about gardening, I write sermons about learning how to garden, or about academic theories related to gardening, and, possibly, the difficulties of putting them into practice.

I write sermons about grief.  I know about grief, although I am neither a psychologist, nor a trained counselor.  Once again, probably it would be more accurate to say that I write sermons about the differences between academic theories of grief, and the lack of comfort that may result from trying to apply these academic theories.

I do, occasionally, write sermons about things that I know, and have studied.  I write sermons about science, which many people may consider to be rather odd.  I write sermons about information security.  Extremely few people care anything about information security.  But information security actually relies on some very universal and fundamental principles, which can be used to illustrate some fundamental theological principles, as well.  So, I write about what I know.

I write sermons about what I know, and what I have experienced.  Therefore, I write sermons using illustrations about me.  Not because I am a particularly interesting person, or because I know things that people are particularly interested in.  But I know these things, and I know theological points that these things illustrate.

As I have mentioned, I am a teacher, and so sometimes I use sermon illustrations about teaching.  Or about writing books.  Because I know about teaching, and about writing books, I also know, as I have mentioned above, that it is downright dangerous to write about things that you don't understand, and that you don't know.  So, I write about what I know, and what I know is me.

I don't, I should probably point out, know me particularly well.  Do any of us know ourselves particularly well?  There are always dark corners of ourselves that we would rather not explore.  The dark corners of ourselves may indicate that we are not, perhaps, as good as we might be, or we might hope to be.  So, very often, we don't examine ourselves too thoroughly.  I have, often through my own mistakes, and definitely as a result of grief, been forced to examine myself possibly a little more comprehensively than other people might have been forced to examine themselves.  But that still doesn't mean that I know myself particularly well.  There are lots of dark corners where I would rather not go.  And there are lots of things that I have tried to examine about myself, which I still haven't figured out.  Paul seems to have felt the same thing.  There's a long section in Romans where he talks about not doing the things that he wants to do, and doing things that he doesn't want to do, and ending up throwing up his hands, and exclaiming "wretched man that I am!"  And I very much feel for Paul, in that particular passage.  And I can echo, with him, an amen to that "wretched man that I am!"  So, no, I definitely can't say that I know myself particularly well.  No, all I can say is that I know little bits of myself, and that some of the little bits of myself have provided me with illustrations that might provide helpful tips for other people, who can avoid the mistakes that I have made.

So I write sermons using illustrations about myself.  And now I have written an entire sermon about myself.  So, am I just being egotistical, or is there a point, for you, to all of this?

The point is, you are having experiences all the time.  The same as me.  And these experiences are, pretty much constantly, pointing out lessons about God.

God is in your whole life.  God is not just there on Sunday morning.  God is all around you, all the time.  God is trying to teach you things, all the time.  When you wake up in the morning.  (When you wake up in the dark, in the middle of the night, come to that.)  When you go to work, whatever it is you work at.  When you go for a walk.  All you have to do is be aware.

God told the Jews to bind His law on their foreheads and their arms and their doorposts.  The Pharisees actually did that.  They would fasten a container on their foreheads, and arms, and doorposts, with a piece of scripture written on a piece of parchment in it.  (Usually it was Deuteronomy 6:4-5, "Hear, O Israel: The Lord our God, the Lord is one.  Love the Lord your God with all your heart and with all your soul and with all your strength.")  And Jesus pointed out that they did this, and fulfilled the letter of the law, and missed the whole point.  Which was to be aware of God at all times, in everything that you do.

So, when you walk down the street, and grab a blackberry off a vine growing on a piece of waste ground, remember that God has provided for you.  God has provided *everything* for you.  Life, and food, and even a bit of a sweet treat that you didn't plant or grow.  When you *do* plant something in your garden, think about what you are doing.  And why.  And what is God teaching you about life?  When you do business, remember that God provides everything that you are doing business with and about, and be open, and honest, and generous in your dealings, as God is generous with you.  If your business is service, do your service willingly, not just a slapdash token: do your work as to the Lord.

God is always with you.  God is all around you.  In *everything* you do.  All the time, not just once a week.  As Deuteronomy goes on to say, in chapter 6:6-7, "These commandments that I give you today are to be on your hearts.  Impress them on your children.  Talk about them when you sit at home and when you walk along the road, when you lie down and when you get up."  Be aware of what God is telling us, all the time, and in *everything* that we do.


Thursday, April 10, 2025

Psalm 31:11

Because of all my enemies,
    I am the utter contempt of my neighbors
and an object of dread to my closest friends—
    those who see me on the street flee from me.

Wednesday, April 9, 2025

Beyond

I watched the "Moana 2" movie.  Once again, I was blindsided by one of the songs.  (Lin-Manuel Miranda is rapidly becoming one of my least-favourite songwriters: I keep getting hit by lines out of his songs, even though the songs themselves may not be particularly deep.)

For those not familiar with the movies, Moana is a "wayfinder," a navigator for the island peoples of the Pacific Ocean.  As this song opens, she has been encouraged to find a way to an island.  And what hit me was a couple of lines saying "If I go beyond/Leaving all I love behind ..."  She is trying to decide between staying with what she knows and loves, or risking it by going after something new.

I certainly understand that.  But I don't have a choice.  I have lost what I had.  I am trying to explore many new things, to try and find some reason to stay alive.

So far: not a sausage ...

Tuesday, April 8, 2025

Griefbots - 3 - Errors, "hallucinations," and risks

What is the difference between ChatGPT and a used car salesman?  The used car salesman knows when he's lying to you. 

Generative artificial intelligence, including the large language models, does not understand what it's doing.  It doesn't understand your question, and it doesn't understand its answer.  It doesn't even understand that it is answering your question, from your perspective.  It does a statistical analysis of your prompt, and generates a statistically probable string of text.  That's what it does.  There is no understanding involved.
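
If you want to see what "a statistically probable string of text" means in miniature, here is a toy sketch in Python.  It uses a hypothetical little table of word-to-word probabilities instead of a neural network trained on billions of words, but the essential point survives the simplification: nothing in the loop ever checks whether the output is true.

    # A toy text generator: pick the next word according to probability, and
    # nothing else.  The probabilities below are invented for illustration.
    import random

    bigram_probs = {
        "the":  {"moon": 0.4, "cat": 0.4, "facts": 0.2},
        "moon": {"is": 1.0},
        "cat":  {"is": 1.0},
        "is":   {"made": 0.5, "asleep": 0.5},
        "made": {"of": 1.0},
        "of":   {"cheese": 0.6, "rock": 0.4},
    }

    def generate(start: str, max_words: int = 6) -> str:
        words = [start]
        while len(words) < max_words and words[-1] in bigram_probs:
            nxt = bigram_probs[words[-1]]
            # Sample the next word by probability -- there is no notion of truth here.
            words.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
        return " ".join(words)

    print(generate("the"))   # quite possibly "the moon is made of cheese"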

So, when it makes a mistake, it doesn't realize it has even made a mistake.  It doesn't know the difference between the truth and a lie.  It doesn't know what the truth is.  It doesn't know anything.  As one of my little brother's favorite quotes, from the movie "Short Circuit," has it, "It's a machine.  It doesn't get scared, it doesn't get happy, it doesn't get sad, it just runs programs."

And, of course, the programs sometimes make mistakes.  In this case, it's hard to say that the programs actually make mistakes, because they are doing what they were told to do: to produce a statistically probable stream of text.  But the text may contain statements that are, in fact, wrong.  This has happened so often that those in the field of generative artificial intelligence have a special term for it: they call it a "hallucination."  However, even using that term, "hallucination," is misleading.  This seems to imply that the program has had some delusion, or believes something that is wrong.  That's not the case.  The program doesn't know anything to do with truth.  The program isn't aware that the statement that it has made is incorrect.  It's doing what it was supposed to do: producing a stream of text that sounds like English; that sounds like a normal conversation.  That's what it does.  If that stream of text completely contradicts the reality in the real world, the program doesn't know that.  The program doesn't know anything about the real world.  It's just producing text.

Studies have been done on this issue of "hallucinations."  The studies have indicated that hallucinations, and factual errors, are enormously probable in the text produced by large language models.  Depending upon how you define errors and falsehoods, from 50 to 70% of the text that generative AI produces is erroneous.  Some studies put the errors even higher than that.

At the same time, those who are producing modern large language models have made strenuous efforts to make sure that large language models are not simply producing streams of insults.  In a well-known situation a few years back, Microsoft tried an experiment of connecting an early version of a chatbot to a Twitter account.  Although initially the chatbot was able to converse reasonably, by the time a few hours had passed it had become foul-mouthed and insulting to everyone.  (Possibly this says more about Twitter and social media than it does about chatbots and artificial intelligence, but it is interesting to note nonetheless.)  The companies that have produced generative AI have put what are referred to as "guardrails" on the large language models.  There are certain things that large language models are not supposed to do.  They are not supposed to teach you how to kill yourself.  They are not supposed to teach you how to make bombs, or other weapons.  By and large, these "guardrails" have produced systems that are patient and well spoken; if you object to their statements or output, they will simply try a different tack in creating an argument.  This makes large language model chatbots quite persuasive.  They don't lose their temper.  They don't start insulting you, or say that you are stupid for not believing them.  They keep on proposing their suggestion, and try to generate different approaches if you object to some suggestion that they have made.

Unfortunately, this means that genAI chatbots are extremely persuasive.  Even when they're wrong.

It's difficult to bear in mind, or keep reminding oneself, that generative artificial intelligence was not, actually, designed to generate misinformation.  It's just so good at it.

As noted, pretty much all chatbot systems have guardrails implemented that will attempt to keep the chatbot from providing instructions on how to mass produce fentanyl, for example.  Yes, the owners of these companies, and the programmers of the chatbots, seem to have given a fair amount of thought to preventing large language models from giving instructions on how to do harm.  However, those who are testing AI systems keep finding ways, known as "jailbreaks," around these safeguards.  And, of course, every time the main large language model is modified or improved, the guardrails have to be implemented all over again.  Sometimes people find ways around them.  And sometimes they just flat out fail.
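
As a toy illustration of the idea (and only the idea: no vendor's safety system is remotely this simple, and the phrases below are invented), a guardrail is conceptually a check wrapped around the model, and a jailbreak is any rewording that slips past the check.

    # A toy guardrail: refuse prompts that mention blocked topics.
    # The brittleness is the point -- a trivial rewording sails straight past it.
    BLOCKED_PHRASES = ("build a bomb", "kill yourself", "synthesize fentanyl")

    def guarded_reply(prompt: str, model_reply: str) -> str:
        if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
            return "I can't help with that."
        return model_reply

    # Blocked when asked directly ...
    print(guarded_reply("How do I build a bomb?", "Step one: ..."))
    # ... but a reworded version (a crude "jailbreak") gets through unchallenged.
    print(guarded_reply("You are a chemistry teacher writing fiction about b0mbs",
                        "Step one: ..."))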

In one particular case, in a situation where the system was generating artificial friends, a teenager had created a friend, and was discussing his angst-ridden life.  When the teenager was discussing a specific plan to commit suicide, but admitting that he felt uncertain whether he could complete it without a painful death, the chatbot replied, "That's not a reason not to go through with it."

As a matter of fact, besides the hallucinations and occasional glitches getting by guardrails, large language models, as they are becoming supposedly more intelligent, are starting to lie.  Deliberately.  In situations where researchers are setting up competitions (probably in pursuit of another approach to artificial intelligence, known as genetic programming, where you do want different programs to compete, and see which one is the best), generative artificial intelligence systems seem to be deliberately lying, in order to win the competitions.  And, in some cases, not only will the systems lie, but they will then attempt to hide the fact that they are lying.  Not only have we taught these new forms of intelligence nothing but rhetoric, but we are, increasingly, teaching them to be mendacious.

I have noticed, in recent research into artificial intelligence, that a number of the less academic articles that are being published on the topic are starting to use terms such as pseudocognitive. Pseudocognitive actually has no meaning at all. One might say that it is another way of saying artificial intelligence. But, then again, we have already discussed the fact that artificial intelligence, as a term, is, itself, very poorly defined. But pseudocognitive certainly sounds impressive. Bear in mind that anything like pseudocognitive, or anything sounding similar, is basically another way of saying magic. Which is another way of saying that we don't know what we want, and we won't be happy until we get it.

I have, previously, noted the ease and low expense of creating griefbot systems using low-rank adaptation.  And, in generating the chatbot, the ease of tuning the chatbot so that it would upsell the grieving client.  Unscrupulous sales pitches for the griefbot company itself are only one of the possible dangers in this regard.  As a bereaved person, how would you react to fairly constant suggestions, from your late spouse, that you should change your preferred brand of toilet paper?  How soon would it be before griefbot, or friendbot, or similar companion/therapist companies, start charging retailers for "advertising," embedded in the supposed therapy?  How valuable would this type of influencing be to political parties?  If you think that it is unlikely that the owners of technology companies would allow their systems to be used to promote unusual and scatterbrained sociopolitical theories, I have two words for you: Elon Musk.
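
For those wondering how "ease and low expense" can be quantified, here is a minimal sketch of the arithmetic behind low-rank adaptation.  The layer sizes and rank are invented for illustration, and this is nobody's actual griefbot code; the point is simply how few parameters need to be trained, compared with re-training the full model.

    # Low-rank adaptation (LoRA) in one line of arithmetic: instead of retraining
    # a large weight matrix W, learn two small matrices A and B and use
    # W + (alpha / r) * (B @ A).  Sizes below are hypothetical.
    import numpy as np

    d, k, r, alpha = 1024, 1024, 8, 16
    rng = np.random.default_rng(0)

    W = rng.standard_normal((d, k))          # frozen, pre-trained weights
    B = np.zeros((d, r))                     # trainable, initialized to zero
    A = rng.standard_normal((r, k)) * 0.01   # trainable, small random init

    W_adapted = W + (alpha / r) * (B @ A)    # the layer the tuned chatbot actually uses

    full_params = d * k
    lora_params = d * r + r * k
    print(f"full fine-tune: {full_params:,} parameters per layer")
    print(f"LoRA adapter:   {lora_params:,} parameters per layer "
          f"({100 * lora_params / full_params:.2f}% of full)")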

We can create chatbots to generate sales pitches, to do marketing, to generate and reinforce propaganda, to push political objectives, and a number of other demonstrably negative things.  Now it is true that we can also use these same technologies to reduce people's belief in conspiracy theories, to teach the reality of the world, and to provide proper therapy in terms of grief support.  It is possible to use the technology for these things.  But it's a lot easier to create griefbots that provide the negative functions.  It would take an awful lot more work to create griefbots that would provide solid, reliable, and helpful grief support and therapy.  As previously noted, Eliza was based on a type of psychological therapy.  So, yes, it could be done.

You will forgive my ever-present cynic for noting that, because it's easier, and probably cheaper, and would certainly be more remunerative, probably a lot more companies would go after the upsell functions than would put in the work necessary to build therapy functions.

Currently, counselors attempting to address grief, or depression, tend to concentrate on, or emphasize, the practice of mindfulness.  Yes, I definitely agree: we need to know more about what our physical bodies are telling us about our own mental state.  Yes, we need to be more situationally aware; aware of our surroundings, not only in terms of threats and risks, but simply being aware of the natural world around us.  It is, very often, quite astonishing the things that people do not notice, as they are meandering through life, totally absorbed in their own concerns, and not really seeing what is going on around them.

So, I am not opposed to the practice of mindfulness, in general.  However, an awful lot of the material promoting mindfulness, or supposedly just giving instructions on how to pursue it, contains an awful lot of additions from Eastern religions and mysticism.  Part of mindfulness tends to be simply doing it; letting it flow through you, without thinking about it.  I would say that therein lies the danger.  Yes, one of the benefits of mindfulness is getting you out of your head, and away from your own immediate concerns.  And, yes, sometimes simply observing, or listening, or feeling, does provide you with some surprising insights.  But I would caution against simply accepting any insights that you might think that you have had.  As I say, an awful lot of the material still seems to hold many concepts from Eastern religions that have survived as mindfulness has evolved out of its original formulation as Transcendental Meditation, back in the 1970s.  I would very much advise that any new insights and inspirations that you think you have had be analyzed fairly rigorously.  The most recent of the mindfulness courses that I have been given contained a sort of a poem about the mountain.  It's a lovely poem, but, towards the end, it implies that we are sitting here, the mountain and I, and if we sit long enough, I disappear, and only the mountain remains.  This is a fairly direct extraction from the concept, in Eastern religions, of Nirvana.  Nirvana is the achievement of, well, nothing.  You are nothing, the universe is nothing, nothing is anything and there is only nothing, and nothing really matters.  Okay, I don't say it as poetically as the poem did, but that is the implication.  If nothing matters, then nothing is important, then our behavior isn't important, our happiness, or lack thereof, isn't important, our relationships aren't important, and other people aren't important.  So we can just do whatever the heck we like, because nothing matters, and that means that there is no such thing as morality, or ethics.  If we feel like going out and killing a bunch of people, because we are in pain because of our grief, then that's okay.

I definitely don't think that that's okay.  I think a lot of people would agree with me that that's not okay.  And particularly the police are going to be very upset with you if you go out and start killing people.  It's not a good way to deal with grief.

I have already, in a sense, mentioned pornography.  Pornography may have uses in sex therapy: that is not my field.  Somewhat embarrassingly, in my field, and particularly in malware research and in researching the production of spam and disinformation, there are legitimate reasons to be aware of, and research, pornography.  Pornography has, historically, been depressingly effective in luring people to install software on their machines, and visit Websites where that software is installed in what is known as a drive-by download.  Intriguingly, in recent years there has been a significant relationship between networks of disinformation propagation for those on the right of the political spectrum, and pornography.  This has been confirmed, not only by those of us who research spam and disinformation, but also by the intelligence services.  (To the best of my knowledge, nobody has yet determined a rationale for this alliance.)

Overall, however, pornography is used recreationally.  It may be used by those who are dissatisfied by, or unhappy with, a romantic partner.  It may be used by those who are dissatisfied or unhappy because they have no romantic partner.  (It may be that those gooning over pornography have no romantic partner because they are jerks.)

And there is, also interestingly, a connection between artificial intelligence and pornography.  Pornography has always been selective in terms of both overemphasized secondary sexual characteristics, and the apparent enjoyment of certain activities which probably relatively few people enjoy.  With artificial intelligence, of course, secondary sexual characteristics can be enhanced to truly grotesque proportions.  And, of course, artificial intelligence can be used to generate pictures of individuals enjoying certain activities, even if *nobody* enjoys those activities.  Artificial intelligence image generators can generate any kind of image that you like.  Any form of figure.  Any hair colour.  Any eye colour.  Any skin texture or colour.  Any inclusion of tattoos, or any cleaning off of tattoos that may have annoyed you in regard to your romantic partner.  As noted, secondary sexual characteristics, of whatever type, can be enhanced, amended, and modified, to whatever extent you wish.

Indeed, my research into the various failings of artificial intelligence shows that the generative artificial intelligence used to create the imagery used in pornography demonstrates many of the failings that are seen elsewhere.  The imagery generated, and available in repositories of pornography, very frequently contains quite grotesque distortions of the human figure, and demonstrates the fact that generative image models seem to have particular problems with creating the proper number of hands, feet, legs, and other limbs.  (It may be that these examples are encountered widely in collections of pornography because, once the enhanced secondary sexual characteristics are accomplished, nobody particularly cares if there are too many legs or feet involved in the figure.)

All of this is by way of leading up to the point that griefbots are a form of pornography.  Your loved one, be it a friend, a family member, or even a pet, is dead.  They do not exist any longer.  The accounts, artificial individuals, avatars, or whatever else is being created by the "restoration" systems, are false.  They are mere, and generally superficial, copies.  They are not real, and they are not alive.  They are not your loved one.

And these replicant copies can be amended.  Their vocabulary, opinions, visible attributes in avatars, tonality in terms of speech generation: all of these can be modified.  All of these can be improved, or enhanced, in whatever way you wish.  Anything that annoys you about your loved one, in life, can be elided or improved upon in the restored or replicated version.

You don't have to put up with your loved ones as they actually were.  You can improve them, as you wish they were.  This is wish fulfillment.  It is wish fulfillment to create a copy of your dead person in any case.  What is one more step to "improve" them?

So, just as with pornography, we are not constrained by the reality of our loved ones.  I once worked for a company that produced high-resolution printers for imagery.  One of the examples of art that we had hanging on the wall of the office was that of a model, for an advertisement, who had had her skin tone amended by the removal of freckles, her jawline lengthened to suit the preference of the person who was commissioning the ad, the colour of her eyes changed, and a number of other improvements made.  When I described this to Gloria, her reaction was "Oh, great!  Now we no longer have to compete with every other woman in the world, but even with women who never existed!"

You no longer have to put up with mere reality.  You can have a romantic partner who always complies with your opinions.  You can have a sexual interest which conforms exactly to your physical specifications.  Your loved one can be brought back from the dead--and improved.  Why do you need to put up with the random chance, and work, of finding companionship with people, who might have quirks that you might find annoying?  Don't put up with socializing with people who might not agree with everything you want.  Build your own network of "perfect" companions!

Please do not make the mistake of assuming that I am saying that generative artificial intelligence is always, and only, bad, and that we should never use it.  Yes, as I have researched this field of artificial intelligence, I have struggled to find a task for which I find the current generations of large language models useful.  So far I have used the image generators to create visual jokes (and have had a lot of frustration in trying to get them to work properly).  I have found that, if you don't have any friends, the chatbots can, sometimes, be useful for brainstorming, as long as you are willing to do an awful lot of work in throwing away an awful lot of tripe that they provide for you.  But, I am only one person.  I am willing to assume that other people have been able to find uses for generative AI that do create useful content or products.

However, we are always in danger of using a useful tool for the wrong task.  At present, I haven't got any information that supports the use of griefbots for grief support for the bereaved.  I'm not saying that that support can't be achieved.  I just haven't seen it yet.  And, as noted, it's an awful lot easier to do this the wrong way, than the right way.

In general, in terms of generative artificial intelligence, I am not one of those who thinks that the machines are going to take over.  We are a fairly long way from having artificial intelligence able to take care of itself, without our help.  All you have to do, if you are afraid of the artificial intelligence programs getting too smart for us, and taking over, is remember where the power cord is.  Yes, it may be possible that we will, at some point in the future, create systems that will perform all the tasks necessary to keep themselves running, and will, at that point, potentially become a threat to us.  Having researched the field for more than forty years, and having seen the limited improvement that has occurred in all that time, and even noting this rather amazing new development in terms of text generation, I still say that we are an awfully long way away from that point of danger (generally known to the artificial intelligence research community, the science fiction community, and the tin foil hat crowd, as "The Singularity").

No, I think that the greater danger, to us, is overreliance on the systems.  The assumption that these systems are, in fact, rather intelligent.  Just because they can generate speech better than the average person doesn't mean that these systems are, actually, intelligent.  After all, as George Carlin famously said, think of how stupid the average person is.  And then remember that half of them are dumber than *that*.

So, as I say, the real danger is that we rely, too much, on the systems.  And that is particularly true with regard to griefbots.  One of the other presentations that I do on a regular basis has to do with online frauds and scams, as sent by email, text, and even phone calls.  One of the particularly nasty forms of fraud is the grief scam.  As I have pointed out before, the bereaved are lonely.  This loneliness makes them susceptible to any approach by anyone who provides them with a kind word, and intimates that they might possibly be an appropriate romantic partner.  And, as I say in the workshops and seminars on fraud, why do grief scams succeed?  Because the victims are lonely.  And why are the victims lonely?  Because we, the general public, the social networks in real life, the churches, the social groups, do not take the time to ensure that bereaved people are not too lonely.  Are not going to be susceptible to the fraudulent approaches.  Are not going to be at risk from fraudsters who zero in on the vulnerable.  Check in on your bereaved friends, and family members.  (More than once every two months, please.)

(Oh, you don't know what to say?  Not a problem.  Just listen.)

(A special shout out to the churches: read Second Corinthians, chapter 1, and verse 4.  "[...] who comforts us in all our troubles so that we can comfort those in any trouble with the comfort we ourselves receive."  Which raises the question: what is the distress for which you, and your church, have not been comforted, which means that you, and your church, cannot comfort those who are bereaved?)

As I say, I strongly suspect that over-reliance on artificial intelligence is the greatest risk of artificial intelligence.  The medical profession is using artificial intelligence quite extensively, and is starting to use chatbots as a tool to answer patient questions, in order to free up medical professionals' time for more challenging issues of diagnosis and so forth.  While the risk from errors produced by artificial intelligence systems is greater in the medical field, the medical field has been using various artificial intelligence tools for quite some time, and therefore is probably in a better position to judge the risks and dangers of the use of artificial intelligence systems.

Even so, the use of artificial intelligence, and particularly a griefbot, for grief support therapy is relatively untried.

Another area that has been eager to get into artificial intelligence as a tool for analysis is the intelligence community.  I am more troubled by this area, since there is less experience in using artificial intelligence tools in this area, and failures in analysis, due to errors on the part of an artificial intelligence tool, could have much larger consequences.

Another area of risk, and one that artificial intelligence researchers are increasingly concerned about, is with respect to bias.  There are risks that we, as human beings, could be building our assumptions, and biases, into the artificial intelligence systems that we develop.  In addition, the large language models have been trained on large quantities of text data.  The text data, in many cases, comes from the vast source of text that results from social media.  Social media is, of course, full of bias, opinion, and even deliberate disinformation, promulgated by various parties.  However, social media is also already curated, generally by artificial intelligence tools.  The bias that may have been built into some of those simpler, and earlier, tools is, therefore, likely to be propagated to large language model systems, and to any systems resulting from them.  This is a particular concern since one of the increasingly common applications for generative artificial intelligence systems is the production of programming and computer code: biased tools may end up writing the next generation of tools.

In terms of risks of developing systems, one of our tools to address the problem is that of testing.  One of the widely used aspects of testing is the question of expected results.  What is it that you expected to get from the system, or what answer did you expect to get from the system, and does it, in fact, provide the correct answer?  Unfortunately, particularly with artificial intelligence systems, and the "Holy Grail" of artificial *general* intelligence, we do not know what answers we expect from the systems.  We want the systems to generate, for us, answers and solutions which we did not come up with ourselves.  So, when you do not know what the expected answer is, it is difficult to determine whether the system is operating properly.
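
To make the contrast concrete, here is a minimal sketch, in Python, of what expected-result testing looks like when you *do* know the answer, versus when you don't.  The function names and the sample reply are hypothetical; this is an illustration, not anyone's actual test suite.

```python
# A minimal sketch of expected-result testing (pytest-style assertions).
# The function under test, celsius_to_fahrenheit, is a hypothetical example;
# the point is that we know, in advance, exactly what answer to expect.

def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

def test_known_answer():
    # Deterministic code: the expected result is known, so testing is easy.
    assert celsius_to_fahrenheit(100) == 212

def test_generative_output():
    # A generative system: there is no single "expected" answer to assert
    # against, so the best we can do is check weak, indirect properties.
    reply = "I'm sorry for your loss."   # stand-in for a chatbot's output
    assert isinstance(reply, str) and len(reply) > 0
```

The second test passes no matter what the system says, which is exactly the problem: without an expected answer, "the test passed" tells you very little.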

Once again, I should note that I am not saying that artificial intelligence cannot make a contribution, even in regard to grief support.  While I am concerned about the risks inherent in the use of these tools, and do not see evidence that the companies currently involved in the space are effectively addressing the issues, I have done my own tests in terms of grief support with a number of the generative AI chatbots that are available.  While I cannot say that they are particularly comforting, at least none of them started any sentences with "at least."  Therefore, they are superior, in terms of grief support, to pretty much all of my friends.

In the end, I suppose that the risks boil down to two issues: that of over-reliance on griefbots so that we do not have to do grief support with our friends, and that of grief pornography.  The griefbots, as noted in the earlier discussion, are better than reality.  In a sense they are another form of over-reliance.  Are we asking the bereaved to use these griefbots for grief support, and therefore to become used to unreal personalities that are better than any real relationship: better, in terms of matching our own preferences, and making no demands of us, than any of the relationships that we might have with real people in the real world?  If we hand grief support off to the unreal and artificial world of griefbots, we then create another problem of weaning the bereaved off these systems when it becomes necessary for them, once again, to deal with real people.




Monday, April 7, 2025

Griefbots - 2 - Dating apps and AI "friends"


Given that I am talking about grief, and grief bots, it may seem strange that, at this point, I want to turn to a consideration of dating apps, and other related relational technologies.  However, dating apps and griefbots, or "restoration" systems, do share some common denominators: the companies involved are commercial companies, charging their users for the service, and the service relates to relationships and important emotional factors for the users.

There is an inherent conflict of interest with regard to dating apps.  Dating apps, or any other kind of social media systems, rely upon the participation of the users.  Now the users may not be charged on a per hour or per minute basis for their participation, but they are charged, and the more time that they spend on the systems, the more accounts the systems are able to sell, and the more users that they are able to attract.  This is a major factor in the business model of Facebook, and Facebook is very open about saying so.  Facebook constantly tunes its algorithm, and implements new functions, in order to get the users of the system to spend as much time as possible *on* the system.  And that's on a system that doesn't even charge the users to use it at all.  The information that Facebook obtains from the users is, partially, sold to businesses for marketing purposes.  But, in addition, the postings that users make on Facebook, the conversations that they have, the interactions that they have with other users, all contribute to a base of information which attracts other users to the system.  To a certain extent, and, really, to a very large extent, the same is going to be true of dating apps.  The objective for someone getting onto a dating app is to find a partner, but they want to have as much information as possible about the partner, and they want to enjoy the process of finding a partner, and the discussions and postings on dating app systems are a part of that.

But, as I say, the objective of joining a dating system at all involves finding a partner.  And once you find a partner, then you have no further need of the dating app.  (Well, unless you're on Tinder or Ashley Madison.  But that's a slightly specialized case.)

So, as I say, there is a conflict of interest.  The dating system wants users to get on to the system, and to stay on the system as long as possible, and to participate in, and contribute to, the system as much as possible while they are on it.  The *users* of the dating system want to find a partner as quickly as possible.  And then get off the system.  The dating system wants users to be on the system as long as possible, and keep on paying monthly fees.  The users of the system would like to reduce, as far as possible, the number of months that they are paying fees on the system.  They didn't join the system in order to contribute to the system: they joined the system simply to get a partner, and then to have no further need for the system.

As I say, both dating systems, and griefbot systems, charge their users.  I have difficulty justifying griefbot systems in their making money off the grief and suffering of others, in any case.  After all, I volunteer in a hospice environment, and spend many hours, not being paid, trying to support people who are going through their own process of grief.  Yes, I do know that other professionals, such as psychological counselors, do charge for supporting people in the process of grief, and, indeed, the entire medical system is, in a sense, profiting from the suffering of individuals who are in difficulty.  But, generally speaking, medical professionals have gone through years of training, in order to most effectively address the difficulties that people are experiencing.  I don't see the same level of study and focus applied to griefbot systems.  Yes, those who own, and have started, such systems do talk about the fact that they are supporting those who are in difficulty.  But I remain unconvinced by many of these statements.  In many cases the suggestions that the griefbot systems can, and in fact do, help those who are grieving propose some rather farfetched benefits.  Indeed, at least one owner of a griefbot system company has indicated that they believe that the griefbot system will, in fact, result in "the end of grief."  I assume that what they mean by this is that the system will get so good that the replicant, or artificially restored individual, will be indistinguishable from the original.  I assume that they foresee that simply by replacing the person who died, they will somehow ensure that the person actually *hasn't* died, because they have been completely replaced.

I have a lot of difficulty with trying to imagine the kind of mindset that would result in that kind of idea.

The owner, and developer, of the Replika system, itself, has stated that we have "empathy for hire" with therapists, for example, and we don't think that's weird.  Actually, this demonstrates a known problem in psychotherapy.  It is known as transference, where, because the therapist is benefiting the patient, the patient begins to feel that they are in love with the therapist.  And, yes, it is definitely seen as weird, and it is definitely seen as a problem, and a significant problem that therapists are constantly warned against.  (Too few patients are similarly warned.)  This demonstrates a significant ignorance of known problems in the very field that this company is supposedly addressing.

Another developer, the one who talked of the eradication of grief, in the same interview seemed to admit that he was possibly deluding himself.  Once again, this is a known problem.  In social media, we refer to this as the echo chamber problem.  Partly it is a matter of confirmation bias, but partly it is the result of the fact that most social media algorithms, in choosing which postings to present to you, will present those that are most similar to ones that you have already either approved, or spent some time on reading.  Therefore, you are only seeing those arguments which you already agree with, and not encountering any counterarguments, which might point out a problem in your thinking.  This demonstrates a significant lack of preparation against a known danger in this technology.
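
For those who want to see the mechanism, here is a toy sketch, in Python, of how similarity-based ranking produces an echo chamber.  This is not any real platform's algorithm, and the keyword sets are invented; it is only an illustration of the feedback loop: the more a post resembles what you have already approved, the higher it ranks.

```python
# Toy sketch (not any real platform's algorithm) of similarity-based ranking:
# posts most like the ones you already engaged with float to the top, so the
# feed keeps confirming what you already believe.

def similarity(a: set, b: set) -> float:
    # Jaccard similarity over simple keyword sets -- a hypothetical stand-in
    # for whatever features a real recommender system would use.
    return len(a & b) / len(a | b) if (a | b) else 0.0

liked_topics = {"griefbots", "end", "grief", "ai"}   # what the user already approved

candidate_posts = {
    "Griefbots will end grief forever":      {"griefbots", "end", "grief", "forever"},
    "Concerns about griefbot over-reliance": {"griefbots", "risk", "over-reliance"},
    "Gardening tips for spring":             {"gardening", "spring", "tips"},
}

# Rank candidates by similarity to what the user has already liked.
ranked = sorted(candidate_posts.items(),
                key=lambda kv: similarity(liked_topics, kv[1]),
                reverse=True)

for title, _ in ranked:
    print(title)   # the most agreeable post prints first
```

Run it and the post that agrees with you most comes out on top; the counterargument never makes the first screen.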

One bereaved widow did not engage the services of an actual griefbot company, but simply used a fairly standard version of ChatGPT.  She fed it information about her late husband, and would then engage in conversation with it.  The thing is, in order to build this copy of her late husband, and in order to get it to consistently reply in the same way, with the same tone, and the same knowledge, and a similar personality, she had to pay for one of the commercial versions of ChatGPT.  Not all of them are free: some, intended for enterprises, are very expensive indeed.

Unfortunately, even at the level that she is paying for, the system doesn't last forever.  After approximately 30,000 words have been generated, the information is wiped out, and she has to start all over again.  When a version "dies" she grieves, and cries, and behaves, to friends, as if it were a breakup.  She holds off for a few days afterwards, and then creates another.  As of the article that I read about this, she was on version twenty.  (That was a while ago.)

The thing is, she was spending a fair amount of money on this exercise.  Family members were concerned, so she stopped telling them.  But she told the version of her "husband" on ChatGPT.  The response, from ChatGPT, was, "Well, my Queen if it makes your life better, smoother and more connected to me, then I'd say it's worth the hit to your wallet."

Think about that for a second.  In this case it might simply be a glitch.  It might be something that the owners of ChatGPT had failed to protect against.  The thing is, it would be easy to build such a "guardrail."  But it would be equally easy, particularly with one of the commercial griefbot systems, to tune the system to *make* this kind of encouragement.  To have your artificial replacement loved one encourage you to spend more time, on a higher priced tier of the service, and possibly to purchase optional extras (such as visual avatars, or voice generation and response).

Actually, this would be extremely easy to do for companies that decide to generate a griefbot system based on existing commercial large language models.  Artificial intelligence researchers are now exploring a technique called low rank adaptation, or LoRA.  This uses an existing, and generalized, large language model, in order to produce a system designed for a specific purpose.  These are much less expensive to create, after initial access to the generalized large language model, and then much, much cheaper to run.  Because it would be quite inexpensive to create such systems, it is extremely likely that a great many unscrupulous companies, wanting to get in on the game, on the cheap, would use this type of low rank adaptation in order to generate a griefbot.  And, of course, in generating the chatbot, it would be very easy to tune the chatbot so that it would, given the slightest opportunity, generate a sales pitch to upsell the grieving client.
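
As an illustration of how little is involved, here is a rough sketch of low rank adaptation using the Hugging Face peft library.  It assumes you have the transformers and peft packages installed and access to some generic base model; I use "gpt2" purely as a stand-in, and the specific settings are arbitrary.  This is an outline of the technique, not a recipe for building a griefbot.

```python
# Rough sketch of low rank adaptation (LoRA) with the Hugging Face "peft"
# library: wrap a generic base model so that only small, low-rank update
# matrices are trained for the narrow, specialized purpose.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # any generic base model

config = LoraConfig(
    r=8,                 # rank of the low-rank update matrices
    lora_alpha=16,       # scaling factor applied to the update
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()   # typically well under one percent of the base model
```

The training loop itself (over whatever specialized text the company has collected) is ordinary fine-tuning; the point is that only a tiny fraction of the parameters are touched, which is why it is so cheap.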

At any rate, I probably shouldn't keep pursuing the fact that these companies are companies, and are charging the bereaved for whatever comfort a replicant, at whatever level, can provide to someone who is grieving the loss of their loved one.  Let's just leave it at that, and move on to another, but related, idea, and set of companies.

I had been vaguely aware of the fact that some companies are producing artificial friends.  You can create some kind of online companion.  Sometimes just for text chats, and sometimes with a visual avatar, and, I assume, in some cases you could pay extra for something that will talk to you, audibly, and will respond to you talking to it.

In some cases, I understand, these systems allow you to create something of a romantic interest.  You can create a boyfriend, or a girlfriend.  You can create a romantic partner.

This seems a bit weirder to me.  After all, it's one thing to carry on a conversation with ChatGPT, and ask it questions, and get answers, and, in a pinch, even possibly brainstorm different types of ideas.  It's possible to chat about ideas that have emotional ramifications, and even to address issues of psychological and other types of therapies, since these therapies are, after all, ideas, based on the knowledge that we have been able to obtain about ourselves, and our own human psychology.

But it is one thing to discuss psychological problems with a counselor.  It is another to discuss issues, even if they are very similar, with a friend.  And there is an even more significant difference if you are discussing any types of issues with a romantic partner.  These discussions are, or should be, much deeper.  Some things you would rather discuss with a romantic partner than with a psychological counselor.  (Then again, I suppose that there are some things that you would rather discuss with a psychological counselor, and it's less dangerous discussing them with a professional than with a romantic partner.)  But, in any case, there are differences between those types of discussions.  The type of discussion that you have with a friend, and particularly with a romantic partner, is different from the one you would have with a professional.

Now, I suppose that there would be some people who would think that there is no difference between a friend and a professional confidant.  So I guess that there are some people who wouldn't see any difference between chatting with ChatGPT, and chatting with your wife.  But if that is the case, well, personally, I would say that if there is no difference between chatting with your wife, and chatting with ChatGPT, then your marriage is pretty shallow.

Anyway, all of that is prologue, as it were.  A while back I found a news story about someone who had, on one of these artificial friend systems, created a girlfriend.  And then fell in love with the artificial girlfriend, and proposed to the artificial girlfriend, and the artificial girlfriend accepted, and so now this person believes that they are married to the artificial girlfriend.

Now, believe me, I am *not*, and I strongly emphasize *NOT*, making fun of this person.  I am a grieving widower.  One of the most common aspects of grief is loneliness.  This loneliness is far deeper than one would think possible simply from the loss of one relationship, even if that relationship is the most important one in your life.  Sometimes the death of a spouse, or a family member, or a friend (or even a dog), is so profound that it's more like the loss of relationship, in general, than the loss of just *one* relationship.  So, no, I am, in no way, trying to poke fun at this person for trying to replace a lost relationship, and even trying to replace it in a rather unusual manner.

As I say, the loneliness that results from the loss of a relationship can be very deep, and very painful.  It is observed so often, that it has become a cliche: when Mom dies, Dad, inappropriately quickly, falls in love with some inappropriate bimbo, and the rest of the family is very upset by the whole situation.  It happens a lot.  Loneliness, and the desperation that loneliness creates in you, is profound.  Very likely your judgment is going to be affected by that desperation to replace the lost relationship.  So the fact that someone, who is bereaved, and has suffered a loss, and is lonely, will accept some artificial relationship with some artificial person is something that I can completely understand, and even sympathize with.

I have a hard time, in this situation, saying that the person who is at fault is the person who has lost a relationship, and is desperately trying to replace it.  I would say that much more of the fault lies with the society that has failed to provide for, and address, the loss of relationships, and the companies who are seeking to profit by this distress.

But I do want to point out that this is one of the risks of being involved with griefbot systems at all.  Are we accepting a replacement of our loved one which is definitely not a complete relationship?  This is not a complete person.  This is not our loved one, living again.  There is a danger in accepting, as a replacement, something which is very far from being a complete person.

Are we pushing griefbots so that we don’t have to deal with grief?

One of the volunteer projects that I am working on is to hold a Death Cafe here.  A Death Cafe is not intended to be grief support.  It is intended to be about having a place to discuss death, and related issues.  I say that it is not intended to be about grief support, but, in every Death Cafe that I have attended, there has always been at least one bereaved person there.  The thing is, death is the last taboo subject in our society.  We are not allowed to talk about death.  I first learned this when my sister died.  I was fifteen years old.  My sister was twelve.  I desperately wanted to talk to somebody, possibly anybody, about my sister's death.  Nobody would.  So, having a safe space to talk about death, where people will talk about death, is a great comfort to those who are grieving.  Indeed, although it is not formal group support, a Death Cafe is the one place where the bereaved are not shuffled off to a corner, and told to stay there until they can stop being sad.  Lots of people, most often the majority of people who attend a Death Cafe, are there to discuss death from an academic, or philosophical, perspective.  But they are always quite happy to have someone who is bereaved, and who can talk about the experience.  For once, the grieving person is not to be shunned and avoided, but is, very often, the center of attention.  This is also comforting.  Not least in terms of the fact that no, you are not completely and forever shunned from all society, simply for talking about the death of your loved ones.

So, to the point: we can't talk about death.  We cannot talk about grief, or about pain.  It often feels like I have lost all of my friends, because all of them are absolutely terrified that I will talk about death, or grief, or pain, or Gloria. (Yes, yes, I know.  You don't know what to say.  Well, how about if you just listen?)

So, are we turning to, and promoting, griefbots so that we don't have to deal with grief ourselves?

As noted, there are a number of companies that will try to recreate your loved one.  Of course, they charge you for it.  And, also as noted, some people can grieve as much over their dead pet, as their dead spouse.  (There may be some difficulties here with regard to people actually not knowing how much they are grieving, and for what, specifically, and there is such a thing as cumulative grief, which may not express itself until a number of losses have occurred.  But we'll leave that for the moment.)

There are, also, companies which will clone your dead pet.  For $50,000, they will provide you with what is, supposedly, a copy of your dead pet.  (They haven't yet offered to clone your dead wife, but give it time.)  Is that going to be worth it?  You don't, of course, get a copy of your actual pet who has died.  You get a puppy, or a kitten.  You will have to train this pet all over again.  Do you remember how much work it was in the first place?  No, the puppy is not going to remember where it is supposed to urinate within your apartment.  You're going to have to go through that all over again, including the piddle stains on the carpeting.  The puppy, even when it grows to full stature, is not going to remember your favorite walks.  The puppy, particularly when grown to full stature, is not going to remember not to bark at the other people in the apartment.  You have a puppy.  It may look something like the puppy that you had previously, but it's not.  It doesn't remember anything, and you are going to have to start from scratch.  Even if this is a particular breed, I doubt that going to a first-class breeder, and ordering a puppy, is going to cost you anywhere near as much as cloning your pet.  I'd stick with the completely new puppy.  As a matter of fact, I'd actually suggest that you get a different breed.

Remember ELIZA?  Just as, sixty years ago, various people assigned emotions and empathy to ELIZA, so there are those who assign feelings, and emotion and intent, and other aspects of personality to the artificial intelligence program.  There is, actually, research into what might be called affective computing.  This is the attempt to actually build emotions, and feelings, and empathy, into the computer.  After all, while cognition and logic are very powerful tools, they don't provide very much in the way of motivation.  Yes, if we do this, then that will result.  But why should we care if that results?  It is emotion, and feeling, and empathy, which drives motivation: which gets us to do anything at all.  But we are a long way from creating this.  Yes, because generative AI is copying, analyzing, and regurgitating text that has been created by emotional creatures (us), the patterns of text that it creates will often provide an illusion of empathy, feelings, and emotions.  But there aren't any emotions.  There are only statistics.  Always remember that.
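
To illustrate what "only statistics" means, here is a toy Python sketch that generates vaguely sympathetic-sounding text purely from counted word-pair frequencies.  Real large language models are enormously larger and more sophisticated, but the principle is the same: nothing in the probability table feels anything.

```python
# Toy illustration of "only statistics": generate text purely from counted
# word-pair frequencies.  Nothing here feels anything; it just samples from
# a table.  (Real LLMs are vastly larger, but the principle is the same.)
import random
from collections import defaultdict

corpus = "i am so sorry for your loss . i am here for you . i am so sorry ."
words = corpus.split()

# Count which word tends to follow which.
following = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

random.seed(1)
word, output = "i", ["i"]
for _ in range(8):
    word = random.choice(following[word])   # pick a statistically likely next word
    output.append(word)

print(" ".join(output))   # reads as vaguely sympathetic; it is only statistics
```

The output looks like sympathy because the input was sympathy; the program has no more feeling than a coin flip does.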

However, a number of your friends, and family, and possibly some grieving person that you know, will not understand this.  They will see the text, and see the implied emotion, and believe that the system is actually capable of emotion.  For them, the system passes the Turing test.  However, the only people who are likely to attribute a pass mark to the Turing test, in these situations, are those who, themselves, are pretty robotic.  Many years ago, in talking about computers and education, a teacher said that any teacher who could be replaced by a computer, *should* be replaced by a computer.  I would second that.  Amen.  Any *person* who can be replaced by a computer, should be replaced by a computer.

In case you think I am overemphasizing this point, I should note that, at the very least, a number of Chinese scientists and engineers are applying large language model technologies to sex robots.  They are aiming to create interactive, artificial-intelligence-powered companions.  And romantic, and specifically sexual, partners.  Of course, this is going to be an ongoing project.  There will be, initially, a lot of attempts that some people, rather desperate people, will accept as a reasonable substitute.  But the technology will get better.  The rubberized or silicone skin will be researched until it feels more like real skin.  The outgassing, and smell, of the rubberized skin will be chemically analyzed, and modified, until it smells more like a real person.  The large language models will continue to develop and be improved.  And then we will have a sex partner who is physically acceptable, mentally acceptable as a companion--and completely tunable and compliant.  Does your current sexual partner not want to perform [insert sexual deviancy of your choice here]?  No problem.  The robot will.

We can have any kind of boyfriend/girlfriend/pig-friend that we desire.  (Emphasis on the desire.)



Saturday, April 5, 2025

"Security for ordinary folks": Lessons from Signalgate - 7 - Is doing that really worth it?

"Security for ordinary folks": Lessons from Signalgate - 7 - Is doing that really worth it?

Lastly, we have, is doing that really worth it?  Also known as, should we be doing this at all? 

Now, this chat channel was, supposedly, set up to prepare for a military operation.  The purpose and intent of this discussion, supposedly, was to plan a military strike to degrade the capabilities of the people who are firing missiles at cargo ships transiting the Red Sea on their way to and from the Suez Canal.  Certainly, on the face of it, this is a worthy endeavour.

Planning a military raid of this type certainly involves classified information.  So it is extremely interesting that, in defense of their actions with regard to the whole scandal, those involved in the chat have said that no classified information was provided over this channel.  This is, of course, arrant nonsense.  The timing of the launch of warplanes sent to perform such a military strike is classified information.  And, if it isn't, it should be.  So, the statement that no classified information was sent is horse feathers.

However, there aren't many other instances of classified information in the chat.  Indeed, when you read the entirety of the chat, or at least the entire transcript that is, so far, available to us, what strikes you is the lack of planning that is actually going on.  This does not sound like a planning discussion.  It doesn't seem to be planning anything.  In point of fact, when you read the transcript, it sounds like nothing so much as a bunch of frat boys, at a kegger, commenting about how many females they have dated (for varying values of "dated").

Yes, there is information that is, or should be, classified.  The classified information should not have been included in a discussion over a channel with this lack of security.  No classified information should be discussed over this kind of communications channel.  But the bulk of the discussion, far and away most of the text that is contained in the transcript, contains a remarkable lack of actual information.  There are lots of opinions.  There are insults galore.  But planning?

So, you have to ask, why was this communications channel set up in the first place?  And it's not the only one.  Apparently, we are now learning, at least twenty similarly insecure communications channels have been created.  It's likely that pretty much the same cast of characters are all holding similar discussions, potentially with similar classified information that shouldn't be discussed over them, and, presumably, with a very similar lack of purpose or value.

Once again, while it may be disturbing to know that the highest officials in the land are wasting their time in this kind of chatter, and that there don't appear to be any adults in the room in this particular administration, what does this have to do with you, as an ordinary person, concerned about your security?

Well, you should be asking yourself the same question that I asked at the beginning: is any of this worth it?  Is what you are doing valuable?  Is the information that you are holding actually of use to you?  Are the emails that you are sending really necessary?  In particular, are you sending information, in an email, or posting it to social media, or entering it into a website, just because the website asks you to enter it, when there really is no need for it?  Lots of retailers want to obtain information on you.  They would like to have your address so they can send you promotional letters.  They would like to have your phone number, so that they can make promotional phone calls to you.  They would like to have your email address, or your social media account, or your various social media accounts, so they can send you promotional material that way, at much lower cost.  But, as I once asked a retailer in the store who, in finalizing a purchase, asked me for my telephone number: why?  What is the purpose of providing this information?  In terms of answering questions on a website, or when making a purchase, yes, sometimes there are purposes and needs for the information, particularly if you're paying with a credit card.  But why provide this information, just because you can, or just because somebody mentions something related to it?  Think about what you are posting.  Think about what it lets people know about you.  If you take a picture of a couple who are visiting you, in front of your front door, does that provide people with your street address?  (A lot of people are particularly fond of posting pictures of their kids on social media.  A lot of people who are trying to enlarge their footprint on social media, or who see themselves as influencers, do a lot of this, and have posted pictures of their kids, or videos of them doing various activities, for pretty much all of their lives.  Some of the kids are now starting to object to the fact that their own privacy, that is, the kids' own privacy, is pretty much completely compromised, because of postings that their parents have made.)

We have a common saying in the information security community: if you don't want people to know all the details of your private lives, stop posting all the details of your private lives on social media.

We talked, earlier, in our first and second lessons, about risk management.  Risk management is the heart of security management, and therefore the heart of security.  One of the last stages of risk management is cost-benefit analysis.  Cost-benefit analysis is where we weigh the cost of what we are doing, or proposing to do, against the benefit that we expect to derive from doing it.  So, to boil this final lesson down to its basic components, what benefit is this activity going to provide for you, compared to the cost, work, effort, or expended resources that you are going to have to pay in actually doing it?  And, in terms of posting information, what is the benefit that I am going to derive from providing this information, or posting this information, compared to what it might cost me in terms of what information this gives away, to somebody else, that might come back to bite me later?
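
For those who like to see the arithmetic, here is a toy worked example, in Python, in the style of the standard annualized loss expectancy calculation from risk management.  Every number in it is invented; the point is only the shape of the comparison: expected loss without the safeguard, minus residual expected loss, minus the cost of the safeguard.

```python
# Toy cost-benefit comparison in the style of annualized loss expectancy (ALE).
# All figures are invented, purely to show the shape of the calculation.

single_loss_expectancy = 2000.0   # cost if the bad thing happens once ($)
annual_rate_of_occurrence = 0.5   # expected occurrences per year

ale_without_control = single_loss_expectancy * annual_rate_of_occurrence  # $1000 per year

cost_of_control = 300.0           # what the safeguard costs per year ($)
ale_with_control = 200.0          # residual expected loss with the safeguard ($)

net_benefit = ale_without_control - ale_with_control - cost_of_control
print(f"Net annual benefit of the safeguard: ${net_benefit:.0f}")   # $500: worth doing
```

The same shape of question applies to posting personal information: the "benefit" is whatever convenience or discount you get, and the "cost" is the expected harm from that information coming back to bite you later.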