Saturday, April 5, 2025

"Security for ordinary folks": Lessons from Signalgate - 7 - Is doing that really worth it?

"Security for ordinary folks": Lessons from Signalgate - 7 - Is doing that really worth it?

Lastly, we have: is doing that really worth it?  Also known as: should we be doing this at all?

Now, this chat channel was, supposedly, set up to prepare for a military operation.  The purpose and intent of this discussion, supposedly, was to plan a military strike to degrade the capabilities of the people who are firing missiles at cargo ships in the Red Sea, on the approach to the Suez Canal.  Certainly, on the face of it, this is a worthy endeavour.

Planning a military raid of this type certainly involves classified information.  So it is extremely interesting that, in defense of their actions with regard to the whole scandal, those involved in the chat have said that no classified information was provided over this channel.  This is, of course, arrant nonsense.  The timing of the launch of warplanes sent to perform such a military strike is classified information.  And, if it isn't, it should be.  So, the statement that no classified information was sent is horse feathers.

However, there aren't many other instances of classified information in the chat.  Indeed, when you read the entirety of the chat, or at least the entire transcript that is, so far, available to us, what strikes you is the lack of planning that is actually going on.  This does not sound like a planning discussion.  It doesn't seem to be planning anything.  In point of fact, when you read the transcript, it sounds like nothing so much as a bunch of frat boys, at a kegger, commenting about how many females they have dated (for varying values of "dated").

Yes, there is information that is, or should be, classified.  The classified information should not have been included in a discussion over a channel with this lack of security.  No classified information should be discussed over this kind of communications channel.  But the bulk of the discussion, far and away most of the text that is contained in the transcript, contains a remarkable lack of actual information.  There are lots of opinions.  There are insults galore.  But planning?

So, you have to ask, why was this communications channel set up in the first place?  And it's not the only one.  Apparently, we are now learning, at least twenty similarly insecure communications channels have been created.  It's likely that pretty much the same cast of characters are all holding similar discussions, potentially with similar classified information that shouldn't be discussed over them, and, presumably, with a very similar lack of purpose or value.

Once again, while it may be disturbing to know that the highest officials in the land are wasting their time in this kind of chatter, and that there don't appear to be any adults in the room in this particular administration, what does this have to do with you, as an ordinary person, concerned about your security?

Well, you should be asking yourself the same question that I asked at the beginning: is any of this worth it?  Is what you are doing valuable?  Is the information that you are holding actually of use to you?  Are the emails that you are sending really necessary?  In particular, are you sending information in an email, or posting it to social media, or entering it into a website, just because the website asks you to enter it, when there really is no need for it?

Lots of retailers want to obtain information on you.  They would like to have your address, so they can send you promotional letters.  They would like to have your phone number, so that they can make promotional phone calls to you.  They would like to have your email address, or your social media account, or your various social media accounts, so they can send you promotional material that way, at much lower cost.  But, as I once asked a retailer who, in finalizing a purchase in the store, requested my telephone number: why?  What is the purpose of providing this information?  In terms of answering questions on a website, or when making a purchase, yes, sometimes there are purposes and needs for the information, particularly if you're paying with a credit card.  But why provide this information just because you can, or just because somebody mentions something related to it?

Think about what you are posting.  Think about what it lets people know about you.  If you take a picture of a couple who are visiting you, in front of your front door, does that provide people with your street address?  (A lot of people are particularly fond of posting pictures of their kids on social media.  A lot of people who are trying to enlarge their footprint on social media, or who see themselves as influencers, do a lot of this, and have posted pictures of their kids, or videos of them doing various activities, for pretty much all of the kids' lives.  Some of those kids are now starting to object to the fact that their own privacy, that is, the kids' own privacy, is pretty much completely compromised, because of postings that their parents have made.)

We have a common saying in the information security community: if you don't want people to know all the details of your private lives, stop posting all the details of your private lives on social media.

We talked, earlier, in our first and second lessons, about risk management.  Risk management is the heart of security management, and therefore the heart of security.  One of the last stages of risk management is cost-benefit analysis: weighing the cost of what we are doing, or proposing to do, against the benefit that we expect to derive from doing it.  So, to boil this final lesson down to its basic components: what benefit is this activity going to provide to you, compared to the cost, in work, effort, or expended resources, that you are going to have to pay in actually doing it?  And, in terms of posting information: what benefit am I going to derive from providing, or posting, this information, compared to what it might cost me, in terms of what this gives away to somebody else, that might come back to bite me later?
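If you want to see the arithmetic, the classic textbook version of this cost-benefit analysis uses the annualized loss expectancy, or ALE.  Here is a minimal sketch, in Python, with entirely made-up figures:

```python
# A minimal, illustrative cost-benefit calculation, using the classic
# risk-management formulas (SLE, ARO, ALE).  All figures are made up.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized Loss Expectancy = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Risk: a breach costing $10,000 per incident, expected once every 5 years.
ale_before = ale(10_000, 1 / 5)            # $2,000 per year

# Proposed safeguard: cuts the incident rate to once every 25 years,
# at a cost of $500 per year.
ale_after = ale(10_000, 1 / 25)            # $400 per year
safeguard_cost = 500

value = ale_before - ale_after - safeguard_cost
print(f"Net annual value of the safeguard: ${value:,.0f}")
```

If the net value comes out negative, the cure costs more than the disease, and, in cost-benefit terms, the activity isn't worth it.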


Friday, April 4, 2025

"Security for ordinary folks": Lessons from Signalgate - 6 - Accountability

And, not quite finally, the last "A" in IAAA: accountability.

Accountability doesn't just have to do with accounting.  Although there definitely is some relationship between them in terms of auditing and investigation.  Accountability is not just about who is going to be held to account, or who is going to be fired because something went wrong.  As we all know, very often the person who gets fired is not necessarily the person who is actually responsible for what went wrong.  (However, that is also a political statement and not necessarily something that ordinary folk need to know about in terms of their security.)

Accountability, in terms of information security, is about who did what, and sometimes even *what* did what.  Accountability is making sure that our systems, and we, individually, keep track of, and keep records of, who did what, so that if something goes wrong we can figure out what actually did happen, who made certain parts of it happen, and what we need to do to prevent it from happening again.  Yes, sometimes the accountability identifies someone who did something contrary to policy, and that that is what caused the problem.  However, sometimes it is important to know what actually did happen so that we can identify the fact that there *wasn't* any policy, or any tool, that would have prevented whatever bad thing happened from happening.  In that case, the point is not to fire the person who did something wrong, but rather to look at our situation again, and our entire security system as a whole, and make sure that we do have the proper policies and tools to address things that can go wrong, and try and *prevent* things from going wrong.

Accountability, in technical terms, is primarily about systems: identifying, from the identification that is stored, who it was that performed an action, or what system it was that performed an action.  This provides the information that we need in order to figure out what actually did happen.  Who did what, and whether what people did, in the normal course of their work, caused the problem that we see.

Our systems need to track this.  This is why identification, authentication, authorisation, and accountability are all done together, as IAAA, in access control.  Everything is based and centred on the identification.  Who was it that performed some action?  Who did what?  Or, as I say, sometimes *what* did what.  The identification is key to the authorisation and the accountability.  As we noted before, in order to really have a proper system, and really have a good grasp of what happened and why, we have to verify the identification with authentication.  But authentication is done on the basis of the identification.  The authorisation granted to different entities, the rights and permissions that they have, is based on the identification.  And, of course, it's all topped off with accountability.  Are we, in fact, able to track everything that an entity did, and to connect all the actions that were taken with whoever took those actions?
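To make the four pieces concrete, here is a minimal sketch of IAAA in miniature, in Python.  The user, password, and permission are hypothetical, and a real system would use a proper, salted, deliberately slow password-hashing scheme rather than the bare hash shown here:

```python
# A minimal sketch of IAAA in code: identification (username),
# authentication (password check), authorisation (permission lookup),
# and accountability (an audit log of who did what, and when).
import hashlib, hmac, logging

logging.basicConfig(filename="audit.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

# Hypothetical user and permission tables.  (Bare SHA-256 is used only
# to keep the sketch short; real systems use salted, slow hashes.)
USERS = {"rob": hashlib.sha256(b"correct horse battery staple").hexdigest()}
PERMISSIONS = {"rob": {"read_report"}}

def act(username: str, password: str, action: str) -> bool:
    # Authentication: verify the claimed identity.
    digest = hashlib.sha256(password.encode()).hexdigest()
    if not hmac.compare_digest(digest, USERS.get(username, "")):
        logging.info("DENIED (bad credentials) user=%s action=%s", username, action)
        return False
    # Authorisation: is this identity allowed to do this?
    if action not in PERMISSIONS.get(username, set()):
        logging.info("DENIED (not authorised) user=%s action=%s", username, action)
        return False
    # Accountability: record who did what.
    logging.info("ALLOWED user=%s action=%s", username, action)
    return True

act("rob", "correct horse battery staple", "read_report")   # allowed, and logged
act("rob", "guessed-password", "read_report")                # denied, and logged
```

Note that the denied attempts get logged too: the audit trail is what lets you reconstruct, later, what actually happened.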

In the case of this scandal, we are pretty well all certain that the people who actually created the problem will never be held accountable.  Nobody is going to discipline them and nobody is going to restrict their actions.  So, we can pretty much guarantee that similar problems are going to happen in the future.  There isn't any punishment.  There isn't any negative reinforcement for this careless behaviour.  Therefore, the behaviour will continue.

However, if you are not the lackey of a dictator who wants to take over large parts of the world (and doesn't want to have anyone around to say he can't do it), what does all of this mean to you, as an ordinary person, in regard to your own security?

Well, the first thing that it means is that there are reasons for identification, authentication, authorisation, and accountability.  It does mean that if you want to actually know what happens with regard to your systems, and why they don't work the way you thought that they were supposed to work, you have to know where to find that accountability information.  It's there for a purpose and it's there for a reason.  It's not just there to prevent you from getting your work done.  So don't keep trying to find ways to turn it off or avoid it.  Don't try to fool it.  It's there to help you.  Don't hurt yourself by turning it off.

I was dictating this to myself as I was walking home from the hospital.  I stopped in to get breakfast (even though it was afternoon: I had been at the hospital since an emergency call-out for vigil this morning).  The manager apologized for having all kinds of management papers spread all over the booth.  He noted that they were about to have an audit, and so he was making sure that everything was up to date and being handled properly.  That is part of management, and it's good to know that he was keeping an eye on things.  In reality, of course, while special attention to these things might be paid when you are facing an audit, really you should be doing it all the time.  (After all, that's why they have audits in the first place: to impress upon you the need to pay attention to all the details.)  This is why we want accountability.  We need to make sure that we are doing things properly.  In the corporate world, we have to have auditors because we need to have somebody *else* look at what we're doing.  However, in the small business world, we can't always afford to have somebody come and audit us.  This is why it is so important to do our own auditing, and to make sure of our own accountability.  Therefore, in an informal situation, where somebody isn't imposing an audit on us, it becomes more important, rather than less, that we make sure that we have accountability in whatever it is that we are doing.

Next: "Security for ordinary folks": Lessons from Signalgate - 7 - Is doing that really worth it?

Thursday, April 3, 2025

"Security for ordinary folks": Lessons from Signalgate - 5 - Authorization

A couple of lessons ago I mentioned the importance of identification and authentication, as well as the IAAA factors in access control.  We now turn to the second "A" in that list: authorization.

Authorization has to do with the granting of rights, or privileges, or permissions.  Authorization gives you the ability to perform certain functions that have been granted.  Authorization has, actually, a great deal to do with this entire scandal.

First of all, none of the people involved in this, in any way, had authorization to be doing what they were doing.  The sensitivity of the information being discussed in the chat meant that the Signal app, and the cell phones, should not have been used.  The importance of the information meant that, if this information was to be discussed, it should have been discussed on channels that were much more secure than cell phones and the Signal app.

Additionally, there was the fact that somebody had enabled a setting that would have deleted all of the discussions after a week.  This function should not have been enabled, since this was official government business, and the rules governing official government business mean that the information should have been retained for archiving, even if the information might not have been made available, possibly for decades to come.  It still should have been archived, and submitted to the archives.

Then there was the adding of the reporter.  Adding someone to such a discussion, discussing topics of such sensitivity and importance, should have undergone a formal process.  Nobody, regardless of who they were, should have been added without that process being followed, with assurance that appropriate people were added to the discussion, and that nobody who was not authorized, and not an appropriate party to the discussion, would be added.  There doesn't seem to have been any process in place.  Even the creation of the chat channel itself cannot have gone through appropriate processes, since the technology used for the discussion was not appropriate to the sensitivity of the information, and would therefore have been flagged had the proper process been followed.
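Even a very simple formal process beats no process at all.  As a minimal sketch, in Python, of what a gate on adding participants might look like (the names and clearance levels are entirely hypothetical):

```python
# A minimal "add participant" gate for a sensitive channel: the person
# must be on a cleared roster, and their clearance must meet the
# channel's sensitivity.  Names and levels here are hypothetical.
CLEARANCE = {"secdef": 3, "nsa_director": 3, "reporter": 0}
CHANNEL_SENSITIVITY = 3   # e.g. 0=public, 1=internal, 2=secret, 3=top secret

def may_add(username: str) -> bool:
    level = CLEARANCE.get(username)   # unknown people have no clearance at all
    return level is not None and level >= CHANNEL_SENSITIVITY

for person in ("secdef", "reporter", "jg"):
    print(person, "->", "add" if may_add(person) else "refuse")
# "reporter", and the unknown "jg", are refused; nobody gets in by accident.
```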

(The auditing of the channel, and the recording of the discussions, on the part of the reporter were, themselves, unauthorized.  However, in this case, the reporter's actions were probably the least unauthorized of all of the activities of everyone involved with the entire scandal.  Initially, the reporter felt that this was some kind of prank being played on him, or an attempt at disinformation.  Given the importance of disinformation in modern politics, it is not surprising that the reporter, while not engaging with any of the discussions, still recorded them, in order to try and figure out what was going on, and, if possible, who was doing it.  The reporter was not, of course, authorized to participate in, or even listen to, the chat.  However, the indisputable verification of the reality of this channel, and of the high probability that these were real discussions, by real members of the administration, didn't happen until the end.  Having determined that this was a real communications channel, and that this was real, and very sensitive, military information that was being bandied about, the reporter thereupon left the channel.  And wrote the story.)

Nobody authorized the creation of the chat channel.  (And, apparently, it's not the only one.)  Nobody authorized the use of these particular technologies for the discussion of information of this sensitivity.  Nobody authorized the individuals involved in the chat channel to make exceptions to the policies and regulations that restricted that discussion of information on the unauthorized technologies and channels.  There were appropriate channels which should have been used for the discussion of this level of information, and those channels were not used.  The people who were involved in the discussion were all authorized to use those appropriate technologies and channels.  If these discussions needed to be held (and we will discuss that in a subsequent episode of this series) then there were channels available.  The use of cell phones, and Signal, was definitely not authorized, and not appropriate.

Next: "Security for ordinary folks": Lessons from Signalgate - 6 - Accountability

Wednesday, April 2, 2025

Griefbots - 1 - intro and AI

Griefbots, thanabots, and "Restoration" systems


At about the same time that Gloria died, Replika started making the news.  Replika was, at that time, text chat based only.  You could train a Replika account with email from your deceased loved one.  I had plenty of email from Gloria, and still do.

I decided against trying the system.  I wasn't sure whether I was more afraid of it being disappointing, or of getting hooked on it.

So I still don't know which is the greater danger.  I don't know whether those people who use griefbots, or thanabots, or "restoration" systems, are simply fooling themselves, thinking that this chat does, in some way, reproduce their conversations with their loved one, before the loved one's death.  Possibly they receive some kind of comfort, in having conversations with some facsimile of their loved ones.  Then again, possibly they experience cumulative grief, when they finally realize that their loved one is, in fact, dead, and that the facsimile isn't, in fact, the loved one.

Possibly there is some kind of cumulative grief involved in the fact that the loved one dies, and then is "restored", and then possibly "dies" again, at some later date, when the company that runs the system goes bankrupt, or the system simply gets too old, or is updated and their account doesn't survive the transfer, or they simply run out of money to pay for the account.

Or, maybe, the system runs along, and they don't really discriminate between whatever the system produces, and whatever kind of conversation their loved ones did produce, and they just carry on the illusion until they themselves die.

And maybe they never really get over their grief, because they have this artificial anodyne, and the artificial chatbot, or griefbot, or thanabot, is sufficient for them, and they never do form a new relationship with any actual carbon-based lifeform that might be more suitable for them.

I don't know.  As I say, I never dared to try the experiment with Gloria and Replika, and I don't know how I, personally, would react or have reacted.  And the information that I have been able to find is basically anecdotal, and the plural of anecdote is not data.

So I don't know how real the benefits are.  I don't know how real the risks and dangers are.  But I am definitely aware of the potential risks, and, I strongly suspect that too few people are aware of the risks, or have given much thought to them.

(The CBC has made available a documentary entitled "Eternal You."  It is not comprehensive, and doesn't address all the risks associated with griefbots and related systems, but those that it does cover are covered well.  It is available at https://gem.cbc.ca/eternal-you or https://www.youtube.com/watch?v=4Koqc2aPUK4 )

Initially, as noted, the idea to explore griefbots came from Gloria's death, and the increasing presence of Replika in this space.  Then came the explosion of interest in artificial intelligence, and the proposed applications, driven by the large language models.  I created a presentation on griefbots as a kind of specialized extension of a broader presentation on AI.  However, as I explored the field, and in association with volunteer work in grief support, I was astounded by the number of companies that have started to enter this field, with a variety of products.  Given the lack of understanding of the limits of AI in general, and the increasing work on the psychological dangers of a variety of areas of information technology (including social media), I felt more urgency in getting this article, and series, out to a broader audience.

Today I was asked for which audience I am writing this article.  I think it's a pretty broad audience.  My colleagues in information technology will have a greater understanding of artificial intelligence, and of the oversimplifications that I am making in order to ensure that this article is not too lengthy for the general public.  For those involved in grief counseling and support, my lack of training and specialization in this field will no doubt show.  However, I hope that you can understand the concerns that I am trying to raise, and will, if asked by your clients, be able to provide some detail, and possibly a balanced opinion, in regard to whether or not griefbots are a good idea for the bereaved, in general or in specific cases, and at least raise the issues of risk or danger.  For those in the general public, some of you may be bereaved, and might be considering griefbots for yourselves, or may have friends among the bereaved who might be considering signing up for these systems.  Again, hopefully this piece will provide some realistic assessment of what griefbots are, and are not, and what benefits, balanced against the risks and dangers, there may be.

Given that this is a bit about artificial intelligence, or AI, I asked ChatGPT to opine on the psychological dangers of artificial intelligence, and the use of artificial intelligence, particularly in counseling and psychological situations.  The number one point that ChatGPT listed was "a lack of understanding."  Indeed, this was borne out by a situation where, at an event for the public, I set up a computer to allow people to interact with one of the LLM systems.  Anyone could try it out.  Nobody did.  So probably very few people have, actually, taken advantage of the opportunities to get to know how these systems work.  (And don't.)  Therefore it is probably a good idea to provide at least a terse outline of what artificial intelligence is, and is not.

First of all, artificial intelligence is not a thing.  It is *many* things.  Artificial intelligence is a general term given to a number of approaches to getting computers to perform functions which we have come to expect from people.  Unfortunately, as well as there being a number of different approaches to the task, the task itself is ill-defined.  Alan Turing, who is considered one of the fathers of modern computing, and computing machinery, did once specify what has come to be known as the Turing Test.  The test goes something like this.  Put a subject (whom we might call the tester) in front of a terminal, where the wire to the terminal goes off through a wall.  The tester carries on a conversation, via the terminal, with the system that is to be tested (which we can call, for example, the testee).  If, after carrying on a conversation for some length of time, the tester cannot decide whether behind the wall is another person, or a computer running an artificial intelligence program, then, if it is, in fact, an artificial intelligence program, that program is considered to have passed the Turing test, and is, therefore, intelligent.

The thing is, we don't really know if Alan Turing actually meant this to be a determining test about whether or not someone has, in fact, written a program which is artificially intelligent.  It is equally possible that Alan Turing was making a statement about the difficulty of creating artificial intelligence, when we can't even define what real intelligence is.  The Turing test is, in fact, a measurable test.  But it doesn't really define, to everyone's satisfaction, whether or not we have created a truly intelligent artificial personality.

For example, how intelligent is the tester?  Does the tester have experience with assessing other artificial intelligence programs, as to their level of intelligence?  Does the tester have a broad range of knowledge of the real world?  Has the artificial intelligence program been fed data based upon questions and conversations that the tester has had with artificial intelligence programs in the past?

And this is just about generating a conversation.  What about making a computer see?  What about getting the computer to look at an image, either still or video, and identify specific objects in that image?  What about being able, from an image, to plot a way to navigate through that field, without destroying various objects that might be in it?  What about teaching the computer to hear?  All of these are things that the field of artificial intelligence has been working on, but they have nothing to do with carrying on a conversation over a terminal with some unknown entity.

In the interest of keeping this article reasonably short (I don't want to risk TL;DR), I won't go through the sixty- or seventy-year effort to create artificial intelligence, and the various successes and failures.  No, I'll keep this reasonably short, and just pick on the one that has, over the past three years, been much in the news, and much in demand in business circles, and which everyone tends to talk about.

This is the approach known as the large language model, or LLM, or generative artificial intelligence, or generative AI, or genAI.

As I say, this has created a great stir.  ChatGPT, and Claude, and Perplexity, and Deepseek, and Qwen, and Meta AI, and Gemini, are all examples of generative AI.  They have astounded people with their ability to answer questions typed into them, and give reasonable answers, sounding realistic and lucid, and do, for many people, seem to pass the Turing test.  The reality is a bit different.

Large language models are trained using descendants of a process called neural networks.  Neural nets are based on an idea about the human brain, which we now know to be somewhat flawed, and definitely not comprehensive.  In essence, a neural net is a very complicated kind of statistical analysis.  You feed neural nets a lot of data.  When the neural net notices a correlation between items within the database, it flags that correlation, and, every time it finds an example that meets the correlation, it strengthens the connection.

Unfortunately, this leads to an example of what is called, in psychology, superstitious learning.  That is, that the system notices a correlation which isn't, in fact, a correlation.  It builds on a kind of confirmation bias, and the system will keep on strengthening a correlation every time it finds, even if randomly, some data that seems to fit the correlation.  The negative, a lack of evidence, or even relationships in the data that contradict the correlation, are ignored.  So, neural nets can make mistakes.  And this is only one example of the types of mistakes that they (and we) make.
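As a deliberately simplified illustration of that failure mode, consider a "learner" that strengthens a connection on every chance co-occurrence, and never weakens it.  This toy sketch, in Python, is not a real neural net, but the bias is the same in spirit:

```python
# A toy illustration of "superstitious learning": a correlation weight
# that is strengthened every time two items happen to co-occur, and is
# never weakened by counter-evidence.  With purely random data it still
# "learns" a connection.
import random
random.seed(42)

weight = 0.0
for _ in range(1000):
    a = random.random() < 0.5          # two completely unrelated random events
    b = random.random() < 0.5
    if a and b:                        # chance co-occurrence: strengthen
        weight += 0.1
    # the cases that contradict the "correlation" are simply ignored

print(f"learned connection strength: {weight:.1f}")  # large, despite no real link
```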

Large language models feed the neural net a great deal of text.  You will have seen news reports about those who are building large language models being sued by the owners of intellectual property, which gets shoveled into the large language models.  There is also, of course, an enormous trove of text which is available at no cost, and so is widely used in feeding the large language models.  This is, of course, social media, and all the various postings that people have made on social media.  However, this text is not exactly high quality.  So we are feeding the large language models with a great deal of data which can teach the large language model how to structure a sentence, or a paragraph, and even possibly to use punctuation (if, indeed, social media users can be forced somehow to use punctuation), but any meaning may be rather fragmented, disjointed, and quite possibly incorrect.  So, we have taught genAI rhetoric, but we haven't taught it anything about epistemology, or metaphysics.

And this business of saying that we are asking a question, and getting an answer, is an example of misleading the public by the use of our terminology.  You may think that you are asking a question.  The system doesn't understand it as a question.  It is simply, to use the term that the generative artificial intelligence people use, themselves, a prompt.  This prompt is parsed, statistically, with the very complex statistical models that the large language model has created for itself.  Then the genAI will generate a stream of text, once again, based simply on the statistics, and probability, of what the next word is going to be.  Yes, it is certainly impressive how this statistical model, complex though it may be, is able to spit out something that looks like considered English.  But it isn't.  It's just a statistically probable string of text.  The system didn't understand the question, or even that it *is* a question.  And it doesn't understand the answer.  It's just created a string of text based on statistics.
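To see the mechanism, here is a toy sketch, in Python, of next-word generation.  The probability table is invented, and vastly smaller than anything a real large language model uses, but the essential move, picking the next word by weighted chance, is the same:

```python
# A toy illustration of "next word by probability".  The conditional
# probabilities here are invented; a real LLM computes something like
# this over tens of thousands of tokens, using far more context -- but
# the principle is the same: no understanding, just sampling.
import random
random.seed(1)

NEXT = {
    "the": (["cat", "dog", "answer"], [0.5, 0.3, 0.2]),
    "cat": (["sat", "slept"],         [0.6, 0.4]),
    "dog": (["barked", "sat"],        [0.7, 0.3]),
}

word, out = "the", ["the"]
while word in NEXT:
    candidates, weights = NEXT[word]
    word = random.choices(candidates, weights=weights)[0]
    out.append(word)
print(" ".join(out))   # e.g. "the cat sat": plausible text, zero comprehension
```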

It doesn't understand anything.

And if you think anything different, you're fooling yourself.

Now, some of you may be somewhat suspicious of the proposition that a mere statistical analysis, no matter how complex, can generate lucid English text.  Yes, I am oversimplifying this somewhat, and it's not just the probability of the next word that is being calculated, but the next three words, and the next seven words, and so forth.  The calculation is quite complex, but it still may sound odd that it can produce what seems to be a coherent conversation.

Well, this actually isn't very new.  There are types of statistical analysis known as Bayesian analysis and Markov chain analysis.  Bayesian analysis has been used for many years in trying to identify spam, for spam filters for email.  And, around twenty years ago, somebody did a Markov-style analysis (which is much simpler and less sophisticated than the large language model neural net analysis) on the published novels of Danielle Steele.  Based on this analysis, he wrote a program that would write a Danielle Steele novel, and it did.  This was presented to the Danielle Steele fan club, and, even when they knew that it was produced by a computer program, they considered that it was quite acceptable as an addition to the Danielle Steele canon.  And, as I say, that was two decades ago.  And done as a bit of a lark.  The technology has moved on quite a bit since then, particularly when you have millions of dollars to spend on building specialized computers in order to do the analysis and production.
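A bigram Markov chain of the sort just described can be built in a few lines.  Here is a minimal sketch, in Python, with a stand-in corpus (the original experiment, of course, used an author's published novels):

```python
# A minimal bigram Markov-chain text generator: count which word follows
# which in a corpus, then generate new text by walking those counts.
# The "corpus" here is a stand-in for a real body of text.
import random
from collections import defaultdict
random.seed(7)

corpus = ("she looked at him and she smiled and he smiled "
          "and she knew that he knew that she loved him").split()

chain = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    chain[current].append(following)   # duplicates preserve the frequencies

word = "she"
output = [word]
for _ in range(12):
    choices = chain.get(word)
    if not choices:                    # dead end: no observed successor
        break
    word = random.choice(choices)
    output.append(word)
print(" ".join(output))
```

Train something like this on a few million words, and the output starts to sound uncannily like the source author, without the program "knowing" anything at all.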

A lot of the griefbots, or thanabots, or "restoration" systems are based on this kind of technology.  Sometimes they are using even simpler technologies, that have even less "understanding" behind them.

Some of the chatbots are based on even simpler technologies.  For example, over sixty years ago a computer scientist devised a system known as ELIZA.  This system, or one of the popular variants of it, called DOCTOR, was based on Rogerian psychological therapy, one of the humanistic therapies.  The humanistic therapies, and particularly the Rogerian school, tend to get the subject under therapy to solve his or her own problems by reflecting back, to the patient, what they have said, and asking for more detail, or more clarity.  That was what ELIZA did.  If you said you were having problems with family members, the system would, fairly easily, pick out the fact that "family members" was an important issue, and would then tell you something like "Tell me more about these family members."  Many people felt that ELIZA actually did pass the Turing test, since many patients ascribed emotions, and even caring, to the program.

(If you want you can find out more about ELIZA at https://web.njit.edu/~ronkowit/eliza.html )
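The reflection mechanism is simple enough to sketch in a few lines.  The following Python is a bare-bones imitation of the *style* of ELIZA's keyword-and-template approach, not a reconstruction of its actual script:

```python
# A bare-bones, ELIZA-style reflection bot: spot a keyword phrase and
# turn the user's statement back into a request for more detail.
import re

PATTERNS = [
    (re.compile(r"\bmy (\w+(?: \w+)?)", re.I), "Tell me more about your {}."),
    (re.compile(r"\bi feel (\w+)", re.I),      "Why do you feel {}?"),
    (re.compile(r"\bi am (\w+)", re.I),        "How long have you been {}?"),
]

def reply(statement: str) -> str:
    for pattern, template in PATTERNS:
        match = pattern.search(statement)
        if match:
            return template.format(match.group(1))
    return "Please, go on."           # the all-purpose Rogerian fallback

print(reply("I am having problems with my family members"))
# -> "Tell me more about your family members."
```

There is no understanding anywhere in that code; and yet people ascribed caring to a program not much more sophisticated than this.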

Other chatbots have been developed, based on simple analysis and response mechanisms, and sometimes even simpler than those underlying ELIZA.  Chatbots have been used in social media all the way back to the days of Usenet.  Yes, Virginia, there was social media before Facebook.

Next: Griefbots - 2 - Dating apps and AI "friends"

Tuesday, April 1, 2025

"Security for ordinary folks": Lessons from Signalgate - 4 - Cell phones, info capture, attack and breach

Cell phones are not secure.  And then, I suppose that I have to qualify that by saying cell phones are not *very* secure.  And then I suppose that I have to qualify even *that* by saying *most* cell phones are not very secure.

So, to start off with, yes, there are some cell phones which are secure.  There are some cell phones that are secured to specific levels.  But these cell phones are usually restricted in quite a few different ways.  One of the ways that they are restricted is that you cannot install just any app on one of these cell phones.  The cell phone itself will not allow you to.  And this takes care of an awful lot of the insecurity of cell phones, in that most apps for cell phones are not secured.  Security has not been part of the design of the app.  Okay, yes, some aspects of security *may* be *part* of the app.  The app may require you to enter a username and a password to get access to your specific account.  And, indeed, the cell phone app *may* protect the sign-on (the exchange of your username and password with the system that is hosting that account), and may even possibly encrypt the information that you are transferring back and forth between your phone and the app.  But all of that is "maybe" on your bog standard cell phone.  On a secure cell phone it is going to be mandatory.  And anything that doesn't apply stringent security protocols is not going to be allowed on that cell phone.

But that is only one part of the whole security puzzle.  When I am preparing candidates for their professional certification in information security, I start with security management.  The point being that you can have all the security tools that you want, and still not be secure.  You can be an absolute wizard at setting up firewalls, and know absolutely everything that there is to know about establishing a really secure firewall, but if you don't do all the rest of security, and if you don't manage it all together, you're not going to be secure.  In physical terms, I may illustrate it by saying you can have a front door that is solid, and barred, and has really fantastic locks, and you're not going to be secure if your back window is wide open.  So, you have to do the whole job with regard to security.  And cell phones definitely don't do the whole job.  Cell phones are there for availability.  Cell phones are there for convenience.  Cell phones are not for total and complete security.

To understand why, we go back to our Signalgate scandal.

The person who set up the group chat actually thought about security to a certain extent.  But only to eliminate a concern about people being able to get the contents of the chat at a later date.  This person enabled the setting that said that all the messages on the group chat would disappear after a week.  Yes, that can be helpful in terms of security.  (It's also illegal, in terms of government regulations with regard to archiving of all official government communications.  But so many other things were illegal about this whole story that what's one more?)

Anyway, back to this issue of the messages disappearing after a week.  Actually, this doesn't give you much security at all.  For one thing, you can simply copy the text of the messages and put them someplace else.  You can paste the text that you have copied from the messages into a text file on another app on the phone that allows you to make text notes.  Or you can take the text that you took off this group chat, and paste it into an email, and email it to yourself.  There's all kinds of ways that you can take this information and keep it, even though somebody has said that the information is supposed to disappear after a week.

There may be a setting on the Signal app that enforces something that says no, you can't copy that text.  This does make it a little bit harder to keep the text, but not very much.  For one thing, just about every cell phone allows you to take a snapshot of the screen: a screenshot.

And in fact, when those who were party to this chat (officially, at least) complained that the reporter was misrepresenting what had been said at the chat, and that nothing classified had been said at the chat, the reporter was able to provide an entire transcript of what had been said on the chat, including all the emojis that had been sent in messages in the chat (which, of course, would not have copied over as text).  But all he had to do was take screenshots of the messages on the chat.  And, there they all are.  A complete transcript: complete with emojis and everything that was said.

This is one of the reasons that cell phones are not secure.  There are far too many ways of taking information and copying it somewhere else that *isn't* secured, even if you apply security to the cell phone.

But wait, as they keep telling us in the ads, there's more!

Cellphones are actually computers.  Small computers, specialised to communications functions, but they are computers.  And of course, most of them can be connected to the internet.  And therefore people have found ways to write malware for cell phones.  And those pieces of malware can be sent to people, embedded in messages that read, "Hey, you'll get a kick out of this!  Click on this link!" or "Hey, this app is really fun!  Install it on your phone!"  Or something like that.

And people will run a program, whether they realise it's a program or not.  And that program will take over their cell phone.

Most people will click on such things without a second thought.  And consider, particularly, people who are willing to think that there are no rules: no rules about not running just any old software, and not clicking on any old link that somebody sends you in an email message or text.  People who are willing to not identify and verify the people that they add to a group chat.  And people who are willing to discuss highly classified information on systems that are not rated for that level of sensitivity of information.  Well, those kinds of people will probably be quite willing to click on anything, without realising that it might be a piece of software that can take over your phone.

And, of course, once the software has taken over your phone, it can do whatever it wants.  Including setting up a permanent link to send anything that you tap into the phone (your credit card number?  high security government account password?), and anything that shows up on your screen when you are looking at the phone, and take recordings of every telephone conversation you have with that phone, and send it to ...

Well, anyone, really.  Chinese intelligence agencies.  North Korean intelligence agencies.  Russian intelligence agencies.  Possibly (*shudder* *shock* *horror*) even *Canadian* intelligence agencies!  Who *knows* what damage this could do!


Next: "Security for ordinary folks": Lessons from Signalgate - 5 - Authorization

Monday, March 31, 2025

"Security for ordinary folks": Lessons from Signalgate - 3 - Signal, Identity and authentication

Well, I suppose if we are talking about Signalgate, we should talk about Signal.

Signal is, essentially, a texting program.  It uses the Internet, rather than the texting channel for telephone service.  At least, for the most part.  You may be fairly familiar with Signal: you may use it under another name.  If you use WhatsApp, WhatsApp is basically identical to Signal, with one difference.

So, if you have used WhatsApp, you know all about Signal.  You know that it is primarily about text messages, and you probably know that you can use it to create groups, and send text messages to a number of people in the group.  You can also use it for audio and voice calls, but most people are just using it for the texting.  And, particularly, the group text chats.

(I suppose that I should mention the one difference between them.  WhatsApp is owned by Meta, which is, essentially, Facebook.  Therefore, it is Facebook which is managing the connection and setup of all communications done over WhatsApp.  The text chats, and even the voice and video calls, are encrypted.  Therefore people think that they are secure.  By and large that is probably true.  However, since Facebook sets up all the calls, it is, theoretically, in a position to collect information about all WhatsApp calls and chats: who talked to whom, and when.  Signal uses the same technologies, and even the same basic protocols, as WhatsApp.  In that respect they are basically identical.  However, whereas WhatsApp is run by a commercial company whose business is built on collecting data about its users, Signal is run by a non-profit foundation, publishes its client and server code, and is designed to retain almost no metadata about who is talking to whom.  So, while whoever manages the connections for such a service is always, theoretically, in a position to learn something about the traffic, Signal's design gives its operator far less to collect, and far less incentive to collect it.)

I suppose that it might be possible that this point, that simply having encryption doesn't guarantee you privacy, could be lesson 3A.  It certainly is important to know what encryption does do, and what it doesn't do, and the fact that encryption has to be managed properly in order to do the things that you want it to do.  But that actually isn't the lesson that I want to emphasize in this particular lesson.

No, what I want to emphasize, as lesson three, is identity.  Actually, when we in security talk about access control, we talk about IAAA: that is, identification, authentication, authorization, and accountability.  We will talk about authorization and accountability in later lessons.  Right now I want to talk about identification, and authentication.

First of all, somebody on the Signal channel wanted to add someone else.  We don't know who it was that they wanted to add.  Nobody is saying much of anything, and when they do say anything, most of the time they lie, and most of the time the lies conflict with each other.  So we don't have a lot of reliable information about this whole mess.  But we do know that they wanted to add someone to this channel, and that they weren't careful about the actual identification of the person that they added.  The person that they actually added was, in fact, a reporter that the Trump administration did not particularly like.  And, of course, there was absolutely no reason in the world that the people running the chat would want to add that reporter.

As a matter of fact, when the reporter was first added to the channel, and started seeing traffic on it, the reporter thought that it was some kind of hoax.  In fact, when he saw the initial messages going out on this Signal channel, the reporter felt that it was probably set up by someone in support of the administration, as an attempt to fool him into reporting on a story that was false, so that he would then be made to look like a fool when the story was proven false.

However, as the messages went on, it looked more and more like this was, in fact, real communication, between real members of the Trump administration.  Who were, in fact, discussing planned attacks on Yemen.  And so it proved to be.  Information about warplanes being dispatched on bombing missions was given, prior to the aircraft taking off, and was, thereafter, confirmed by military reports of the activities, after the fact.

But back to the identification.  As I say, the people who added the reporter to the channel were not careful about the identification of the person that they added.  Additionally, they did not take the further step, which, in terms of information and access control, would be an absolute minimum necessity, of doing the authentication.  This is verification, very often by something you know, or something you have, or something you are, that you are, in fact, the person that your identification says you are.

So, neither the identification, nor the authentication, were done correctly.  In fact, the authentication wasn't even done at all.

So what does this mean to you, as an ordinary person, wanting to keep yourself secure or safe?  Well, the first thing to do is be careful with identification.  Identification, really, never can be trusted.  It is always simply asserted.  I say that I am Rob.  For the purposes of normal social conversation, this is probably sufficient.  But, if you wanted to do any business with me, you probably would want to know that you were dealing with Robert Slade.  And, indeed, since there are a great many Robert Slades in the world, you would probably want to know which Robert Slade you were dealing with.

As a matter of fact, if you wanted to do any significant business with me, you would probably want to verify, somehow, that I was, indeed, Robert Slade, and not just somebody *saying* he was Robert Slade.  You would want to authenticate the fact that I was Robert Slade.  If you are dealing with me over the Internet, and can't demand to see my driver's license (or something like that), then you might want to set up an account somehow with a coded username, which would be a form of identification that we might agree to, and then, every time we wanted to deal with each other, have a form of authentication.  The authentication might be something that I know: for example, a password.  It might be something that I have: such as the aforementioned driver's license, or possibly my cell phone number, to which you could send a text with a PIN, and then ask me to confirm what the PIN was.  Or we could get really fancy, and have fingerprint readers, or send pictures of each other, and that would be something that we are: otherwise known as biometrics.
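That texted-PIN exchange is easy to sketch.  Here is a minimal, hypothetical version, in Python; send_text() is a stand-in of my own invention, not a real SMS gateway API:

```python
# A minimal sketch of "something you have" authentication: send a
# one-time PIN to a device the person is supposed to possess, then
# verify it with a constant-time comparison.
import secrets
import hmac

def send_text(phone_number: str, message: str) -> None:
    print(f"[to {phone_number}] {message}")   # stand-in for a real SMS gateway

def issue_pin(phone_number: str) -> str:
    pin = f"{secrets.randbelow(1_000_000):06d}"   # unpredictable six-digit PIN
    send_text(phone_number, f"Your one-time PIN is {pin}")
    return pin

def verify(expected_pin: str, offered_pin: str) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected_pin, offered_pin)

expected = issue_pin("+1-555-0100")
print(verify(expected, expected))    # True: the right device answered
print(verify(expected, "000000"))    # False: a guess fails
```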

Authentication is the really important part.  That's why those of us in information security keep on yammering on about the fact that you should choose long passwords, and strong passwords, and use a mix of upper and lowercase letters, and throw some numbers in there too, and even some punctuation marks.  Making the password hard to guess means making the authentication more reliable.  And, as I say, authentication is the important part.
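As a rough illustration of why length, and the mix of characters, matter: the usual back-of-the-envelope estimate for a randomly chosen password is the length times the base-two logarithm of the alphabet size.  A quick sketch, in Python (bearing in mind that human-chosen passwords are far weaker than this estimate suggests):

```python
# A rough-and-ready password entropy estimate: length * log2(alphabet).
# This is the usual back-of-the-envelope figure, not a real strength
# meter: it assumes randomly chosen characters, which human-chosen
# passwords are not.
import math

def entropy_bits(length: int, alphabet_size: int) -> float:
    return length * math.log2(alphabet_size)

print(f"{entropy_bits(8, 26):.0f} bits")    # 8 lowercase letters: ~38 bits
print(f"{entropy_bits(12, 94):.0f} bits")   # 12 printable ASCII: ~79 bits
print(f"{entropy_bits(16, 94):.0f} bits")   # 16 printable ASCII: ~105 bits
# Longer, from a bigger mix of characters, is what makes guessing hard.
```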

And authentication is the part that these military geniuses Signally failed to do.


Friday, March 28, 2025

"Security for ordinary folks": Lessons from Signalgate - 2 - Cell phones and SCIFs

Lesson two is about cell phones.  No, I'm not going to say that you can't use cell phones.  Cell phones, for good or ill, are now part of our lives.  But a definite part of this story, and scandal, has to do with cell phones.

Cell phones are not secure.  At least not *very* secure.  Just today I got some information about a family of malware for cell phones, specifically targeting instant messaging systems, and with at least one component directly aimed at the Signal app.  And a bit later we will go into some of the details about why, and how, cell phones are not terribly secure.  But cell phones are certainly convenient, and sometimes they are even life-saving.  So, no, I am not saying that cell phones are evil, or that you should never use cell phones.

What I am saying is that you should think about how, and why, you use cell phones.

In this particular case, cell phones definitely should not have been used.  The Signal app should not have been used.  The information being discussed was very important, and confidential, particularly at the time that it was being discussed.  Despite the subsequent attempts to say that the information was not classified, that it did not come under a category that needed to be classified, and that somebody involved in the conversation could have declassified the information, whether or not the information actually was declassified, this type of information either was, or definitely should have been, classified, and shouldn't have been discussed over this type of communications arrangement.  Government and military people in the United States use, and are provided with, what is known as a SCIF: a Sensitive Compartmented Information Facility.  This is not simply a phone, or a terminal, but an actual facility: a locked room, with either a card reader or a keypad to identify everyone who enters it, and with a phone, or a terminal, built to a standard of security that would make it very difficult for any adversary to eavesdrop on any conversations.

So, what does this have to do with security for ordinary folks?  Ordinary folks are not provided with a SCIF.

This is quite true, but, once again, we go back to the idea of information classification.  (That's why we started off with the topic of information classification.)  Once again, you don't necessarily have to have some kind of formal information classification system.  But you should consider the information that you are dealing with, and how important it is, to you, and the communications channel that you are using.  Are you using this particular communications channel just because it's convenient?  Do you have another communications channel that might be better for this particular piece of information, or discussion?  Is there some other communications channel that both you, and the person you want to have a conversation with, share, and is it more suitable given the sensitivity (importance) of the information that you were going to discuss?

Cell phones, as I said, are convenient.  But they also have a lot of functions that might not immediately come to mind when all we want to do is place a phone call.  Just about every cell phone has a speakerphone option.  Are you sure that the person on the other end of the call doesn't have their phone on speakerphone?  Could it be that other people, sometimes quite a distance away, could overhear the entirety of both sides of your conversation, because the other person has their cell phone speakerphone on?  Then there's the fact that pretty much all cell phones can be set up to record a conversation.  This isn't unheard of with the landline, but it generally takes a little bit more trouble to do it.  It can be done easily, and quickly, on a cell phone just by downloading an extra piece of software.  Again, we'll go into a bit more detail about some of the problems with regard to cell phones in a subsequent piece in this series.  For now, just be aware of what can happen when sending different types of information over different types of communications channels.  Think about how important the information is, to you, and whether the ease and convenience of the channel that's immediately to hand makes it the best fit for the type of communications you want to engage in.

Using cell phones, and group chats, to discuss really important and top secret attack plans: the type of information that, if it goes astray, could get people killed; well, cell phones probably aren't the best fit for that.  And besides, it would be illegal anyways.