Tuesday, January 6, 2026

HCW - 1.06 - logic - CPU


If you have ever taken any training in computers at all, you will undoubtedly have seen something similar to this diagram.  This is the CPU, the central processing unit.  This is, basically, the heart of any computer.

The computer industry is doing its level best to make this diagram obsolete.  They are incorporating more functions into the CPU, and they are, in fact, combining more CPUs into a single CPU.  A few years ago, you would have seen advertisements for computers talking about a multi-core CPU.  That simply meant that multiple versions of this same basic outline would have been put into a single integrated circuit, which then became the central processing unit for the computer.  This was done in order to make computers more powerful, and faster.  Unfortunately, it came with its own potential problems in terms of security.  It led to the possibility, indeed the probability, of something called a race condition.

When you divide up the activities going on inside a computer, which you have to do in order to get an advantage out of multi-core CPUs, it's possible that one activity may finish slightly before, or slightly after, another.  And, indeed, the end result of these processes may be different depending upon which process finishes first.  Hence the name race condition: in a race between two processes, which one is going to finish first?  Since the result may depend upon which process finishes first, the results are uncertain, and possibly unreliable.  Any such circumstance pretty much inevitably leads to the possibility of a security problem.  We have seen many instances, over the years, of attacks on computers that abused race conditions.  Unfortunately, I think that we lost this argument a number of years ago.  You don't even see any mention of multi-core CPUs anymore, mostly because everybody has multi-core CPUs these days.
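To make the race condition idea concrete, here is a small Python sketch.  It deliberately simplifies an update into two steps, a read and a write, and replays the same two updates to a shared counter under two different finishing orders.  The process names and amounts are invented for illustration:

```python
# A deterministic sketch of a race condition.  Two "processes" each add
# an amount to a shared counter using a two-step read-then-write sequence.
# The final result depends on how the steps interleave -- exactly the
# uncertainty described above.

def run(schedule):
    """Run two increment 'processes' in the step order given by schedule.

    schedule is a list of process ids (0 or 1); each process performs
    its read on its first turn and its write on its second turn.
    """
    counter = 0
    amounts = [5, 10]          # what each process wants to add
    local = [None, None]       # each process's private copy of the counter
    for pid in schedule:
        if local[pid] is None:         # step 1: read the shared counter
            local[pid] = counter
        else:                          # step 2: write back read value + amount
            counter = local[pid] + amounts[pid]
    return counter

# Process 0 finishes completely before process 1 starts: correct result.
print(run([0, 0, 1, 1]))   # 15
# The reads overlap: process 1's write clobbers process 0's update.
print(run([0, 1, 0, 1]))   # 10
```

Same two updates, two different answers, depending only on who finishes first.  That is the unreliability, and the opening for attack, that the text describes.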

We have talked about doing arithmetic.  The arithmetic logic unit, or ALU, is that part of our central processor that does the arithmetic.  It adds, it subtracts, and it may have some very sophisticated mathematical functioning built into it.  As we will see when we talk about programming, computer programs basically do two things: they do arithmetic calculations, or they manage databases.  So, for those programs that do calculations, those calculations start with the CPU and the arithmetic logic unit.

But this is probably the first place to mention a truism in terms of computers.  We tend to say that anything that the hardware guys can do, the software guys can emulate.  And anything the software guys can do, the hardware guys can emulate.  In other words, when we build computer hardware, we don't have to design all the possible functions into the hardware.  This is particularly true in terms of the arithmetic logic unit.  When you have built an addition function into the arithmetic logic unit, you might, or might not, include hardware circuitry that does multiplication as well.  However, you don't need to.  Multiplication is simply repeated addition.  So, even if you don't build multiplication circuits into the computer, you can write a program, or a function, using your basic addition circuits, in order to make the computer do multiplication.  If you are building a computer that you think is going to be doing a lot of multiplication, then you probably want to design and build multiplication circuitry for your arithmetic logic unit.  Doing multiplication by repeated use of the addition circuitry is possible, but it takes longer, and probably uses more power.  So, if you are building a computer processor and intending for it to do an awful lot of mathematical calculations, you probably want to add more hardware circuitry to the arithmetic logic unit, making more functions available in hardware, so that the calculations run faster.
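As an illustration of that truism, here is a short Python sketch of multiplication done purely by repeated addition.  (Real ALUs that do have multiplication circuits use faster shift-and-add arrangements, but the principle that addition is enough is the same.)

```python
def multiply(a, b):
    """Multiply two non-negative integers using only addition."""
    result = 0
    for _ in range(b):     # add a to the running total, b times over
        result += a
    return result

print(multiply(6, 7))   # 42
print(multiply(5, 0))   # 0
```

Note the cost the text mentions: multiplying by b takes b trips through the addition circuitry, which is exactly why a machine expected to do heavy arithmetic gets dedicated multiplication hardware instead.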

But you have to have a control unit.  The control unit provides all the rest of the functions you want in the computer.

One of the functions that you desperately want is going to be a clock circuit.  Computers run on timing.  By the time we get into data communications, you will also find out that communications run on timing.  But there are physical limitations on what you can do with a computer, and one of them is the speed of light.  When you turn a computer circuit on, the entire length of the circuit doesn't come on all at once.  The electricity, and the voltage force, take a while to travel from one end of the circuit to the other.  It's not something that you would normally think about, because it travels at almost the speed of light.  And, in addition, particularly with integrated circuits, the length of the circuit can be extremely small.  These days, in order to see the circuitry at all, you can't even look at it with an optical microscope: you generally have to use an electron microscope.  The circuits are that tiny.

So, the distance that you have to worry about is very short, but it does have a length.  And you have to make sure that your clock cycle is not shorter than the longest time that it takes the force to travel through a circuit in your CPU.  It takes light about one nanosecond to travel about a foot.  The circuits and connections in an integrated circuit can easily be as small as one thousandth of a foot, and sometimes even less.  But you do have to make sure that you do not exceed that limit.
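Putting those figures together, a little arithmetic shows how much headroom a clock actually has over the raw travel time.  The numbers below are just the rough ones from the text:

```python
# Rough back-of-envelope arithmetic: light travels about one foot per
# nanosecond, and a path inside a chip might be a thousandth of a foot.
# The clock period must be longer than the worst-case travel time.

FOOT_PER_NS = 1.0                      # light: ~1 foot per nanosecond
longest_path_feet = 1.0 / 1000.0       # a path one thousandth of a foot long
travel_time_ns = longest_path_feet / FOOT_PER_NS   # 0.001 ns (1 picosecond)

clock_period_ns = 1.0                  # a 1 GHz clock has a 1 ns period
print(travel_time_ns)                    # 0.001
print(clock_period_ns > travel_time_ns)  # True
```

In practice, the switching time of the gates along a path, not just the wire length, is what really sets the limit, which is why the margin is not as generous as this toy calculation makes it look.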

And there are some circuits that are rather convoluted, and you have to make sure that you give enough time for the electrical force to travel the entire length of a winding circuit.  Remember that even the basic flip flop feeds its output back into its own input.  So clock cycles are very important in a computer, and particularly in an integrated circuit like the CPU.

In fact, the clock chips in a computer, and, these days, there are generally at least two, are usually in separate components, rather than being built into the integrated circuit.  The clock chips simply give a regular timing signal, which is then used by the clock circuitry within the control unit, to produce the actual clock signaling that is going to be used by the CPU.

(Clock chips are physical, and actually analog, devices.  They produce a very regular clock cycle, but, being physical and analog devices, they do vary somewhat and are not absolutely identical in terms of the timing signal that they produce.  The fact that most modern computers have two clock chips actually is useful for us.  There are certain situations, such as cryptography, where it is very important to create randomness.  You can't produce true randomness with computer programs alone.  You can produce something that looks unpredictable, but, when you are relying upon it to protect your system, as you do with cryptography, you have to have something that is actually random and unpredictable.  Measuring the differences between the two clock chips inside your computer is a good way to produce actual random information.)
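As a hedged sketch of that idea, Python exposes more than one clock (`time.perf_counter_ns` and `time.monotonic_ns`), and on many systems these are driven by different hardware timers, so the drift between them reflects physical jitter.  On some systems they may share a source, and this is only an illustration of the principle, not cryptographic-quality randomness (real systems use dedicated hardware entropy sources):

```python
import time

# Sample two independent clocks and keep the least significant bit of the
# difference between them.  The low bit of the drift is where the physical
# jitter between the two timing sources shows up.
# Illustration only: NOT a substitute for a real hardware entropy source.

def jitter_bits(n):
    """Collect n bits from the drift between two system clocks."""
    bits = []
    for _ in range(n):
        diff = time.perf_counter_ns() - time.monotonic_ns()
        bits.append(diff & 1)          # keep only the least significant bit
    return bits

sample = jitter_bits(16)
print(all(b in (0, 1) for b in sample))   # True
```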

The control unit also manages the op codes (operation codes) of the computer, which are the basic commands that we use for computer programming.  We will be dealing with that in a later lesson in this series.  The control unit is going to read the op codes, and turn on, or address, the various circuits that will give us the functions that the op codes call for.  This is rather like the addressing of memory, which we started to get into when we were dealing with memory.  And, of course, the control unit also manages access to the memory of the computer as well.
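To sketch what "reading the op codes and turning on circuits" looks like, here is a toy fetch-decode-execute loop in Python.  The three op codes (LOAD, ADD, HALT), the single accumulator, and the program format are all invented for illustration; this is not a real instruction set:

```python
# A toy control unit: fetch an op code, decode it, and route it to the
# right "circuit" (here, a branch of the if/elif chain).

def run_program(program):
    acc = 0          # a single accumulator register
    pc = 0           # program counter: which instruction comes next
    while True:
        opcode, operand = program[pc]    # fetch
        pc += 1
        if opcode == "LOAD":             # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc = acc + operand
        elif opcode == "HALT":
            return acc

print(run_program([("LOAD", 2), ("ADD", 3), ("ADD", 4), ("HALT", 0)]))   # 9
```

A real control unit does the decoding with gates rather than an if/elif chain, but the shape of the job, fetch, decode, route to circuitry, repeat, is the same.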

In terms of the memory, as controlled by the control unit, we are primarily talking about primary memory.  This is, generally, the working memory of the computer.  But even within that primary memory, there are different types of memory.  There are, for example, registers within the central processing unit itself, sometimes within the control unit, and often within the arithmetic logic unit.  These are, generally speaking, very small segments of memory, that operate very quickly in conjunction with the rest of the central processing unit, and are, in fact, part of it.  Then there is the main, working, memory of the computer.  This holds the data that the computer is working on, and also, in computers that we use these days, holds the programs that the computer is actually running.  But even within this working memory, there are some divisions.  Sometimes, in order to speed up the operation of the computer, there is cache memory, which operates at a higher access speed than the main working memory of the computer, but is not, completely, part of the central processing unit, and doesn't always run at the same access speed as the registers.  There are additional variations on the theme.  But this is probably getting us a bit away from the basic idea of how computers actually work.
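Here is a toy sketch of why cache memory helps.  It is a small direct-mapped cache in Python, with the sizes, the mapping rule, and the hit/miss counters invented for illustration: a repeated access to a recently used address (a "hit") avoids the slower trip to main memory.

```python
# A toy direct-mapped cache: each address maps to exactly one cache line.
# A hit means the data was already in the fast table; a miss means a slow
# fetch from main memory was needed.

class ToyCache:
    def __init__(self, main_memory, lines=4):
        self.memory = main_memory
        self.lines = lines
        self.tags = [None] * lines     # which address each line holds
        self.data = [None] * lines
        self.hits = 0
        self.misses = 0

    def read(self, address):
        line = address % self.lines    # direct mapping: address picks a line
        if self.tags[line] == address:
            self.hits += 1             # fast path: already cached
        else:
            self.misses += 1           # slow path: fetch from main memory
            self.tags[line] = address
            self.data[line] = self.memory[address]
        return self.data[line]

cache = ToyCache(main_memory=list(range(100, 200)))
for address in [0, 1, 0, 1, 0]:       # repeated accesses to two addresses
    cache.read(address)
print(cache.hits, cache.misses)       # 3 2
```

Only the first access to each address pays the slow price; the repeats are cheap.  That is the whole bargain that cache memory offers.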

But the second, and more important, division, in terms of types of memory, is between the primary memory and the secondary memory.  In this particular illustration, they are both shown in the same box as the central processing unit.  Generally speaking, even today, this is not quite true.  Secondary memory tends to be peripheral to the central processing unit.  As a matter of fact, this brings us to our next topic, that of input and output.

An awful lot of what we might initially think of as part of the computer is, in logical terms, part of the peripherals.  There is the central processing unit, and there is the working memory.  That tends to be about as far as we get before we start talking about peripherals.  And yet, an awful lot of what we think of as peripherals are, in fact, very closely physically connected to the actual working parts of the computer.  And, in these days of cell phones, and tablets, and laptops, it gets very hard to distinguish this physically as well.

Anything outside of the central processing unit and the main working memory tends to be peripheral to the computer.  That basically just means outside.  But it's hard, these days, to determine what is inside, and what is outside.

So, let me give you a bit of a list of those things that are peripheral to the central part of the computer.  The keyboard is peripheral.  The keyboard is part of the input system.  We are entering data on the keyboard of the computer, so that it can be stored in memory, and possibly processed.  The screen or monitor is peripheral to the computer.  It is part of the output system.  It shows us the results of the processing that has taken place on the computer.

And the secondary memory, which we have also talked about before, is on the outside, and therefore peripheral to the computer.  Secondary memory (tapes, discs, jump drives, and so forth), as we mentioned, is part of the input and output system.  Sometimes we call for a file to be loaded into memory, and then it's part of the input.  Sometimes we want to store the results of processing that we've done on the computer, and then it's output.

Peripheral devices very often have a small ability to do their own thing.  A keyboard, for example, through the cable connecting it to the actual central part of the computer, will send an indication that a key has been pressed.  Then it will send the data indicating which key has been pressed.  Sometimes a combination of keys will have been pressed, and then the data will indicate what that collection or combination of keys, being pressed together, means.  Generally speaking the keyboard doesn't trouble the computer with which individual keys have been pressed, in terms of a combination, but only what that combination actually means.
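A sketch of that idea in Python: the keyboard reports the meaning of a combination, not the individual keys.  The combination table below is invented for illustration; real keyboards send standardized scan codes over their own protocol:

```python
# The keyboard's little bit of local intelligence: it translates a set of
# keys pressed together into the single meaning it sends onward, instead
# of troubling the computer with each individual key.

COMBINATIONS = {
    frozenset(["Shift", "a"]): "A",
    frozenset(["Ctrl", "c"]): "copy",
    frozenset(["a"]): "a",
}

def report(keys_down):
    """Translate a set of pressed keys into the meaning sent to the computer."""
    return COMBINATIONS.get(frozenset(keys_down), "unknown")

print(report(["Shift", "a"]))   # A
print(report(["Ctrl", "c"]))    # copy
```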

Sometimes the processing of a peripheral device can become quite complicated.  For example, modern screens or monitors for computers are often connected with an HDMI cable.  These cables can send an awful lot of data very quickly.  But it's not just the fact that it can handle a lot of data quickly, so that you can watch a streaming movie, that is important.  It's also the fact that the HDMI cable communicates in both directions.  When the computer needs to display something on the screen, it will send an indication on the HDMI cable.  The monitor peripheral will react to that, and start setting up an area of memory so that the data can be loaded there and then displayed.  But the HDMI cable will also communicate, back to the computer, the fact that this processing has started, and, when it is complete, the fact that that area of memory in the monitor is ready to receive the data that the computer wishes to send.  Once the computer sends the data, the HDMI cable will probably also communicate, back to the computer, that the information has, in fact, been displayed.

All of this processing that is going on, within the monitor, is happening in the same way that it would happen within the central computer.  It's just that this takes some of the processing load off the central computer, allowing it to speed up other, possibly more important, operations.  The processing that is going on in the peripheral device is handled by the same types of circuitry that we have already talked about in terms of the computer itself.  When we want circuitry to do a specific thing, in a monitor, or in a hard drive for storage, we again figure out what functions we need, what truth tables they require, and therefore what gates we have to put together in order to provide those functions.
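To make the back-and-forth concrete, here is a much-simplified Python sketch of that kind of handshake.  The message names ("prepare", "frame") and the replies are invented; real display links, HDMI included, use their own signalling.  The point is only the two-way conversation: the computer announces data, the monitor prepares, confirms readiness, and then confirms display.

```python
# A toy monitor that does its own local processing and answers back,
# in the spirit of the bidirectional link described above.

class Monitor:
    def __init__(self):
        self.buffer = None
        self.ready = False

    def handle(self, message, payload=None):
        if message == "prepare":
            self.buffer = []           # set aside memory for the frame
            self.ready = True
            return "ready"             # tell the computer it can send
        if message == "frame" and self.ready:
            self.buffer = payload      # store the pixel data for display
            return "displayed"         # confirm back to the computer
        return "error"                 # data sent before the monitor was ready

monitor = Monitor()
log = [monitor.handle("prepare"), monitor.handle("frame", [1, 2, 3])]
print(log)   # ['ready', 'displayed']
```

Notice that sending a frame to a monitor that has not prepared produces an error, which is exactly why the confirmation messages coming back matter.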

There is one more topic here that I would like to address, and it is the idea of a Turing machine.  I mentioned Alan Turing before, very briefly.  One of the very important ideas that Alan Turing created was that of the Turing machine.  The Turing machine is not necessarily a computer itself.  On the other hand, in terms of the idea, the Turing machine is a universal computer.

I am not going to describe how a Turing machine works.  For one thing you wouldn't believe me.  For another thing, the idea is really a mathematical one, and it doesn't really make a lot of sense, in terms of how existing computers actually work.  But the idea of the Turing machine is an important one, because of the fact that the Turing machine is a universal computer.  The Turing machine can perform any operations that any computer can perform.  This gives us a mathematical tool that will allow us to decide whether or not a function that we want to perform with a computer can, in fact, be computed.  If it can't be done with a Turing machine, it can't be done.  And therefore, this is a means of deciding whether or not we can ever actually write a program to perform a certain function.  This saves us time in terms of not trying to do something that never can, in fact, be done.
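As a small supporting sketch for the point that software can emulate the idea, here is a minimal Turing-machine-style simulator in Python.  The rule table is invented for illustration, and this particular machine simply flips every bit on its tape and stops when it runs off the end; a real treatment of Turing machines is, as noted, a mathematical matter.

```python
# A minimal machine: a tape of symbols, a read/write head, a current
# state, and a rule table that says, for each (state, symbol) pair,
# what to write, which way to move, and what state to enter next.

def run_turing(tape, rules, state="start"):
    """Run the machine until it halts or the head leaves the tape."""
    tape = list(tape)
    head = 0
    while state != "halt" and 0 <= head < len(tape):
        symbol = tape[head]
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Rule table: (state, symbol) -> (next state, symbol to write, move)
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
}

print(run_turing("0110", FLIP))   # 1001
```

The remarkable claim, which is Turing's, not this sketch's, is that a machine no more complicated than this, given the right rule table and enough tape, can perform any computation any computer can.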

But it also helps us out in another way.  I mentioned before the truism that what the hardware guys can do, the software guys can emulate, and what the software guys can do, the hardware guys can emulate.  This is, in a way, supported by the idea of the Turing machine.  And it also means that certain types of processing can be done on existing available machinery, but also might be able to be done better, or faster, if we redesign the computer in various ways.  That's always something to keep in mind when you are trying to decide whether a given computer system or program is actually going to be useful.

