PDA

View Full Version : How do Computers Work?



Grytorm
2014-04-02, 05:46 PM
Truthfully, I don't really understand how computers work at an extremely basic level. Computer languages make a decent amount of sense, but not whatever binary refers to. I would assume that the basic function of computers is a series of interconnected switches capable of going between two states. A computer program, on a basic level, is a description of the switches used to process a certain operation. Inputs change a switch that sets off the series that processes the larger set of switches, which eventually have an output set of switches that are then processed visually. Is this a rough approximation of how computers work, or is it nothing like Minecraft? Also, are the switches in some way physical things that are referenced by other switches, or are they even weirder? Why can't a computer be assembled that uses more states than two?

tensai_oni
2014-04-02, 06:08 PM
You have a decent idea of how it works. In layman's terms, computers use electric currents that enter logic gates (http://en.wikipedia.org/wiki/Logic_gate); arrays of logic gates are used for more sophisticated calculations.

There is no limitation that forbids us from using systems other than binary; binary is simply more practical. That's because a signal's voltage is treated as binary: if it is below a certain point, that's a "0", and if it is above that point, it's a "1". If the system were ternary, we'd have two break points, which makes creating logic gates harder from a physical point of view, and also makes mistakes due to voltage fluctuations more likely.

Drumbum42
2014-04-02, 08:40 PM
So, here's a good example of why binary is what we use:

As tensai_oni said, computers are just a bunch of electric currents, and the 1's and 0's are represented by voltages. So let's say 0 is 1V and 1 is 5V. What happens if a voltage of 2V is read, either by mistake or because of some sort of physical defect? It will still be treated as a 0, because it's closer to 1V than 5V.

Now let's say we use a base 10 system, so 0=1V, 1=2V, 2=3V, etc. If something jumps from 1V to 2V when it was not meant to, "0" is now "1", and all the math and logic after that point is wrong. The point is to have the fewest errors possible, and making every possible answer either 1 or 0 limits them as much as possible.
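The trade-off can be sketched in a few lines of Python. The voltage levels here (1V for "0", 5V for "1", and one volt per digit in the base-10 scheme) are just the made-up numbers from the example above, not real chip specs:

```python
def read_binary(voltage):
    """Snap a noisy voltage to whichever of the two levels (1V or 5V) is closer."""
    return 0 if abs(voltage - 1.0) < abs(voltage - 5.0) else 1

def read_decimal(voltage):
    """Base-10 scheme from the example: digit d is carried as (d + 1) volts."""
    return min(range(10), key=lambda d: abs(voltage - (d + 1.0)))

# A stray volt of noise is harmless in binary but flips a base-10 digit:
print(read_binary(2.0))   # 2V still reads as 0
print(read_decimal(2.0))  # 2V now reads as 1, not 0
```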

Also, if you want to know more about binary, I'd suggest looking at some of the more "fun" stuff, like converting it to numbers and letters. This turns binary into a more readable format. After all, the 1's and 0's actually mean things: 101010 = 42. Well, at least I think it's fun...
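If you want to play with those conversions, Python can do them directly (the two-byte bit string below is just an arbitrary example):

```python
print(int('101010', 2))    # the 42 mentioned above
print(bin(42))             # back the other way: '0b101010'

# Letters work the same way: ASCII text is 8 bits per character.
bits = '0100100001101001'
text = ''.join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(text)                # 'Hi'
```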

Knaight
2014-04-02, 09:01 PM
After all, the 1's and 0's actually mean things, like 101010=42. Well at least I think it's fun.....

Kind of. The issue here is that to include negative numbers, binary needs to work a bit differently, and there are different ways to represent negative numbers. 0101010 would generally be read as 42, but 101010 could just as easily be -22 (in 6-bit two's complement).
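A quick sketch of that in Python, assuming the width of the bit string tells you where the sign bit is:

```python
def twos_complement(bits):
    """Read a bit string as a signed two's-complement integer."""
    value = int(bits, 2)
    if bits[0] == '1':             # top bit set means negative
        value -= 1 << len(bits)
    return value

print(twos_complement('0101010'))  # 42: 7 bits, sign bit clear
print(twos_complement('101010'))   # -22: 6 bits, sign bit set
```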

As for computer workings: these (http://www.goodmath.org/blog/2014/02/21/how-computers-really-work-math-via-boolean-logic/) articles (http://www.goodmath.org/blog/2014/02/17/representing-numbers-in-binary/) are both helpful. The second one actually gets into numerical representation.

Grinner
2014-04-02, 09:02 PM
You have a decent idea of how it works. In layman's terms, computers use electric currents that enter logic gates (http://en.wikipedia.org/wiki/Logic_gate); arrays of logic gates are used for more sophisticated calculations.

To add onto this, microprocessors are vast networks of these gates. The operation of the network is described by an assembly language, and that language is used in the design of operating systems. The operating system then acts as a middleman in operations between hardware and software. Programmers used to have to deal with the hardware directly, but changes in OS design shifted much of that responsibility onto the OS itself.


While we're on the subject, I've never been quite certain as to how overclocking works. As I understand it, the idea is that the clock determines how frequently the computations are checked. By forcing the processor to check computations more frequently, the processor performs those computations faster. The problem, then, is that if the processor is not given enough time to compute, the transistors may not be given enough time to switch, producing errors.

What I don't understand is how that works out in practice. Is the clock an actual circuit within the processor? How are computations checked?

valadil
2014-04-02, 09:24 PM
You have a decent idea of how it works. In layman's terms, computers use electric currents that enter logic gates (http://en.wikipedia.org/wiki/Logic_gate); arrays of logic gates are used for more sophisticated calculations.


Yep. At the most basic level you have interconnected switches passing around ones and zeros. Combine a few of them and you get a NAND gate. Tack a few more on and you have a complete set of logic gates.

Now take a step up from there. Logic gates are now your most basic unit. These can be combined to form basic arithmetic circuits. (I'll leave said composition as an exercise for the reader, since my intro to CS course was 13 years ago and I haven't dealt with these since.)
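For anyone who wants the exercise worked, here is a sketch in Python rather than wiring: a half adder is just an XOR (sum bit) and an AND (carry bit), and two half adders plus an OR make a full adder, the unit that binary addition circuits are built from:

```python
def half_adder(a, b):
    return a ^ b, a & b                 # (sum bit, carry bit)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                  # (sum bit, carry out)

print(full_adder(1, 1, 1))              # (1, 1): 1 + 1 + 1 = 0b11
```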

Okay, now whatever circuitry you've built up is your basic unit. Now we'll compose these, etc. 15-20 levels of abstraction later and you've got a computer as you know it. No, I can't guide you through all those levels of abstraction.



While we're on the subject, I've never been quite certain as to how overclocking works. As I understand it, the idea is that the clock determines how frequently the computations are checked. By forcing the processor to check computations more frequently, the processor performs those computations faster. The problem, then, is that if the processor is not given enough time to compute, the transistors may not be given enough time to switch, producing errors.

What I don't understand is how that works out in practice. Is the clock an actual circuit within the processor? How are computations checked?

(Again, this is from an intro to CS course 13 years ago, so my memory is probably fuzzy...)

As I understand it, there isn't a continuous electric signal going through your computer. Instead it's a series of pulses. Each time electricity gets pushed through your CPU, it'll change the internal state of things such that the next pulse will behave differently. Any change that happens takes exactly one of these cycles, and bigger computations will take many, many cycles to complete.

These pulses happen a certain number of times per second; hertz (Hz) is 1/second. So a 3.4 GHz or whatever processor is pulsing 3.4 billion times per second. Overclocking is just changing a setting that tells it to pulse a little bit faster.
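The arithmetic is easy to check; 3.4 GHz works out to roughly a third of a nanosecond per pulse:

```python
freq_hz = 3.4e9                 # 3.4 GHz
period_ns = 1 / freq_hz * 1e9   # seconds per cycle, converted to nanoseconds
print(round(period_ns, 3))      # about 0.294 ns per pulse
```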

tensai_oni
2014-04-02, 09:24 PM
Yes, the clock is an actual physical device.

The way computations are checked... let's say the microprocessor is organised in such a way that gate A takes the end results of gates B and C and performs an operation (NAND or whatever) on them. Gate A won't work in the same cycle as B or C; rather, it will work on the results of B and C from the previous cycle. For an up-to-date result, you have to wait for the next cycle.

Considering that a signal will pass through a huge number of gates, and each gate takes up a cycle, it all adds up. So the shorter individual cycles are, the less you have to wait.

Gravitron5000
2014-04-03, 09:00 AM
So, here's a good example of why binary is what we use:

As tensai_oni said, computers are just a bunch of electric currents, and the 1's and 0's are represented by voltages. So let's say 0 is 1V and 1 is 5V. What happens if a voltage of 2V is read, either by mistake or because of some sort of physical defect? It will still be treated as a 0, because it's closer to 1V than 5V.

Now let's say we use a base 10 system, so 0=1V, 1=2V, 2=3V, etc. If something jumps from 1V to 2V when it was not meant to, "0" is now "1", and all the math and logic after that point is wrong. The point is to have the fewest errors possible, and making every possible answer either 1 or 0 limits them as much as possible.

Also, if you want to know more binary stuff I'd suggest looking at some of the more "fun" binary stuff like converting it to numbers and letters. This turns binary into a more readable format. After all, the 1's and 0's actually mean things, like 101010=42. Well at least I think it's fun.....

Actually it doesn't quite work that way.

[wall 'O text] In the example you provided, the signal could be sampled as either a 0 or a 1, and should be treated as a legitimate error. Generally, if you get a signal at 2V, there is something wrong with the circuit that is sending you the signal. How it really works is that your driving circuit is only capable of sending two levels of voltage, other than for a small time when the signal is switching between states. Your driver's output is guaranteed to be a 0 when below a certain voltage (V-outlow), and a 1 when above a different voltage (V-outhigh). The receiving circuit is designed so that it reads a 0 whenever it receives a voltage below a certain threshold (V-inlow), and a 1 when above a different threshold (V-inhigh). If designed correctly, V-outlow is somewhat smaller than V-inlow, and V-outhigh is somewhat larger than V-inhigh. This gives you some margin to allow for noise, with an undefined region that should only be seen during transitions between 0<->1 states. A clock is generally used to sample the input signal so that you never need to worry about a signal in transition being seen as valid.[/wall 'O text]
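Those thresholds can be sketched in Python; the numbers below are invented for illustration (loosely TTL-like), not taken from any real datasheet:

```python
V_OUT_LOW, V_IN_LOW = 0.4, 0.8    # driver guarantees <= 0.4V for a 0
V_IN_HIGH, V_OUT_HIGH = 2.0, 2.4  # driver guarantees >= 2.4V for a 1

def receive(voltage):
    """Receiver's view: 0 below V-inlow, 1 above V-inhigh, undefined between."""
    if voltage <= V_IN_LOW:
        return 0
    if voltage >= V_IN_HIGH:
        return 1
    return None                   # undefined region: only legal mid-transition

# The gap between the V-out and V-in levels is the noise margin:
print(receive(0.4 + 0.3))         # a driven 0 plus 0.3V of noise -> still 0
print(receive(2.4 - 0.3))         # a driven 1 minus 0.3V of noise -> still 1
print(receive(1.5))               # in the undefined region -> None
```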

Yup, digital design is my day job ...

*edit to add ...

Yes, the clock is an actual physical device.

The way computations are checked... let's say the microprocessor is organised in such a way that gate A takes the end results of gates B and C and performs an operation (NAND or whatever) on them. Gate A won't work in the same cycle as B or C; rather, it will work on the results of B and C from the previous cycle. For an up-to-date result, you have to wait for the next cycle.

Considering that a signal will pass through a huge number of gates, and each gate takes up a cycle, it all adds up. So the shorter individual cycles are, the less you have to wait.

... but the time it takes to get the result can't go any faster than the time it takes for the longest logic operation (in this case either B->A or C->A, whichever takes longer). This is why there is a limit to how fast you can clock your processor. Now, there are three things that can affect the speed of the logic (outside of the complexity of the operation itself, which is fixed once implemented): process (actual variations in how the device is constructed at the factory), voltage (higher voltages tend to speed up the logic), and temperature (lower temperatures also speed up the logic). This is why you see overclockers with crazy cooling on their chips (running things faster also makes them generate more heat).

Max™
2014-04-03, 01:19 PM
You want to know how computers work?

Watch some of the Minecraft videos where they build computers inside the game.

Going from redstone blocks, torches, and carts to this: https://www.youtube.com/watch?v=aQqWorbrAaY

Well, once you understand how they make the basic units work in the game, understanding the basic units in the real world isn't as difficult anymore.

Incidentally, that computer he set up in-game, where he was jumping around on the big keyboard typing? When he typed /night on the screen, the game read it as a command and executed it, switching the game to night time. That is mind-bogglingly awesome.

pendell
2014-04-03, 01:32 PM
Okay.
At its most basic level, a computer is an electric circuit composed of things like NOR gates (http://en.wikipedia.org/wiki/NOR_gate) and NAND gates (http://en.wikipedia.org/wiki/NAND_gates).

These are simple electric circuits that, depending on what electric inputs go in, will either make the output wire active or not active.

For instance, a NOR gate means "not or". Imagine we have two inputs, A and B:

If A is off and B is off, the gate output will light up.

If either A or B lights up, the gate will go dark.

In any computer language you care to name, this would translate to the programming phrase

IF NOT (A OR B) THEN C;

With me so far?
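That pseudocode runs as-is, near enough, in Python; here is the full truth table for NOR:

```python
def nor(a, b):
    return not (a or b)

for a in (False, True):
    for b in (False, True):
        print(a, b, '->', nor(a, b))
# Only the (False, False) row makes the output light up.
```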

You can put these small, basic circuits together to form more complex circuits -- AND gates, OR gates, XOR gates, and so on.

Eventually you have a library of instructions and operations that occur based on what electrical impulses you send into the circuit.

Then, when you get tired of writing out the wires, you can introduce binary notation to describe the on and off states of the input switches.

0010001111000010

But that's hard to read, so we convert it to hexadecimal: each hex digit stands for a group of 4 bits, so the 16 bits (2 bytes) above become just 4 hexadecimal digits.

The number above becomes

0x23C2.

That's easier to read eh?
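You can check the conversion yourself; each hex digit covers exactly 4 bits:

```python
bits = '0010001111000010'
print(hex(int(bits, 2)))   # 0x23c2
# digit by digit: 0010 -> 2, 0011 -> 3, 1100 -> C, 0010 -> 2
```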

But it's still not easy enough, which is why we started inventing mnemonics to remember the codes. Then we got clever and wrote programs to read the mnemonics and automatically replace them with the appropriate binary numbers.

So we write
MOV AX, 1
MOV BX, 2
ADD AX, BX
MOV CX, AX

which corresponds to
a=1; b=2; c=a+b;

These one-for-one mnemonics are called assembly language, and the program we use to translate the human-readable words into the binary code is an assembler.
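A toy assembler along those lines takes only a few lines of Python. The opcode and register encodings here are completely made up for illustration; a real assembler would use the encodings from the chip's manual:

```python
OPCODES = {'MOV': 0b0001, 'ADD': 0b0010}   # invented encodings
REGS = {'AX': 0b00, 'BX': 0b01, 'CX': 0b10}

def assemble(line):
    """One-for-one translation: mnemonic line in, 8-bit binary word out."""
    op, args = line.split(maxsplit=1)
    dst, src = (a.strip() for a in args.split(','))
    # second operand may be a register or a small immediate number
    src_bits = REGS[src] if src in REGS else int(src)
    return format(OPCODES[op] << 4 | REGS[dst] << 2 | src_bits, '08b')

for line in ('MOV AX, 1', 'MOV BX, 2', 'ADD AX, BX'):
    print(line, '->', assemble(line))
```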

Ah, but we're not done yet! Clever people invented higher-level languages so that they could make very complex statements, which are first converted to assembly, and from there to the binary program, which at the final stage is fed into that bunch of switches. The output of one instruction feeds into the next instruction, and so forth, for as long as the program runs.

So any programming you do is essentially instructing the computer to drive a complex series of electrical circuits, consisting of transistors, resistors, and diodes.

But you're probably never going to deal with that directly. Instead, there are many layers of insulation between you and the raw metal.

At the bottom level are the circuits themselves, which provide a basic library of operations.

Sitting on top of that is something called Microcode (http://en.wikipedia.org/wiki/Microcode), which provides a library of basic functions and procedures using the raw circuits.

Sitting on top of THAT is the actual Assembly language for the processor, which we mentioned above.

On top of that are the general-purpose programming languages like C++ and Python and FORTRAN and Java, which allow you to express quite complicated ideas without having to worry about registers or memory addresses -- the programming language does that for you.

Sitting on top of THAT are the application software like games which we all know and love.

Does that make sense? If you're still confused, you can probably pick up a set of these circuits big enough for humans and rig up a simple calculator with a soldering iron and a breadboard, purely using electrical components. A computer is basically that, except that it consists of billions of transistors, each only nanometres across, at least in modern computers.

Respectfully,

Brian P.

hajo
2014-04-03, 01:47 PM
Also are the switches in some way physical things that are referenced by other switches or are they even weirder?
Logic gates can be done in a lot of ways.
Early designs used punchcards, relays, vacuum tubes etc.
But to get speed, small size, and low power consumption, you need transistors and integrated circuits.
See logic gates (http://en.wikipedia.org/wiki/Logic_gates#Universal_logic_gates) and http://en.wikipedia.org/wiki/Digital_computer

A "minimal" CPU needs about 3,500 "switches".
And you can see the difference between a computer made from 3,500 vacuum tubes, each the size of a light bulb and using 100 W, and, say, a 6502, which also had only a few thousand transistors.
See http://en.wikipedia.org/wiki/Microprocessor and http://en.wikipedia.org/wiki/MOS_Technology_6502

Also: Visual Transistor-level Simulation of the 6502 CPU (http://www.visual6502.org/)


Why can't a computer be assembled that uses more states than two?
Sure, it is possible to build that; see http://en.wikipedia.org/wiki/Analog_computer
But to mass-produce a device with many millions of parts, reliably and cheaply, it is easier to use a simple design.

Drumbum42
2014-04-03, 03:32 PM
Actually it doesn't quite work that way.

[wall 'O text] In the example you provided, the signal could be sampled as either a 0 or a 1, and should be treated as a legitimate error. Generally, if you get a signal at 2V, there is something wrong with the circuit that is sending you the signal. How it really works is that your driving circuit is only capable of sending two levels of voltage, other than for a small time when the signal is switching between states. Your driver's output is guaranteed to be a 0 when below a certain voltage (V-outlow), and a 1 when above a different voltage (V-outhigh). The receiving circuit is designed so that it reads a 0 whenever it receives a voltage below a certain threshold (V-inlow), and a 1 when above a different threshold (V-inhigh). If designed correctly, V-outlow is somewhat smaller than V-inlow, and V-outhigh is somewhat larger than V-inhigh. This gives you some margin to allow for noise, with an undefined region that should only be seen during transitions between 0<->1 states. A clock is generally used to sample the input signal so that you never need to worry about a signal in transition being seen as valid.[/wall 'O text]

Yup, digital design is my day job ...


Yeah, I was assuming VERY large errors for the sake of example, like a +/- 1.5V sample error. I like to simplify to explain the general idea, because it gets really complex really fast (though I may have oversimplified). I mean, we haven't even gotten to Fetch->Decode->Execute yet.

Edit: Fetch->Decode->Execute is an abstraction of what a CPU does all day. Get instruction -> decode instruction (or opcode) and prepare for execution -> execute instruction -> DO IT AGAIN!
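Here is a sketch of that loop in Python, over a made-up three-instruction machine (the instruction set is invented for illustration; real CPUs decode binary opcodes, not strings):

```python
def run(program):
    acc, pc = 0, 0                     # accumulator, program counter
    while pc < len(program):
        opcode, operand = program[pc]  # fetch
        pc += 1
        if opcode == 'LOAD':           # decode + execute
            acc = operand
        elif opcode == 'ADD':
            acc += operand
        elif opcode == 'HALT':
            break                      # ...otherwise, DO IT AGAIN
    return acc

print(run([('LOAD', 1), ('ADD', 2), ('HALT', 0)]))   # 3
```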

factotum
2014-04-04, 01:54 AM
A "minimal" cpu needs about 3500 "switches".


Factoid: the Intel 4004, widely regarded as one of the first true microprocessors, had around 2,300 transistors in it. A high-end modern CPU has more than a billion of the things, although that's at least partially because a modern CPU has several processor cores built into a single chip and also carries an awful lot of on-board cache memory, which is expensive in transistor count.

The Prince of Cats
2014-04-04, 05:01 AM
Why can't a computer be assembled that uses more states than two?
They can, but there's not as much use for them as for binary. A ternary computer uses the values on/yes/one, off/no/zero, and a third state of 'maybe', which allows for actions based on unknown variables. I know ternary paradigms are good for agent-oriented computing, but it's been a decade since I did those lessons at university...

ObadiahtheSlim
2014-04-04, 07:09 AM
Tri-state logic is used in modern computers. The values are on, off, and high impedance. The third state basically lets a device signal that it's not connected, which is useful when you have several different parts all trying to talk over the same bus (http://en.wikipedia.org/wiki/Bus_%28computing%29).

ace rooster
2014-04-04, 11:22 AM
While we're on the subject, I've never been quite certain as to how overclocking works. As I understand it, the idea is that the clock determines how frequently the computations are checked. By forcing the processor to check computations more frequently, the processor performs those computations faster. The problem, then, is that if the processor is not given enough time to compute, the transistors may not be given enough time to switch, producing errors.

What I don't understand is how that works out in practice. Is the clock an actual circuit within the processor? How are computations checked?

A field-effect transistor can be (loosely) thought of as a capacitor connected to a switch, so that when the voltage across the capacitor is above a certain value, the switch is closed; otherwise it is open. Putting current into the capacitor does not instantly close the switch, because the capacitor has to charge by a small but non-zero amount. This is (partly) where the delay in transistors comes from. Increasing the voltage increases the current, which charges the capacitance faster, which switches the transistor faster. Hence the chip can work reliably at faster speeds. Power rises as the square of the current, so power output increases fast, hence the meaty cooling required.

Imagine a NOT gate connected to its own input. Its output switches between high and low based on the time it takes a transistor to switch. This is a clock. Increasing the voltage makes this switching faster. By using a line of NOT gates whose delay is longer than any step in the computation, you can be sure that every other step will be finished by the time the clock switches. That's a simple way of doing it at least; I don't know how it is actually usually done.
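That ring-of-NOT-gates idea can be simulated in a few lines of Python. Each tick stands in for one gate delay; the synchronous model and the starting state are simplifications for illustration, not how real oscillator design works:

```python
def ring_oscillator(n_gates, ticks):
    """Odd ring of NOT gates, each with one tick of delay."""
    state = [False] * (n_gates - 1) + [True]   # asymmetric starting state
    trace = []
    for _ in range(ticks):
        # every gate inverts the previous gate's output from the last tick;
        # gate 0 is fed by the last gate, closing the ring
        state = [not state[-1]] + [not s for s in state[:-1]]
        trace.append(state[-1])                # watch the last gate's output
    return trace

# The last gate settles into 3 ticks low, 3 ticks high: period 2 * n_gates.
print(ring_oscillator(3, 12))
```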

Saidoro
2014-04-05, 01:55 AM
Yeah, I was assuming VERY large errors for the sake of example, like a +/- 1.5V sample error. I like to simplify to explain the general idea, because it gets really complex really fast (though I may have oversimplified). I mean, we haven't even gotten to Fetch->Decode->Execute yet.
Read errors aren't the main reason many-state logic isn't used (they're a reason, but not the big one). A 10-state gate would require many, many more transistors to build than the 2-state ones that are common practice, more than enough to use up any space you gained when switching to 10 states let you reduce the overall number of gates. There's just no profit in it, and then things like read errors come along and move it from merely pointless to largely unheard of.

A field-effect transistor can be (loosely) thought of as a capacitor connected to a switch, so that when the voltage across the capacitor is above a certain value, the switch is closed; otherwise it is open. Putting current into the capacitor does not instantly close the switch, because the capacitor has to charge by a small but non-zero amount. This is (partly) where the delay in transistors comes from. Increasing the voltage increases the current, which charges the capacitance faster, which switches the transistor faster. Hence the chip can work reliably at faster speeds. Power rises as the square of the current, so power output increases fast, hence the meaty cooling required.

Imagine a NOT gate connected to its own input. Its output switches between high and low based on the time it takes a transistor to switch. This is a clock. Increasing the voltage makes this switching faster. By using a line of NOT gates whose delay is longer than any step in the computation, you can be sure that every other step will be finished by the time the clock switches. That's a simple way of doing it at least; I don't know how it is actually usually done.
Typically the clock signal would be generated using a crystal oscillator (http://en.wikipedia.org/wiki/Crystal_oscillator), probably conditioned with some other intermediate circuitry before being fed to the gates.

Balain
2014-04-06, 02:21 AM
In my first course in assembly we did a little bit of programming in machine language, all the binary instructions. It really, really, really makes me appreciate high-level languages.

Then we did bare-metal programming on a Raspberry Pi; as cool as having direct control over the hardware was, it made me appreciate having an OS.

Oh, on a side note: you can make any of the logic gates using only NAND or NOR gates. I seem to recall a few architectures using this, but not which ones exactly.
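That universality is easy to demonstrate in Python; every gate below is built from NAND alone:

```python
def nand(a, b):
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Check every gate against Python's own operators:
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
print('all gates built from NAND check out')
```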

gomipile
2014-04-06, 02:54 AM
I can't recommend Computerphile's videos on YouTube highly enough for helping to build a basic intuition for these sorts of things.

hajo
2014-04-06, 05:05 AM
assembly .. programming in machine language ..
bare metal programming on raspberry pi
To get that feeling, try programming in a really minimal language such as BF (http://esolangs.org/wiki/BF) :smallamused:
Also, some people have actually built real (http://vonkonow.com/wordpress/category/toys/) hardware (http://www.robos.org/sections/electronics/bfcomp/) for BF,

"ideal device to learn how computers work (both software and hardware)"
and even designed a CPU (http://www.clifford.at/bfcpu/bfcpu.html) :smallcool:


you can make any of the logical gates using only NAND or NOR gates. I seem to recall a few architectures using this

From Apollo Guidance Computer (http://en.wikipedia.org/wiki/Apollo_Guidance_Computer):

"was the first to use integrated circuits (ICs). .. the Block I version used 4,100 ICs, each containing a single 3-input NOR gate"

Edit: Caution, links to BF* get censored

shawnhcorey
2014-04-06, 08:10 AM
Factoid: the Intel 4004, widely regarded as one of the first true microprocessors, had around 2,300 transistors in it. A high-end modern CPU has more than a billion of the things, although that's at least partially because a modern CPU has several processor cores built into a single chip and also carries an awful lot of on-board cache memory, which is expensive in transistor count.

Modern CPUs also have redundant circuits: if a main circuit fails, it is cut out and the backup circuit is wired in. This is because the more transistors you have, the greater the probability that one will fail.

Digital computers are based on the von Neumann architecture (https://en.wikipedia.org/wiki/Von_Neumann_architecture).

hajo
2014-04-06, 09:24 AM
recommend Computerphile's videos on YouTube
Such as Home-Made Z80 Retro Computer (http://www.youtube.com/watch?v=OtpaY8VD52g&list=UU9-y-6csu5WGm29I7JiwpnA) :smallcool:

And a blog about a home-made computer, using a 6502 (http://uebersquirrel.blogspot.de/2013/10/the-spoiler-entry.html) and Forth (http://uebersquirrel.blogspot.de/search/label/forth).