PC Tips

Computers - the universal problem solving machine


What is a computer?
In a very broad sense, a computer is an automatic electronic device with the ability to solve various problems. Automatic, because it can carry out operations on its own, and repeat them as many times as needed.

But how does it do so?
To solve a problem, a computer first needs to know what the problem is. This is done through a process called "input". Once this "input" process is complete, the "input" message, or problem, is sent to the brain of the computer to be worked out. When the working out is over, the result is given back to us through a process called "output". In this sense you can also say a computer is a device that calculates a result ("output") from one or more initial items of information ("input").
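
As a rough illustration of that cycle, here is a minimal Python sketch (the names and the calculation are made up for the example; any small problem would do). It reads two items of "input", works them out, and gives back the "output":

    def process(numbers):
        # The 'working out' step: here the problem is simply to add.
        return sum(numbers)

    a = int(input("First number: "))    # "input": initial items of information
    b = int(input("Second number: "))
    print("Result:", process([a, b]))   # "output": the calculated result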

However, this sounds too simple for a machine that does so many complicated things. In fact, it is only an outline of the machine called a computer. There are plenty of other small and big stages in between that play the real trick.

To make it clear in terms of the computers you are familiar with, let us be more specific. The keyboard on which a message is typed, or the mouse you click, are both meant for the "input" function, while the screen, or monitor, on which you see the results is meant for "output". A printer attached to a computer is a pure "output" device.

The part that carries out the main job, that is, the problem solving, is called the Central Processing Unit, or CPU, and you can call it the brain of the computer. Not only does it 'solve a problem', or 'work it out', it also stores information for future use and retrieves, or calls back, the same whenever asked for. Based on these functions the CPU is said to be made of three main sections: the control unit, the part that directs and coordinates the work; the arithmetic logical unit (ALU), a dedicated helping hand that performs the actual calculations and comparisons; and primary storage, the place for storing. In addition to this there is yet another place for storing information. Called secondary storage, it lies outside the CPU, though attached to it. So you see the machine has many sections, each specialised in carrying out intelligent work: reading your problems, processing them for a solution, storing them, and finally giving you back the solutions.
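
To make those sections a little more concrete, here is a toy model in Python (purely illustrative, with made-up names; a real CPU is built from circuitry, not Python code). It shows an ALU doing the arithmetic, a primary store holding the data, and a control loop fetching instructions one by one:

    def alu(op, x, y):
        # Arithmetic logical unit: carries out one basic operation.
        return {"add": x + y, "sub": x - y}[op]

    memory = {"a": 7, "b": 5}          # primary storage: data kept by name
    program = [("add", "a", "b"),      # an ordered list of instructions
               ("sub", "a", "b")]

    # The control unit's job, sketched as a fetch-and-execute loop.
    for op, x, y in program:
        print(op, "->", alu(op, memory[x], memory[y]))   # prints 12, then 2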

Now, doesn't it sound strange for a machine made up of some lumps of plastic and metal, tiny-looking silicon chips, and a mesh of wires?

Yes, it does! After all, these are the kinds of stuff that even your old non-programmable TV set or radio receiver is made of. Yet while they cannot act on their own, a computer can.
Again, being a machine, it doesn't have an organic brain like human beings do.


So how is this possible?
This is because a computer is favored with a special arrangement, one that backs the computer with an inbuilt logic to guide it throughout. In fact, this inbuilt logic gives the necessary instructions, charts out the course of action, and sets the order in which the steps are to be done to solve the problem, or get the result. Problems that might otherwise require hundreds, or thousands, of strenuous hours to solve manually. Quite intelligent, isn't it?
Indeed. But this arrangement cannot come about just like magic. And this is where computers need the full support of human brains: the brains that can fix up the logic inside the electronic device and turn it into a computer.

To put it simply, this special arrangement provided in a computer is called the software. It gives the computer a sort of 'brain' that follows certain courses of action backed by reasons. And all the rest of the stuff, like those chips, plastics, and wires, is called the hardware.

While the hardware needs the support of engineering knowledge, it is mainly mathematical logic on which the software stands. The better, or the more advanced, the logic, the better is the ability of a computer to solve problems.
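
As a small illustration of that point, the hypothetical Python sketch below solves the same problem, adding the numbers 1 to n, twice: once by brute repetition and once with better logic, a formula that gets there in a single step:

    def sum_by_counting(n):
        total = 0
        for i in range(1, n + 1):   # n separate additions
            total += i
        return total

    def sum_by_formula(n):
        return n * (n + 1) // 2     # one step, using a well-known formula

    print(sum_by_counting(1000))    # 500500
    print(sum_by_formula(1000))     # 500500, but with far less work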

A deeper look into the basics of this logic:
And while talking about computer logic, it is better to refer to it as digital logic, because the backbone of this logic, or well-defined reasoning system, is digits.
So a computer, the digital kind that you see around, cannot understand anything unless it is stated in terms of digits. And the digits used here are binary. Binary is a two-digit number system, as against the ten-digit decimal system that we have been familiar with since the day we learned counting. Any number can be represented by the binary digits 0 and 1, in the same way it is done with the digits '0' through '9'. And mathematics holds the key to the main concept behind the development of the computer: that all information can be represented as sequences of zeros and ones.
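
You can see this correspondence for yourself. The short Python sketch below uses the built-in bin() and int() functions to move between the familiar decimal digits and binary:

    for n in [0, 1, 2, 5, 10, 255]:
        print(n, "in binary is", bin(n))   # e.g. 10 -> 0b1010, 255 -> 0b11111111

    # And back again: a string of 0s and 1s read as a base-2 number.
    print(int("1010", 2))                  # 10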

Well, why only these '0's and '1's?
This is because computers are electrically operated. And the most basic instruction that any electrical device can act upon is 'switch on' and 'switch off'. When the switch is on, current flows and the device comes to life. When the switch is off, current stops flowing and the machine goes dead.

Similarly, a computer is also supposed to understand and work on the same principle. And this is what the inventors of computers thought of too. In fact, the values of '0' and '1' are realized in the machine by the absence or presence of electric current. Translated into terms of mathematical logic, they can be said to represent false and true, respectively. The entire digital logic of the computer is designed on this principle, and computers can record electric impulses coded in this very simple binary system.
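
Here is the same idea in a minimal Python sketch: the two values seen as switch states, as binary digits, and as the truth values of logic (the gate functions are illustrative, built from ordinary bit operations):

    OFF, ON = 0, 1
    print(bool(OFF), bool(ON))    # False True: 0 and 1 as truth values

    def AND(a, b): return a & b   # 1 only when both inputs are 1
    def OR(a, b):  return a | b   # 1 when at least one input is 1
    def NOT(a):    return 1 - a   # flips 0 to 1, and 1 to 0

    print(AND(1, 0), OR(1, 0), NOT(1))   # 0 1 0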

And the binary digit, or bit, has become the basic unit of data storage and transmission in a computer system, simply because of the ease with which these two digits represent the two conditions of a running electrical or electronic device.

More...
Well, this is only a very rough view of the way computers work. There are plenty of other small and big things that make up the computers of today. Let us take a brief look at some of them.

However, before we do that, it is better to take a look at a brief history of computers. It will help us know how the early computers worked, and also how they reached the present stage, starting from their earliest ancestors.

====================================================================

History of the Generations of Computers

The computers that you see and use today didn't come from any one inventor at one go. Rather, it took centuries of rigorous research work to reach the present stage. And scientists are still working hard to make them better and better. But that is a different story.

First, let us see when the very idea of computing with a machine or device, as against the conventional manual calculation, was given a shape.

Though experiments were going on even earlier, it was in the 17th century that the first such successful device came into being. Edmund Gunter, an English mathematician, is credited with its development in 1620. Yet it was too primitive to be recognized even as a forefather of computers. The first mechanical digital calculating machine was built in 1642 by the French scientist-philosopher Blaise Pascal. And since then, the ideas and inventions of many mathematicians, scientists, and engineers paved the way for the development of the modern computer in the following years.

But the world had to wait yet another couple of centuries to reach the next milestone in developing a computer. Then it was the English mathematician and inventor Charles Babbage who did the wonder, with his work during the 1830s. In fact, he was the first to work on a machine that could use and store the values of large mathematical tables. Most important of all, the machine was designed to take its instructions and numbers on punched cards, which record information with the help of only two kinds of symbols: a hole, or no hole.
This was quite a big leap closer to the basics on which computers work today. However, there was yet a long way to go. And, compared to present-day computers, Babbage's machines could be regarded as little more than high-speed counting devices, for they could work on numbers alone!

The Boolean algebra developed in the 19th century removed the numbers-alone limitation of these counting devices. This branch of mathematics, invented by the English mathematician George Boole, helped correlate the binary digits with our language of reasoning: the value 0 is related to false statements and 1 to true ones. The British mathematician Alan Turing made further progress with his theory of a computing model. Meanwhile, the technological advances of the 1930s did much to further the development of computing devices.

But the direct forefathers of present-day computer systems evolved in the 1940s. The Harvard Mark I computer, designed by Howard Aiken, was among the world's first digital computers, and it made use of electro-mechanical devices. It was developed jointly by International Business Machines (IBM) and Harvard University in 1944.

But the real breakthrough was the concept of the stored-program computer. This came when the Hungarian-American mathematician John von Neumann set out the design of the Electronic Discrete Variable Automatic Computer (EDVAC). The idea--that instructions as well as data should be stored in the computer's memory--made this device totally different from its counting-device forerunners. And since then computers have become increasingly faster and more powerful.

Still, as against present-day personal computers, those machines had the simplest form of design. It was based on a single CPU performing various operations, like addition, multiplication, and so on. And these operations would be performed following an ordered set of instructions, called a program, to produce the desired result.

This form of design was followed, with a little change, even in the advanced versions of computers developed later. The changed version saw a division of the CPU into memory and arithmetic logical unit (ALU) parts, with separate input and output sections.

In fact, the first four generations of computers followed this basic form of design. It was basically the type of hardware used that made the difference from generation to generation. For instance, the first generation was based on vacuum tube technology. This was upgraded with the coming of transistors and printed circuit board technology in the second generation. It was further upgraded by integrated circuit chip technology, in which little chips replaced a large number of components. Thus the size of the computer was greatly reduced in the third generation, while it became more powerful. But the real marvel came during the 1970s, with the introduction of very large scale integration (VLSI) technology in the fourth generation. Aided by this technology, a tiny microprocessor can hold millions of pieces of data.

And based on this technology, IBM introduced its famous Personal Computer. Since then IBM itself, and other makers including Apple, Sinclair, and so forth, have kept on developing more and more advanced versions of the personal computer, along with bigger and more powerful machines like mainframes and supercomputers for more complicated work.

Meanwhile, over the past couple of decades, tinier versions like laptops and even palmtops have come up with more advanced technologies. But the advancement of technology alone cannot take full credit for the amazing progress of computers over the past few decades. Software, the inbuilt logic that runs the computer the way you like, kept being developed at an equal pace. The coming of famous software makers like Microsoft, Oracle, and Sun helped pace up this development. The result of all this painstaking research is a device that adds to our ease in solving complex problems at lightning speed, and is easy to use and operate: the computer.