Sunday, April 29, 2012

Random Access Memory (RAM)

Random Access Memory is a volatile memory. When we switch on the computer, it reads the RAM size, that is, the memory space available for our software. The program in RAM is erased as soon as we switch off the computer; this is why RAM is called a volatile memory.
Random Access Memory can be divided into two major categories: Static RAM (SRAM) and Dynamic RAM (DRAM). Static Random Access Memory is capable of storing information statically as long as power is supplied to the device. Dynamic memory will store information for only a few milliseconds before it is lost, unless it is refreshed. A few milliseconds may not seem much, but within this length of time the microprocessor is able to accomplish quite a few tasks.

Read Only Memory (ROM)

Read Only Memory can be subdivided into many different categories, such as PROM (Programmable Read Only Memory), EPROM (Erasable and Programmable Read Only Memory) and EEPROM (Electrically Erasable and Programmable Read Only Memory). The ROM is a device that is mask-programmed at the factory by the manufacturer in the last phase of fabrication. It is most often used for large production runs because the manufacturer charges thousands of dollars for the initial mask.
The PROM is a field-programmable device programmed by a machine called a PROM burner or PROM programmer. Here, the programming can be done only once.
The EPROM has an advantage over the PROM because it can be erased. The EPROM is erased by exposing it to high-intensity ultraviolet light for approximately 6 to 40 minutes.
The EEPROM is a device that can be programmed and erased electrically. It is often called read-mostly memory (RMM), since it is often used to store data for an extended period of time.
For example, as soon as we switch on the computer, the start-up messages are displayed and the devices connected to it are checked by a program inside the ROM, which is called firmware.

Monitor (Visual Display Unit)

Every computer must have a display terminal called a Visual Display Terminal (VDT) or Visual Display Unit (VDU). The monitor gives an instant view of what the hardware and software are doing. Traditionally, this has been a Cathode Ray Tube (CRT). The information held as bits (0s and 1s) is displayed on the monitor in character form, which we see as alphanumeric characters; only the characters can be seen on the screen. Each screen is made up of very tiny points of light, produced by electric current, called pixels. The higher the number of pixels on the screen, the higher the resolution and the better the character or graphics display.
A Cathode Ray Tube (CRT) or Visual Display Unit (VDU) usually has 24 rows and 80 columns. Different types of display adapters are listed below:
MDA - Monochrome Display Adapter (one colour B/W)
CGA - Colour Graphics Adapter (colours Red, Green, Blue)
MCGA - Multi-Colour Graphics Adapter
EGA - Enhanced Graphics Adapter
VGA - Video Graphics Array
SVGA - Super Video Graphics Array
LCD - Liquid Crystal Display

Thursday, April 26, 2012

Artificial Intelligence

Artificial Intelligence is an advanced field of computer science that studies how humans think, and then attempts to apply that knowledge in hardware and software to give computers similar capabilities. Areas of research include speech and pattern recognition, language translation, natural language processing, and the ability to learn from previous experience. Practical applications for AI range from medical diagnostics to computer configuration and the interpretation of oil well logging data. In Artificial Intelligence, a problem-solving and decision-making system that uses a computer representation derived from the knowledge and experience of a human expert is called an expert system. An expert system consists of two parts: the domain or database of factual knowledge about the subject, and a set of rules that provide a method of using that knowledge. Expert systems are usually confined to a very narrow area of expertise. This is also known as a rule-based system.
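We can make the two parts of an expert system concrete with a minimal sketch in Python. The facts and rules below are invented purely for illustration; a real expert system would hold a much larger knowledge base.

  # The domain: a database of factual knowledge (illustrative facts only).
  facts = {"fever": True, "cough": True, "rash": False}

  # The rules: each pairs the facts it requires with a conclusion.
  rules = [
      ({"fever", "cough"}, "possible flu"),
      ({"fever", "rash"}, "possible measles"),
  ]

  def infer(facts, rules):
      """Return every conclusion whose required facts are all present."""
      return [conclusion for required, conclusion in rules
              if all(facts.get(f, False) for f in required)]

  print(infer(facts, rules))   # ['possible flu']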

Fifth Generation Computers (Future)

The microprocessors and memory devices which are extensively used today contain a very pure form of silicon in crystal form. Attempts are being made by Japan, Germany and the USA to develop memory devices and main processors (microprocessors) using protein or Gallium Arsenide (GaAs); it has been estimated that, with their use, the capability of computers will increase millions of times. Recently, these have been named superconductors. Japan has taken a keen interest in winning the race to develop super computers using superconductors. These computers may possibly be very small, like microcomputers, which should enable us to use all the facilities of super computers at home. The development of superconductors was declared by Japan in 1978.
Attempts have been made for several years for the development of superconductors using Gallium Arsenide (GaAs) or biochips. These computers will be able to understand natural language and will have thinking power, called Artificial Intelligence. Only experts will be able to use these computers because they will be much more sophisticated than present microcomputers.

Fourth Generation Computers (1974)

Intel Corporation developed the Intel 4004 chip in 1971, the first microprocessor. The development of the IC resulted in the development of microcomputers. Many American companies started to manufacture microcomputers from 1973. IT System, AST, ALR, Macintosh, IBM PC Agtec, Wang Laser, Letron, Scan and Imex microcomputers (IBM and IBM compatibles) came into the market. Microcomputers are highly efficient in data processing, very reliable, small and elegant. They can accommodate a huge amount of data in a small space. Microcomputers are as small as a notebook and are capable of communicating with minicomputers or mainframe computers with the help of a modem (Modulator and Demodulator). In the urban areas of Kathmandu and Lalitpur alone, many computers are used in households. They are used in teaching, offices, campuses, and government and non-government organizations. All the computers used at home and in offices are fourth generation microcomputers. The main processors used in these computers are microprocessors.
Fourth Generation Computers have microprocessors which carry serial numbers; the serial number indicates the capability and speed of the computer.
In Large Scale Integration (LSI), about 20 thousand transistors and resistors are placed in compact form. Later, a more compact form was developed in which 10 to 20 LSIs could be placed; this was called VLSI (Very Large Scale Integration). VLSIs are called microchips or microprocessors. The development of VLSI led to the development of home computers or Personal Computers (PCs), which are famous worldwide and are the most widely used.

Third Generation Computers (1966-1973)

The manufacture of silicon chips was perfected in the late 1950s, and in 1963 IBM marketed computers containing ICs as memory devices; the IBM 360 is an example of this type. From 1964, computers began to be connected with Visual Display Units (VDUs), high speed printers, and magnetic tapes as storage or backing media. All computers after 1965 contain ICs. Third generation computers are smaller in size and much faster, as they used small chips containing thousands of parts integrated in them. For mass storage, floppy disks, hard disks, tapes or cards were used in this generation of computers. Multiprocessing and multiprogramming are the two main advantages of this generation. The actual market for computers started from the third generation.

Second Generation Computers (1959-1965)

The transistor (short for transfer resistor) was invented by the Nobel Prize winners John Bardeen, Walter H. Brattain and William B. Shockley in 1947-48. A transistor is more capable of storing information than a vacuum tube. The computers using transistors as storage media are classified as Second Generation Computers. One transistor could do the task of 1000 vacuum tubes. For this reason, it became clear that smaller and faster computers could be manufactured. We can take the example of the Mark-I, whose 18000 vacuum tubes could be replaced by 18 transistors. The IBM 1401 is an example of this advantage. Second Generation Computers are relatively smaller than the First Generation Computers, and they are much faster and more reliable.
The general purpose computer was first developed by IBM: in 1953, IBM developed the IBM 650, a general purpose computer. Similarly, in 1951, Remington Rand developed the UNIVAC-I, which could be used in business data processing. During this period, many other companies were also involved in developing computers.

First Generation Computers (1946-1958)

The storage medium or memory used in the first generation computers was the vacuum tube. The Mark-I was developed using vacuum tubes by Howard Aiken in 1937. The Mark-I had a length of 51 ft, a width of 3 ft and a height of 8 ft. It weighed 32 tonnes. 18000 vacuum tubes were used and nearly 7 lakh 50 thousand parts were assembled in this computer. Nearly 500 miles of wire was cut into pieces to connect the different components. This mainframe computer took 4.5 seconds for a multiplication operation, and 3 additions were performed in 1 second.
Meanwhile, mainframe computers such as the EDVAC, UNIVAC-I, MARK-II, ENIAC, Z-3 and Z-4 were manufactured.
Using many vacuum tubes, the ENIAC (Electronic Numerical Integrator and Calculator) was devised by John Mauchly and J. Presper Eckert Jr. in 1946. This computer was comparatively efficient: it could perform in one day the work that took any other computer of that time 30 days. From 1947 to 1955, this computer was used in American offices.

Monday, April 9, 2012

Division process of Generations of Computers

In 1962, computer scientists held a conference in which they decided to classify the development of computers into generations. In different generations, different kinds of memory units were used. In the First Generation Computers, vacuum tubes were used; in the Second Generation Computers, transistors were used as the memory device; in the Third Generation Computers, Integrated Circuits (chips) were used as the memory device. Recent advancements in chips have produced the Fourth Generation Computers consisting of Very Large Scale Integration (VLSI), and very soon we will get Fifth Generation Computers with Artificial Intelligence (AI).
Later, in 1969, an Intel engineer named Marcian "Ted" Hoff presented his design ideas for a microprocessor chip to representatives of a Japanese calculator company. The first microprocessor, the Intel 4004, could execute only a few instructions, and it could manipulate only tiny amounts of data at one time. Intel produced the 8008 at the end of 1971 and the 8080 in 1974.
In the spring of 1976, a young Hewlett-Packard technician named Steve Wozniak bought a microprocessor from MOS Technology and set out to build a computer around it. This computer, the Apple I, was shown at the Homebrew Computer Club in Silicon Valley. A friend of Wozniak named Steve Jobs suggested that they form a company to market the Apple. With financial and managerial help from Mike Markkula, a former Intel engineer and marketing executive, Apple suddenly became a major entrant into the computer industry.
In 1984, the lower-priced Apple Macintosh appeared with many of the same hardware/software features. Apple was not alone, of course: dozens of software suppliers announced integrated packages in 1984 that were designed for popular personal computers.
But we have still not reached the final stage of this development, and Japan has decided to produce fifth generation computers.

Development of Chip

The actual testing and implementation of computers started at the end of the 1950s. The computer has reached its present stage after going through three major modification stages. The Integrated Circuit (IC) was patented by Harwick Johnson of RCA in 1953. Later developments resulted from:
a) the application of photo engraving to IC manufacture.
b) the use of layers of silicon oxide insulation in order to build up multiple layers of crystal circuitry.
The latter of these two methods is often referred to as MOS (Metal Oxide Semiconductor).
Integrated circuits with the equivalent of more than 100 components are called LSI (Large Scale Integration) and those with more than 1000 components are called VLSI (Very Large Scale Integration).
Modern Integrated Circuits (ICs) are built on wafer-thin slices of extremely purified silicon crystal called chips. The development process of a chip is a very tedious task; the chips are developed meticulously under computer control. The development of a chip is summarized in the points given below:

  1. High-purity polycrystalline silicon is the raw material used to manufacture integrated circuit chips, the heart of the computer.
  2. Under computer control, the chunks are melted in a crucible and slowly drawn upward, forming a cylindrical boule, or crystal.
  3. The crystal is sliced into thin wafers and polished to a mirror finish.
  4. Using photolithography, complex circuit patterns representing thousands of transistors, diodes and capacitors are created on the wafer's surface.
  5. Each chip on the wafer is then electrically tested.
  6. The wafers are then diced, or cut into individual chips.
  7. The result is an individual chip approximately 5 mm square, a marvel of miniaturization that forms the electronic building block of countless devices that are improving the world in which we live and work.

Dr. Herman Hollerith

Dr. Herman Hollerith was a census statistician at the U.S. Census Bureau in the mid 1880s. At that time, the bureau was still trying to count the results of the 1880 census and saw little prospect of completing the count before the next census, which was scheduled for 1890. Hollerith proposed a mechanised solution to the problem based upon equipment handling punched cards. His idea was to code the data by representing it as combinations of holes punched in the cards. His machine is called electromechanical punched card equipment, Hollerith's Tabulating Machine, or the Census Machine. In 1896, he founded the Tabulating Machine Company to make and sell his invention. Later, in 1924, this firm merged with others to form the International Business Machines (IBM) Corporation, which became by far the most popular and biggest company in computer manufacturing.

Lady Augusta Ada Lovelace

One person who recognized the importance of the Analytical Engine from the very beginning was Lady Augusta Ada Lovelace, daughter of the famous English poet Lord Byron. Lovelace was a mathematician and a long-time supporter of Babbage. It was she who persuaded Babbage to use the binary system for his invention instead of the decimal system. She also thought of ways to program the machine, so that it would repeat the same set of instructions and carry out instructions only if certain conditions existed. These techniques are still in use today to make computer programs more efficient. Because of her work, many consider Lovelace to be the first programmer. In her honour, a programming language used in American defence is called Ada.
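The two techniques credited to Lovelace, repeating instructions and acting on conditions, survive in every modern programming language. A tiny illustrative sketch in Python (the numbers are invented for the example):

  # Repeat a set of instructions (a loop) and carry out an instruction
  # only if a certain condition exists (a conditional).
  total = 0
  for n in range(1, 11):    # repeat the same instructions ten times
      if n % 2 == 0:        # act only when the condition holds
          total += n        # add only the even numbers
  print(total)              # 30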

Analytical Engine and Difference Engine

One of the creative thinkers of the 19th century was the Englishman Charles Babbage (1792-1871). He is considered to be the Father of Modern Computer Science. Like Pascal and Leibniz, Babbage was a mathematician, and he too wanted to build a machine that could perform difficult calculations accurately and quickly.
Babbage was Professor of Mathematics at Cambridge University and built a small model of his "Difference Engine". He demonstrated this machine to the Royal Society in the year 1823. The demonstration won government backing for Babbage, who wished to produce a larger machine able to generate reliable astronomical and mathematical tables containing values accurate to 20 decimal places. The machine was never completed because of mechanical difficulties. However, Babbage's researches led him to develop the concept of an Analytical Engine in the year 1833, essentially a general purpose automatic calculator, which he designed in 1834. This design owed much to Jacquard's invention and incorporated many features present in modern computers. He had a concept of using binary digits (bits) in this machine.
His idea can be summarized in the following points:

  1. Data and program instructions fed in via a device using a suitable medium (punched cards).
  2. Storage facilities for data and instructions.
  3. A mechanised unit for calculation - a "mill".
  4. A suitable output device.
Note: Using his ideas, about 100 years later, in 1937, the Mark-I was developed by Howard Aiken.

Sunday, April 8, 2012

Jacquard's Loom

Between 1802 and 1804, Joseph Jacquard, a French textile manufacturer, perfected a mechanical means of automatically controlling weaving looms to facilitate the production of woven cloth with complex patterns.
An essential feature of the Jacquard's Loom was a series of punched cards strung tightly together side by side in a long continuous strip. These cards were automatically fed through a loom mechanism in sequence, with the purpose of controlling the loom's weaving action. The pattern in the woven cloth was produced by raising particular sections of warp threads (those fixed to the frame) each time the shuttle was passed across the frame. In the Jacquard's Loom, each warp could be raised by an individual hook unless a sprung pin deflected the hook. If the sprung pin aligned with a hole in a punched card, one end of the pin would pass through the hole, so that the other end of the pin failed to deflect the hook. Before each pass of the shuttle, the next card was moved up to the pins. The operation thus determined whether each warp would be raised or not. The basic rule of operation is:
Hole in card - warp thread not raised
No hole in card - warp thread raised

Jacquard's Loom was the start of a chain of developments which has led to the robot-operated factory production lines of today.
The above example can be compared with the binary operation, or binary coding, which is the basis of operation in modern computers. The first person known to have used binary codes for number representation was Francis Bacon, in 1623.
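We can sketch the comparison directly in Python, letting 1 stand for a hole and 0 for no hole; the card row below is invented for illustration:

  # One row of a hypothetical punched card, read as binary digits.
  pattern = [1, 0, 1, 1, 0]     # 1 = hole, 0 = no hole
  for bit in pattern:
      if bit == 1:
          print("hole in card    -> warp thread not raised")
      else:
          print("no hole in card -> warp thread raised")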

Leibniz's Stepped Calculator


In 1671, Gottfried Wilhelm von Leibniz, a German mathematician, invented a calculating machine which was able to perform true multiplication and division. He wrote, "It is unworthy of excellent men to lose hours like slaves in the labour of calculation...", a statement which would no doubt endear him to many maths pupils. He set out to build a better mechanical calculator, knowing that such a machine would save much time and enable scientists to concentrate on their studies rather than on calculations. In 1694, Leibniz completed his machine, with modifications which used cylinders, and named it the Stepped Reckoner. The Pascaline could only add and subtract, but Leibniz's machine could also multiply, divide and find square roots.

Pascaline

It was the 19-year-old Blaise Pascal, a French mathematician, who in 1642 devised the first true calculating machine, reputedly to help his father, who was a tax controller! Numbers were entered by dialing a series of numbered wheels, and a series of toothed wheels transferred the movements to a result dial. Each wheel had the numbers 0 to 9 printed on it. When the first wheel made a complete turn from 0 to 9, it automatically caused the second wheel to advance to the next number, and so on.
The Pascaline could add and subtract easily by the movement of wheels. The calculating capacity of the Pascaline was 9 crores 99 lakhs 99 thousands 9 hundreds and 99 (9,99,99,999).
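The wheel-carry idea is easy to model. Here is a minimal Python sketch of eight Pascaline-style wheels, where a complete turn from 9 back to 0 advances the next wheel (the function name is ours, for illustration):

  def add_one(wheels):
      """Advance the first wheel and propagate the carry, gear by gear."""
      for i in range(len(wheels)):
          if wheels[i] < 9:
              wheels[i] += 1       # room on this wheel: no carry needed
              return wheels
          wheels[i] = 0            # complete turn: carry to the next wheel
      return wheels                # past 9,99,99,999 all wheels reset to 0

  # Eight wheels, least significant first, holding 999 so far.
  print(add_one([9, 9, 9, 0, 0, 0, 0, 0]))   # [0, 0, 0, 1, 0, 0, 0, 0] = 1000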
In the late 1960s, a new computer programming language was developed by Professor Niklaus Wirth in Zurich, and it was named PASCAL in recognition of Pascal's contribution to computing.
More than 50 years passed before anyone invented a calculating machine more advanced than the Pascaline. During that time, scientists' need to do complicated calculations continued to grow. Instruments such as the abacus and the Pascaline helped, but they were too limited in capacity. As a result, some experiments were never completed; others were completed only after months and years of tiring calculations.

The Slide Rule

In 1620, just six years after the invention of logarithms, William Oughtred invented the slide rule, a calculating device that uses the principles of logarithms. A simple slide rule consists of two graduated scales, one of which slips upon the other. The scales are devised in such a way that suitable alignment of one scale against the other makes it possible to obtain products, quotients or other functions by inspection.
For example, if the middle scale is positioned with its '1' against the '2' of the top scale, multiples of 2 can be read off along the scale; alternatively, the same alignment shows divisions giving quotients of 2.
The slide rule is an example of an analog device: unlike a digital device, readings are taken in a smooth, continuous scan along the scale, instead of stepping along a set of distinct values.
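The principle behind the aligned scales is that multiplication becomes the addition of logarithmic lengths. A short Python sketch (the function name is ours, for illustration):

  import math

  def slide_rule_multiply(a, b):
      """Multiply by adding logarithmic 'lengths', as aligned scales do."""
      length = math.log10(a) + math.log10(b)   # add the two scale lengths
      return 10 ** length                      # read the product off the scale

  print(slide_rule_multiply(2, 3))   # 6.000000000000001, approximately 6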

Napier's Bones

John Napier, a Scottish mathematician, did a considerable amount of work on aids for calculation, the most notable of which was the invention of logarithms in 1614 A.D. He also devised a set of rods for use as multiplication aids. These rods were carved from bone and are often called Napier's Bones. We can easily make a set ourselves by copying the multiplication tables onto cards and cutting them out.

Abacus

The abacus is a portable device that consists of beads strung on wires or wooden rods. Using an abacus, one can rapidly and accurately add, subtract, multiply or divide large numbers. No one is sure exactly when the first abacus appeared. Historians agree, though, that this device appeared between 2000 and 5000 years ago and that it had its origin in ancient China, Egypt and Greece. The abacus is still used in some parts of the world today.
The abacus has two parts divided by a mid bar. The part above the mid bar is called heaven, and each bead there has a value of 5. The part below the mid bar is called earth, and each bead there has a value of 1. While calculating, the beads are brought near to the mid bar.
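Reading one rod of the abacus is then simple arithmetic; a tiny Python sketch (the function name is ours):

  def rod_value(heaven_beads, earth_beads):
      """Value of one rod, counting only beads moved to the mid bar."""
      return heaven_beads * 5 + earth_beads * 1

  print(rod_value(1, 2))   # 7: one heaven bead (5) plus two earth beads (2)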

Saturday, April 7, 2012

Processing Capabilities

Computer processing involves manipulating the symbols that people use to represent things. Likewise, we create and manipulate many kinds of symbols that represent the facts and concepts of our lives.
The word data is the plural form of datum, which means fact. Data are facts, the raw materials of information. Data are represented by symbols, but they are not information except in a limited sense. As used in data processing, information is data arranged in an order and form that is useful to the people who receive it. Information is relevant knowledge, produced as the output of data processing operations and acquired by people to enhance understanding and to achieve specific purposes.
The following four operations are the only ones a computer can perform, but they enable computers to carry out all data processing activities (a small illustration follows the list):

  1. Input/output operation
  2. Text manipulation and calculation operation
  3. Logic/comparison operation
  4. Storage and retrieval operation
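A minimal Python sketch mapping the four operation classes onto ordinary code; the marks and the file name are invented for the example:

  marks = [48, 72, 65]                  # input: data enters the program
  average = sum(marks) / len(marks)     # calculation operation
  passed = average >= 50                # logic/comparison operation
  with open("result.txt", "w") as f:    # storage operation
      f.write(f"average={average}, passed={passed}")
  print(average, passed)                # output operation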

Clock Speed of the Computer

The clock speed of a computer is measured in Megahertz (MHz); one million cycles per second is one Megahertz. The original IBM PC operated an 8088 microprocessor at 4.77 MHz. A modern Pentium processor runs at 1 GHz or above.
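Cycle time is simply the reciprocal of the clock speed, which makes the difference between the two machines easy to see:

  original_pc_hz = 4.77e6      # 4.77 MHz, the original IBM PC
  pentium_hz = 1e9             # 1 GHz, a modern Pentium

  print(1 / original_pc_hz)    # ~2.1e-07 s: about 210 nanoseconds per cycle
  print(1 / pentium_hz)        # 1e-09 s: one nanosecond per cycle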
In addition to being very fast, computers are very accurate. The circuits in a computer have no mechanical parts to wear out and malfunction. Performing their hundreds or thousands (or millions) of operations every second, these circuits can run error-free for days at a time. If the input data entering the computer are correct and if the program of instructions is reliable, then we can expect that the computer will generally produce accurate output. Perhaps we have heard the phrase garbage in, garbage out, or GIGO: such errors are made at the user level, not by the computer itself.

Speed and Accuracy Capabilities of a Computer

A computer works one step at a time. It can add and subtract numbers, compare letters to determine alphabetical sequence, and move and copy numbers and letters. There is nothing very difficult in these operations. What is significant is the computer's speed. The time required for a computer to execute a basic operation such as adding or subtracting varies from a few microseconds (millionths of a second) for small machines to 80 nanoseconds (billionths of a second) or even less for large ones. The fractions of a second are given below:
Unit of Time        Part of a Second
Millisecond (ms)    One-thousandth (1/1,000)
Microsecond (µs)    One-millionth (1/1,000,000)
Nanosecond (ns)     One-billionth (1/1,000,000,000)
Picosecond (ps)     One-trillionth (1/1,000,000,000,000)
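These units make the quoted speeds concrete; for instance, an 80-nanosecond addition time works out as follows:

  op_time_s = 80e-9        # 80 nanoseconds per addition, as quoted above
  print(1 / op_time_s)     # 12500000.0: about 12.5 million additions per second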

Introduction to Computers

The term 'computer' is derived from the Latin word computare, which means to calculate. Similarly, in the English language, calculate means to do a mathematical operation. About 40 to 50 years ago, the calculator was introduced. A calculator computes numbers and is also a digital operating device. Computers, however, originated with a huge size and extreme processing capabilities. The computer has become a very powerful device to aid development in the fields of data processing and research; therefore, it has become almost indispensable in modern technologies. Computer capabilities can be summarized as follows:

  • Computer is a device capable of calculating very fast and accurately.
  • Computer is a device which is capable of accessing data from a remote work station.
  • Computer is a device capable of storing millions of data and information in a small storage device.
  • Computer is a device which can apply logic and follows the instructions given in a program.
  • Computer never gets tired and it can perform repeated calculations.
  • Computer helps in the logical development of human mind and in precise work.
  • Computer takes data as input and after processing gives out the processed result which is called output.
In other words, a computer is a fast and accurate electronic symbol (or data) manipulating system that is designed to automatically accept and store input data, process them, and produce output results under the direction of a stored program of instructions.

Network Communication

Ever since the beginning of time, people have had an uncontrollable need to communicate. Our nature drives us to exchange ideas, information, and opinions. Without communications, we can't learn. Without learning, we can't grow. And without growth, we shrivel up and disappear. Creativity and communication drive the human race more than any other force. They are essential elements of life.
Networking is the ultimate level of communication. It transcends words and pictures to provide pathway for thoughts, ideas and dreams. Networking as it exists today is the result of millions of years of evolution and growth.
If we go through the history of communication, Charles Babbage and the Countess of Lovelace are listed as pioneers of the present communicating world. They developed a new concept of representing characters: suddenly, the alphabet shrank from twenty-six characters to two, 0 and 1. The world was thrust into a technological era that would spawn numerous networking inventions, including the telephone, the phonograph, motion pictures and the television. Finally, data communications was born a century later, when computers were linked across distances.
NASA uses data communications to control the space shuttle and realign geosynchronous satellites; surgeons perform computerized operations from miles away; and, because of data communications, today's work force is better able to balance home and work by working from home, or telecommuting. Data communications has had a profound effect on humanity.
The world will go on changing. It will become a small global village. In the new information age, humans will become very big fish in a very small pond.

Thursday, April 5, 2012

Units of CPU

Arithmetic and Logical Unit (ALU)
The Arithmetic and Logical Unit (ALU) of the Central Processing Unit (CPU) is one of the prime components of the computer. It is the place where the actual execution of instructions takes place during the processing of operations. The ALU performs arithmetic operations such as addition, subtraction and multiplication, and logical operations such as OR, AND and NOT.
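Both families of operations appear directly in any programming language; a small Python sketch with illustrative operands:

  a, b = 12, 10      # illustrative operands: binary 1100 and 1010

  print(a + b)       # arithmetic: addition       -> 22
  print(a - b)       # arithmetic: subtraction    -> 2
  print(a * b)       # arithmetic: multiplication -> 120
  print(a & b)       # logical: AND on the bits   -> 8  (1100 & 1010 = 1000)
  print(a | b)       # logical: OR on the bits    -> 14 (1100 | 1010 = 1110)
  print(~a)          # logical: NOT (complement)  -> -13 in two's complement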

Control Unit (CU)
All the computer resources are managed from the control unit. The control unit controls the flow of data through the CPU, and to and from other devices. The control unit is the logical hub of the computer. Many instructions carried out by the control unit involve simply moving data from one place to another: from memory to storage, from memory to the printer, and so forth.

Memory Unit (MU)
The memory unit is the internal memory system that is used to hold data and instructions for immediate access and use by the computer's ALU. It operates at the highest speed and can be directly accessed by the CPU.

Storage Unit

The storage unit is a non-volatile memory in a computer system which provides a very large amount of space for storing data for future use. Since the main memory of the computer system is volatile, costly, and of limited storage capacity, secondary memory is used to store large amounts of data and programs for future retrieval. Secondary storage is non-volatile in nature: it does not lose its contents when the computer is turned off. It is also known as backing storage or external memory. The popular secondary storage media used in personal computers are: floppy disk, hard disk, optical disk and flash disk.

Central Processing Unit

The Central Processing Unit, commonly known as CPU, is the most important part of the computer system. Every task to be done by a computer must be interpreted and executed by the processor. This makes the processor the most important component on the motherboard. This is the reason why the processor is called the brain of a computer.

Every CPU has three parts which are:
  • Arithmetic and Logical Unit
  • Control Unit
  • Memory Unit (Primary Unit)
CPU mainly performs the following functions:
  • It controls the transmission of data from input devices into memory.
  • It processes the data held in main memory.
  • It controls the transmission of information from main memory to output devices.
  • It performs all the arithmetic and logical calculations.

Output Unit

Output unit consists of all the output devices. Those devices which help the user to get information from the computer are known as output devices. Output devices receive information from the Central Processing Unit (CPU) and present it to the user in the desired form.
An output device performs the following functions:
  • Accepts results produced by the computer in binary coded form.
  • Converts the binary coded data into human understandable form.
  • Supplies converted results to the outside world.
There are two types of output generated by the output devices: hardcopy output and softcopy output. The output shown on the monitor is called softcopy, and it exists temporarily. The output generated by the printer is called hardcopy output, and it exists in permanent form on paper.

Understanding the Computer

A computer system is a combination of a computer and its associated hardware components that work together to perform a task. A computer system functions correctly and effectively only when all the interconnected hardware components perform their assigned tasks.
A computer system follows the basic steps of:
Input - feeding of data and instructions.
Process - converting raw data into useful information.
Store - keeping the data for future reference.
Output - delivering the processed result to the user.

The basic components of computer system are as follows:
1. Input Unit
The input unit consists of all the input devices. Those devices which help to supply data or instructions to the computer are known as input devices. They act as a communication channel between the user and the computer.

Input devices perform the following basic function:
  • Accept data from the outside world.
  • Convert data into binary code that is understandable to the computer.
  • Send data in binary form to the computer for further processing.
2. Output Unit
3. Central Processing Unit
4. Storage Unit

Note: The explanations of the units of the computer other than the input unit are given in separate posts.