And that word was laziness. From time immemorial, the development of mankind as a whole, and of each Homo sapiens in particular, has been driven by the desire to make life as easy as possible. It is for this reason that a huge number of brilliant ideas have been born according to the principle of “I want to change the channel without getting up from the couch” or, as in our case, “I don’t want to add up a huge, boring pile of numbers by myself”. That was the original idea of the great physicist and mathematician Blaise Pascal, who in 1642 assembled the first adding machine, melodiously named the “Pascaline”. The idea came to young Pascal as he watched his father, a tax collector, spend long hours adding up the taxes he had collected. The design was a small box with several interconnected gears; it could add and subtract directly, while multiplication and division were performed through repeated additions and subtractions. The appearance of this device marked both a period of slight relief for people closely associated with mathematics and the beginning of a long stretch without further progress in the field.
Trump card
The entire history of the personal computer is a jagged line of development, full of unconventional solutions and approaches. And despite such a long lull in the field, a new round of innovation did get the machinery moving again; it came, however, from an unexpected direction: weaving. It was weaving looms, almost a century and a half after Pascal’s gears, that brought us a step closer to the modern PC. At the beginning of the 19th century, large-scale experiments in automating the work of silk factories were under way in France, and it was then that Joseph Marie Jacquard built the first fully automatic loom controlled by punched cards. This caused a huge sensation, not only among the masters of cutting and sewing, but also for one meticulous (as all Englishmen are) mathematician, Charles Babbage, who discerned far greater potential in this curiosity. In his head arose the prototype of a machine that could not only calculate, but also stamp the results of its calculations onto plates for printing, eliminating copying errors. He spent a great deal of time building this device, and in the end, going far over budget and taking three times longer than planned (nine years instead of three), he managed to complete only an experimental model, which nevertheless calculated better than anything else in existence at the time.
Slowly but surely, scientists, mathematicians and other bright minds of the time began to pick up this idea and apply it in their own scientific pursuits. The most famous result was the Hollerith tabulator, which could work not only with numbers but with entire data tables (a kind of “Stone Age” Excel). It was used in the 1890 United States census and became the first device of its kind to be sold on an industrial scale.
Getting closer and closer.
The history of PC development has passed through a huge number of countries, so next in line is the land of beer and sausages: Germany. It was there that the engineer Konrad Zuse, one of the pioneers of computer engineering, lived and worked. He was the first to build a machine that worked not in the decimal number system but in binary. Unfortunately, this invention still had one huge drawback, its mechanical internals, although the historical significance of adopting a non-standard number system is hard to overestimate. But Zuse did not stop “pioneering” there: after three more years of persistent, focused work, in 1941 he created a machine that combined mechanics and electrics, with the relay as its main element.
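Why binary? A relay has two natural states, open and closed, which map directly onto the digits 0 and 1, whereas representing ten decimal digits mechanically takes far more hardware. A tiny Python illustration (added here purely for demonstration; Zuse, of course, had no such luxury):

    # The same number written in decimal and in binary:
    n = 42
    print(bin(n))            # 0b101010 -> six two-state elements: on, off, on, off, on, off
    print(int("101010", 2))  # 42, recovered from the binary digits

Six two-state switches are enough to hold any number from 0 to 63, and the arithmetic circuits for them are dramatically simpler than their decimal counterparts.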
At this time, work in the same direction was in full swing across the Atlantic. In far, far away America, back in 1944, Howard Aiken, a young (at the time) Harvard graduate, working under a contract with IBM (the successor of the Tabulating Machine Company that Hollerith founded in 1896), built the first American programmable computer, bearing the sonorous name Mark I, whose operation was based on relays. However, since it was created on the basis of Babbage’s ideas and designs, it operated only on data in the decimal number system.
Big and fat.
Huge gears and other mechanisms are not only bulky but also heavy, so mechanical machines were doomed from the very beginning. This pushed the bright minds of the time to search for a new, more advanced technological base. After long and persistent deliberation, they settled on an idea of the American inventor Lee de Forest, who in 1906 had created the triode, a three-electrode vacuum tube. Thanks to its dimensions, it was the best available replacement for the relay at the time. On this element base, the first electronic computer, ENIAC, was built in 1946 at the University of Pennsylvania. This colossus consisted of 18,000 of those notorious tubes, weighed a full 30 tons, sprawled across 200 square meters, and consumed a truly fabulous amount of energy. It still used the decimal system and was controlled switchboard-style, via plugboards and switches, which led to a large number of problems caused by incorrectly set switches. Another important name is associated with this project: John von Neumann. He was the first to propose storing the program in the machine’s memory in such a way that, if necessary, it could be changed during operation. This principle formed the basis for a completely new computer, EDVAC, completed in 1951, which used the binary system and RAM based on ultrasonic mercury delay lines. Its memory stored 1024 words, each consisting of 44 binary digits.
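For a sense of scale, here is a quick back-of-the-envelope calculation (a sketch added for illustration, not a figure from the original designers) of how much data that memory actually held:

    # EDVAC's mercury-delay-line memory, in modern units:
    words = 1024          # number of words in memory
    bits_per_word = 44    # binary digits per word
    total_bits = words * bits_per_word
    print(total_bits)               # 45056 bits
    print(total_bits / 8 / 1024)    # 5.5 -> about 5.5 KiB

In other words, the machine that opened the stored-program era held roughly as much data as a couple of pages of plain text.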
Through thorns to the stars.
EDVAC allowed people to fully appreciate the prospects that such technologies opened up. The industry began to gain enormous momentum and to develop at incredible speed. As a result, the period up to roughly the mid-1980s came to be divided into generations.
First generation of computers (1945-1954)
This period was the time of laying the foundations. By then, developers had roughly agreed on what a computer should consist of: a central processor, RAM, and input/output devices, with the central processor itself comprising an arithmetic logic unit and a control unit. These machines were built on vacuum-tube elements, which meant enormous power consumption and incredible unreliability. They were used to solve scientific problems, and programs were written in assembly language.
Second generation of computers (1955-1964)
The change of generations was determined not by somebody’s “authoritative” opinion out of nowhere, but by several decisive factors: the huge tubes were replaced by compact transistors, and the delay lines in RAM gave way to magnetic-core memory. These changes significantly reduced the size of the machines while increasing their reliability and performance. Index registers, hardware for floating-point operations, and instructions for calling subroutines were developed. The second generation was also marked by the emergence of high-level programming languages, a prerequisite for portable software that did not depend on the underlying hardware. Another important development was the appearance of I/O processors, which freed the CPU from controlling input/output by handing that work to dedicated devices. All these innovations significantly broadened the range of users, as well as the range of problems being solved, and, as a result, operating systems came into use.
Third generation of computers (1965-1970)
Here the reasons for the generational change were as follows: integrated circuits began to be used instead of transistors, which increased performance, reduced price and size, and made small-sized models possible. Thanks to the increase in power, it became possible to run several programs simultaneously, which in turn expanded the functions of the OS.
Fourth generation of computers (1970-1984)
This time the generational change was driven by the creation of large-scale and very-large-scale integrated circuits (LSI and VLSI), which reduced the size and cost of computers and increased the number of people using them. In the early 1970s, Intel, a company now known to almost everyone, released the 4004 microprocessor, and it was after this landmark event that the microprocessor branch of computing began to develop by leaps and bounds, eventually displacing everything else. The 4004 was a 4-bit parallel computing device with rather limited capabilities, as can be judged from the fact that it was originally used in calculators. Seeing the obvious promise of this invention, Intel stepped up development in the same direction, which led to a breakthrough project that predetermined the entire future of the PC: the 8-bit 8080 processor, with its extensive instruction set. This processor was used to build the Altair computer, for which a young Bill Gates wrote one of the first BASIC interpreters. And that moment marks the transition to the fifth generation.
Fifth generation of computers (1984-present)
And now the real thing! Well, almost. The transition to this generation marked the complete dominance of microprocessors over all other designs, which turned out to be dead-end branches of evolution. The speed of development of this brainchild was simply astonishing: within a few decades came a leap that outstripped everything achieved before. In 1978, Intel created the 16-bit 8086 processor; in 1982 an improved version was released, the 80286, which supported two operating modes, real and protected, and had a 24-bit address bus. In 1985, Intel introduced a new 32-bit processor, the 80386. It was many times more powerful than its predecessors and could support up to 4 GB of RAM. In addition, a new operating mode appeared, virtual 8086 mode, which allowed several older real-mode programs to run simultaneously. A short time later, the 80386 gave way to the 80486, which added 512 KB of cache memory and raised the clock frequency to 133 MHz. And in 1993, Intel released the legendary and well-known Intel Pentium microprocessor, which marked the entry of computer technology into modernity and into our everyday lives once and for all.
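Where does the 4 GB figure come from? It follows directly from the width of the processor’s addresses; a minimal illustration (added for clarity, not part of the original article):

    # Each memory address selects one byte, and a 32-bit processor
    # can form 2**32 distinct addresses:
    address_bits = 32
    addressable_bytes = 2 ** address_bits
    print(addressable_bytes)           # 4294967296
    print(addressable_bytes // 2**30)  # 4 -> 4 GiB of addressable RAM

The same arithmetic explains the 80286’s limit: with a 24-bit address bus, 2**24 bytes comes to exactly 16 MiB.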