
History Of Operating Systems

    See also the release dates page for the specific release dates of various operating systems.

earliest computers

    The first computers were analog and digital devices made with intricate gear systems by the ancient Greeks. These computers proved too delicate for the technological capabilities of the time and were abandoned as impractical.

    The first practical computers were made by the Inca using ropes and pulleys. Knots in the ropes served the purpose of binary digits. The Inca had several of these computers and used them for tax and government records. In addition to keeping track of taxes, the Inca computers held databases on all of the resources of the Inca empire, allowing for efficient allocation of resources in response to local disasters (storms, drought, earthquakes, etc.). Spanish soldiers acting on orders of Roman Catholic priests destroyed all but one of the Inca computers in the mistaken belief that any device that could give accurate information about distant conditions must be a divination device powered by the Christian “Devil” (and many modern Luddites continue to view computers as Satanically possessed devices).

    In the 1800s, programmable devices controlled the weaving machines in the factories of the Industrial Revolution, using punched cards to store the control codes for the various patterns. Charles Babbage designed early mechanical computers that adopted punched cards for data storage. The first computer programmer was Lady Ada Lovelace, for whom the Ada programming language is named.

    In the 1900s, researchers started experimenting with both analog and digital computers using vacuum tubes. Some of the most successful early computers were analog computers, capable of solving advanced calculus problems rather quickly. But the real future of computing was digital rather than analog. Building on the technology and mathematics used for telephone and telegraph switching networks, researchers started building the first electronic digital computers.

bare hardware

    In the earliest days of electronic digital computing, everything was done on the bare hardware. Very few computers existed and those that did exist were experimental in nature. The researchers who were making the first computers were also the programmers and the users. They worked directly on the “bare hardware”. There was no operating system. The experimenters wrote their programs in assembly language and a running program had complete control of the entire computer. Debugging consisted of a combination of fixing both the software and hardware, rewriting the object code and changing the actual computer itself.

    The lack of any operating system meant that only one person could use a computer at a time. Even in the research lab, there were many researchers competing for limited computing time. The first solution was a reservation system, with researchers signing up for specific time slots.

    The high cost of early computers meant that it was essential that the rare computers be used as efficiently as possible. The reservation system was not particularly efficient. If a researcher finished work early, the computer sat idle until the next time slot. If the researcher's time ran out, the researcher might have to pack up his or her work in an incomplete state at an awkward moment to make room for the next researcher. Even when things were going well, the computer often sat idle while the researcher studied the results (or studied a dump of memory from a crashed program to figure out what went wrong).

computer operators

    The solution to this problem was to have programmers prepare their work off-line on some input medium (often on punched cards, paper tape, or magnetic tape) and then hand the work to a computer operator. The computer operator would load up jobs in the order received (with priority overrides based on politics and other factors). Each job still ran one at a time with complete control of the computer, but as soon as a job finished, the operator would transfer the results to some output medium (punched cards, paper tape, magnetic tape, or printed paper) and deliver the results to the appropriate programmer. If the program ran to completion, the result would be some end data. If the program crashed, the contents of memory would be transferred to some output medium for the programmer to study (because some of the early business computing systems used magnetic core memory, these became known as “core dumps”).

device drivers and library functions

    Soon after the first successes with digital computer experiments, computers moved out of the lab and into practical use. The first practical application of these experimental digital computers was the generation of artillery tables for the British and American armies. Much of the early research in computers was paid for by the British and American militaries. Business and scientific applications followed.

    As computer use increased, programmers noticed that they were duplicating the same efforts.

    Every programmer was writing his or her own routines for I/O, such as reading input from a magnetic tape or writing output to a line printer. It made sense to write a common device driver for each input or output device and then have every programmer share the same device drivers rather than each programmer writing his or her own. Some programmers resisted the use of common device drivers in the belief that they could write “more efficient” or faster or “better” device drivers of their own.

    Additionally each programmer was writing his or her own routines for fairly common and repeated functionality, such as mathematics or string functions. Again, it made sense to share the work instead of everyone repeatedly “reinventing the wheel”. These shared functions would be organized into libraries and could be inserted into programs as needed. In the spirit of cooperation among early researchers, these library functions were published and distributed for free, an early example of the power of the open source approach to software development.

UNIX takes over mainframes

    I am skipping ahead to the development and spread of UNIX, not because the early history isn’t interesting, but because I notice that a lot of people are searching for information on UNIX history.

    UNIX was originally developed in a laboratory at AT&T’s Bell Labs (now an independent corporation known as Lucent Technologies). At the time, AT&T was prohibited from selling computers or software, but was allowed to develop its own software and computers for internal use. A few newly hired engineers were unable to get valuable mainframe computer time because of lack of seniority and resorted to writing their own operating system (UNIX) and programming language (C) to run on an unused mainframe computer still in the original box (the manufacturer had gone out of business before shipping an operating system).

    AT&T’s consent decree with the U.S. Justice Department on monopoly charges was interpreted as allowing AT&T to release UNIX as an open source operating system for academic use. Ken Thompson, one of the originators of UNIX, took UNIX to the University of California, Berkeley, where students quickly started making improvements and modifications, leading to the world famous Berkeley Software Distribution (BSD) form of UNIX.

    UNIX quickly spread throughout the academic world, as it solved the problem of keeping track of many (sometimes dozens of) proprietary operating systems on university computers. With UNIX, all of the computers from many different manufacturers could run the same operating system and share the same programs (recompiled on each processor).

    When AT&T settled yet another monopoly case, the company was broken up into “Baby Bells” (the regional companies operating local phone service) and the central company (which had the long distance business and Bell Labs). AT&T (as well as the Baby Bells) was allowed to enter the computer business. AT&T gave academia a specific deadline to stop using “encumbered code” (that is, any of AT&T’s source code anywhere in their versions of UNIX).

     This led to the development of free open source projects such as FreeBSD, NetBSD, and OpenBSD, as well as commercial operating systems based on the BSD code.

    Meanwhile, AT&T developed its own version of UNIX, called System V. Although AT&T eventually sold off UNIX, this also spawned a group of commercial operating systems known as Sys V UNIXes.

     UNIX quickly swept through the commercial world, pushing aside almost all proprietary mainframe operating systems. Only IBM’s MVS and DEC’s OpenVMS survived the UNIX onslaught.

     “Vendors such as Sun, IBM, DEC, SCO, and HP modified Unix to differentiate their products. This splintered Unix to a degree, though not quite as much as is usually perceived. Necessity being the mother of invention, programmers have created development tools that help them work around the differences between Unix flavors. As a result, there is a large body of software based on source code that will automatically configure itself to compile on most Unix platforms, including Intel-based Unix.

    Regardless, Microsoft would leverage the perception that Unix is splintered beyond hope, and present Windows NT as a more consistent multi-platform alternative.” —Nicholas Petreley, “The new Unix alters NT’s orbit”, NC World

UNIX to the desktop

    Among the early commercial attempts to deploy UNIX on desktop computers was AT&T selling UNIX in an Olivetti box running on a 680x0 processor. Microsoft sold its own version of UNIX, called Xenix. Apple Computer offered its A/UX version of UNIX running on Macintoshes. None of these early commercial UNIXes was successful. “Unix started out too big and unfriendly for the PC. … It sold like ice cubes in the Arctic. … Wintel emerged as the only ‘safe’ business choice”, Nicholas Petreley.

    “Unix had a limited PC market, almost entirely server-centric. SCO made money on Unix, some of it even from Microsoft. (Microsoft owns 11 percent of SCO, but Microsoft got the better deal in the long run, as it collected money on each unit of SCO Unix sold, due to a bit of code in SCO Unix that made SCO somewhat compatible with Xenix. The arrangement ended in 1997.)” —Nicholas Petreley, “The new Unix alters NT’s orbit”, NC World

     To date, the most widely used desktop version of UNIX is Apple’s Mac OS X, combining the groundbreaking object-oriented NeXTSTEP system with some of the user interface of the Macintosh.

 

 

The Bare Machine

Stacked Job Batch Systems (mid 1950s - mid 1960s)

A batch system is one in which jobs are bundled together with the instructions necessary to allow them to be processed without intervention.

Often jobs of a similar nature can be bundled together to further increase economy.

The basic physical layout of the memory of a batch job computer is shown below:

	--------------------------------------
	|                                    |
	|   Monitor (permanently resident)   |
	|                                    |
	--------------------------------------
	|                                    |
	|             User Space             |
	| (compilers, programs, data, etc.)  |
	|                                    |
	--------------------------------------

The monitor is system software that is responsible for interpreting and carrying out the instructions in the batch jobs. When the monitor started a job, it handed over control of the entire computer to the job, which then controlled the computer until it finished.

A sample of several batch jobs might look like:

	$JOB user_spec		; identify the user for accounting purposes
	$FORTRAN		; load the FORTRAN compiler
	source program cards
	$LOAD			; load the compiled program
	$RUN			; run the program
	data cards
	$EOJ			; end of job

	$JOB user_spec		; identify a new user
	$LOAD application
	$RUN
	data
	$EOJ

Often magnetic tapes and drums were used to store intermediate data and compiled programs.
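
To make the monitor's role concrete, here is a minimal sketch in Go of a loop that reads a card deck and dispatches on the control cards. It is purely illustrative: the card names follow the sample deck above, but the function name and the behavior attached to each card are invented for the example, not taken from any particular monitor.

	// Illustrative sketch of a monitor reading a card deck.
	// Control cards start with '$'; everything else is treated as
	// source or data for the current step.
	package main

	import (
		"fmt"
		"strings"
	)

	func runDeck(cards []string) {
		user := ""
		staged := 0 // non-control cards read since the last control card
		for _, card := range cards {
			switch {
			case strings.HasPrefix(card, "$JOB"):
				user = strings.TrimSpace(strings.TrimPrefix(card, "$JOB"))
				staged = 0
			case strings.HasPrefix(card, "$FORTRAN"),
				strings.HasPrefix(card, "$LOAD"),
				strings.HasPrefix(card, "$RUN"):
				fmt.Printf("%s step for %s\n", strings.TrimSpace(card), user)
				staged = 0
			case strings.HasPrefix(card, "$EOJ"):
				fmt.Printf("job for %s finished (%d data cards read)\n", user, staged)
			default:
				staged++ // source or data card
			}
		}
	}

	func main() {
		runDeck([]string{
			"$JOB alice", "$FORTRAN", "      PRINT *, 'HI'",
			"$LOAD", "$RUN", "42", "$EOJ",
		})
	}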

     

  1. Advantages of batch systems

     

    • moved much of the work of the operator to the computer
    • increased performance, since it was possible for a job to start as soon as the previous job finished

     

     

  2. Disadvantages

     

    • turn-around time can be large from the user's standpoint
    • more difficult to debug programs
    • due to the lack of a protection scheme, one batch job can affect pending jobs (e.g. by reading too many cards)
    • a job could corrupt the monitor, thus affecting pending jobs
    • a job could enter an infinite loop

     

 

As mentioned above, one of the major shortcomings of early batch systems was that there was no protection scheme to prevent one job from adversely affecting other jobs.

The solution to this was a simple protection scheme, where certain regions of memory (e.g. where the monitor resides) were made off-limits to user programs. This prevented user programs from corrupting the monitor.

To keep user programs from reading too many (or not enough) cards, the hardware was changed to allow the computer to operate in one of two modes: one for the monitor and one for the user programs. IO could only be performed in monitor mode, so that IO requests from the user programs were passed to the monitor. In this way, the monitor could keep a job from reading past its own $EOJ card.

To prevent an infinite loop, a timer was added to the system and the $JOB card was modified so that a maximum execution time for the job was passed to the monitor. The computer would interrupt the job and return control to the monitor when this time was exceeded.
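
The three mechanisms just described (off-limits monitor memory, IO performed only through the monitor, and a timer-enforced execution limit) can be modeled with a short sketch. This is an illustrative Go toy, not real hardware; the Monitor type, the monitorLimit constant, and the tick-based timer are assumptions made up for the example.

	// Toy model of the protections described above: off-limits monitor
	// memory, IO performed only by the monitor, and a per-job time limit
	// enforced by a timer interrupt.
	package main

	import (
		"errors"
		"fmt"
	)

	const monitorLimit = 100 // addresses below this belong to the monitor

	type Monitor struct {
		cards     []string
		ticksLeft int // maximum execution time, taken from the $JOB card
	}

	// readCard services a user program's IO request in monitor mode.
	func (m *Monitor) readCard() (string, error) {
		if len(m.cards) == 0 {
			return "", errors.New("job tried to read past its own $EOJ card")
		}
		card := m.cards[0]
		m.cards = m.cards[1:]
		return card, nil
	}

	// store refuses writes into the monitor's region of memory.
	func (m *Monitor) store(memory map[int]string, addr int, v string) error {
		if addr < monitorLimit {
			return errors.New("user program touched monitor memory")
		}
		memory[addr] = v
		return nil
	}

	// tick is called on each timer interrupt; it ends the job when time runs out.
	func (m *Monitor) tick() error {
		m.ticksLeft--
		if m.ticksLeft <= 0 {
			return errors.New("maximum execution time exceeded")
		}
		return nil
	}

	func main() {
		mon := &Monitor{cards: []string{"42"}, ticksLeft: 3}
		memory := map[int]string{}
		card, _ := mon.readCard()
		fmt.Println(mon.store(memory, 200, card)) // <nil>: write to user space allowed
		fmt.Println(mon.store(memory, 50, card))  // error: protected monitor region
		for i := 0; i < 3; i++ {
			if err := mon.tick(); err != nil {
				fmt.Println("job interrupted:", err)
			}
		}
	}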

Spooling Batch Systems (mid 1960s - late 1970s)

One difficulty with simple batch systems is that the computer still needs to read the deck of cards before it can begin to execute the job. This means that the CPU is idle (or nearly so) during these relatively slow operations.

Since it is faster to read from a magnetic tape than from a deck of cards, it became common for computer centers to have one or more less powerful computers in addition to their main computer. The smaller computers were used to read decks of cards onto a tape, so that the tape would contain many batch jobs. This tape was then loaded on the main computer and the jobs on the tape were executed. The output from the jobs would be written to another tape, which would then be removed and loaded on a less powerful computer to produce any hardcopy or other desired output.

It was a logical extension of the timer idea described above to have a timer that would only let jobs execute for a short time before interrupting them so that the monitor could start an IO operation. Since the IO operation could proceed while the CPU was crunching on a user program, little degradation in performance was noticed.

Since the computer can now perform IO in parallel with computation, it became possible to have the computer read a deck of cards to a tape, drum, or disk and to write output to a tape or printer while it was computing. This process is called SPOOLing: Simultaneous Peripheral Operation OnLine.

Spooling batch systems were the first and are the simplest of the multiprogramming systems.

One advantage of spooling batch systems was that the output from jobs was available as soon as the job completed, rather than only after all jobs in the current cycle were finished.
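
A rough sketch of the idea, assuming a Go goroutine stands in for the slow card reader and a channel stands in for the tape or disk spool (the job names and delays are invented), shows how jobs already on the spool can run while later decks are still being read:

	// Rough sketch of SPOOLing: a "card reader" goroutine fills the input
	// spool while the CPU runs jobs already spooled, so slow card reading
	// overlaps with computation.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		spool := make(chan string, 16) // stands in for the tape/disk spool

		go func() { // the card reader
			for _, deck := range []string{"job1", "job2", "job3"} {
				time.Sleep(100 * time.Millisecond) // card reading is slow
				spool <- deck
			}
			close(spool)
		}()

		for job := range spool { // the CPU
			fmt.Println("running", job, "while later decks are still being read")
		}
	}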

Multiprogramming Systems (1960s - present)

As machines with more and more memory became available, it was possible to extend the idea of multiprogramming (or multiprocessing) as used in spooling batch systems to create systems that would load several jobs into memory at once and cycle through them in some order, working on each one for a specified period of time.

	--------------------------------------
	|              Monitor               |
	|  (more like an operating system)   |
	--------------------------------------
	|         User program 1             |
	--------------------------------------
	|         User program 2             |
	--------------------------------------
	|         User program 3             |
	--------------------------------------
	|         User program 4             |
	--------------------------------------

At this point the monitor is growing to the point where it begins to resemble a modern operating system. It is responsible for:

 

  • starting user jobs
  • spooling operations
  • IO for user jobs
  • switching between user jobs
  • ensuring proper protection while doing the above

 

As a simple, yet common example, consider a machine that can run two jobs at once. Further, suppose that one job is IO intensive and that the other is CPU intensive. One way for the monitor to allocate CPU time between these jobs would be to divide time equally between them. However, the CPU would be idle much of the time the IO bound process was executing.

A good solution in this case is to allow the CPU bound process (the background job) to execute until the IO bound process (the foreground job) needs some CPU time, at which point the monitor lets the foreground job run. Presumably the foreground job will soon need to do some IO, and the monitor can then return the CPU to the background job.
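
The schedule that results from this policy can be sketched as follows. This is an illustrative Go example; the fifteen-tick run and the assumption that the foreground job needs the CPU one tick out of every five are made up for the demonstration.

	// Illustrative schedule for the two-job example above: the IO-bound
	// foreground job gets the CPU whenever it is ready; otherwise the
	// CPU-bound background job runs, so the CPU never sits idle.
	package main

	import "fmt"

	func schedule(ticks int, foregroundReady func(int) bool) []string {
		timeline := make([]string, ticks)
		for t := 0; t < ticks; t++ {
			if foregroundReady(t) {
				timeline[t] = "FG" // short burst, then back to waiting on IO
			} else {
				timeline[t] = "BG" // CPU-bound job soaks up the remaining time
			}
		}
		return timeline
	}

	func main() {
		// Assume the foreground job needs the CPU one tick out of every five.
		fmt.Println(schedule(15, func(t int) bool { return t%5 == 0 }))
	}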

Timesharing Systems (1970s - present)

Back in the days of the "bare" computers without any operating system to speak of, the programmer had complete access to the machine. As hardware and software were developed to create monitors, simple and spooling batch systems, and finally multiprogrammed systems, the separation between the user and the computer became more and more pronounced.

Users, and programmers in particular, longed to be able to "get to the machine" without having to go through the batch process. In the 1970s and especially in the 1980s this became possible in two different ways.

The first involved timesharing or timeslicing. The idea of multiprogramming was extended to allow for multiple terminals to be connected to the computer, with each in-use terminal being associated with one or more jobs on the computer. The operating system is responsible for switching between the jobs, now often called processes, in a way that favors user interaction. If the context switches occur quickly enough, the user has the impression that he or she has direct access to the computer.

Interactive processes are given a higher priority so that when IO is requested (e.g. a key is pressed), the associated process is quickly given control of the CPU so that it can process the input. This is usually done through the use of an interrupt that causes the computer to realize that an IO event has occurred.
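
A toy sketch of this idea in Go follows. The process names, the six fixed quanta, and the single simulated keypress are all invented for illustration, but the mechanism is the one described above: round-robin time slicing, with the interactive process promoted to the front of the ready queue when its interrupt arrives.

	// Toy round-robin time slicing: each process gets one quantum in turn,
	// and a pending "keypress" interrupt moves the interactive process to
	// the front of the ready queue so it responds quickly.
	package main

	import "fmt"

	func main() {
		ready := []string{"editor", "compile", "payroll"}
		interrupts := map[int]bool{3: true} // a key is pressed during quantum 3

		for q := 0; q < 6; q++ {
			if interrupts[q] {
				// Promote the interactive "editor" process to the front.
				promoted := []string{"editor"}
				for _, p := range ready {
					if p != "editor" {
						promoted = append(promoted, p)
					}
				}
				ready = promoted
			}
			current := ready[0]
			ready = append(ready[1:], current) // back to the end of the queue
			fmt.Printf("quantum %d: running %s\n", q, current)
		}
	}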

It should be mentioned that there are several different types of time sharing systems. One type is represented by computers like VAX/VMS systems and UNIX workstations. In these computers entire processes are in memory (albeit virtual memory) and the computer switches between executing code in each of them. In other types of systems, such as airline reservation systems, a single application may actually do much of the timesharing between terminals. This way there does not need to be a different running program associated with each terminal.

Personal Computers

The second way that programmers and users got direct access to the machine again was the advent of personal computers around 1980. Finally computers became small enough and inexpensive enough that an individual could own one, and hence have complete access to it.

Real-Time, Multiprocessor, and Distributed/Networked Systems

A real-time computer is one that executes programs that are guaranteed to have an upper bound on the time taken by the tasks they carry out. Usually it is desired that the upper bound be very small. Examples include guided missile systems and medical monitoring equipment. The operating system on real-time computers is severely constrained by the timing requirements.

Dedicated computers are special purpose computers that are used to perform only one specific task or a small set of tasks. Often these are real-time computers and include applications such as the guided missile mentioned above and the computer in modern cars that controls the fuel injection system.

A multiprocessor computer is one with more than one CPU. The category of multiprocessor computers can be divided into the following sub-categories:

 

  • shared memory multiprocessors have multiple CPUs, all with access to the same memory. Communication between the processors is easy to implement, but care must be taken so that memory accesses are synchronized (see the sketch after this list).
  • distributed memory multiprocessors also have multiple CPUs, but each CPU has its own associated memory. Here, memory access synchronization is not a problem, but communication between the processors is often slow and complicated.
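
The synchronization issue mentioned in the shared memory bullet can be illustrated with a small Go sketch (the counter, the two workers, and the iteration count are arbitrary): two concurrent workers update one shared value, and a mutex keeps each read-modify-write from interleaving with the other's.

	// Minimal illustration of shared-memory synchronization: two goroutines
	// update one shared counter, and a mutex keeps the read-modify-write
	// sequence from interleaving and losing updates.
	package main

	import (
		"fmt"
		"sync"
	)

	func main() {
		var (
			counter int
			mu      sync.Mutex
			wg      sync.WaitGroup
		)

		for i := 0; i < 2; i++ { // two "processors"
			wg.Add(1)
			go func() {
				defer wg.Done()
				for j := 0; j < 100000; j++ {
					mu.Lock() // without the lock, increments could be lost
					counter++
					mu.Unlock()
				}
			}()
		}

		wg.Wait()
		fmt.Println(counter) // always 200000 with the mutex in place
	}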

 

Related to multiprocessors are the following:

 

  • networked systems consist of multiple computers that are networked together, usually with a common operating system and shared resources. Users, however, are aware of the different computers that make up the system.
  • distributed systems also consist of multiple computers but differ from networked systems in that the multiple computers are transparent to the user. Often there are redundant resources and a sharing of the workload among the different computers, but this is all transparent to the user.
