
History of OSs: Part I - What and Why?

Updated: Jan 28, 2020



This series of posts is going to be about a subject that has, to a certain extent, hijacked my attention recently: the history of the major operating systems we are using today. When I started, I imagined the project as built on a three-pillar structure: DOS/Windows, macOS, and Linux. I was aware that before Linux came a system by two guys from Bell Labs called UNIX, which was kind of a big deal in the mainframe world. But as it turned out, Apple's macOS and iOS are also based on Unix, and as I read more about the system and its various incarnations it became clear that it would take up, together with its own predecessor Multics, at least as much space as any of its younger peers, maybe as much as all of them together.

I will not go into too much technical detail, mainly because I wouldn't know enough about it. But I do want to give a general description of the subject itself, along with the human and business activity that went on around it.


This first post will touch upon the definition of an OS and what I term the "pre-history" of the operating system: from the very first electronic computers until Multics. Obviously, each of these is its very own rabbit hole, and I mainly want to give background for the later history, so you'll have to forgive me for the superficiality and the hand-waviness.



What is an operating system

Most people can tell you, if you ask them, what operating system runs on their desktop PC (most likely Windows) or on their phone (statistically, Android). But if you further ask them what an operating system is, you'll find they understand it in terms of design and interfaces.

The focus on design is understandable, if misguided, as this is the main way we distinguish objects in the physical world, and it is something you can judge at a glance. And the corporations selling systems are very much aware that if they want to persuade you that their new version is a large departure from the previous one, they had better make a great change in the design. Nonetheless, the way things look on the screen is mostly independent of what is going on under the hood.


The second guess, about interfaces, is much closer. One of the major jobs of the operating system is bridging the gap between user and machine in ways that are convenient and understandable to us. The system also needs to mediate between pretty much everything else in the machine. It decides how a program may communicate with other software, how many resources it gets and when; it carries the communication between the program and the various devices that make up the machine or are connected to it, and so forth.


Another good analogy would be a platform or an environment in which all other software runs. Or a layer that lies between the applications and the hardware (the layers model is quite widespread, and we are actually leaving out some layers).


We could summarise the purpose of the operating system in two points. It lords over the various tasks, having elevated access to the resources and being the sole authority over how they are distributed, rather than requiring the various programs, usually of different origins and authors, to somehow cooperate among themselves; and it automates fundamental tasks, abstracting the lower levels of the machine for the user and also for the developer.
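To make that second point a bit more concrete, here is a minimal sketch (in Python, with a made-up file name) of what the abstraction buys a developer today: the program never drives the disk hardware itself, it just asks the operating system for a file by name.

```python
# The program names a file; the operating system, not the program,
# deals with the disk controller, the file-system layout and access rights.
# "report.txt" is just an invented example name.
with open("report.txt", "w") as f:   # behind this call sit OS system calls
    f.write("hello, operating system\n")
```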


The precise definition of what is part of the OS, what is "above" it and what is "below" it varies between different schemes, and it can become ambiguous, as it depends not only on pure software science but also on branding, packaging and the ability to replace or alter a certain piece of code on a given machine (code that is not meant to be alterable is called firmware).






Prehistory of operating systems

The first computers didn't have users, but operators. There was also no operating system. The operators ran one program at a time, directly on the machine, and this program had access to the whole of the hardware.


Most of the operators were young women who used to be so-called "human computers", meaning clerks who did complicated calculations using a mechanical calculator together with pen and paper.


The new job of computer operator (or programmer) required specific technical knowledge. The operator had to think in direct terms of the machine, without any abstraction or interface. The only means of communicating with the computer was directly configuring memory states with rotors and switches.


Later machines enabled their operators to enter code and data with punched paper, either in card or tape form, and to receive output from line printers. To achieve this, custom software libraries were written specifically to manage the communication between the main system and those devices. These new forms of interaction made the operation of the tasks decidedly more streamlined and automated.


When computer labs became a day-to-day part of larger organisations such as military bases, government agencies, and universities, the operators had to serve various clients, instead of having their own dedicated missions.


One method was to allot each client an amount of time, booked in advance, to run their code and get its output. Another was simply to let the clients stand in line and serve each one in turn.


Note that these computers were extremely slow in modern terms, and some of the calculations that researchers wanted to run were in fact complicated.

Meanwhile, hardware time was extremely expensive: in electricity, in capital and in overhead. And while software also took a great deal of labour, since clients didn't have modern high-level languages and had to implement their software on punch cards, it was still by far the cheaper of the two, and code was commonly written, run and then discarded. Therefore, the clients had to live with whatever computing resources they got, and at the same time a more effective method was desirable.


Here came what is termed "batch processing". Instead of the clients physically standing in line in front of the console, crowding the limited floor space, they left their programs with the operators, who ran the programs one after the other. The clients could come by afterwards and receive their output. This was further automated by joining the punch cards together into a makeshift tape, letting the computer pull it in when it was ready to receive new input. This capability, too, was provided by a custom library of programs.


All this while, other general software libraries were being developed by various clients in order to aid their own code with repetitious tasks that didn't change much from program to program. The libraries were craftily linked into the main program each time. Eventually manufacturers added such libraries to the input and output libraries that were already being provided pre-installed on their machines. By this point they had something that was starting to look and behave like an operating system.


Cases of the human operators causing data loss or errors motivated the manufacturers to add safety and security features to their libraries. The machine now monitored its own memory and its disk storage; it kept count of the pages printed, the cards punched and read, and so on, in order to catch errors; and it monitored which program accessed which files and refused requests that looked like mistakes, such as a program trying to access data that belonged to another.
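As a rough illustration of that last kind of check, here is a small sketch (in Python, with invented program and file names): keep a record of which program owns which file, and refuse any request that crosses that line.

```python
# Invented bookkeeping: which program each file belongs to.
owners = {"inventory.dat": "program_1", "payroll.dat": "program_2"}

def request_file(program, filename):
    """Grant access only if the file belongs to the requesting program."""
    if owners.get(filename) != program:
        raise PermissionError(f"{program} may not open {filename}")
    return f"{program} opened {filename}"

print(request_file("program_1", "inventory.dat"))     # allowed
try:
    request_file("program_1", "payroll.dat")          # someone else's data
except PermissionError as error:
    print(error)                                      # refused as a likely mistake
```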


The system was starting, more and more, to manage itself for long stretches of time, signalling to the operators only when physical intervention was required for it to continue or when it ran out of tasks.


Eventually the pre-packaged libraries amalgamated into a more unified program that ran at start-up and stayed in the system, always in the background, an environment or an interface in which all other tasks ran. It read each task, controlled its execution, recorded its usage of hardware resources, reassigned resources no longer needed, and immediately started the next task. These managing programs used to be called monitors (not to be confused, of course, with display screens).
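To give a feel for what such a monitor did, here is a toy sketch (in Python, with invented jobs and no real hardware) of its basic loop: take the next job from the batch, run it, record its resource usage, and move straight on to the next, signalling the operator only when the batch is empty.

```python
import time
from collections import deque

# An invented batch of jobs, as left with the operators: (name, function to run).
def payroll():
    return "payroll totals"

def trajectory():
    return "trajectory table"

batch = deque([("payroll", payroll), ("trajectory", trajectory)])
usage_log = []                            # the monitor's accounting records

while batch:                              # run the batch without human intervention
    name, job = batch.popleft()           # read the next task
    start = time.perf_counter()
    result = job()                        # control its execution
    usage_log.append((name, time.perf_counter() - start))   # record its resource usage
    print(f"{name}: {result}")            # output goes off to the "line printer"

print("Batch exhausted, signalling the operator.", usage_log)
```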


From the monitors evolved what is today called the kernel of an operating system. As the name suggests, the amalgamation of software into ever more complex systems was far from over.


Advances in the late 50s introduced the teleprinter to the computer interface: a kind of typewriter that could enter commands into the computer and print back its output. This made booking time for clients, now for the first time interactive users, more viable. The user could have a "conversation" with the computer in real time. But this was still too expensive, as users naturally had to spend time thinking, or retrying what they wanted to do, so being an interactive user remained a privilege. The teletype, as well as very limited CRT displays (think oscilloscope rather than video screen), was also used by the operators to better monitor the system's operations and state. But even then nothing had superseded the old blinking on/off lights.


Memory access, meaning moving data between main memory and where it is processed, was another issue that carried a large cost in computer time, because processing was (and still is) orders of magnitude faster than the memory transfers. Communicating with magnetic drives and tape, or with other networked machines, was slower still. So the processor was frequently left idle while it waited for a new batch of data to arrive.


One idea for using computer time better was multi-tasking, also called "parallel tasking". The easiest option was to include duplicate processors, but those themselves added to the cost. Another was to have a single processor alternate between different tasks: every time the processor needed input from a user or a packet of data from main memory, it could attend to another task. More sophisticated schedulers could add a fixed timer for the rotation, and dynamically adjust the timers and the turn order to be more balanced and efficient. Advances in processing technology in the early 60s made this idea more and more feasible.
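Here is a toy sketch of that rotation scheme (in Python, with two invented tasks written as generators; each `yield` stands for the moment a task's time slice ends or it has to wait for slow input):

```python
from collections import deque

def task(name, steps):
    """A toy task: each yield is a point where it gives the processor up."""
    for i in range(steps):
        print(f"{name}: step {i + 1} of {steps}")
        yield                                 # slice over, or waiting on slow I/O

# Two invented tasks of different lengths share a single "processor".
ready_queue = deque([task("A", 3), task("B", 2)])

while ready_queue:                            # simple round-robin rotation
    current = ready_queue.popleft()
    try:
        next(current)                         # run the task for one time slice
        ready_queue.append(current)           # not finished: back of the line
    except StopIteration:
        pass                                  # finished: drop it from the rotation
```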


This in turn introduced more problems: now that tasks ran in parallel, they ran in the same environment at the same time, and you didn't want one task interfering with another, on purpose or by mistake. It was also more convenient for the programmers if they could write their programs without needing to know which memory addresses and other resources would be used by other tasks at the time of execution.


All of this called for a program that could monitor the system memory as well as "virtualise", or abstract, it: keeping each process isolated within its own segment of memory, while still keeping the most used data of each readily available to the processor. Since this program could control other programs and not vice versa, it also introduced a system of privileges to the machine.
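A very loose sketch of that isolation idea (in Python, with everything invented: a tiny "physical memory" and a privileged table of segments). Each process addresses its memory from zero; the translation layer maps those addresses into separate physical regions and refuses anything outside them:

```python
# One flat "physical memory" of 16 cells, shared by the whole machine.
physical_memory = [0] * 16

# Privileged table: the (base, length) of the segment each process owns.
segments = {"A": (0, 8), "B": (8, 8)}

def write(process, virtual_address, value):
    """Translate a per-process address to a physical one, refusing anything out of bounds."""
    base, length = segments[process]
    if not 0 <= virtual_address < length:
        raise MemoryError(f"process {process}: access violation at address {virtual_address}")
    physical_memory[base + virtual_address] = value

write("A", 0, 42)       # both processes think they write to "their" address 0,
write("B", 0, 99)       # but they land in different physical cells
print(physical_memory[0], physical_memory[8])    # 42 99

try:
    write("A", 9, 7)    # outside A's segment: the access is refused
except MemoryError as error:
    print(error)
```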


Next time: we look at a daring (misguided?) attempt at constructing a sophisticated time-sharing operating system: Multics.



Sources:


en.wikipedia.org: "Multics", "History of Operating Systems", "Operating System", "User Interface", and various others

Multicians.org: "History" and various others
