Then2Now: The History of Computing

Welcome to my new series of articles on computing and related topics.  I’m calling it the “Then2Now Series” because we will explore how various computing and internet technologies have progressed over the years.  We will look at how things started, investigate major milestones along the way, and discuss how we’ve gotten to the point where we are now.

At the moment, I’m planning at least three articles in this series.  Today we will explore The History of Computing and discover, first off, what a computer is, then look at where and how “computers” started, and finally touch on how we’ve arrived at today’s computers.  Future articles might explore topics such as The History of Networking & the Internet, Cloud Computing, Viruses & Malware, and who knows what other topics.  If you’ve got something particular you’d like to discuss, leave a comment or drop me a note and I’ll see if we can write an article about it.

Also, let’s temper our expectations and say that, honestly, this series could be a one-and-done.  I give myself about a 62-percent chance of picking up one of those other topics and continuing the series with my next article.  For sure, I’ve bet real money on worse odds, but I figured I’d at least be transparent about it.

For now, let’s get started with our first article, Then2Now: The History of Computing.

What Is A Computer?

Before we can get into the history of computing, let’s talk about what a computer is…how do we define “computer?”  Simply put, a computer has to perform four tasks:

  • accept input
  • process that input
  • have the ability to store the original input or processed data
  • deliver output

Any device that can perform these four functions could be considered a computer.
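To make that definition concrete, here is a minimal sketch in Python (the names and the trivial “doubling” calculation are purely illustrative, not taken from any real system) showing all four tasks in miniature: a value is accepted as input, processed, stored alongside the result, and delivered as output.

  # A toy "computer" performing the four basic tasks:
  # accept input, process it, store data, and deliver output.

  storage = []  # stands in for memory or a disk

  def tiny_computer(value):
      result = value * 2                # process the input (a trivial calculation)
      storage.append((value, result))   # store the original input and the processed data
      return result                     # deliver the output

  print(tiny_computer(21))  # accept the input 21 and deliver the output 42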

What Were The Earliest Computers?

Well, the earliest computing machines were not machines at all; they were humans!  Being a “computer” was actually a job title.  Computers were people who performed mathematical calculations: mathematicians, accountants, bookkeepers, and engineers.  The first machine-style computer (i.e. a mechanical computer) was an ancient Greek contraption called the Antikythera mechanism, which is generally regarded as the first analogue computer.  It was designed, built, and used sometime around 200 BC, though scientists and archaeologists can’t agree on exactly when.  The Antikythera mechanism was a hand-cranked device with a series of gears and dials that could show the relative positions of the sun, the moon, and the planets over time.  It could also be used to calculate the four-year cycle of athletic games–the precursor to our modern Olympics.

Despite humans creating the first analogue computing device sometime around 200 BC, the next set of machines that could even pretend to be computers didn’t show up until the mid-to-late 1930s.  (Although, in the first half of the 1800s, Charles Babbage did design on paper several machines that would qualify as computers.)  Even then, these were super basic devices that could perform very simple addition or generate waveforms for testing audio equipment.  For about the next decade computers were mechanical devices–literally driven by gears, motors, camshafts, etc.–and were mostly limited to basic calculations.  (Image to left:  a portion of the Harvard Mark I mechanical computer c. 1944.  Courtesy Wikimedia.)

During World War II computing exploded, with the US Army and the University of Pennsylvania creating ENIAC, generally regarded as the first general-purpose electronic computer.  This system required roughly 1,800 square feet of floor space and weighed about 30 tons.  ENIAC was still of the generation of computers programmed by plugging cables and flipping switches on its front panels, so its utility was limited in much the same way as its portability.  In the late 1940s, however, all that changed.  Stored-program machines such as the Manchester Baby, EDSAC, and CSIRAC became the first computers to do something we take for granted every day–run a program held in memory.  These systems ushered in a new era of computing where the hardware was separate from the software and different programs could be created to achieve different results from the same set of hardware.

Mainframe Computers

Through the 1950s and 1960s, mainframe computers became the standard.  Mainframe computers generally filled an entire room with many large cabinets, each of which had a special purpose such as processing, storage, input, or output.  In the business world, these behemoths were shared among many departments, with each workgroup being allowed specific times of the day or week when it could use the system.  At the time there was no such thing as many users sharing time on one machine simultaneously.  Generally, mainframe computers provided output on paper, as screens were not yet part of everyday computing.  (Image to right:  UNIVAC File Computer c. 1961.  Courtesy Wikimedia.)

Minicomputers

While you and I would say, “There’s nothing ‘mini’ about them,” this format of computer was the logical successor to the mainframe.  The “minicomputer” rose in popularity in the late 1960s and throughout the 1970s because it allowed a business to own several systems–generally one for each department.  At that point, the engineers no longer had to share their computer with the accountants and so forth.  Minicomputers took up less space and had lower power requirements than mainframe computers, but they were still limited to paper-only output and were not networked or connected to any other systems.  Minicomputers eventually fell out of favor with the rise of smaller, faster, and more powerful desktop computers and servers.

Desktops & Servers

This brings us up to the 1980s and 1990s when, in the early years, individual users started getting their own computers.  At first this happened in business settings, where several workers, but probably not every worker, in each department would have desktop machines.  These machines were still pretty basic, having only a keyboard and perhaps a tape drive for input along with a basic monitor (often a TV) for output.  Computers did not yet use mice as input devices.  Many of these machines were still stand-alone and not networked together.  In some cases the desktops were networked back to something resembling a mainframe, which acted as a central server for things like these card-catalog terminals in the Duke University Library.  (Image to left:  Card catalog terminals in Duke University Library.  Courtesy Duke University Archives.)

Later, as the benefits of outfitting every worker and every home with a desktop computer became apparent, the proliferation of these small systems revealed a need to share information and allow the systems to communicate with one another.  Networking was born.  As buildings and businesses set up computer networks to allow their systems to communicate, owners and I.T. staff realized that having central storage locations for certain data was a pretty good idea.  So, taking a page out of the Big Bad Book of Centralized Computing (i.e. mainframe technology), businesses started installing servers.  These servers sat on the network and allowed users to share files and data and to communicate with one another more easily.  These servers also provided a gateway, and a barrier, to the open network we now call the Internet.

Around this same time, home users were being introduced to dial-up internet service, so even previously stand-alone home computers could join a network and share data with others.  Often these home systems had very modest processing power, and some did not even have permanent storage (e.g. hard drives).  (Image to right:  IBM PS/1, Model 2011, c. 1990)

Connecting hundreds and thousands of computers together and allowing people to roam around on each other’s machines sounds good in theory; however, as you may imagine, this is also when network security started to become much more of an issue.  That will be a future article, though.

By the end of the 1990s we were seeing quite powerful desktop computers, a collection of early laptop-style systems (then called “notebook computers”), and a very few examples of early pocket computers such as the Palm Pilot, certain Sony Vaio models, and the palmtop computers offered by HP.  (Full disclosure…I saved a bunch of summer-job money and bought an HP 620LX, which I used religiously to take notes and keep my life organized through my last couple of years of undergrad.  I loved that thing.)

The Wild West

Starting in about 2000, after everyone realized that the stroke of midnight would not make planes fall out of the sky or erase everyone’s credit history, and that life would go on, computing and networking started to explode.  This is what was referred to as the “Dot Com Boom.”  Companies and individuals were registering websites like crazy, and everything was moving from an in-the-real-world presence to an online experience.  With this boom came the need for lots and lots of servers connected to the internet.  System manufacturers started spending millions, perhaps billions, of dollars on research and development, seeking smaller, faster, lighter, less power-hungry chips.  At the same time, internet companies of all sizes were scrambling to set up data centers and get more capacity online to service the overwhelming demand from corporate and private clients alike.  From this demand for more capacity in a smaller space was born what we now refer to as “rack servers.”  (Image to left:  Modern data center with server racks visible.  Present day.)

Not all of the benefits of these development efforts went to corporations and data centers.  As we move into the 2010s through the present day our systems have continued to decrease in size while increasing in capacity.  Below is an abbreviated list of ever-smaller-and-more-powerful computing devices.

  • 2007:  The first iPhone (a true pocket computer) is introduced
  • 2010:  Apple introduces the original iPad
  • 2012:  The credit-card-sized Raspberry Pi, a full computer, sells for $35
  • 2015:  The Apple Watch (a wrist computer) becomes available
  • 2015:  The “Internet of Things” (small internet-connected computing devices) enters the mainstream
  • 2017:  Nintendo releases the Switch, a full-featured portable gaming console
  • Present day:  Multiple companies compete for supremacy in the VR/AR headset space

Moving Forward

We can all easily anticipate a continued push of classical computing systems toward smaller size and lower power consumption alongside greater processing power, storage, and connectedness.  These traditional systems are going to be the workhorses of our information technology for at least the next 30 years.  Beyond that, what’s going to be the truly new technology?  Truthfully, it would not surprise me if implantable computing systems are available within the next decade or two, but those may still be based on the classical systems we use today.  The genuinely new technologies might be:

Optical Computing

Optical computing, also sometimes called photonic computing, is simply computing with light rather than electricity.  Though the implementation is proving to be quite difficult, the concept is not as far-fetched as you might imagine.  After all, we have successfully moved past the telegraph and now use fiber-optic cables to transmit data over long distances.  Why not extend this concept to ultra-short distances, such as within a computer chip?  Realistically, though, the most likely application for this kind of technology will be a hybrid system in which some parts of the computer use light and others use electrons.

Quantum Computing

Using an entirely different kind of technology from our classical systems, quantum computers can, for certain types of problems, process data at speeds absolutely unthinkable for today’s machines.  Quantum computers could potentially help scientists tackle perplexing problems such as simulating fusion reactions, designing new materials and medicines, and solving optimization problems that choke even the largest classical systems.  Personally, I can’t wait!


Jerod Karam

Jerod Karam is Vice President of Technical Operations at Netvantage Marketing, an online marketing company specializing in SEO, PPC and social media. Jerod consults with internal teams and external clients on all manner of technical projects, manages the flow of information surrounding the company's online objectives, manages relationships with external partners and suppliers, and is a constant bother to everyone in terms of maintaining online security.
