Monday, October 16, 2017

#31for21 #themmlinky Boyer Lecture by Genevieve Bell and Arguments: "What it means to be human; what it means to be Australian"





Tyler Nelson Weekly really enjoys a singer called Sabrina Carpenter. So for the theme of Argument, I decided to go with Rap Battle.



You really have to be on your dozens when you are doing this, especially when you're having Adventures in Babysitting and maybe arguing with five-year-olds.



Along with Reddit's explainlikeimfive, we might have to have an arguelikeimfive.



Five-year-olds argue in lateral and literal and logical ways.



And it's important not to fob them off by simply agreeing with them.



[Yes, it is truer than you know that there are monsters in the house. I had to fight a cockroach in the shower just this weekend! And I don't mean the monstered people who you might see in two weeks ...]



One of the things which stayed with me from Analytical Philosophy was the concept of the lie-to-children, which we often tell in the world of science and technology.



Which is why we go to other areas to get and to stay honest.



So I bring you Genevieve Bell and the Boyer Lecture for 2017.



https://www.youtube.com/watch?v=p261qJXj3k0 - from Chess in Concert.



So far Bell has given three Boyer Lectures, the third touching on robotics. They inspired me to check out Douglas Engelbart's "Mother of all Demos", which acquired that name later on.



Here is Grace Lee's infographic / drawing of Bell's conceptual process and main points / main game.



http://www.abc.net.au/news/2017-10-11/history-of-the-robot-by-boyer-lecturer-genevieve-bell/9011840



[Might point out Henry Schoenmeier's Summary: a two-year course in the precis which is somewhere on Trove. Until then:



https://inthismy70thyear.wordpress.com/page/2/ which has the best poems and the most honest reflections].



The lecture series is entitled Fast, Smart and Connected. Number three is about how every technology has a country and a history, like Capek and the development and dissemination of the robot.



"What it means to be human; what it means to be Australian".



Bell's Big Question is "What does it mean to be human in a digital world?"



http://www.abc.net.au/radionational/programs/boyerlectures/genevieve-bell-fast-smart-and-connected-technology-has-a-history/9011390#transcript



http://www.abc.net.au/radionational/programs/boyerlectures/genevieve-bell-fast-smart-and-connected-dealing-lightning/9011388#transcript



http://www.abc.net.au/radionational/programs/boyerlectures/genevieve-bell-fast-smart-and-connected-where-it-all-began/9011340#transcript



I can see this hypothetical five-year-old wondering why these links are naked and why I don't put clothes on them - don't you know that raw HTML grows rotten and cold like an untaken yellow vitamin? And why do we spend so much time looking back when we could be moving forward - and more to the point, being in the present, the now?



So someone or other was "Dealing with Lightning".



And Bell explores her own background in the first lecture - it is instructive, entertaining and educational.



So I might give a quote:



Then eight months ago I came back to Australia, to Canberra, and the Australian National University. And everyone keeps asking me why. 'Why would you come back to Australia?' they ask. In their tone, I hear a fair bit of skepticism, and two distinct questions. 'Why would you leave Silicon Valley?' And 'Why would you come here?' After all, I’d spent the better part of the last 30 years at the epicentre of the biggest set of digital transformations of our lifetime. I saw the birth of Google, Amazon, Facebook, Twitter, as well as things like the iPod, the iPhone, the Kindle and even self-driving cars. I was in the middle of all that. And it’s a very long way from there to Canberra, in almost every conceivable way. Basically, I seem to be going in the wrong direction.
But making this kind of journey isn’t new to me. I left a tenure track job at Stanford University to join Intel back in the late 1990s. Almost no one thought that was a good idea. It probably didn’t help that the job was brokered by a man I met in a bar in Palo Alto, and that it came without any discernible job description. It was, in many ways, a very foolish thing. I left a world I knew how to navigate and a clear path forward for something much more opaque. In fact, I didn’t really understand what Intel did, or how Silicon Valley worked, despite the fact Stanford was in the middle of it – literally and figuratively. But I did understand that companies like Intel were building the future, and that they were making the technologies that would shape our lives for at least the next generation. And I knew I wanted to be a part of that, and to help shape it differently than the traditional logics of engineering and computer science might dictate. And yes, that really does mean I thought one person could make a meaningful difference in that world.
The answer to why I thought that lies in my childhood. I am the daughter of an Australian anthropologist. I spent my formative years on my mum’s field sites, living in Indigenous communities in central and northern Australia, with people who still remembered their first sightings of Europeans, cattle and fences. It was, to say the least, an unusual childhood. And it left some pretty indelible marks on me — about social justice, fairness, and how different the world was depending on your point of view and your place in it. And mum made sure my brother and I reflected on that, and understood what it meant.
In fact, she raised us with one pretty simple principle: if you could see a better world, you were morally obligated to help bring it into existence. That you should put your time, your energy, your passion, your intellect, your heart, your soul, everything on the line. You shouldn’t sit on the sidelines, you should actively advocate for the world you wanted to see, and that world should be one that was better for many, not just for you.
I believed I could do that at Intel, and in the Valley — that I could actively work to change the world. And on my good days, I will tell you I probably did. I helped put people into the conversation. I know I helped shape a different set of conversations about people and technology, and perhaps as a result I helped generate a different set of possibilities, and we built better stuff because of it. I know I made it possible, at Intel and beyond, to think about intersections of technology and specific cultures, and to use those insights to drive new forms of innovation and technology development. It was heady stuff.

However, over the last couple of years, I had been increasingly struck by the complicated dance of being human in this world I was helping make digital and about what that might look and feel like, and what we could and should be doing differently. And at the end of 2016, I was taking a long hard look at what I was doing with my life, and whether I was making the right kind of impact given all of that. And whether I was, in fact, helping make a better world.
It was against that backdrop, that an email arrived on Christmas Eve from the chairman of the ABC. Initially I thought it was some kind of seasonal greeting spam. Even after I clicked it open and read the letter, I still didn’t quite understand the invitation. Me, give the Boyer lectures? Yeah, na!  Because I grew up with the Boyers, I don’t remember a time when they weren’t part of the conversation. I know how important the Boyers can be and the difference they can make. Since 1959, the Australian Broadcasting Corporation has produced the Boyers to spark a national conversation about critical ideas. Even if you didn’t listen to the program, you were in that conversation somehow. The roll call of Boyer lecturers is impressive – academics, authors, politicians, judges, public intellectuals – and their topics likewise: law, public policy, architecture, space travel, aboriginal rights, ideas about Australia, Australian-ness and our place in the world. It was hard to see myself in that kind of lineage and I wondered what my distinctive intervention could be.
The Boyers feel oddly familiar to me too. A worn and slightly battered copy of W.E.H. Stanner’s 1968 'After the Dreaming' was always somewhere in mum’s office, and it came up in conversations as a casual reference — Stanner’s Boyer lectures. I have been baby-sat by Boyer lecturers; watched them across the kitchen table talking and eating. I even wore a Boyer lecturer’s dress to my high school dance. It was red and sparkly and Marcia Langton let me borrow it for a night.
Knowing Boyers didn't make it any easier to say “yes” though, because I knew I wasn’t quite like them. And the data bears out that impression: 63 Boyer lecturers to date, and only 12 of them women, and the average age hovers around 60. And the format, a written set of lectures, tending over the years toward an academic style, was equally alien to me. I found myself asking: could I do it differently? Hold true to the idea that the Boyer lectures should spark a conversation and engage Australians in that conversation, but tackle that conversation in a new way. Happily, for me, the ABC was willing to take a risk on me, and on this.
So, if you grew up with the Boyers, it’s going to feel a little strange. It’s a bit more informal and interactive, more of a conversation than a lecture. I have sound, there is multi-media content, I have the voices of other Australians joining me, and I want this to be both a conversation starter and also a conversation. Which means there are a couple of places where I am going to ask for your help, and your input. And if this is your first Boyer experience, welcome. I hope it won’t be your last.
Here we go …
Twenty years in Silicon Valley has left me with the distinct sense that we need to keep reasserting the importance of people, and the diversity of our lived experiences, into our conversations about technology and the future. It is easy to get seduced by all the potential of the new and the wonders it promises. There is a lot of hype and not so much measured discussion. Many conversations seem to originate in the United States, but I think this is the time and place to have one here in Australia. After all, we are not just passive by-standers in this digital world – we have been active creators of it. So it is time for another conversation, about our possible digital and human futures, and about the world we might want to make together.
Where would you start? Well I think three ideas underpin our current digital worlds. They are deceptively simple. Like a good ethnographic insight, when you share, everyone just nods, as if they have always known it. I think it goes like this: the digital world we have been building is about three things: speed, smartness and connectivity. Put another way, digital technologies are measured by how well they deliver on the promises of being fast, smart and connected.
But where did those ideas come from? Knowing how we got to this moment is important. Where did this digital world come from? And what is built into it? In terms of affordances, and constraints, but also ideas, ideals and cultural logics? What is the world we are building? And where do we fit in it? Where should we fit in it? And perhaps even, which we?
In this Boyer conversation, I want to talk about just one thing: what it means to be human in a world of digital technologies, where the dominant ideas are all about things being fast, smart, and connected. Sure, we can be fast and smart and connected — many of us regularly are so. But at what cost? And is that the way we want to continue to be? Increasingly I wonder if these ideas scale, and how we, as people, might fit into a world constantly remade along the lines and logics of speed, artificial intelligence and constant connectivity.
I want our conversation to unfold in three parts. How this current smart, connected and fast digital world came to be and its current contours; a slice of our technical history that helps us ask different questions of our future; and some suggestions about a new way to configure our digital world with our humanness at the centre.
So, let’s talk about how our digital world came to be all about fast, smart and connected. There are lots of histories of computing I could tell, and lots of famous characters we should name check (hello Charles Babbage, Ada Lovelace, Claude Shannon, Al Gore, the list goes on), and lots of technological innovations that deserve a shout out (valves, transistors, binary, fortran, germanium just to name a few). But I realise this isn’t any history of computing — this is my history.
So, let’s start with the easy one — the history of how computing got fast. We all have some intuitive sense of that. Your first computer was much slower than the one you have now, or the one you are listening to me on today. Think about how quickly a webpage loads, or you can save a photo or buy a pair of shoes. And sure, some of that is the internet, but a lot of it is the power inside the computer itself. Your first mobile phone would feel positively glacial next to your current one. Mine was a Nokia 3310, and I loved it, but I wouldn’t want to text on it now! We know that even in a few short years the computing technology around us has gotten faster. And somewhere in there, our sense of time changed too. We talked about “always-on” a lot in the 2000s—the technology was always on. Of course, that meant we were too. And tensions around that sense of time, of constant connectivity, are a place where the digital world sometimes feels overwhelming. A constant hum, chirp and ping of notifications, updates, likes, tweets, posts, newsfeeds. All demanding attention. I know I feel it, perhaps you do too. The ABC’s recent survey of our smart phone habits here in Australia says we all have a little bit of it – more than half of us described our smart phones as feeling like a leash.
So where did it start? That sense of speed in our digital world? The history of computing is intimately tied with the idea of speed and with things needing to be faster — starting back in the late 1930s. Before computers were technical systems, they were, in fact, people who did computation. Mathematicians mostly, who worked with log tables, and calculators and increasingly sophisticated mechanical machinery. There were rooms full of such computers in Britain, the United States and Australia. They were almost always women. Yep, computers were women! Which is ironic. Given how poorly women are now represented in the field, and the stories we like to tell about that.
Computers, as technology, were needed to do the work that people no longer could. It happened in the context of World War II, when the machinery of war demanded increasingly complicated computation. During the war, those human computers were central to the aiming of guns, the cracking of code, and pretty much anything else that involved a lot of math and numbers. According to one historian of the period, calculating a single artillery shell’s trajectory involved somewhere around 100,000 multiplications, which could take at least 30 minutes for a human computer and her mechanical calculating technologies. 
And that just didn’t scale. There were many experiments and attempts to find ways to get to faster computing. To find a way to have machines do the work of humans, far more quickly. One way to think of it is that initially, early electrical computers were, really, just more sophisticated calculators — computer scientists everywhere are wincing as I say that. Because of course it was far more complicated and involved, solving a remarkable range of problems from mechanics to electrical engineering to creating new technologies for storage, calculation, and programming. There were amazing breakthroughs: the Zuse Z3 in Germany in 1941; the Atanasoff-Berry Computer at Iowa State University in 1942; the Colossus at Bletchley Park in the UK in 1943; the Harvard Mark 1 in 1944; and the ENIAC – the Electronic Numerical Integrator and Computer at the University of Pennsylvania in 1946. One of ENIAC’s creators estimated his machine could perform 100,000 multiplications per second; which would mean the machine could do in 1 second, what it took a human computer to do in 30 minutes. He was wrong, as it turned out — the ENIAC was only able to do 10,000 multiplications per second, so 10 seconds to do the work of 30 minutes. Even so, you can imagine it was a remarkable transformation!
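[A quick aside from me, not Bell: the arithmetic in that ENIAC comparison is easy to check. Here is a minimal back-of-envelope sketch in Python, using only the figures quoted above; the speed-up at the end is my own calculation.]

```python
# Back-of-envelope check of the ENIAC comparison, using only the figures in the lecture.
multiplications_per_trajectory = 100_000   # one artillery trajectory, per the historian Bell cites

human_seconds = 30 * 60                    # at least 30 minutes for a human computer
claimed_rate = 100_000                     # multiplications/second, the creator's estimate
actual_rate = 10_000                       # multiplications/second, the figure Bell gives as correct

claimed_seconds = multiplications_per_trajectory / claimed_rate   # 1 second
actual_seconds = multiplications_per_trajectory / actual_rate     # 10 seconds
speedup = human_seconds / actual_seconds                          # ~180x a human computer

print(f"claimed: {claimed_seconds:.0f} s, actual: {actual_seconds:.0f} s, "
      f"still about {speedup:.0f} times faster than a human computer")
```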
The ENIAC was more than just an outrageously fast calculator, it represented a new kind of solution for computing. And it would have a far-reaching impact. One of ENIAC’s early visitors was American mathematician John von Neumann. He helped articulate the foundational architecture for an electronic digital computer: input, output, processor, memory. His work became a blueprint; elegant in its simplicity and providing a framework for decades to come.
In the United States and the UK there were significant investments in building out these new electronic digital computers, and many machines followed: Baby or Manchester Mark 1, BINAC, EDSAC, MONIAC. These first-generation computers took up whole rooms, used cumbersome systems of valves which required a great deal of electricity and care, and had frequent down-time and failures. They were, for the most part, big, loud, smelly, noisy, temperamental and power-hungry. People joked that when the ENIAC was switched on, the lights in nearby Philadelphia would flicker and dim. That said, these machines were also fast — so much faster than humans. They would change not only what was possible, but also reshape the imagination about what might be possible. And as a result, electronic digital computers steadily replaced the human ones, ushering in a new wave of automation and making speed one of the principal measures of computing success.
There was a time when Australia was at the forefront of computing and we had our own first-generation computers. CSIR Mark 1, or CICERO as it was called, was turned on for the first time in November 1949. It was the 4th stored program computer in the world – which is pretty cool. It was uniquely designed and built, in Sydney, by English-born scientist Trevor Pearcey and Australian electrical engineer Maston Beard. They were part of a project team within the Commonwealth Scientific and Industrial Research Radio-physics lab, and CICERO was their remarkable experiment. It was, like its American and British counterparts, fast: 500 and later 1,000 times quicker than a mechanical computer. It was power hungry too, and its completion was delayed by nine months, because of electrical shortages in Sydney after the war. Trevor Pearcey, CICERO’s architect, knew his creation was about more than just speed. He believed it would open up a new way of organising knowledge and thus a new way of thinking. Writing in the Australian Journal of Science in February of 1948, he speculated that computers could open up new ways of storing and using large bodies of data. And despite a successful run at the CSIRO, the machine was decommissioned and research dollars and interest moved elsewhere.
CICERO was transferred to the University of Melbourne in 1955, on the back of a truck, covered by well-worn tarps, tied down as best could be, with the air vents balanced precariously on the top of the load. The manifest read 43 crates of electronic equipment. It took nearly a year to reassemble it and get it working again. It was reborn as CSIRAC: the Commonwealth Scientific and Industrial Research Automatic Computer, in a ceremony with the vice chancellor at the University of Melbourne in 1956. CSIRAC was powered on through an elaborate process, and its output system typed this message: “Mr Vice Chancellor: thank you for declaring me open. I can add, subtract, multiply; solve linear and differential equations, play a mediocre game of chess; and also some music.”
And with that, CSIRAC was off and running. And you couldn’t have missed it. It weighed 2 tonnes and was housed in large grey metal cabinets – you could always see wires and lights and bits of the 2000 valves. It took up about 40 square meters of floor space, and used 30 kilowatts of power for its 1 kilobyte of memory. It was big, hungry for electricity and not very powerful. I had to ask someone at work to translate this into 2017 terms, and he said “wow” … and then offered that today’s computing had a million times as much memory for one-ten-millionth the power. Another colleague pointed out, that it would basically take 4 million CSIRACs to replace my current mobile phone, which would require most of the electricity in New South Wales, and most of the landmass too. I am not sure I know quite how to wrap my head around that.
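[Another aside from me: here is a rough sketch of how you might arrive at that "4 million CSIRACs" figure. The CSIRAC numbers come from the lecture; the 2017 phone numbers (about 4 GB of memory, a few watts) are my assumptions, not Bell's.]

```python
# Rough reconstruction of the "4 million CSIRACs" comparison.
# CSIRAC figures are from the lecture; the phone figures are my assumptions.
csirac_memory_bytes = 1_000            # about 1 kilobyte of memory
csirac_power_watts = 30_000            # 30 kilowatts
csirac_floor_m2 = 40                   # about 40 square metres of floor space

phone_memory_bytes = 4_000_000_000     # assumption: ~4 GB in a 2017 phone
phone_power_watts = 3                  # assumption: a few watts

csiracs_per_phone = phone_memory_bytes / csirac_memory_bytes   # ~4 million, by memory alone
total_power_gw = csiracs_per_phone * csirac_power_watts / 1e9  # ~120 gigawatts
total_floor_km2 = csiracs_per_phone * csirac_floor_m2 / 1e6    # ~160 square kilometres

print(f"{csiracs_per_phone:,.0f} CSIRACs to match one phone's memory, "
      f"drawing ~{total_power_gw:,.0f} GW over ~{total_floor_km2:,.0f} square km")
```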
My mum remembers CSIRAC. She was a teenager in Melbourne in the late 1950s, and a school excursion took her to see it.  She said to me: “We were prepped to see something extra-ordinary, that this machine was something amazing that Melbourne had. A man stood in front of it, and told us how fast it was, how many calculations it could do. It made such a lot of noise,” she tells me now, “and we were in awe. It made a really big impression on me, I can still see it in my mind.”
So I went to visit CSIRAC myself. You can too — it sits today in Melbourne Museum, behind a short clear wall. It is silent now, but it was, in its day, like a living thing. When it was running, it was a cacophony of sounds — the roar of cooling vents, the faint hum and pop of electrical transformers, the buzz of a fan belt, and the clatter of a paper ribbon feeder. And it smelt too, of oil, varnish, and melting wax. It had its own pace and rhythm: its operators knew by the sound, sight and smell if it was working properly. They knew how to use a screw-driver to get the right tension on the belt that held the memory drum; how to shut it off when it got too hot outside; how to use a stick with a rubber ball on the end to diagnose dying valves; and not to plug in the tea kettle when CSIRAC was running a job, because it shorted the system out and you had to reboot the whole thing — and that took an hour!
Despite all of that, it was so fast. And so much more than there had been. In its time at the University of Melbourne, it was used by all kinds of people for all kinds of calculations – 30,000 hours’ worth. In an eight-year period, it tackled over 700 projects, including calculations for weather forecasting, forestry, loan repayments, building design (including the Myer Music bowl), and the electrical supply. It was basically run like an open software project — if you could work out how to use it, you could. Over time, a library of paper-tape routines was developed — a very, very early programming language emerged too – called Interprogram – years ahead of other forms of software.
And it was also just a thing of wonder and possibility. And we did amazing, unexpected, things with it. It turns out CSIRAC is responsible for the very first digital music. It was played at the inaugural Conference of Automatic Computing Machines in Sydney in August of 1951, and it regularly made music in its time in Melbourne. It was programmed to play contemporary tunes, though there was a lot of debate about its musical quality. My favorite review: “it sounded like a refrigerator defrosting in tune” from the Melbourne Herald, in June of 1956.
And despite all those limitations about power and performance, its early programmers also wrote computer games to pass the time and to explore what CSIRAC could do. There was a game called “the way the ball bounces”, which one of the early operators, a woman named Kay Sullivan played so well, she beat CSIRAC and a new game had to be written. I like knowing that one of our very first games was played by a woman, and she won, a lot. Ultimately CSIRAC was switched off for the last time on November 24th, 1964. One of its operators said that day “it was like something alive dying.”
SILLIAC (pronounced silly-ack), our other early computer, is an equally wonderful, very Australian, story. It was built at the University of Sydney, modelled on American John von Neumann’s Institute for Advanced Study Computer and the ILLIAC (Illinois Automatic Computer). It was funded by the proceeds from a winning bet on the 1951 Melbourne Cup — £50,000 to be precise. Yep, a flutter on the ponies helped pay for one of our earliest computers. It doesn’t get much better than that. It commenced running in 1956, and was utilized for many things, including much of the underlying calculations for the Snowy Mountains hydro scheme. It was ultimately decommissioned in 1968.
Lots of Australians were exposed to computers through CSIRAC and SILLIAC – researchers, scientists, students, and school kids all encountered something remarkable. These early machines speak to Australian ingenuity and drive, and to our ability to be at the cutting-edge of technology, even at great distance from the centers of traditionally recognised innovation. And whilst we didn’t continue our early winning position on novel and unique hardware, we did create generations of programmers, and early computer scientists. We kept the machines running and had ideas about how to extract real value from them, well beyond their initial builds. We made music and games and ideas. And it is a legacy of which we should be proud.
The end of CSIRAC clearly wasn’t the end of computing — in Australia or elsewhere. It was the end of a particular kind of computing. CSIRAC is sometimes called the last of the first — that is, the last of the first wave of computers. Because in the time that CSIRAC clattered and whirred away at the University of Melbourne, the nature of computing underwent a remarkable shift. The whole idea of computing evolved from being about complex calculations for scientific and military activities to computing as a necessary part of the modern corporation. It became about business. The rhetoric and advertising from companies suggested computers would streamline work, increase efficiencies, and liberate humans from drudgery and repetitive tasks.
So, the history of computing technology is all about fast. That idea of speed was set in the 1940s, and every successive generation has followed that vector. I think that it is also about more than just that. It is also about ideas of innovation, technology, and change, and about where the future is invented and by whom. After all, the passage from calculators to computers is also about increasing complexity, and about a whole new set of experiences.


Next we have Pat and his TRS80 - a Tandy / Radio Shack machine launched in 1977 (the "80" comes from its Zilog Z80 processor, not a year), a few years before International Business Machines released its Personal Computer in 1981. And there were lots of Tandy Electronics shops here - the Australian arm of the same company that ran RadioShack in the US.



This is Bell talking about the smart part of computing and technology, and about being connected too.



It was the first TRS80 in Australia, and he won it, rather by accident, in a raffle at the 8th Australian Computer Fair in September 1978. Pat ended up in the Canberra Times with his new computer. The photo shows a small boy, transfixed, in front of a wooden desk with the TRS80 sharing space with a word processor; in his hand is the instruction manual and the program cassettes. He was 11, and none of his friends, or anyone else he knew had a personal computer. He had no-one to talk to about his new machine or what it could do. Personal computing just wasn’t part of daily life here; it wasn’t part of Australian society; we were, in effect, pre-digital.
But Pat and his TRS80 are an important part of our history, and of how we came to our current digital world. The TRS80 — Trash80, another friend cautions me — had been launched in America in 1977. Various kit computers had been available for nearly a decade, but along with the Apple II and the Commodore PET, the TRS80 was a significant shift in design. Forty years ago, this trinity was an intervention, and an important moment in which our digital world took real shape – they are our pre-history. You can see their ancestry: they are informed by Von Neumann’s ideas about compute architecture and they clearly take advantage of the ongoing evolution in processor technology. But unlike the mainframes, these new machines were small, they had a keyboard, and a screen. You could program them. They seemed approachable. They were human scale. And they were the first significant wave of computing that came into our homes and into our lives.
In fact, if you are of a certain age, you vividly remember your encounters with these new machines. A TRS80 in Canberra when your mum taught you BASIC; another in Massachusetts when your uncle wanted help with book-keeping and you bought your first software from those guys in Albuquerque who were on their way to make history in Seattle; a Commodore64 in Albury because your dad thought computing was the way of the future; the IBM PC 5150 your dad bought the moment it was available in Alaska and you learnt to program it; a Panasonic JB3000 your dad bought and that you used to digitise your favourite album art; a ZX81 in Boise that you bought with your own money from the back pages of a magazine because it was the coolest PC on the planet.
Pat says his TRS80 was loud – the monitor hissed, and the keyboard had a very distinctive plastic reverberation, and because the processor was in that keyboard, you could feel it heating up under your fingers and smell the circuitry. You had to learn to program, but if you could crack that code, a lot of things felt suddenly possible. You could explore maths and numbers – it was after all an inheritor to the calculator. There were also text-based adventure games, and some low-resolution graphic games. And there was the programming itself. You could write programs to make new things. And if you were determined you could get down below the programming language and interact with the machine itself. It was like opening a door to another world. And everyone I know who has a story about those first moments was profoundly changed by them. Perhaps you were too.
The president of Radio Shack which produced those TRS80s promised personal computers for the home, office and school, declaring that “This device is inevitably in the future of everyone in the civilised world — in some way — now and so far ahead as one can think.” Forty years on, it is safe to say whilst the TRS80 wasn’t our inevitable future, the world that it signalled surely was. Computer power for personal productivity, creativity and entertainment, and also the prospect that we might be able to control some of that power and bend it to our own ends.
We now live in a world of constant connectivity, proliferating devices and lots and lots of digital stuff in lots of places in our lives. Lurking underneath it are complicated networks of communications, payment structures, regulations and standards. And even if you don’t keep your smartphone within arm’s reach, and you don’t use Facebook, or Twitter or Instagram, or Snapchat or Tinder, you live in a world where your friends, your kids, your parents, your bosses, your politicians, your teachers, they all do, and where those services and their underlying ideas are shaping this world and how we live in it. I remember interviewing a woman in South Australia years ago; she told me, in those days, her dad didn’t have the internet. So whenever he wanted to look up something on Google, he just called her on her mobile phone. She said to me: “I am his Google.”  That story has always stuck with me; it reminds me you don’t have to be online to be connected, and you don’t have to be surrounded by technology to be always and already part of the digital world.
So how did we get from the TRS80 to here? Part of the answer is speed – computers just kept getting faster (and more powerful). The other part of the answer is that computers changed shape and direction; they went from mainframes to personal computers to mobile phones back to mainframes again, as servers that power the cloud that makes digital applications and services possible. It is about how the digital world got connected, and as a result got smart.
Unpacking that history isn’t straightforward. The ideas of smart and connected crisscross time, space and national boundaries. Tracing them reveals a messy patchwork of intellectual genealogies, friendships and shared encounters, as well as persistent ideas and relationships. It isn’t a simple story about architecture and technological innovation or inventors and scientists. And there are lots of formal, wonderful histories that have been written. I want to explore just two of those histories, both of which begin before the TRS80 but are intertwined and help make the world digital in its current form. They are still with us and we are still in dialog with the world they have imagined. It is also a personal story, because I grew up in that world of smart and connected and I helped build pieces of it.
For me it started in America: in a valley in northern California, south of San Francisco. On the map it is called Santa Clara Valley, but we know it as Silicon Valley. It is an almost 40-kilometre stretch of low-slung sprawling suburbs and multi-lane highways and industrial parks – really it isn’t that different from other bits of the American suburban landscape. Bordered to the west by the Santa Cruz mountains and the east by the East Bay, and from San Jose in the south to Menlo Park in the north, it is the traditional lands of the Ohlone people. It has been settled by successive waves of Europeans since the early 1700s, and is today home to more than three million people with one of the highest densities of millionaires in America. Google, Facebook, HP, Apple, Intel, AMD, Airbnb … they are all there, along with wealthy venture capitalists and countless small start-ups, some of whom might be household names one day.
I moved to northern California in 1992, to go to graduate school at Stanford University. I remember all the gum trees on campus and my first sight of the wattle blooming beside the road, and the fact that it smelt an awful lot like home – warm dirt, eucalyptus, jasmine. There was endless sunshine and a dry kind of heat, and even though the Pacific Ocean was on the wrong side of the day, it was all a bit achingly familiar. And I knew I was going somewhere important: Stanford was a big deal even then, and their anthropology department was tough and scary. But I just don’t remember thinking I had moved to Silicon Valley, which is crazy given the importance of Stanford to both the myth and the real geography. When I arrived there were still fruit orchards, plums, apricots, pears, reminders of the region’s importance as an agricultural centre. And you could still find Quonset huts and other pieces of architectural history that pointed to early days of the aerospace industry. And it didn’t feel like the centre of a burgeoning digital world, even though the first threads of that world had already been sewn.
That slightly nervous voice is American electrical engineer Douglas Engelbart, and he has every reason to sound anxious. He is doing a live demo of brand new and previously unseen technology in front of a crowd of a thousand very sceptical peers in San Francisco, at the combined Association for Computing Machinery and IEEE Annual meetings. Over a 90-minute period, he and his team would showcase a suite of technologies he called “oN-Line” computing including word processing, version control, a file linking structure, real time collaboration, hypertext, graphics, windows and the mouse. Basically, he is showing off an early version of our digital world.
Which is pretty impressive, because this demo is taking place in December of 1968, and the personal computer hadn’t been invented yet, and most of the technologies displayed were years away from commercial production, and it all represented ways of thinking no-one had ever seen. Oh, and his team and all the technologies were in their lab in Menlo Park, 50 kilometres down the peninsula, and he was conducting the entire demo over a remote link with live television and a direct feed. Even on his best day, Steve Jobs would never have done something that ambitious on stage.
Engelbart was well placed to see the future; he had come of age around those first and second generation of computers, and seen their applications across scientific and military research. And he was in Silicon Valley before it even had that name, but where it was all about building new technological possibilities, and where it was also all about building new social realities. After all, northern California was not only home to Silicon Valley, but also the social upheaval and experimentation of the counterculture.
And Engelbart was in the middle of all that, running a group at the Stanford Research Institute, an offshoot of Stanford University tackling applied problems. He was hugely interested in how we could use computing technology to augment human intelligence, in the context of collaboration and data-sharing. He believed computers could have an outsized impact on how we lived and worked; he wanted to move the conversation from automation and efficiency, and speed, to making the world a better place. And he put all of that into his work and ultimately into that fateful demo.
This demo is known as the Mother of all Demos, and you can still watch it on the internet. At the time, it was said, towards the end of that 90 minutes, that Engelbart was dealing lightning with both hands. Everyone in the room knew they had just seen something remarkable. It was a moment when you could see the future of computing arrayed before you with crystalline clarity. All you needed to do was build it. And people did, at the Homebrew Computer Club, at HP, Apple, Xerox PARC. This demo influenced generations of engineers, technologists and computer scientists and gave shape to personal computing.
Less than a year after Engelbart made the future come alive on that stage in San Francisco, he was a party to another huge advance in building the world we now inhabit. Engelbart had asked his audience to imagine a world where “you were supplied with a computer display … backed up by a computer that was live to you all day and was instantly responsive”. The future he showed off was based on that simple premise – simple today because we know what he was talking about. Back in 1968, that felt like a pretty big stretch. But it was coming and sooner than some in his audience might have believed.
On October 29, 1969, the American phone company AT&T connected two computers – one in Engelbart’s world at the Stanford Research Institute and one at the University of California, Los Angeles, about 570 kilometres apart on the west coast. On the UCLA end, a professor wrote of that moment “Talked to SRI, host to host” … it is a very dry sentence to capture something quite remarkable. This was the beginning of the internet – this was the beginning of connecting the world’s computers and all of us too. In an interview years later, that professor, Leonard Kleinrock, would recall the story with more details. He said that he and his students hoped to log on to the Stanford computer and send data between the campuses. So they had the lab at SRI on the phone, watching their local monitor, whilst at UCLA they started typing the word “login”. He says: We typed the L and we asked on the phone, "Do you see the L?" "Yes, we see the L," came the response. "We typed the O, and we asked, "Do you see the O." "Yes, we see the O." "Then we typed the G, and the system crashed".... Somehow, fittingly, that is the start of the internet.
The internet didn’t come out of nowhere; it wasn’t built by the free market or by a company who saw a possible market opportunity. It was, in point of fact, a project commissioned by the American Government’s Department of Defense Advanced Research Projects Agency (DARPA for short). The brief was to build a robust, fault-tolerant communication network using computers. And it would set in motion a new way of imagining and manifesting communications, information sharing and even communities. Building on ideas developed in the UK and the US, this new network allowed data to move more freely and without dedicated, pre-existing, fixed connections. This form of packet switching was a radical departure from the way information had been moving, and would have far-reaching consequences.
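[Aside from me again: "packet switching" is doing a lot of work in that paragraph, so here is a toy sketch of the idea - nothing to do with the actual ARPANET software. A message is chopped into numbered packets that can travel independently, arrive out of order, and still be reassembled at the far end.]

```python
import random

def to_packets(message, size=8):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i * size:(i + 1) * size])
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    """Rebuild the message no matter what order the packets arrived in."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("Talked to SRI, host to host")
random.shuffle(packets)      # packets may take different routes and arrive out of order
print(reassemble(packets))   # prints the original message
```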
October 1969 was the first test point. By December of that year, host computers were added from the University of Utah and the University of California, Santa Barbara, and the east coast of America was connected up in March of 1970. The Advanced Research Projects Agency Network or ARPANET, as it was then called, grew slowly and expanded to include many American universities and American government hosts. A few international hosts in Norway, Sweden and the UK were also added, but mostly the ARPANET connected computer scientists and other researchers in the science fields in America. The technology continued to evolve and scale, and the United States government continued to underwrite the cost of connecting up nodes – over 213 host computers by 1981. The internet grew up this way, as a network, connecting all sorts of people and ideas. People shared material, insights, utilised computing bandwidth at other universities and built out human networks on an unprecedented scale. Although there were lots of rules about how you should use the system, it allowed a new level of sharing and partnerships and created unexpected subcultures.
I found the internet in 1989, 20 years after the first connection. I was taking a programming class in computer science and the instructor kept talking about this thing called the internet. It turned out, for me, the internet could be found at a Vax terminal in a cold, fluorescent lit basement of the Computer Science Building at Bryn Mawr College on the outskirts of Philadelphia. And in no short order, I discovered Usenet chat rooms, a kind of early news feed technology that took advantage of networked computers, and a man in Australia, who said he worked at Optus, and who would publish his own accounts of cricket. As a cricket tragic living in America in the late 1980s, the internet was the thing that kept me in touch with the exploits of Geoff Lawson, Allan Border and the rest of the Australian team. It was slow and clunky and the interface was terrible, but it was a connection to home, and to a sport my grandfather taught me how to love, so it mattered. And I understood, at least intuitively that this internet thing was not just an infrastructure per se, but it was also a kind of digital connection.
It didn’t occur to me to wonder however just how my cricket tragic was doing it. It turns out that the notion of a connected network of information was moving beyond the ARPANET. Large companies built their own internal email systems, and their own limited networks. And in the 1980s, there were public bulletin board systems, basically a computer server you could connect to and share information in moderated forums. And in 1989 the first commercial internet service provider launched in Massachusetts.
Australia was right there. We got the internet on June 23, 1989, when two researchers at the University of Melbourne and the University of Hawaii completed the work necessary to connect Australia. Not even 30 years ago. And it wasn’t a very fast connection, and it just connected universities, and we quickly exceeded the data limits, but it helped Australian researchers engage with a much larger world. Commercial internet access followed almost immediately. Indeed, we can claim credit for having the second commercial internet service provider in the world – in Perth. And another very early ISP in Byron Bay that same year. And we had comparatively high adoption rates of domestic internet, and a great deal of local activity. We got online en masse and stayed there. Our digital worlds were firmly connected.
The internet wasn’t just for university researchers, or company email systems. It became a thing and a place, and an email address. You signed up to a service – CompuServe, AOL, Prodigy – and you got an email address, and access to a range of services, and media. And your personal computer was now a gateway to a much bigger world. But do you remember your first email address? Do you still have it? And do you remember discovering that you could write to people, and they would write back, and there was no stamp, and no post-office and no fax or telegram? And that it wasn’t instant, but it was quick and it let us be connected to each other in new, unexpected ways? I got my first email address when I arrived at Stanford University in 1992. I needed permission from my department chair, and I had to go to a basement somewhere on campus and explain why, as an anthropologist, I could possibly need an email address. I remember the explanation that I was Australian and a long way from home.  I have been collecting email addresses ever since – they are a catalogue of the places I have been, worked, and with whom I have affiliated.
Like many of you, I have multiple email addresses, and I suspect like many of you, it is now a mixed blessing. According to the keepers of such data, there are now nearly 2.6 billion email users worldwide, with slightly more than 5.2 billion email addresses. Email, whilst hardly the coolest form of connectivity in our connected digital world, remains one of the big volume items on today’s internet.  In fact, globally, we send and receive approximately 215 billion emails a day; with a little more than half of that related to work. At work, if you have email, it is an average of about 122 emails sent and received every day. The connected bit of our digital world has a lot of email, and the sense of being just a little overwhelmed.
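[My own arithmetic: dividing those email figures through shows what they mean per person. The work share below rounds the lecture's "a little more than half" down to a half.]

```python
# Averaging the email figures quoted in the lecture.
emails_per_day = 215e9      # emails sent and received globally each day
email_users = 2.6e9         # email users worldwide
work_share = 0.5            # "a little more than half" is work-related

per_user = emails_per_day / email_users          # ~83 per user per day, on average
work_total = emails_per_day * work_share / 1e9   # ~108 billion work emails a day

print(f"~{per_user:.0f} emails per user per day on average; "
      f"~{work_total:.0f} billion of those are work-related")
```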
The other important bit of our connected digital world is also built on the internet – and that is the web and the digital realm it has enabled. In the late 1980s, as Australia was getting connected and email was still fun, Tim Berners-Lee, an English engineer and computer scientist, was at CERN in Switzerland, and he took the idea of hypertext that Engelbart had showcased in the Mother of all Demos (and which Engelbart himself stole from Vannevar Bush’s idea of memex), and made it real. He wanted to find a way to share and access information across non-compatible computers and networks, a way to index information and make it shareable. In so doing, he created the thing we know now as the world wide web. The web was a different kind of connection; in its best form, it made it easier to find, create, share and curate content, from almost anywhere to almost anywhere else.
Much like packet switching and TCP/IP, the ideas that built the web were radical – they didn’t privilege a particular place, or a particular set of actors, or a particular set of devices. They seemed to make the world available to everyone; the web would be about true democracy, transparency, and a place where we would transcend our bodies and their limitations. If the internet had been built for military needs, the web would be its counter-argument. At least, that was certainly the story that we all told. From the heady days of the conversations at the WELL (Whole Earth 'Lectronic Link), to trade shows, tech demos, political speeches and the rhetoric inside many new companies built on the web, this was about building a new world. And that last group is important. The web wasn’t just about a new way of indexing information to make it accessible, it was a new space for commerce. Getting connected was business.
The first web-browser, MOSAIC, appeared in 1993. In that year there were an estimated fourteen million users online, accessing about 130 websites. Today 3.2 billion of us are online. I encountered MOSAIC in 1994, an early spring evening in Palo Alto. I was at my friend Dave’s house and he had just come back from an event at the Institute for the Future up the road, where they had showed off the “web” and this thing called Mosaic. So there we were, trying to use the web. We decided to access something iconic – we picked Neil Armstrong landing on the moon. I really don’t remember why. It took us hours, and there was a lot of fiddling around and waiting before we heard ... [the moon landing]. It was another moment in which the current reality shifted on its axis, and Dave and I wondered at the future rushing toward us. What else would this web bring, and what would it feel like?
We now have so many ways to the web, and so many things that connect us – to each other, to information, to ideas, to stories and to services. Most of us own devices that reflect these histories of the internet, the web and computing, but I bet we don’t think about it very often. We use services and applications for everything from banking and travel to dating and flirting. Think about what you were doing right before you were listening to me, or even while you are listening to me. Did you log onto Facebook or LinkedIn, post an update about something, share a link to a video, or a news story on Twitter? Did you take a photo with your phone, and play with filters before posting it? Buy coffee by waving your bank card at a machine on a counter? Message your colleague at work? Email your father to say hello? Text silly in-jokes to that person you like and wait hopefully for them to text you back? Look up where you are going on Google Maps, and wonder if you will make it on time? And all of that is built on computing technology characterised by speed, connectivity and intelligence.
It is striking, for me, to look back and realize how many things that seemed obvious and inevitable, were in fact new and still open for negotiation. That is, in some ways, the ultimate seduction of the digital world at its best – the future is right there, in your hand, and it seems so natural. You forget to ask where it came from, or why, or what will happen next. It is easy to be enthralled with the logic of new technologies; their promises of efficiency or fun, or some new kind of experience that will revolutionise everything. Those are indeed alluring promises. Likewise, it is easy to forget that technologies rarely spring fully formed into our consumer and work landscapes – they have taken some time to get to their current state. They were built and constructed in particular places, by specific individuals, with clear problem states in mind. Reminding ourselves of those contexts and those histories isn’t always easy, but is instructive.  As we think about our current digital world, and its threads of fast, smart and connected, it is worthwhile to remember they all came from somewhere. All technologies have a history and knowing those histories doesn’t mean we can predict the future, but it does mean we can ask better questions of our own future! So what might those histories tell us …?
While part 3 is coming and I decide what to quote: Frieda Chiu drew a comic about the typewriter as a machine of feminist liberation. So true, as I had reflected from 1990 to 2001. And about why the QWERTY keyboard is so ... lacking in logic.


Typewriter as a feminist liberation machine: yay, the link has its clothes on!

What a lot of people may not know about the typewriter in particular is that some of the earliest writing machines were designed for blind and visually impaired women, so that they could write and receive correspondence on their own.


And then there's the Edison Talking Typewriter.



And I tried - briefly - Pitman shorthand.



Back then typing was considered a pink-collar job.



A young person of my acquaintance - now a PhD candidate in psychology - drew a picture of a typewriting service. [She is one of the people I dedicated Milestones at the Collegiate to if you would like a further prod].



And probably not so far away from my 27-year-old self!



THE CLATTER OF KEYS
The typewriter as we know it was patented by Americans Christopher Latham Sholes, Carlos Glidden and Samuel W Soule in Milwaukee, Wisconsin in 1868. Their patent number, US 79,265, describes improvements in type writing machines — a bland description for a significant breakthrough in attempts to mechanise writing.
I am willing to bet most of us have heard of the industrial revolution and we vaguely remember the idea of mechanisation, automation and new machines.  One of the things we don’t talk about so much is that the industrial revolution created the need to manage a great deal of paper expediently.
All that new commerce required paperwork and bookkeeping – for banking, insurance, inventory, taxation, regulation, publishing, advertising and accounting.
Managing all that new data meant record keeping became increasingly important and time consuming. With a pen and paper, the average bookkeeper could write about 30 words a minute, and that wasn’t enough.
In the 1830s, Isaac Pitman, an Englishman, started to develop a way of writing phonetically. Ultimately, he would create Pitman shorthand, where shapes and symbols represent sounds not words, which when well-practiced lets someone write very quickly indeed.
A good stenographer could write 130 words-per-minute using Pitman — basically four times faster than just writing alone and close to the speed of speaking.
But Pitman created a new problem: you could write as fast as a human spoke, but your notebook pages were full of obscure symbols that weren’t really helpful for the “record” part of record keeping. And transcribing it back into legibility made it all very slow again.
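[Small aside from me: the "four times faster" claim checks out. The handwriting and shorthand speeds are from the article; the speaking rate is my own assumed figure for ordinary speech.]

```python
# Comparing writing speeds from the article, plus an assumed speaking rate.
handwriting_wpm = 30     # average bookkeeper with pen and paper
pitman_wpm = 130         # a good stenographer using Pitman shorthand
speaking_wpm = 140       # assumption: ordinary conversational speech

print(f"Pitman is about {pitman_wpm / handwriting_wpm:.1f}x handwriting speed, "
      f"and about {pitman_wpm / speaking_wpm:.0%} of speaking speed")
```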
That was where Sholes and Glidden came in. They thought they could speed up the whole process, to automate writing completely. They built a working prototype in a borrowed machine shop — it was basically a hacker/maker space.
Their machine had metal keys to represent each letter of the alphabet and a roll onto which they would type. To make all the letters fit, it turned out long metal strikes were necessary. But they would get caught up on each other, and Sholes and Glidden had to configure the keyboard in such a way as to slow down the typing.
Yep, that’s right, the keyboard we all use, some of us faster than others, was actually designed to make us type more slowly. It was designed to make us inefficient. And that design has been with us since 1868.
Remington and Sons, an American company that specialised in firearms and sewing machines, acquired the rights to commercialise the Sholes and Glidden Typewriter for US$12,000 ($313,000 today).
Production started in March 1873, but the machine was already capturing public attention. It featured in Scientific American in 1872 with a lot of heady expectations: “Legal copying, and the writing and delivering of sermons and lectures, not to speak of letters and editorials, will undergo a revolution as remarkable as that effected in books by the invention of printing.”
Even the company’s tagline for the typewriter pointed toward a significant disruption: “To save time is to lengthen life.” Despite all of this hype and favourable publicity, it wasn’t a runaway success. It was expensive, costing US$125 ($3,400 today); it was clunky; loud; temperamental; and most frustratingly, it was slower than handwriting and far slower than Pitman.
Over the next 40 years, the typewriter got faster slowly. It took considerable effort and investment, and lots of different experiments and solutions, but by the turn of the 20th century, an American typewriter cost as little as US$5 (about $185 today) and a good typist could type faster than handwriting, and the really good ones were a sight to behold.
Remington and Sons imagined the typists would be women, if only because they speculated that they would be patient and slower with the keys, and therefore less likely to make mistakes. And indeed most typists were women, because those jobs were done by “superfluous women”.
The first time I read that phrase in a book, I remember being so outraged. But it’s a term in demography and a piece of the typewriter story that we shouldn’t blow past. In the late 1800s in the US and the UK there were women who, because of various reforms, had a higher baseline of education, and because of the Civil War in the US and the Call of Empire in the UK, did not have male peers whom they might otherwise have married. So they were “superfluous” — but don’t be fooled, those superfluous women made history.
A different way to think about this is that the demands of the typewriter, for a skilled and patient operator, created space for women to join the work force. In 1881, the YWCA in New York city bought six typewriters and eight women took their first typing classes; only five years later there were more than 60,000 qualified and employed typists in American cities.
The presence of women in the workforce drove a range of other social and structural changes in the US, the UK and Australia. Some of those changes were seemingly simple, like the need for toilets for women — though perhaps not unexpectedly, this proved harder to solve than it ought to have (doesn’t that sound familiar?).
At a broader level, urban planning and cityscapes had to accommodate a new kind of worker, not bound for a factory, but for an office. Disposable income in the hands of single women drove new cultural activities and helped underwrite things as diverse as penny dreadful novels, department stores and new entertainment experiences.
Of course, typing was women’s work for the most part, and that meant it wasn’t as well paid as other clerical tasks performed by men; and, for the most part in the US, the women who held such jobs were white.
In ways that hold powerful echoes today, these women were often vulnerable to male power and harassment, and sexualised ideas about secretaries, typists and the typing pool were present from the very earliest moments.
It’s ironic, then, that the typewriter was one of the technologies that helped unlock the late 19th century suffragette movement. By making it possible for women to earn money, and to have exposure to new structures of work, knowledge production and infrastructure, and also the means to share ideas at scale, the typewriter had a lasting social impact.
It’s tempting to contain this story within the easy lines of innovation and market forces, and of a clever patent and a patient company. But there’s more to the story of the typewriter, because there is both more to the problem it was solving and more to the ripples that radiated out from it as a solution. 
Perhaps there’s a lesson to learn in that history of the American and English typewriter. A technology’s legacies can linger long beyond the moment of its relevance.
And this isn’t just about technologies — we need to read them in the contexts that produce them and acknowledge in turn the contexts that they might produce.
What is the question that history might help inform? What could be our 21st century typewriter, our sense-making, curation, circulation technology? Will speed be the right way to measure its success and impact?


Then there were the robot uprising and electricity:



THE ROBOT UPRISING
If typewriters are a technology we routinely underestimate and now frequently forget, there is another technology that is front and centre.
In fact, it feels like robots, as a technology, have been everywhere for a very long time. They are some kind of prehistory to the idea of smartness, and its legacy of artificial intelligence: they were smart, technical objects before we ever had this digital world.
But the thing about robots is that they started life in the theatre. More precisely, the word robot was coined in a play, by a Czech playwright and author back in 1921. His name was Karel Capek and he was a journalist-cum-novelist and playwright. In 1921, he was 30 years old. 
An educated man, and a child of privilege, he wrote critically about the world around him and its possible futures. In the aftermath of World War One, he was thinking about the nature of the dawning machine age, and the consequences of what he saw as a mass produced, mechanised war that had just played out all around him with devastating results.
In his play, a factory owned by a man named Rossum mass-produces mechanical creatures who resemble humans and who can be set to work. Over the arc of the play, the mechanical creatures become numerous and also increasingly unhappy, demanding the factory owners help give them more capacity — to reproduce, to love, to feel.
Ultimately the creatures are pitted against humans in an epic struggle that humans are bound to lose. Described variously as a satire, and later as science fiction, Capek’s play owes a great deal to both Mary Shelley’s Frankenstein and the stories of the Golem, and, from this vantage point, I see Blade Runner and the story of the replicants.
For Capek, one of the critical challenges of the play was what to name his mechanical creatures. 
He settled on the word “robot” — a word he and his brother, also a noted writer, made up. Well, they borrowed it from the Czech word “robota”, meaning a kind of forced labour, echoing, in particular, ideas of servitude and serfdom.
Encoded in its virtual literary DNA is the idea that robots are, by sleight of pen or perhaps keys, always and already embedded in a very particular power relationship with humans. In the play, the General Manager of the factory makes this clear:
"I wanted man to become the master, so that he shouldn't live merely for a crust of bread … I wanted nothing, nothing, nothing to be left of this appalling social structure … I wanted to turn the whole of mankind into an aristocracy of the world … nourished by millions of mechanical slaves."
The language of aristocracy and slaves is telling, and unfolds in the play in a familiar and feared trajectory of such power relations, inevitable struggle and revolution. There is an additional layer here too, for it turns out that Rossum, the name of the factory owner, deliberately evokes another Czech word, “rozum”, which can be translated to mean reason, or intellect.
The play’s name, then, “Rossum’s Universal Robots”, or RUR for short, says it all. Mechanical creatures are in fact the product of a particular kind of engineered human act – that of the rational – and their job is to do the work we won’t.
Capek is writing this world back in 1921. In the play, one of his central human characters puts it best:
"Young Rossum invented a worker with the minimum amount of requirements. He had to simplify him. He rejected everything that did not contribute directly to the progress of work. He rejected everything that makes man more expensive. In fact, he rejected man and made the Robot … the Robots are not people. Mechanically they are more perfect than we are, they have an enormously developed intelligence, but they have no soul …  Have you ever seen what a Robot looks like inside? … Very neat, very simple. Really, a beautiful piece of work. Not much in it, but everything in flawless order. The product of an engineer is technically at a higher pitch of perfection than a product of nature."
The play premiered in Prague in January to much acclaim. But it was a long way from Prague, in 1921, to almost anywhere else. And yet by October 1922, Capek's play had opened at the Guild Theatre, on New York’s Broadway.
The reviewer for the New York Times was underwhelmed – he wrote “a robot that fails to raise goose flesh does dire sabotage”. Not exactly a ringing endorsement, but one that firmly locates the story in its literary genealogy of monsters.
Six months later, the play opened in London’s West End. It was in Tokyo in 1924, and premiered in Australia at the Playbox on Rowe Street in Sydney in July 1925.
The Sydney Morning Herald called it a “brilliant satire”. Less than 20 years after the play began its journey in Prague, Capek’s work would find its way onto the BBC, as the first televised science fiction show and later as a full-length radio play.
Capek's robot didn't stay literary for long; the promise of a perfectly engineered mechanical human was too good to resist. In 1928, Captain WH Richards, along with a colleague, AH Reffell, built a "real" robot.
Eric, as he was named, presided over the opening of the annual conference of the Model Engineers Society in London’s Royal Horticultural Hall in September 1928. Eric was 1.8 metres tall, clad in aluminium armour with white light bulbs for his flashing eyes.
He was a polite robot. He sat and stood and bowed atop a platform of batteries which powered his actions, including his capacity to move, and, through a speaker system, to speak. His debt to Capek was clear: his breastplate bore the letters RUR —  Rossum's Universal Robots. 
Other robots followed. Westinghouse produced Elektro in 1938. A 2.1-metre-tall, Art Deco-inspired, walking and talking robot, he could even smoke cigarettes, thanks to a handy set of bellows in his head. He was photographed with starlets and Johnny Weissmuller, and even helped anchor the 1939 Tomorrow Land exhibit at the World’s Fair in Brooklyn, reappearing in 1940 with a robot dog named Sparko.
It is easy to laugh at Eric and Elektro. While Capek’s play suggested an uneasy co-existence, these first manmade robots appear less troubling. Of course they were also singular, and clearly required such a lot of human help — these were not Capek’s flawless order.
But while the physical technical creations continued slowly, the stories we told ourselves populated our imaginations, taking firm root in fiction, radio, film, television and cartoons. The tension between mechanical perfection and the death of humanity plays out over and over again, and our imaginations were progressively fueled by more and more sophisticated robots.
Which leaves me with this question: are the robots we fear the ones we are making, or the ones we already made? And how might we reconcile art and engineering? Science fiction and science fact?
How might we acknowledge that the very idea of robots came with its own moral play, that in the robot’s perfect DNA is also the story of its desire to kill us; that the ideas of autonomous machinery are embedded inside broader conversations about power and control; and that technology has a history; and that knowing it sometimes unlocks other ways of thinking and seeing?
How does knowing the robot’s artistic and literary history help us think differently about the ideas of smartness and artificial intelligence? What other histories should we tell? And what other voices are we missing as we make our histories of the future?
FLICKING A SWITCH
Robots initially came from Prague. They had a country, and they still carry it in their names. Most technologies start somewhere; so do the ideas on which they are built.
Sometimes the ideas and the technologies move across borders and boundaries with relative ease; sometimes they get stuck. The role of government, of the market, of cultural forces and regulations, can all shape how a technology arrives or doesn’t.
Our current, connected digital world is a collection of technological affordances, to be sure; but it is also shaped by public policy, standards, economic forces and even geography. It isn’t the first time a network infrastructure turned out that way. Before the internet, there was electricity.
And the thing about electricity is this: in its early days, it only really did one thing well, which was to make light. So, at first, it was just about street lights:  Los Angeles in 1876, Paris and London in 1878.
The world’s first public electrical supply system went online in Brighton in the UK in 1882 with a whole town’s worth of street lights. 
In Australia we were early adopters of electricity, both as state-funded infrastructure and as a private and commercial enterprise. We lit things up very early: the GPO in Sydney in 1879, the Eastern Markets and Athenaeum Hall in Melbourne in 1880; the Government Printing Office in Brisbane in 1883; the Adelaide Oval in 1889.
Initially, here in Australia, it was a patchwork of connections, small companies and small towns — a bit of homebrew electricity. But we did electrify sporting venues, theatres and pubs with alacrity.
Entertainment and cultural experiences were as much the drivers as efficiency gains or productivity. The house of Mr JWH Hullet was electrified in Port Augusta in October 1885 via hydro-electricity. He was the manager of the local waterworks, and later became a civil engineer in Adelaide — I would love to know his story.
The first street in Australia to have electrical lighting was Waratah in western Tasmania in 1886. And the first town to be electrified was Tamworth. Yep, Tamworth. In 1888.
Well, more accurately, Tamworth was the first place in Australia to implement electric street lights powered by a municipal power company. Driven by a desire to achieve independence from the increasing price of gas, which supplied the town’s lighting, Tamworth invested in building its own power station and purchased two steam-driven engines from the UK.
At 8pm on Friday 9 November 1888, Mayoress Elizabeth Piper turned a key, unlocked a switch and lit up the town with electricity. On that first night there were 52 electrical lights (retrofitted from gas) running along about 21 kilometres of Tamworth’s streets. There was a party: 3,000 people came, and there was even a sporting event conducted under new arc lights at the oval.
Despite the apparent successes of that first night, electric light wasn’t a popular choice for everyone. After all, town lighting wasn’t new; there had been gas. And while today we tend to experience electricity as quiet, it was, for Tamworth, a noisy affair, with the engine shed whooshing and hissing loudly in the middle of town.
The lights were also brighter than gas, and produced a different colour spectrum.  A letter in the local paper the day after the electricity was turned on expressed frustration: “I need not enlarge on the wholesome ghastly nature of the electric light as thus explained. The injury it inflicts on persons eyes etc … I sincerely hope that in a year or two when the ugly miserable saplings carrying the wires tumble down we may revert to lights that do good service in which we can look.”
I rather like knowing that, despite the charges of brightness, the city lights did not operate for the three days surrounding the full moon, because they couldn’t compete. The town of Young electrified the following year and offered electricity to residents in their homes. And Sydney switched on its own electrical supply with street lighting and more in 1904.
But making the case for electricity wasn’t as simple as saying, "Here is a new technology, adopt it."  Its first proof point was the lightbulb, and for that, there were competitive alternatives — gas, candles, windows, daylight, and apparently the full moon.
The development of a whole host of appliances, both electrifying old ones and creating new ones, helped drive adoption and uptake. There were concerted efforts to engage the Australian public with the merits of electricity: advertising, public demonstrations, showrooms, travelling door-to-door salesmen and electricians, and cultural spectacles — Luna Park, the picture palaces and cinema.
In Western Australia, an early power company sent flatbed trucks filled with electrical appliances out into the suburbs. These travelling showrooms not only sold appliances but also installed them, repaired them and fitted new wiring if necessary. Organisations like the Country Women’s Association and the Electrical Association for Women took an active role in educating the Australian public about how to manage and tame electricity.
The EAW published wiring diagrams on tea towels and distributed them along with their Handbook for Electricity. Florence Violet McKenzie, for whom my chair at the ANU is named, published an “all-electric” cookbook in the 1930s. It was full of recipes for cooking with electricity, but also instructions on how to manage electricity, electrical appliances and do basic household repairs.
It held a strong view about the values of electricity: “Needless to say that with an electric range, the cook is not subject to temperature changes, and she has a cool kitchen, hence she herself has less wear and tear on her nervous system as she has both less work and less worry.”
By the late 1920s, electricity had replaced gas lights in over two-thirds of Sydney’s homes, and new homes built in that decade were all wired for electricity.
The entrenched infrastructure providers, in this case the gas companies, fought back. The Australian Gas Light Company went to great lengths to resist electricity. While they might not have stopped the electrification of Australian homes, they did successfully campaign to keep our stoves connected to gas.
They offered free cooking lessons at their suburban and city gas showrooms, and later added tri-weekly radio shows and cookbooks. I still sometimes cook from an AGL cookbook, and my stove today is gas. Electricity and gas co-exist in my home, and I am willing to bet I am not the only one. New technologies do not always supersede the old ones, and we spend a lot of time negotiating and navigating between new and old. Which is to say, networks aren’t straightforward and getting connected isn’t just about a technological infrastructure.
There are important questions to ask. What is getting connected? Why? And how? What drives an infrastructure roll out? Efficiencies? A governmental or civic agenda? Cultural aspirations or experiences? Who is doing the connecting, and what is their motive? Will the network evolve and change over time? What are the measures of success and the driving forces? Who are the other voices in the story, and what might be their threads? And ultimately, what is the world that all this connecting will build?
So what does all of this mean: robots, typewriters, electricity? Technologies real and imagined, built and rehearsed? And why should we care?
The introduction of most new technology is accompanied by utopian and dystopian narratives. Whatever the technology in question, we seem to perpetuate the notion that it will change everything for the better or destroy everything we know and love.
We talk about fast, smart and connected that way. The reality is usually far less stark. Most technologies do indeed change things. But rarely the things we anticipate, rarely in the ways we anticipate, and usually not as quickly as we predict, or as seamlessly — bits of other technologies, infrastructures and networks keep peeking through.
The inspiration for new things often comes from unexpected places — a play in Prague, a patent from Milwaukee, a Mayoress in Tamworth. And the timescale on which all this transformation will happen is far from predictable. As our own fortune-tellers … we’re a bit rubbish.
But all the technologies that surround us now, and the ones that are coming, will have a history too, which might have already started. We need to know where the technologies come from, who built them, why and where, what people hope and imagine for them, and what the tacit assumptions buried inside them are.
The histories of our future are yet to be fully written and we have choices to make. But the window of our opportunities is narrowing, because there are things up for grabs."

2 comments:

Jeannette Cripps said...

Thank you, definitely food for thought within your post #TheMMLinky

Adelaide Dupont said...

Jeannette:

glad you took time to read the Boyer Lectures - at least the parts which I quoted.

How do you stay fast; smart; connected?

#themmlinky