Published July 21st, 2012 by Matthew Magain for Desktop Magazine.

Copyright notice: Copyright Desktop Magazine. Feel free to quote, but please link back to the original article source. For full article reprinting, please contact the post author for licensing information.

Amber Case is a cyborg anthropologist who is speaking at this year’s Web Directions South 2012 conference in October. Amber took some time out to talk with desktop about wearable computing, location-based services, entrepreneurship, and just what it means to be a cyborg anthropologist.

Tell our readers a little bit about yourself.

I come from a technical background. My grandpa worked on the fourth node of ARPAnet, which was the first internet, and he worked in the vector graphics sector of the graphic design department at the University of Utah. My dad built synthesizers and little computers all the time when he was a kid, and when I was 4 years old, he started reading me a bedtime story called The Evolution of Consciousness, and I started comparing my own brain’s functionality with that of a computer.

Needless to say, I grew up a giant nerd! And I was always looking for a way to understand people a little bit more, and understand the future of computers. When I was in college, I stumbled upon this field called cyborg anthropology, which is about using anthropological techniques to understand how people interact with external objects. And I thought, “Well that’s the thing for me.” The rest is history. Now I’m known as a cyborg anthropologist, whether I want to be or not! But it’s a great lens for stepping back and looking at the world in a new way.

It certainly sounds like technology is in your blood! I liked it when you told everyone at the start of your TED talk that they were a cyborg. But I’ve got to say, I was pretty disappointed to learn that I wasn’t Robocop or Terminator.

Well, one day you may be Robocop, in that maybe you’ll record a crime on your cellphone and you’ll be able to turn somebody in for wrongdoing! Perhaps not as exciting – fewer explosions, and not as many car chases.

I guess I’ll have to be patient for that stuff. In that talk, you succeeded in marrying prosthetics, mobile phones, social media, and wormholes that bend space and time into one cohesive thread. Could you elaborate on your suggestion that technology is making us more human?

If you think about the early interactions between humans, we had these tribes, with cave walls. And a cave wall was really a great way to externalise thought and allow people to communicate something outside of themselves. So that was a really important part of history, the externalisation of memory. But cave walls are not really portable. So the portable cave wall showed up in the form of a scroll. And scrolls were pretty exciting, and they worked quite well. But the issue with scrolls was that they were pretty hard to unroll. To make it easier to read, people cut the scroll up into little pages and bound them together: a book.

And that made it really easy for people to flip through. And then with lots of books, it became really easy to create information and externalise it. And then we needed some kind of centralised or decentralised repository to have anything appear on a page.

Then the phone came along, and the original phone was just a way to communicate across distance – to annihilate geography. Modern industrial society has this particular characteristic that it separates people, so families move from one place to another, separated across geography. The railway, and the airplane, and the industrial automobile allowed people to move anywhere and easily traverse space, so all of these people were separated by distance. And it was really hard for them to actually talk with each other because the only thing they had was written letters, and eventually the telegram. But when the phone came out, it was like creating this giant wormhole between one person on one side of the United States (or the world) and the other.

So suddenly these people who lived in these giant industrialised societies, where they walked among strangers with whom they had nothing in common, were able to finally call people and be close to each other – even though the device they were speaking into, in a corner of their house, made them look quite schizophrenic. Remember how the first criticism of cell phones was that “Oh no, people are going to look like they’re crazy. They’re going to sit in a room forever and speak on the phone and never interact with people again. It’s going to ruin society. It’s going to ruin conversation!” And of course that’s not what happened. Every time there’s a new piece of technology, we hear, “It’s destroying us, it’s destroying and eroding the social fabric. It’s making us less human!”

But honestly, the first industrial revolution really dehumanised people. It turned people into machines. If you look at people spinning wheels and giant fibres in textile mills, if you look at the Triangle Shirtwaist factory that locked its workers in and burned to the ground, you had this dehumanisation of humans and human labour. And you had people basically working at machines as machines, as cogs in a machine.

That’s why you had The Wizard of Oz book come out, where it’s all about Dorothy and the giant cyclone terrorising the Midwest. You have the industrial Tin Man who, as man industrialises, wonders if he has a heart anymore. And you have the Scarecrow, the prototypical farmer – can the farmer industrialise and still have a brain? It was all about this dehumanisation of reality.

So what happened when the telephone and all this technology appeared was that it suddenly allowed people to talk to each other no matter whether they were in a weird, isolated industrial suburb or city or strange environment. Suddenly, it re-stitched the social fabric back together and allowed people to communicate with each other. And when the cellphone came out, it stitched people even more closely together – it allowed people to text each other when they were not at home in that room with that landline telephone. It’s really just an extension of the self. Some people use their cellphone as a portable cave wall, allowing them to send messages to each other and store stuff outside of themselves. And some people use it as a way to stay connected to people who are far away. So I really think it’s helping us to be human again, after we’ve been really dehumanised and been functioning as machines for a long period of time.

I notice you’re also interested in wearable technology like the upcoming Google Glass?

Yes. Very, very interested. It’s pretty funny because the Google Glass marketing is: “Google Glass: it’s totally new. It’s a way to interact with reality.” Yet in the 70s, Steve Mann, who is now a professor at the University of Toronto, was walking around the MIT campus with 80 pounds of wearable computing equipment, live-streaming his life, basically using an early version of Google Glass. Students and colleagues of his, like Thad Starner, are working on the Google Glass project with Google. So that technology at that time didn’t have the design, it didn’t have the capability or convenience or user experience that allowed it to be used by the masses. But one or two key people were developing all of the interesting things that you could do with that.

One of the things that Steve Mann came up with was this concept of sousveillance, which is to look from below, instead of surveillance, which is to observe from above. Instead of the top-down surveillance society that he thought early technology would enable, he said, “In the future, everyone will have tiny devices that will be capable of recording anything at any time.” So during a protest or any civil rights violation, people will have the power to look from below, instead of from above.

His concept of wearable computing was very different from what I think people expect Google Glass to be. I think people expect Google Glass to be, “Well, I can take a lot of pictures all the time, which is really cool. But there’s also gonna be a lot of ads, right?” And Steve Mann didn’t like having other people’s images imposed on his reality. So he made a heads-up display that recognised rectangles with advertisements in them, cancelled them out, and put text messages, other pieces of information, or research papers over them. So as he walked down the street, he would just see what he wanted to see. He was able to filter and put an ad-block on reality. He would walk into a supermarket and only see the products that he wanted to buy. I think that’s a really important point – he was making his own reality, and he calls it diminished reality.

Augmented reality takes reality and adds to your view. Diminished reality takes the reality that you don’t like, edits it out, and lets you have the reality that you want. He began doing this in 1978. Before that, he made a little Walkman so he could hear his own music as he walked down the street. Of course, these were all outlandish, crazy-looking devices. And that was the main thing that prevented people from actually adopting them.

But another one of his manifestos was that modern computers are really quite annoying. You have to sit at a computer terminal and hunch over and contort yourself to the computer, whereas if you have a wearable computer on you, the computer’s shape contorts to your form, and you’re able to walk around. And whenever you come up with an idea as you walk through the woods, or think about something, you’re able to write it down. This echoes (Xerox PARC scientist) Mark Weiser’s comment – that the best technology should get out of the way and let you live your life. He called it calm technology – technology that recedes into the background when it’s not needed, and processes information.

And when it is needed, it intercedes into reality, actually shows up, and you can interact with it. I think this is where we’re trying to go with computing. At Xerox PARC, they had this idea that the world would be full of pads and tabs and little devices – which is now what we have. We have sensors and pads and tabs – these tiny little devices that all connect, and there are a few different screens that we interact with. And he thought these things would come about sooner than they did – they arrived around the 2010s instead.

I don’t think technology evolves as quickly as people think it does. There’s a lot of persistent architecture in the way. For instance, the big issue with the computer mouse when Doug Engelbart invented it is that he thought, “Well, this is a temporary solution to a human interface input problem.” Yet the mouse was around for 40 or 50 years before it finally, slowly got replaced by the multi-touch screen, or any of the other trackpad input devices.

So it’s a big issue that we have these ways of inputting data – we’re accustomed to the ways we use computers. And there are still going to be people who use regular desktop computers, and a bunch of people that won’t migrate over to heads-up displays no matter how sexy and cool they are – they’ll still be out of the price range for people. And I wonder about privacy issues, theft, people throwing rocks at people – there are going to be a lot of issues that occur when these devices show up in reality because it’s the final marriage of your eyes to a camera that you can actually take pictures with, and send to an external place. But it’s no different than having a cellphone – it’s got the same capabilities, it’s just easier to access.

You don’t have that cue where you take a phone out of your bag and suddenly everyone knows you’re about to take a picture – you can just take a picture. So it will be fun to see all of the ramifications around that, especially when somebody zones out in a meeting and starts browsing Google instead of paying attention to you. Steve Mann tried to come up with a way of solving this where, if you’re looking at the web on your glasses, they would turn dark and turn into sunglasses. And when you’re focused on talking to somebody, the glass would turn clear.

He has this great book called Cyborg, which is all about wearable computing. It’s just an amazing read – I highly encourage anybody to read it if they’re at all interested in wearable computing.

My brain is about to explode with all of that. Can you give any tips to designers who are finding themselves increasingly working in uncharted territory with all this technology? Because it sounds like the future, but it’s really only just around the corner, right?

Yeah, I think the future is finally here. The internet had to occur before we could have any of this nice ubiquitous connectivity that we’re seeing. And of course, there’s always the big hype cycle – the Gartner hype cycle – where everyone’s really excited about stuff like augmented reality. And then there’s the trough of disillusionment. And there’s the plateau of productivity where people actually build stuff with it, but it’s no longer as sexy as it was.

I think we have hopefully reached the end of that augmented reality excitement, and we’re running into the trough of disillusionment. I really hope so, because that’s when we’ll be able to actually build things. For me, augmented reality as it’s practised today is really, really tacky. What you want to do instead, as a designer, is take the core function of what you really want to enable for a user and strip away every single thing possible that gets in the way of that interaction. Once you can’t take anything else away – there’s a famous quote about this – then you have something that’s feasible.

Especially if you have an augmented reality heads-up display, you don’t have the great giant landscape and the ability to touch the screen like you do on a mobile device. And you don’t have the web connection of an actual web-connected device. You have this tiny flicker of a moment in which somebody needs to interact or do a very basic function. You don’t need a giant pop-up with beautiful bevelled edges to tell you that you have an email. You need a tiny dot – the most minimal amount of notification – to note that something has occurred. You can use other things like sounds. You can use things like haptic buzzers.

There are all these other things that people refuse to think of when they think of augmented reality – they just think of visuals. They think of three-dimensional shapes floating in the air that, honestly, will make people sick. There’s a lot of vertigo associated with heads-up displays. There’s a reason why Steve Mann has part of his heads-up display go into his eye at a 90-degree angle to the world. It’s something he’s gotten used to over time. But if you have the minimum amount of information that’s needed to get the point across, then you have something really elegant – an almost invisible interface that smoothly becomes the neural interface, or the sensory interface, of a person, that gets out of the way when it’s not necessary and shows up when it is. I think that elegance in interface design is going to be the most important thing anyone is going to be able to do in the future. Every crappy augmented reality app that tries to tell you where to find the nearest Starbucks in 3D is really not useful. And I think those will go away.

Although big advertising agencies with giant marketing accounts will keep trying to do the same thing again and again and again, and it will keep on failing. All you have to do is go back in time, over the past 30 to 40 years, and watch the ebb and flow of the money going into startups that have tried to do exactly that and failed. And watch the very few that have done well – especially in countries like Sweden and Finland, which have done things with non-visual augmented reality. Haptics, audio, GPS, text messages – it’s all out there but no one goes and digs into it. No one digs into the past. And so people are raising millions and millions of dollars, again, to repeat the same mistakes. I love the quote: “If you decide to look at the past, you can profit from it” instead of making the same mistakes. I think it’s my job as a digital paleontologist to find that history that no one has looked at, and try to bring proper attention to it. Because there have been so few people with very limited ability to publish… they have been trying to make these solutions for years. So I try and figure out how to tell people where the mistakes were made in the past, so we don’t waste a lot of real resources.

You know, this is private equity and a lot of people’s trust, and a lot of large monetary funds and public funds, going into projects that you can predict, from the outset, will fail. And you can’t go up to the founder and say, “Your project will fail.” That’s really quite rude. But you could have a repository of data that said, “If you’re trying a location-based social networking app, the only one that will work is Grindr,” because that’s a specific demographic that needs that. Anything else is not going to work, because there are issues with the number of people who are around at any given moment, even in a densely-populated area, who want to meet. And even then, you’ve got issues with GPS bouncing all over the place. These are the kinds of issues that get solved not in the real world but in the theoretical one that people have in their heads – and the fact that that thinking hasn’t been put into practice is absolutely absurd. And there are a lot of these issues!

So I try to look at those, and sometimes I grumble about the industry, but we’ll see another set of people coming up again and again trying to do this. Thankfully, Google Glass will be a platform that’s pretty accessible to people. Hopefully they’ll solve some of these issues, but I’m not sure how well they will. And the other question I have about it is the input device: are you still going to have your fingers slip around blindly on a slide-y screen to interact? Or are you going to do more gestural interaction? Or are you going to use a one-handed chording keyboard like the Twiddler, which Thad Starner and Steve Mann used, that allows you to type 60 words a minute with one hand while you’re standing at a street light? So there are a lot of unanswered questions, and the answers are expected to come out in 2014. I’ve put in an order for a developer version – I don’t actually have one yet. So I’m excited to try it out.

Good stuff. Tell us a little bit about Geoloqi, which is your startup.

What I saw in the industry was that there’s a lot being held back, because location is really difficult to handle on devices: raw location data, real-time GPS, setting up geo-fences, trigger zones, publishing geo-location data, subscribing to geo-location data, developing specific frameworks that take advantage of the particular and angsty GPS chips on iPhones and Androids and other devices. That got in the way of big companies and little companies and developers building really interesting location-based applications. And I don’t mean the ones that are very disempowering, like “Track this truck, and if it doesn’t go to this place, and goes to the pizza place instead, let’s find the driver.” More like: turn your lights on and off when you get home; automatically let your mom know that you landed at the airport and that you’re okay; let your spouse know that you’re an hour away from home; or allow Comcast or any of the cable companies to notify you that they’re 30 minutes away, so that you can actually be at home when a package or the cable guy arrives.
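Those home-arrival triggers all reduce to a geofence check: is the device inside a circular trigger zone, and did that state just change? Here is a minimal sketch of the idea in Python – the `Geofence` class and the coordinates are hypothetical illustrations for this article, not Geoloqi’s actual API:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

class Geofence:
    """A circular trigger zone that fires 'enter'/'exit' when the state changes."""
    def __init__(self, lat, lon, radius_m):
        self.lat, self.lon, self.radius_m = lat, lon, radius_m
        self.inside = None  # unknown until the first GPS fix

    def update(self, lat, lon):
        now_inside = haversine_m(lat, lon, self.lat, self.lon) <= self.radius_m
        event = None
        if self.inside is not None and now_inside != self.inside:
            event = "enter" if now_inside else "exit"
        self.inside = now_inside
        return event

# A fence around a hypothetical "home" point with a 100 m radius.
home = Geofence(45.5231, -122.6765, 100)
print(home.update(45.5300, -122.6765))  # first fix is ~770 m out: no event
print(home.update(45.5231, -122.6764))  # crossed into the fence: "enter"
```

A real platform layers the hard parts on top of this – batching fixes to save battery, smoothing out GPS jitter so a fence near a boundary doesn’t fire repeatedly – which is exactly the mess Case describes nobody wanting to build from scratch.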

There’s a lot that can be done when devices know where they are. But the big issue we saw was that trying to handle location on devices would be very draining to battery life. And people always had location on their roadmap and they just didn’t implement it – it was just a big, swirly, tangled mess. No one wanted to build it from scratch.

So we said, “Well, we’ll make a platform that’s basically a Lego block, that enables people to drag-and-drop it into their application and bring location into their app,” and that’s what we built. My co-founder has been tracking his location for the last four years, at five-second intervals, and he’s made this giant map of everywhere he’s been in Portland. He’s had a real fascination with location his entire life. He actually helped remodel a whole house and put all sorts of interesting IRC systems and home automation systems in it so that it could talk to people. There’s a little computer in the kitchen that talks to people, and when you walk up to the door, it recognises your face, lets you in, and then sends a notification to the network in the house that this person is home.

Wow, you guys really are nerds!

Ha! Well, I just wanted to bring that to life, because I saw all these amazing things being developed in the 70s and no one able to do anything with them today. And it’s high time that people are able to do this, because we now have a device with GPS in our pockets, we now have sensors, we now have wearable computing on the horizon. I didn’t see any developers or startup founders really trying to solve any hard problems. They were trying to solve the problem of, “What does this 18–25 group of people want to get for dinner tonight?”… and make the same dinner startup. That’s not a startup! That’s a website; that’s a side project.

Where were the kids that were tackling the really difficult, obnoxious problems in the industry? I couldn’t find them and I said, “Okay, I’m gonna have to do this. And even if it fails, I don’t care because we tried.” It was more of this silly, nerdy, valiant effort to try and tackle something that everybody else was not tackling because it’s risky and hard. But if it works, the results are extremely important for entire industries.

I’m having a bit of a hard time getting my head around the fact that you’re, what, 25? And you’ve done all these amazing things, like found a tech startup and give a TED talk. This self-assuredness and sense of purpose – where do you think it comes from? And how can we tap into that?

Let’s see. Well since I was little, I had insomnia and I would sit there trying to get my brain to go to sleep and I would think of all the devices that didn’t exist. I wanted something that I could hold in the palm of my hand that was larger on the inside than it was on the outside. And I couldn’t wait for digital cameras. And I came up with all these technologies I wanted.

I always imagined being able to give presentations to people that would change the way they thought of the world. I basically followed exactly what I had in mind for my future since the age of four. People generally don’t do that, I think. With the exception of people who say, “I’m gonna be a lawyer, doctor, firefighter, etc.” I had to do this thing.

There are some people in my family who are just extremely motivated and quirky, and I might have picked it up from them – even though I haven’t hung out with those particular people in my family because they’ve been too motivated to spend any time with me.

I think it’s this discomfort with accepting that the way the world currently is, is how it will always be. I think it’s the questions: Why is this thing the way it is? Why is this thing broken? Or when people go through a doorway and they don’t like it, they don’t stop to fix the door or examine it – they just say, “Oh, that was annoying.” It’s always about finding these things and overanalysing them.

Actually, I wouldn’t wish this thinking process, or life, or difficult journey on anyone, because a lot of the time I just wish I could walk down the street on a summer day and listen to pop music and wear tennis shoes. I don’t know what people usually do! But that seems like a much easier life. This one is fraught with uncertainty, and lots of people made fun of me in school, and even in college. So I never really belonged. If you want to go that way, the best way to do it is to work on things that you’re interested in when no one else cares, because then you know you have something that’s really interesting.

And just figure out the time differential that you have. Some people have ideas that don’t come to fruition for three years. Some are less fortunate and have ideas that don’t come to fruition for four, or even 40 years. My co-founder has ideas that come to fruition in three years, so he’s really lucky – my ideas are a little more long-term, so I’m less fortunate – maybe 10 or 15 years. And the problem is that when you have an idea and you wait to implement it, or you think it’s dumb because no one else cares, you’ll most assuredly see it somewhere in the world in three, four or five years – whatever your time differential is. And then you’ll hit yourself in the head and say, “Why didn’t I do that?” And you’ll regret it for the rest of your life.

So I decided that I wouldn’t regret that for the rest of my life, and after experiencing it a few times, I said: I’m just going to do this. I don’t care what happens, because I won’t have to watch somebody else doing it while I sit back, too afraid. So I guess that’s the advice I would have for people who would be entrepreneurs – which is a risky, highly distressing and stressful thing to do.

Very cool. And you’re coming to Australia in October, which is exciting for us. What can attendees to the Web Directions South conference expect from your presentation?

I’ll go over the history of wearable computing and location, and just give a nice historical context to what they’re experiencing now, to help people understand that this stuff isn’t new. I’ll share some lessons you can learn from the past, and there’ll be a lot of really great eye candy, to make it entertaining.

Amber, thank you so much for your time, and we look forward to catching up in October.

Alright, thanks a bunch!

You can follow Amber Case on Twitter at @caseorganic.
