Visiting Las Vegas

Last week I was in Las Vegas for DevConnections and the launch of Microsoft Visual Studio 2010. Normally you won’t catch me dead in Las Vegas along The Strip due to all of the tourists, the constant cigarette smoke and the pervasive heat.

This trip did not turn out so bad. I went in the early part of the year, when the temperature is in the mid-sixties, and at the beginning of the week, when there are few tourists and, with that, very little cigarette smoke.

What was great about this trip was not having to work too hard. I did not need to talk to clients, I did not have to go out every day and sell the services of my company, and I did not have to cover a huge conference as a member of the press. Just a nice, quiet, relaxing little conference, talking to vendors, software developers, and other business people.

It was the most relaxing time I have spent anywhere in the past three months. Almost like a mini-vacation. Plus I got a huge amount of time to think and write about personal development. But no gambling. I refrained from playing Blackjack this trip, though I did lose about forty bucks on the penny slots whilst waiting for my girlfriend.

The last day of my trip I was able to make it over to visit a friend’s house and take a look at the architect’s plans for the new house they are building. What was fun about that was discussing all of the ideas they had for the literal castle in the air that they are building.

One thing I did realise, though, is that you can take the geek out of the ghetto but not the ghetto out of the geek.

My friends live in a double-gated country club community where everything is planned and not a blade of grass is out of place. I actually found it very depressing, and the whole area set my teeth on edge. But what I found most amusing was that these two wonderful people still lived like traditional sci-fi/technology geeks, with stacks of books haphazardly piled up wherever they went and a toilet cistern that did not quite work.

During a walk across the casino floor of the Bellagio I had a particularly interesting philosophical conversation that I really need to write up into a post. But what was most striking about it was that I did not record it in any way, shape, or form. Every recording device I had on me decided to pick that exact moment to stop working: my little SONY voice recorder, my SenseCam, my cell phone.


I had this sudden mental disconnection of “should I talk about this if I’m not recording it? What if I forget something I said? What if I don’t talk about it now but then forget to talk about it later?”

I was struck dumb (but only for a brief moment) by this quandary of suddenly being completely disconnected from the world and recording nothing at all.

Will future generations suffer angst at not recording, at not being connected, at being isolated from the world? I wrote about this in a post-singularity short story years ago but I experienced it first-hand for myself in a very real sense, right there, on the casino floor, in the most mundane of settings I could imagine.

SenseCam: An FAQ About My Personal Experiences Wearing One

I have written a little about the SenseCam and my experiences with it, but there are still many questions people ask, so I thought I would attempt to answer some of the more common ones here.

What is a SenseCam?

A SenseCam is a gadget that, at the very least, captures images of people and places at regular intervals or when the camera determines something "interesting" is taking place in front of the lens.

Unlike a regular camera, which the user operates by holding it up to the subject and pressing a button, the SenseCam automatically snaps pictures as and when it decides.

A SenseCam is worn on a lanyard around the neck, hanging from a belt loop, or on an arm band. By taking regular images throughout the day as the wearer goes about their daily life, a visual record is built up of the places the wearer has visited, the people they interacted with, and the activities they engaged in.

These are pictures of SenseCam devices created by Microsoft.

Microsoft Research SenseCam (three device photos)

And if you are curious about my particular SenseCam, it is this:

SONY Ericsson k850i SenseCam

It is just a plain vanilla SONY Ericsson k850i with a custom SenseCam application running on it. After all, “software maketh the machine” is a concept I have been preaching for the past 30 years.

Who wears a SenseCam?

Lifeloggers/lifebloggers/lifegloggers, technology mavericks, Alzheimer’s patients, or anybody interested in capturing the significant and not-so-significant events in their life.

There have been some notable technology people that wear a SenseCam or SenseCam-like device, such as Steve Mann and Gordon Bell.

Why do you wear a SenseCam?

I developed and began wearing a SenseCam-like device with the notion of recording as much as possible of the activity and decisions taking place within my software start-up company.

In the intervening time, wearing a SenseCam has taken on different aspects, and I now also use it to remember significant events in my life, such as a Christmas party I attended, dinner with friends, or other important times. I hope that one day all of the data I am collecting can be processed and perhaps provide an interesting case study of a human life.

Though the community is small, more and more people are beginning to show interest in SenseCam-like devices for recording their lives and events, and also as an evidence-gathering technique.

Even before the term SenseCam was coined, before computers and electronics could be miniaturised sufficiently, people, such as Buckminster Fuller, gathered copious amounts of data about their own lives, leaving nothing out in case it was significant.

What do SenseCam images look like?

SenseCam images look just like regular photographs taken with a digital camera. Sometimes they are blurry, especially when the image was taken under electric lighting with a lot of movement by the wearer; many times the pictures are crystal clear.

Below are some examples of SenseCam images that I have taken around the Los Angeles and San Francisco area.

One of my desks at the office; Santa Monica Pier; a typical sight in Venice

San Francisco Chinatown; San Francisco pier; a busy intersection

The Novel Cafe; the IMF office; visiting a client

I am currently experimenting with using the accelerometers in the cell phone to counteract the excessively blurry images captured under electric lights. This works by taking the picture the instant the accelerometer indicates that the kinds of movement that cause blur have ceased for long enough to capture an adequately blur-free image.
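The capture-when-still idea can be sketched in a few lines of Python. This is an illustration only: the sample shapes, threshold, and window length are invented for the example, not taken from the actual SenseCam software.

```python
import math

STILLNESS_THRESHOLD = 0.05  # max allowed change in g between samples
STILLNESS_SAMPLES = 8       # consecutive calm samples before firing

def magnitude(sample):
    """Overall acceleration from an (x, y, z) accelerometer reading."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def find_capture_index(samples):
    """Return the sample index at which the shutter should fire, or None.

    Fires once the acceleration magnitude has stayed nearly constant for
    STILLNESS_SAMPLES consecutive readings, i.e. the device is still.
    """
    calm = 0
    previous = magnitude(samples[0])
    for i, sample in enumerate(samples[1:], start=1):
        current = magnitude(sample)
        if abs(current - previous) < STILLNESS_THRESHOLD:
            calm += 1
            if calm >= STILLNESS_SAMPLES:
                return i  # still long enough: take the picture now
        else:
            calm = 0  # movement resumed, start counting again
        previous = current
    return None
```

Feeding it a jittery run of readings followed by a steady run returns the first index deep enough into the steady run, which is exactly when a blur-free frame becomes likely.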

How many images does the SenseCam take?

That depends on the activity level of the scene and also how much I am moving. The cell phone I use that runs the SenseCam software has accelerometers in it, so the software can determine how far I have moved, and take a picture if it is a significant distance, which at this time is set at 3 metres.

The SenseCam will also take a picture if it recognises a human face in the scene, though this feature is not particularly robust at this time due to the constraints I have placed on the software. These constraints are imposed by considerations of power consumption rather than computing power.

Pictures are also captured if there is a significant change in light levels within the environment, usually indicating a move from one location to another.

And finally, the SenseCam will capture an image at regular time intervals, which I set to every 15 seconds, though this is user definable through a simple graphical user interface menu system.
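Putting the triggers above together, the capture decision can be sketched as a small function. The names and the light-change threshold are assumptions for illustration; the distance and interval values match the figures given above.

```python
DISTANCE_TRIGGER_M = 3.0    # capture after moving this far
LIGHT_DELTA_TRIGGER = 0.25  # fractional change in ambient light level
INTERVAL_TRIGGER_S = 15.0   # fallback: capture at least this often

def should_capture(moved_m, light_now, light_at_last, seconds_since_last):
    """Return which trigger fired, or None if no picture is due yet."""
    if moved_m >= DISTANCE_TRIGGER_M:
        return "distance"   # moved a significant distance
    if light_at_last > 0:
        delta = abs(light_now - light_at_last) / light_at_last
        if delta >= LIGHT_DELTA_TRIGGER:
            return "light"  # probably entered a new location
    if seconds_since_last >= INTERVAL_TRIGGER_S:
        return "interval"   # regular timed capture
    return None
```

A real implementation would also fold in the face-detection trigger, which is omitted here because it needs camera frames rather than sensor readings.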

Over the course of a day I will capture between 1,000 and 9,000 images depending on scene activity, events in my life, how long I am awake for, and also how long I wear the SenseCam during the day.

How long have you been wearing a SenseCam for?

I have been wearing my SenseCam for a little over two years. In that time I have captured approximately 500GB of data totalling approximately 1,300,000 images and a little over 10,000 hours of audio.

Do you always wear your SenseCam?

"Sex Kitten" on my bed

At this time, no. There are many days when I do not wear the SenseCam at all, simply because nothing exciting is going to happen while I am walking at my treadmill desk, with the door closed, writing articles like this one.

I wear the SenseCam only occasionally on mundane days; it is when I leave my home or office that the possibility of exciting things happening around me rises. I have enough commonplace pictures of me drinking coffee, cleaning my bicycle, eating lunch, or writing code that I do not feel a need to capture any more. (Maybe not so many pictures of me cleaning my bicycle.)

Having said that, I might not be wearing my SenseCam and taking pictures all the time, but the SenseCam is recording audio and geo-location data 24 hours a day as I go through life.

What resolution do you capture images at?

I have a cell phone with a five-megapixel camera, so I normally capture at that resolution, with about a 60% JPEG compression setting. I find this gives good-quality photos without overly taxing the available memory.

As far as I can determine, capturing at higher or lower resolutions, or with more or less JPEG compression, has a negligible effect on battery life, so the 60% compression setting is really just a convenience for storage space.

What lens do you use?

Wide-angle lens

I make use of two lenses on my SenseCam: the built-in lens of the cell phone, which is not particularly good at capturing much of the scene in front of me, and a cheap, magnetic-mount wide-angle lens, purchased at Fry’s Electronics, that captures a larger area.

The wide-angle lens is a little heavier and more obtrusive when it is pointing at people, so I usually only make use of it when I am out and about, running errands.

How long does the battery last?

Depending on how much activity is taking place, the current SenseCam battery lasts anywhere between 6 and 18 hours. As the SenseCam is just a regular cell phone running some fancy software, it is trivial to carry a spare battery and quickly swap it in before the charge drains too far. I can also charge the battery via USB, and I do just that when I am walking on my treadmill or driving around town.

What sensor data do you collect?

Apart from the images and audio, the SenseCam I wear collects geo-location information from cellular towers, light levels, acceleration, and tilt. My SenseCam is a piece of software running on a high-end cell phone, so I am limited to the sensors built into it. I also capture GPS position from a separate GPS data logger.

I am not willing to begin hardware hacking at this time; not until I have adequate software will I feel it is worth expending the extra effort, time, and money.

How do you use the sensor data?

The SenseCam software uses the light levels to determine if someone has walked in front of the camera, or if the wearer has entered a new location.

The geo-location data is used to determine where the wearer is situated within the physical world, and it is used to tag images once I upload them to my workstation.
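One plausible way to do that tagging, sketched here as an assumption rather than the actual PySenseCam code, is to join each image to the nearest GPS fix by timestamp:

```python
from bisect import bisect_left

def nearest_fix(gps_log, image_time):
    """Return the GPS fix closest in time to an image.

    gps_log: list of (unix_time, lat, lon) tuples, sorted by time.
    """
    times = [t for t, _, _ in gps_log]
    i = bisect_left(times, image_time)
    if i == 0:
        return gps_log[0]           # image predates the whole log
    if i == len(gps_log):
        return gps_log[-1]          # image postdates the whole log
    before, after = gps_log[i - 1], gps_log[i]
    # Pick whichever neighbouring fix is closer in time.
    return before if image_time - before[0] <= after[0] - image_time else after
```

The binary search keeps this fast even against a full day's worth of one-per-second GPS fixes.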

I am not making full use of tilt angle and acceleration at this time, but I intend to in the near future, for auto-correcting the angle the SenseCam was at when a picture was taken.
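The core of that correction is simple: when the camera is roughly still, the accelerometer reads the gravity vector, and the device's roll around the lens axis falls out of two of its components. The axis convention below is an assumption for illustration.

```python
import math

def roll_degrees(ax, ay):
    """Roll of the device around the lens axis, in degrees.

    ax, ay: accelerometer readings in the plane of the phone's face,
    dominated by gravity when the device is held still. The result
    could later be used to rotate the captured image back upright.
    """
    return math.degrees(math.atan2(ax, ay))
```

With the phone hanging level on its lanyard this returns roughly zero, and a photo taken with the device tipped 45 degrees would get a 45-degree counter-rotation.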

What features does a regular SenseCam have over yours?

Microsoft’s SenseCam has a lot of research dollars thrown at it. Mine has just me and whatever time I can spare to cobble together the software. In terms of hardware, the only two features I can find mentioned for Microsoft’s SenseCam that I cannot replicate at this time are a built-in heart-rate monitor and an infra-red sensor for detecting body heat.

There may be SenseCam models containing sensors other than the standard complement, a bit like unique Doctor Who Daleks modified for a specific mission, but I have yet to locate any information on them.

In terms of software, it appears that Microsoft Research has created many different desktop applications for determining significant life events, replaying the daily record of images, and so on. This is where I fall behind, because I simply do not have the man-hours available to emulate all of the work they are doing.

What features does your SenseCam have over Microsoft’s?

Other than the fact that my SenseCam is a fully functional cell phone, the major hardware differences between my SenseCam and Microsoft’s are the amount of storage, the amount of computing power, the battery life, and the high-resolution, 5MP (five-megapixel) camera. By using a cell phone as a SenseCam I also get a super-bright LED light that can be enabled in low-light conditions, an LCD screen to review images I have taken, and audio recording with playback for a quick review.

For software, I cannot compete with a team of programmers and researchers but I can learn from their work and create similar applications to them.

My on-phone SenseCam software has a "take picture now" button, a "quick delete" button, "suspend capture" button, "pause/resume" audio recording button, a pedometer, snapshot time logging reminder, gated time logging, and other features I have tinkered around with over the past two years.

Do you have any intention of releasing this software?

At this time I have no intention of releasing my SenseCam software. That is not to say that I will not. Just for now, no.

The software is incredibly easy to replicate and anybody interested in life logging with a SenseCam could create the basic SenseCam software in a matter of days.

One of the major reasons I am not prepared to release the software at this time is that I am not willing to turn it into a project I have to support. The SenseCam software only runs on two cell phone models that I know of: the SONY Ericsson k790a and the SONY Ericsson k850i.

Along with that, the companion Python application, PySenseCam, is certainly not ready for prime time. All of the interface is programmer-designed and thrown together, so just trying to use it and make sense of it would require a willingness to tinker and to put up with untold bugs that, at this time, I am not willing to fix.

How can I build a SenseCam for my own use?

My recommendation, if you want to quickly explore creating your own SenseCam, is to get a quality cell phone with a good OS, such as an Android phone, an iPhone, or a high-quality J2ME phone.

The best coffee in town, from Groundworks Coffee on Rose Ave, Venice, CA

The SenseCam software that runs on the cell phone should not take more than a day or two to get up and running, and then you can tinker and add features. The desktop software that reviews those images and lets you tag them, manipulate them, and make sense of them is where most of your development effort will be spent.

Many people, including the popular press, are right now focusing on the device rather than thinking about the software. It is a little like worrying about the engine in a car and all the things it can do, rather than what the car is, how it will change the user and society, and how it will need to change to accommodate the user and our future needs. We are all concerned with the “what” of a SenseCam, rather than the “how.”

Who developed this software?

The SenseCam software running on my SONY Ericsson k850i was developed by me, Justin Lloyd, in my spare time, whilst managing and running my video game software start-up, Infinite Monkey Factory.

There is also a companion application, called PySenseCam, that manipulates and wrangles the large quantities of collected data. It ties in not just to the SenseCam but also to any audio I have recorded, GPS geo-location, websites I have visited, and snapshots of my computer desktop taken at regular intervals. In a way, I am attempting to make PySenseCam replicate the functionality of MyLifeBits or the Dymaxion Chronofile.

Who developed the original idea of the SenseCam?

The original name "SenseCam" was coined at Microsoft Research Centre in Cambridge, England, by, as far as I can tell, Lyndsay Williams, one of the original researchers on the project.

Gordon Bell has done a lot of work with the SenseCam too, but the original idea has been thought up multiple times by many people, with the earliest known reference being the “memex” by Vannevar Bush, which dates back to the mid-1940s.

From early 2001 up to Thanksgiving 2006 I had been using a digital voice recorder and general purpose digital camera to log all of my conversations and take snapshots of interest for events or tasks I would need to remember later, such as purchasing a particular book I had just seen.

I thought I was coming up with something groundbreaking and original around Thanksgiving 2006, when I sketched out the idea in my notebook after seeking, and not finding, an adequate commercial solution that would do what I wanted.

I began developing the idea further, looking for a simple device I could put to use, when Apple announced the iPhone in Q1 of 2007. Aha, the light went on above my head: just what I wanted. And then SONY Ericsson announced the k790a, which was smaller, lighter, and more practical, with an operating system I was already familiar with.

Over the course of a couple of days I developed an application for the cell phone, and in the following weeks a desktop companion application to manipulate the data. At the time it was just called the “snapshot prototype,” and it did most of what the main functions of the Microsoft Research SenseCam can do today.

A month or two after developing the software, in April or May of 2007, I spotted Gordon Bell on the cover of a magazine that was about five months old by then, with an article talking about “his” SenseCam, and I immediately thought, “bugger, nothing new under the Sun after all.” The magazine was published around the same time I was coming up with the concept for my own “SenseCam,” though I did not call it by that name at the time.

The original article covering Gordon Bell’s experiences with a SenseCam is here and here. Interestingly, some of the points raised by the original article’s author I have also written about in the past, with regard to how our perception of memory changes what it means to be human. What if you could forget anything you chose? What if you could remember an experience so vividly that you would swear it was real? What happens when every human thought has already been thought? I will transcribe those talks and articles into digital form, bring them up to date, and post them here in the near future.

What do you do with all of the data you collect?

Fantastic steampunk ship

Other than pulling out significant events or discussions from the archives to create an article around, at this time I do not do very much with the collected data.

I grab images for use in my articles on this website, and I manually transcribe notes I need to remember, but most of the data sits in a directory on my server, collecting the equivalent of digital dust.

The tools do not yet exist to adequately manipulate and wrangle the amount of data that a SenseCam collects. I am working on an application for my personal use that attempts to solve this problem. Unfortunately, competent general-purpose machine vision and automated voice transcription are still some years away from being good enough.

How do you manipulate and identify the images?

I make use of Windows Live Photo Gallery, and a small application I have created called PySenseCam that copies images from the cell phone to a network hard drive, renames them, and sorts them according to the date and time each image was taken.

PySenseCam tags each image with geo-location data from the cell tower ID and also from available GPS data captured from a small GPS data logger.
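The copy-and-sort step could look something like the sketch below. It assumes a YYMMDD_HHMMSS_sequence filename pattern and a year/month/day destination layout; both are illustrative guesses, not the actual PySenseCam behaviour.

```python
import re
from datetime import datetime
from pathlib import PurePosixPath

# Assumed filename pattern: 6-digit date, 6-digit time, sequence number.
PATTERN = re.compile(r"^(\d{6})_(\d{6})_(\d+)\.jpg$")

def destination(filename, root="/srv/sensecam"):
    """Map a captured image to a date-sorted archive path, or None.

    e.g. a frame taken 2009-05-16 13:21:42 lands under 2009/05/16/.
    """
    m = PATTERN.match(filename)
    if not m:
        return None  # not a SenseCam capture; leave it alone
    taken = datetime.strptime(m.group(1) + m.group(2), "%y%m%d%H%M%S")
    return str(PurePosixPath(root) / taken.strftime("%Y/%m/%d") / filename)
```

Sorting by a timestamp embedded in the filename, rather than by file-modification time, survives the copy across to the network drive unchanged.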

How do you store and sort your data?

I store all of my data on a regular hard drive: JPG format for the images, WMA for audio, and XML for cell tower and GPS data. Any other sensor data is stored either as meta-data within the JPG or WMA file, or in a plain XML file, depending on the source.
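As an illustration only, one record in such an XML file might look something like the fragment below; the actual schema is not described in this post, so every element and attribute name here is invented.

```xml
<!-- Hypothetical shape for one sensor-data record -->
<fix time="2009-05-16T13:21:42Z">
  <cell mcc="310" mnc="410" lac="1234" cid="56789"/>
  <gps lat="33.9850" lon="-118.4695" alt="12"/>
  <light level="0.62"/>
  <tilt degrees="4.5"/>
</fix>
```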

How do you view your SenseCam data?

I do this in two ways. I make use of Windows Live Photo Gallery for browsing and manual tagging, and also a small Python application I created, called PySenseCam, which allows me to quickly play back images and data, with audio snippets, as though it were a slide show of my day.

How do you tag your data?

I am currently working on features for the PySenseCam application, making use of OpenCV and a few other open-source projects that do rudimentary image detection, to automatically tag images and determine significant events during my day.

I also use PySenseCam to manually tag groups of images, indicating the start and end times of noteworthy events.
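The interval-tagging idea reduces to a very small operation: given a named event's start and end times, gather the images whose timestamps fall inside it. The data shapes here are assumptions for the sketch.

```python
def images_in_event(images, start, end):
    """Collect the images captured during a tagged event.

    images: list of (unix_time, filename) pairs.
    start, end: the event's start and end timestamps, inclusive.
    """
    return [name for t, name in images if start <= t <= end]
```

Tagging the interval once, instead of tagging hundreds of individual frames, is what makes manual annotation of a 9,000-image day tolerable.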

Do you think the SenseCam will become ubiquitous?

I think a SenseCam-like device will eventually become ubiquitous, yes. The SenseCam in its current form is cumbersome and intrusive at times, and I believe that once the hardware engineering problems are solved, much like cell phones and computers, a large swathe of the population in the developed world will make use of such a device. I do not see this happening for at least a decade, possibly two, not until there is a compelling reason to do so, but I do believe it will happen.

One day, you will record your entire daily existence in high-definition video and audio, and on that day, we will all realise just how boring everybody else actually is. Oh, and porn, someone will make a porn movie using a SenseCam-like device in the next decade.

Got other questions I did not answer here? Please send them to me and I will add them to this article or if they are interesting enough, create an entirely new article based specifically around the questions and issues you raise.

LifeLogging With A SenseCam Video Round-Up

A quick round up of various SenseCam videos found on YouTube and also a good overview of what a SenseCam is and what it can do. Just in case you are wondering why I am so interested in the Microsoft SenseCam, it is because I developed my own over two years ago and have been wearing it ever since, recording as much of my daily life as I possibly can.

If you are aware of any other interesting videos that I might have missed, please send them to me and I will post them here.

Quick BBC Intro To SenseCam


Quick Summary Of What SenseCam Is


Steven Hodges Talking About Memory


SenseCam for Alzheimer’s Sufferers

One of the applications the SenseCam technology could be put to. Personally, I think the SenseCam, and the software applications to sift and sort all the data collected, will be truly amazing once it becomes ubiquitous and everyone has one and needs one. I cannot imagine living without my SenseCam now that I have one.


Building A SenseCam Whilst Wearing A SenseCam

Oh boy, I love women who know how to handle a soldering iron and can write code in a real programming language.


Tate Gallery Visit

A visit to the Tate as recorded by the SenseCam. I have similar visual records of my visits to the Terracotta Warrior exhibits and other museums.


Stop Motion Aikido

I am not sure if this was taken with a SenseCam or not, but it shows up under “SenseCam” when I search YouTube.


Going About Your Daily Life


Video Diary

A diary of someone’s day taken from the viewpoint of the SenseCam. What is notable about this is that the wearer occasionally removes the SenseCam to photograph themselves.

As you can see from the viewpoint in the video, there are still issues with the placement of a SenseCam: some areas of the scene are cropped out by where the wearer has positioned the lens.

There are also lots of shots of the car’s steering wheel, and this is a common complaint against my own personal SenseCam. Situating it too low misses the view that I can see; it is almost like I need a separate camera lens mounted just behind my ear to capture what I see, rather than just a portion of it.


SenseCam Around Cambridge

Lyndsay Williams’ journey around Cambridge, set to some funky techno. Not sure if the music in the video was originally chosen for it or whether some YouTube hack added it later, but it is an interesting project.


PowerPoint About SenseCam Architecture

This is a classic “bad PowerPoint” slideshow, but it is interesting for the fact that it shows some of the architecture of the SenseCam and some of the uses.


Over the past two years I have become so dependent on my SenseCam for capturing images of my life and journaling my daily existence that I cannot now imagine living without it. To paraphrase a certain famous actor, “You can have my SenseCam from me when you can pry it from my cold, dead fingers.”

Many of the images you see in my blog posts were taken with my own SenseCam, and I have so many great shots to share I do not think I will ever run out of images to choose from before I run out of things to say.

Again, if you know of any other SenseCam videos I am missing, send me a link and I will update this post with them.

BBC Video About The SenseCam

This video is a bit old, but I just found it, so it is new to me: James May, of the BBC show “Big Ideas,” takes a quick gander at Microsoft’s SenseCam. James wore the SenseCam he was loaned for just a weekend and immediately hit the issue that everyone who wears a SenseCam already knows: the sheer amount of data captured by the device.

How do you make sense (pun intended) of it all? It is interesting to see that the software the Microsoft researchers have come up with is no more sophisticated than the Python software I have managed to develop for the same kinds of manipulations, i.e. locating significant events and tagging them in a semi-autonomous way.

Over the past week or so I have been experimenting with SURF and SIFT, using OpenCV and Python, to automatically determine places I have been to before. I can do this with GPS, but it would be nice to have an automated process (about 70% feature-complete right now) that can recognise rooms I have been in at the office, at home, or at other locations I frequent, and automatically group them together.
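The real SURF/SIFT matching needs OpenCV, but the shape of the "same place?" decision can be shown with a dependency-free toy stand-in: an average-hash comparison of small grayscale thumbnails. Everything here, the hash, the threshold, the thumbnail idea, is an invented simplification, not the actual pipeline.

```python
def average_hash(pixels):
    """Binary fingerprint of a thumbnail: 1 where brighter than the mean.

    pixels: flat list of grayscale values for a small, fixed-size thumbnail.
    """
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def same_place(pixels_a, pixels_b, max_differing_bits=4):
    """Crude check that two thumbnails show roughly the same scene."""
    diff = sum(a != b for a, b in
               zip(average_hash(pixels_a), average_hash(pixels_b)))
    return diff <= max_differing_bits
```

Feature descriptors like SURF and SIFT tolerate viewpoint and lighting changes far better than this; the hash only illustrates the grouping step that would sit on top of the matcher.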

I am also working on using OpenCV to automatically recognise human faces and group images together, automatically tagging those people that I know and indicating people that are not tagged.

I am still making use of Windows Live Photo Gallery, simply because it offers some very fast image browsing and tagging functionality, along with PhotoSynth, but I have begun to use it less and less as my own application develops new features. With SURF analysis I have an almost complete PhotoSynth clone that can create a 3D scene from all of my SenseCam images.

I am wondering how Alan, the researcher working on the software, is able to automatically determine significant events in a day. Currently I am wrestling with this problem by looking for a gathering of human faces, significant light changes in the environment, or time spent within a small geographic region (determined by geo-location) that is outside my normal pattern of existence. But I have yet to fathom how the Microsoft software does it.
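One way to combine those three signals into a significance score is a simple weighted sum, sketched below. The weights and threshold are guesses chosen for the example, not values from either my software or Microsoft's.

```python
def event_score(num_faces, light_change_fraction,
                dwell_minutes, usual_dwell_minutes):
    """Heuristic significance score for a candidate event."""
    score = 0.0
    score += 2.0 * num_faces                    # gatherings of people
    score += 5.0 * light_change_fraction        # moved somewhere new
    if dwell_minutes > 2 * usual_dwell_minutes:  # lingering unusually long
        score += 3.0
    return score

def is_significant(score, threshold=6.0):
    return score >= threshold
```

Tuning would replace the hand-picked weights with values learned from events the user has manually tagged as significant.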



It is good to see the popular press taking an interest in these devices, but I still fear that they are focusing on the wrong thing, turning their attention to the “man jewellery” rather than what happens after the images have been captured.

How SenseCam’s External Memories Screw With Your Own Perception

Ever recalled “facts” about an event in your personal life that simply were not true?

Sometimes termed False Memory Syndrome, it has come to public consciousness mostly because of alleged sexual abuse victims who were never abused, but were instead led to believe false memories created by a psychotherapist.

This can also happen to you, when your recollection of certain facts, or details of an event, are swayed by a professional witness or authority figure, such as a police officer or other public official.

I had my very own “false memory” incident on Monday morning, when I mentioned in an article I wrote that I had been sitting in the Cow’s End coffee shop on Venice Beach, dressed in my Classic Gaming Expo shirt, ripped jeans, and Rocket Dog sandals, pounding away at the keyboard.

The only problem was, I was not.

On Saturday, I went to the Novel Cafe in Venice, wore a blue polo shirt, reasonably tidy jeans, and Rocket Dog sandals, and the sky was overcast for the better part of the day.

The SenseCam pictures I browsed from that day, whilst putting the final touches on the article, were actually from a Saturday the previous year, late in July, when I did sit in the Cow’s End and write. I had just bought that particular pair of sandals, so they were brand new, and I threw out the ripped jeans sometime between September 1st and September 5th, 2008. I could not have worn some of the clothing items I thought I did, because I do not even own them anymore.

I was looking at SenseCam pictures from the wrong date, and was sure that I had gone to the Cow’s End on Saturday, the 16th of May, 2009. I even wrote it into the article, and it was not until I was chatting with the friend I met at the cafe that they mentioned they had actually met me at the Novel Cafe, approximately two miles away. Go figure.

When we all wear a SenseCam, and put our thoughts and memories onto external devices, how long before our “world view” of what we know to be “true” is distorted by misapplied dates, over-written files, and tampering?

This event reminds me of the time I could not find my hotel, whilst living and working near Seattle, on a contract job for a local video game company.

One late night I came out of the office, after having pulled a near all-nighter on a critical deadline for the next day. The sky was overcast and it was just beginning to rain; the parking lot and office were both deserted. I jumped in my Land Rover, flipped on the GPS navigation system, and… no GPS signal. Nothing. I sat there for a few minutes more. Still nothing.

Okay, this should not be a problem. It was a cheap Motel 6 conveniently close to the office, somewhere along Interstate 5, not more than seven miles away. I have driven this route every day, twice a day, for the past two weeks or so, how hard can it be?

After about 40 minutes of humiliating driving up and down I-5, trying every exit ramp that had a Motel 6 (there were three of them), I eventually found the right one.

I did not memorise the address, I did not memorise the route; I did not need to. Why bother? The GPS navigation system did it all for me.

My cell phone remembers all of the names, telephone numbers and addresses of people and places I visit. My laptop contains my collected thoughts and ideas for the past three decades. My SenseCam captures what I see and hear and where I have been. Why do I need to remember any of it?

Right up until the system fails and my “external memory” goes offline.

I have been observing this same phenomenon in education too, especially in software development. In the 1970s, 1980s, and early 1990s, programmers had to build up and memorise a “world map” of the source code: where certain functions existed in which source files, how the entire source code of an application or video game was organised and structured, and in many source files down to particular line numbers. Programmers had to remember API (Application Programming Interface) function calls, the parameters required, and many other details. All of this in a body of source code that was constantly changing and developing on a daily basis.

Now, with IntelliSense technology built into Visual Studio, and Whole Tomato’s add-in that extends the functionality even further, all of the memorising is taken care of. IntelliSense is an auto-complete technology, operating on similar principles to the auto-correct grammar and word correction in Microsoft Word or the auto-complete of Google search, but ten times smarter. Type the first few letters of a function or class name and IntelliSense will fill in a list box of possible names and remind you of which parameters to enter and in what order they go. If you type the same piece of code frequently, it will make suggestions for you. Variables and function names in the local vicinity are pulled in to populate a list of possible words as you type.
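At its heart, this kind of auto-complete is just prefix matching over the identifiers currently in scope, ranked so the likeliest candidates come first. A toy sketch of the idea in Python (the identifier names are invented; real IntelliSense does vastly more):

```python
def suggest(prefix, identifiers):
    """Return the identifiers that complete `prefix`, shortest first,
    then alphabetically -- a crude stand-in for real ranking logic."""
    matches = [name for name in identifiers if name.startswith(prefix)]
    return sorted(matches, key=lambda name: (len(name), name))

# Names a code editor might have harvested from the surrounding source file.
local_names = ["render_frame", "render_scene", "reset_camera", "run_game_loop"]
print(suggest("ren", local_names))   # ['render_frame', 'render_scene']
```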

With the ability to jump straight to a function definition, and a map that is repopulated automatically every time the source code changes, programmers are freed from the necessity of learning and remembering everything they write, and of figuring out what everybody else wrote too.

Great programmers had a magical knack for figuring out a system very quickly; now that skill is on its death bed. Mediocre programmers have the same tools that great programmers have. Having great tools that boost you up is a “good thing,” but it makes it difficult to separate the merely mediocre from the truly great.

Anticipation of what you want, and automatic prediction of what you are about to need, is a great addition to any tool, but it makes you dumb and lazy. Programmers are forgetting the location of the function or the name of the variable that they wrote not two minutes ago. They do not remember, because they do not have to. It is happening now in software development, it is happening in the written word, and I am sure it is happening in subtle ways in other industries too; we just cannot see it yet.

I am all for enabling tools, I am all for freeing people up from the minutiae of a job[1], but I worry that these technologies are at the expense of our intelligence and our own memory.

Often, knowing where to look for the answer is more important than knowing the answer. Similarly, knowing the question to ask is more important than having the knowledge to fix the problem.

But, sometimes, just sometimes, it pays to know.

[1] Like the ability to spell minutiae without having to use the auto-suggest feature of Microsoft Word.

Live Your Life With A SenseCam

What’s a SenseCam? Think of a SenseCam as a black box flight recorder for human beings.

Almost everything you or I see, hear or encounter can be recorded in some fashion on a tiny digital device. You can later use the recorded data as a memory aid, to reconstruct an event, to prove who won the he-said, she-said argument, as evidence gathering and as a record of your entire existence.

For the past several years, I have been capturing as much of my life as it is convenient to do, making use of my own homebrew version of a SenseCam, which utilises a modern cell phone.

The concept of the original SenseCam was born at Microsoft Research, and whilst Microsoft was not the first to come up with the idea, they were the first to throw serious research dollars at the project. A SenseCam is generally capable of capturing audio, video, location information, plus any other details that the creator of the device desires such as heart rate, background radiation, etc.

Microsoft Research has compiled a considerable amount of information on the SenseCam; there is also a Wikipedia page about the SenseCam, and many other resources available, detailing usage of a SenseCam, including various news reports.

SenseCam Questions

When I first decided to move away from merely recording audio to capturing audio, images and other data, by utilising a homebrew SenseCam, I had several questions that just could not be answered without actual experimentation and direct experience.

How would I wear the SenseCam? Should I wear the SenseCam around my neck? On an armband? Hanging from my pocket? On a headband? Dangling from my lapel?

Currently I wear my SenseCam on a lanyard around my neck and I am also experimenting with an armband version.

How much should the SenseCam weigh? What is the upper weight limit of a wearable SenseCam? Do I sacrifice functionality for weight reduction? Do I curtail battery life for form factor?

Is there an ethical issue with wearing the SenseCam? What happens when you wear a SenseCam to a confidential client meeting? What should you do when you enter someone’s house? Should I turn off the camera when I use the bathroom? Should I inform the other person I am interacting with, that I am wearing a SenseCam?

Why Use A SenseCam

When I started my software company, I came up with this grand notion of doing things differently compared to every other regular, run-of-the-mill company out there.

I also came up with the idea of capturing as much of the discussion and argument that took place on a daily basis, inside and outside of the company immediately around me, to see whether, at some indeterminate point in the future, there was a way to use the captured data. Who knows, there just might be something I did, or a topic I discussed, that tipped the company one way or the other.

Somewhere in my nefarious plans there would be trends that I could perhaps point to and say, “this is where we changed course, and that changed our history.” Think of it as a mental exercise, one that may one day yield value I am not yet aware of.

Components I Used

For capturing audio I use an Olympus DS-50, and a SONY Ericsson k850i for images, audio and geo-location. Data storage is a Drobo storage array, plus a number of custom-written Python scripts to manipulate the captured data. I have experimented with a Bluetooth-enabled GPS, but I have not been impressed with the facility nor the battery life of any of the devices currently on the market.

Cost Of The SenseCam

My cost for a SenseCam-like device, hardware experimentation, and data storage at this time is around $2,500. My SenseCam device is built around a $450 state-of-the-art cell phone, right now a SONY Ericsson k850i but previously the k790i, plus there is the additional cost of good quality audio capture, several 4GB memory sticks, a Drobo storage array, a 1GB Olympus DS-50, and various lanyards, arm bands and cell phone cases.

Gordon Bell At Microsoft

The current diva of SenseCam usage is undoubtedly Gordon Bell, a principal researcher at Microsoft Research. Thanks to his commitment to the project, and the untold number of dollars Microsoft has put into it, some very innovative software has come out of his work, such as MyLifeBits, a digital archival-quality piece of software that records everything a person ever does.

One thing that surprised me when I read about his research work was that Gordon also makes use of an Olympus DS-50 digital audio recorder, the same as the one I use. I guess we both discovered independently that the Olympus is a fine tool for capturing high quality audio. I wonder whether Gordon has any intention of directing his researchers to add high quality audio capture to the SenseCam he currently uses.

SenseCam Hardware Experiments

I started with various single-function digital devices that did not work very well, with one or two exceptions such as the Olympus DS-50 digital voice recorder. I slowly progressed to using a state-of-the-art cell phone with a custom J2ME application and a bunch of Python scripts on a server to create my own SenseCam.

SenseCam On A Cell Phone

My current cell phone is a SONY Ericsson k850i. There are a number of cell phones out there I could have used, but I went with hardware that I know, and capabilities that are currently sufficient for my experiments. Since my purchase of the k850i, there have been a number of other cell phones released on to the market with better features and improved battery life that I will eventually investigate.

SenseCam On An iPhone

In the near future I am contemplating obtaining an iPhone to try out various ideas on, as it would also make for an ideal SenseCam device.

I have three concerns with the iPhone: weight, battery life, and replacing the battery. If I deplete the battery to near zero on a daily basis, how will this affect the overall long-term life span of a device with no user-replaceable battery? But, even with these doubts, I am still intent on developing a version of the SenseCam software as an experiment.

SenseCam Battery Life

With regard to battery life, I am able to achieve about 20 hours of continuous usage from the SenseCam with about 80% battery usage. I did several tests over the course of two weeks, making no other use of the phone functions, to achieve these results. I leave Bluetooth switched off most of the time and have programmed the Bluetooth on/off function to a shortcut key so that it is only enabled as and when I need it.

My J2ME application monitors battery life and attempts to predict, rather poorly for now, how long the phone can run until the battery dwindles too low to be useful.
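The prediction is nothing clever: a straight-line extrapolation of the discharge rate. My version runs in J2ME on the phone, but the idea can be sketched in a few lines of Python (the sample figures and the 10% cutoff are illustrative):

```python
def hours_remaining(samples, cutoff=10.0):
    """Estimate hours until the battery falls to `cutoff` percent, by
    fitting a straight line through the first and last readings.
    `samples` is a list of (hours_elapsed, battery_percent) pairs."""
    (t0, b0), (t1, b1) = samples[0], samples[-1]
    if b1 >= b0 or t1 <= t0:
        return float("inf")              # not discharging, or bad data
    drain_per_hour = (b0 - b1) / (t1 - t0)
    return max(0.0, (b1 - cutoff) / drain_per_hour)

# 100% at switch-on, 80% after 5 hours: 4%/hour, so about 17.5 hours left.
print(hours_remaining([(0, 100), (5, 80)]))   # 17.5
```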

I do not wish to constantly run my cell phone battery down to nothing each day, as that will obviously shorten the life of the battery. However, I am contemplating obtaining some spare batteries for the phone, as they are incredibly cheap, between $5 and $10 depending on where I purchase them. I could easily afford to keep a spare battery or two fully charged and ready to go if I find that the life of the current battery is beginning to dwindle due to the "memory effect."

SenseCam Software Development

I developed the software for the SenseCam using Java 2 Micro Edition (J2ME) in NetBeans. I had also looked into using Flash Lite but found the features lacking and the CPU usage too high. The SONY Ericsson k850i runs Flash Lite in a web browser, so unfortunately it consumes more memory and more CPU than I care for. In addition, Flash Lite, at this time, does not support all of the features of the k850i without adding extensions to the cell phone.

SenseCam Data Recorded

My data capture from my SenseCam includes audio, images (taken at approximately once every 30 seconds), and geographic location.


I capture audio in two ways: via the SenseCam software on the cell phone, and also using an Olympus DS-50, either with the built-in bi-directional stereo microphone for general everyday conversations, or via a lapel clip microphone for presentations, speeches and other one-to-many conversations where it is important I capture clear, high quality audio for later publication or review.


Image Size

The SONY Ericsson k850i I am currently using has a 5MP (megapixel) camera. The SenseCam software allows me to adjust both image size and image resolution, but the default setting for my software is 960 x 1280 pixels, or about 1.2MP, which is more than sufficient for recording life.

Image Area

The lens on a traditional cell phone camera is not sufficient to capture a large part of the scene. Fortunately, third party manufacturers have come to my rescue with a little wide-angle lens I can attach to the camera lens area to capture a much larger part of the scene.

I have developed a Python script to post-process the wide-angle pictures, applying an inverse transform to remove the fisheye effect from each picture and then spitting out an updated image. This correction greatly aids the appeal of the images captured. With a fisheye effect, it takes the human brain a second or two to mentally process the image being viewed to recognise faces and locations, so it is important aesthetically to remove it.
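My script's exact transform is not reproduced here, but a common approach, assuming the lens is close to an equidistant fisheye projection, is to remap each pixel's radial distance from the image centre back to the radius a rectilinear lens would have produced. A minimal sketch of that mapping in Python (the focal length `f` is a hypothetical calibration value):

```python
import math

def undistort_radius(r_d, f):
    """Map a fisheye radial distance r_d (pixels) back to a rectilinear
    radius, assuming the equidistant model r_d = f * theta."""
    theta = r_d / f
    if theta >= math.pi / 2:             # beyond 90 degrees cannot be rectified
        raise ValueError("angle outside rectilinear field of view")
    return f * math.tan(theta)

def undistort_point(x, y, cx, cy, f):
    """Remap one pixel coordinate about the image centre (cx, cy)."""
    dx, dy = x - cx, y - cy
    r_d = math.hypot(dx, dy)
    if r_d == 0:
        return (x, y)                    # the centre pixel never moves
    scale = undistort_radius(r_d, f) / r_d
    return (cx + dx * scale, cy + dy * scale)
```

A full script would apply this mapping in reverse for every output pixel and resample from the source image, for example with PIL or NumPy.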


My current SONY Ericsson k850i cell phone does not have any kind of GPS built into it. Fortunately, Google has recently started the beta programme for MyLocation, which uses the ID of a cell tower to pinpoint a cell phone to within a few hundred metres.

There are also open source databases with this information available to me, such as Open Cell ID and Cell DB. Currently my phone logs the cell tower ID once every 30 seconds into a text file stored on the phone’s memory card.

When I need to copy all of the images and audio from the SenseCam directory over to the server, a Python script scans each of the images that the SenseCam software captured. The script determines the time each image was taken, and then correlates that time with the cell tower ID that was active when the image was taken, pulling the cell tower information from the log file.

A quick lookup in the Open Cell ID database reveals the general vicinity, and the longitude and latitude of the cell tower are stored in the image EXIF data. I am currently working on a Python application that will use Google Maps to show a map of the area the image was captured in, with a thumbnail of the image floating above a marker. Eventually I also want to add a feature to display breadcrumb trails as I move about through my day.
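The correlation step reduces to a nearest-timestamp lookup. A simplified Python sketch, where the tower log entries, cell IDs and coordinates are invented stand-ins for the real log file and a local copy of the Open Cell ID data:

```python
import bisect

# (unix_timestamp, cell_tower_id) pairs, as logged every 30 seconds (invented).
tower_log = [(1219800000, "310-410-1234"),
             (1219800030, "310-410-1234"),
             (1219800060, "310-410-5678")]

# Stand-in for the Open Cell ID database: tower id -> (latitude, longitude).
cell_db = {"310-410-1234": (36.1146, -115.1728),
           "310-410-5678": (36.1215, -115.1690)}

def locate(image_timestamp):
    """Return the (lat, lon) of the tower active when the image was taken:
    the most recent log entry at or before the image's timestamp."""
    times = [t for t, _ in tower_log]
    i = bisect.bisect_right(times, image_timestamp) - 1
    if i < 0:
        return None                      # image predates the log
    return cell_db.get(tower_log[i][1])  # None if the tower is unknown

print(locate(1219800045))   # (36.1146, -115.1728)
```

The coordinates returned can then be written into the image's EXIF GPS fields by the same post-processing script.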

SenseCam Current Features

The SenseCam software uses a basic facial recognition algorithm to determine if someone is in front of the lens before taking an image. The facial recognition had to be fast and not consume vast amounts of CPU power, to enable the cell phone battery to last as long as possible. The algorithm is very primitive and uses many table lookups, rather than pure mathematics, to determine if a face is within the camera’s view. I am interested in looking into various digital cameras that have built-in facial recognition, to determine if they might be employed as a SenseCam instead.

My SONY Ericsson k850i has built-in accelerometers, and I use these to determine if the cell phone has been moved since it last took a picture. When I am sat at my desk, I rarely keep the SenseCam around my neck as there is not much point; watching me type code or write a business contract on my two LCD monitors is not very exciting. By determining whether the SenseCam has moved, my software knows whether it is time to wake up and take another picture, based on estimated distance travelled.
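The movement check itself is simple: compare successive three-axis accelerometer readings and wake up only when they differ by more than a threshold. My implementation is in J2ME, but the logic can be sketched in Python (the readings and threshold below are illustrative, not calibrated values):

```python
import math

def has_moved(prev_reading, new_reading, threshold=1.5):
    """Return True if the change between two 3-axis accelerometer
    readings (in m/s^2) exceeds the movement threshold."""
    delta = math.sqrt(sum((a - b) ** 2
                          for a, b in zip(prev_reading, new_reading)))
    return delta > threshold

at_desk = (0.1, 0.2, 9.8)               # lying still, gravity on the z-axis
walking = (2.3, 1.1, 8.4)               # jostling about on a lanyard
print(has_moved(at_desk, at_desk))      # False
print(has_moved(at_desk, walking))      # True
```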

Under the preferences screen for the SenseCam software, I have implemented basic settings for adjusting image size and image resolution, both of which can dramatically reduce the amount of storage required.

Obviously, I do not want my SenseCam using the built-in camera flash for any purpose, so it is important that it remains switched off. The default mode is flash switched off, but I have been to a few nightclubs and one rave, and enabled the flash while on the dance floor, which makes for some very interesting and candid photographs.

SenseCam Future Features

In the future, I would like to add a number of features to the SenseCam software based on hardware upgrades that will become available in future cell phones. I know that many newer cell phone models have GPS geo-location built in, and one or two have infrared capability, which I could utilise with a quick image analysis algorithm to determine if a human body or a human face is within the lens view of the camera.

The camera software built into the SONY Ericsson k850i has the ability to take several images in quick succession, and then let the user decide which one is the best picture. I would like to implement a similar feature in my SenseCam software, but have the software itself determine the best picture, based on image analysis for blurriness, streaking, etc.

I mentioned this earlier in the article, but once I have the ability to accurately record position beyond just cell tower ID, I want to add a breadcrumb trail, appearing as an overlay on top of a map, indicating where I have been and what I experienced that day, along with audio snippets of conversations I had. The audio snippets will be compiled by an automated script that scans for human voices or music within the audio files and inserts random clips that blend from one to the next. I have a prototype of this system working right now, but it is far from useful at this time.

SenseCam Storage

My J2ME application saves images and audio directly to the phone’s internal memory, which has around 25MB capacity. As the memory on the cell phone fills up, the J2ME application moves the images and audio data over to the memory stick, if one is present.

I use an 8GB memory card to record all of the SenseCam data onto. The pictures and audio consume around 300MB per day on the memory card. Every few days I take out the memory card and replace it with an empty one, uploading the captured images and audio data to my Drobo file server.

I swap out the memory cards, as opposed to plugging the cell phone in via USB cable, so that I do not have to switch off the cell phone temporarily and then forget to switch it back on. So long as I do not pull the memory card out in the middle of files being moved from internal memory to memory stick, I can remove the card at pretty much any time and replace it with a fresh one.

As I also use my k850i to listen to audio books whilst driving to and from meetings, working out, or doing chores around the house and office, I want to be able to use part of the memory card for audio book data. I keep around 1GB of the full capacity for storing audio books, podcasts and images I have taken with the regular digital camera application built into the cell phone. Eventually, when I get around to it, I will upgrade the memory card to either 16GB or 32GB depending on availability.

I use around 300MB per day of data for my SenseCam. I have not yet added to the J2ME application the ability to change the compression settings on the image files that are saved; I intend to add that feature in the near future. When I do, I think the amount of data I capture per day will drop to around 200MB.

As a side note, I also capture a lot of other data. I capture a snapshot of my computer’s desktop once every 10 seconds using TimeSnapper, and I capture the desktops of my two laptops as well. Combined, this contributes another 500MB to the data store.

Audio is captured via the Olympus DS-50, which I still use as a backup to the SenseCam software on the cell phone. This generates approximately another 175MB per day. I also perform a differential backup of my computers each day.

My automated daily data generation, which does not include any artwork, source code or written material, is a little over 1.2GB per day.

It does not really need to be stated, but I will say it anyway: the size of hard drives is increasing dramatically. 2TB drives are readily available, 2.5TB drives are on their way (if they are not here already), 3TB drives are due by the end of the year, and 4TB drives by 2011. No matter how much data I actually capture, I do not see any possible way of running out of room. Even if I switched over to HD video and captured my life 24/7, my data storage capacity would still grow faster than my capture rate.

Delete Nothing

When I first started my audio capturing experiments, and later with my SenseCam experimentation, I decided at the beginning I would delete nothing.

No matter how dark the scene, no matter how out of focus the images, no matter how blurred with electric lights, no matter how garbled the speech, no matter what background noise is taking place, I will save the data. Somehow, sometime, software will catch up, and much of what you or I would consider to be junk data will yield tiny details that cannot be perceived unaided. It might be trash data today; tomorrow it could be worth its weight in gold, metaphorically speaking. And all that data takes up so little space that it is not worth mine or anybody else’s time to go sifting through it to weed out the rubbish.

Current Storage Options

My current data storage is a 4TB Drobo storage system for stashing all of my data safely away. At my current data consumption rate, the three terabytes (TB) of available storage (Drobo uses one terabyte for error correction & recovery data) will last me approximately six years before I need to upgrade the hard drives to something bigger.
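The arithmetic behind that estimate, as a quick sanity check in Python (decimal units throughout):

```python
GB_PER_DAY = 1.2      # total automated daily capture, per the figures above
USABLE_TB = 3.0       # 4TB Drobo, minus one terabyte kept for redundancy

days_of_capacity = (USABLE_TB * 1000) / GB_PER_DAY
years = days_of_capacity / 365
print(round(years, 1))   # 6.8 -- call it approximately six years
```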

By the time I need to do that, I can consider the DroboPro or whatever other product has replaced it by then, which can expand to 256TB of data storage, and should I need more than that, I am sure petabyte hard drives – 1,000 terabytes – will not be too far away.

For those doing the math at home, I am using the IEC standards for calculating capacities and consumption, even though I prefer working in the binary powers of two sizes, i.e. 1,024.

Future Thoughts

Having used a SenseCam on a near daily basis for the past two years, I have gained some insight into its use and the improvements I can make to the software and hardware arrangement. Primarily, the improvements I wish to make are in the software, after I have captured the images and audio.

True Facial Recognition

I would like to implement a proper facial recognition system in the off-line processing scripts that will identify individuals within the SenseCam images and tag the image with EXIF data indicating who it is I am speaking to. I have a rudimentary script written in Python utilising FFNet, but it is far from robust or very capable at this time.

Location Recognition

Other than just recording the GPS or cell tower ID of my location, I would like to be able to recognise the spot I am in based purely on the image. By automatically recognising characteristics of a geographic setting, I can piece together a 3D representation of the area I passed through and features or events of interest that I may have missed when I was there.

I do not yet know how I will go about doing this but it is food for the future.

Microsoft PhotoSynth

Microsoft has finally seen fit to release a version of their PhotoSynth technology built into their Windows Live PhotoGallery. PhotoSynth can reconstruct a scene in 3D space, allowing the user to fly around and examine an area in great detail, based on nothing more than a few hundred two-dimensional images taken with a regular, plain old camera. I cannot think of a better use for the hundreds of SenseCam images captured each day than reconstructing interesting locations I may have visited.

Stereoscopic Images

I would like to be able to capture stereoscopic imagery directly from the SenseCam. I have thought of two ways to do this: the first is using two individual SenseCams running in tandem, with their image capture synchronised by software; the alternative is to use a clever lens arrangement on a single SenseCam that can create a stereoscopic image using only a single image-capturing device. Stereoscopic imagery may be just a gimmick, but until I have experimented with it in depth I will not know if it has real value.

High Quality Wide Angle Lens

I am considering getting a custom-made, high quality wide-angle lens created for the cell phone. The current crop of wide-angle lenses are made from plastic or cheap optical glass, which really degrades the quality of the images captured. A custom-made lens would correct many of the issues I currently have with the cheap ones I have experimented with, and allow me to get the lens features I believe are important.

Automated Image Cleanup

Many of the images captured under indoor lighting conditions, especially where bright overhead fluorescent lights are involved, come out blurry or with long light streaks on them. My idea is to capture multiple images in quick succession, determine which is the least blurry or streaky of them, and save the best one. I also believe I can develop an automated algorithm to remove the light streaks from an image, by combining multiple images taken very quickly one after the other and subtracting the bad information.
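One plausible way to score the "least blurry" image, though not necessarily the method I will end up using, is the variance of a Laplacian filter: sharp images have strong edge responses and therefore high variance, while blurred or flat images score near zero. A small pure-Python sketch on toy greyscale data:

```python
def sharpness(image):
    """Score sharpness as the variance of a 4-neighbour Laplacian over
    the interior pixels of a 2D greyscale image (list of rows)."""
    responses = []
    for y in range(1, len(image) - 1):
        for x in range(1, len(image[0]) - 1):
            lap = (image[y - 1][x] + image[y + 1][x] + image[y][x - 1]
                   + image[y][x + 1] - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

sharp = [[0, 0, 0, 0],                  # one hard bright pixel: strong edges
         [0, 255, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
flat = [[60, 60, 60, 60]] * 4           # featureless frame: no edges at all

best = max([sharp, flat], key=sharpness)
print(best is sharp)   # True
```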


Other than the physical device and obviously the custom software to drive it, there are also a few other items that I initially overlooked. Some of these questions I was only able to answer after several weeks of experimentation in each area.

One thing I have noticed wearing the SenseCam is that my neck gets tired. The cell phone I use is not exactly lightweight. You do not think too much of it when it is in your trouser pocket or being carried in your hand, but hang that cell phone on a lanyard around your neck for about 12 hours and pretty soon you start to notice how much it weighs.

The reaction of people to the SenseCam, née cell phone, hanging around my neck is somewhat mixed. Most people do not even notice the device, and if they do, they are usually too polite, or perhaps too disinterested, to mention it. It is just a plain and boring cell phone after all, and I am a techno geek entrepreneur, so why should the sight of me walking around with my cell phone on a lanyard about my neck cause any mention beyond idle curiosity?

Once people do ask about the device, and I mention what it is for and the research conducted by Microsoft, those who truly get it light up, and you can see the wheels of business beginning to turn in their heads. But most of the time, when I mention that in the future almost everyone will have a device like this recording their daily activities, the person dismisses it out of hand, and usually goes on to say it will never happen, unable to see beyond the end of their own nose.

One thing I have noticed after mentioning the device to people is that they quickly forget it exists; within ten or fifteen minutes they have returned to how they normally act. Because the device is not being directly pointed at them, because I am not actively engaged with it, and because it is a background object, it becomes once more unobtrusive; the person stops being self-conscious and “performing for the camera” and carries on as though nothing is different.

For further information on what a SenseCam is, and possible future directions, I highly recommend you read all of the links in this article and do your own research too.

Almost all of the images posted in this article and others on this website are a direct result of my SenseCam. The two images at the top of the article are used courtesy of Microsoft Research to illustrate their prototypes.