Over the years, technology has shown us that anything in the world can be simplified. Be it communication, interaction, shopping or even enabling devices to contextually understand the needs of their users, technology has evolved by leaps and bounds.
And the same applies to the Quantified Self movement. We’ve evolved from rustic, arduous and arcane methodologies to more scientific, automated and quantified measures for recording the vital parameters of our lives – which is the essence of lifelogging.
And that is the crux of my article here. I take a look at how lifelogging mechanisms have evolved over the years, and how they might evolve in the near future.
LIFELOGGING MECHANISMS – THE OLD…
The old lifelogging mechanisms mainly relied on manual methods of recording the data. They were largely restricted to the user making a note of the data and entering it into spreadsheets. There was also some specialised software that depended on the user’s data to build graphs for better data visualisation. However, the process was in many ways arduous and taxing.
Capturing the data was effortful on the user’s part. The data captured fell into two categories:
• Total capture of data: data recorded on a continuous and regular basis. Generally this involved images, videos and audio logs (for a more seamless experience) and some rudimentary wearable devices.
• Situation-specific capture of data: more limited in scope, this mode of data capture was restricted to specific domains involving large amounts of complex information. For instance, it could be applied to recording data during meetings, to keep an organised log of all the information exchanged within a business organisation.
The initial methods of collecting the data in spreadsheets were not as efficient as they were made out to be. The primary reasons included the following:
• The data collection mechanisms relied on the user recalling each parameter. And remembering multiple parameters on a daily basis is tough, not to mention confusing.
• The transition time between the data capture and data recording activities needed to be minimised, in order to maintain the authenticity of the information captured.
• The retrieval mechanisms weren’t all that efficient, the software used was complicated, and the visual representation of the data – although informative – did not provide valuable insight to the amateur lifelogger.
Moreover, I believe the biggest shortcoming of the lifelogging systems of yesteryear was the absence of an intelligent computing mechanism that could constantly record the data and make sense of it. Of course, wearable cameras and voice recorders did get the job done. However, each represented only one major dimension – the one it was recording. So, in order to correlate the data, the lifelogger would have to resort to multiple recording devices.
Not only was this expensive, it was also complicated. Imagine correlating the data from multiple devices – in ideal circumstances you would have to keep every device on your person at all times and correlate their data by time. What if one device’s battery died? What if you couldn’t rule out selective capture? Lifelogging in its nascent stages had its shortcomings.
Of course, that is a different picture today.
LIFELOGGING MECHANISMS – THE NEW
Modern day lifelogging mechanisms are more efficient, and require little to no involvement of the user – at least in the data capture domain.
With wearables becoming less obtrusive, sensors shrinking and arriving in a variety of form factors, compact hardware in mobile devices capturing data constantly on the go, and software algorithms better armed to crunch all the numbers, the process of lifelogging and data capture has become more efficient and intricate.
As a simple example, consider the quintessential mobile phone. Mobile devices house a large number of sensors capable of recording the data around the user – the GPS, accelerometer and ambient light sensor all work in tandem to record it.
The data picked up by this multitude of sensors is tapped into by dedicated applications. If an application comes bundled with a wearable, the data from the two devices can easily be correlated within the application.
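To make the idea of correlating two devices’ data concrete, here is a minimal sketch in Python. It assumes each device exports a sorted list of (timestamp, value) readings – hypothetical sample data, not any real app’s format – and pairs each phone reading with the nearest-in-time wearable reading:

```python
from bisect import bisect_left

def align_by_time(phone, wearable, tolerance=60):
    """Pair each phone reading with the nearest-in-time wearable reading,
    discarding pairs more than `tolerance` seconds apart.
    Both inputs are lists of (unix_timestamp, value), sorted by time."""
    times = [t for t, _ in wearable]
    pairs = []
    for t, value in phone:
        i = bisect_left(times, t)
        # Candidates: the wearable reading just before and just after t
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(times):
                if best is None or abs(times[j] - t) < abs(times[best] - t):
                    best = j
        if best is not None and abs(times[best] - t) <= tolerance:
            pairs.append((t, value, wearable[best][1]))
    return pairs

# Hypothetical samples: phone step counts and wearable heart rates
phone = [(1000, 120), (1300, 80)]
wearable = [(990, 72), (1290, 95), (5000, 60)]
print(align_by_time(phone, wearable))
# [(1000, 120, 72), (1300, 80, 95)]
```

Real lifelogging apps do something like this (plus clock-drift handling) behind the scenes, which is exactly the chore the old multi-device setups forced onto the user.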
And the best part? Data visualisation in the form of graphs is all the easier, thanks to simple and effective graph APIs that go hand in hand with these lifelogging applications. Plus, the data is captured contextually by correlating many variable parameters, so the hardware is always intelligently sensing and recording, without the user having to deal with the hassles of manual data recording.
Earlier, the onus was on developing specific hardware for recording the data. But with the advent of lifelogging sensors within the smartphone, that need has largely disappeared. Most data – be it health stats (provided the phone has the specialised sensors), activity and fitness (Google Fit, accelerometer, GPS) or even sleep – can be gauged effectively, all through the smartphone!
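As an illustration of how a phone’s accelerometer alone can yield a fitness stat, here is a deliberately naive step-counting sketch. The threshold value and the synthetic trace are assumptions for demonstration – production pedometers use filtering and peak detection, not a single cutoff:

```python
import math

def count_steps(samples, threshold=11.0):
    """Naive step counter: count upward crossings of the acceleration
    magnitude through `threshold` (m/s^2, slightly above gravity ~9.8).
    `samples` is a list of (x, y, z) accelerometer readings."""
    steps = 0
    above = False
    for x, y, z in samples:
        mag = math.sqrt(x * x + y * y + z * z)
        if mag > threshold and not above:
            steps += 1       # one spike above threshold = one step
            above = True
        elif mag <= threshold:
            above = False    # re-arm once the spike subsides
    return steps

# Synthetic trace: resting near 9.8 m/s^2 with two impact spikes
trace = [(0, 0, 9.8), (0, 0, 12.5), (0, 0, 9.6),
         (0, 0, 9.9), (1, 1, 13.0), (0, 0, 9.7)]
print(count_steps(trace))  # 2
```

Crude as it is, this shows why no dedicated pedometer hardware is needed: the raw sensor stream plus a little software is enough.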
What does this imply? Amateurs and first-time lifeloggers can now jump seamlessly onto the lifelogging bandwagon, and the transition from amateur to professional lifelogging is far easier to make.
Long story short, lifelogging mechanisms have come a long way since their inception. The future might see more contextual recording mechanisms and the presence of AI in lifelogging devices. The future of lifelogging, in terms of the methodologies involved, is pretty bright!