Nivedit Majumdar

WWDC 2015: Apple brings context with proactive search

Just yesterday, Apple held its WWDC keynote, showcasing its latest announcements across its software platforms. Among the various announcements made, the one that caught the attention of team Emberify was the revamped version of Siri.

Siri’s new avatar takes understanding and prediction to a whole new level, through the prowess of contextual understanding and machine learning. And that is the crux of my article here: how Apple might just change the machine learning game for itself.


To truly understand how Siri has improved in its latest iteration, it is important to look back at how it came about in the first place.

Introduced with the iPhone 4S, Apple’s virtual assistant was a handy take on more personal, hands-free and accessible control of the device. The user could give commands to the virtual assistant, and Siri would happily perform the tasks. Be it calling somebody, checking the weather or performing simple searches, Siri could do all of those successfully.


However, there were limitations. We’ve been talking about context, and although Siri could carry out the tasks given to it, it could not learn much by itself. It was merely a servant, not an assistant.

All that changed with yesterday’s revelations regarding the new iteration of Siri: the Proactive Assistant.


Siri’s new version brings a more contextual approach to problem solving, one that involves machine learning as well. The Proactive Assistant takes a smarter approach to performing tasks, and from initial impressions it seems to be quite impressive!


A simple principle of machine learning is employed in this case. The software platform keeps track of how often you open any particular application and makes sure that your most-used applications can be accessed at the earliest.

The same applies to the people you contact most frequently. This is machine learning in its most basic, and most effective, form.
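Apple hasn’t published how Proactive works under the hood, but the principle described above can be sketched with a toy launch-frequency counter (the class and app names here are purely illustrative):

```python
# Illustrative sketch only: a naive frequency counter for app suggestions.
# Apple's actual on-device model is not public; this just shows the idea.
from collections import Counter

class ProactiveSuggester:
    """Suggest the apps a user opens most often."""

    def __init__(self):
        self.launch_counts = Counter()

    def record_launch(self, app_name):
        # Each launch bumps the app's usage count.
        self.launch_counts[app_name] += 1

    def suggestions(self, top_n=4):
        # The most frequently launched apps are surfaced first.
        return [app for app, _ in self.launch_counts.most_common(top_n)]

suggester = ProactiveSuggester()
for app in ["Mail", "Music", "Mail", "Maps", "Music", "Mail"]:
    suggester.record_launch(app)

print(suggester.suggestions(2))  # → ['Mail', 'Music']
```

The same counter could just as easily track contacts instead of apps, which is presumably why frequent contacts get the same treatment.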


This is one particular aspect that I had talked about in my article on Viv and artificial intelligence. Commands are usually one-dimensional: do this, call him, set an alarm, and so on. With the new version of Siri, users can give multidimensional commands.

Confused? I’ll just quote Apple’s announcement on this.


This is a welcome addition to the way commands and requests from the user are answered, and it is quite similar to the concept we saw in Viv. Users can now combine two commands, which Siri cross-references to give the best possible response to the user’s query.


This is a very interesting form of contextual evaluation. The software platform notes the location of the user and accordingly suggests eateries and other places of interest. It also takes a new form, with Siri surfacing relevant trending news based on location. This can be useful when the user is travelling and there is a change in weather, or some calamity has befallen the area.

All in all, location-based suggestions are a new way of bringing machine learning and context into the platform.
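At its simplest, a location-based suggestion boils down to ranking candidate places by distance from the user. A minimal sketch, assuming straight-line (great-circle) distance and entirely made-up place names and coordinates:

```python
# Hypothetical illustration: ranking nearby points of interest by distance.
# Names and coordinates are invented for the example.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_suggestions(user_lat, user_lon, places, limit=3):
    # Sort candidate places of interest by distance from the user.
    ranked = sorted(
        places,
        key=lambda p: haversine_km(user_lat, user_lon, p["lat"], p["lon"]),
    )
    return [p["name"] for p in ranked[:limit]]

places = [
    {"name": "Cafe A", "lat": 37.775, "lon": -122.418},
    {"name": "Diner B", "lat": 37.330, "lon": -121.890},
    {"name": "Bistro C", "lat": 37.779, "lon": -122.414},
]
print(nearby_suggestions(37.776, -122.417, places, limit=2))  # → ['Cafe A', 'Bistro C']
```

A real system would of course weigh in ratings, opening hours and past behaviour, not just raw distance, but the ranking idea is the same.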


Not strictly a contextual feature, but an interesting one nonetheless. A study conducted three years ago found that a user’s interactions with Siri were stored on Apple’s cloud servers for about two years. This is no longer the case: all proactive assistant data stays anonymous and on the device itself.



With Google already talking about its Now On Tap feature during its keynote at I/O 2015, and now Apple announcing the Proactive Assistant at WWDC 2015, contextual applications and machine learning are reaching a whole new level of usage. It is quite heartening to see the growing interest in contextual interfaces and improved interaction between users and their devices.

Virtual assistants are now truly upping their game!
